\section{Introduction} Let $V$ be a finite dimensional real vector space, whose dimension we will always denote by $d.$ The \emph{dual space} $W$ of $V$ is another real vector space together with a perfect pairing $\langle\cdot,\cdot\rangle:W\times V \to \mathbb{R}$. A \emph{polyhedron} $P \subset V$ is the solution set of a finite set of linear inequalities: \begin{equation}\label{ineq} P = \{ \textbf{x} \in V \ : \ \langle \textbf{a}_i, \textbf{x} \rangle \le b_i, \ i\in I \}, \end{equation} where the $\textbf{a}_i$ are elements of $W$, the $b_i$ are in $\mathbb{R}$, and $I$ is a finite set of indices. By choosing bases, we can abbreviate the above system of linear inequalities as \begin{equation}\label{matrix} \textrm{A} \textbf{x} \le \textbf{b}, \end{equation} where $\textrm{A}$ is the matrix whose row vectors are the $\textbf{a}_i$'s and $\textbf{b}$ is the vector with components $b_i$'s. A \emph{polytope} is a bounded polyhedron. A $k$-dimensional polytope $P\subset V$ is \emph{simple} if each vertex lies on exactly $k$ facets. In this paper, we want to study special cases of the following question: For a fixed polytope $P_0 \subset V,$ how do we characterize all ``deformations'' of $P_0$? In the literature, there are different equivalent definitions for what we call deformations. The initial approach we take here is to move facets of $P_0$ without passing a vertex (see Definition \ref{defn:deform0}). We also make use of an alternative definition in terms of normal fans; deformations correspond to coarsenings of the normal fan of $P_0$ (see Proposition \ref{prop:deform}). Lastly, we want to mention that this notion is equivalent (via Shepard's theorem \cite[Chapter 15, Theorem 2]{grunbaum}) to ``weak Minkowski summands'', which is central to McMullen's work on the polytope algebra (see \cite{mcmullen}). One important family of polytopes for this paper is that of \emph{generalized permutohedra}, which were originally introduced by Postnikov \cite[Definition 6.1]{post} as deformations of usual permutohedra. Generalized permutohedra contain many previously known interesting families of polytopes, including Stanley-Pitman polytopes \cite{stanley-pitman} and matroid polytopes \cite{ardila}. However, it turns out that generalized permutohedra are translations of polymatroids (see Theorem \ref{thm:polymatroid}), which have been studied since the 1970s. Polymatroids were initially defined in the context of optimization, in particular the greedy algorithm. See Edmonds' survey \cite{edmonds}, or Fujishige's book \cite{fujishige} for a more recent perspective. Since Postnikov's work \cite{post}, generalized permutohedra have received much research attention in the last ten years (see for example \cite{PosReiWil}, \cite{suho}, \cite{zele}). More recently, relations with Hopf monoids have been developed \cite{aguiar}. The motivation of this article comes from two questions related to generalized permutohedra. We will discuss them in two parts below. \subsection*{Submodular Theorem} One well-known result on generalized permutohedra is the Submodular Theorem. \begin{definition}\label{defn:submod} Let $E$ be a finite set. A \emph{submodular function} is a set function $f: 2^E \to \mathbb{R}$ satisfying \[ f(S \cup T) + f(S \cap T) \le f(S) + f(T), \quad \forall S, T \subseteq E.\] \end{definition} \begin{theorem}[Submodular Theorem] \label{thm:submodular} There exists a bijection between generalized permutohedra of dimension at most $d$ and submodular functions $f$ on $2^{[d+1]}$ satisfying $f(\emptyset)=0$.
(Here $[d+1]=\{1,2,\dots,d+1\}$.) \end{theorem} Even though the Submodular Theorem was known well before the original definition for generalized permutohedra was given by Postnikov, we could not find a direct reference for the statement and proof. Research papers commonly cite \cite{post} and \cite{PosReiWil}, but the statement is written in neither. In \cite{ranktest} it appears as Proposition 15, but only one direction of the statement is proved there. The standard proof we can find is in \cite[Chapter 44, Theorem 44.3]{schrijver}, which has the statement in terms of polymatroids. However, that proof uses ideas from optimization, and we could not find a place that gives a clear statement of the connection between polymatroids and generalized permutohedra. Hence, it is still interesting to find a natural combinatorial proof for the Submodular Theorem. In \cite{PosReiWil}, the authors give several equivalent definitions for generalized permutohedra, one of which states that generalized permutohedra are precisely translations of polytopes whose normal fans are coarsenings of the ``Braid fan'' $\textrm{Br}_d$, which is the normal fan of the ``centralized regular permutohedron'' $\widetilde{\Pi_d}.$ (See Proposition \ref{prop:coarser}.) As a consequence, the Submodular Theorem is closely related to the characterization of the deformation cone of the polytope $\widetilde{\Pi_d}$ or the fan $\textrm{Br}_d.$ Having this in mind, we consider the question of determining deformation cones of a general polytope $P_0$ in Section \ref{sec:prel}. After providing a precise definition for deformations of $P_0$ using the idea of ``moving facets without passing vertices'', we derive general techniques for computing the deformation cone of $P_0,$ using which we provide in Section \ref{sec:GP} a new combinatorial proof for Theorem \ref{thm:submodular}. After the notation and machinery are introduced, the proof flows naturally, which is an indication that the techniques laid out in Section \ref{sec:prel} are a good way of attacking this kind of problem. Another consequence of our techniques is a proof of the connection between polymatroids and generalized permutohedra. \subsection*{The nested Braid fan} One characterization of generalized permutohedra is that all the edge directions are of the form $\textbf{e}_i - \textbf{e}_j$ (see Remark \ref{rem:edges}). However, if one tries to move some facet past a vertex, edge directions of the form $\textbf{e}_i+\textbf{e}_j - \textbf{e}_k - \textbf{e}_\ell$ can appear. Therefore, we ask whether the family of generalized permutohedra can be generalized further to allow these edge directions. This motivates the work in Section \ref{sec:NBF} of this article. \begin{figure}[t] \begin{center} \begin{tikzpicture}% \input{pi3.tex} \input{pi32.tex} \end{tikzpicture} \end{center} \caption{$\Pi_3$ and $\Pi^2_3(4,1)$} \label{fig:2polytopes} \end{figure} The maximal cones in the Braid fan $\textrm{Br}_d$ are sets of points whose coordinates are given in a fixed order. In Section \ref{sec:NBF}, we introduce the ``nested Braid fan'' $\textrm{Br}_d^2,$ which is a refinement of $\textrm{Br}_d$ obtained by considering first differences of ordered coordinates. (See Definitions \ref{defn:NBF} and \ref{defn:NBF2} for details.) We show that $\textrm{Br}_d^2$ is the normal fan of ``usual nested permutohedra'', a subfamily of which is $\Pi_d^2(M,N)$ (called ``regular nested permutohedra''), and thus is a projective fan.
(See Figure \ref{fig:2polytopes} for a picture of $\Pi_3$ and $\Pi_3^2(4,1)$ side by side.) We then use the general techniques derived in Section \ref{sec:prel} to give a characterization of the deformation cone of $\textrm{Br}_d^2$ analogous to the results for the deformation cone of $\textrm{Br}_d.$ One key ingredient in our proof of the Submodular Theorem is the natural one-to-one correspondence between chains in the Boolean algebra ${\mathcal B}_{d+1}$ and faces of the regular permutohedron. In parallel, in Section \ref{sec:NBF}, we consider the ``ordered partition poset'' ${\mathcal O}_{d+1}$ (see Definition \ref{defn:osp}), and show that the same statement holds for ${\mathcal O}_{d+1}$ and the regular nested permutohedron. We remark that the combinatorics of the nested Braid fan and of nested permutohedra turn out to be very rich. Indeed, the inequalities defining the deformation cone of $\textrm{Br}_d^2$ are of a combinatorial nature. Even though we feel we have given a thorough description of the structures of these two related geometric objects, there are still questions that remain to be answered. During a talk given by the first author on the material presented in Sections \ref{sec:GP} and \ref{sec:NBF}, Victor Reiner asked whether the nested Braid fan is the barycentric subdivision of the Braid fan. We give an affirmative answer to his question in Section \ref{sec:chisel}. \subsection*{Organization of the paper} In \S \ref{sec:prel}, we present and review definitions of deformation cones of polytopes and projective fans, and discuss general techniques for computing them from the polytopal side. In \S \ref{sec:GP}, we review known facts about generalized permutohedra, apply the techniques derived in \S \ref{sec:prel} to find the deformation cone of $\textrm{Br}_d$, and give a proof of the Submodular Theorem. In \S \ref{sec:NBF}, we define the nested Braid fan $\textrm{Br}_d^2$ and nested permutohedra, and discuss their combinatorics, using which we give an inequality description for nested permutohedra and determine the deformation cone of $\textrm{Br}_d^2.$ In \S \ref{sec:chisel}, we describe how to obtain the nested Braid fan as the barycentric subdivision of the Braid fan, answering Victor Reiner's question. We finish the main body of this article with some questions that might be interesting for future research in \S \ref{sec:question}. \subsection*{Acknowledgements} The second author is partially supported by NSF grant DMS-1265702 and a grant from the Simons Foundation \#426756. The final writing of this work was completed while both authors were attending the program ``Geometric and Topological Combinatorics'' at the Mathematical Sciences Research Institute in Berkeley, California, during the Fall 2017 semester, where they were partially supported by the NSF grant DMS-1440140. The authors would like to thank Federico Ardila and Brian Osserman for helpful discussions, and thank Alex Fink and Christian Haase for explaining Theorem \ref{thm:bary1}. \section{Determining deformation cones} \label{sec:prel} We assume familiarity with basic definitions of polyhedra and polytopes as presented in \cite{barvinok, zie}. The main purpose of this section is to derive a systematic way to answer the following general question: For a fixed polytope $P_0 \subset V,$ how do we characterize all ``deformations'' of $P_0$? We start by setting up our question formally.
\begin{setup}\label{setup1} Let $P_0$ be a fixed full-dimensional polytope in $V$ defined by $\textrm{A} \textbf{x} \le \textbf{b}_0$, where each inequality is \emph{facet-defining}, i.e., $\{ \textbf{x} \in P_0 \ : \ \langle \textbf{a}_i, \textbf{x} \rangle = b_{0,i}\}$ is a facet of $P_0.$ Suppose $P_0$ has $m$ facets $F_1, \dots, F_m$. We may assume that the system defining $P_0$ is \begin{equation}\label{fineq} \langle \textbf{a}_i, \textbf{x} \rangle \le b_{0,i}, \quad 1 \le i \le m, \end{equation} where $\textbf{a}_i$ is a normal vector to the facet $F_i.$ \end{setup} Roughly speaking, a deformation of $P_0$ is a polytope obtained from $P_0$ by moving facets of $P_0$ ``without passing any vertices''. We make this more precise below. \begin{definition} \label{defn:deform0} A polytope $Q \subset V$ is a \emph{deformation} of $P_0$ (described in Setup \ref{setup1}) if there exists $\textbf{b} \in \mathbb{R}^m$ such that the following two conditions are satisfied: \begin{enumerate}[(a)] \item \label{item:pts} $Q$ is defined by $\textrm{A} \textbf{x} \le \textbf{b}$ (with the same matrix $\textrm{A}$ as in Setup \ref{setup1}) or equivalently, \begin{equation}\label{Qfineq} \langle \textbf{a}_i, \textbf{x} \rangle \le b_{i}, \quad 1 \le i \le m. \end{equation} \item \label{item:nopass} For any vertex $v$ of $P_0$, if $F_{i_1}, F_{i_2}, \dots, F_{i_k}$ are the facets of $P_0$ that $v$ lies on, then the intersection of \[ \{ \textbf{x} \in V \ : \ \langle \textbf{a}_{i_j}, \textbf{x} \rangle = b_{i_j}\}, \quad 1 \le j \le k \] is a vertex $u$ of $Q.$ \end{enumerate} We call $\textbf{b}$ a \emph{deforming vector} for $Q$. \end{definition} It is not hard to see that any deformation $Q$ of $P_0$ is associated with a \emph{unique} deforming vector $\textbf{b}$, because conditions \eqref{item:pts} and \eqref{item:nopass} imply that the entries of $\textbf{b}$ must satisfy \[ b_i = \max_{\textbf{x} \in Q} \langle \textbf{a}_i, \textbf{x} \rangle, \quad \forall 1 \le i \le m. \] Thus, we say $\textbf{b}$ is \emph{the} deforming vector for $Q.$ The uniqueness of $\textbf{b}$, together with condition (a), establishes a one-to-one correspondence between deformations $Q$ of $P_0$ and their associated deforming vectors. Therefore, we give the following definition. \begin{definition} The \emph{deformation cone} of $P_0$, denoted by $\operatorname{Def}(P_0)$, is the collection of deforming vectors $\textbf{b} \in \mathbb{R}^m$ described in Definition \ref{defn:deform0}.
\end{definition} \begin{figure}[t] \begin{tikzpicture} \begin{scope}[scale=0.5] \node at (13,0) {$\left(\begin{array}{rr} -1&0 \\ 0&1 \\ 0&-1 \\ 1&-1 \end{array}\right)\left(\begin{array}{r} x\\y\end{array}\right) \leq \left(\begin{array}{r} 1 \\ 2 \\ 1 \\ 2 \end{array}\right).$}; \draw[help lines,dashed] (3,0)--(-3,0); \draw[help lines,dashed] (0,3)--(0,-3); \draw (-1,-3)--(-1,3); \draw (-3,2)--(5,2); \draw (-3,-1)--(3,-1); \draw (0,-2)--(5,3); \node[above left] at (-1,2) {$\scriptstyle (-1,2)$}; \node[below left] at (-1,-1) {$\scriptstyle (-1,-1)$}; \node[below right] at (1,-1) {$\scriptstyle (1,-1)$}; \node[above right] at (4,2) {$\scriptstyle (4,2)$}; \draw[fill=red] (-1,2) circle [radius=0.1]; \draw[fill=red] (-1,-1) circle [radius=0.1]; \draw[fill=red] (1,-1) circle [radius=0.1]; \draw[fill=red] (4,2) circle [radius=0.1]; \node at (0.8, 0.7) {\Large $\textcolor{blue}{P_0}$}; \node[above left] at (-1,0) {$F_1$}; \node[above] at (2,2) {$F_2$}; \node[below left] at (0,-1) {$F_3$}; \node[above right] at (3,0) {$F_4$}; \draw[pattern=dots] (-1,-1)--(1,-1)--(4,2)--(-1,2)--cycle; \end{scope} \begin{scope}[yshift=-2.8cm, scale=0.35] \draw[help lines,dashed] (3,0)--(-3,0); \draw[help lines,dashed] (0,3)--(0,-3); \draw (-3,-3)--(-3,3); \draw (-3.5,2)--(5,2); \draw (-3.5,0)--(3,0); \draw (0.5,-2)--(5.5,3); \node[above left] at (-3,2) {$\scriptstyle (-1,2)$}; \node[below left] at (-3,0) {$\scriptstyle (-3,0)$}; \node[below right] at (2.5,0) {$\scriptstyle (2.5,0)$}; \node[above right] at (4.5,2) {$\scriptstyle (4.5,2)$}; \draw[fill=red] (-3,2) circle [radius=0.2]; \draw[fill=red] (-3,0) circle [radius=0.2]; \draw[fill=red] (2.5,0) circle [radius=0.2]; \draw[fill=red] (4.5,2) circle [radius=0.2]; \draw[pattern=dots] (-3,0)--(2.5,0)--(4.5,2)--(-3,2)--cycle; \node at (0,-4) {$\textbf{b}_1=(3,2,0,2.5)$}; \node at (0.5,2.7) {\Large $\textcolor{blue}{Q_1}$}; \end{scope} \begin{scope}[xshift= 5cm, yshift=-2.8cm, scale=0.35] \draw[help lines,dashed] (3,0)--(-3,0); \draw[help lines,dashed] (0,3)--(0,-3); \draw (-1,-3)--(-1,3); \draw (-3,2)--(5,2); \draw (-3,-1)--(3,-1); \draw (-2,-2)--(3,3); \node[above left] at (-1,2) {$\scriptstyle (-1,2)$}; \node[below left] at (-1.2,-0.3) {$\scriptstyle (-1,-1)$}; \node[above right] at (2,2) {$\scriptstyle (2,2)$}; \draw[fill=red] (-1,2) circle [radius=0.2]; \draw[fill=red] (-1,-1) circle [radius=0.2]; \draw[fill=red] (2,2) circle [radius=0.2]; \draw[pattern=dots] (-1,-1)--(2,2)--(-1,2)--cycle; \node at (0,-4) {$\textbf{b}_2=(1,2,1,0)$}; \node at (0.5,2.7) {\Large $\textcolor{blue}{Q_2}$}; \end{scope} \begin{scope}[xshift= 9.5cm, yshift=-2.8cm, scale=0.35] \draw[help lines,dashed] (3,0)--(-3,0); \draw[help lines,dashed] (0,3)--(0,-3); \draw (-1,-3)--(-1,3); \draw (-3,2)--(5,2); \draw (-3,-2)--(3,-2); \draw (-2,-2)--(3,3); \node[above left] at (-1,2) {$\scriptstyle (-1,2)$}; \node[below left] at (-1.2,-0.3) {$\scriptstyle (-1,-1)$}; \node[above right] at (2,2) {$\scriptstyle (2,2)$}; \node[below right] at (-1,-2) {$\scriptstyle (-1,-2)$}; \node[below left] at (-1.5,-2) {$\scriptstyle (-2,-2)$}; \draw[fill=red] (-1,2) circle [radius=0.2]; \draw[fill=black] (-1,-1) circle [radius=0.2]; \draw[fill=red] (2,2) circle [radius=0.2]; \draw[fill=red] (-1,-2) circle [radius=0.2]; \draw[fill=red] (-2,-2) circle [radius=0.2]; \draw[pattern=dots] (-1,-1)--(2,2)--(-1,2)--cycle; \node at (0,-4) {$\textbf{b}_3=(1,2,2,0)$}; \node at (0.5,2.7) {\Large $\textcolor{blue}{Q_3}$}; \end{scope} \end{tikzpicture} \caption{Polytopes in Example \ref{ex:example}} \label{fig:exfigure} \end{figure} 
\begin{example}\label{ex:example} Let $P_0 \subset V = \mathbb{R}^2$ be the polytope on the top left of Figure \ref{fig:exfigure}, which is defined by the linear system given to its right. Let $\textrm{A}$ be the matrix in the linear system. Any deformation $Q$ of $P_0$ can be defined by $\textrm{A} \textbf{x} \le \textbf{b}$ for some $\textbf{b}.$ Two possible deformations $Q_1$ and $Q_2$ together with their respective deforming vectors $\textbf{b}_1$ and $\textbf{b}_2$ are shown on the bottom of Figure \ref{fig:exfigure}. Notice that $Q_3$ defined by $\textrm{A} \textbf{x} \le \textbf{b}_3$ is exactly the same polytope as $Q_2$, and so is a deformation of $P_0$. However, $\textbf{b}_3$ does not satisfy condition \eqref{item:nopass}, and thus is not a deforming vector. Hence, $\textbf{b}_1, \textbf{b}_2 \in \operatorname{Def}(P_0)$, but $\textbf{b}_3 \not\in \operatorname{Def}(P_0).$ (This conclusion will be proved formally in Example \ref{ex:exampleprop}.) \end{example} The deformation cone of $P_0$ is a natural subject to study if one is interested in deformations of the fixed polytope $P_0$. We can now rephrase our initial general question. \begin{question}\label{ques1} Fix a full-dimensional polytope $P_0 \subset V$. How do we find a characterization for $\operatorname{Def}(P_0)?$ \end{question} There is another equivalent way of defining deformations $Q$ of $P_0$ using normal fans of polytopes. (See Definition \ref{defn:normal} for a formal definition of normal cones and normal fans.) \begin{proposition}\label{prop:deform} A polytope $Q \subset V$ is a deformation of $P_0$ if and only if the normal fan $\Sigma(Q)$ of $Q$ is a coarsening of the normal fan $\Sigma(P_0)$ of $P_0$. \end{proposition} The proof of Proposition \ref{prop:deform} is quite different from what we discuss in the rest of the paper, so it will be included in Appendix \ref{apd:normal}. Note that in Example \ref{ex:example}, the polytope $Q_1$ has the same normal fan as $P_0$, whereas $Q_2$'s normal fan is a coarsening. Proposition \ref{prop:deform} implies that if two polytopes $P_1$ and $P_2$ have the same normal fan $\Sigma$, they have exactly the same deformation cone. Abusing notation, we may denote this deformation cone by $\operatorname{Def}(\Sigma),$ and call it the \emph{deformation cone} of $\Sigma.$ We say a fan $\Sigma$ is \emph{projective} if it is the normal fan of a polytope. (Not all fans are projective.) Once we know that a projective fan $\Sigma$ is the normal fan of a polytope, one can check that the polytope is full-dimensional if and only if $0 \in \Sigma,$ i.e., all cones in $\Sigma$ are pointed. We use this language to rewrite Setup \ref{setup1} and Question \ref{ques1}. \begin{setup}\label{setup2} Let $\Sigma_0$ be a projective fan in $W$ such that $0 \in \Sigma_0$. Assume it has $m$ one-dimensional cones, generated by the rays $\textbf{a}_1, \dots, \textbf{a}_m,$ respectively. \end{setup} \begin{question}\label{ques2} Given a fixed fan $\Sigma_0$ as described in Setup \ref{setup2}, how do we find a characterization for $\operatorname{Def}(\Sigma_0)?$ \end{question} Questions \ref{ques1} and \ref{ques2} are the same question in two different languages, and both have been studied; the latter is related to the study of toric varieties. (See \cite{cls} for general results on toric varieties.) It is worth remarking that a big part of the motivation and tools come from that branch of mathematics.
When $\Sigma_0$ is smooth, $\operatorname{Def}(\Sigma_0)$, modulo its lineality space, is isomorphic to $\textrm{Nef}(\Sigma_0)$, the cone of \emph{numerically effective divisors} (see \cite[Chapter 6]{cls}). \begin{remark} In addition to the two definitions we have provided, there are additional different but equivalent ways of defining deformations of polytopes. In particular, in the Appendix of \cite{PosReiWil}, the authors discuss five different ways, including the normal fan version stated in Proposition \ref{prop:deform}. However, they restrict their definitions to simple polytopes only, while our definition is for \emph{any} polytope. Furthermore, it seems our Definition \ref{defn:deform0} has not (or at least not explicitly) appeared in the literature, and it is actually very important for determining deformation cones, as the techniques that will be shown below are derived from it directly. \end{remark} There are three main results that will be presented in the rest of this section. The first result is Corollary \ref{cor:ineqs}, in which we give an explicit description of the deformation cone $\operatorname{Def}(P_0)$ of $P_0$ using linear equalities and inequalities. This will be derived directly from Definition \ref{defn:deform0}. We then analyze the inequalities in Corollary \ref{cor:ineqs} further and apply them to simple polytopes to obtain in Proposition \ref{prop:redux} a simpler description of $\operatorname{Def}(P_0)$ using inequalities indexed by edges of $P_0$. We then give our third result, Proposition \ref{prop:reduxfan}, by restating Proposition \ref{prop:redux} in the language of simplicial fans, in which the inequalities are indexed by pairs of adjacent maximal cones in the fan. We end this section with a discussion on how to determine whether a polytope is a deformation of $P_0$ using $\operatorname{Def}(P_0).$ \subsection*{Deformation cones of (not necessarily simple) polytopes} Even though condition \eqref{item:pts} of Definition \ref{defn:deform0} is necessary for the definition of deformations of a fixed polytope, if one is only concerned with deforming vectors, only condition \eqref{item:nopass} is needed, as stated in the following lemma. \begin{lemma}\label{lem:detb} Let $\textbf{b} \in \mathbb{R}^m.$ Then $\textbf{b} \in \operatorname{Def}(P_0)$ if and only if the following holds: for any vertex $v$ of $P_0$, letting $F_{i_1}, F_{i_2}, \dots, F_{i_k}$ be the facets of $P_0$ that $v$ lies on and letting $u_\textbf{b}$ be the intersection of \[ \{ \textbf{x} \in V \ : \ \langle \textbf{a}_{i_j}, \textbf{x} \rangle = b_{i_j}\}, \quad 1 \le j \le k, \] the following ``NEI'' (short for ``non-empty-intersection'') and ``no-passing'' conditions are true: \begin{enumerate} \item[(NEI)] $u_\textbf{b}$ is nonempty, so it is a point; and \item[(no-passing)] it satisfies $\textrm{A} u_\textbf{b} \le \textbf{b},$ or equivalently, \[ \langle \textbf{a}_i, u_\textbf{b} \rangle \le b_i, \quad \forall i \neq i_1,\dots, i_k.\] \end{enumerate} \end{lemma} \begin{proof} The forward implication follows directly from Definition \ref{defn:deform0}. Conversely, suppose the two conditions hold. Let $Q$ be defined by $\textrm{A} \textbf{x} \le \textbf{b}.$ Condition \eqref{item:pts} of Definition \ref{defn:deform0} is automatically satisfied, and the NEI and no-passing conditions guarantee that $u_\textbf{b}$ is a vertex of $Q$, and thus condition \eqref{item:nopass} holds. \end{proof}
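As an aside, the test in Lemma \ref{lem:detb} is easy to run mechanically. The following is a minimal computational sketch, not part of the formal development: it is written in Python with \texttt{numpy}, it assumes that every vertex of $P_0$ lies on exactly $d$ facets (as in Example \ref{ex:example}, so each linear system below is square), and the facet indices of each vertex are supplied by hand.

\begin{verbatim}
import numpy as np

def is_deforming(A, b, vertex_facets, tol=1e-9):
    # For each vertex of P_0, intersect the hyperplanes of its
    # supporting facets (NEI), then check A u <= b at the resulting
    # point u = u_b (no-passing).
    for facets in vertex_facets:
        rows = list(facets)
        try:
            u = np.linalg.solve(A[rows], b[rows])
        except np.linalg.LinAlgError:
            return False                 # NEI fails
        if not np.all(A @ u <= b + tol):
            return False                 # no-passing fails
    return True

# the polytope P_0 of the running example
A = np.array([[-1., 0.], [0., 1.], [0., -1.], [1., -1.]])
# supporting facets (0-indexed) of the vertices v, w, x, y
vertex_facets = [(0, 1), (1, 3), (2, 3), (0, 2)]
for b in ([3, 2, 0, 2.5], [1, 2, 1, 0], [1, 2, 2, 0]):
    print(b, is_deforming(A, np.array(b, dtype=float), vertex_facets))
\end{verbatim}

The three tests print \texttt{True}, \texttt{True} and \texttt{False}, matching the assertion in Example \ref{ex:example} that $\textbf{b}_1, \textbf{b}_2 \in \operatorname{Def}(P_0)$ while $\textbf{b}_3 \notin \operatorname{Def}(P_0).$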
The no-passing condition can fail in different scenarios. For the polytope $Q_3$ of Example \ref{ex:example}, not only is the inequality $-y \le 2$ not facet-defining, but the hyperplane determined by $-y = 2$ does not ``touch'' $Q_3,$ which causes the failure of the no-passing condition. Below, we show a different example where the no-passing condition fails even though all the inequalities are still facet-defining. \begin{example} See the $3$-dimensional polytopes $P_0$ and $Q$ shown on the left of Figure \ref{fig:above}. The right of Figure \ref{fig:above} shows how they look when viewed from above. \begin{figure}[!htb] \begin{tikzpicture}% \begin{scope} [x={(-0.2 cm, -0.1 cm)}, y={(0.5 cm,0 cm)}, z={(0cm, 0.5cm)}, scale=0.5, back/.style={loosely dotted, thin}, edge/.style={color=black!95!black, thick}, facet/.style={fill=black!95!black,fill opacity=0.200}, vertex/.style={inner sep=1pt,circle,draw=red!25!black,fill=red!75!black,thick,anchor=base}] \coordinate (8.00000, 0.00000, 0.00000) at (8.00000, 0.00000, 0.00000); \coordinate (8.00000, 10.00000, 0.00000) at (8.00000, 10.00000, 0.00000); \coordinate (0.00000, 10.00000, 0.00000) at (0.00000, 10.00000, 0.00000); \coordinate (4.00000, 6.00000, 4.00000) at (4.00000, 6.00000, 4.00000); \coordinate (4.00000, 4.00000, 4.00000) at (4.00000, 4.00000, 4.00000); \coordinate (0.00000, 0.00000, 0.00000) at (0.00000, 0.00000, 0.00000); \fill[facet] (4.00000, 4.00000, 4.00000) -- (8.00000, 0.00000, 0.00000) -- (8.00000, 10.00000, 0.00000) -- (4.00000, 6.00000, 4.00000) -- cycle {}; \fill[facet] (4.00000, 6.00000, 4.00000) -- (8.00000, 10.00000, 0.00000) -- (0.00000, 10.00000, 0.00000) -- cycle {}; \fill[facet] (0.00000, 0.00000, 0.00000) -- (0.00000, 10.00000, 0.00000) -- (4.00000, 6.00000, 4.00000) -- (4.00000, 4.00000, 4.00000) -- cycle {}; \fill[facet] (0.00000, 0.00000, 0.00000) -- (8.00000, 0.00000, 0.00000) -- (4.00000, 4.00000, 4.00000) -- cycle {}; \draw[edge] (8.00000, 0.00000, 0.00000) -- (8.00000, 10.00000, 0.00000); \draw[edge] (8.00000, 0.00000, 0.00000) -- (4.00000, 4.00000, 4.00000); \draw[edge] (8.00000, 0.00000, 0.00000) -- (0.00000, 0.00000, 0.00000); \draw[edge] (8.00000, 10.00000, 0.00000) -- (0.00000, 10.00000, 0.00000); \draw[edge] (8.00000, 10.00000, 0.00000) -- (4.00000, 6.00000, 4.00000); \draw[edge] (0.00000, 10.00000, 0.00000) -- (4.00000, 6.00000, 4.00000); \draw[edge] (0.00000, 10.00000, 0.00000) -- (0.00000, 0.00000, 0.00000); \draw[edge] (4.00000, 6.00000, 4.00000) -- (4.00000, 4.00000, 4.00000); \draw[edge] (4.00000, 4.00000, 4.00000) -- (0.00000, 0.00000, 0.00000); \node[vertex] at (8.00000, 0.00000, 0.00000) {}; \node[vertex] at (8.00000, 10.00000, 0.00000) {}; \node[vertex] at (0.00000, 10.00000, 0.00000) {}; \node[vertex] at (4.00000, 6.00000, 4.00000) {}; \node[vertex] at (4.00000, 4.00000, 4.00000) {}; \node[vertex] at (0.00000, 0.00000, 0.00000) {}; \node at (-6,-4) {$\textcolor{blue}{P_0}$}; \end{scope} \begin{scope}% [xshift=3.5cm, x={(-0.2 cm, -0.1 cm)}, y={(0.6 cm,0 cm)}, z={(0cm, 0.4cm)}, scale=0.5, back/.style={loosely dotted, thin}, edge/.style={color=black!95!black, thick}, facet/.style={fill=black!95!black,fill opacity=0.200}, vertex/.style={inner sep=1pt,circle,draw=red!25!black,fill=red!75!black,thick,anchor=base}] \coordinate (10.00000, 0.00000, 0.00000) at (10.00000, 0.00000, 0.00000); \coordinate (10.00000, 8.00000, 0.00000) at (10.00000, 8.00000, 0.00000); \coordinate (6.00000, 4.00000, 4.00000) at (6.00000, 4.00000, 4.00000); \coordinate (4.00000, 4.00000, 4.00000) at (4.00000, 4.00000, 4.00000); \coordinate (0.00000, 0.00000, 0.00000) at
(0.00000, 0.00000, 0.00000); \coordinate (0.00000, 8.00000, 0.00000) at (0.00000, 8.00000, 0.00000); \fill[facet] (6.00000, 4.00000, 4.00000) -- (10.00000, 0.00000, 0.00000) -- (10.00000, 8.00000, 0.00000) -- cycle {}; \fill[facet] (0.00000, 8.00000, 0.00000) -- (10.00000, 8.00000, 0.00000) -- (6.00000, 4.00000, 4.00000) -- (4.00000, 4.00000, 4.00000) -- cycle {}; \fill[facet] (0.00000, 8.00000, 0.00000) -- (4.00000, 4.00000, 4.00000) -- (0.00000, 0.00000, 0.00000) -- cycle {}; \fill[facet] (0.00000, 0.00000, 0.00000) -- (10.00000, 0.00000, 0.00000) -- (6.00000, 4.00000, 4.00000) -- (4.00000, 4.00000, 4.00000) -- cycle {}; \draw[edge] (10.00000, 0.00000, 0.00000) -- (10.00000, 8.00000, 0.00000); \draw[edge] (10.00000, 0.00000, 0.00000) -- (6.00000, 4.00000, 4.00000); \draw[edge] (10.00000, 0.00000, 0.00000) -- (0.00000, 0.00000, 0.00000); \draw[edge] (10.00000, 8.00000, 0.00000) -- (6.00000, 4.00000, 4.00000); \draw[edge] (10.00000, 8.00000, 0.00000) -- (0.00000, 8.00000, 0.00000); \draw[edge] (6.00000, 4.00000, 4.00000) -- (4.00000, 4.00000, 4.00000); \draw[edge] (4.00000, 4.00000, 4.00000) -- (0.00000, 0.00000, 0.00000); \draw[edge] (4.00000, 4.00000, 4.00000) -- (0.00000, 8.00000, 0.00000); \draw[edge] (0.00000, 0.00000, 0.00000) -- (0.00000, 8.00000, 0.00000); \node[vertex] at (10.00000, 0.00000, 0.00000) {}; \node[vertex] at (10.00000, 8.00000, 0.00000) {}; \node[vertex] at (6.00000, 4.00000, 4.00000) {}; \node[vertex] at (4.00000, 4.00000, 4.00000) {}; \node[vertex] at (0.00000, 0.00000, 0.00000) {}; \node[vertex] at (0.00000, 8.00000, 0.00000) {}; \node at (-6,-4) {$\textcolor{blue}{Q}$}; \end{scope} \begin{scope}[xshift=7cm, yshift=-1cm, scale=0.2] \draw (0,0)--(10,0)--(10,8)--(0,8)--cycle; \draw (0,0)--(4,4)--(6,4)--(10,0); \draw (0,8)--(4,4)--(6,4)--(10,8); \draw[fill] (0,0) circle [radius=0.2]; \draw[fill] (10,0) circle [radius=0.2]; \draw[fill] (10,8) circle [radius=0.2]; \draw[fill] (0,8) circle [radius=0.2]; \draw[fill] (4,4) circle [radius=0.2]; \draw[fill] (6,4) circle [radius=0.2]; \node at (1.5,4) {$\textcolor{blue}{P_0}$}; \end{scope} \begin{scope}[xshift=10cm, yshift=-1cm, scale=0.2] \draw (0,0)--(0,10)--(8,10)--(8,0)--cycle; \draw (0,0)--(4,4)--(4,6)--(0,10); \draw (8,0)--(4,4)--(4,6)--(8,10); \draw[fill] (0,0) circle [radius=0.2]; \draw[fill] (0,10) circle [radius=0.2]; \draw[fill] (8,10) circle [radius=0.2]; \draw[fill] (8,0) circle [radius=0.2]; \draw[fill] (4,4) circle [radius=0.2]; \draw[fill] (4,6) circle [radius=0.2]; \node at (2,5) {$\textcolor{blue}{Q}$}; \end{scope} \end{tikzpicture} \caption{The polytopes $P_0$ and $Q$ (left), and their views from above (right).} \label{fig:above} \end{figure} $Q$ is obtained from $P_0$ by moving the $\textsc{left}$ and $\textsc{right}$ facets of $P_0$ inward ``too much''. Notice that in $P_0$, the facets $\textsc{front},\textsc{back}$ and $\textsc{right}$ intersect in a vertex, but in $Q$ they do not. More precisely, the hyperplanes determined by the $\textsc{front},\textsc{back}$ and $\textsc{right}$ facets of $Q$ intersect at a point outside of $Q$, which is thus on the wrong side of the hyperplane determined by the $\textsc{left}$ facet. So $Q$ is not a deformation of $P_0$, even though it can be defined using the same matrix $\textrm{A}$ as $P_0.$ \end{example} It is straightforward to translate the conditions in Lemma \ref{lem:detb} into explicit linear conditions. We give the following notation and definition before stating Corollary \ref{cor:ineqs}.
\begin{notation} For convenience, for any facet $F = F_i$ of $P_0,$ we sometimes use $F$ as the subscript for $\textbf{a}_i$ and $b_i,$ that is, \[ \textbf{a}_F = \textbf{a}_i, \quad b_F = b_i.\] \end{notation} \begin{definition} Let $v$ be a vertex of $P_0.$ Suppose $F_{i_1}, F_{i_2}, \dots, F_{i_k}$ (with $i_1 < i_2 < \cdots < i_k$) are the facets of $P_0$ that $v$ lies on. (Note that we must have $k \ge d$.) We say $F_{i_1},\dots, F_{i_d}$ are the \emph{first $d$ supporting facets of $v$}, and $F_{i_j}$ for $d < j \le k$ is an \emph{extra supporting facet of $v$}. (Note that these definitions rely on the specific ordering we give for the facets of $P_0$.) For any $\textbf{b} \in \mathbb{R}^m$, let $v_\textbf{b}$ be the intersection of the hyperplanes determined by the first $d$ supporting facets of $v,$ that is, $v_\textbf{b}$ is the intersection of \[ \{ \textbf{x} \in V \ : \ \langle \textbf{a}_{i_j}, \textbf{x} \rangle = b_{i_j}\}, \quad 1 \le j \le d, \] which clearly is a point. For any vertex $v$ of $P_0$ and any facet $F$ of $P_0$, we associate with the pair $(v, F)$ an equality or an inequality as below: \begin{align*} E_{v,F}(\textbf{b}) :& \quad \langle \textbf{a}_F, v_\textbf{b} \rangle = b_F, \\ I_{v,F}(\textbf{b}) :& \quad \langle \textbf{a}_F, v_\textbf{b} \rangle \le b_F. \end{align*} \end{definition} \begin{corollary}\label{cor:ineqs} The deformation cone $\operatorname{Def}(P_0)$ is the collection of vectors $\textbf{b}$ satisfying the following two conditions: \begin{enumerate}[(i)] \item All the equalities $E_{v,F}(\textbf{b})$ hold, where $(v,F)$ is a vertex-facet pair of $P_0$ such that $F$ is an extra supporting facet of $v$. \item All the inequalities $I_{v,F}(\textbf{b})$ hold, where $(v,F)$ is a vertex-facet pair of $P_0$ such that $F$ is not a supporting facet of $v.$ \end{enumerate} Therefore, $\operatorname{Def}(P_0)$ is (indeed) a polyhedral cone. \end{corollary} \begin{proof} One sees that condition (i) is equivalent to the NEI condition, and condition (ii) is equivalent to the no-passing condition. Moreover, since $v_\textbf{b}$ is the solution of a linear system, its coordinates are linear combinations of the entries of $\textbf{b}.$ Therefore, each equality or inequality is linear in $\textbf{b}$, and so the solution set is a polyhedral cone. \end{proof} \begin{remark}\label{rem:nef} The deformation cone is related to the $\textrm{Nef}$ cone (see \cite[Definition 6.3.18]{cls}) of the toric variety associated with $\Sigma(P_0)$, as follows. Any polytope whose normal fan is a coarsening of $\Sigma(P_0)$ gives a \emph{basepoint free} divisor, which for toric varieties is the same as a \emph{nef} divisor (\cite[Theorem 6.3.12]{cls}). The difference is that the $\textrm{Nef}$ cone does not distinguish between translations of the same polytope, since they give the same divisor modulo rational equivalence. Hence, the $\textrm{Nef}$ cone is isomorphic to the deformation cone modulo translations. \end{remark} The number of inequalities in Corollary \ref{cor:ineqs} can be reduced. Given a polytope $P_0$, we say a facet $F$ is a \emph{neighbor} of a vertex $v$, and $(v,F)$ is a \emph{neighboring pair} of $P_0$, if $v\notin F$ but there exists a vertex $v'\in F$ such that $\{v, v'\}$ is an edge of $P_0$. \begin{proposition}\label{prop:preredux} Let $\textbf{b} \in \mathbb{R}^m.$ The following are equivalent. \begin{enumerate} \item $\textbf{b} \in \operatorname{Def}(P_0).$ \item Conditions (i) and (ii) of Corollary \ref{cor:ineqs} are satisfied.
\item Condition (i) of Corollary \ref{cor:ineqs} is satisfied, and all the inequalities $I_{v,F}(\textbf{b})$ are satisfied, where $(v, F)$ is a neighboring vertex-facet pair of $P_0$. \item Condition (i) of Corollary \ref{cor:ineqs} is satisfied, and for any edge $e=\{v, v'\}$ of $P_0$, there exists $\lambda_e \in \mathbb{R}_{\ge 0}$ such that $v- v' = \lambda_e (v_\textbf{b} - v'_\textbf{b}).$ \end{enumerate} \end{proposition} \begin{proof} The equivalence of (1) and (2) is the content of Corollary \ref{cor:ineqs}, and it is clear that (2) implies (3). So it suffices to show that (3) implies (4) and (4) implies (2). \noindent \underline{``$(3) \implies (4)$'':} There exist $d-1$ facets $F_{j_1},\dots, F_{j_{d-1}}$ of $P_0$ such that the edge $e=\{v, v'\}$ of $P_0$ is their intersection. Thus $v-v'$ is in the one-dimensional space that is orthogonal to the $(d-1)$-space spanned by $\textbf{a}_{j_1},\dots, \textbf{a}_{j_{d-1}}.$ By the definition of $v_\textbf{b}$ and $v'_\textbf{b}$, and because condition (i) of Corollary \ref{cor:ineqs} is satisfied, one sees that $v_\textbf{b}-v'_\textbf{b}$ lies in the same one-dimensional space. Therefore, $v-v' = \lambda_e(v_\textbf{b} - v'_\textbf{b})$ for some $\lambda_e \in \mathbb{R}.$ Thus, it is left to show that $\lambda_e \ge 0.$ Let $F$ be a facet that $v'$ lies on but $v$ does not. Since $v \in P_0$ has to satisfy the strict inequality in $I_{v,F}(\textbf{b}_0),$ we have that $\langle \textbf{a}_F, v \rangle < b_{0, F} = \langle \textbf{a}_F, v' \rangle,$ which is equivalent to $\langle \textbf{a}_F, v-v' \rangle < 0.$ On the other hand, as $(v,F)$ is a neighboring pair, we also have that $I_{v,F}(\textbf{b})$ holds, which (since $\langle \textbf{a}_F, v'_\textbf{b} \rangle = b_F$) is equivalent to $\langle \textbf{a}_F, v_\textbf{b} -v'_\textbf{b} \rangle \le 0.$ Since $\langle \textbf{a}_F, v-v' \rangle = \lambda_e \langle \textbf{a}_F, v_\textbf{b} - v'_\textbf{b} \rangle < 0,$ we conclude that $\lambda_e \ge 0.$ \noindent \underline{``$(4) \implies (2)$'':} Let $(v, F)$ be a vertex-facet pair of $P_0$ such that $v$ does not lie on $F.$ Let $v_0 = v$ and pick a point $\textbf{x} \in F.$ Then \[ \langle \textbf{a}_F, v_0 \rangle < b_{0,F} = \langle \textbf{a}_F, \textbf{x} \rangle.\] Since $\textbf{x} - v_0$ is a nonnegative linear combination of rays in $\{ u - v_0 \ : \ \{v_0, u\} \text{ is an edge of $P_0$ } \},$ there exists a vertex $v_1$ such that $\{v_0,v_1\}$ is an edge and \[ \langle \textbf{a}_F, v_0 \rangle < \langle \textbf{a}_F, v_1 \rangle.\] Continuing this procedure, we can construct a sequence of vertices of $P_0:$ $v_0=v, v_1, v_2, \dots, v_\ell$ such that $\{v_i, v_{i+1}\}$ is an edge of $P_0$ for each $i,$ and \[ \langle \textbf{a}_F, v \rangle < \langle \textbf{a}_F, v_1 \rangle < \cdots < \langle \textbf{a}_F, v_\ell \rangle = b_{0,F}.\] Using the assumption of (4), we get \[ \langle \textbf{a}_F, v_\textbf{b} \rangle \le \langle \textbf{a}_F, \left(v_1\right)_{\textbf{b}} \rangle \le \cdots \le \langle \textbf{a}_F, \left(v_\ell\right)_{\textbf{b}} \rangle = b_{F},\] which is exactly the inequality $I_{v,F}(\textbf{b})$ as desired. \end{proof} In this article, we will use the equivalence between (1) and (3) of Proposition \ref{prop:preredux} to determine the deformation cone $\operatorname{Def}(P_0).$ \begin{remark} Part (3) of the proposition allows us to reduce the number of inequalities needed to determine the deformation cone. In the language of toric varieties, this corresponds to the fact that it is enough to check positivity on each torus-invariant curve. See \cite[Theorem 6.3.12 part (c)]{cls}.
\end{remark} It is undesirable to compute $v_\textbf{b}$ and then compute $\langle \textbf{a}_{F}, v_\textbf{b} \rangle$ for each individual $E_{v,F}$ or $I_{v,F}$. We find the following explicit formulation useful. \begin{lemma}\label{lem:rewritelhs} Let $(v,F)$ be a vertex-facet pair of $P_0$. Suppose $F_{i_1}, F_{i_2}, \dots, F_{i_d}$ are the first $d$ supporting facets of $v$. If $\textbf{a}_F = \sum_{j=1}^d c_j \textbf{a}_{i_j} = \sum_{j =1}^d c_j \textbf{a}_{F_{i_j}},$ then the left-hand side of $E_{v,F}$ and $I_{v,F}$ becomes $\displaystyle \sum_{j =1}^d c_j b_{{i_j}},$ or equivalently $\displaystyle \sum_{j =1}^d c_j b_{F_{i_j}}.$ \end{lemma} \begin{proof} $\displaystyle \langle \textbf{a}_{F}, v_\textbf{b} \rangle = \left\langle \sum_{j=1}^d c_j\textbf{a}_{F_{i_j}}, v_\textbf{b} \right\rangle =\sum_{j=1}^d c_j \langle \textbf{a}_{F_{i_j}}, v_\textbf{b} \rangle = \sum_{j=1}^d c_j b_{F_{i_j}}. $ \end{proof} \subsection*{Deformation cones of simple polytopes} Finally, we apply our results to simple polytopes. We start with the following preliminary lemma. \begin{lemma}\label{lem:edgeeq} Suppose $P_0$ is simple. Let $e = \{v, v'\}$ be an edge of $P_0.$ Suppose $F, F_{i_1},\dots, F_{i_{d-1}}$ are the supporting facets of $v,$ and $F', F_{i_1},\dots, F_{i_{d-1}}$ are the supporting facets of $v'.$ There is a unique solution $(c_F, c_{F'}, c_1, \dots, c_{d-1})$ up to scale to \begin{equation}\label{equ:edgeeq} \sum_{j=1}^{d-1} c_j \textbf{a}_{F_{i_j}} = c_F \textbf{a}_F + c_{F'} \textbf{a}_{F'} \end{equation} such that $c_F c_{F'} > 0.$ Hence, there is a unique solution up to \emph{positive} scale to the above equation such that $c_F > 0, c_{F'} > 0.$ \end{lemma} \begin{proof} The existence and uniqueness of a solution to \eqref{equ:edgeeq} with $c_F c_{F'} \neq 0$ follow from the fact that both the set $\textbf{a}_F, \textbf{a}_{F_{i_1}},$ $\dots,$ $\textbf{a}_{F_{i_{d-1}}}$ and the set $\textbf{a}_{F'}, \textbf{a}_{F_{i_1}}, \dots, \textbf{a}_{F_{i_{d-1}}}$ are linearly independent. The numbers $c_F$ and $c_{F'}$ have the same sign because $\textbf{a}_F$ and $\textbf{a}_{F'}$ are on two different sides of the $(d-1)$-dimensional space spanned by $\textbf{a}_{F_{i_1}}, \dots, \textbf{a}_{F_{i_{d-1}}}$, and $\sum_{j=1}^{d-1} c_j \textbf{a}_{F_{i_j}}$ is a vector in this space. \end{proof} \begin{remark} Equation \eqref{equ:edgeeq} is called the \emph{wall condition} in \cite[Chapter 6]{cls}. See \cite[Figure 17, page 301]{cls}. \end{remark} \begin{definition} Assume all the hypotheses of Lemma \ref{lem:edgeeq}, and let $(c_F, c_{F'}, c_1, \dots, c_{d-1})$ be the unique solution up to positive scale to \eqref{equ:edgeeq} provided by Lemma \ref{lem:edgeeq}. (So $c_F, c_{F'} > 0$.) We associate to the edge $e=\{v, v'\}$ an inequality: \[ I_e(\textbf{b}): \sum_{j=1}^{d-1} c_j b_{F_{i_j}} \le c_F b_F + c_{F'} b_{F'}.\] \end{definition} We now reach the main result of this part. \begin{proposition}\label{prop:redux} Suppose $P_0$ is as given in Setup \ref{setup1} and is simple. Let $\textbf{b} \in \mathbb{R}^m.$ Then $\textbf{b} \in \operatorname{Def}(P_0)$ if and only if all the inequalities $I_{e}(\textbf{b})$ are satisfied, where $e$ ranges over the edges of $P_0.$ \end{proposition} \begin{proof} We use the equivalence between (1) and (3) of Proposition \ref{prop:preredux}. Since $P_0$ is simple, it is clear that condition (i) of Corollary \ref{cor:ineqs} can be ignored.
Furthermore, any ordered pair of adjacent vertices $(v, v')$ of $P_0$ determines a unique neighboring vertex-facet pair $(v, F)$, where $F$ is the unique supporting facet of $v'$ that does not support $v,$ and any pair $(v,F)$ arises (not necessarily uniquely) this way. Therefore, we can change the indexing of the inequalities in condition (ii) of Corollary \ref{cor:ineqs} to $(v,v').$ Finally, one can verify that if $e=\{v, v'\}$ is an edge, then the inequality $I_e(\textbf{b})$ is equivalent to both the inequality associated to $(v,v')$ and the one associated to $(v', v).$ Then the conclusion follows. \end{proof} \begin{example}\label{ex:exampleprop} We go back to our Example \ref{ex:example}, illustrated in Figure \ref{fig:exfigure}. We draw the polytope $P_0$ with a labeling of its vertices, and draw the normal fan $\Sigma(P_0)$ of $P_0$ in Figure \ref{fig:normalfan}. \begin{figure}[t!] \begin{tikzpicture} \begin{scope}[xshift=7cm, scale=0.8] \draw[thick,->] (0,0) -- (-1,0); \node[left] at (-1,0) {$\scriptstyle \textbf{a}_1=(-1,0)$}; \draw[thick,->] (0,0) -- (0,1); \node[above] at (0,1) {$\scriptstyle \textbf{a}_2=(0,1)$}; \draw[thick,->] (0,0) -- (0,-1); \node[below] at (0,-1) {$\scriptstyle \textbf{a}_3=(0,-1)$}; \draw[thick,->] (0,0) -- (1,-1); \node[above right] at (0.8,-0.8) {$\scriptstyle \textbf{a}_4=(1,-1)$}; \node at (-3,1){$\textcolor{blue}{\Sigma(P_0)}:$}; \end{scope} \begin{scope}[xshift=-5cm, scale=0.9] \draw (6,0.5)--(8,0.5)--(7,-0.5)--(6,-0.5)--cycle; \node[above] at (6,0.5) {$v$}; \node[above] at (8,0.5){$w$}; \node[below] at (7,-0.5) {$x$}; \node[below] at (6,-0.5) {$y$}; \node at (6.7,0){$\textcolor{blue}{P_0}$}; \draw[fill] (6,-0.5) circle [radius=0.08]; \draw[fill] (6,0.5) circle [radius=0.08]; \draw[fill] (7,-0.5) circle [radius=0.08]; \draw[fill] (8,0.5) circle [radius=0.08]; \end{scope} \end{tikzpicture} \caption{Normal fan of polytope $P_0$ in Example \ref{ex:example}.}\label{fig:normalfan} \end{figure} Now we apply Proposition \ref{prop:redux} to find the inequalities that define $\operatorname{Def}(P_0).$ Let $e_1 = \{v, y\}.$ The vertex $v$ lies on facets $F_1$ and $F_2$, and the vertex $y$ lies on facets $F_1$ and $F_3.$ We have $0 \cdot \textbf{a}_1=\textbf{a}_2 + \textbf{a}_3.$ This gives the inequality $I_{e_1}: 0 \le b_2 + b_3.$ Similarly, for $e_2 := \{v, w\},$ we have $-\textbf{a}_2 =\textbf{a}_1 + \textbf{a}_4,$ which gives $I_{e_2}: - b_2 \le b_1 + b_4;$ for $e_3 = \{y,x\},$ we have $\textbf{a}_3=\textbf{a}_1 + \textbf{a}_4,$ which gives $I_{e_3}: b_3 \le b_1 + b_4;$ and for $e_4 = \{x,w\},$ we have $0 \cdot \textbf{a}_4=\textbf{a}_2 + \textbf{a}_3,$ which gives $I_{e_4}: 0 \le b_2 + b_3.$ Note that two of the four inequalities, $I_{e_1}$ and $I_{e_4},$ are the same, and $I_{e_2}$ follows from $I_{e_1}$ and $I_{e_3},$ so it is redundant. Therefore, $\operatorname{Def}(P_0)$ is defined by two inequalities in $\mathbb{R}^4:$ \begin{equation} I_{e_1}=I_{e_4}: 0 \le b_2+ b_3, \qquad I_{e_3}: b_3 \le b_1 + b_4. \label{equ:exdefcone} \end{equation} Among the three vectors given in Example \ref{ex:example}, we can verify that $\textbf{b}_1=(3,2,0,2.5)$ and $\textbf{b}_2=(1,2,1,0)$ satisfy the above two inequalities, while $\textbf{b}_3=(1,2,2,0)$ does not satisfy the inequality $I_{e_3}.$ This agrees with the assertion that $\textbf{b}_1, \textbf{b}_2 \in \operatorname{Def}(P_0)$ and $\textbf{b}_3 \not\in \operatorname{Def}(P_0).$ We remark that $\operatorname{Def}(P_0)$ defined by \eqref{equ:exdefcone} is not pointed.
Indeed, for any deformation $Q$ of $P_0,$ any translation of $Q$ is also a deformation of $P_0.$ We may consider two polytopes equivalent if one is obtained from the other by a translation. Under this equivalence, the collection of deforming vectors gives the nef cone $\operatorname{Nef}(P_0)$ of $P_0$. (See Remark \ref{rem:nef}.) One sees that $\operatorname{Nef}(P_0)$ is computed from $\operatorname{Def}(P_0)$ by quotienting out the span of the two columns of $\textrm{A},$ which are $(-1,0,0,1)^T$ and $(0,1,-1,-1)^T.$ In this quotient we write everything in terms of $b_3,b_4,$ since we have $b_1=b_4$ and $b_2=b_3+b_4$. So the nef cone $\operatorname{Nef}(P_0)$ is defined by \[ 0 \leq 2b_3+b_4, \qquad 0 \leq -b_3+2b_4, \] where $b_3,b_4$ are the coordinates of $\mathbb{R}^2$. This is always a pointed cone. \end{example} \subsection*{Deformation cones of simplicial projective fans} A fan $\Sigma$ is \emph{simplicial} if every cone in it is simplicial. This means that every $k$-dimensional cone in $\Sigma$ is spanned by exactly $k$ rays. One sees that $P$ being simple is equivalent to $\Sigma(P)$ being simplicial. In particular, edges of $P$ are in bijection with pairs of adjacent maximal cones in $\Sigma(P),$ where we say two maximal cones are \emph{adjacent} if their spanning ray sets differ by exactly one ray. We can easily translate Lemma \ref{lem:edgeeq} and Proposition \ref{prop:redux} to versions for simplicial fans using the connection between a simple polytope and its simplicial normal fan. We omit the modified version of Lemma \ref{lem:edgeeq}, but restate Proposition \ref{prop:redux}, since the new version will be the main one we use in Sections \ref{sec:GP} and \ref{sec:NBF}. \begin{definition}\label{defn:fanineq} Suppose $\Sigma_0$ is simplicial. Let $\left\{\textbf{a}_F, \textbf{a}_{F_{i_1}},\dots, \textbf{a}_{F_{i_{d-1}}} \right\}$ and $\Big\{ \textbf{a}_{F'}, \textbf{a}_{F_{i_1}},$ $\dots,$ $\textbf{a}_{F_{i_{d-1}}} \Big\}$ be the sets of spanning rays of two adjacent maximal cones $\sigma$ and $\sigma'$ in $\Sigma_0.$ Suppose $(c_F, c_{F'}, c_1, \dots, c_{d-1})$ is the unique solution up to positive scale to \eqref{equ:edgeeq} provided by Lemma \ref{lem:edgeeq}. (So $c_F, c_{F'} > 0$.) We associate to the pair $\{\sigma, \sigma'\}$ an inequality: \[ I_{ \{\sigma,\sigma'\}} (\textbf{b}): \sum_{j=1}^{d-1} c_j b_{F_{i_j}} \le c_F b_F + c_{F'} b_{F'}.\] \end{definition} \begin{proposition}\label{prop:reduxfan} Suppose $\Sigma_0$ is as given in Setup \ref{setup2} and is simplicial.
Let $\textbf{b} \in \mathbb{R}^m.$ Then $\textbf{b} \in \operatorname{Def}(\Sigma_0)$ if and only if all the inequalities $I_{ \{\sigma, \sigma'\}}(\textbf{b})$ are satisfied, where $\{ \sigma, \sigma'\}$ ranges over the pairs of adjacent maximal cones in $\Sigma_0.$ \end{proposition} \subsection*{Back to Deformations} We finish this section with a discussion on how to determine whether a polytope $Q$ is a deformation of $P_0,$ provided that we have a description of the deformation cone $\operatorname{Def}(P_0).$ Although there is a one-to-one correspondence between deforming vectors $\textbf{b} \in \operatorname{Def}(P_0)$ and deformations of $P_0,$ if we take a polytope $Q$ that is defined by $\textrm{A} \textbf{x} \le \textbf{b}$, knowing $\textbf{b} \not\in \operatorname{Def}(P_0)$ is not enough to conclude that $Q$ is not a deformation of $P_0.$ Indeed, we have seen in Examples \ref{ex:example} and \ref{ex:exampleprop} that $Q_3$ (in Figure \ref{fig:exfigure}) is defined by $\textrm{A} \textbf{x} \le \textbf{b}_3$ where $\textbf{b}_3 \not\in \operatorname{Def}(P_0),$ but $Q_3$ is a deformation of $P_0$. It was discussed earlier that the reason $\textbf{b}_3$ is not a deforming vector is that the hyperplane defined by $-y=2$, i.e., the bottom horizontal line in the picture for $Q_3$, does not ``touch'' the polytope $Q_3.$ This turns out to be an important notion. \begin{definition} Suppose a polytope $Q \subset V$ is defined by the linear system $\textrm{A} \textbf{x} \le \textbf{b}.$ We say an inequality $\langle \textbf{a}_i, \textbf{x} \rangle \le b_i$ in the system is \emph{tight} for $Q$ if equality is attained at some point of $Q.$ If all the inequalities in the system are tight, we say $\textrm{A} \textbf{x} \le \textbf{b}$ is a \emph{tight} representation for $Q.$ \end{definition} It is easy to see that $\textrm{A} \textbf{x} \le \textbf{b}$ being a tight representation for $Q$ is a consequence of condition \eqref{item:nopass} of Definition \ref{defn:deform0}, and thus is a necessary condition for $\textbf{b}$ to be a deforming vector. With this concept of tight representations, we can use our knowledge of the deformation cone to verify whether a polytope $Q$ is a deformation of $P_0.$ \begin{lemma}\label{lem:checkdeform} Suppose $P_0$ is as described in Setup \ref{setup1}, and $Q$ is defined by a tight representation $\textrm{A} \textbf{x} \le \textbf{b}.$ Then $Q$ is a deformation of $P_0$ if and only if $\textbf{b} \in \operatorname{Def}(P_0).$ \end{lemma} \begin{proof} We only need to show the forward implication, as the backward one is obvious. Suppose $Q$ is a deformation of $P_0.$ Then there exists $\textbf{b}' \in \operatorname{Def}(P_0)$ such that $Q$ is defined by $\textrm{A} \textbf{x} \le \textbf{b}',$ which is a tight representation as well. By the definition of tightness, we have \[ b_i = \max_{\textbf{x} \in Q} \langle \textbf{a}_i, \textbf{x} \rangle = b_i', \quad \forall 1 \le i \le m.
\] Hence, $\textbf{b} = \textbf{b}' \in \operatorname{Def}(P_0).$ \end{proof} \section{Generalized permutohedra and Braid Fan} \label{sec:GP} In the following two sections, we work over the vector space $V_d = \{ \textbf{x} \in \mathbb{R}^{d+1} \ : \langle {\bf{1}}, \textbf{x}\rangle = 0\} \subset \mathbb{R}^{d+1}$ and its dual space $W_d = \mathbb{R}^{d+1}/ {\bf{1}}$, where $ {\bf{1}} = (1, 1, \dots, 1)$ denotes the all-one vector in $\mathbb{R}^{d+1}.$ Note that the standard basis $\{\textbf{e}_1,\cdots,\textbf{e}_{d+1}\}$ of $\mathbb{R}^{d+1}$ is a canonical spanning set for $W_d$, although it is not a basis. The goal of this section is to apply the techniques introduced in Section \ref{sec:prel} to give a new combinatorial proof of Theorem \ref{thm:submodular} by determining the deformation cone of the Braid fan. We also state and prove Theorem \ref{thm:polymatroid}, which gives the connection between polymatroids and generalized permutohedra. We start by introducing the fan of interest in this section. \begin{definition} For any $\pi\in {\mathfrak{S}}_{d+1}$ we define a cone in $W_d$ as follows: \[ C(\pi):=\{\textbf{x}\in W_d \ :\ x_{\pi^{-1}(1)}<x_{\pi^{-1}(2)}<\cdots<x_{\pi^{-1}(d+1)}\}. \] \end{definition} One checks that $C(\pi)$ is well-defined: if $(x_1, \dots, x_{d+1})$ and $(y_1, \dots, y_{d+1})$ represent the same element of $W_d$, that is, there exists $k \in \mathbb{R}$ such that $y_i = x_i +k$ for each $i,$ then \[ x_{\pi^{-1}(1)}<x_{\pi^{-1}(2)}<\cdots<x_{\pi^{-1}(d+1)} \text{ if and only if } y_{\pi^{-1}(1)} <y_{\pi^{-1}(2)} <\cdots<y_{\pi^{-1}(d+1)}.\] Also, for any two distinct $\pi_1,\pi_2 \in {\mathfrak{S}}_{d+1},$ the cones $C(\pi_1)$ and $C(\pi_2)$ are disjoint. Each region $C(\pi)$ is an open polyhedral cone. Its closure, denoted by $\sigma(\pi)$, is obtained from $C(\pi)$ by relaxing the strict inequalities. \begin{definition} We call the collection of cones $\{\sigma(\pi): \pi\in {\mathfrak{S}}_{d+1}\}$, together with all of their faces, the \emph{Braid fan}, denoted by $\textrm{Br}_d.$ \end{definition} It is straightforward to show that $\textrm{Br}_d$ is a complete fan in $W_d.$ However, we will prove this fact by showing that $\textrm{Br}_d$ is the normal fan of a family of polytopes in Proposition \ref{prop:fanofusual} below. A similar idea will be used in the next section, and we simply present it this way in preparation for later discussions. We next formally introduce generalized permutohedra. Given a strictly increasing sequence $ {\bm \alpha}= (\alpha_1,\alpha_2,\cdots,\alpha_{d+1}) \in \mathbb{R}^{d+1}$, for any $\pi\in {\mathfrak{S}}_{d+1}$, we use the following notation: \[ v_\pi^ {\bm \alpha} := \left(\alpha_{\pi(1)},\alpha_{\pi(2)},\cdots, \alpha_{\pi({d+1})}\right) = \sum_{i=1}^{d+1} \alpha_{i} \textbf{e}_{\pi^{-1}(i)}.\] Then we define the \emph{usual permutohedron} \begin{equation*} \operatorname{Perm}( {\bm \alpha}) := \textrm{conv}\left(v_\pi^{ {\bm \alpha}}:\quad \pi\in {\mathfrak{S}}_{d+1}\right). \label{equ:defnusual} \end{equation*} In particular, if $ {\bm \alpha} = (1, 2, \dots, {d+1}),$ we obtain the \emph{regular permutohedron}, denoted by $\Pi_{d},$ \[ \Pi_{d} := \operatorname{Perm} (1, 2, \dots, d+1).\] Note that the above definition for $\operatorname{Perm}( {\bm \alpha})$ does not directly say that the vertex set of $\operatorname{Perm}( {\bm \alpha})$ is $\{ v_\pi^ {\bm \alpha} : \ \pi \in {\mathfrak{S}}_{d+1}\}$. However, this is true, as we will see in Proposition \ref{prop:fanofusual} below.
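For instance, one can verify this numerically for small $d$. The following minimal Python sketch (ours, purely illustrative) checks, for $d=2$ and ${\bm \alpha}=(1,2,3),$ that for each $\pi$ a representative point $\textbf{w}$ of $C(\pi)$ selects $v_\pi^{\bm \alpha}$ as the unique maximizer of $\langle \textbf{w}, \cdot \rangle$ among the six points $v_{\pi'}^{\bm \alpha}$; this is the content of the proposition below.

\begin{verbatim}
from itertools import permutations

d = 2
alpha = [1, 2, 3]                       # strictly increasing
perms = list(permutations(range(1, d + 2)))
# coordinate j of v_pi^alpha is alpha_{pi(j)}
vert = {p: tuple(alpha[p[j] - 1] for j in range(d + 1)) for p in perms}

def rep(p):
    # a representative of C(pi): the point whose coordinate
    # pi^{-1}(i) equals i, i.e., w_j = pi(j)
    return [p[j] for j in range(d + 1)]

ok = all(max(perms,
             key=lambda q: sum(w * x for w, x in zip(rep(p), vert[q])))
         == p for p in perms)
print(ok)                               # prints: True
\end{verbatim}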
Recall that \emph{generalized permutohedra} are polytopes obtained from usual permutohedra by moving vertices while preserving all edge directions. We see that any generalized permutohedron in $\mathbb{R}^{d+1}$ lies in an affine space that is parallel to $V_d.$ However, under the setup of our article, we would like to only consider polytopes that are in $V_d.$ Thus, we give the following definition. \begin{definition}\label{defn:central} Let $P \subset \mathbb{R}^{d+1}$ be a polytope that lies in an affine space $V'$ parallel to $V_d.$ It is clear that $V' =\{ \textbf{x} \in \mathbb{R}^{d+1} : \langle {\bf{1}}, \textbf{x} \rangle = N\}$ for some (unique) $N \in \mathbb{R},$ and then $V' = \frac{N}{d+1} {\bf{1}} + V_d.$ We define $\widetilde{P} := P - \frac{N}{d+1} {\bf{1}}$ to be the \emph{centralized} version of the polytope $P,$ which lies in $V_d.$ \end{definition} \begin{example}\label{ex:crp} The regular permutohedron $\Pi_d$ lies in the affine space $V' = \Big\{ \textbf{x} \in \mathbb{R}^{d+1} \ : \ \langle {\bf{1}}, \textbf{x} \rangle = \frac{(d+2)(d+1)}{2}\Big\}.$ Hence, the \emph{centralized regular permutohedron} is \[ \widetilde{\Pi_d} = \Pi_d - \frac{d+2}{2} {\bf{1}} = \operatorname{Perm}\left( -\frac{d}{2}, -\frac{d-2}{2}, \dots, \frac{d-2}{2}, \frac{d}{2} \right) \subset V_d.\] \end{example} We have the following two results relating generalized/usual permutohedra and $\textrm{Br}_d.$ \begin{proposition}[Proposition 2.6 in \cite{post}]\label{prop:fanofusual} If $ {\bm \alpha}=(\alpha_1,\dots, \alpha_{d+1})$ is strictly increasing, then for each $\pi \in {\mathfrak{S}}_{d+1},$ the point $v_\pi^ {\bm \alpha}$ is a vertex of $\operatorname{Perm}( {\bm \alpha}),$ and the normal cone of $\operatorname{Perm}( {\bm \alpha})$ at $v_{\pi}^ {\bm \alpha}$ is $\sigma(\pi).$ Therefore, the Braid fan $\textrm{Br}_d$ is the normal fan of the usual permutohedron $\operatorname{Perm}( {\bm \alpha}).$ Hence, $\textrm{Br}_d$ is a complete projective fan in $W_d$. \end{proposition} \begin{proposition}[Proposition 3.2 in \cite{PosReiWil}] \label{prop:coarser} A polytope $P$ in $V_d$ is a (centralized) generalized permutohedron if and only if its normal fan $\Sigma(P)$ is refined by the braid arrangement fan $\textrm{Br}_d$. \end{proposition} We include a proof of Proposition \ref{prop:fanofusual}, which is relevant to the discussion in Section \ref{sec:NBF}. The following elementary result is useful (see \cite[Theorem 368]{inequalities}). \begin{lemma}[Rearrangement Inequality]\label{lem:rearr} Suppose $x_1 \le x_2 \le \cdots \le x_n$ and $y_1 \le y_2 \le \cdots \le y_n.$ Then for any $\pi \in {\mathfrak{S}}_n,$ we have \[ \sum_{i=1}^n x_i y_i \ge \sum_{i=1}^n x_i y_{\pi(i)}.\] Furthermore, if $x_1 < x_2 < \cdots < x_n$ and $y_1 < y_2 < \cdots < y_n$, then equality only holds when $\pi$ is the identity permutation. \end{lemma} Recall the definition of normal cones from Definition \ref{defn:normal}. \begin{proof}[Proof of Proposition \ref{prop:fanofusual}] Let $\textbf{w} \in C(\pi).$ For convenience, we let $u_i = w_{\pi^{-1}(i)}$ so that $\textbf{w}$ can be expressed as \begin{equation}\label{equ:wexp0} \textbf{w} = \sum_{i=1}^{d+1} u_{i}\textbf{e}_{\pi^{-1}(i)}.
\end{equation} Then $\textbf{w} \in C(\pi)$ means that \[ u_{1} < u_{2} < \cdots < u_{d+1}.\] It then follows from Lemma \ref{lem:rearr} that $\langle \textbf{w}, v_\pi^ {\bm \alpha} \rangle > \langle \textbf{w}, v_{\pi'}^ {\bm \alpha} \rangle$ for any $\pi \neq \pi' \in {\mathfrak{S}}_{d+1}.$ Hence, $v_\pi^ {\bm \alpha}$ does not lie in $\textrm{conv}( v_{\pi'}^ {\bm \alpha} : \ \pi \neq \pi' \in {\mathfrak{S}}_{d+1});$ so $v_\pi^ {\bm \alpha}$ is a vertex of $\operatorname{Perm}( {\bm \alpha})$. Furthermore, we must have that $\textbf{w} \in \operatorname{ncone}(v_\pi^ {\bm \alpha}, \operatorname{Perm}( {\bm \alpha})).$ This implies \begin{equation} \label{equ:inclusion1} \sigma(\pi) \subseteq \operatorname{ncone}(v_\pi^ {\bm \alpha}, \operatorname{Perm}( {\bm \alpha})). \end{equation} However, the union of the $\sigma(\pi)$'s is the entire space $W_d,$ so equality must hold in \eqref{equ:inclusion1}. Thus, the conclusion follows. \end{proof} It follows from Propositions \ref{prop:fanofusual} and \ref{prop:coarser} that the deformation cone $\operatorname{Def}(\textrm{Br}_d)$ of $\textrm{Br}_d$ is the same as the deformation cone $\operatorname{Def}\left( \widetilde{\Pi_d} \right)$ of $\widetilde{\Pi_d}$, which gives a characterization of (centralized) generalized permutohedra. The combinatorics of the Braid fan $\textrm{Br}_d$ are equivalent to those of the face lattice of $\widetilde{\Pi_d}$, which are well-studied in the literature \cite[Chapter VI, Proposition 2.2]{barvinokconvex}. We summarize the relevant results in terms of the Braid fan $\textrm{Br}_d$ in the proposition below. Recall that the \emph{Boolean algebra} ${\mathcal B}_{d+1}$ is the poset of all subsets of $[d+1]$ ordered by containment. This poset has a minimum element $\hat{0}=\emptyset$ and a maximum element $\hat{1}=[d+1]$. We denote by $\overline{{\mathcal B}_{d+1}}$ the poset obtained from ${\mathcal B}_{d+1}$ by removing the maximum and minimum elements. For each element $S \in {\mathcal B}_{d+1}$, define \[ \displaystyle \textbf{e}_S := \sum_{i \in S} \textbf{e}_i. \] \begin{proposition}\label{prop:charbr} The rays, i.e., $1$-dimensional cones, of the Braid fan $\textrm{Br}_d$ are given by $\textbf{e}_S$ for all $S \in \overline{{\mathcal B}_{d+1}}$. Furthermore, a $k$-set of rays $\{\textbf{e}_{S_1},\cdots, \textbf{e}_{S_k}\}$ spans a $k$-dimensional cone in $\textrm{Br}_d$ if and only if the sets $S_1,\dots, S_k$ form a $k$-chain in $\overline{{\mathcal B}_{d+1}}$. In particular, the maximal cones in $\textrm{Br}_d$ are in bijection with the maximal chains in $\overline{{\mathcal B}_{d+1}}.$ Hence, $\textrm{Br}_d$ is simplicial. \end{proposition} As the one-dimensional cones are indexed by elements of $\overline{{\mathcal B}_{d+1}},$ the deformation cone of $\textrm{Br}_d$ can be considered as a cone in $\mathbb{R}^{\overline{{\mathcal B}_{d+1}}},$ with coordinates indexed by the nonempty proper subsets $S$ of $[d+1].$
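As a quick sanity check on Proposition \ref{prop:charbr} (a small computational sketch of ours, verifying nothing beyond the counts it prints), one can enumerate the maximal chains of $\overline{{\mathcal B}_{d+1}}$ by brute force and confirm that their number equals $(d+1)! = |{\mathfrak{S}}_{d+1}|,$ the number of maximal cones $\sigma(\pi)$ of $\textrm{Br}_d:$

\begin{verbatim}
from math import factorial

d = 3
n = d + 1
# grow chains of subsets of [n] one element at a time,
# starting from the empty set
chains = [(frozenset(),)]
for _ in range(n - 1):
    chains = [c + (c[-1] | {x},)
              for c in chains
              for x in range(1, n + 1) if x not in c[-1]]
# dropping the initial empty set leaves a maximal chain in the
# truncated Boolean algebra: one subset of each size 1, ..., d
chains = {c[1:] for c in chains}
print(len(chains), factorial(n))    # prints: 24 24
\end{verbatim}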
\begin{theorem}\label{thm:centralsub} The deformation cone of the Braid fan (or centralized regular permutohedron) is the collection of $\textbf{b} \in \mathbb{R}^{\overline{{\mathcal B}_{d+1}}}$ satisfying the following \emph{submodular condition} on ${\mathcal B}_{d+1}:$ \begin{equation}\label{equ:bsub} b_{S \cup T} + b_{S \cap T} \le b_S + b_T, \quad \forall S, T \in {{\mathcal B}_{d+1}}, \end{equation} where by convention we let $b_\emptyset = b_{[d+1]} = 0.$ \end{theorem} \begin{figure}[h] \begin{tikzpicture} \begin{scope}[scale=0.8] \draw [black,fill] (0,0) circle [radius = 0.08]; \draw[black, fill] (0,-1) circle [radius=0.08]; \draw[black, fill] (1,1) circle [radius=0.08]; \draw[black, fill] (-1,1) circle [radius=0.08]; \draw[black, fill] (0,2) circle [radius=0.08]; \draw[black, fill] (0,3) circle [radius=0.08]; \draw (0,-1) -- (0,0) -- (1,1) -- (0,2); \draw (0,0) -- (-1,1) -- (0,2)--(0,3); \node[above] at (0,3.2) {\vdots}; \node[below] at (0,-1) {\vdots}; \node[below right] at (0,-1) {$S_{i-2}$}; \node[below right] at (0,0) {$S_{i-1}$}; \node[right] at (1,1) {$S_i'$}; \node[left] at (-1,1) {$S_{i}$}; \node[right] at (0,2) {$S_{i+1}$}; \node[right] at (0,3) {$S_{i+2}$}; \end{scope} \end{tikzpicture} \caption{A diamond in the Boolean algebra.} \label{fig:poset} \end{figure} \begin{proof} We may add $\emptyset$ and $[d+1]$ back to $\overline{{\mathcal B}_{d+1}}$ and say that the maximal cones in $\textrm{Br}_d$ are in bijection with maximal chains in ${\mathcal B}_{d+1}.$ Then any pair of adjacent maximal cones in $\textrm{Br}_d$ corresponds to a pair of maximal chains in ${\mathcal B}_{d+1}$ that only differ at a non-extreme element, and all pairs of adjacent maximal cones arise this way. One sees that any such pair of maximal chains in ${\mathcal B}_{d+1}$ always forms a ``diamond'' shape as shown in Figure \ref{fig:poset}. Suppose we have a pair of maximal chains as shown in Figure \ref{fig:poset}. Then if we let $a = S_i \setminus S_{i-1}$ and $b = S_i' \setminus S_{i-1},$ we must have that $S_{i+1} = S_{i-1} \cup \{a, b\}.$ Therefore, \begin{equation}\label{equ:eesum} \textbf{e}_{S_{i+1}} + \textbf{e}_{S_{i-1}} = \textbf{e}_{S_{i}} + \textbf{e}_{S_i'}, \end{equation} which is precisely a relation of the form \eqref{equ:edgeeq} assumed in Lemma \ref{lem:edgeeq}. (Note that if $i=1,$ then $\textbf{e}_{S_{i-1}} = \textbf{e}_\emptyset=0$, and if $i=d,$ then $\textbf{e}_{S_{i+1}} = \textbf{e}_{[d+1]} = {\bf{1}} = 0$ in $W_d.$ In both cases, \eqref{equ:eesum} is the expression that we need.) It follows from Proposition \ref{prop:reduxfan} that the corresponding pair of adjacent maximal cones gives us the following inequality: \[ b_{S_{i+1}} + b_{S_{i-1}} \le b_{S_{i}} + b_{S_i'}.\] Going through all pairs of adjacent maximal cones, we see that $\operatorname{Def}(\textrm{Br}_d)$ is defined by the following collection of inequalities: \begin{equation}\label{equ:bsubdiamond} b_{S \cup \{a,b\}} + b_{S} \le b_{S \cup \{a\}} + b_{S \cup \{b\}}, \text{ for all $S \subseteq [d+1]$ and distinct $a, b \in [d+1]\setminus S.$} \end{equation} Finally, each inequality given by \eqref{equ:bsub} follows from the above set of inequalities by induction on the difference $|S \cup T| - |S \cap T|.$ \end{proof} \begin{remark}\label{rem:equivsub} We see from the proof of Theorem \ref{thm:centralsub} that the submodular condition \eqref{equ:bsub} is equivalent to the ``diamond'' submodular condition \eqref{equ:bsubdiamond}. 
This is a standard result on submodular functions, and it will be used again in Section \ref{sec:NBF}. \end{remark} Note that points in $\mathbb{R}^{{\mathcal B}_{d+1}}$ can be considered as set functions from $2^{[d+1]}$ to $\mathbb{R}.$ We now restate the Submodular Theorem with more details and prove it. \begin{theorem}[Submodular Theorem, restated] \label{thm:submodrestate} For each submodular function $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ satisfying $\textbf{b}_\emptyset=0,$ the linear system \begin{equation} \label{equ:linear} \left\langle \textbf{e}_{[d+1]}, \textbf{x} \right\rangle = \left\langle {\bf{1}}, \textbf{x} \right\rangle \ = \ b_{[d+1]}, \quad \text{and} \quad \left\langle \textbf{e}_S, \textbf{x} \right\rangle \ \le \ b_S, \quad \forall \emptyset \neq S \subsetneq [d+1] \end{equation} defines a generalized permutohedron in $\mathbb{R}^{d+1},$ and any generalized permutohedron arises this way uniquely. Furthermore, if a polytope $P \subset \mathbb{R}^{d+1}$ is defined by a tight representation \eqref{equ:linear}, then $P$ is a generalized permutohedron if and only if $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ is a submodular function satisfying $\textbf{b}_\emptyset=0.$ \end{theorem} \begin{proof} It follows from Theorem \ref{thm:centralsub}, Proposition \ref{prop:coarser} and the description of the rays of $\textrm{Br}_d$ in Proposition \ref{prop:charbr} that the one-to-one correspondence described by the theorem holds between centralized generalized permutohedra and submodular functions $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ satisfying $\textbf{b}_\emptyset=0$ and $\textbf{b}_{[d+1]} = 0.$ Suppose $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ is a set function. Let $k = \frac{b_{[d+1]}}{d+1}$ and define a new vector/function $\textbf{b}'$ by \[ \textbf{b}'_S = \textbf{b}_S - k |S|, \quad \forall S \subseteq [d+1].\] Let $P$ and $Q$ be the polytopes defined by the linear system \eqref{equ:linear} with vectors $\textbf{b}$ and $\textbf{b}'$ respectively. It is straightforward to check that the following facts are true: \begin{enumerate} \item $\textbf{b}'_\emptyset = \textbf{b}_\emptyset$ and $\textbf{b}'_{[d+1]} =0.$ \item $\textbf{b}'$ is a submodular function if and only if $\textbf{b}$ is a submodular function. \item $Q =\widetilde{P} = P - k {\bf{1}}$ is the centralized version of $P.$ \end{enumerate} The first conclusion of the theorem follows from these facts and the arguments in the first paragraph. Finally, the second conclusion follows from Lemma \ref{lem:checkdeform} and the observation that $\textrm{A} \textbf{x} \le \textbf{b}$ is a tight representation for $P$ if and only if $\textrm{A} \textbf{x} \le \textbf{b}'$ is a tight representation for $Q.$ \end{proof} \begin{remark}\label{rem:edges} We remark that other than the Submodular Theorem and Proposition \ref{prop:coarser}, there is another characterization of generalized permutohedra in terms of edges: a polytope $P \subset \mathbb{R}^{d+1}$ is a generalized permutohedron if and only if all of its edge directions are of the form $\textbf{e}_i-\textbf{e}_j$ for $1 \le i < j \le d+1$. We briefly prove the forward implication of this statement, which will be used in the example we discuss below. 
Indeed, it follows from Proposition \ref{prop:coarser} that for each cone $\sigma$ of codimension $1$ in the normal fan $\Sigma(P)$ of a generalized permutohedron $P$, there exists a cone $\sigma'$ of codimension $1$ in $\textrm{Br}_d$ such that the $(d-1)$-dimensional linear space spanned by $\sigma$ is the same as the linear space spanned by $\sigma';$ hence the edge of $P$ associated with $\sigma$ is parallel to the edge of the regular permutohedron $\Pi_d$ associated with $\sigma'.$ It is straightforward to verify that all the edge directions of $\Pi_d$ are of the form $\textbf{e}_i-\textbf{e}_j.$ \end{remark} \begin{example}\label{ex:unexample} Consider the polytope $P$ in $\mathbb{R}^4$ defined by the linear system \eqref{equ:linear} with $b_{[4]} = 6$ and $b_S =3$ if $|S|=1$, $b_S=4$ if $|S|=2$, and $b_S = 6$ if $|S|=3.$ We see that \[ 8=4 + 4 = b_{\{1,2\}} + b_{\{2,3\}} < b_{\{1,2,3\}} + b_{\{2\}} = 6 + 3 = 9. \] So $\textbf{b}$ is not a submodular function. Since the given system is a tight representation for $P$, we conclude that $P$ is not a generalized permutohedron. Indeed, $P$ is the cube whose vertices are $(1,1,1,3),(0,2,2,2)$ and their permutations. The linear functional given by the vector $(1,2,3,4)$ attains its maximum at the vertices $(0,2,2,2)$ and $(1,1,1,3)$, but not at the other vertices. Thus, $(0,2,2,2)$ and $(1,1,1,3)$ form an edge whose direction is parallel to $(-1,1,1,-1)$, conflicting with the condition for being a generalized permutohedron given in Remark \ref{rem:edges}. \end{example} \subsection*{Polymatroids vs.\ generalized permutohedra}\label{subsec:polyvsperm} We finish this section by making the connection between polymatroids and generalized permutohedra. \begin{definition}\label{defn:rank} A \emph{polymatroid rank function} is a set function $r: 2^E \to \mathbb{R}$ on a finite set $E$ such that \begin{itemize} \item[(R1)] $0\leq r(A)$ for all $A\subseteq E$. (Nonnegativity condition) \item[(R2)] If $A_1\subseteq A_2\subseteq E$ then $r(A_1)\leq r(A_2)$. (Monotone condition) \item[(R3)] $r(A_1\cup A_2) + r(A_1\cap A_2)\leq r(A_1) + r(A_2)$ for all $A_1,A_2\subseteq E$. (Submodular condition) \end{itemize} \end{definition} Note that, compared with the rank function of a matroid, we are only lifting the restriction $r(A)\leq |A|$ (and allowing real instead of integer values). To be consistent with the notation used for generalized permutohedra, we may assume $E = [d+1]$ and $r = \textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}.$ \begin{definition} The \emph{base polymatroid} $P_\textbf{b}$ associated to a polymatroid rank function $\textbf{b}$ on $[d+1]$ is the polytope in $\mathbb{R}^{d+1}$ defined by the linear system \eqref{equ:linear}. \end{definition} It turns out that we may add the constraint $\textbf{b}_\emptyset = 0$ to the above definition and still get all the base polymatroids. \begin{lemma}\label{lem:reduce20} Let $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ be a polymatroid rank function. Define an associated vector $\textbf{b}'$ as follows: \[ \textbf{b}'_\emptyset = 0, \quad \text{and} \quad \textbf{b}'_S = \textbf{b}_S, \forall \emptyset \neq S \subseteq [d+1].\] Then $\textbf{b}'$ is a polymatroid rank function on $[d+1]$ and $P_\textbf{b} = P_{\textbf{b}'}.$ \end{lemma} The proof of the lemma is straightforward, so it is omitted. 
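Before relating base polymatroids to generalized permutohedra, we recall, in the form of a short sketch, the classical greedy description of the vertices of $P_\textbf{b}$, which goes back to Edmonds \cite{edmonds}. The Python code below is our own illustration; the function name \texttt{greedy\_vertex} and the dictionary encodings are our choices, and correctness assumes $\textbf{b}$ is submodular with $\textbf{b}_\emptyset = 0$.
\begin{verbatim}
def greedy_vertex(b, w):
    """Given a submodular b (a dict keyed by frozensets, with
    b[frozenset()] == 0) and a weight vector w (a dict keyed by the
    elements of the ground set), return the point of the base
    polymatroid P_b maximizing the linear functional w."""
    order = sorted(w, key=lambda e: w[e], reverse=True)
    x, prefix = {}, frozenset()
    for e in order:                        # process elements greedily
        x[e] = b[prefix | {e}] - b[prefix]
        prefix = prefix | {e}
    return x
\end{verbatim}
For a generic weight vector (all entries distinct), the output is the vertex of $P_\textbf{b}$ whose normal cone contains the weight vector; ranging over all $(d+1)!$ orderings recovers all vertices, in parallel with the vertices $v_\pi^{ {\bm \alpha}}$ of the usual permutohedron.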
\begin{theorem}\label{thm:polymatroid} The bijection asserted in Theorem \ref{thm:submodrestate} induces a bijection between base polymatroids of dimension at most $d$ and monotone submodular functions $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ satisfying $\textbf{b}_\emptyset=0.$ Moreover, every generalized permutohedron has a translation that is a base polymatroid. \end{theorem} \begin{proof} The first assertion follows easily from Lemma \ref{lem:reduce20}, Theorem \ref{thm:submodrestate}, and the observation that the nonnegativity condition (R1) follows from the monotone condition when we assume $\textbf{b}_\emptyset=0.$ We use ideas similar to those in the proof of Theorem \ref{thm:submodrestate} to prove the second statement. Suppose $P$ is a generalized permutohedron associated to the submodular function $\textbf{b} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ (where $\textbf{b}_\emptyset = 0$). For any $k \in \mathbb{R}$, we define a new vector/function $\textbf{b}^{(k)} \in \mathbb{R}^{{\mathcal B}_{d+1}}$ by \[ \textbf{b}^{(k)}_S = \textbf{b}_S + k |S|, \quad \forall S \subseteq [d+1].\] Then $\textbf{b}^{(k)}$ is a submodular function and the generalized permutohedron associated to $\textbf{b}^{(k)}$ is a translation of $P.$ Moreover, it is easy to see that for sufficiently large $k,$ the set function $\textbf{b}^{(k)}$ is monotone. Hence, the conclusion follows. \end{proof} \section{Nested Braid fan and nested permutohedra} \label{sec:NBF} The plan of this section is as follows: We will first introduce the nested Braid fan $\textrm{Br}_d^2$ as a refinement of the Braid fan, and construct a family of polytopes in $V_d$, called usual nested permutohedra, by giving an explicit description of their vertices. We then establish (in Proposition \ref{prop:fanofusual2}) the connection between these two new objects by showing that $\textrm{Br}_d^2$ is the normal fan of any usual nested permutohedron. After discussing the combinatorial structure of $\textrm{Br}_d^2$ (in Proposition \ref{prop:charbr2}), we give an inequality description for usual nested permutohedra (in Theorem \ref{thm:facetdes}). Lastly, we determine the deformation cones of $\textrm{Br}_d^2$ and nested permutohedra and give a result that is analogous to the Submodular Theorem (see Theorems \ref{thm:centralsub2} and \ref{thm:sub2}). Recall that $\{\textbf{e}_1,\cdots,\textbf{e}_{d+1}\}$ is the standard basis for $\mathbb{R}^{d+1}$. For any permutation $\pi \in {\mathfrak{S}}_{d+1},$ we define \[ \textbf{f}^\pi_i := \textbf{e}_{\pi^{-1}(i+1)} - \textbf{e}_{\pi^{-1}(i)}, \quad \forall 1 \le i \le d,\] and for any point $\textbf{x} =(x_1,\dots,x_{d+1})$ in $\mathbb{R}^{d+1}$ or $W_d$, we define \[ (\Delta \textbf{x})^\pi_i := x_{\pi^{-1}(i+1)} - x_{\pi^{-1}(i)}, \quad \forall 1 \le i \le d.\] \begin{definition}\label{defn:NBF} For each $(\pi,\tau)\in {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_{d}$, let $C(\pi,\tau)$ be the collection of vectors $\textbf{x} \in W_d$ satisfying: \begin{enumerate} \item $x_{\pi^{-1}(1)}<x_{\pi^{-1}(2)}<\cdots<x_{\pi^{-1}(d+1)}$, and \item $(\Delta \textbf{x})^\pi_{\tau^{-1}(1)}<(\Delta\textbf{x})^\pi_{\tau^{-1}(2)}<\cdots < (\Delta\textbf{x})^\pi_{\tau^{-1}(d)}$. 
(Note that this condition orders the first differences of the sequence $x_{\pi^{-1}(1)}, x_{\pi^{-1}(2)}, \dots, x_{\pi^{-1}(d+1)}$ according to the permutation $\tau.$) \end{enumerate} \end{definition} Similar to the cones $C(\pi)$ defined in the last section, one can check that $C(\pi,\tau)$ is well-defined, and each region $C(\pi,\tau)$ is an open polyhedral cone. Let $\sigma(\pi,\tau)$ be the closed polyhedral cone obtained from $C(\pi,\tau)$ by relaxing the strict inequalities. \begin{definition}\label{defn:NBF2} We call the collection of cones $\{\sigma(\pi,\tau): (\pi, \tau) \in {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_{d}\}$, together with all of their faces, the \emph{nested Braid fan}, denoted by $\textrm{Br}_d^2.$ \end{definition} \begin{example} Let $(\pi,\tau) = (3241, 231).$ Then $(\pi^{-1},\tau^{-1})=(4213,312).$ Thus, $C(\pi,\tau)$ is the collection of $\textbf{x} \in W_d$ satisfying \begin{enumerate} \item $x_4 < x_2 < x_1 < x_3$, and \item $x_3-x_1<x_2-x_4<x_1-x_2.$ \end{enumerate} \end{example} We will use an idea similar to the one presented in the last section to prove that $\textrm{Br}_d^2$ is a complete projective fan, by showing that it is the normal fan of a family of polytopes, which will be constructed below. We start by choosing two strictly increasing sequences \[ {\bm \alpha} = (\alpha_1, \alpha_2, \dots, \alpha_{d+1}) \in \mathbb{R}^{d+1} \quad \text{and} \quad {\bm \beta} = (\beta_1, \beta_2, \dots, \beta_{d}) \in \mathbb{R}^{d}.\] We then pick $M, N > 0.$ The basic idea of the construction is to take the $M$-th dilation of the usual permutohedron $\operatorname{Perm}( {\bm \alpha})$ and then replace each of its vertices with an $N$-th dilation of $\operatorname{Perm}( {\bm \beta})$ in a suitable coordinate system. This will give us $d! (d+1)!$ vertices. Below is the precise construction. For any $(\pi, \tau) \in {\mathfrak{S}}_{d+1} \times {\mathfrak{S}}_d$, we define \begin{equation}\label{equ:defnv} v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)} := M \sum_{i=1}^{d+1} \alpha_i \textbf{e}_{\pi^{-1}(i)} + N \sum_{i=1}^d \beta_i \textbf{f}_{\tau^{-1}(i)}^\pi. \end{equation} (Note that $\sum_{i=1}^{d+1} \alpha_i \textbf{e}_{\pi^{-1}(i)} = v_\pi^ {\bm \alpha}$ is a vertex of $\operatorname{Perm}( {\bm \alpha}).$) We omit $( {\bm \alpha}, {\bm \beta})$ from the superscript, and only write $v_{\pi,\tau}^{(M,N)}$, if $( {\bm \alpha}, {\bm \beta}) = \left( (1,2,\dots,d+1), (1,2,\dots, d) \right).$ After rearranging coordinates, we get the following expression: \begin{equation} v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)} = \sum_{i=1}^{d+1} (M \alpha_i + N(\beta_{\tau(i-1)} - \beta_{\tau(i)})) \ \textbf{e}_{\pi^{-1}(i)}, \label{equ:expansion} \end{equation} where by convention we let $\beta_{\tau(0)} = \beta_{\tau(d+1)} =0.$ We would like to have the coefficients of $\textbf{e}_{\pi^{-1}(i)}$ in the above expansion increase strictly as $i$ increases, for any $(\pi,\tau)$. If this happens, we say $(M,N) \in \mathbb{R}_{>0}^2$ is an \emph{appropriate choice} for $( {\bm \alpha}, {\bm \beta})$. It is not hard to see that for fixed $( {\bm \alpha}, {\bm \beta})$, any pair $(M,N)$ satisfying $M \gg N$ is an appropriate choice. \begin{definition}\label{defn:usual2} Suppose $( {\bm \alpha}, {\bm \beta}) \in \mathbb{R}^{d+1} \times \mathbb{R}^d$ is a pair of strictly increasing sequences and $(M,N) \in \mathbb{R}_{>0}^2$ is an appropriate choice for $( {\bm \alpha}, {\bm \beta})$. 
We define the \emph{usual nested permutohedron} \begin{equation} \operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M,N) := \textrm{conv}\left(v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)}:\quad (\pi,\tau) \in {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_d\right). \label{equ:defnusual2} \end{equation} In particular, if $ {\bm \alpha} = (1, 2,\dots, d+1)$ and $ {\bm \beta}=(1,2,\dots, d),$ we call the polytope a \emph{regular nested permutohedron}, denoted by $\Pi_d^2(M,N).$ (So $v_{\pi,\tau}^{(M,N)}$ are vertices of $\Pi_d^2(M,N).$) \end{definition} We remark that, similar to the definition of $\operatorname{Perm}( {\bm \alpha})$, the above definition does not directly say that each $v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)}$ is a vertex of $\operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M,N).$ However, this will be shown to be true in Proposition \ref{prop:fanofusual2} below. One sees that $\operatorname{Perm}( {\bm \alpha}, {\bm \beta};M,N)$ lies in the hyperplane $\sum_{i=1}^{d+1} x_i = M\sum_{i=1}^{d+1} \alpha_i$, which is a translation of $V_d,$ and $\operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M, N)$ is centralized if and only if $\sum_{i=1}^{d+1} \alpha_i=0,$ which is the situation we will focus on. \begin{example}\label{ex:nestedperm} One can show that $(M,N)=(4,1)$ is an appropriate choice for $( {\bm \alpha}=(1,2,3,4), {\bm \beta}=(1,2,3))$. Thus, $\Pi_3^2(4,1)$ is a regular nested permutohedron. See Figure \ref{fig:2polytopes} for a picture of it together with a picture of the regular permutohedron $\Pi_3$ for comparison. Let $(\pi,\tau)=(3241,231)$. Then $(\pi^{-1}, \tau^{-1})=(4213, 312)$. Thus, the vertex of $\Pi_3^2(4,1)$ associated to $(3241,231)$ is \[ v_{3241,231}^{(4,1)}= 4(1\textbf{e}_4+2\textbf{e}_2+3\textbf{e}_1+4\textbf{e}_3) + 1(1(\textbf{e}_3-\textbf{e}_1)+2(\textbf{e}_2-\textbf{e}_4)+3(\textbf{e}_1-\textbf{e}_2)) = (14,7,17,2). \] We can compute all vertices of $\Pi_3^2(4,1)$ this way, and they are \[ (3,7,11,19),(2,9,10,19),(1,10,11,18),(1,9,13,17),(2,7,14,17),(3,6,13,18), \] and all of their permutations. \end{example} \begin{proposition}\label{prop:fanofusual2} Suppose $( {\bm \alpha}, {\bm \beta}) \in \mathbb{R}^{d+1} \times \mathbb{R}^d$ is a pair of strictly increasing sequences and $(M,N) \in \mathbb{R}_{>0}^2$ is an appropriate choice for $( {\bm \alpha}, {\bm \beta})$. Then for each $(\pi,\tau) \in {\mathfrak{S}}_{d+1} \times {\mathfrak{S}}_d,$ the point $v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)}$ is a vertex of $\operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M,N),$ and the normal cone of $\operatorname{Perm}( {\bm \alpha}, {\bm \beta};M,N)$ at $v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)}$ is $\sigma(\pi,\tau).$ Therefore, the nested Braid fan $\textrm{Br}_d^2$ is the normal fan of the usual nested permutohedron $\operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M, N).$ Hence, $\textrm{Br}_d^2$ is a complete projective fan in $W_d$. \end{proposition} \begin{proof} Similar to the proof of Proposition \ref{prop:fanofusual}, it is enough to show that for any $\textbf{w} \in C(\pi, \tau)$ (assuming $(\pi,\tau)$ is fixed), \begin{equation} \label{equ:strictineq} \left\langle \textbf{w}, v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle > \left\langle \textbf{w}, v_{\pi',\tau'}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle, \quad \forall (\pi',\tau') \neq (\pi,\tau) \in {\mathfrak{S}}_{d+1} \times {\mathfrak{S}}_d. 
\end{equation} We will prove the above inequality by introducing an intermediate product and showing \begin{equation}\label{equ:strictineq1} \left\langle \textbf{w}, v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle > \left\langle \textbf{w}, v_{\pi,\tau'}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle > \left\langle \textbf{w}, v_{\pi',\tau'}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle. \end{equation} (When $\tau = \tau'$ the first comparison degenerates to an equality, and when $\pi = \pi'$ the second does; since $(\pi',\tau') \neq (\pi,\tau),$ at least one of the two is strict, which is all we need for \eqref{equ:strictineq}.) As before, we let $u_i = w_{\pi^{-1}(i)}$ for each $i$ and express $\textbf{w}$ as in \eqref{equ:wexp0}. Then $\textbf{w} \in C(\pi, \tau)$ means that \begin{enumerate} \item $u_1 < u_2 < \dots < u_{d+1}$, and \item $u_{\tau^{-1}(1)+1} - u_{\tau^{-1}(1)} < u_{\tau^{-1}(2)+1}-u_{\tau^{-1}(2)} < \cdots < u_{\tau^{-1}(d)+1} - u_{\tau^{-1}(d)}.$ \end{enumerate} Expression \eqref{equ:wexp0}, together with \eqref{equ:expansion}, allows us to compute the products in \eqref{equ:strictineq} easily. The second inequality in \eqref{equ:strictineq1} then follows from the Rearrangement Inequality (Lemma \ref{lem:rearr}), condition (1) above and the fact that $(M,N)$ is an appropriate choice. Next, we see that the first inequality in \eqref{equ:strictineq1} holds if and only if \[\sum_{i=1}^{d+1} u_i \left(\beta_{\tau(i-1)} - \beta_{\tau(i)}\right) > \sum_{i=1}^{d+1} u_i \left(\beta_{\tau'(i-1)} - \beta_{\tau'(i)}\right).\] After rearranging summations, the above inequality becomes \[ \sum_{j=1}^d \beta_j \left(u_{\tau^{-1}(j)+1} - u_{\tau^{-1}(j)}\right) >\sum_{j=1}^d \beta_j \left(u_{(\tau')^{-1}(j)+1} - u_{(\tau')^{-1}(j)}\right),\] which follows from the Rearrangement Inequality, condition (2) above and the fact that $ {\bm \beta}$ is strictly increasing. \end{proof} Proposition \ref{prop:fanofusual2} provides one natural way to define generalized nested permutohedra. \begin{definition}\label{defn:gen2} A polytope in $V_d$ (or in an affine plane that is a translation of $V_d$) is a \emph{generalized nested permutohedron} if its normal fan is a coarsening of $\textrm{Br}_d^2.$ \end{definition} \subsection*{Combinatorics of $\textrm{Br}_d^2$} Our next goal is to determine the combinatorics of the fan $\textrm{Br}^2_d$, which are equivalent to those of the face lattice of $\Pi_d^2(M,N)$. The following poset arises naturally in our discussion. \begin{definition}\label{defn:osp} An \emph{ordered (set) partition} of $[d+1]$ is an ordered tuple of nonempty, pairwise disjoint subsets whose union is $[d+1]$, i.e., ${\mathcal T}=(S_1,\cdots,S_k)$ with $S_i\subset [d+1]$ for all $1\leq i\leq k$ and $S_1\sqcup\cdots \sqcup S_k = [d+1]$. The \emph{ordered (set) partition poset}, denoted by ${\mathcal O}_{d+1}$, is the poset on all ordered set partitions of $[d+1]$ ordered by refinement. This is a ranked poset of rank $d.$ It has a maximum, the trivial partition $\hat{1}=([d+1])$, but does not have a minimum. It has $(d+1)!$ minimal elements, one for each permutation $\pi \in {\mathfrak{S}}_{d+1}$ considered as an ordered set partition into singletons: \[ {\mathcal T}(\pi) := ( \{\pi^{-1}(1)\}, \{\pi^{-1}(2)\}, \dots, \{\pi^{-1}(d+1)\}).\] We denote by $\overline{{\mathcal O}_{d+1}}$ the poset obtained from ${\mathcal O}_{d+1}$ by removing the maximum element. \end{definition} \begin{remark} We are going to write ordered set partitions by using numbers separated by bars. For instance, the ordered partition ${\mathcal T}=(\{3,4\},\{1,5\},\{2,6,7\})$ will be written as $34|15|267$. It is important to keep in mind that the numbers between bars form a set, hence their order is irrelevant. 
For example, we have ${\mathcal T}(3721456) = 4|3|1|5|6|7|2 \le 34|15|267$. \end{remark} Recall that we defined $\textbf{e}_S$ for each $S \in {\mathcal B}_{d+1}.$ For each element ${\mathcal T} = (S_1,\cdots,S_k) \in {\mathcal O}_{d+1}$, we define \begin{equation} \textbf{e}_{{\mathcal T}} := \sum_{i} i \textbf{e}_{S_i}. \label{equ:defneT} \end{equation} For instance, if ${\mathcal T}=34|15|267$, then $\textbf{e}_{{\mathcal T}} = 1 \cdot \textbf{e}_{34} + 2 \cdot \textbf{e}_{15} + 3 \cdot \textbf{e}_{267} = (2,3,1,1,2,3,3).$ We have the following result that is analogous to Proposition \ref{prop:charbr}. \begin{proposition}\label{prop:charbr2} The rays, i.e., $1$-dimensional cones, of the nested Braid fan $\textrm{Br}_d^2$ are given by $\textbf{e}_{\mathcal T}$ for all ${\mathcal T} \in \overline{{\mathcal O}_{d+1}}$. Furthermore, a $k$-set of rays $\{\textbf{e}_{{\mathcal T}_1},\cdots, \textbf{e}_{{\mathcal T}_k}\}$ spans a $k$-dimensional cone in $\textrm{Br}_d^2$ if and only if the ordered set partitions ${\mathcal T}_1,\dots, {\mathcal T}_k$ form a $k$-chain in $\overline{{\mathcal O}_{d+1}}$. In particular, the maximal cones in $\textrm{Br}_d^2$ are in bijection with the maximal chains in $\overline{{\mathcal O}_{d+1}}.$ Hence, $\textrm{Br}_d^2$ is simplicial. \end{proposition} As each maximal cone $\sigma(\pi,\tau)$ of $\textrm{Br}_d^2$ is indexed by $(\pi,\tau) \in {\mathfrak{S}}_{d+1} \times {\mathfrak{S}}_d$, and maximal chains in $\overline{{\mathcal O}_{d+1}}$ are obtained from maximal chains in ${\mathcal O}_{d+1}$ by removing the top element, we will prove the above proposition by providing a bijection between pairs $(\pi,\tau) \in {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_d$ and maximal chains in ${\mathcal O}_{d+1}.$ We first observe that the rank-$0$ element ${\mathcal T}(\pi) = \pi^{-1}(1)|\pi^{-1}(2)|\cdots|\pi^{-1}(d+1)$ contains $d$ bars, and any element of rank $r$ in the interval $[{\mathcal T}(\pi), \hat{1}]$ can be obtained from ${\mathcal T}(\pi)$ by removing an $r$-subset of the $d$ bars. Conversely, removing any $r$-subset of the bars produces an element of rank $r$ in this interval. This gives the following lemma. \begin{lemma}\label{lem:localbool} For each $\pi \in {\mathfrak{S}}_{d+1},$ the interval $[{\mathcal T}(\pi), \hat{1}]$ is isomorphic to the Boolean algebra ${\mathcal B}_{d}.$ Hence, the poset ${\mathcal O}_{d+1}$ is \emph{locally Boolean}, i.e., all of its intervals are Boolean algebras. \end{lemma} Moreover, the discussion above provides a natural way to construct the desired bijection for the proof of Proposition \ref{prop:charbr2}. \begin{notation} We represent each $(\pi,\tau)$ with the following diagram, denoted by ${\mathcal D}(\pi,\tau)$: \[ \pi^{-1}(1)\stackrel{\tau(1)}{|} \pi^{-1}(2) \stackrel{\tau(2)}{|} \cdots \stackrel{\tau(d-1)}{|} \pi^{-1}(d) \stackrel{\tau(d)}{|}\pi^{-1}(d+1). \] \end{notation} \begin{definition} For $(\pi,\tau) \in {\mathfrak{S}}_{d+1} \times {\mathfrak{S}}_d,$ we define $\operatorname{ch}(\pi,\tau)$ to be the unique maximal chain in $[{\mathcal T}(\pi), \hat{1}]$ that is obtained in the following way: \begin{enumerate} \item Let ${\mathcal D}(\pi,\tau; 0) = {\mathcal D}(\pi,\tau)$. 
\item For each $1 \le r \le d,$ we let ${\mathcal D}(\pi,\tau;r)$ be the diagram obtained from ${\mathcal D}(\pi,\tau;r-1)$ by removing the bar labelled by $r.$ \item For each $0 \le r \le d,$ ignoring the labels on the bars gives an ordered set partition in $[{\mathcal T}(\pi),\hat{1}]$ of rank $r$, and we denote it by ${\mathcal T}(\pi,\tau; r).$ \item Let $\operatorname{ch}(\pi,\tau)$ be the maximal chain formed by $\{ {\mathcal T}(\pi,\tau;r) \ : \ 0 \le r \le d\}.$ \end{enumerate} \end{definition} \begin{example}\label{ex:nestedperm2} Let $(\pi,\tau) = (3241, 231).$ Then ${\mathcal D}(\pi,\tau)$ is the diagram \begin{equation}\label{eq:diagram} 4\stackrel{2}{|}2\stackrel{3}{|}1\stackrel{1}{|}3, \end{equation} and $\operatorname{ch}(\pi,\tau)$ is as shown in the box on the left side of Figure \ref{fig:maxchain}, where the arrows demonstrate the procedure described above. In the third column of the figure, we list the ray $\textbf{e}_{{\mathcal T}(\pi,\tau;r)}$ associated with ${\mathcal T}(\pi,\tau;r)$ for each $r.$ Finally, in the fourth column, we show the difference between any two consecutive associated rays, which turns out to be important. \begin{figure}[h] \begin{tikzpicture}[scale=0.9] \node at (-2.4,3.8) {${{\mathcal D}(\pi,\tau; r)}$}; \node at (-2.4,0) {$4\stackrel{2}{|}2\stackrel{3}{|}1\stackrel{1}{|}3$}; \node at (-2.4,0.5) {$\uparrow$}; \node at (-2.4,1) {$4\stackrel{2}{|}2\stackrel{3}{|}1 \ \ 3$}; \node at (-2.4,3/2) {$\uparrow$}; \node at (-2.4,4/2) {$4 \ \ 2\stackrel{3}{|}1 \ \ 3$}; \node at (-2.4,5/2) {$\uparrow$}; \node at (-2.4,6/2) {$4 \ \ 2 \ \ 1 \ \ 3$}; \node at (-1,0) {$\longrightarrow$}; \node at (-1,1) {$\longrightarrow$}; \node at (-1,2) {$\longrightarrow$}; \node at (-1,3) {$\longrightarrow$}; \node at (0,3.8) {${{\mathcal T}(\pi,\tau; r)}$}; \node at (0,0) {$4|2|1|3$}; \node at (0,0.5) {$|$}; \node at (0,1) {$4|2|13$}; \node at (0,3/2) {$|$}; \node at (0,4/2) {$42|13$}; \node at (0,5/2) {$|$}; \node at (0,6/2) {$4213$}; \draw (-0.6,-0.3) -- (0.6,-0.3) -- (0.6,3.3) -- (-0.6,3.3) -- cycle; \begin{scope}[xshift=0.5cm] \node at (3,0) {$\textbf{e}_4+2\textbf{e}_2+3\textbf{e}_1+4\textbf{e}_3$}; \node at (3,1) {$\textbf{e}_4+2\textbf{e}_2+3\textbf{e}_1+3\textbf{e}_3$}; \node at (3,2) {$\textbf{e}_4+\textbf{e}_2+2\textbf{e}_1+2\textbf{e}_3$}; \node at (3,3) {$\textbf{e}_4+\textbf{e}_2+\textbf{e}_1+\textbf{e}_3$}; \node at (3,3.8) {$\textbf{e}_{{\mathcal T}(\pi,\tau; r)}$}; \end{scope} \begin{scope}[xshift=1cm] \node at (6.6,3.8) {$\textbf{e}_{{\mathcal T}(\pi,\tau; r-1)} - \textbf{e}_{{\mathcal T}(\pi,\tau;r)}$}; \node at (6.6, 0.5) {$\textbf{e}_{\{3\}} = \textbf{e}_3$}; \node at (6.6, 1.5) {$\textbf{e}_{\{213\}} = \textbf{e}_2 + \textbf{e}_1 + \textbf{e}_3$}; \node at (6.6, 2.5) {$\textbf{e}_{\{13\}} = \textbf{e}_1 + \textbf{e}_3$}; \node at (10,3.8) {$\Gamma_{\tau^{-1}(r)}^\pi$}; \node at (10, 0.5) {$\Gamma_{3}^\pi =\{3\}$}; \node at (10, 1.5) {$\Gamma_{1}^\pi = \{213\}$}; \node at (10, 2.5) {$\Gamma_{2}^\pi =\{13\}$}; \node at (8.7, 0.5) {$\longleftarrow$}; \node at (8.7, 1.5) {$\longleftarrow$}; \node at (8.7, 2.5) {$\longleftarrow$}; \end{scope} \end{tikzpicture} \caption{Maximal chain and associated rays} \label{fig:maxchain} \end{figure} \end{example} One may notice in Figure \ref{fig:maxchain} that the differences $\textbf{e}_{{\mathcal T}(\pi,\tau; r-1)} - \textbf{e}_{{\mathcal T}(\pi,\tau;r)}$ can be understood in a more systematic way. We use the following notation. \begin{notation} Fix $\pi \in {\mathfrak{S}}_{d+1}$. 
For $1 \le i \le d,$ let \[ \Gamma_i^{\pi} := \pi^{-1}(i, d+1] = \{ \pi^{-1}(j) \ : \ i < j \le d+1\}.\] \end{notation} The following lemma is clear from the construction of $\operatorname{ch}(\pi,\tau)$, so we omit its proof. \begin{lemma}\label{lem:diff} The map $(\pi,\tau) \to \operatorname{ch}(\pi,\tau)$ is a bijection from $ {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_d$ to maximal chains of ${\mathcal O}_{d+1}$ (or equivalently, to maximal chains of $\overline{{\mathcal O}_{d+1}}$). Furthermore, for each $1 \le r \le d,$ \begin{equation}\label{equ:diff} \textbf{e}_{{\mathcal T}(\pi,\tau; r-1)} - \textbf{e}_{{\mathcal T}(\pi,\tau;r)} = \textbf{e}_{\Gamma_{\tau^{-1}(r)}^\pi}. \end{equation} \end{lemma} \begin{example} In our running example, where $(\pi,\tau) = (3241, 231),$ we have \[ \Gamma_1^\pi = \{ 213\}, \quad \Gamma_2^\pi = \{ 13\}, \quad \Gamma_3^\pi = \{3\}.\] Then the differences $\textbf{e}_{{\mathcal T}(\pi,\tau; r-1)} - \textbf{e}_{{\mathcal T}(\pi,\tau;r)}$ can be computed using \eqref{equ:diff} as shown in the last column of Figure \ref{fig:maxchain}. For instance, $\tau^{-1}(3)=2$ implies that $\textbf{e}_{\Gamma_2^\pi} = \textbf{e}_{\{13\}}$ gives the difference vector $\textbf{e}_{{\mathcal T}(\pi,\tau; 2)} - \textbf{e}_{{\mathcal T}(\pi,\tau;3)}$. \end{example} \begin{proof}[Proof of Proposition \ref{prop:charbr2}] It is enough to show that $\sigma(\pi,\tau)$ is spanned by the rays $\textbf{e}_{{\mathcal T}(\pi,\tau; r)}$, where $0 \le r \le d-1,$ associated to the non-maximum elements in the maximal chain $\operatorname{ch}(\pi,\tau).$ (All the conclusions in the proposition follow from this.) It follows from the definition that $\sigma(\pi,\tau)$ is the collection of $\textbf{x} \in W_d$ satisfying \begin{equation}\label{eq:chainineq2} 0\leq (\Delta \textbf{x})^\pi_{\tau^{-1}(1)}\leq (\Delta \textbf{x})^\pi_{\tau^{-1}(2)}\leq\cdots \leq (\Delta \textbf{x})^\pi_{\tau^{-1}(d)}. \end{equation} The rays of $\sigma(\pi,\tau)$ are obtained by having one strict inequality in \eqref{eq:chainineq2} and equalities in the rest, i.e., by \begin{equation}\label{equ:chainineqr} 0 = (\Delta \textbf{x})^\pi_{\tau^{-1}(1)}=(\Delta \textbf{x})^\pi_{\tau^{-1}(2)}=\cdots=(\Delta \textbf{x})^\pi_{\tau^{-1}(r)}<(\Delta \textbf{x})^\pi_{\tau^{-1}(r+1)} =\cdots= (\Delta \textbf{x})^\pi_{\tau^{-1}(d)} =1. \end{equation} The right hand side can be any positive constant, as the solution will only differ by a scaling factor; hence, we let it be $1.$ As there is a unique solution (if one exists) to \eqref{equ:chainineqr}, it is enough to verify that $\textbf{e}_{{\mathcal T}(\pi,\tau;r)}$ is a solution. Indeed, by the construction of ${\mathcal T}(\pi,\tau;r),$ the following statements are true: \begin{enumerate} \item If $i \le r,$ then $\pi^{-1}(\tau^{-1}(i)+1)$ is in the same block of ${\mathcal T}(\pi,\tau;r)$ as $\pi^{-1}(\tau^{-1}(i))$. \item If $i > r,$ then $\pi^{-1}(\tau^{-1}(i)+1)$ is in the block of ${\mathcal T}(\pi,\tau;r)$ immediately following the block containing $\pi^{-1}(\tau^{-1}(i))$. \end{enumerate} Then the desired conclusion follows from the definition of $\textbf{e}_{\mathcal T}$ (see \eqref{equ:defneT}) for an ordered set partition ${\mathcal T}$. \end{proof} In summary, we have associated three objects to each pair $(\pi,\tau)$. The proofs of Propositions \ref{prop:fanofusual2} and \ref{prop:charbr2} tell us the connections between these objects, which are summarized in the diagram below. 
\begin{figure}[h] \begin{tikzpicture} \node[left] at (-1,2) {$ {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_d\ni(\pi,\tau)$}; \draw[->, postaction={decorate,decoration={raise=0.5ex,text along path,text align=center, text={|\tiny|vertex}}}] (-1,2) to [out=60, in=190] (1,3); \draw[fill](-1,2) circle [radius=0.05]; \node[right] at (1,3) {$v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)}$}; \draw[->,dashed] (3,3) to [out=330, in=15] (2.3,2.1); \draw[->,dashed] (2.5,1) to [out=30, in=345] (2.3,1.9); \node[right] at (1,2) {$\sigma(\pi,\tau)$}; \node at (4,2.4) {normal cone}; \node[right] at (1,1) {$\operatorname{ch}(\pi,\tau)$}; \node at (5,1.6) {non-maximum elements giving}; \node at (5,1.2) {the set of spanning rays}; \draw[->, postaction={decorate,decoration={raise=0.5ex,text along path,text align=center, text={|\tiny|cone}}}] (-1,2) -- (1,2); \draw[->, postaction={decorate,decoration={raise=-1ex,text along path,text align=center, text={|\tiny|maximal chain}}}] (-1,2) to [out=300, in=170] (1,1); \end{tikzpicture} \caption{Relation between objects associated to $(\pi,\tau)$} \label{fig:relation} \end{figure} \begin{example} \label{ex:nestedperm3} Let $(\pi,\tau)=(3241,231).$ As shown in Examples \ref{ex:nestedperm} and \ref{ex:nestedperm2}, we have $v_{3241,231}^{(4,1)} = (14,7,17,2)$, and the maximal chain $\operatorname{ch}(3241,231)$ and its associated rays are given in Figure \ref{fig:maxchain}. So the normal cone of $\Pi_3^2(4,1)$ at the vertex $v_{3241,231}^{(4,1)}$ is $\sigma(3241,231)$. It is spanned by the rays associated to the non-maximum elements in $\operatorname{ch}(3241,231),$ which are the three vectors at the bottom of the middle column in Figure \ref{fig:maxchain}. This helps us find three facet-defining inequalities for $\Pi_3^2(4,1):$ \begin{align*} \left\langle \textbf{e}_{42|13}, \textbf{x} \right\rangle = 2x_1+x_2+2x_3+x_4 &\leq \left\langle \textbf{e}_{42|13}, (14,7,17,2) \right\rangle = 71\\ \left\langle \textbf{e}_{4|2|13}, \textbf{x} \right\rangle = 3x_1+2x_2+3x_3+x_4 &\leq\left\langle \textbf{e}_{4|2|13}, (14,7,17,2) \right\rangle = 109\\ \left\langle \textbf{e}_{4|2|1|3}, \textbf{x} \right\rangle = 3x_1+2x_2+4x_3+x_4 &\leq\left\langle \textbf{e}_{4|2|1|3}, (14,7,17,2) \right\rangle = 126. \end{align*} \end{example} \subsection*{Inequality description of usual nested permutohedra} It follows from Propositions \ref{prop:fanofusual2} and \ref{prop:charbr2} and Definition \ref{defn:gen2} that any generalized nested permutohedron in $\mathbb{R}^{d+1}$ is defined by a linear system of the form \begin{equation} \label{equ:linear2} \langle \textbf{e}_{[d+1]}, \textbf{x} \rangle = \langle {\bf{1}}, \textbf{x} \rangle \ = \ b_{[d+1]}, \quad \text{and} \quad \langle \textbf{e}_{\mathcal T}, \textbf{x} \rangle \ \le \ b_{\mathcal T}, \quad \forall {\mathcal T} \in \overline{{\mathcal O}_{d+1}}. \end{equation} Note that $[d+1]$ can be considered either as the maximal element of ${\mathcal O}_{d+1}$, which is an ordered set partition, or as the maximal element of ${\mathcal B}_{d+1}$, which is a set. Either way, $\textbf{e}_{[d+1]}$ represents the all-one vector $ {\bf{1}}.$ Hence, we may consider all the $b$'s appearing in \eqref{equ:linear2} as a vector $\textbf{b} \in \mathbb{R}^{{\mathcal O}_{d+1}}$ whose indices are elements of ${\mathcal O}_{d+1}.$ It is interesting to obtain results that are analogous to those in Theorems \ref{thm:centralsub} and \ref{thm:submodrestate}. We will do this in the next part. Before that, we focus on usual nested permutohedra. 
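As a quick sanity check of the system \eqref{equ:linear2}, the following Python sketch (again our own illustration; the helper name \texttt{e\_T} is ours) computes the ray $\textbf{e}_{\mathcal T}$ of \eqref{equ:defneT} from an ordered set partition and verifies, by brute force over the vertices listed in Example \ref{ex:nestedperm}, the first facet inequality of $\Pi_3^2(4,1)$ found in Example \ref{ex:nestedperm3}.
\begin{verbatim}
from itertools import permutations

def e_T(blocks, n):
    """The ray e_T of equation (equ:defneT): coordinate j receives the
    index of the block of the ordered set partition containing j."""
    x = [0] * n
    for pos, block in enumerate(blocks, start=1):
        for j in block:
            x[j - 1] = pos
    return x

# The vertices of Pi_3^2(4,1) are all coordinate permutations of:
base = [(3,7,11,19), (2,9,10,19), (1,10,11,18),
        (1,9,13,17), (2,7,14,17), (3,6,13,18)]
ray = e_T([{4, 2}, {1, 3}], 4)       # e_{42|13} = (2, 1, 2, 1)
m = max(sum(r * v for r, v in zip(ray, p))
        for q in base for p in permutations(q))
assert m == 71                       # matches b_{42|13} above
\end{verbatim}
The maximum $71$ is attained, e.g., at the vertex $(14,7,17,2)$, in agreement with Example \ref{ex:nestedperm3}.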
When we introduced usual nested permutohedra, we defined them in \eqref{equ:defnusual2} as the convex hull of their vertices. Now we can give an inequality description in the form of \eqref{equ:linear2} by giving an explicit description of $\textbf{b}$. It turns out that each coordinate $b_{\mathcal T}$ is determined by the structure type of ${\mathcal T}.$ \begin{definition}\label{defn:type} Let ${\mathcal T} = S_1| S_2| \dots | S_{k+1} \in {\mathcal O}_{d+1}.$ We define the \emph{structure type} of ${\mathcal T}$, denoted by $ {\operatorname{Type}}({\mathcal T})$, to be the sequence $(t_0=0, t_1, t_2, \dots, t_{k+1}=d+1),$ where \[ t_i = \sum_{j=1}^i |S_j|, \quad \text{for $1 \le i \le k$}.\] (We can also understand $t_i$, for $1 \le i \le k$, as the position of the $i$th bar in ${\mathcal T}.$) \end{definition} \begin{theorem}\label{thm:facetdes} Suppose $( {\bm \alpha}, {\bm \beta}) \in \mathbb{R}^{d+1} \times \mathbb{R}^d$ is a pair of strictly increasing sequences and $(M,N) \in \mathbb{R}_{>0}^2$ is an appropriate choice for $( {\bm \alpha}, {\bm \beta})$. Suppose $\textbf{b} \in \mathbb{R}^{{\mathcal O}_{d+1}}$ is defined as follows: for each ${\mathcal T} \in {\mathcal O}_{d+1},$ if $ {\operatorname{Type}}({\mathcal T})=(t_0, t_1,t_2,\dots, t_k, t_{k+1}),$ let \begin{equation} b_{{\mathcal T}} = M \left( \sum_{i=1}^{k+1} i \sum_{j=t_{i-1}+1}^{t_i} \alpha_j\right) + N \sum_{j=d-k+1}^d \beta_j. \label{equ:usualb} \end{equation} Then the linear system \eqref{equ:linear2} defines the usual nested permutohedron $\operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M,N)$. \end{theorem} \begin{proof} As discussed after Definition \ref{defn:usual2}, $\operatorname{Perm}( {\bm \alpha}, {\bm \beta};M,N)$ lies on the hyperplane $\sum_{i=1}^{d+1} x_i = M \sum_{i=1}^{d+1} \alpha_i.$ This, together with Propositions \ref{prop:fanofusual2} and \ref{prop:charbr2}, implies that it is enough to verify that $\textbf{b}$ defined by \eqref{equ:usualb} satisfies \begin{enumerate} \item $b_{[d+1]} = M \sum_{i=1}^{d+1} \alpha_i;$ and \item if ${\mathcal T} \in \overline{{\mathcal O}_{d+1}}$ is in the maximal chain $\operatorname{ch}(\pi,\tau),$ then $b_{\mathcal T} = \left\langle \textbf{e}_{\mathcal T}, v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle.$ \end{enumerate} Suppose ${\mathcal T} = [d+1]$ is the maximal element of ${\mathcal O}_{d+1}.$ Then $k=0$ and its structure type is $(0, d+1),$ so the right hand side of \eqref{equ:usualb} becomes $M \sum_{j=1}^{d+1} \alpha_j$ as desired. Suppose ${\mathcal T} = S_1 | S_2 | \cdots | S_{k+1} \in \overline{{\mathcal O}_{d+1}}$ has structure type $(t_0, t_1, \dots, t_{k+1}),$ and it belongs to the maximal chain $\operatorname{ch}(\pi,\tau).$ (Note that the choice of $(\pi,\tau)$ is not unique.) It is easy to see that ${\mathcal T}$ is of rank $d-k.$ Hence, it is the rank-$(d-k)$ element ${\mathcal T}(\pi,\tau;d-k)$ of the maximal chain $\operatorname{ch}(\pi,\tau).$ It follows from the construction of $\operatorname{ch}(\pi,\tau)$ that ${\mathcal D}(\pi,\tau; d-k)$ is the following diagram: \[ \pi^{-1}(1) \pi^{-1}(2) \cdots \pi^{-1}(t_1) \stackrel{\tau(t_1)}{|} \pi^{-1}(t_1+1) \cdots \pi^{-1}(t_2) \stackrel{\tau(t_2)}{|} \cdots \cdots \stackrel{\tau(t_k)}{|} \pi^{-1}(t_k+1) \cdots \pi^{-1}(d+1). 
\] Since ${\mathcal T} = {\mathcal T}(\pi,\tau;d-k) = S_1 |S_2 |\cdots|S_{k+1}$ is obtained from ${\mathcal D}(\pi,\tau;d-k)$ by removing the labels on the bars, and the labels on these $k$ remaining bars have to be the largest $k$ elements in $[d],$ the following are true: \begin{align} & \{ \tau(t_1), \tau(t_2),\dots, \tau(t_k)\} = \{d-k+1, d-k+2, \dots, d\}; \label{equ:barcond} \\ & S_i = \{ \pi^{-1}(j) : t_{i-1}+1 \le j \le t_i\} \quad \forall 1 \le i \le k+1. \label{equ:Si} \end{align} It follows from \eqref{equ:Si} that $\textbf{e}_{\mathcal T} = \sum_{i=1}^{k+1} i \sum_{j=t_{i-1}+1}^{t_i} \textbf{e}_{\pi^{-1}(j)}$. Using this and \eqref{equ:defnv}, we obtain \begin{align*} \left\langle \textbf{e}_{\mathcal T}, v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta}),(M,N)} \right\rangle =& M \left( \sum_{i=1}^{k+1} i \sum_{j=t_{i-1}+1}^{t_i} \alpha_j\right) + N \left\langle \textbf{e}_{\mathcal T}, \sum_{i=1}^d \beta_{\tau(i)} \textbf{f}_{i}^\pi \right\rangle. \end{align*} However, $\langle \textbf{e}_{\mathcal T}, \textbf{f}_{i}^\pi \rangle = \langle \textbf{e}_{\mathcal T}, \textbf{e}_{\pi^{-1}(i+1)} - \textbf{e}_{\pi^{-1}(i)} \rangle$ is $0$ if $\pi^{-1}(i)$ and $\pi^{-1}(i+1)$ are in the same block of ${\mathcal T}$, and is $1$ otherwise, in which case they are in two consecutive blocks. One checks that the latter situation happens if and only if $i = t_j$ for some $1 \le j \le k$, which is a position where a bar is placed. Therefore, it follows from \eqref{equ:barcond} that $\displaystyle \left\langle \textbf{e}_{\mathcal T}, \sum_{i=1}^d \beta_{\tau(i)} \textbf{f}_{i}^\pi \right\rangle = \sum_{j=d-k+1}^d \beta_j$ as desired. \end{proof} \begin{example} We apply Theorem \ref{thm:facetdes} to the regular nested permutohedron $\Pi_3^2(4,1)$ first studied in Example \ref{ex:nestedperm}. Clearly, the polytope lies in the hyperplane \[ \langle \textbf{e}_{[4]}, \textbf{x} \rangle = x_1 + x_2 + x_3 + x_4 = b_{[d+1]} = M \sum_{i=1}^4 \alpha_i = 4 (1+2+3+4) = 40.\] We then compute some of the inequalities: \begin{align*} \langle \textbf{e}_{23|4|1},\textbf{x}\rangle = 3x_1+x_2+x_3+2x_4\leq 4\Big(1(1+2)+2(3)+3(4)\Big)+1\Big(2+3\Big)&= 89\\ \langle \textbf{e}_{14|23},\textbf{x}\rangle = x_1+2x_2+2x_3+x_4\leq 4\Big(1(1+2)+2(3+4)\Big)+1\Big(3\Big)&= 71\\ \langle \textbf{e}_{4|123},\textbf{x}\rangle = 2x_1+2x_2+2x_3+x_4\leq 4\Big(1(1)+2(2+3+4)\Big)+1\Big(3\Big)&= 79. \end{align*} \end{example} \subsection*{Deformation cone of $\textrm{Br}_d^2$} Finally, we are going to present results that are analogous to the results on $\operatorname{Def}(\textrm{Br}_d)$ and the Submodular Theorem for generalized permutohedra discussed in Section \ref{sec:GP}. We first apply Proposition \ref{prop:reduxfan} to determine $\operatorname{Def}(\textrm{Br}_d^2)$, or equivalently, the deformation cone of a \emph{centralized} usual nested permutohedron. In order to apply Proposition \ref{prop:reduxfan}, one needs to describe pairs of adjacent maximal cones in $\textrm{Br}_d^2.$ By Proposition \ref{prop:charbr2}, this is equivalent to describing pairs of maximal chains in ${\mathcal O}_{d+1}$ that only differ at a non-maximum element. Suppose $\operatorname{ch}_1$ and $\operatorname{ch}_2$ form such a pair of maximal chains in ${\mathcal O}_{d+1}$. 
Let ${\mathcal T} = \operatorname{ch}_1 \setminus \operatorname{ch}_2$ and ${\mathcal T}'= \operatorname{ch}_2\setminus \operatorname{ch}_1.$ Then ${\mathcal T}$ and ${\mathcal T}'$ are of the same rank, say $r$, where $0 \le r < d.$ In this case, we say $\{ \operatorname{ch}_1, \operatorname{ch}_2\}$ is a pair of \emph{($r$-)adjacent} maximal chains in ${\mathcal O}_{d+1}$. \begin{figure}[t] \begin{tikzpicture} \begin{scope}[scale=0.8] \draw [black,fill] (0,0) circle [radius = 0.08]; \draw[black, fill] (0,-1) circle [radius=0.08]; \draw[black, fill] (1,1) circle [radius=0.08]; \draw[black, fill] (-1,1) circle [radius=0.08]; \draw[black, fill] (0,2) circle [radius=0.08]; \draw[black, fill] (0,3) circle [radius=0.08]; \draw (0,-1) -- (0,0) -- (1,1) -- (0,2); \draw (0,0) -- (-1,1) -- (0,2)--(0,3); \node[above] at (0,3.2) {\vdots}; \node[below] at (0,-1) {\vdots}; \node[below right] at (0,-1) {${\mathcal T}_{r-2}$}; \node[below right] at (0,0) {${\mathcal T}_{r-1}$}; \node[right] at (1,1) {${\mathcal T}'_{r}={\mathcal T}'$}; \node[left] at (-1,1) {${\mathcal T}_{r}={\mathcal T}$}; \node[right] at (0,2) {${\mathcal T}_{r+1}$}; \node[right] at (0,3) {${\mathcal T}_{r+2}$}; \node[scale=1.5] at (-1.7,3) {$\operatorname{ch}_1$}; \node[scale=1.5] at (2.1,3) {$\operatorname{ch}_2$}; \node at (0,-2.5) {a diamond with $0 < r < d$}; \end{scope} \begin{scope}[xshift=6cm, yshift=-2cm, scale=0.8] \draw[black, fill] (1,1) circle [radius=0.08]; \draw[black, fill] (-1,1) circle [radius=0.08]; \draw[black, fill] (0,2) circle [radius=0.08]; \draw[black, fill] (0,3) circle [radius=0.08]; \draw[black, fill] (0,4) circle [radius=0.08]; \draw[black, fill] (0,5) circle [radius=0.08]; \draw (1,1) -- (0,2) -- (0,3)--(0,4)--(0,5); \draw (-1,1) -- (0,2); \node[above] at (0,5.2) {\vdots}; \node[right] at (1,1) {${\mathcal T}'_{r}$}; \node[left] at (-1,1) {${\mathcal T}_{r}$}; \node[right] at (0,2) {${\mathcal T}_{r+1}$}; \node[right] at (0,3) {${\mathcal T}_{r+2}$}; \node[right] at (0,4) {${\mathcal T}_{r+3}$}; \node[right] at (0,5) {${\mathcal T}_{r+4}$}; \node[scale=1.5] at (-1.7,3) {$\operatorname{ch}_1$}; \node[scale=1.5] at (2.1,3) {$\operatorname{ch}_2$}; \node at (0,0) {a ``\!\includegraphics[scale=0.7]{ren.eps}'' shape with $r=0$}; \end{scope} \end{tikzpicture} \caption{Two possibilities} \label{fig:2poss} \end{figure} \begin{lemma}\label{lem:adjch} Suppose $\{ \operatorname{ch}_1, \operatorname{ch}_2\}$ is a pair of \emph{$r$-adjacent} maximal chains in ${\mathcal O}_{d+1}$. Then (by Lemma \ref{lem:diff}) there exists a unique pair $(\pi_i, \tau_i) \in {\mathfrak{S}}_{d+1} \times {\mathfrak{S}}_d$ such that $\operatorname{ch}_i = \operatorname{ch}(\pi_i,\tau_i)$ for each $i.$ There are two situations: \begin{enumerate} \item (The diamond situation:) If $0 < r < d,$ then $\operatorname{ch}_1$ and $\operatorname{ch}_2$ form a diamond shape as shown on the left of Figure \ref{fig:2poss}. Furthermore, we have \begin{equation}\label{equ:dia} \pi_1 = \pi_2, \quad \text{and} \quad (r,r+1) \circ \tau_1 = \tau_2, \end{equation} where $(r,r+1)$ is the transposition that exchanges $r$ and $r+1.$ \item (The \!\includegraphics[scale=0.7]{ren.eps} situation:) If $r=0,$ then $\operatorname{ch}_1$ and $\operatorname{ch}_2$ form a \!\includegraphics[scale=0.7]{ren.eps} (reads ``ren'') shape as shown on the right of Figure \ref{fig:2poss}. 
Suppose ${\mathcal T}_{r+1} = {\mathcal T}_1$, the minimum common element of $\operatorname{ch}_1$ and $\operatorname{ch}_2$, has its only $2$-element block in the $i$th position, that is, \begin{equation} {\mathcal T}_1 =s_1|s_2|\cdots|s_{i-1}|s_i s_i'|s_{i+1}|\cdots|s_d. \label{equ:rank1} \end{equation} Then \begin{equation}\label{equ:ren} \tau_1 = \tau_2, \quad \tau_1(i)=1=\tau_2(i), \quad \text{and} \quad (i,i+1) \circ \pi_1 = \pi_2. \end{equation} \end{enumerate} Moreover, if $(\pi_1,\tau_1)$ and $(\pi_2,\tau_2)$ satisfy either \eqref{equ:dia} or \eqref{equ:ren}, then $\operatorname{ch}(\pi_1,\tau_1)$ and $\operatorname{ch}(\pi_2,\tau_2)$ are adjacent maximal chains in ${\mathcal O}_{d+1}.$ \end{lemma} \begin{proof} Suppose $r \neq 0.$ Then $\operatorname{ch}_1$ and $\operatorname{ch}_2$ have the same minimum element, say ${\mathcal T}(\pi).$ Thus, $\operatorname{ch}_1$ and $\operatorname{ch}_2$ are two maximal chains in the maximal interval $[{\mathcal T}(\pi), \hat{1}],$ and form a diamond. Hence, $\pi_1 = \pi = \pi_2.$ By the construction of $\operatorname{ch}(\pi,\tau)$, we must have \begin{equation} \tau_1^{-1}(r) = \tau_2^{-1}(r+1), \text{ and } \tau_1^{-1}(r+1) = \tau_2^{-1}(r), \label{equ:tautran} \end{equation} and $\tau_1^{-1}(i) = \tau_2^{-1}(i)$ for $i \neq r, r+1.$ This is equivalent to $(r, r+1) \circ \tau_1 = \tau_2.$ Suppose $r=0$, and $\operatorname{ch}_1$ and $\operatorname{ch}_2$ are as shown on the right of Figure \ref{fig:2poss}. Assume further that ${\mathcal T}_1$ is given by \eqref{equ:rank1}. Then the two minimal elements in $\operatorname{ch}_1$ and $\operatorname{ch}_2$ are \[ s_1|s_2|\cdots|s_{i-1}|s_i| s_i'|s_{i+1}|\cdots|s_d, \quad \text{and} \quad s_1|s_2|\cdots|s_{i-1}|s_i'| s_i|s_{i+1}|\cdots|s_d.\] Then \eqref{equ:ren} follows from the construction of $\operatorname{ch}(\pi,\tau).$ Finally, the last assertion can be easily verified. \end{proof} Using the connection between adjacent chains of ${\mathcal O}_{d+1}$ and adjacent vertices of the usual nested permutohedron $\operatorname{Perm}( {\bm \alpha}, {\bm \beta};M,N),$ we immediately have the following result. \begin{corollary} The two vertices $v_{\pi_1,\tau_1}^{( {\bm \alpha}, {\bm \beta}),(M,N)}$ and $v_{\pi_2,\tau_2}^{( {\bm \alpha}, {\bm \beta}),(M,N)}$ are adjacent, i.e., form an edge, if and only if either \eqref{equ:dia} or \eqref{equ:ren} holds. \end{corollary} Each pair of adjacent maximal chains described in Lemma \ref{lem:adjch}, via its correspondence with a pair of adjacent maximal cones in $\textrm{Br}_d^2,$ is associated with an inequality as described in Definition \ref{defn:fanineq}. In the lemma below, we describe this association explicitly. \begin{lemma}\label{lem:adjchineq} Assume all the hypotheses of Lemma \ref{lem:adjch}. Let $\sigma_i = \sigma(\pi_i,\tau_i)$ be the maximal cone in $\textrm{Br}_d^2$ that is in bijection with $\operatorname{ch}_i$ for $i=1,2.$ \begin{enumerate} \item (The diamond situation:) Suppose $\operatorname{ch}_1$ and $\operatorname{ch}_2$ form a diamond shape as shown on the left of Figure \ref{fig:2poss}. Then the associated inequality $I_{\{\sigma_1, \sigma_2\}}(\textbf{b})$ is \begin{equation} b_{{\mathcal T}_{r+1}} + b_{{\mathcal T}_{r-1}} \le b_{{\mathcal T}_r} + b_{{\mathcal T}'_r}. \label{equ:diaineq} \end{equation} We call such an inequality a \emph{diamond submodular inequality}. 
\item (The \!\includegraphics[scale=0.7]{ren.eps} situation:) Suppose $\operatorname{ch}_1$ and $\operatorname{ch}_2$ form a \!\includegraphics[scale=0.7]{ren.eps} shape as shown on the right of Figure \ref{fig:2poss}, and ${\mathcal T}_1$ is given by \eqref{equ:rank1}. (We know that $\tau_1=\tau_2.$) Let $\tau=\tau_1=\tau_2,$ and then let $p = \tau(i-1)$ and $q = \tau(i+1).$ (By convention, let $\tau(0)=0$ and $\tau(d+1)=d+1.$) Then the associated inequality $I_{\{\sigma_1, \sigma_2\}}(\textbf{b})$ is \begin{equation} 2b_{{\mathcal T}_1} + \underbrace{\left(b_{{\mathcal T}_{p-1}}-b_{{\mathcal T}_{p}}\right)}_{\text{replaced with $b_{[d+1]}$ if $p=0$}} + \underbrace{\left(b_{{\mathcal T}_{q-1}}-b_{{\mathcal T}_{q}}\right)}_{\text{eliminated if $q=d+1$}} \ \le b_{{\mathcal T}_0} + b_{{\mathcal T}_0'}. \label{equ:renineq} \end{equation} We call such an inequality a \emph{\!\includegraphics[scale=0.7]{ren.eps} inequality}. \end{enumerate} In both situations, we assume $b_{[d+1]} = b_{\hat{1}} = 0.$ \end{lemma} \begin{remark}\label{rem:balance} There are (at least) two ways to see why $b_{[d+1]} =0:$ First, we are discussing the centralized permutohedra, which all lie in $V_d$, where each point has coordinates summing to $0.$ Second, $\textbf{e}_{[d+1]} = 0$ in $W_d$ is not a spanning ray of any maximal cone in $\textrm{Br}_d^2.$ The reason we keep $b_{[d+1]}$ in our expression \eqref{equ:renineq} (as well as in its later reformulations) is to keep the expression \emph{balanced}, which means that if we replace each $b_{\mathcal T}$ with $\textbf{e}_{\mathcal T}$ in \eqref{equ:renineq}, then the left side gives the same vector as the right side, considered as vectors in $\mathbb{R}^{d+1}$ (instead of $W_d$). (It will be clear from the proof of Lemma \ref{lem:adjchineq} below why \eqref{equ:renineq} is balanced.) \end{remark} \begin{proof}[Proof of Lemma \ref{lem:adjchineq}] For the diamond situation, it was shown in the proof of Lemma \ref{lem:adjch} that $\tau_1$ and $\tau_2$ satisfy \eqref{equ:tautran}. It then follows from the second part of Lemma \ref{lem:diff} that \[\textbf{e}_{{\mathcal T}_{r-1}} - \textbf{e}_{{\mathcal T}_r} = \textbf{e}_{{\mathcal T}(\pi,\tau_1; r-1)} - \textbf{e}_{{\mathcal T}(\pi,\tau_1;r)} =\textbf{e}_{{\mathcal T}(\pi,\tau_2; r)} - \textbf{e}_{{\mathcal T}(\pi,\tau_2;r+1)} = \textbf{e}_{{\mathcal T}'_r} - \textbf{e}_{{\mathcal T}_{r+1}},\] which implies that \[ \textbf{e}_{{\mathcal T}_{r+1}} + \textbf{e}_{{\mathcal T}_{r-1}} = \textbf{e}_{{\mathcal T}_r} + \textbf{e}_{{\mathcal T}'_r}.\] This gives us the diamond submodular inequality \eqref{equ:diaineq}. We now consider the \!\includegraphics[scale=0.7]{ren.eps} situation. Without loss of generality, we may assume \begin{equation} {\mathcal T}_0 = s_1|s_2|\cdots|s_{i-1}|s_i| s_i'|s_{i+1}|\cdots|s_d, \quad \text{and} \quad {\mathcal T}_0' = s_1|s_2|\cdots|s_{i-1}|s_i'| s_i|s_{i+1}|\cdots|s_d. \label{equ:rank0} \end{equation} Then \begin{equation} \textbf{e}_{{\mathcal T}_0} -\textbf{e}_{{\mathcal T}_1} = \textbf{e}_{s_i'} + \sum_{j=i+1}^d \textbf{e}_{s_j} \quad \text{and} \quad \textbf{e}_{{\mathcal T}_0'} -\textbf{e}_{{\mathcal T}_1} = \textbf{e}_{s_i} + \sum_{j=i+1}^d \textbf{e}_{s_j}. 
\label{equ:reneq} \end{equation} Note that by the second part of Lemma \ref{lem:diff}, we have \begin{align*} \textbf{e}_{s_i}+\textbf{e}_{s_i'} + \sum_{j=i+1}^d \textbf{e}_{s_j} =& \begin{cases} \textbf{e}_{{\mathcal T}_{p-1}} - \textbf{e}_{{\mathcal T}_p} \quad & \text{if $p \neq 0$} \\ \textbf{e}_{[d+1]} =\textbf{e}_{\hat{1}} \quad & \text{if $p=0$};\end{cases} \\ \text{and} \quad \sum_{j=i+1}^d \textbf{e}_{s_j} =& \begin{cases} \textbf{e}_{{\mathcal T}_{q-1}} - \textbf{e}_{{\mathcal T}_q} \quad & \text{if $q \neq d+1$} \\ 0 \quad & \text{if $q=d+1$}.\end{cases} \end{align*} One sees that the sum of the left hand sides of the above two equalities equals the sum of the right hand sides of the two equalities in \eqref{equ:reneq}. This gives us an equality involving $\textbf{e}_{{\mathcal T}_0}, \textbf{e}_{{\mathcal T}_0'}, \textbf{e}_{{\mathcal T}_1},$ $\textbf{e}_{{\mathcal T}_{p-1}} - \textbf{e}_{{\mathcal T}_p}$ and $\textbf{e}_{{\mathcal T}_{q-1}} - \textbf{e}_{{\mathcal T}_q}.$ Rearranging terms and applying Definition \ref{defn:fanineq} yields the desired inequality \eqref{equ:renineq}. \end{proof} We combine the results in part (1) of Lemmas \ref{lem:adjch} and \ref{lem:adjchineq} to obtain the following reformulated description of the diamond submodular inequalities. The proof is straightforward, so it is omitted. \begin{corollary}\label{cor:diamond} Let $(\pi,\tau) \in {\mathfrak{S}}_{d+1}\times {\mathfrak{S}}_d$ and $1 \le r < d.$ If we let $\tau' = (r, r+1)\circ \tau,$ then $\operatorname{ch}(\pi,\tau)$ and $\operatorname{ch}(\pi, \tau')$ form a pair of $r$-adjacent maximal chains, and their associated diamond submodular inequality can be written as: \begin{equation} b_{{\mathcal T}(\pi,\tau; r-1)} - b_{{\mathcal T}(\pi,\tau; r)} \le b_{{\mathcal T}(\pi,\tau'; r)} - b_{{\mathcal T}(\pi,\tau'; r+1)}. \label{equ:refdia} \end{equation} \end{corollary} While the diamond submodular inequalities are in a simple form that is easy to describe, the inequalities arising from the \!\includegraphics[scale=0.7]{ren.eps} situation are relatively messy. Fortunately, given the diamond submodular inequalities, in particular their reformulations given in Corollary \ref{cor:diamond}, we only need to consider a subset of the ones from the \!\includegraphics[scale=0.7]{ren.eps} situation. In fact, for each rank-$1$ element ${\mathcal T}_1$ of ${\mathcal O}_{d+1},$ we only need one inequality constructed from a \!\includegraphics[scale=0.7]{ren.eps} shape containing ${\mathcal T}_1.$ \begin{lemma} \label{lem:ess} Let ${\mathcal T} = {\mathcal T}_1 = s_1|s_2|\cdots|s_{i-1}|s_i s_i'|s_{i+1}|\cdots|s_d$ be a rank-$1$ element of ${\mathcal O}_{d+1}.$ It covers exactly two rank-$0$ elements ${\mathcal T}_0$ and ${\mathcal T}_0'$, as given in \eqref{equ:rank0}. We also let \[ {\mathcal S} := s_1 s_2 \cdots s_{i-1} | s_i s_i' | s_{i+1}\cdots s_d \] be the maximum element that is above ${\mathcal T}$ and still contains $\{s_i,s_i'\}$ as a single block. Note that ${\mathcal S}$ is of rank $d-1$ if $i=1$ or $d,$ and is of rank $d-2$ otherwise. Assume all the hypotheses of Lemma \ref{lem:adjchineq}, including the assumptions in part (2) for the \!\includegraphics[scale=0.7]{ren.eps} situation. Assume further that the common rank-$1$ element of $\operatorname{ch}_1$ and $\operatorname{ch}_2$ is the ${\mathcal T}_1$ given above. Clearly, ${\mathcal T}_0, {\mathcal T}_0'$ are then the two rank-$0$ elements. 
\begin{enumerate}[(a)] \item If ${\mathcal S}$ is a common element of $\operatorname{ch}_1$ and $\operatorname{ch}_2,$ then the associated \!\includegraphics[scale=0.7]{ren.eps} inequality $I_{\{\sigma_1, \sigma_2\}}(\textbf{b})$ becomes: \begin{equation}\label{equ:essineq} 2 b_{\mathcal T} + b_{\mathcal S} \le b_{{\mathcal T}_0} + b_{{\mathcal T}_0'} + \underbrace{b_{[d+1]}}_{\text{eliminated if $i=1$}}. \end{equation} \item If we do not assume ${\mathcal S}$ is a common element of $\operatorname{ch}_1$ and $\operatorname{ch}_2,$ then the associated \!\includegraphics[scale=0.7]{ren.eps} inequality $I_{\{\sigma_1, \sigma_2\}}(\textbf{b})$ can be deduced from the inequality \eqref{equ:essineq} and all the diamond submodular inequalities. \end{enumerate} \end{lemma} We remark that the term $b_{[d+1]}$ in \eqref{equ:essineq} can be removed even when $i \neq 1$, as we have the assumption $b_{[d+1]}=0$. However, as we stated in Remark \ref{rem:balance}, we keep this term to make our inequality balanced. \begin{proof} Both parts of the lemma can be verified in three cases: (i) when $i=1$ and ${\mathcal S}$ is of rank $d-1;$ (ii) when $i=d$ and ${\mathcal S}$ is of rank $d-1;$ (iii) when $i$ is neither $1$ nor $d$ and ${\mathcal S}$ is of rank $d-2.$ The proofs for all cases are similar. Therefore, we only present the one for case (i), where $i=1.$ \begin{enumerate}[(a)] \item Since $i=1,$ we immediately have that $p = \tau(i-1)=0.$ The element ${\mathcal S} = s_1 s_1'|s_2 s_3 \cdots s_d$ is of rank $d-1,$ and so is the second element ${\mathcal T}_{d-1}$ from the top on the maximal chain $\operatorname{ch}_1=\operatorname{ch}(\pi_1,\tau_1)=\operatorname{ch}(\pi_1,\tau).$ By the construction of $\operatorname{ch}(\pi_1,\tau),$ the top element ${\mathcal T}_d = \hat{1}=[d+1]$ in $\operatorname{ch}_1$ is obtained from ${\mathcal S}$ by removing the bar at the $2$nd position, which means $q = \tau(i+1)=\tau(2)=d$. Plugging this information into \eqref{equ:renineq}, we obtain \eqref{equ:essineq}. \item We may assume $d \ge 2$, since if $d=1,$ there is only one inequality arising from the \!\includegraphics[scale=0.7]{ren.eps} situation (and no diamond submodular inequalities). As in the previous part, we have $p=\tau(i-1)=0.$ Since $d \ge 2,$ we have $q = \tau(i+1) = \tau(2) \neq d+1.$ Hence, the associated inequality is \[ 2b_{{\mathcal T}_1} + b_{[d+1]} + \left(b_{{\mathcal T}_{q-1}}-b_{{\mathcal T}_{q}}\right) \le b_{{\mathcal T}_0} + b_{{\mathcal T}_0'}. \] Comparing it with \eqref{equ:essineq}, we see that it is enough to show that the following inequality can be deduced from the diamond submodular inequalities: \begin{equation}\label{equ:mdia} b_{{\mathcal T}_{q-1}}-b_{{\mathcal T}_{q}} \le b_{\mathcal S} - b_{[d+1]}. \end{equation} Clearly, if $q=d,$ then \eqref{equ:mdia} holds with equality. So we assume $q < d.$ Let $\gamma_0 = \tau,$ and for each $1 \le j \le d-q,$ let $\gamma_j = (q+j, q+j-1) \circ \gamma_{j-1}.$ Then $\gamma_j(2) = q+j.$ In particular, $\gamma_{d-q}(2) = d.$ Therefore, we have \[ {\mathcal T}_{q-1} = {\mathcal T}(\pi_1,\gamma_0; q-1), {\mathcal T}_{q} = {\mathcal T}(\pi_1,\gamma_0; q), \quad \text{and} \quad {\mathcal S} = {\mathcal T}(\pi_1, \gamma_{d-q}; d-1), [d+1] = {\mathcal T}(\pi_1,\gamma_{d-q}; d).\] We apply Corollary \ref{cor:diamond} ($d-q$) times with $\pi = \pi_1$ and $(\tau,\tau') = (\gamma_{j-1}, \gamma_j)$ for $1 \le j \le d-q,$ and obtain $d-q$ diamond submodular inequalities in the form of \eqref{equ:refdia}.
Adding these inequalities together yields the desired inequality \eqref{equ:mdia}. \end{enumerate} \end{proof} We see that the inequality \eqref{equ:essineq} only depends on ${\mathcal T} = {\mathcal T}_1$, as ${\mathcal T}_0, {\mathcal T}_0'$ and ${\mathcal S}$ are all determined by ${\mathcal T}.$ Hence, we denote the inequality \eqref{equ:essineq} by $I_{{\mathcal T}}(\textbf{b})$, and call it the \emph{essential inequality} associated with the rank-$1$ element ${\mathcal T}={\mathcal T}_1.$ \begin{example}\label{ex:ess} In Figure \ref{fig:renex}, we give an example of the proof of part (b) of Lemma \ref{lem:ess}, showing how to use diamond submodular inequalities to reduce an arbitrary \!\includegraphics[scale=0.7]{ren.eps} inequality to an essential one. We start with the pair of adjacent chains given by the circled elements, $\operatorname{ch}_1= ({\mathcal T}_0,{\mathcal T}_1,{\mathcal T}_2,{\mathcal T}_3,{\mathcal T}_4)$ and $\operatorname{ch}_2=({\mathcal T}_0',{\mathcal T}_1,{\mathcal T}_2,{\mathcal T}_3,{\mathcal T}_4)$. Their associated \!\includegraphics[scale=0.7]{ren.eps} inequality is \begin{equation}\label{equ:exineq} 2b_{{\mathcal T}_1}+ b_{[5]} + (b_{{\mathcal T}_1} - b_{{\mathcal T}_2}) \leq b_{{\mathcal T}_0}+b_{{\mathcal T}_0'}. \end{equation} However, using the two diamonds shown in Figure \ref{fig:renex}, we obtain that \[ b_{{\mathcal T}_1} - b_{{\mathcal T}_2}\le b_{{\mathcal T}_2'} - b_{{\mathcal T}_3}\leq b_{{\mathcal T}_3'}-b_{{\mathcal T}_4} = b_{53|142} - b_{[5]}.\] This shows that the \!\includegraphics[scale=0.7]{ren.eps} inequality \eqref{equ:exineq} can be deduced from the above two diamond submodular inequalities and the following inequality: \[ 2b_{{\mathcal T}_1}+ b_{[5]} + (b_{53|142} - b_{[5]}) = 2b_{{\mathcal T}_1} + b_{53|142} \leq b_{{\mathcal T}_0}+b_{{\mathcal T}_0'},\] which is exactly the essential \!\includegraphics[scale=0.7]{ren.eps} inequality associated to ${\mathcal T}_1.$ \begin{figure}[h] \centering \includegraphics[scale=0.8]{renex.eps} \caption{Illustration of Lemma \ref{lem:ess}} \label{fig:renex} \end{figure} \end{example} All the discussion above, together with Proposition \ref{prop:reduxfan} and Remark \ref{rem:equivsub}, gives us the first main result of this part. Recall that ${\mathcal O}_{d+1}$ is locally boolean (Lemma \ref{lem:localbool}), so we can define $\wedge$ and $\vee$ on any pair of elements in an interval. \begin{theorem}\label{thm:centralsub2} The deformation cone of the nested Braid fan (or centralized nested regular permutohedron) is the collection of $\textbf{b} \in \mathbb{R}^{\overline{{\mathcal O}_{d+1}}}$ satisfying the following conditions: \begin{enumerate} \item (Local submodularity) All the diamond submodular inequalities on ${\mathcal O}_{d+1}$ are satisfied, or equivalently, for any maximal interval $[{\mathcal T}(\pi), \hat{1}]$, \begin{equation}\label{equ:bsub2} b_{{\mathcal S} \vee {\mathcal T}} + b_{{\mathcal S} \wedge {\mathcal T}} \le b_{{\mathcal S}} + b_{{\mathcal T}}, \quad \forall {\mathcal S}, {\mathcal T} \in [{\mathcal T}(\pi),\hat{1}]. \end{equation} \item ( \!\includegraphics[scale=0.7]{ren.eps} condition) For any rank-$1$ element ${\mathcal T} \in \overline{{\mathcal O}_{d+1}},$ its associated essential inequality $I_{{\mathcal T}}(\textbf{b})$ holds. 
\end{enumerate} For both parts, we assume $b_{\hat{1}} = b_{[d+1]} = 0.$ \end{theorem} If we remove the condition $b_{[d+1]}=0$, which corresponds to the centralized case, we get a theorem that characterizes all generalized nested permutohedra, analogous to Theorem \ref{thm:submodrestate}. \begin{theorem}\label{thm:sub2} For $\textbf{b} \in \mathbb{R}^{{\mathcal O}_{d+1}}$ satisfying the local submodularity condition and the \!\includegraphics[scale=0.7]{ren.eps} condition described in Theorem \ref{thm:centralsub2}, the linear system: \begin{equation} \label{equ:linear3} \langle \textbf{e}_{[d+1]}, \textbf{x} \rangle = \langle {\bf{1}}, \textbf{x} \rangle \ = \ b_{[d+1]}, \quad \text{and} \quad \langle \textbf{e}_{\mathcal T}, \textbf{x} \rangle \ \le \ b_{\mathcal T}, \quad \forall {\mathcal T} \in \overline{{\mathcal O}_{d+1}} \end{equation} defines a generalized nested permutohedron in $\mathbb{R}^{d+1},$ and any generalized nested permutohedron arises this way uniquely. Furthermore, if a polytope $P \subset \mathbb{R}^{d+1}$ is defined by a tight representation \eqref{equ:linear3}, then $P$ is a generalized nested permutohedron if and only if $\textbf{b} \in \mathbb{R}^{{\mathcal O}_{d+1}}$ satisfies the local submodularity condition and the \!\includegraphics[scale=0.7]{ren.eps} condition. \end{theorem} \begin{proof} The proof is similar to that of Theorem \ref{thm:submodrestate}, and we only give a sketch of the proof for the first part. The one-to-one correspondence between centralized nested permutohedra and $\textbf{b}$'s satisfying the local submodularity condition and the \!\includegraphics[scale=0.7]{ren.eps} condition with $b_{[d+1]}=0$ is established by Theorem \ref{thm:centralsub2}. Suppose $\textbf{b} \in \mathbb{R}^{{\mathcal O}_{d+1}}$. Let $k = \frac{b_{[d+1]}}{d+1}$ and define a new vector/function $\textbf{b}' \in \mathbb{R}^{{\mathcal O}_{d+1}}$ by \[ \textbf{b}'_{\mathcal T} = \textbf{b}_{\mathcal T} - k \cdot \operatorname{card}({\mathcal T}), \quad \forall {\mathcal T} \in {\mathcal O}_{d+1},\] where $\operatorname{card}({\mathcal T}) = \left\langle \textbf{e}_{{\mathcal T}}, {\bf{1}} \right\rangle = \sum_{i} i |S_i|$ if ${\mathcal T} = (S_1,S_2 ,\cdots ,S_m)$. Let $P$ and $Q$ be the polytopes defined by the linear system \eqref{equ:linear3} with vectors $\textbf{b}$ and $\textbf{b}'$ respectively. Then we have the following facts: \begin{enumerate} \item $\textbf{b}'_{[d+1]} =0.$ \item $\textbf{b}'$ satisfies the local submodularity condition and the \!\includegraphics[scale=0.7]{ren.eps} condition if and only if $\textbf{b}$ satisfies these two conditions as well. \item $Q =\tilde{P} = P - k {\bf{1}}$ is the centralized version of $P.$ \end{enumerate} Facts (1) and (3) are straightforward to check, and fact (2) follows from the fact that all the inequalities we describe are balanced. (See Remark \ref{rem:balance}.) We see that the first conclusion of the theorem follows from these facts and the arguments in the first paragraph. \end{proof} \begin{example}\label{ex:unexampleII} Recall the polytope $P \subset \mathbb{R}^4$ considered in Example \ref{ex:unexample}. We already mentioned that $P$ is the cube whose vertices are $(1,1,1,3)$ and $(0,2,2,2)$ and their permutations. Furthermore, by considering edge directions, we concluded that $P$ is not a generalized permutohedron. Another way to see this is by looking at the normal cones of $P$ at each vertex.
For example, let $\sigma$ be the normal cone of $P$ at the vertex $(0,2,2,2).$ One can show that $\sigma$ is spanned by $3$ rays in $\textrm{Br}_3$: $\textbf{e}_{\{2,3\}}, \textbf{e}_{\{3,4\}}, \textbf{e}_{\{2,4\}}.$ However, another ray $\textbf{e}_{ \{2,3,4\}}$ of $\textrm{Br}_3$ lies in the interior of $\sigma.$ As a result, $\sigma$ cuts through $6$ maximal cones of $\textrm{Br}_3$, and thus is not a union of maximal cones in $\textrm{Br}_3.$ Figure \ref{fig:nonexample}/(A) depicts a slice of these $6$ cones where the shaded region corresponds to the normal cone $\sigma.$ Hence, the normal fan of $P$ does not refine $\textrm{Br}_3,$ and by Proposition \ref{prop:coarser}, $P$ is not a generalized permutohedron. \begin{figure}[h] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.6\linewidth]{nonexample1.eps} \caption{$\sigma$ in the Braid fan $\textrm{Br}_3$} \label{fig:sub1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.6\linewidth]{nonexample2.eps} \caption{$\sigma$ in the nested Braid fan $\textrm{Br}_3^2$} \label{fig:sub2} \end{subfigure} \caption{Comparison of a normal cone $\sigma$ in $\textrm{Br}_3$ and $\textrm{Br}_3^2$} \label{fig:nonexample} \end{figure} However, in $\textrm{Br}_3^2,$ each maximal cone of $\textrm{Br}_3$ is subdivided into $6$ cones. Figure \ref{fig:nonexample}/(B) shows how the maximal cones in Figure \ref{fig:nonexample}/(A) are subdivided, where the dark dots are rays in $\textrm{Br}_3^2.$ One sees that $\sigma$ is a union of maximal cones in $\textrm{Br}_3^2.$ Similarly, all the other normal cones of $P$ are unions of maximal cones in $\textrm{Br}_3^2.$ Hence, the normal fan of $P$ refines $\textrm{Br}_3^2$. Thus, $P$ is a generalized \emph{nested} permutohedron. \end{example} \section{Chiseling Constructions} \label{sec:chisel} Victor Reiner asked whether it is true that $\textrm{Br}_d^2$ is the barycentric subdivision of $\textrm{Br}_d.$ The main purpose of this section is to give an affirmative answer to his question. We start by introducing the concept of \emph{chiseling} off faces of a polytope, which will be used to construct the \emph{barycentric subdivision}. \begin{definition}\label{defn:chisel} Suppose $G$ is a face of a $d$-dimensional polytope $P\subset V.$ Let $F_1,\cdots, F_l$ be the facets containing $G$ with primitive outer normals $\textbf{a}_1,\cdots, \textbf{a}_l$ respectively; in other words, $\textbf{a}_1,\cdots, \textbf{a}_l$ are the spanning rays of the normal cone $\operatorname{ncone}(G,P)$. Define the \emph{chiseling direction of $P$ at $G$} to be \[ \textbf{a}_G :=\textbf{a}_1+\cdots+ \textbf{a}_l.\] Furthermore, let $b_G$ be the scalar such that $G = P \cap \{ \textbf{x} \in V \ : \ \langle \textbf{a}_G, \textbf{x} \rangle = b_G \}$; equivalently, \[ b_G := \max_{\textbf{x} \in P} \langle \textbf{a}_G, \textbf{x} \rangle.\] For any sufficiently small $\epsilon>0$ such that $\{\textbf{x}: \langle \textbf{a}_G,\textbf{x}\rangle < b_G-\epsilon\}$ contains all vertices of $P$ not in $G$, we define $P_\epsilon:=P\cap \{\textbf{x} \ : \ \langle \textbf{a}_G,\textbf{x}\rangle \leq b_G-\epsilon\}$ to be the polytope obtained by \emph{chiseling $G$ off $P$ (at distance $\epsilon$)}. We call the facet $P \cap \{\textbf{x}: \langle \textbf{a}_G,\textbf{x}\rangle = b_G-\epsilon\}$ of $P_\epsilon$ created by this process the \emph{facet obtained by chiseling $G$ off $P$}.
Let $G_1, \dots, G_k$ be faces of $P.$ We say $G_1, \dots, G_k$ can be \emph{simultaneously chiseled off $P$ at distance $\epsilon$} if for any $1 \le i < j \le k$ the facet obtained by chiseling $G_i$ off $P$ at distance $\epsilon$ has no intersection with the facet obtained by chiseling $G_j$ off $P$ at distance $\epsilon.$ \end{definition} See Figure \ref{fig:chisel} for a picture of chiseling off a vertex from a polygon. \begin{figure}[h] \centering \includegraphics{chisel.eps} \caption{Chiseling off a vertex from a polygon. The set $\{\textbf{x}: \langle \textbf{a}_G,\textbf{x}\rangle = b_G-\epsilon\}$ is given by the thick line, whereas the set $\{\textbf{x}: \langle \textbf{a}_G,\textbf{x}\rangle = b_G\}$ is given by the dashed line.} \label{fig:chisel} \end{figure} It is easy to see that $G_1, \dots, G_k$ can be simultaneously chiseled off $P$ at a sufficiently small distance $\epsilon>0$ if and only if $G_1,\dots,G_k$ are pairwise disjoint. \begin{remark} Chiseling a face $G$ off a polytope $P$ corresponds to performing a \emph{stellar subdivision} on $\Sigma(P)$ along $\operatorname{ncone}(G,P)$ (See \cite[Section III.2]{ewald}). \end{remark} Suppose $\Sigma$ is a projective fan in $W$ such that $0 \in \Sigma.$ The following algorithm gives one way to obtain the barycentric subdivision of $\Sigma$. \begin{algorithm}\label{alg:bary} \begin{enumerate}[(1)] \item[(0)] Let $P_0=P$ be a $d$-polytope whose normal fan is $\Sigma$. \item Let $\epsilon_1 > 0$ be a sufficiently small number such that we can simultaneously chisel all vertices off $P_0$ at distance $\epsilon_1$, and let $P_1$ be the polytope obtained from $P_0$ by applying these chiselings. \item Let $E_1, \dots, E_m$ be the edges of $P_1$ that come from $P_0$, that is, edges that are not created from the chiselings done in the steps above. Let $\epsilon_2>0$ be a sufficiently small number such that we can simultaneously chisel $E_1, \dots, E_m$ off $P_1$ at distance $\epsilon_2$, and let $P_2$ be the polytope obtained from $P_1$ by applying these chiselings. \item \dots \item[\vdots] \item[(d)] Let $F_1, \dots, F_l$ be the $(d-1)$-dimensional faces of $P_{d-1}$ that come from $P_0.$ Let $\epsilon_d >0$ be a sufficiently small number such that we can simultaneously chisel $F_1, \dots, F_l$ off $P_{d-1}$ at distance $\epsilon_d$, and let $P_d$ be the polytope obtained from $P_{d-1}$ by applying these chiselings. \end{enumerate} \end{algorithm} It follows from \cite[Definition 2.5, Section III.2]{ewald} that the normal fan of $P_d$ obtained by Algorithm \ref{alg:bary} is the \emph{barycentric subdivision} of $\Sigma.$ \begin{remark} For any $k > 0,$ the set of all $k$-dimensional faces of a polytope $P$ cannot be simultaneously chiseled, since they are not pairwise disjoint. However, they do become pairwise disjoint after $k$ steps of Algorithm \ref{alg:bary}. For instance, the set of all edges of $P_0$ is clearly not pairwise disjoint, but the resulting edges after chiseling all vertices of $P_0$ are pairwise disjoint and can be simultaneously chiseled. \end{remark} \begin{remark} The barycentric subdivision of the normal fan of a polytope should not be confused with the barycentric subdivision of the polytope itself. For any polytope $P$, its barycentric subdivision is a triangulation of $P$, whereas the barycentric subdivision of a fan is again a fan. Furthermore, Algorithm \ref{alg:bary} shows that barycentric subdivision of fans preserves projectivity. \end{remark} Lemma \ref{lem:Pd} below, which follows immediately from the construction of $P_d$, will be useful in our discussion.
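Although it plays no role in the proofs, the combinatorics behind Algorithm \ref{alg:bary} is easy to sanity-check by brute force: the maximal cones of the barycentric subdivision correspond to complete flags of nonempty proper faces of $P$, one new facet having been created for each such face. For $P=\Delta_d$, whose nonempty proper faces are indexed by the nonempty proper subsets of $[d+1]$, the following short Python script (our own illustration, not part of the arguments of this paper) counts such flags and confirms that there are $(d+1)!$ of them, matching the number of maximal cones of $\textrm{Br}_d$ in Theorem \ref{thm:bary1} below.

\begin{verbatim}
# Sanity check (ours): maximal cones of the barycentric subdivision
# of the normal fan of the d-simplex correspond to complete flags
# S_1 < S_2 < ... < S_d of nonempty proper subsets of [d+1],
# and there are (d+1)! of them.
import math

def count_flags(n):
    # Count maximal chains of nonempty proper subsets of {0,...,n-1},
    # built up by adding one element at a time.
    def extend(top):
        if len(top) == n - 1:   # the next step would be the full set
            return 1
        return sum(extend(top | {x}) for x in range(n) if x not in top)
    return sum(extend({x}) for x in range(n))

for n in range(2, 7):           # n = d + 1
    assert count_flags(n) == math.factorial(n)
\end{verbatim}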
\begin{lemma}\label{lem:Pd} The resulting polytope $P_d$ (of Algorithm \ref{alg:bary}) is a full-dimensional polytope in the same $d$-dimensional affine space as $P_0,$ and is defined by the following linear system: \[ \langle \textbf{a}_G, \textbf{x} \rangle \le b_G - \epsilon_{\dim(G)+1}, \quad \text{for all nonempty proper faces $G$ of $P$},\] where $\textbf{a}_G$ and $b_G$ are as defined in Definition \ref{defn:chisel}, and the $\epsilon_i$'s are the chiseling distances given in Algorithm \ref{alg:bary}. \end{lemma} We are now ready to state the main result, Theorem \ref{thm:bary2}, of this section, preceded by a classical result, Theorem \ref{thm:bary1}, that is related to Reiner's question. Recall that the \emph{standard $d$-simplex} is $\Delta_d := \textrm{conv}\{ \textbf{e}_1, \dots, \textbf{e}_{d+1}\}$. Theorem \ref{thm:bary1} follows from Remark 6.6 in \cite{PosReiWil}. \begin{theorem} \label{thm:bary1} The Braid fan $\textrm{Br}_d$ is the barycentric subdivision of the normal fan $\Sigma(\Delta_d)$ of $\Delta_d$. \end{theorem} \begin{theorem}\label{thm:bary2} The nested Braid fan $\textrm{Br}_d^2$ is the barycentric subdivision of the Braid fan $\textrm{Br}_d,$ and thus is the second barycentric subdivision of $\Sigma(\Delta_d).$ \end{theorem} As a warmup, we give a proof for Theorem \ref{thm:bary1} in which we use well-known facts about faces of the standard simplex. \begin{proof}[Proof of Theorem \ref{thm:bary1}] We apply Algorithm \ref{alg:bary} to $P_0 = \Delta_d$, making sure that the chiseling distances $\epsilon_i$ satisfy: \begin{equation} \epsilon_1 < \frac{1}{2}, \quad \text{and for $2 \le i \le d,$} \quad \epsilon_i < \frac{\epsilon_{i-1}}{2}. \label{equ:tiny} \end{equation} It is sufficient to show that the resulting polytope $P_d$ is a usual permutohedron. In order to do this, we will apply Lemma \ref{lem:Pd} to find an inequality description for $P_d.$ First, note that $\Delta_d$ is a full-dimensional polytope in the affine space \[ \langle \textbf{e}_{[d+1]}, \textbf{x} \rangle = \langle {\bf{1}}, \textbf{x} \rangle \ = \ 1.\] Next, faces of $\Delta_d$ are naturally indexed by subsets of $[d+1]$. For each $S \subseteq [d+1]$, the corresponding face is $\textrm{conv}\{\textbf{e}_i: i\in S\}$, which we denote by $G_S.$ Recall that for any $\emptyset \neq S \subsetneq [d+1],$ the normal cone of $\Delta_d$ at $G_S$ is generated by $\{-\textbf{e}_j:j\notin S\}$. Hence, the chiseling direction of $\Delta_d$ at $G_S$ is $\sum_{j\notin S} -\textbf{e}_j$. As the normal fan is defined in $W_d$, where $\textbf{e}_{[d+1]} = \sum_{i\in[d+1]} \textbf{e}_i=0$, this chiseling direction can be written as $\textbf{e}_S=\sum_{i\in S} \textbf{e}_i$. Finally, let \[ b_S := \max_{\textbf{x} \in \Delta_d} \langle \textbf{e}_S, \textbf{x} \rangle = 1.\] Therefore, by Lemma \ref{lem:Pd}, the polytope $P_d$ is given by the following linear system: \begin{equation} \label{eq:chiselex} \langle \textbf{e}_{[d+1]}, \textbf{x} \rangle = \langle {\bf{1}}, \textbf{x} \rangle \ = \ 1, \quad \text{and} \quad \langle \textbf{e}_S, \textbf{x} \rangle \ \le \ b_S-\epsilon_{\dim(G_S)+1}=1-\epsilon_{|S|}, \quad \forall \emptyset \neq S \subsetneq [d+1].
\end{equation} It follows from \eqref{equ:tiny} that \begin{equation} \epsilon_d < \epsilon_{d-1}-\epsilon_d < \epsilon_{d-2}-\epsilon_{d-1} < \cdots < \epsilon_1- \epsilon_2 < 1 - \epsilon_1, \label{equ:tiny1} \end{equation} using which one can show that the right hand side of \eqref{eq:chiselex} is a submodular function, so that Theorem \ref{thm:submodular} applies. More directly, one can show that $P_d$ is the usual permutohedron $\operatorname{Perm}( {\bm \alpha})$ with \[ {\bm \alpha} = \Big(\epsilon_d, \epsilon_{d-1}-\epsilon_d, \epsilon_{d-2}-\epsilon_{d-1}, \dots, \epsilon_1- \epsilon_2, 1 - \epsilon_1\Big).\] \end{proof} We are going to prove Theorem \ref{thm:bary2} in a parallel fashion. Before that, we give the following preliminary lemma. \begin{lemma}\label{lem:bij} There exists a one-to-one correspondence between $(d-k)$-faces of $\Pi_d$ and ordered set partitions with $k+1$ parts in $\overline{{\mathcal O}_{d+1}}$ such that if we let $G_{\mathcal T}$ be the face corresponding to the ordered set partition ${\mathcal T},$ then the chiseling direction of $\Pi_d$ at $G_{\mathcal T}$ is $\textbf{e}_{\mathcal T}.$ \end{lemma} \begin{proof} It follows from Propositions \ref{prop:fanofusual} and \ref{prop:charbr} that each $(d-k)$-dimensional face $G$ of $\Pi_d$ corresponds to a $k$-chain in $\overline{{\mathcal B}_{d+1}}:$ \begin{equation}\label{equ:kchain} \emptyset \subsetneq S_1 \subsetneq S_2 \subsetneq \cdots \subsetneq S_k \subsetneq [d+1] \end{equation} such that the normal cone of $G$ is spanned by $\textbf{e}_{S_1}, \textbf{e}_{S_2}, \cdots, \textbf{e}_{S_k}.$ For each $k$-chain in the form of \eqref{equ:kchain}, we associate with it the ordered set partition ${\mathcal T} = T_1 | T_2 | \cdots | T_k | T_{k+1}$, where \[ T_1 = [d+1] \setminus S_k, \quad T_2 = S_k \setminus S_{k-1}, \quad \dots, \quad T_{k} = S_2\setminus S_1, \quad T_{k+1} = S_1.\] One sees that this establishes a bijection between $k$-chains in $\overline{{\mathcal B}_{d+1}}$ and ordered set partitions in $\overline{{\mathcal O}_{d+1}},$ and hence induces a bijection between nonempty proper faces of $\Pi_d$ and ordered set partitions in $\overline{{\mathcal O}_{d+1}}$. Furthermore, suppose $G$ is in bijection with the ordered set partition ${\mathcal T}$ through the $k$-chain \eqref{equ:kchain}. Then the chiseling direction of $\Pi_d$ at $G$ is \[ \sum_{i=1}^k \textbf{e}_{S_i} = \sum_{i=1}^k \sum_{j=k+2-i}^{k+1} \textbf{e}_{T_j} = \sum_{j=2}^{k+1} (j-1) \textbf{e}_{T_j}.\] As the normal fan is defined in $W_d$, where $\textbf{e}_{[d+1]}=\sum_{i\in[d+1]} \textbf{e}_i=0$, the above chiseling direction can be written as \[ \sum_{j=2}^{k+1} (j-1) \textbf{e}_{T_j} + \textbf{e}_{[d+1]} = \sum_{j=1}^{k+1} j \textbf{e}_{T_j} = \textbf{e}_{{\mathcal T}}.\] \end{proof} \begin{proof}[Proof of Theorem \ref{thm:bary2}] We apply Algorithm \ref{alg:bary} to $P_0 = \Pi_d$, making sure that the chiseling distances $\epsilon_i$ satisfy \begin{equation} \epsilon_1 < \frac{1}{4}, \quad \text{and for $2 \le i \le d,$} \quad \epsilon_i < \frac{\epsilon_{i-1}}{2}. \label{equ:tiny2} \end{equation} As in the proof of Theorem \ref{thm:bary1}, we will show that $P_d$ is a usual nested permutohedron.
First, $\Pi_d$ is a full-dimensional polytope in the affine space \[ \langle \textbf{e}_{[d+1]}, \textbf{x} \rangle = \langle {\bf{1}}, \textbf{x} \rangle \ = \ \sum_{j=1}^{d+1} j =: b_{[d+1]}.\] Next, for each ${\mathcal T} \in \overline{{\mathcal O}_{d+1}},$ let $G_{\mathcal T}$ be its corresponding face of $\Pi_d$ given by Lemma \ref{lem:bij}. Then the chiseling direction of $\Pi_d$ at $G_{\mathcal T}$ is $\textbf{e}_{\mathcal T}.$ By Lemma \ref{lem:Pd}, the polytope $P_d$ is defined by the linear system: \begin{equation} \langle \textbf{e}_{[d+1]}, \textbf{x} \rangle = \langle {\bf{1}}, \textbf{x} \rangle \ = \ b_{[d+1]}, \quad \text{and} \quad \langle \textbf{e}_{\mathcal T}, \textbf{x} \rangle \ \le \ b_{\mathcal T} - \epsilon_{d-k+1}, \quad \forall {\mathcal T} \in \overline{{\mathcal O}_{d+1}}, \end{equation} where $b_{{\mathcal T}} := \max_{\textbf{x} \in \Pi_d} \langle \textbf{e}_{{\mathcal T}}, \textbf{x} \rangle$ and $k+1$ denotes the number of parts of ${\mathcal T}$ (so that $\dim(G_{\mathcal T}) = d-k$). Suppose $ {\operatorname{Type}}({\mathcal T})=(t_0, t_1,t_2,\dots, t_k, t_{k+1})$ (see Definition \ref{defn:type} for the definition of structure type). Then we can compute that \[ b_{{\mathcal T}} = \sum_{i=1}^{k+1} i \sum_{j=t_{i-1}+1}^{t_i} j.\] Note that the above formula for $b_{{\mathcal T}}$ works not only for ${\mathcal T} \in \overline{{\mathcal O}_{d+1}}$ but also for ${\mathcal T} = [d+1].$ It follows from \eqref{equ:tiny2} that \[ 0 < \epsilon_d < \epsilon_{d-1}-\epsilon_d < \epsilon_{d-2}-\epsilon_{d-1} < \cdots < \epsilon_1- \epsilon_2 < \epsilon_1 < \frac{1}{4}. \] Hence, \[ {\bm \beta} := \Big( \epsilon_2-\epsilon_1, \epsilon_3 - \epsilon_2, \dots, \epsilon_d-\epsilon_{d-1}, -\epsilon_d\Big)\] is a strictly increasing sequence, where the absolute value of each entry is strictly smaller than $\frac{1}{4}.$ Therefore, letting $ {\bm \alpha}:=(1,2,\dots, d+1),$ one checks that $(M,N)=(1,1)$ is an appropriate choice for $( {\bm \alpha}, {\bm \beta}).$ Thus, it follows from Theorem \ref{thm:facetdes} that $P_d$ is the usual nested permutohedron $\operatorname{Perm}( {\bm \alpha}, {\bm \beta};1,1).$ \end{proof} \section{Questions}\label{sec:question} We finish with questions that might be of interest for future research. \begin{enumerate} \item Notice that in the case of the usual permutohedron $\operatorname{Perm}( {\bm \alpha}),$ we always have $v_\pi^ {\bm \alpha} \in C(\pi)$ for each $\pi$, which is a property that makes the notation natural. It is not always the case that for a usual nested permutohedron $\operatorname{Perm}( {\bm \alpha}, {\bm \beta}; M,N)$ we have $v_{\pi,\tau}^{( {\bm \alpha}, {\bm \beta})(M,N)}\in C(\pi,\tau)$ for each $(\pi,\tau)$. Does there exist any usual or generalized nested permutohedron with this property? \item Is there a way to realize the permuto-associahedron \cite[Lecture 9.3]{zie} as a deformation of the regular nested permutohedron? \item The nested Braid fan was defined by grouping together points with the same relative order of coordinates and their first differences. One could go beyond and consider second differences, but this is \textbf{not} a subsequent barycentric subdivision. Is this ``doubly'' nested Braid fan a projective fan? \item The barycentric subdivision of a fan is obtained from stellar subdivisions in a particular order. If the subdivisions are not done in the correct order, the resulting fan is different. Which sequences of stellar subdivisions of $\Sigma(\Delta_d)$ give coarsenings of $\textrm{Br}_d$?
\item As mentioned before, one of the motivations of this paper was to define and study a class of polytopes whose edges are parallel to directions of the form $\textbf{e}_i+\textbf{e}_j-\textbf{e}_k-\textbf{e}_\ell$. The most direct way would be to first construct a fan from the hyperplane arrangement given by $x_i+x_j=x_k+x_\ell$ for all tuples $(i,j,k,\ell)$, including those with repeated elements, and then define a family of polytopes whose normal fans coarsen this new fan. One issue that arises is that this hyperplane arrangement is not simplicial. How many regions does it have? Do they have a combinatorial interpretation? \end{enumerate}
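In the spirit of the last question, small cases can be probed experimentally. The following Python script (a rough Monte Carlo sketch of ours, not part of the results above) samples random points and records the distinct sign vectors they realize with respect to the hyperplanes $x_i+x_j=x_k+x_\ell$; the number of distinct sign vectors is a lower bound for the number of regions, and it appears to stabilize quickly for small $n$. For instance, for $n=3$ the arrangement reduces to six distinct central lines in the plane $x_1+x_2+x_3=0$, giving $12$ regions.

\begin{verbatim}
# Monte Carlo lower bound (ours) on the number of regions of the
# arrangement x_i + x_j = x_k + x_l in R^n, repeated indices allowed.
import itertools, random

def estimate_regions(n, samples=200000, seed=1):
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    # Distinct sorted pairs give the defining forms; coincident
    # hyperplanes (e.g. 2x_0 = 2x_1 and x_0 + x_2 = x_1 + x_2)
    # only duplicate sign coordinates and do not affect the count.
    hyps = list(itertools.combinations(pairs, 2))
    rng = random.Random(seed)
    signs = set()
    for _ in range(samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        signs.add(tuple((x[i] + x[j] - x[k] - x[l]) > 0
                        for (i, j), (k, l) in hyps))
    return len(signs)   # lower bound on the number of regions

for n in range(2, 5):
    print(n, estimate_regions(n))
\end{verbatim}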
\section*{Supplemental Material} We consider a multiplex network with $M$ layers and adjacency matrix ${\bf a}^{[\alpha]}$ in each layer $\alpha=1,2,\ldots, M$. Initially we assume that we know the set of nodes that are initially damaged. The configuration of the initial damage is indicated by the variables $\{x_i\}$, where $x_i=0$ ($x_i=1$) if node $i$ is (is not) damaged. The message passing algorithm for a given initial damage configuration determines whether node $i$ belongs ($\sigma_i=1$) or does not belong ($\sigma_i=0$) to the mutually connected giant component (MCGC), as long as the multiplex network is locally tree-like. The algorithm requires the determination of the set of messages \begin{eqnarray} \vec{n}_{i\to j}=\left(n_{i \to j}^{[1]},n_{i\to j}^{[2]},\ldots, n^{[\alpha]}_{i\to j}, \ldots,n_{i\to j}^{[M]}\right) \end{eqnarray} going from node $i$ to node $j$, for every pair of nodes connected in at least one layer. Each message $n_{i\to j}^{[\alpha]}$ indicates whether ($n_{i\to j}^{[\alpha]}=1$) or not ($n_{i\to j}^{[\alpha]}=0$) node $i$ connects node $j$ to the MCGC through links in layer $\alpha$. These messages are determined by the recursive message passing equations \begin{eqnarray} n_{i\to j}^{[\alpha]}=\delta(v_{i\to j},M)a^{[\alpha]}_{ij}x_{i}\left[1-\prod_{\ell\in N(i)\setminus j}\left(1-n^{[\alpha]}_{\ell\to i}\right)\right]. \label{S1} \end{eqnarray} Here $v_{i\to j}$ indicates in how many layers node $i$ is connected to the MCGC assuming that node $j$ also belongs to the MCGC, and it is given by \begin{eqnarray} v_{i\to j}&=&\sum_{\alpha=1}^M\left\{\left[1-\prod_{\ell\in N(i)\setminus j}\left(1-n^{[\alpha]}_{\ell\to i}\right)\right]+a^{[\alpha]}_{ij}\prod_{\ell\in N(i)\setminus j}\left(1-n^{[\alpha]}_{\ell\to i}\right)\right\}. \label{S2} \end{eqnarray} Finally the value of $\sigma_{i}$ for any generic node $i$ can be expressed in terms of the incoming messages as \begin{eqnarray} \hspace{-10mm}\sigma_{i}&=&x_{i}\prod_{\alpha}\left[1-\prod_{\ell \in N(i)}\left(1-n^{[\alpha]}_{\ell\to i}\right)\right]. \label{S3} \end{eqnarray} This message passing algorithm can be applied only when the full configuration $\{x_i\}$ of the initial damage is known. Here our goal is to derive from this algorithm a distinct message passing algorithm able to predict the probability $r_i=\Avg{\sigma_i}$ that a node is in the MCGC for a random configuration of the initial damage. Specifically we will assume that the initial damage configuration $\{x_i\}$ has probability \begin{eqnarray} {\mathcal P}(\{x_i\})=\prod_{i=1}^N p^{x_i}(1-p)^{1-x_i}, \label{Sps} \end{eqnarray} i.e.\ nodes are independently damaged with probability $f=1-p$. In order to predict $r_i$, it is useful to use an alternative formulation of the message passing algorithm for a given configuration of the initial disorder. This alternative formulation will allow us to easily perform the average over the initial damage configuration.
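For concreteness, Eqs.~(\ref{S1})--(\ref{S3}) can be solved by fixed-point iteration, initializing all messages to one (along the iteration the messages can only decrease). The following minimal Python sketch (ours, written only to illustrate the equations; the data layout and names are our own choices, and the layers are assumed undirected) computes $\sigma_i$ for a given damage configuration $\{x_i\}$.

\begin{verbatim}
# Minimal illustration (ours) of Eqs. (S1)-(S3): fixed-point iteration
# of the messages n_{i->j}^{[alpha]} for a given damage configuration.
import math

def mcgc(adj, x, iters=100):
    # adj[a][i] = set of neighbors of node i in layer a (symmetric);
    # x[i] = 0 if node i is initially damaged, 1 otherwise.
    M = len(adj)
    nodes = set().union(*[set(a) for a in adj])
    edges = {(i, j) for a in adj for i in a for j in a[i]}
    n = {(i, j, a): 1 for (i, j) in edges for a in range(M)}
    for _ in range(iters):
        new = {}
        for (i, j) in edges:
            # u[a] = 1 - prod_{l in N_a(i) minus j} (1 - n_{l->i}^{[a]})
            u = [1 - math.prod(1 - n[(l, i, a)]
                               for l in adj[a].get(i, set()) - {j})
                 for a in range(M)]
            a_ij = [int(j in adj[a].get(i, set())) for a in range(M)]
            v = sum(max(u[a], a_ij[a]) for a in range(M))  # Eq. (S2)
            for a in range(M):                             # Eq. (S1)
                new[(i, j, a)] = x[i] * a_ij[a] * u[a] * (v == M)
        n = new
    sigma = {}                                             # Eq. (S3)
    for i in nodes:
        sigma[i] = x[i] * math.prod(
            1 - math.prod(1 - n[(l, i, a)]
                          for l in adj[a].get(i, set()))
            for a in range(M))
    return sigma
\end{verbatim}

On a locally tree-like multiplex this iteration converges to the solution of Eqs.~(\ref{S1})--(\ref{S3}), and $\sum_i \sigma_i$ gives the size of the MCGC for the given damage configuration.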
To this end, we introduce the variable $\sigma_{i\to j}^{\vec{m},\vec{n}}$, which indicates whether ($\sigma_{i\to j}^{\vec{m},\vec{n}}=1$) or not ($\sigma_{i\to j}^{\vec{m},\vec{n}}=0$) node $i$ sends to node $j$ the messages $\vec{n}_{i\to j}$, given that node $i$ and node $j$ are linked by a multilink \begin{eqnarray} \vec{m}=\vec{m}_{ij}=\left(a_{ij}^{[1]},a_{ij}^{[2]},\ldots, a_{ij}^{[\alpha]}, \ldots, a_{ij}^{[M]}\right). \end{eqnarray} According to Eqs.(\ref{S1})-(\ref{S2}), in order to send a message $\vec{n}\neq \vec{0}$, node $i$ should be connected to the MCGC by nodes different from node $j$ in all the layers where $n^{[\alpha]}=1$ and in all the layers where $m^{[\alpha]}=0$. In fact, the first requirement is necessary for having $n^{[\alpha]}=1$, while the second is necessary for having $v_{i\to j}=M$, because $m^{[\alpha]}=a_{ij}^{[\alpha]}=0$ in those layers. Additionally, for every layer $\alpha$ where $m^{[\alpha]}=a_{ij}^{[\alpha]}=1$ but $n^{[\alpha]}=0$, node $i$ must not receive any positive messages from neighbor nodes different from node $j$. Therefore we have for $\vec{n}\neq \vec{0}$, \begin{eqnarray} \hspace{-10mm}\sigma^{\vec{m},\vec{n}}_{i\to j}&=&x_i\prod_{\alpha=1}^M\left\{\left(m^{[\alpha]}\right)^{n^{[\alpha]}}\left[1-\prod_{\ell\in N(i)\setminus j}\left(1-n^{[\alpha]}_{\ell\to i}\right)\right]^{n^{[\alpha]}m^{[\alpha]}+\left(1-m^{[\alpha]}\right)}\left[\prod_{\ell\in N(i)\setminus j}\left(1-n^{[\alpha]}_{\ell\to i}\right)\right]^{\left(1-n^{[\alpha]}\right)m^{[\alpha]}}\right\}, \label{SF1} \end{eqnarray} while for $\vec{n}=\vec{0}$ we have \begin{eqnarray} \sigma^{\vec{m},\vec{0}}_{i\to j}=1-\sum_{\vec{n}\neq \vec{0}}\sigma^{\vec{m},\vec{n}}_{i\to j}. \end{eqnarray} Note that, of the messages $\sigma^{\vec{m},\vec{n}}_{i\to j}$ with different values of $\vec{n}$, only one has value one and all the others are zero. We call this message $\vec{n}_{i\to j}$; in other words, \begin{eqnarray} \vec{n}_{i\to j}=\mbox{argmax}_{\vec{n}}\sigma^{\vec{m},\vec{n}}_{i\to j}. \end{eqnarray} This alternative formulation of the message passing equations is suitable for easily performing an average that takes into account the correlations existing between the different messages $n_{i\to j}^{[\alpha]}$ between node $i$ and node $j$. In order to perform the average over the probability ${\mathcal P}(\{x_i\})$ given by Eq.~$(\ref{Sps})$, let us use the following identity, valid for $p^{[\alpha]}$ taking values $p^{[\alpha]}=0,1$: \begin{eqnarray} &&\prod_{\alpha=1}^M (1-z_{\alpha})^{p^{[\alpha]}}=\prod_{\alpha|p^{[\alpha]}>0}(1-z_{\alpha}) = \sum_{\vec{r}|r^{[\alpha]}=0 \ \mbox{{\scriptsize if}}\ p^{[\alpha]}=0}(-1)^{\sum_{\alpha=1}^M r^{[\alpha]}} \prod_{\alpha=1}^M\left(z_{\alpha}\right)^{r^{[\alpha]}}, \label{r} \end{eqnarray} where the sum in the last term is over all the vectors \begin{eqnarray} \vec{r}=\left(r^{[1]},r^{[2]},\ldots, r^{[\alpha]},\ldots, r^{[M]}\right) \end{eqnarray} whose elements satisfy $r^{[\alpha]}\in\{0,1\}$ if $p^{[\alpha]}=1$ and $r^{[\alpha]}=0$ if $p^{[\alpha]}=0$. Using this identity in Eq.~$(\ref{SF1})$ we obtain \begin{eqnarray} \hspace*{-13mm}\sigma^{\vec{m},\vec{n}}_{i\to j}= x_i\sum_{\vec{r}|r^{[\alpha]}=0 \ \mbox{if}\ (1-n^{[\alpha]})m^{[\alpha]}=1}\left[\prod_{\alpha=1}^M \left(m^{[\alpha]}\right)^{n^{[\alpha]}}\right](-1)^{\sum_{\alpha}r^{[\alpha]}}\prod_{\ell\in N(i)\setminus j}\prod_{\alpha=1}^M\left(1-n_{\ell\to i}^{[\alpha]}\right)^{r^{[\alpha]}+m^{[\alpha]}\left(1-n^{[\alpha]}\right)}.
\end{eqnarray} Since, among all the messages $\sigma_{i\to j}^{\vec{m},\vec{n}}$ sent from node $i$ to node $j$, only one message is equal to one, we have \begin{eqnarray} \hspace*{-13mm}\sigma^{\vec{m},\vec{n}}_{i\to j}= x_i\sum_{\vec{r}|r^{[\alpha]}=0 \ \mbox{if}\ (1-n^{[\alpha]})m^{[\alpha]}=1}\left[\prod_{\alpha=1}^M \left(m^{[\alpha]}\right)^{n^{[\alpha]}}\right](-1)^{\sum_{\alpha}r^{[\alpha]}}\prod_{\ell\in N(i)\setminus j}\left(1-\sum_{\vec{n}'|\sum_{\alpha}\left(n^{\prime}\right)^{{[\alpha]}}[r^{[\alpha]}+(1-n^{[\alpha]})m^{[\alpha]}]>0}\sigma_{\ell\to i}^{\vec{m}_{\ell i}\vec{n}'}\right). \end{eqnarray} By averaging these messages over the distribution ${\mathcal P}(\{x_i\})$ given by Eq.~(\ref{Sps}), we can formulate a different message passing algorithm able to predict the probability $r_i$ that a random node belongs to the MCGC for a random realization of the initial disorder. In this case the generic message $s^{\vec{m}_{ij},\vec{n}}_{i\to j}$ indicates the probability that node $i$ connects node $j$ to the MCGC in the layers where ${n}^{[\alpha]}=1$. These messages are given by $s^{\vec{m}_{ij},\vec{n}}_{i\to j}=\Avg{\sigma^{\vec{m},\vec{n}}_{i\to j}}$, where the average is over the random realizations of the initial disorder. Therefore they satisfy the following recursive equations \begin{eqnarray} \hspace*{-13mm}s^{\vec{m},\vec{n}}_{i\to j}= p\sum_{\vec{r}|r^{[\alpha]}=0 \ \mbox{if}\ (1-n^{[\alpha]})m^{[\alpha]}=1}\left[\prod_{\alpha=1}^M \left(m^{[\alpha]}\right)^{n^{[\alpha]}}\right](-1)^{\sum_{\alpha}r^{[\alpha]}}\prod_{\ell\in N(i)\setminus j}\left(1-\sum_{\vec{n}'|\sum_{\alpha}\left(n^{\prime}\right)^{{[\alpha]}}[r^{[\alpha]}+(1-n^{[\alpha]})m^{[\alpha]}]>0}s_{\ell\to i}^{\vec{m}_{\ell i}\vec{n}'}\right), \end{eqnarray} as long as the multiplex network is locally tree-like. Similarly, the probability $r_i$ that node $i$ is in the MCGC is the average $r_i=\Avg{\sigma_i}$, i.e. \begin{eqnarray} r_i=p\sum_{\vec{r}}(-1)^{\sum_{\alpha}r^{[\alpha]}}\left[\prod_{\ell\in N(i)}\left(1-\sum_{\vec{n}'|\sum_{\alpha}\left(n^{\prime}\right)^{[\alpha]}r^{[\alpha]}>0}s_{\ell\to i}^{\vec{m}_{\ell i}\vec{n}'}\right)\right], \end{eqnarray} as long as the multiplex network satisfies the locally tree-like approximation. \begin{figure*}[!htb] \includegraphics[width=0.9\textwidth]{figure_ce.pdf} \caption{Percolation diagrams for the {\it Caenorhabditis Elegans} duplex networks. The descriptions of the various panels are identical to those of Fig.~\ref{figure} of the main text. The order of appearance of the duplexes is identical to that of Table~\ref{table} of the main text. } \label{figure_ce} \end{figure*} \begin{figure*}[!htb] \includegraphics[width=0.9\textwidth]{figure_dro.pdf} \caption{Percolation diagrams for the {\it Drosophila Melanogaster} duplex networks. The descriptions of the various panels are identical to those of Fig.~\ref{figure} of the main text. The order of appearance of the duplexes is identical to that of Table~\ref{table} of the main text. } \label{figure_dro} \end{figure*} \begin{figure*}[!htb] \includegraphics[width=0.9\textwidth]{figure_homo.pdf} \caption{Percolation diagrams for the {\it Homo Sapiens} duplex networks. The descriptions of the various panels are identical to those of Fig.~\ref{figure} of the main text. The order of appearance of the duplexes is identical to that of Table~\ref{table} of the main text.
} \label{figure_homo} \end{figure*} \begin{figure*}[!htb] \includegraphics[width=0.9\textwidth]{figure_arxiv.pdf} \caption{Percolation diagrams for the {\it NetSci Co-authorship} duplex networks. The descriptions of the various panels are identical to those of Fig.~\ref{figure} of the main text. The order of appearance of the duplexes is identical to that of Table~\ref{table} of the main text. } \label{figure_arxiv} \end{figure*} \end{document}
\section{Introduction} Define $Y_d(n,p)$ to be the probability distribution on all $d$-dimensional simplicial complexes with $n$ vertices, with complete $(d-1)$-skeleton and with each $d$-dimensional face included independently with probability $p$. We use the notation $Y \sim Y_d(n,p)$ to mean that $Y$ is chosen according to the distribution $Y_d(n,p)$; note that the $1$-dimensional case $Y_1(n,p)$ is equivalent to the Erd\H{o}s--R\'enyi random graph $G \sim G(n,p)$. Results in this area are usually asymptotic, as $n \to \infty$ with $p=p(n)$. We say that an event occurs {\it with high probability} (abbreviated w.h.p.) if the probability approaches one as the number of vertices $n \to \infty$. Whenever we use big-$O$ or little-$o$ notation, it is also understood as $n \to \infty$. \medskip A function $f=f(n)$ is said to be a {\it threshold} for a property $\mathcal{P}$ if whenever $p / f \to \infty$, w.h.p.\ $G \in \mathcal P$, and whenever $p / f \to 0$, w.h.p.\ $G \notin \mathcal{P}$. In this case, one often writes that $f$ is {\it the} threshold, even though technically $f$ is only defined up to a scalar factor. It is a fundamental fact of random graph theory (see for example Section 1.5 of \cite{JLR}) that every monotone property has a threshold. However, not every monotone property has a sharp threshold. For example, $1/n$ is the threshold for the appearance of triangles in $G(n,p)$, but this threshold is not sharp. In contrast, the Erd\H{o}s--R\'enyi theorem asserts that $\log n / n$ is a sharp threshold for connectivity. Classifying which graph properties have sharp thresholds is a problem which has been extensively studied; see for example the paper of Friedgut with appendix by Bourgain \cite{FB99}. \medskip The first theorem concerning the topology of $Y_d(n,p)$ was in the influential paper of Linial and Meshulam \cite{LM}. Their results were extended by Meshulam and Wallach to prove the following far-reaching extension of the Erd\H{o}s--R\'enyi theorem \cite{MW09}, where they described sharp vanishing thresholds for homology with field coefficients. \begin{LinMeshWall} Suppose that $d \ge 2$ is fixed and that $Y \sim Y_d(n,p)$. Let $\omega$ be any function such that $\omega \to \infty$ as $n \to \infty$. \begin{enumerate} \item If $$p \le \frac{d \log{n} - \omega }{ n}$$ then w.h.p.\ $H_{d-1}(Y; \mathbb Z / q\mathbb Z) \neq 0$, and \item if $$p \ge \frac{d \log{n}+\omega }{ n}$$ then w.h.p.\ $H_{d-1}(Y; \mathbb Z / q\mathbb Z) = 0$. \label{difficult} \end{enumerate} \end{LinMeshWall} The $d=1$ case is equivalent to the Erd\H{o}s--R\'enyi theorem. The Linial--Meshulam theorem is the case $d=2$, $q=2$, and the Meshulam--Wallach theorem is the general case $d \ge 2$ arbitrary and $q$ any fixed prime. In the closing remarks of \cite{LM}, Linial and Meshulam asked ``Where is the threshold for the vanishing of $H_1(Y,\mathbb Z)$?'' By the universal coefficient theorem, $H_{d-1}(Y;\mathbb Z/q\mathbb Z)=0$ for every prime $q$ implies that $H_{d-1}(Y; \mathbb Z)=0$, so one may be tempted to conclude that the Meshulam--Wallach theorem already answers the question of the threshold for $\mathbb Z$-coefficients. This is not the case, however, since we are concerned with not just a single simplicial complex, but with a sequence of complexes as $n \to \infty$, and there might very well be torsion growing with $n$. The Meshulam--Wallach Theorem holds for $q$ fixed, and can be made to work for $q$ growing slowly enough compared with $n$.
But it does not seem possible to extend the cocycle-counting arguments from \cite{LM} and \cite{MW09} to cover the case when $q$ is growing much faster than polynomial in $n$. On the surface of things, this might actually be a big problem. A complex $X$ is called $\mathbb Q$-acyclic if $H_0(X,\mathbb Q) = \mathbb Q$ and $H_i(X,\mathbb Q) = 0$ for $i \geq 1.$ Kalai showed that for a uniform random $\mathbb Q$-acyclic $2$-dimensional complex $T$ with $n$ vertices and ${n-1 \choose 2}$ two-dimensional faces, the expected size of the torsion group $|H_1(T;\mathbb Z)|$ is of order at least $\exp(c n^2)$ for some constant $c > 0$ \cite{Kalai83}. On the other hand, the largest possible torsion for a $2$-complex on $n$ vertices is of order at most $\exp(C n^2)$ for some other constant $C > 0$, so Kalai's random $\mathbb Q$-acyclic complex provides a model of random simplicial complex which is essentially the worst-case scenario for torsion. We mention in passing that another approach to homology-vanishing theorems for random simplicial complexes is ``Garland's method'' \cite{Garland}, with various refinements due to \.Zuk \cite{Zuk, zuk2}, Ballman--\'Swi\k{a}tkowski \cite{BS99}, and others. These methods have been applied in the context of random simplicial complexes; see for example \cite{hkp12,kahle14}. However, it must be emphasized that these methods only work over a field of characteristic zero; they do not detect torsion in homology. A different kind of argument is needed to handle homology with $\mathbb Z$ coefficients. \medskip The fundamental group $\pi_1(Y)$ of the random $2$-complex $Y \sim Y_2(n,p)$ was studied earlier by Babson, Hoffman, and Kahle \cite{BHK11}, and the threshold face probability for simple connectivity was shown to be of order $1 / \sqrt{n}$. Until now, there seems to have been no upper bound on the vanishing threshold for integral homology for random $2$-complexes, other than this. Our main result is that the threshold for vanishing of integral homology agrees with the threshold for field coefficients, up to a constant factor. In particular we have the following. \begin{theorem} \label{thm:main1} Let $d \ge 2$ be fixed and $Y \sim Y_d(n,p)$. If $$p \ge \frac{80d \log{n}}{ n}$$ then $H_{d-1}(Y; \mathbb Z) = 0$ w.h.p. \end{theorem} \begin{remark} For the sake of simplicity, we make no attempt here to optimize the constant $80d$. We conjecture that the best possible constant is $d$; in other words we would guess that the Linial--Meshulam--Wallach theorem is still true with $\mathbb Z / q\mathbb Z$-coefficients replaced by $\mathbb Z$-coefficients. But to prove this, it seems that another idea will be required. \end{remark} \bigskip Our main tool in proving Theorem \ref{thm:main1} is the following. \begin{theorem} \label{thm:main2} Let $d \ge 2$ be fixed and let $q=q(n)$ be a sequence of primes. If $Y \sim Y_d(n,p)$ where $$p \ge \frac{40d \log{n}}{ n},$$ then $$\mathbb P(H_{d-1}(Y; \mathbb Z/q\mathbb Z) \neq 0) \leq \frac{1}{n^{d+1}}.$$ \end{theorem} \begin{remark} Theorem \ref{thm:main2} is similar to the main result in Meshulam--Wallach, but the statement and proof differ in fundamental ways. The main point is that the bound on the probability that $H_{d-1}(Y; \mathbb Z/q\mathbb Z) \neq 0$ holds uniformly over all primes $q$, even if $q$ is growing very quickly compared to the number of vertices $n$. \end{remark} \section{Proof} We first prove Theorem \ref{thm:main1}.
The proof relies on Theorem \ref{thm:main2} plus one additional fact --- a bound on the size of the torsion subgroup in the degree $(d-1)$ homology of a simplicial complex, which only depends on the number of vertices $n$. Let $A_{T}$ denote the torsion subgroup of an abelian group $A.$ \begin{lemma} \label{dino bath} Let $d \ge 2$ and suppose that $X$ is a $d$-dimensional simplicial complex on $n$ vertices. Then $|\left(H_{d-1}(X;\mathbb Z)\right)_{T}| = \exp\left ( O(n^d) \right) $. \end{lemma} \begin{proof}[Proof of Lemma \ref{dino bath}] We include a proof here for the sake of completeness, but such bounds on the order of torsion groups are known. See, for example, Proposition 3 in Soul\'{e} \cite{Soule99}, which he attributes in turn to Gabber. We assume without loss of generality that $H_{d}(X) =0$. Indeed, if there is a nontrivial cycle $Z$ in $H_{d}(X)$, then delete one face $\sigma$ from the support of $Z$. Then in the subcomplex $X - \sigma$, the rank of $H_{d}(X - \sigma)$ is one less than the rank of $H_{d}(X)$. So we have $$\dim [ H_{d-1} (X - \sigma, {\bf k}) ] = \dim [ H_{d-1} (X , {\bf k} )]$$ over every field ${\bf k}$, and then the isomorphism $H_{d-1} (X - \sigma, \mathbb Z) = H_{d-1} (X , \mathbb Z )$ follows by the universal coefficient theorem. We may further assume that the number of $d$-dimensional faces $f_d$ is bounded by $f_d \le {n \choose d}$, since if there were more faces than this, then we would have $f_d > f_{d-1}$ and there would have to be nontrivial homology in degree $d$, by dimensional considerations. Let $C_i$ denote the space of chains in degree $i$, i.e.\ all formal $\mathbb Z$-linear combinations of $i$-dimensional faces, and let $\delta_{i}: C_i \to C_{i-1}$ be the boundary map in simplicial homology. If $Z_i$ is the kernel of $\delta_{i}$ and $B_i$ is the image of $\delta_{i+1}$, then by definition $H_i (X; \mathbb Z) = Z_i / B_i$. Let $M_i$ be a matrix for the boundary map $\delta_i$, with respect to the preferred bases of faces in the simplicial complex. Then the order of the torsion subgroup $|(C_{i-1} / B_{i-1})_T|$ is bounded by the product of the lengths of the columns of $M_i$, as follows. We begin by writing $M_i$ in its Smith normal form, i.e.\ $M_i = P D Q$ with $P$ and $Q$ invertible matrices over $\mathbb Z$ and $D$ a rectangular matrix with entries only on its diagonal. Let $r$ be the rank of $D$ over $\mathbb Q;$ note this is also the $\mathbb Q$-rank of $M_i.$ By removing the all-$0$ rows and columns from $D$ (and some columns of $P$ and some rows of $Q$), we may write $M_i = P' D' Q'$ where $D'$ is an $r \times r$ diagonal matrix, and all of $P',D',\text{and } Q'$ have $\mathbb Q$-rank $r.$ By the definition of $D,$ we have $\det D' = |(C_{i-1} / B_{i-1})_T|.$ As $P'$ and $Q'$ both have $\mathbb Q$-rank $r,$ we can find a collection of $r$ rows from $P'$ that are linearly independent over $\mathbb Q$ and $r$ columns of $Q'$ that are linearly independent over $\mathbb Q.$ Write $\tilde P$ and $\tilde Q$ for the $r \times r$ submatrices of $P'$ and $Q'$ given by these rows and columns. As $\tilde P$ and $\tilde Q$ are of full $\mathbb Q$-rank, they are invertible over $\mathbb Q$ and have nonzero determinant. As they are additionally integer matrices, they each have determinant of absolute value at least $1.$ Thus, \[ \det(D') \leq | \det( \tilde P) \det(D') \det( \tilde Q)| = | \det( \tilde PD'\tilde Q)|.
\] On the other hand, $\tilde M = \tilde P D' \tilde Q$ is an $r \times r$ submatrix of $M_i.$ Thus, applying the Hadamard bound to $\tilde M$, we may bound $\det(\tilde M)$ by the product of the lengths of the columns of $\tilde M.$ As the columns of $M_i$ all have lengths at least $1,$ the product of the lengths of the columns of $\tilde M$ is at most the product of the lengths of the columns of $M_i,$ completing the proof. Since $Z_i / B_i$ is isomorphic to a subgroup of $C_i / B_i$, this also gives a bound on the torsion in homology. In particular, for any simplicial complex $X$ on $n$ vertices, we have that \begin{align*} |\left(H_{d-1}(X;\mathbb Z)\right)_{T}| & \le \sqrt{d+1}^{{ n \choose d }} \\ & = \exp\left ( O(n^d) \right). \end{align*} \end{proof} Now define $$Q(X)=\{q~\text{prime}:\ H_{d-1}(X;\mathbb Z/q\mathbb Z) \neq 0\}.$$ An immediate consequence of Lemma \ref{dino bath} is that if $H_{d-1}(X;\mathbb Q) = 0$, then $$|Q(X)| =O( n^d),$$ and this is the fact which we will use. \medskip \begin{pfofthm}{\ref{thm:main1}} Our strategy is as follows. Let $Y_1,Y_2 \sim Y_d(n,40d \log n/n)$ be two independent random $d$-complexes and let $Y \sim Y_d(n,80d \log n/n).$ \begin{enumerate} \item[\bf{Step 1}] \label{duwamish} First we note that we can couple $Y$, $Y_1$ and $Y_2$ such that \begin{equation} F_d(Y_1) \cup F_d(Y_2)\subset F_d(Y) . \label{cauliflower} \end{equation} By (\ref{cauliflower}), if $H_{d-1}(Y_1;\mathbb Z/q\mathbb Z)=0$ or $H_{d-1}(Y_2;\mathbb Z/q\mathbb Z)=0$ then $H_{d-1}(Y;\mathbb Z/q\mathbb Z)=0$. \item[\bf{Step 2}] By Lemma \ref{dino bath}, on the event $H_{d-1}(Y_1;\mathbb Q)=0$ the set $Q(Y_1)$ has cardinality $O(n^d)$. \item[\bf{Step 3}] Applying a union bound, the probability that either $H_{d-1}(Y_1;\mathbb Q) \neq 0$ or there exists $q \in Q(Y_1)$ such that $$H_{d-1}(Y_2;\mathbb Z/q\mathbb Z) \neq 0$$ is at most $O(n^d\cdot n^{-(d+1)})=O(1/n)=o(1)$. \item[\bf{Step 4}] Thus if \begin{enumerate} \item $H_{d-1}(Y_1;\mathbb Q) =0$, and \item $H_{d-1}(Y_2;\mathbb Z/q\mathbb Z) = 0$ for all $q \in Q(Y_1)$, \end{enumerate} then by the coupling in Step 1, we have that $H_{d-1}(Y;\mathbb Z/q\mathbb Z)=0$ for all primes $q$. By the universal coefficient theorem we have that $H_{d-1}(Y;\mathbb Z)=0$. Each of these two conditions happens with probability $1-o(1)$, which completes the proof. \end{enumerate} \end{pfofthm} Now we begin our proof of Theorem \ref{thm:main2}. Throughout this paper we are always working with $d$-dimensional simplicial complexes on vertex set $[n]$, with complete $(d-1)$-skeleton. Such a complex $Y$ is defined by $F_d(Y)$, its set of $d$-dimensional faces. We often associate the two in the following way. If $f\in {[n] \choose d+1}$ (i.e.\ $f$ is a $d$-dimensional simplex) and $Y$ is as above then we write $Y \cup f$ for the simplicial complex with $F_d(Y \cup f)=F_d(Y)\cup f.$ \bigskip Let $q$ be a prime and $Y$ be as above. Define $$\text{$q$-reducing set }(Y)=\left \{f :\ H_{d-1}(Y\cup f; \mathbb Z/q\mathbb Z) \neq H_{d-1}(Y; \mathbb Z/q\mathbb Z)\right \}.$$ In other words, $\text{$q$-reducing set }(Y)$ is precisely the set of $d$-dimensional faces which, when added to $Y$, drop the dimension of $H_{d-1} (Y; \mathbb Z/ q \mathbb Z)$ by one. \begin{lemma} \label{shabby} A $d$-dimensional simplex $f$ belongs to $\text{$q$-reducing set }(Y)$ if and only if the boundary of $f$ is not a $(\mathbb Z/q\mathbb Z)$-boundary in $Y$.
Thus if $Y \subset Y'$, where $Y$ and $Y'$ are $d$-dimensional complexes sharing the same $(d-1)$-skeleton, then $$\text{$q$-reducing set }(Y') \subset \text{$q$-reducing set }(Y).$$ \end{lemma} \begin{proof} If $\partial f$ is not a boundary in $Y$ then $H_{d-1}(Y;\mathbb Z/q\mathbb Z) \neq H_{d-1}(Y \cup f;\mathbb Z/q\mathbb Z)$. If $\partial f$ is a boundary in $Y$ then $H_{d-1}(Y;\mathbb Z/q\mathbb Z) = H_{d-1}(Y \cup f;\mathbb Z/q\mathbb Z)$. \end{proof} \begin{lemma} \label{djimi} $H_{d-1}(Y; \mathbb Z/q\mathbb Z) =0$ if and only if $\text{$q$-reducing set }(Y)=\emptyset.$ \end{lemma} \begin{proof} Clearly, the property $H_{d-1}(\,\cdot\,;\mathbb Z/q\mathbb Z)=0$ is monotone with respect to inclusion of $d$-faces, so $H_{d-1}(Y; \mathbb Z/q\mathbb Z) =0$ implies that $\text{$q$-reducing set }(Y)=\emptyset.$ But we also have that the $(d-1)$-skeleton of $Y$ is complete, so once all possible $d$-faces have been added, homology vanishes. Once again applying the monotonicity of Lemma \ref{shabby}, $\text{$q$-reducing set }(Y) = \emptyset$ also implies that $H_{d-1}(Y; \mathbb Z/ q \mathbb Z) = 0$. \end{proof} Instead of working directly with the Linial--Meshulam distribution $Y_d(n,p)$ where each face is included independently with probability $p$, it is convenient to work with the closely related distribution $Y_d(n,m)$, where the complex is chosen uniformly over all $${ {n \choose d+1} \choose m}$$ simplicial complexes on $n$ vertices with complete $(d-1)$-skeleton, and with exactly $m$ faces of dimension $d$. As with random graphs, if $m \approx p{n \choose d+1}$ then for many properties the two models are very similar. After doing our analysis with $Y_d(n,m)$, we convert our results back to the case of $Y_d(n,p)$. Let $$ \tilde m = \tilde m(n,q)= \min\left\{m':\ \mathbb E\big| \text{$q$-reducing set }(Y_d(n,m'))\big| \leq \frac{1}{2}\binom{n}{d+1} \right\}. $$ This next lemma points out an easy consequence of our definition of $\tilde m$. \begin{lemma} \label{lamar} For every $d$-face $f$, $$\mathbb P \left( f \in \text{$q$-reducing set } \left( Y_d(n,\tilde m) \right) \right)\leq 1/2.$$ \end{lemma} \begin{proof} This follows easily by symmetry. \end{proof} If $Z$ and $Z'$ are random $d$-complexes with vertex set $[n]$ and the complete $(d-1)$-skeleton then we say $Z$ {\it stochastically dominates} $Z'$ if there exists a coupling of the two random variables with $\mathbb P\bigg(F_d(Z') \subset F_d(Z)\bigg)=1$. \begin{lemma} \label{whitecaps} Let $m=\sum_{i=1}^k m_i$ with $m_i \in \mathbb N$. Also let $Y \sim Y_d(n,m)$ and $Y^i \sim Y_d(n,m_i)$ for all $i$. Then $Y$ stochastically dominates $\bigcup_{i=1}^{k}Y^i$ and $$\text{$q$-reducing set }(Y) \subset \text{$q$-reducing set }\left( \bigcup_{i=1}^{k}Y^i \right).$$ \end{lemma} \begin{proof} The first claim is a standard argument; see for example Section 1.1 of \cite{JLR}. The second follows from the first and the monotonicity of the $q$-reducing set (Lemma \ref{shabby}). \end{proof} \begin{lemma} \label{traore} For any $q$, any sufficiently large $n$, any $d$-face $f$, and any $k\geq 2(d+1) \log_2(n)$, if $Y \sim Y_d(n,k \tilde m)$ then $$\mathbb P\bigg(f \in \text{$q$-reducing set }(Y)\bigg)\leq \frac{1}{n^{2(d+1)}}.$$ \end{lemma} \begin{proof} Let $Y^1, \dots , Y^k$ be i.i.d.\ complexes with distribution $Y_d(n,\tilde m)$. Then by Lemma \ref{whitecaps} we can find a coupling so that a.s.\ $$\text{$q$-reducing set }\big(Y\big) \subset \text{$q$-reducing set }\left(\bigcup_{i=1}^{k}Y^i\right).
$$ Then by Lemmas \ref{shabby}, \ref{djimi} and \ref{lamar}, \begin{eqnarray*} \mathbb P\bigg(f \in \text{$q$-reducing set }(Y) \bigg) & \leq & \mathbb P\left( f \in \text{$q$-reducing set }\left( \bigcup_{1}^{k}Y^i \right) \right) \\ & \leq & \mathbb P\left(\bigcap_1^k \left\{f \in \text{$q$-reducing set }\left( Y^i\right)\right\} \right) \\ & \leq & \prod_1^k \mathbb P \left( f \in \text{$q$-reducing set }\left( Y^i\right)\right) \\ & \leq & \left( \frac12 \right)^k\\ & \leq & \frac{1}{n^{2(d+1)}}. \end{eqnarray*} \end{proof} Now the main task that remains is to estimate $\tilde m$. Before we do so, we give a heuristic indicating that $\tilde m \leq 2{n \choose d}$. We consider the process where we start with $Y_0$, the complex with the complete $(d-1)$-skeleton and no $d$-dimensional faces. Then we inductively generate $Y_{i+1}$ by taking $Y_i$ and adding one new $d$-dimensional face chosen uniformly at random. Note that when we are adding faces one at a time, the dimension $\dim H_{d-1}(Y_i, \mathbb Z / q \mathbb Z)$ is non-increasing. As $H_{d-1}(Y_0;\mathbb Z/q\mathbb Z)$ is generated by the $(d-1)$-cycles, its dimension is at most ${n \choose d}$. Heuristically this indicates that $\tilde m$ should be no larger than $2{n \choose d}$, because if we were to add $2{n \choose d}$ faces and half of them reduce the dimension of the homology, then the dimension has dropped ${n \choose d}$ times. This would make the homology trivial, and would leave no faces remaining in the $q$-reducing set. We now make this heuristic rigorous, albeit with a slightly worse constant. \begin{lemma} \label{froome} Let $Y$ be a $d$-complex and let $f_1, f_2, \dots$ be an ordering of $F_d(Y)$. Let $Y_i$ be the $d$-complex with $$F_d(Y_i)=\bigcup_{j=1}^{i}\{f_j\}.$$ Then there are at most ${n \choose d}$ indices $i$ such that $$f_i \in \text{$q$-reducing set }(Y_{i-1}). $$ \end{lemma} \begin{proof} By induction on $s$, if there exists a subsequence $0<i_1<i_2<\dots <i_{s}$ with $$f_{i_j}\in \text{$q$-reducing set }(Y_{i_j-1}) \quad \text{for all } 1 \le j \le s,$$ then $$|H_{d-1}(Y_{i_s};\mathbb Z/q\mathbb Z)| \leq q^{{n \choose d}-s}.$$ Thus the longest possible subsequence has length at most ${n \choose d}$. \end{proof} \begin{lemma} \label{spokane} For any $q$ and any $n>d$ we have $\tilde m \leq 4{n \choose d}$. \end{lemma} \begin{proof} Let $f_1,f_2, \dots, f_{{n \choose d+1}}$ be a uniformly random ordering of all the possible $d$-faces. Again we define the complexes $Y_i$ by $$F_d(Y_i)=\bigcup_{j=1}^{i}\{f_j\},$$ and we remark that each $Y_i$ is distributed as $Y_d(n,i).$ Define the random variables $$Z_i={\bf 1}_{\{f_i \in \text{$q$-reducing set }(Y_{i-1})\}},$$ and let $\{X_i\}$ be an i.i.d.\ sequence of Bernoulli(1/3) random variables. We can couple the two sequences so that $Z_i$ stochastically dominates $X_i$ up until the random time $m^*$ (before time $m^*$, at least a third of all possible faces are $q$-reducing, so $f_i$ lands in the $q$-reducing set with conditional probability at least $1/3$), where $$m^*=\min\left\{m':\ |\text{$q$-reducing set }(Y_{m'})| \leq \frac{1}{3}\binom{n}{d+1} \right\}.$$ Thus by Lemma \ref{froome} we have a.s.\ that $${n \choose d} \geq \sum_{i=1}^{m^*}Z_i \geq \sum_{i=1}^{m^*}X_i.$$ So either \begin{enumerate} \item $m^* \leq 4{n \choose d}$, or \item $\sum_{i=1}^{4{n \choose d}}X_i < {n \choose d}$. \label{brunch} \end{enumerate} The sum in case (\ref{brunch}) has expected value $\frac43{n \choose d}$, which is a constant factor larger than ${n \choose d}.$ Thus the probability of the last event is exponentially small in ${n \choose d},$ and so it is certainly less than $1/10$. Thus $\mathbb P(m^* > 4{n \choose d})<1/10$ as well.
Therefore \begin{align*} \mathbb E \big| \text{$q$-reducing set }\left(Y_{4{n \choose d}}\right)\big| &\leq \frac{1}{3}\binom{n}{d+1}\cdot \mathbb P\left(m^* \leq 4{n \choose d}\right)\\ &\hspace{0.5in} + \binom{n}{d+1}\mathbb P\left(m^*>4{n \choose d}\right)\\ &\leq \frac{1}{3}\binom{n}{d+1} +\frac{1}{10}\binom{n}{d+1}\\ &\leq \frac{1}{2}\binom{n}{d+1}. \end{align*} Thus $\tilde m \leq 4 {n \choose d}.$ \end{proof} \begin{lemma} \label{cotton} Let $Y \sim Y_d(n,m)$. For $$m \geq (12d+12)(\log n) {n \choose d}$$ and any prime $q$, $$\mathbb P\bigg(H_{d-1}(Y;\mathbb Z/q\mathbb Z)\neq 0\bigg)\leq\frac{1}{2n^{d+1}}. $$ \end{lemma} \begin{proof} First of all, $(12d+12)(\log n) > (8d+8)(\log_2 n)$, since $\log 2 > 2/3$. Then by Lemma \ref{spokane} \begin{align*} (8d+8)(\log_2 n){n \choose d} &=(2d+2)(\log_2 n)\left(4{n \choose d}\right) \\ & \geq (2d+2)(\log_2 n) \tilde m. \end{align*} Since $H_{d-1}(Y;\mathbb Z/q\mathbb Z)\neq 0$ implies that the $q$-reducing set of $Y$ is non-empty (Lemma \ref{djimi}), Lemma \ref{traore} and the union bound give \begin{align*} \mathbb P\bigg(H_{d-1}(Y;\mathbb Z/q\mathbb Z)\neq 0\bigg) & \leq {n \choose d+1}\frac{1}{n^{2(d+1)}}\\ & \leq \frac{1}{2n^{d+1}}. \end{align*} \end{proof} \begin{pfofthm}{\ref{thm:main2}} If $$p \ge \frac{40d \log n}{n},$$ then by applying Chernoff bounds, with probability at least $$1-\frac{1}{2n^{d+1}}$$ a random $d$-complex $Y \sim Y_d(n,p)$ has at least $(12 d + 12)(\log n){n \choose d}$ faces of dimension $d$. Then the theorem follows from Lemma \ref{cotton}. \end{pfofthm} \section*{Acknowledgements} The authors thank Nati Linial and Roy Meshulam for many helpful and encouraging conversations. \medskip C.H.\ gratefully acknowledges support from NSF grant DMS-1308645 and NSA grant H98230-13-1-0827. M.K.\ gratefully acknowledges support from the Alfred P.\ Sloan Foundation, from DARPA grant N66001-12-1-4226, and from NSF grant CCF-1017182. E.P.\ gratefully acknowledges support from NSF grant DMS-0847661. \bibliographystyle{plain}
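\medskip \noindent {\bf Remark.} For small $n$ the $q$-reducing set can be computed explicitly by linear algebra over $\mathbb F_q$: a $d$-face $f$ is $q$-reducing for $Y$ exactly when the column $\partial f$ does not lie in the column space of the boundary matrix of $Y$, i.e.\ when appending it increases the rank. The following Python sketch is purely illustrative (it is not part of the argument above, and is only practical for very small $n$); it carries this out by Gaussian elimination modulo a prime $q$.

\begin{verbatim}
# Illustrative computation of the q-reducing set of a d-complex Y with
# complete (d-1)-skeleton, via ranks of boundary matrices over F_q (q prime).
from itertools import combinations

def boundary_column(face, cell_index, q):
    # the boundary of one d-face as a sparse column with alternating signs
    col = {}
    for j in range(len(face)):
        cell = face[:j] + face[j + 1:]          # a (d-1)-cell
        col[cell_index[cell]] = (-1) ** j % q
    return col

def rank_mod_q(columns, nrows, q):
    # plain Gaussian elimination over F_q on a dense copy of the columns
    mat = [[col.get(r, 0) for col in columns] for r in range(nrows)]
    rank = 0
    for c in range(len(columns)):
        pivot = next((r for r in range(rank, nrows) if mat[r][c] % q), None)
        if pivot is None:
            continue
        mat[rank], mat[pivot] = mat[pivot], mat[rank]
        inv = pow(mat[rank][c], q - 2, q)       # inverse via Fermat, q prime
        mat[rank] = [(x * inv) % q for x in mat[rank]]
        for r in range(nrows):
            if r != rank and mat[r][c]:
                f = mat[r][c]
                mat[r] = [(x - f * y) % q for x, y in zip(mat[r], mat[rank])]
        rank += 1
    return rank

def q_reducing_set(n, d, faces, q):
    # faces: a set of sorted (d+1)-tuples of vertices in range(n)
    cells = list(combinations(range(n), d))     # the complete (d-1)-skeleton
    idx = {c: i for i, c in enumerate(cells)}
    cols = [boundary_column(f, idx, q) for f in faces]
    base = rank_mod_q(cols, len(cells), q)
    reducing = []
    for f in combinations(range(n), d + 1):
        if f in faces:
            continue                            # faces of Y never reduce
        new = cols + [boundary_column(f, idx, q)]
        if rank_mod_q(new, len(cells), q) > base:
            reducing.append(f)                  # boundary of f not yet a boundary
    return reducing

# With a single triangle on 4 vertices, every other triangle is 2-reducing:
print(q_reducing_set(4, 2, {(0, 1, 2)}, 2))
# -> [(0, 1, 3), (0, 2, 3), (1, 2, 3)]
\end{verbatim}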
\begin{titlepage} \begin{center} {\Large\bf New Massive Gravity Domain Walls } \vglue 1 true cm { U.Camara dS}$^{*}$\footnote {e-mail: [email protected]} and { G.M.Sotkov}$^{*}$\footnote {e-mail: [email protected], [email protected]}\\ \vspace{1 cm} ${}^*\;${\footnotesize Departamento de F\'\i sica - CCE\\ Universidade Federal de Espirito Santo\\ 29075-900, Vitoria - ES, Brazil}\\ \end{center} \begin{center} {\bf ABSTRACT}\\ \end{center} The properties of the asymptotically $AdS_3$ space-times representing flat domain wall (DW) solutions of the New Massive 3D Gravity with scalar matter are studied. Our analysis is based on $I^{st}$ order BPS-like equations involving an appropriate superpotential. The Brown-York boundary stress-tensor is used for the calculation of the DW tensions as well as of the $CFT_2$ central charges. The holographic renormalization group flows and the phase transitions in specific deformed $CFT_2$ dual to the 3D massive gravity model with quadratic superpotential are discussed.
\end{titlepage} \section{Introduction} The new massive gravity (NMG) represents an appropriate ``higher derivatives'' generalization of the 3D Einstein gravity action: \begin{eqnarray} S_{NMG}(g_{\mu\nu},\sigma;\kappa,\Lambda)&=&\frac{1}{\kappa^2}\int dx^3\sqrt{-g}\Big\{\epsilon R +\frac{1}{m^2}{\cal K}-\kappa^2\Big(\frac{1}{2}|\vec{\nabla}\sigma|^2+V(\sigma)\Big)\Big\}\nonumber\\ {\cal K}&=&R_{\mu\nu}R^{\mu\nu}-\frac{3}{8}R^2,\quad \kappa^2=16\pi G,\quad \epsilon=\pm1\label{eq1} \end{eqnarray} which unlike its 4D Einstein version is unitary ($i.e.$ ghost-free) under certain restrictions on the matter potential $V(\sigma)$, on the values of the cosmological constant $\Lambda=-\frac{\kappa^2}{2}V(\sigma^*)$ and of the new mass parameter $m^2$, for both choices of the sign of the $R$-term \cite{1}. It is 1-loop UV finite, but power-counting non-renormalizable\footnote{although there exist controversial claims concerning its super-renormalizability \cite{oda}.}, much as in the case of 4D Einstein gravity \cite{flat},\cite{des}. The NMG vacuum ($\sigma=const$) sector contains two propagating (massive) degrees of freedom (the ``graviton'' polarizations) and as a result it admits a variety of physically interesting classical solutions: gravitational waves, black holes, etc. \cite{2}. When matter is added, the only known exact solutions \cite{3} are certain asymptotically $dS_3$ geometries that describe Bounce-like evolutions of the 3D Universe. The \emph{problem} addressed in the present paper concerns the construction of a family of \emph{flat static domain wall} (DW) solutions, $i.e.$ $\sigma=\sigma(z)$ and \begin{eqnarray} ds^2=dz^2+e^{\varphi(z)}(dx^2-dt^2)\label{eq2} \end{eqnarray} of the NMG model (\ref{eq1}) for polynomial matter potentials $V(\sigma)$. We are looking for DW's interpolating between two \emph{different} $AdS_3$ \emph{vacua} $(\sigma_A^*,\Lambda^A_{eff})$, parametrized by the solutions of the algebraic equations: \begin{eqnarray} V'(\sigma_{A}^*)=0,\quad\quad \ 2\Lambda_{eff}^A\left(1+\frac{\Lambda^A_{eff}}{4\epsilon m^2}\right) =\epsilon\kappa^2V(\sigma_{A}^*)\label{eq3} \end{eqnarray} The study of such DW's is motivated by their important role in the description of the ``holographic'' renormalization group (RG) flows \cite{4} and of the corresponding phase transitions in two-dimensional QFT ``dual'' to the $3D$ massive gravity (\ref{eq1}). The generalization of the superpotential method proposed in ref. \cite{3} allows an explicit construction of qualitatively new DW's relating ``old'' to ``new'' purely NMG vacua. Assuming further that the $AdS_3/CFT_2$ correspondence \cite{malda}, \cite{witt},\cite{pol} takes place for the extended NMG model (\ref{eq1}) as well, we investigate the changes induced by the counter-terms ${\cal K}$ (and by the sign factor $\epsilon$) on the structure of the corresponding $QFT_2$'s $\beta$-function, as well as the ``$m^2$-corrections'' to the central charges, the scaling dimensions and the free energy. The example of the DW's of the NMG model with \emph{quadratic} superpotential and the phase transitions in its dual perturbed $CFT_2$ ($pCFT_2$) are studied in some detail. An extension of the $d=3$ NMG BPS-like $I^{st}$ order system and of the related superpotential to the case of \emph{$d$-dimensional} New Massive Gravity for $d>3$ is introduced in Sect.~7.
\section{Superpotential} Although the NMG action (\ref{eq1}) involves up to fourth order derivatives of the 3D metric $g_{\mu\nu}$, the corresponding equations for the DW's (\ref{eq2}) are of \emph{second} order: \begin{eqnarray} &&\ddot{\sigma}+\dot{\sigma}\dot{\varphi}-V'(\sigma)=0\nonumber\\ &&\ddot{\varphi}\Big(1-\frac{\dot{\varphi}^2}{8\epsilon m^2}\Big)+\frac{1}{2}\dot{\varphi}^2\Big(1-\frac{\dot{\varphi}^2}{16\epsilon m^2}\Big)+\epsilon\kappa^2\Big(\frac{1}{2}\dot{\sigma}^2+V(\sigma)\Big)=0\nonumber\\ &&\dot{\varphi}^2\Big(1-\frac{\dot{\varphi}^2}{16\epsilon m^2}\Big)+\epsilon\kappa^2(-\dot{\sigma}^2+2V(\sigma))=0\label{eq4} \end{eqnarray} due to the particular form of the higher derivatives ${\cal K}$-term. A powerful method for the construction of analytic non-perturbative solutions of eqs. (\ref{eq4}) consists in the introduction of an auxiliary function $W(\sigma)$ called a \emph{superpotential}\footnote{it represents an appropriate $D=3$ NMG adapted version of the Low-Zee superpotential \cite{zee} introduced in the context of DW solutions of $D=5$ Gauss-Bonnet improved gravity} \cite{3}, \cite{5} such that \begin{eqnarray} &&\kappa^2V(\sigma)=2(W')^2\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big)^2-2\epsilon\kappa^2 W^2\Big(1-\frac{\kappa^2 W^2}{4\epsilon m^2}\Big)\nonumber\\ &&\dot{\varphi}=-2\epsilon\kappa W, \quad \quad \dot{\sigma}=\frac{2}{\kappa}W'\Big(1-\frac{\kappa^2W^2}{2\epsilon m^2}\Big)\label{eq5} \end{eqnarray} where $W'(\sigma)=\frac{dW}{d\sigma}$, $\dot{\sigma}=\frac{d\sigma}{dz}$, etc. The \emph{statement} is that for each given $W(\sigma)$ all the solutions of the first order system (\ref{eq5}) are solutions of eqs. (\ref{eq4}) as well. For example, the linear superpotential $W(\sigma)=B\sigma$ ($B=const$) describes a particular double-well matter potential \begin{eqnarray} V(\sigma)=\frac{\gamma}{4}\Big(\sigma^2-\frac{m_{\sigma}^{2}}{2\gamma}\Big)^2-\frac{2\Lambda}{\kappa^2}\label{eq6} \end{eqnarray} for $\epsilon m^2>0$ and $\gamma$, $m_{\sigma}^2$ and $\Lambda$ given by \begin{eqnarray} \gamma=\frac{2B^4\kappa^2}{m^2}\Big(1+\frac{B^2}{m^2}\Big), \quad m_{\sigma}^2=8\epsilon B^2\Big(1+\frac{B^2}{m^2}\Big), \quad \Lambda=m^2\nonumber \end{eqnarray} The corresponding DW solutions of eqs. (\ref{eq4}) \begin{eqnarray} \sigma(z)=\frac{\sqrt{2\epsilon m^2}}{B\kappa}\tanh\Big(B^2\sqrt{\frac{2}{\epsilon m^2}}(z-z_{0})\Big),\quad\quad\quad e^{\varphi(z)+\varphi_{0}}=\Big[\cosh\Big(B^2\sqrt{\frac{2}{\epsilon m^2}}(z-z_{0})\Big)\Big]^{-\frac{2\epsilon|m^2|}{B^2}}\label{eq7} \end{eqnarray} have as asymptotics at $z\rightarrow\pm\infty$ two very special NMG vacua with $\lambda_{BHT}=-\frac{\Lambda}{m^2}=-1$ \cite{1}, placed at the two degenerate minima \begin{eqnarray} \sigma^{\pm}&=&\sigma(z\rightarrow\pm\infty)=\pm\frac{\sqrt{2\epsilon m^2}}{B\kappa}\nonumber \end{eqnarray} of the potential (\ref{eq6}) and representing two $AdS_3$ spaces of equal cosmological constant $\Lambda^{\pm}_{eff}=-2\epsilon m^2<0$. \section{Vacua and Domain Walls} All the constant $\sigma$ solutions of eqs. (\ref{eq5}) are determined by the real roots of the following algebraic equations: (a) $ W'(\sigma_{a}^*)=0$ and (b) $ W^2(\sigma_{b}^*)=\frac{2\epsilon m^2}{\kappa^2}$, which describe (a part of) the extrema of the matter potential $V(\sigma)$. Each one of them defines an $AdS_3$ space (i.e. one vacuum solution of eqs.
(\ref{eq5})) \begin{eqnarray*} ds^2=dz^2+e^{-2\epsilon\sqrt{|\Lambda_{eff}^A|}z}(dx^2-dt^2), \quad\quad A=a,b \end{eqnarray*} of cosmological constant $\Lambda_{eff}^{A}=-\kappa^2W^2(\sigma_{A}^*)$, as one can see by calculating the values of the 3D scalar curvature: \begin{eqnarray} R=-2\ddot{\varphi}-\frac{3}{2}\dot{\varphi}^2\equiv8\epsilon(W')^2\left(1-\frac{\kappa^2W^2}{2\epsilon m^2}\right)-6\kappa^2W^2\label{eq8} \end{eqnarray} $i.e.$ $R_{vac}=-6\kappa^2W^2(\sigma_{A}^*)=6\Lambda_{eff}^A$. Hence the variety of admissible vacua of the NMG model (\ref{eq1}) is defined by the values of the extrema $\sigma^*$ of the matter potential $V(\sigma)$ and by the signs of the parameters $\epsilon$ and $m^2$. For example, in the case of the \emph{quadratic} superpotential $W_2(\sigma)=B\sigma^2+D$ with $B>0$ and $D\neq0$ we find one type $(a)$ vacuum $\sigma_a^*=0$ of cosmological constant $\Lambda^{(a)}_{eff}=-\kappa^2D^2$ and, for $\epsilon m^2>0$, a few type $(b)$ vacua given by \begin{eqnarray} \left(\sigma_{\pm}^*\right)^2=\pm\frac{\sqrt{2\epsilon m^2}}{\kappa B}-\frac{D}{B}, \quad \quad \left(\sigma_{-}^*\right)^2\le\left(\sigma_{+}^*\right)^2\label{eq9} \end{eqnarray} Depending on the range of values of $D$ ($i.e.$ on the shape of the potential $V(\sigma)$) we have: (1) \emph{no} type $(b)$ vacuum for $D>\frac{\sqrt{2\epsilon m^2}}{\kappa}$; (2) \emph{two} type $(b)$ vacua $\{\pm |\sigma_{+}^*|\}$ for $-\frac{\sqrt{2\epsilon m^2}}{\kappa}<D<\frac{\sqrt{2\epsilon m^2}}{\kappa}$; and (3) \emph{four} type $(b)$ vacua $\{\pm |\sigma_{+}^*|,\pm |\sigma_{-}^*|\}$ for $D<-\frac{\sqrt{2\epsilon m^2}}{\kappa}$. Note that all the type (b) vacua have by construction \emph{equal} cosmological constants $\Lambda_{eff}^{(b)}=-2\epsilon m^2$. We consider as an example the DW's one can construct in \emph{region} (2) above, characterized by the three vacua $\pm |\sigma_+^*|$ and $\sigma_a^*=0$. Then the two DW solutions of eqs. (\ref{eq5}) connecting $|\sigma_+^*|$ (or $-|\sigma_+^*|$) with $\sigma_a^*$ have the following rather implicit form: \begin{eqnarray} &&\left(\sigma^4\right)^{\alpha_{+}\alpha_{-}}\left(\sigma^2+|(\sigma_-^*)^2|\right)^{-\alpha_-}\left((\sigma_+^*)^2-\sigma^2\right)^{-\alpha_+}=e^{\frac{16B}{\kappa}(z-z_0)}\nonumber\\ &&e^{\varphi-\varphi_0}=\left(\sigma^2\right)^{-\frac{D\kappa^2\alpha_+\alpha_-}{4\epsilon B}} \left((\sigma_+^*)^2-\sigma^2\right)^{\frac{\alpha_+\kappa\sqrt{2\epsilon m^2}}{8\epsilon B}}\left(|(\sigma_-^*)^2|+\sigma^2\right)^{-\frac{\alpha_-\kappa\sqrt{2\epsilon m^2}}{8\epsilon B}}\label{eq10} \end{eqnarray} where we have denoted: \begin{eqnarray*} \alpha_+=\left(1-\frac{D\kappa}{\sqrt{2\epsilon m^2}}\right)^{-1}, \ \alpha_-=\left(1+\frac{D\kappa}{\sqrt{2\epsilon m^2}}\right)^{-1}. \end{eqnarray*} Nevertheless one can easily verify that the corresponding asymptotics (at $z\rightarrow\pm\infty$) of $\sigma(z)$: \begin{eqnarray} \sigma(z)\stackrel{z\rightarrow\pm\infty}{\approx}\sigma_{A}^* - \sigma_{A}^{0}e^{\mp2\Delta_A\sqrt{|\Lambda^A_{eff}|}z},\quad\quad \sigma(\infty)=\pm|\sigma_+^*|, \quad \sigma(-\infty)=\sigma_a^*=0\nonumber\\ \Lambda_{eff}^{\pm}=-2\epsilon m^2,\quad \Lambda_{eff}^a=-\kappa^2D^2,\quad\quad \Delta_A=1+\sqrt{1-\frac{m_{\sigma}^2(A)}{\Lambda_{eff}^A}},\quad \ m_{\sigma}^2=V''(\sigma_A^*)\label{eq11} \end{eqnarray} indeed coincide with the \emph{vacuum data} $(\sigma_A^*,\Lambda_{eff}^A, \Delta_A)$ that determine the boundary conditions for the DW solutions in region (2).
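As a simple cross-check of such implicit solutions, one can also integrate the first order system (\ref{eq5}) numerically. The short Python sketch below is purely illustrative and not part of the derivation; the parameter values $\kappa=B=1$, $D=1/2$, $\epsilon=-1$, $m^2=-1$ are arbitrary choices inside region (2), and the flow is seen to run from near the type $(a)$ vacuum $\sigma_a^*=0$ towards the type $(b)$ vacuum $\sigma_+^*$ as $z$ grows, in agreement with (\ref{eq11}).

\begin{verbatim}
# Illustrative Euler integration of the first-order flow (eq. 5) for the
# quadratic superpotential W = B*sigma**2 + D.  All numerical values are
# arbitrary demonstration choices, not values fixed by the paper.
import math

eps, m2 = -1.0, -1.0           # the "unitary" branch: eps = -1, m^2 < 0
kappa, B, D = 1.0, 1.0, 0.5    # region (2): 0 < kappa*D < sqrt(2*eps*m2)

def W(s):  return B * s**2 + D
def Wp(s): return 2.0 * B * s

def sigma_dot(s):
    # d(sigma)/dz = (2/kappa) W'(sigma) (1 - kappa^2 W^2 / (2 eps m^2))
    return (2.0 / kappa) * Wp(s) * (1.0 - kappa**2 * W(s)**2 / (2.0 * eps * m2))

s, dz = 0.3, 1.0e-3            # start between the two vacua
for _ in range(200000):        # integrate up to z = 200
    s += sigma_dot(s) * dz

sigma_plus = math.sqrt(math.sqrt(2.0 * eps * m2) / (kappa * B) - D / B)
print(s, sigma_plus)           # s has converged to the type (b) vacuum
\end{verbatim}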
Observe that the scale factor $e^{\varphi(z)}$ has \emph{different} asymptotic behaviour depending on the sign of $D$: in the case of negative values, $i.e.$ for $D\in\left(-\frac{\sqrt{2\epsilon m^2}}{\kappa},0\right)$ and $\epsilon=-1$, $m^2<0$, we find that: \begin{eqnarray} e^{\varphi}\stackrel{z\rightarrow\infty}{=}e^{2\sqrt{|\Lambda_+ |}z}\rightarrow\infty,\quad\quad e^{\varphi}\stackrel{z\rightarrow-\infty}{=}e^{-2\sqrt{|\Lambda_a |}z}\rightarrow\infty\label{eq12} \end{eqnarray} while for $\epsilon=-1$, $m^2<0$ and considering positive values within the interval $D\in\left(0,\frac{\sqrt{2\epsilon m^2}}{\kappa}\right)$ we have that: \begin{eqnarray} e^{\varphi}\stackrel{z\rightarrow\infty}{=}e^{2\sqrt{|\Lambda_+ |}z}\rightarrow\infty,\quad\quad\ e^{\varphi}\stackrel{z\rightarrow-\infty}{=}e^{2\sqrt{|\Lambda_a |}z}\rightarrow0\label{eq13} \end{eqnarray} As is well known, the divergences of the scale factor correspond to $AdS_3$-type boundaries. The regions of vanishing scale factor (which are not curvature singularities) represent null Cauchy \emph{horizons}, where the causal description in the Poincare patch terminates. Therefore our DW's (\ref{eq10}) define particular asymptotically $AdS_3$ ($(a)AdS_3$) spaces: the one with $D<0$ (see eqs. (\ref{eq12})) has \emph{two different boundaries}, and the other one with $D>0$ (see eqs. (\ref{eq13})) has \emph{one boundary} at $z\rightarrow \infty$ and one \emph{null horizon} at $z\rightarrow-\infty$. Let us also mention that the linear superpotential DW's (\ref{eq7}) (and more generally all the DW's relating two type (b) vacua) in the case $\epsilon=-1$, $m^2<0$ describe $(a)AdS_3$ spaces with \emph{two boundaries}, but in this case of \emph{equal} cosmological constants $\Lambda_{eff}^{b_{\pm}}=-2\epsilon m^2$. \section{Unitarity and BF conditions} The Bergshoeff-Hohm-Townsend (BHT) unitarity conditions \cite{1}: \begin{eqnarray} m^2\left(\Lambda_{eff}^A-2\epsilon m^2\right)>0\label{A1} \end{eqnarray} together with the Higuchi bound: \begin{eqnarray} \Lambda_{eff}^A\le M_{gr}^2(A)=-\epsilon m^2+\frac{1}{2}\Lambda_{eff}^A\label{A2} \end{eqnarray} for the massive spin two field (``graviton'') are a result of the requirement of perturbative (1-loop) unitarity consistency of the NMG model (\ref{eq1}) for $\sigma=const$. When a massive scalar field is also included one has to further impose the Breitenlohner-Freedman (BF) condition \cite{BF}, which in three space-time dimensions reads: \begin{eqnarray} \Lambda_{eff}^A\le m_{\sigma}^2(\sigma_{A}^*)=V''(\sigma_{A}^*)\label{A3} \end{eqnarray} or in its \emph{stronger} form: \begin{eqnarray} \Lambda_{eff}^A\le m_{\sigma}^2(\sigma_{A}^*)<0\label{A4} \end{eqnarray} It is convenient to parametrize the effective ``vacuum masses'' $m_{\sigma}^2(\sigma_A^*)=\kappa^2 W_A^2 y_A (y_A-2)$ in terms of the scaling dimensions $\Delta_A=2-y_A$ of the 2D field $\phi_{\sigma}(x,t)$ ``holographically dual'' to $\sigma(z)$ \cite{4} ($A=a,b$): \begin{eqnarray} y_{a}=y(\sigma_{a}^*)=\frac{2\epsilon W_a''}{\kappa^2 W_a}\left(1-\frac{\kappa^2 W_a^2}{2\epsilon m^2}\right),\quad\quad\quad y_b=y(\sigma_b^*)=-\frac{4\epsilon (W_b ')^2}{\kappa^2 W_b^2}\label{A5} \end{eqnarray} Let us consider the case $\epsilon=-1$, $m^2<0$. Then the above unitarity conditions (\ref{A1}), (\ref{A2}), (\ref{A4}) take the following simple form \begin{eqnarray} 0\le \frac{\kappa^2 W_A^2}{2\epsilon m^2}\le2, \quad\quad 0\le y_a<2\label{A6} \end{eqnarray} which imposes restrictions on the values of the parameters of the NMG model (\ref{eq1}).
For example, in the case of the linear superpotential both vacua satisfy all the unitarity conditions only for specific values of the parameter $B$, such that $B^2\le |m^2|$. In the particular case of the quadratic superpotential with only three vacua $\pm |\sigma_{+}^*|$ and $\sigma_a^*=0$ ($i.e.$ in region (2)), all the vacua are \emph{unitary} and satisfy the weak BF condition when the superpotential parameters are restricted as follows: $\kappa^2 D^2 < 2\epsilon m^2$ and $B>0$. Therefore the corresponding DW's (\ref{eq10}) are interpolating between two \emph{unitary} vacua. Such DW's turn out to also have positive tensions $\tau_{DW}>0$, as is shown in Sect.~6. \section{Domain Walls Tensions} In all the ``planar'' DW's (\ref{eq2}) of the NMG model (\ref{eq1}) the scalar matter is uniformly distributed ($i.e. \ \frac{\partial\sigma}{\partial x}=0$) along the whole $x$-axis, and therefore such DW's have infinite energy. As is well known \cite{6}, an important characteristic of the gravitational properties of such DW's is given by the values of their \emph{energy densities} $\epsilon_{DW}=\frac{E_{DW}}{L_x}$ (equal to their tensions $\tau_{DW}$). In the case of $(a)AdS_3$ geometries it is given by \cite{7}: \begin{eqnarray} \tau_{DW}=\lim_{L_x\rightarrow\infty}\frac{1}{L_x}\sum_{A=\pm}v_A\int_{-L_x/2}^{L_x/2}dx\xi^iT_{ij}^{(A)}\xi^{j},\quad\quad\quad i,j=0,1\label{eq14} \end{eqnarray} where $A=\pm$ denote the two $z\rightarrow\pm\infty$ limits $(\partial M)_{A}$ describing $(a)AdS_3$ boundaries and/or horizons; $v_{\pm}=\pm1$ and $\xi^\mu=(0,\xi^i)$ is a time-like Killing vector, orthogonal to both $(\partial M)_A$-surfaces and normalized as $\xi^i\gamma_{ij}^A\xi^j=-1$. The Brown-York ``boundary'' stress-tensor $T_{ij}^{(A)}$ \cite{bala} is defined as follows: \begin{eqnarray} T_{ij}^{(A)}=-\frac{2}{\sqrt{-\gamma^A}}\frac{\delta S_{NMG}^{BY}}{\delta \gamma_{A}^{ij}}v_A\label{eq15} \end{eqnarray} where $\gamma_{ij}^{A}$ are the corresponding ``boundary/horizon'' $(\partial M)_A$-metrics: \begin{eqnarray*} \gamma_{ij}^A(x,t)=\lim_{z\rightarrow\pm\infty}\gamma_{ij}(x,t|z), \quad\quad \gamma_{ij}(x,t|z)=e^{\varphi(z)}\eta_{ij},\quad\quad\quad \eta_{ij}=diag(+,-) \end{eqnarray*} The main ingredient of the NMG version of the Brown-York formula (\ref{eq14}) and (\ref{eq15}) is the improved NMG action $S_{NMG}^{BY}=S_{NMG}+S_{gGH}$ with a few ``boundary'' terms $S_{gGH}$ added. They represent an appropriate generalization of the Gibbons-Hawking boundary action to the case of the NMG model (\ref{eq1}), recently proposed by Hohm and Tonni \cite{8}: \begin{eqnarray} S_{gGH}=-\frac{2}{\kappa^2}\sum_{A=\pm}v_{A}\int_{(\partial M)_A}dxdt\sqrt{-\gamma}\Big(\epsilon K-\frac{1}{2}fK+\frac{1}{2}f_{ij}K^{ij}\Big)\label{eq16} \end{eqnarray} where $K_{ij}$ is the extrinsic curvature of the 2D ``boundary'' surface $(\partial M)_A$; $f_{\mu\nu}$ is the auxiliary Pauli-Fierz spin two field \cite{1} whose ``on-shell'' form \begin{eqnarray*} f_{\mu\nu}=\frac{2}{m^2}\left(R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R\right), \ \mu,\nu=0,1,2 \end{eqnarray*} is used in eq. (\ref{eq16}); $f=\gamma^{ij}f_{ij}$ and $K=\gamma^{ij}K_{ij}$.
In the case of the DW's (\ref{eq2}) one can further apply the $I^{st}$ order equations (\ref{eq5}) in order to derive the following simple ``boundary'' form of the improved action\footnote{unlike the case of Einstein gravity, where all the flat DW's are of BPS type \cite{town}, for the 3D NMG DW's one needs to use the $I^{st}$ order eqs. (\ref{eq5}) in order to prove that the remaining terms in the bulk action represent a total derivative.}: \begin{eqnarray} S_{NMG}^{BY}(DW)=-\frac{2}{\kappa}\sum_{A=\pm}v_A\int_{(\partial M)_A} dxdt\sqrt{-\gamma} W(\sigma)\left(1+\frac{\kappa^2 W^2(\sigma)}{2\epsilon m^2}\right)\label{eq17} \end{eqnarray} Then according to the definitions (\ref{eq15}) and (\ref{eq16}) one easily obtains the corresponding explicit form of the ``boundary'' stress-tensor for the NMG model with scalar matter: \begin{eqnarray} T_{ij}^A(DW)=-\frac{2}{\kappa}W(\sigma_A^*)\left(1+\frac{\kappa^2W^2(\sigma_A^*)}{2\epsilon m^2}\right)\gamma^A_{ij}\label{eq18} \end{eqnarray} which allows us to calculate the values of the DW tensions: \begin{eqnarray} \tau_{DW}=\frac{2}{\kappa}\sum_{A=\pm}v_AW_A\left(1+\frac{\kappa^2 W_A^2}{2\epsilon m^2}\right),\quad\quad W_A = W(\sigma_{A}^*) \label{eq19} \end{eqnarray} Note that in the $m^2\rightarrow\infty$ limit the above formula reproduces the well known results for flat DW tensions in 3D Einstein gravity, obtained by Israel's thin-wall approximation \cite{6}. \section{Boundary counter-terms and central charges} Consider the NMG model (\ref{eq1}) in the limit of small effective cosmological constant: $L_A\gg G=l_{pl}$, where $|\Lambda^A_{eff}|L_A^2=1$. The $AdS_3/CFT_2$ correspondence suggests that each of its vacua $(\sigma_A^*,\Lambda_{eff}^A, \Delta_A)$ determines the main features of a certain $CFT_2$, ``living'' on the corresponding 2D boundaries/horizons $(\partial M)_A$ of the $(a)AdS_3$ space-times (\ref{eq11}). As in the case of 3D Einstein Gravity (i.e. the $m^2\rightarrow\infty $ limit of (\ref{eq1})), all the 2D data, namely the central charges, the scaling dimensions $\Delta_A(\sigma^*_A)=2-y_A$ and the vacuum expectation values \begin{eqnarray} <A_{vac}|\hat{T}_{ij}^A(x_+,x_-)|A_{vac}>=-\frac{2}{\sqrt{-\gamma^A}}\frac{\delta S_{NMG}^{ren}(DW)}{\delta \gamma_{A}^{ij}(x_+,x_-)}=T_{ij}^A + T^{ct,A}_{ij}= T_{ij}^{ren}(A),\quad x_{\pm}=x\pm t \label{eq20} \end{eqnarray} $<A_{vac}|\hat{\Phi}_{\sigma}(x_+,x_-)|A_{vac}>$, etc.\ of the 2D operators $\hat{T}^A_{ij}$ and $\hat{\Phi}_{\sigma}$, duals of the 3D NMG model fields $\gamma_{ij}$ and $\sigma$, can be extracted from the NMG classical action (\ref{eq17}), appropriately renormalized: $S^{ren}_{NMG}=S^{BY}_{NMG}+S^{ct}_{NMG}$, where \begin{eqnarray} S_{NMG}^{ct}=\frac{2}{\kappa}\sum_{A=\pm}v_A\int_{(\partial M)_A} dxdt\sqrt{-\gamma} W(\sigma)\left(1+\frac{\kappa^2 W^2(\sigma)}{2\epsilon m^2}\right)\label{eq21} \end{eqnarray} The particular form\footnote{although all our arguments are based on specific DW solutions of the NMG model (\ref{eq1}), it is expected that, as in the Einstein gravity case, this form of the counter-terms is universal, i.e. it cancels the $S_{NMG}^{BY}$ divergences for a larger class of solutions.} of the boundary counter-terms (\ref{eq21}) we have introduced above is a consequence of the condition that the vacua NMG solutions are conjectured to describe the vacuum states $|A_{vac}>=|\sigma^*_A,\Lambda^A_{eff}>$ of the corresponding (UV and IR) $CFT_2$'s, which by definition must have vanishing dimensions $\Delta^A_{vac}=0$ and energy $ E^A_{vac}$=0 (for planar 2D geometries), i.e.
\begin{eqnarray} <\hat{T}_{ij}^A(x_+,x_-)>_A = 0 = T_{ij}^A + T^{ct,A}_{ij} \label{eq22} \end{eqnarray} Note that $S^{ct}$ makes the NMG action convergent, i.e. we have $S^{ren}_{NMG}(DW)=0$, thus providing hints that such DW's are stable. The central charges $c_A$ of these $CFT_2$'s are given by the normalization constants of the stress-tensor's 2-point functions \begin{eqnarray} <\hat{T}_{\pm\pm}^A(x_{\pm})\hat{T}_{\pm\pm}^A(0)>_A = \frac{c_A}{2 x_{\pm}^4} \label{eq23} \end{eqnarray} or equivalently by the coefficients of the inhomogeneous part of the stress-tensor's transformation laws: \begin{eqnarray} <\delta_{\xi}\hat{T}_{\pm\pm}^A(x_{\pm})>_A =-\frac{c_A}{24\pi}\xi_{\pm}^{'''} \label{eq24} \end{eqnarray} under infinitesimal 2D transformations: $x^{'}_{\pm}= x_{\pm}+\xi_{\pm}(x_{\pm})$. According to the Brown-Henneaux observation \cite{9}, the 3D counterparts of these 2D conformal symmetries are given by special 3D diffeomorphisms that keep invariant the asymptotic form of the $AdS_3$ metrics\footnote{but are larger than the $AdS_3$ isometry group $SO(2,2)$} such that \begin{eqnarray} \delta_{\xi}\gamma_{\pm\pm}(x_+,x_-) =-\frac{L_A^2}{2}\xi_{\pm}^{'''}. \label{eq25} \end{eqnarray} As a result the corresponding improved Brown-York stress-tensor $T_{ij}^{ren}(A)$ (proportional to $\gamma_{ij}$) gets inhomogeneous terms under these transformations: \begin{eqnarray} \delta_{\xi}{T}_{\pm\pm}^{ren}(A) =\frac{L_A^2}{\kappa}W_A\left(1+\frac{\kappa^2 W_A^2}{2\epsilon m^2}\right)\xi_{\pm}^{'''} \label{eq26} \end{eqnarray} which allows one to calculate the $CFT_2$ central charges in terms of the NMG vacuum data: \begin{eqnarray} c_A =-\frac{3L_A^2}{2G}\kappa W_A\left(1+\frac{\kappa^2 W_A^2}{2\epsilon m^2}\right) \label{eq27} \end{eqnarray} Consider for example the domain wall solution (\ref{eq10}) for the quadratic superpotential under the restriction $0 <\kappa D < \sqrt{2\epsilon m^2}$ and $B > 0$, which for $\epsilon=-1$ and $ m^2 < 0$ interpolates between two vacua, i.e. $AdS_3$'s of effective cosmological constants $\Lambda_+=-2\epsilon m^2$ and $\Lambda_a=-\kappa^2D^2$, as one can see from its asymptotic form (\ref{eq13}). Since in this case we have $W(\sigma)>0$, the identification $\kappa W_A=-\frac{\epsilon}{L_A}$ takes place. As a consequence the corresponding central charges of the two $CFT$'s representing these vacua get the familiar form \cite{1},\cite{8}: \begin{eqnarray} c_A =\frac{3\epsilon L_A}{2G}\left(1+\frac{L_{gr}^2}{L_A^2}\right),\quad\quad L_{gr}^2=\frac{1}{2\epsilon m^2}\gg l_{pl}^2\label{eq28A} \end{eqnarray} The same central charge formula turns out to be valid in the more general case of DW's for which the superpotential $W(\sigma)$ does not change its sign between the two vacua $\sigma_A^*$, and for the case of ``non-unitary'' vacua with $\epsilon = 1$ and $m^2 >0$ as well. It is worthwhile to mention the interesting fact that the DW tensions (\ref{eq19}) can be rewritten in terms of the central charges as follows: \begin{eqnarray} \tau_{DW}(L_+,L_a)= -\frac{1}{12\pi}\left(\frac{c_+}{L_+^2} -\frac{c_a}{L_a^2}\right) \label{eq29A} \end{eqnarray} Observe that the condition of positive tensions, i.e. $\tau_{DW}(L_+,L_a) >0$, requires $|\Lambda_+|>|\Lambda_a|$, which is automatically satisfied in the example discussed above. In the case of ``unitary'' BHT-vacua $\epsilon=-1$ and $m^2<0$ (i.e.
for negative $c_A$'s) this condition is equivalent to the following restriction on the central charges: \begin{eqnarray} \frac{|c_a|}{|c_+|} > \frac{L^2_a}{L^2_+} >1 \label{eq29B} \end{eqnarray} i.e. we have $c_+ > c_a$. It turns out that such ``ordering'' of the UV and IR central charges determines the direction of the RG flow in the dual 2D $pCFT_2$, as we are going to show in the next section. \section{Comments on holographic RG flows and NMG's extensions} The off-critical $(a)AdS_3/pCFT_2$ version of the holographic principle relates certain static DW solutions of 3D gravity (Einstein or NMG) with scalar matter to the RG flows in specific deformed (supersymmetric) $CFT_2$ \cite{4}. These non-conformal $QFT_2$'s can be realized as appropriate perturbations of the ultraviolet (UV) $CFT_2$ by marginal or/and relevant operators $\hat{\Phi}_{\sigma}$ that break 2D conformal symmetry to the Poincare one: \begin{eqnarray} S_{pCFT_{2}}^{ren}(\sigma)=S_{CFT_2}^{UV}+\sigma(L_*)\int d^2x\sqrt{-g}\Phi_{\sigma}(x^i)\label{eq28} \end{eqnarray} The scale-radial duality \cite{VVB} allows one to identify the ``running'' coupling constant $\sigma(L_*)$ of the $pCFT_2$ (\ref{eq28}) with the scalar field $\sigma(z)$ and the RG scale $L_*$ with the scale factor $e^{\varphi(z)}$ as follows: $L_*=l_{pl}e^{-\varphi/2}$. This identification is based on the equivalence of the ``radial'' evolution equations (\ref{eq5}) and the Wilson RG equations for the $pCFT_2$: \begin{eqnarray} \frac{d\sigma}{dl}=-\beta(\sigma)=\frac{2\epsilon}{\kappa^2}\frac{W'(\sigma)}{W(\sigma)}\bigg(1-\frac{W^2(\sigma)\kappa^2}{2\epsilon m^2}\bigg),\quad\quad l=\ln L_*\label{eq30} \end{eqnarray} It is evident that the zeros $\sigma_{B}^*$ of the $\beta$-function (\ref{eq30}) coincide with the NMG vacua of type ($a$) ($i.e.$ $W'(\sigma_{a}^*)=0$) or of type ($b$) ($i.e.$ $W_{\pm}^2(\sigma_{\pm}^{*})=\frac{2\epsilon m^2}{\kappa^2}$). We also realize that the anomalous dimensions $\Delta_{\Phi}$ of the operator $\Phi_{\sigma}(x^i)$ at each critical point: \begin{eqnarray} y(\sigma^*)=2-\Delta_{\Phi}(\sigma^*)=-\frac{d\beta(\sigma)}{d\sigma}\Big\vert_{\sigma=\sigma^*}\label{eq31} \end{eqnarray} are nothing but the parameters $\Delta_{\pm}^{a,b}(\sigma^*)$ and $y(\sigma_{a,b}^{*})$ given by eqs. (\ref{A5}), that determine the asymptotic behaviour at $z\rightarrow\pm\infty$ of the matter 3D bulk gravity field $\sigma(z)$. As is well known, when the explicit form of the $\beta(\sigma)$-function is given, say by eq. (\ref{eq30}), it provides the key ingredient that allows one to further derive the free energy and certain thermodynamical characteristics of the 2D \emph{classical} statistical model related (in its thermodynamic limit) to the \emph{quantum} $pCFT_2$ in discussion\footnote{indeed we have to consider the Euclidean version of NMG such that the corresponding ``boundaries/horizons'' of $(a)H_3$ are flat Euclidean planes or spheres $S^2$.}. We are interested in the description of the scaling laws, critical exponents and the phase structure of the particular $pCFT_2$ dual to the NMG model (\ref{eq1}) with quadratic superpotential in the case where the range of its parameters $B$ and $D$ belongs to region (2).
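For the quadratic superpotential, the $\beta$-function (\ref{eq30}) can also be tabulated directly. The following Python sketch is purely illustrative (it uses the same arbitrary demonstration parameters as the integration sketch above, not values fixed by the paper); it locates the zeros of $\beta(\sigma)$ numerically and confirms that they sit at the NMG vacua, the type $(a)$ zero at $\sigma=0$ lying at the edge of the scanned interval.

\begin{verbatim}
# Illustrative scan of the holographic beta-function (eq. 30) for
# W = B*sigma**2 + D; the parameter values are arbitrary demonstration
# choices (the same ones used in the integration sketch above).
import numpy as np

eps, m2, kappa, B, D = -1.0, -1.0, 1.0, 1.0, 0.5

def W(s):  return B * s**2 + D
def Wp(s): return 2.0 * B * s

def beta(s):
    # beta = -(2 eps / kappa^2) (W'/W) (1 - kappa^2 W^2 / (2 eps m^2))
    return (-(2.0 * eps / kappa**2) * (Wp(s) / W(s))
            * (1.0 - kappa**2 * W(s)**2 / (2.0 * eps * m2)))

sig = np.linspace(0.05, 1.5, 2000)   # sigma = 0 excluded: W'(0) = 0 there
b = beta(sig)
zeros = sig[1:][np.sign(b[1:]) != np.sign(b[:-1])]   # sign changes of beta
print(zeros)   # one zero near sqrt(sqrt(2) - 1/2) ~ 0.956, the (b) vacuum
\end{verbatim}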
Following the standard RG methods (see for example \cite{muss}) we find that the singular part of the reduced free energy per 2D volume $F_s(\sigma)$ has the following simple form: \begin{eqnarray} F_s(\sigma)\approx (\sigma^2)^{\frac{1}{y_0}}\left((\sigma_+^*)^2-\sigma^2\right)^{\frac{2}{y_+}}\left( |(\sigma_-^*)^2|+\sigma^2\right)^{\frac{2}{y_-}}\label{eq33} \end{eqnarray} The critical exponents $\nu_A = \frac{1}{y_A}$ related to the correlation length singularities $\xi_A\approx(\sigma - \sigma_A^*)^{-\frac{1}{y_A}}$ at each critical point ($i.e.$ the NMG vacua with $A=0,\pm$) are given by: \begin{eqnarray} y_0=-\frac{4\epsilon B}{D\kappa^2\alpha_+\alpha_-},\quad\quad y_+=\frac{8\epsilon B}{\alpha_+\kappa\sqrt{2\epsilon m^2}},\quad\quad y_-=-\frac{8\epsilon B}{\alpha_-\kappa\sqrt{2\epsilon m^2}}\label{eq34} \end{eqnarray} For a particular choice of the ``unitary'' (for $\epsilon=-1$, $m^2<0$) DW's (\ref{eq10}) and of the coupling constant $\sigma$ within the range $0<\sigma<\sigma^*_+$ we have that $y_0<0$ ($i.e.$ the IR $CFT_2$ with $\Phi_{\sigma}$ as an irrelevant operator) and $0<y_+<2$ ($i.e.$ the UV $CFT_2$ with $\Phi_{\sigma}$ representing now a relevant operator). Therefore such a DW describes a massless RG flow from the UV critical point $\sigma^*_+$ to the IR one $\sigma_0^*=0$. An important characteristic of all the massless flows is the so-called Zamolodchikov central function: \begin{eqnarray} C(\sigma)=-\frac{3}{2G\kappa W(\sigma)}\bigg(1+\frac{\kappa^2W^2(\sigma)}{2\epsilon m^2}\bigg)\label{eq32} \end{eqnarray} which at the critical points $\sigma_{A_{\pm}}^*$ takes the values (\ref{eq28A}). It represents a natural generalization \cite{sinha} of the well known result for the $m^2\rightarrow\infty$ limit \cite{4},\cite{VVB}. According to its original 2D definition \cite{x} it is intrinsically related to the $\beta$-function: \begin{eqnarray} \beta(\sigma)=-\frac{4G \epsilon W(\sigma)}{3\kappa}\left(\frac{dC(\sigma)}{d\sigma}\right) \end{eqnarray} of the $pCFT_2$ dual of the NMG model (\ref{eq1}). Taking into account the RG equations (\ref{eq30}) we realize that: \begin{eqnarray} \frac{dC(\sigma)}{dl} = -\frac{3}{4GW(\sigma)}\left(\frac{d \sigma}{dl}\right)^2 \label{eq29b} \end{eqnarray} and therefore when $W(\sigma)>0$ (as in our example) the central function is decreasing along the massless flow we are discussing, i.e. we have $ c_+ > c_a$. Observe that for $\sigma>\sigma^*_+$ and for $\sigma \rightarrow\infty$ the correlation length remains \emph{finite} due to the following ``resonance'' property $\frac{1}{2y_0}+\frac{1}{y_+}+\frac{1}{y_-}=0$, specific for the quadratic superpotential we are studying. Hence this region of the coupling constant space corresponds to the \emph{massive} phase of the $pCFT_2$, which is described ``holographically'' by \emph{singular} DW metrics giving rise to $(a)AdS_3$ space-time with a naked singularity, as one can see from the generic form (\ref{eq10}) of our DW solutions. We have therefore an example of a phase transition from a massive to a massless phase that occurs at the UV critical point $\sigma_+^*$. For the description of such a phase transition we need \emph{two different} NMG solutions having coinciding boundary conditions $(\sigma_A^*,\Lambda_{eff}^A, \Delta_A)$ at their common boundary $z\rightarrow\infty$.
We next briefly discuss the possibility of extending our $d=3$ superpotential constructions (\ref{eq5}) to the case of $d>3$ NMG models: \begin{eqnarray} S=\frac{1}{\kappa^2}\int d^dx\sqrt{-g}\Big\{R+\frac{1}{m^2}\left(R^{\mu\nu}R_{\mu\nu}-\frac{d}{4(d-1)}R^2\right)-\kappa^2\Big(\frac{1}{2}|\vec{\nabla}\sigma|^2+V(\sigma)\Big)\Big\}\label{dacao} \end{eqnarray} As in the $d=3$ case, the static flat DW solutions of such a $d$-dimensional NMG model are defined by \begin{eqnarray} ds^2=dz^2+e^{2\beta\varphi(z)}\eta_{ij}dx^idx^j,\quad \sigma=\sigma(z),\quad \beta=\frac{1}{\sqrt{2(d-1)(d-2)}},\quad \alpha=(d-1)\beta,\label{ddw} \end{eqnarray} which leads us to a system of \emph{second} order equations of the type (\ref{eq4}), but with different $d$-dependent coefficients. It is then natural to introduce the following generalization of the $d=3$ NMG superpotential and of the $I^{st}$ order system (\ref{eq5}) for arbitrary $d>3$: \begin{eqnarray} \dot{\varphi}&=&-2\kappa\alpha W(\sigma),\quad \dot{\sigma}=\frac{2}{\kappa}W'(\sigma)\left(1+\kappa^2\frac{(d-4)}{2(d-2)}\frac{W^2}{m^2}\right)\\ V(\sigma)&=&2(W')^2\left(1+\kappa^2\frac{(d-4)}{2(d-2)}\frac{W^2}{m^2}\right)^2-2\kappa^2\alpha^2W^2\left(1+\kappa^2\frac{(d-4)}{4(d-2)}\frac{W^2}{m^2}\right)\label{dsys} \end{eqnarray} which in the $d=5$ case reproduces the Low-Zee superpotential \cite{zee} for the Gauss-Bonnet (GB) extended 5D gravity. It is worthwhile to mention the well known fact (see ref. \cite{flat} for example) that for \emph{conformally flat} solutions ($i.e.$ of vanishing $d>3$ Weyl tensor, as in the case of the DW's (\ref{ddw})) the action of the Gauss-Bonnet-Einstein gravity becomes identical to the $d>3$ NMG one (\ref{dacao}). Therefore the solutions of eqs. (\ref{dsys}) describe the flat static DW's of both models. The form of eqs. (\ref{dsys}) above makes it evident that any given superpotential $W_d(\sigma)$ describes qualitatively different matter potentials $V_d(\sigma)$ depending on the values of $d=3,4,5$. Hence the properties of the DW solutions of the corresponding NMG models, as well as of the $\beta_d(\sigma)$-functions of their $pCFT_d$ duals, are expected to be rather different depending on the space-time dimensions. We leave the problem of the identification of these $QFT_d$ models and of the geometrical NMG description of their phase structure to our forthcoming paper \cite{nmgd}. Let us emphasize in conclusion the advantages of the superpotential method in the study of the DW properties of the NMG models (\ref{dacao}) as well as of the holographic RG flows in their dual $pCFT_d$ models. As we have shown on the example of the 3D NMG model (\ref{eq1}) with quadratic $W(\sigma)$, the DW solutions provide important information about the phase transitions in its dual 2D model. It is important to note, however, that although we have recognized many of the ingredients of the $AdS_3/CFT_2$ correspondence, such as central charges, scaling dimensions, free energy, etc., the answer to the question of whether and under which conditions such a correspondence takes place for the NMG model (\ref{eq1}) still remains open. The complete identification and the description of all the properties of the dual $pCFT_2$ in terms of the NMG model's solutions requires a better understanding of the apparent ``unitarity discrepancy'' that relates (1-loop) \emph{unitary} massive 3D gravity to \emph{non-unitary} $CFT_2$'s of negative central charges in the approximation of small effective cosmological constants.
Negative central charges are known to appear in various contexts in (supersymmetric) $CFT_2$'s. For example, the classical and semi-classical limits $\hbar\rightarrow 0$ of the central charges $c_q=1-6\frac{Q^2_{cl}}{\hbar}$ of the so-called minimal \emph{unitary} Virasoro algebra models (as well as of their $N=1$ SUSY extensions) are large \emph{negative} numbers \cite{fat}. There also exist families of non-unitary 2D models representing interesting statistical mechanical problems, as for example the Lee-Yang ``edge singularity'' $CFT_2$ of $c=-\frac{22}{5}$. We recall these facts just to indicate a few directions for further investigations that might result in the exact identification of the 2D QFT's dual to 3D New Massive Gravity models. \emph{Acknowledgments.} We are grateful to H.L.C. Louzada for his collaboration in the initial stage of this work and to C.P. Constantinidis for the discussions and for critical reading of the manuscript. This work has been partially supported by PRONEX project number 35885149/2006 from FAPES-CNPq (Brazil).
\section{Introduction} \label{sec:intro} Galaxies with central surface brightnesses fainter than that of the night sky ($\sim$22.5 $B$ mag arcsec$^{-2}$) are defined as Low Surface Brightness Galaxies (LSBGs; e.g., \citealt{Impey1997,Impey2001,Ceccarelli2012}). In the local universe, LSBGs take up a fraction of $\sim$30\%--60\% in number \citep[e.g.][]{McGaugh1995,McGaugh1996,Bothun1997,O'Neil2000,Trachternach2006,Haberzettl2007} and $\sim$20\% in dynamical mass \citep[e.g.][]{Minchin2004} among all the galaxies. Generally speaking, LSBGs are abundant in H{\sc{i}} gas but deficient in metals ($\leq 1/3$ solar abundance) and dust \citep[e.g.][]{McGaugh1994,Matthews2001}, and they have fairly low star formation rates (SFRs) \citep[e.g.,][]{Das2009,Galaz2011,Lei2018}, as is evident from the small number of H{\sc{ii}} regions inhabiting their diffuse disks. They also have lower stellar mass densities compared to their High Surface Brightness Galaxy (HSBG) counterparts (normal galaxies) \citep[e.g.][]{de Blok1996,Burkholder2001,O'Neil2004,Trachternach2006}. These special properties imply that LSBGs could have formation and evolutionary histories different from those of normal galaxies \citep[e.g.][]{Huang2012}. Galaxy stellar mass, $M_*$, is a critical physical property for studying galaxy formation and evolution, because its growth is directly related to galaxy formation and evolution. A widely used simple method to estimate $M_*$ is to multiply the measured galaxy luminosity, $L$, by a fixed stellar mass-to-light ratio ($\gamma^{*}$). However, different stellar populations have very different spectral energy distributions, with younger stars dominating bluer bands and older stars dominating redder bands. Therefore, the $\gamma^{*}$s need to be calibrated separately for different stellar populations and different photometric bands. In this case, the technique of fitting the broadband photometry to stellar population synthesis (SPS) models (SED fitting) is used to estimate the stellar mass of the galaxy. Usually, the stellar masses of ``normal'' galaxies (i.e., those without pathological star formation histories, SFHs) can be recovered at the $\sim$0.3 dex level (1$\sigma$ uncertainty) by broadband SED fitting. This uncertainty does not include potential systematics in the underlying SPS models \citep{Conroy2013}. Another convenient technique to derive the galaxy stellar mass is to use the relation between the color and the stellar mass-to-light ratio of galaxies. So far, the $\gamma^{*}$s in different photometric broad bands have been derived as a function of galaxy colors by various studies \citep[e.g.,][]{Bell2001,Bell2003,Portinari2004,Zibetti2009,McGaugh2014}. However, the existing prescriptions are mostly calibrated for normal galaxies, which have very different properties from LSBGs. The star-forming main sequence (log $SFR$--log $M_*$) of dwarf LSBGs has a steep slope of approximately unity, distinct from the shallower slope of more massive spirals \citep{McGaugh2017}. Generally, a slope of unity would agree with galaxies forming early in the universe and subsequently forming stars at a nearly constant specific star formation rate (sSFR). In contrast, a shallow slope implies that low-mass galaxies were formed recently on a shorter time scale with a higher sSFR (the sSFR is roughly inversely proportional to the age of the stellar disk). These different formation scenarios could potentially lead to different $\gamma^{*}$s.
Therefore, we estimate the $\gamma^{*}$s and stellar masses of a sample of over 1,000~LSBGs defined in our previous work \citep{Du2015,Du2019}, by fitting their multi-band (from UV to NIR) spectral energy distributions (SEDs) to stellar population models, and investigate the correlations between the $\gamma^{*}$ and the observed colors for the LSBG sample. We briefly introduce the LSBG sample and the multi-wavelength photometric data in \S 2. We then describe the data reduction and photometry in \S 3. We demonstrate the multi-band SED fitting process and show the derived LSBG stellar mass distribution in \S 4. We explore the distributions of the derived LSBG M/Ls and their correlations with galaxy colors in \S 5 and discuss the results in \S 6. Throughout this paper, the distances we used to convert apparent magnitude to absolute magnitude and luminosity are from the ALFALFA catalog \citep{Haynes2018}, which adopted a Hubble constant of H$_{0}$=70 km~s$^{-1}$~Mpc$^{-1}$. Magnitudes in this paper are all in the AB magnitude system. \section{Sample and data} \label{sec:data} \subsection{LSBG Sample}\label{subsec:sample} We have selected a sample of 1129 LSBGs from the combination of the $\alpha.40$ H{\sc{i}} survey \citep{Haynes2011} and the SDSS DR7 photometric survey \citep{Abazajian2009}. We briefly introduce the sample selection and properties below; details and related studies can be found in \citet{Du2015} and \citet{Du2019}. LSBG measurements are very sensitive to the sky background, so it is crucial to precisely subtract the sky background from the galaxy image before photometry. Unfortunately, the sky backgrounds have been overestimated by the SDSS photometric pipeline for galaxy images in the $ugriz$ bands, which consequently results in an average underestimation of $\sim$0.2 mag in luminosity for bright galaxies \citep{Lauer2007,Liu2008,Hyde2009,He2013} and $\sim$0.5 mag for LSBGs \citep{Lisker2007}. To improve the sky subtraction, we re-estimate the sky background of the $g$- and $r$-band images for each galaxy in the $\alpha.40$-SDSS DR7 sample, using a fully tested method of sky estimation \citep{Zheng1999}. The method fits all the sky pixels on the object-masked image row-by-row and column-by-column, and is designed for a better estimate of the sky background map for galaxies with faint outskirts \citep{Zheng1999,Wu2002} and for LSBGs \citep{Du2015}. More details about this sky-background-estimation method and its applications to bright galaxies and LSBGs have been reported in \citet{Zheng1999}, \citet{Wu2002}, and \citet{Du2015}. On the sky-subtracted images in the $g$ and $r$ bands, we measure the magnitudes of galaxies using the SExtractor code \citep{Bertin1996}, and fit the radial surface brightness profiles of galaxies to exponential profile models using the Galfit code \citep{Peng2002}. Then, the magnitude from SExtractor and the results from Galfit, including the disk scale length, $r_{s}$, and the minor-to-major axial ratio, $b/a$, are used to calculate the disk central surface brightness in the $g$ and $r$ bands, which are then combined and converted to the disk central surface brightness in the $B$ band, $\mu_{0}$(B), according to the transformation formula of \citet{Smith2002}. Finally, based on the traditional definition of LSBGs, we select 1129 non-edge-on galaxies ($b/a > 0.3$ in both $g$ and $r$ bands) with $\mu_{0}(B) \geq$ 22.5 mag arcsec$^{-2}$ from the entire $\alpha.40$-SDSS DR7 sample (12,423 galaxies) to form our LSBG sample.
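For concreteness, the short sketch below (an illustration of the bookkeeping, not code from the actual pipeline) assembles these pieces for a single hypothetical galaxy: the disk central surface brightness follows from the total magnitude and the exponential-disk structural parameters, and the $B$-band value is obtained from $g$ and $r$ with the commonly quoted $(0.47, 0.17)$ color-term coefficients, which should be checked against \citet{Smith2002} before any quantitative use.

\begin{verbatim}
# Illustrative computation of the disk central surface brightness mu_0(B)
# from a total magnitude plus Galfit-style structural parameters.  The
# inputs are invented for the example; the (0.47, 0.17) g,r -> B color
# coefficients are assumed values to be checked against Smith et al. (2002).
import math

def mu0(mag, r_s_arcsec, ba):
    # exponential disk: total flux = 2*pi*Sigma_0*r_s^2*(b/a), hence
    # mu_0 = m + 2.5*log10(2*pi*r_s^2*(b/a)) in mag arcsec^-2
    return mag + 2.5 * math.log10(2.0 * math.pi * r_s_arcsec**2 * ba)

g_mag, r_mag = 17.8, 17.4      # hypothetical total magnitudes
rs_g, rs_r = 6.5, 6.2          # hypothetical disk scale lengths [arcsec]
ba = 0.7                       # minor-to-major axial ratio (> 0.3 cut)

mu0_g = mu0(g_mag, rs_g, ba)
mu0_r = mu0(r_mag, rs_r, ba)
mu0_B = mu0_g + 0.47 * (mu0_g - mu0_r) + 0.17   # assumed g,r -> B transform
print(mu0_B, mu0_B >= 22.5)    # LSBG if mu_0(B) >= 22.5 mag arcsec^-2
\end{verbatim}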
The H{\sc{i}}-selected LSBG sample, inhabiting low-density environments \citep{Du2015} and dominated by dwarf galaxies in luminosity and late-type galaxies in morphology, has extended the parameter space covered by the existing LSBG samples to fainter luminosity, lower H{\sc{i}} gas mass, and bluer color \citep{Du2019}. More details are available in \citet{Du2015} and \citet{Du2019}. \subsection{Photometric data} We collected the available scientific images for each galaxy of our LSBG sample in the passbands of FUV and NUV ($GALEX$; \citealt{Martin2005}), $ugriz$ (SDSS DR7; \citealt{York2000}), and $YJHK$ (UKIDSS LAS DR10; \citealt{Lawrence2007}). We note that the sky areas covered by the different surveys do not completely overlap each other due to different survey strategies. Therefore, not all the galaxies in our LSBG sample are available in all the 11 passbands. We list the number of galaxies which are available in each of the 11 passbands in Table~\ref{tab:multibands}. All the LSBGs in our sample have been observed by the SDSS DR7. However, combining the optical with the UV bands, 924 galaxies are available in both the SDSS DR7 and GALEX DR6 surveys. Combining the optical with the NIR bands, 672 galaxies have been observed by both the SDSS DR7 and UKIDSS LAS DR10 surveys. Furthermore, combining the optical, UV and NIR, only 544 galaxies are available in all the 11 bands. \begin{table} \begin{center} \caption{The number of LSBG scientific images available in each passband.} \label{tab:multibands} \begin{tabular}{lc} \hline Passband & Number\\ \hline GALEX FUV & 924 \\ GALEX NUV & 932 \\ SDSS u & 1129\\ SDSS g & 1129\\ SDSS r & 1129\\ SDSS i & 1129\\ SDSS z & 1129\\ UKIDSS Y & 717\\ UKIDSS J & 697\\ UKIDSS H & 709\\ UKIDSS K & 717\\ \hline \end{tabular} \end{center} \end{table} \subsection{Multi-band photometry}\label{sec:phot} \subsubsection{Image reduction} The scientific images for the galaxies in our LSBG sample are all bias subtracted, dark subtracted (only for the NIR frames), flat-fielded and flux-calibrated by the survey teams. We therefore only need to start the data reduction from the sky subtraction. As for the UV sky background, we directly use the sky background map provided by the GALEX team. However, for the optical and NIR bands, only an average sky value is provided for a galaxy image by SDSS DR7 or UKIDSS LAS DR10, not to mention the problem of the overestimation of the sky background by the SDSS DR7 photometric pipeline (see \S~\ref{sec:data}). So we estimate the sky background map for each LSBG image in each of the $ugriz$ and $YJHK$ bands, using the row-by-row and column-by-column fitting method \citep{Zheng1999} on the image with all the detected objects masked out. As a fully tested sky estimation method, it has been successfully applied to bright galaxies with faint extended outskirts \citep{Zheng1999,Wu2002} and to LSBGs \citep{Du2015}, and the details are elaborated in \S 3.1 of \citet{Du2015}. After subtracting the sky background from each LSBG image, we use SExtractor \citep{Bertin1996} to make a systematic, homogeneous photometric measurement on the LSBG images in all the 11 bands, by performing the pixel-to-pixel photometry in dual-image mode. In the dual-image mode, the galaxy image in the $r$ band is used as a reference for source detection and photometric aperture definition (center position, size and shape), and then the images in all other passbands for the galaxy are photometrically measured within the same aperture as defined by the reference image.
So, the dual-image mode of SExtractor requires that the galaxy images in all passbands match the reference image in dimension, orientation, pixel scale, image size, and object position. We use the sky-subtracted $r$-band galaxy image as the reference, so we have to resample the sky-subtracted images of the galaxy in all other bands (FUV, NUV, $ugiz$, $YJHK$) using cubic interpolations to match the reference image in image orientation, pixel scale, image size, and object position. The matched images in all bands are then trimmed to the same size of 200~arcsec $\times$ 200~arcsec, with the target galaxy at the center of the trimmed image. Then, we keep the central galaxy (the target) on each trimmed image, and mask all the other detected objects (by SExtractor with 3$\sigma$ as the minimum detection threshold) from the trimmed image to avoid the light contamination from adjacent objects. We then fill the masked regions with the average background pixel value of each object-masked image. Such reduced images are ready for the further photometry in \S~\ref{sec:photometry}. For a better illustration of the image reduction process, we take the $r$-band image of the galaxy AGC 192669 in our LSBG sample as an example in Figure~\ref{fig:general}, where panels (a) and (c), respectively, show the original and sky-subtracted image frames. In order to more clearly demonstrate the quality of our sky subtraction, in Figure 2 we compare the distribution of the background pixel values of the sky-subtracted image (panel (c)) with that of the original image (panel (a)) after subtraction of a simple mean background value; the image background is closer to zero after our sky subtraction (black line). \subsubsection{Photometry}\label{sec:photometry} We use SExtractor to define the position and aperture of the galaxy in the $r$-band image, and then use the $r$-band position and aperture information to measure the galaxy magnitude within each filter of $FUV$, $NUV$, $u$, $g$, $r$, $i$, $z$, $Y$, $J$, $H$, and $K$ (using the SExtractor dual-image mode). As the aperture definitions do not vary between wavebands, this measurement gives internally consistent colors. The measured magnitudes in all bands are corrected for Galactic extinction using the prescription of \citet{Schlafly2011}. We show the aperture of the galaxy AGC 192669 on the reduced image in each band in Figure~\ref{fig:phot} as an example. This aperture is the automatic aperture (AUTO): among the various magnitude types that SExtractor provides (isophotal, corrected isophotal, fixed-aperture, AUTO, and Petrosian), the AUTO aperture, inspired by Kron's ``first moment'' algorithm (see details in \citealt{Kron1980}), is a flexible and accurate elliptical aperture whose elongation $\epsilon$ and position angle $\theta$ are defined by the second order moments of the object's light distribution. Within this aperture, the characteristic radius $r_{1}$ is defined as $r_{1}= \frac{\sum rI(r)}{\sum I(r)}$, which is weighted by the light distribution function. \citet{Kron1980} and \citet{Infante1987} have verified that, for stars and galaxy profiles convolved with a Gaussian, more than 90$\%$ of the flux is expected to lie within a circular aperture of radius $kr_{1}$ if $k=2$, almost independently of their magnitudes. This changes if an ellipse with $\epsilon kr_{1}$ and $\frac{kr_{1}}{\epsilon}$ as the principal axes is considered.
By choosing a larger $k=2.5$, more than 96$\%$ of the flux is captured within the elliptical aperture. The AUTO magnitudes are thus intended to give a precise estimate of ``total magnitudes'', at least for galaxies. More details about the AUTO photometry can be found in the SExtractor manual and in \citet{Kron1980}, \citet{Infante1987}, and \citet{Bertin1996}. In our measurements, we keep $k=2.5$, the setting recommended by SExtractor \citep{Bertin1996}, and our subsequent analysis is based on the AUTO magnitudes in the AB magnitude system.

\section{Stellar Mass}\label{sec:mstar}
Galaxies emit electromagnetic radiation over the full possible wavelength range, and the distribution of energy over wavelength is called the spectral energy distribution (SED), which is our primary source of information about the properties of an unresolved galaxy. In general, the different physical processes occurring in galaxies all leave their imprints on the global and detailed shapes of the spectra or SEDs. Therefore, we can constrain the galaxy stellar mass by fitting models to its SED. In \S~\ref{sec:photometry}, we derived the multi-band magnitudes of each LSBG in our sample, so we can construct the SED for each LSBG. Since the UV light comes from regions where hot, young stars reside and the NIR light is the best tracer of the old stars that dominate the stellar mass of a galaxy, our SEDs covering multiple bands from the UV to the NIR allow us to include the contributions of both young and old stars to the stellar mass. It should be noted that, for the LSBGs that have not been observed in all 11 bands, we only use the available bands to construct the SED.

\subsection{SED-fitting}
MAGPHYS \citep{da Cunha2008} is one of the widely used SED-fitting codes \citep[e.g.,][]{Zheng2015}. It uses the 2007 version of the \citet{Bruzual2003} stellar population synthesis (SPS) model (CB07), covering a wavelength range from 91~\AA~to 160~$\mu$m, ages from 0.1 Myr to 20 Gyr, and metallicities (Z) from 0.02 to 2 times solar. The star formation history (SFH) is described by an underlying continuous model (exponentially declining star formation) with instantaneous bursts superimposed. The initial mass function (IMF) of \citet{Chabrier2003} is assumed, and the simple two-component dust model of \citet{Charlot2000} is adopted to describe the attenuation of the stellar light by dust. Using MAGPHYS \citep{da Cunha2008}, we fit the SPS models to the multi-band SED of each LSBG in our sample to estimate the galaxy stellar mass. As an example, we show the best-fitting model and residual from MAGPHYS for the LSBG AGC 192669 in the top two panels of Figure~\ref{fig:sed_fitting}. We note that MAGPHYS gives both the stellar mass of the best-fitting model and the stellar mass probability distribution. In this paper, we use the mean value of the stellar mass distribution as our derived galaxy stellar mass, $M_{*}$. Checking the fitting results from MAGPHYS, we found that MAGPHYS always returns the lower limit of the model stellar mass for 77 galaxies out of the total LSBG sample (1,129 galaxies), so we exclude these 77 galaxies from the subsequent analysis; the sample for further investigation includes 1,052 LSBGs (the Research sample; R sample). In order to check whether the R sample (1,052 LSBGs) is representative of the total LSBG sample (T sample; 1,129 LSBGs), we compare them in terms of physical properties in Figure~\ref{fig:subsample}.
Compared with the T sample (black), the R sample (green) has quite similar distributions in the main properties of magnitude, color, central surface brightness, size, and color versus H{\sc{i}} mass, except that the R sample lacks the very low-redshift, faint, and H{\sc{i}}-poor galaxies. Therefore, the R sample is a good representative of our total LSBG sample, and our further analysis in this paper is based on the R sample, which we may still refer to as the LSBG sample below.

\subsection{Stellar mass distribution}
The stellar masses derived with MAGPHYS are shown in Figure~\ref{fig:mstar_distri} for the R sample, ranging from log$M_{*}/M_{\odot}\sim$ 7.1 to 11.1, with a mean log$M_{*}/M_{\odot}$=8.47 and a median log$M_{*}/M_{\odot}$=8.48, which is considerably lower than the stellar mass of normal galaxies. Furthermore, LSBGs with lower surface brightnesses (fainter than 25.0 mag~arcsec$^{-2}$) tend to have lower stellar masses, and LSBGs with higher stellar masses tend to have higher surface brightnesses (right panel of Figure~\ref{fig:mstar_distri}).

\section{Stellar Mass-to-light ratio}\label{sec:m2l}
The best approach to measuring the stellar mass-to-light ratio, $\gamma^{*}$, is to fit SEDs simultaneously in multiple passbands, with at least one in the NIR to break the age-metallicity degeneracy. We show the $\gamma^{*}$ derived from fitting the UV-optical-NIR SEDs of our galaxies in Figure~\ref{fig:m2l_distr}. The UV is strongly affected by the young, luminous, blue stars formed in the recent star formation history (SFH) of a galaxy. These stars produce a large amount of UV light and contribute most of the UV fluxes of galaxies, so having the UV bands involved in the galaxy SED fitting should provide stronger constraints on the SFH, average age, metallicity, stellar mass, and $\gamma^{*}$ of galaxies via the luminosity-weighted SED fitting. Compared to the average $\gamma^{*}$ measured in the optical and NIR bands, the $\gamma^{*}$ in the UV bands suffers larger perturbations because those young, luminous, blue stars contribute little to the stellar mass of a galaxy \citep{McGaugh2014}, so it is not informative to investigate the UV-band $\gamma^{*}$ derived from SED fitting. Here, we only show the log $\gamma^{*}$ measured in the optical $ugriz$ and NIR $YJHK$ bands for the R LSBG sample in Figure~\ref{fig:m2l_distr} (a) $\sim$ (i); the log $\gamma^{*}$ ($\gamma^{*}$) values are -0.48 (0.33), -0.40 (0.40), -0.33 (0.47), -0.40 (0.40), -0.39 (0.41), -0.40 (0.40), -0.46 (0.35), -0.55 (0.28), and -0.66 (0.22) in the $ugrizYJHK$ bands, respectively, which are lower than the $\gamma^{*}$ of normal galaxies. In Figure~\ref{fig:m2l_distr} (j), the mean $\gamma^{*}$ slightly declines as the wavelength band moves from $r$ to $K$. A similar but stronger declining trend of $\gamma^{*}$ from $V$ to $[3.6]$ through $I$ and $K$ has been reported for normal disk galaxies \citep{McGaugh2014}. According to Figure 1 in \citet{Wilkins2013}, the $\gamma^{*}$ measured at different wavelengths for a stellar population depends on the specific SED shape of the population, with a lower $\gamma^{*}$ at wavelengths with a higher specific SED flux. So, the slight declining trend of $\gamma^{*}$ of our sample from $r$ to $K$ implies that the LSBGs of our sample overall have a slightly rising SED shape from $r$ to the NIR.
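For reference, the band-by-band $\gamma^{*}$ quoted above follows directly from the SED-fit stellar mass and the absolute magnitude in each band. A minimal sketch (in Python) is given below; the solar absolute AB magnitudes are illustrative values, not the exact calibration used by MAGPHYS.

\begin{verbatim}
import numpy as np

# Illustrative solar absolute (AB) magnitudes per band; the exact
# values depend on the adopted calibration.
M_SUN = {"g": 5.11, "r": 4.65, "i": 4.53, "z": 4.50}

def mass_to_light(log_mstar, abs_mag, band):
    """log gamma* = log(M*/L_band), with L_band in solar units."""
    log_lum = -0.4 * (abs_mag - M_SUN[band])  # log L/L_sun
    return log_mstar - log_lum

# e.g. a log M* = 8.5 galaxy with M_r = -16.5:
print(mass_to_light(8.5, -16.5, "r"))  # ~ log gamma*_r = 0.04
\end{verbatim}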
In Figure~\ref{fig:m2l}, the $\gamma^{*}$ measured in each of the $ugrizYJHK$ bands is shown against the absolute magnitude (left column) and the stellar mass (right column) for the R LSBG sample. The distribution of the $z$-band $\gamma^{*}$ for a sample of galaxies drawn from the SDSS main galaxy sample (bright galaxies with Petrosian $r$-band magnitudes in the range 14.5 $< r <$ 17.7 mag) is strongly dependent on galaxy luminosity (see Fig. 13 of \citealt{Kauffmann2003}). In contrast, the $\gamma^{*}$ for the bright galaxies (e.g., brighter than around -13 mag in the $r$ band) of our LSBG sample changes only slowly with absolute magnitude in any band from $u$ to $K$ (left column): when we fit straight lines to only the bright galaxies (with absolute magnitudes in the corresponding band brighter than -16 mag) in each panel of the left column, the slopes are all around zero (within $\pm$0.04). However, $\gamma^{*}$ for the R LSBG sample slightly increases with the galaxy stellar mass (right column), and this increasing trend (represented by the black dotted line, a linear fit to all the R-sample galaxies) is stronger in the shorter/bluer wavelength bands than in the longer/redder bands. This is quantitatively evidenced by the steepest slope of the fitting line in $u$ (top panel) and a nearly flat slope of the fitting line in the $K$ band (bottom panel) in Figure~\ref{fig:m2l}. Furthermore, the scatter of the data points in $\gamma^{*}$ in any band narrows at higher stellar mass (right column), indicating less diversity in star formation history (SFH).

\section{Color vs. stellar mass-to-light ratio}\label{sec:ms}
\subsection{Color - $\gamma^{*}$ relation for our LSBGs}
Various prescriptions for predicting $\gamma^{*}$ from observed colors have previously been calibrated for normal galaxies \citep[e.g.,][]{Bell2001,Bell2003,Portinari2004,Zibetti2009,McGaugh2014}. In contrast, our correlation is based on a sample of LSBGs, which is dominated by dwarf LSBGs (Figure 1 in \citealt{Du2019}) with $\sim$ 50$\%$ of the R sample fainter than $r$= 17.5~mag (Figure 2(b) in \citealt{Du2019}), almost all fainter than 21~mag~arcsec$^{-2}$ in $r$-band surface brightness, $\mu_{r}$ (Figure 2(d) in \citealt{Du2019}), and 73$\%$ bluer than $g$-$r$=0.4~mag (Figure~\ref{fig:subsample}(b)). As dwarf LSBGs have been reported to form a distinct sequence from more massive normal galaxies in the star-forming main sequence \citep{McGaugh2017}, we are motivated to study the $\gamma^{*}$-color relation (MLCR) based on our LSBG sample, which has a large fraction of dwarf LSBGs. We fit relations of the $\gamma^{*}$ measured in each of the $grizJHK$ bands to the optical colors ($u-g$, $u-r$, $u-i$, $u-z$, $g-r$, $g-i$, $g-z$, $r-i$, and $r-z$) and the NIR colors ($J-H$, $J-K$, and $H-K$), respectively, for the LSBG sample, in the form of log($\gamma^{*}$) = $a_{\lambda}$ + $b_{\lambda}\times$color. The fitting method is the bi-square weighted line fitting method (`biweight'), the same method used for the \citet{Bell2003} MLCRs; a minimal sketch of the fitting and of the correlation measurement is given below. To test the goodness of fit to our data, we calculate and show the Pearson correlation coefficient (PCC) for each of these fits in Table~\ref{tab:Pearson}. The PCC is a measure of the linear correlation between two variables; it takes values between +1 and -1, where +1 (-1) is a total positive (negative) linear correlation and 0 is no linear correlation.
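As referenced above, the following is a minimal sketch (in Python) of the fitting procedure: an iteratively reweighted least-squares line fit with Tukey bisquare weights, plus the PCC. The weights here act on vertical residuals, a simplification of the perpendicular-distance weighting, and the tuning constant and iteration count are conventional choices rather than values from the text.

\begin{verbatim}
import numpy as np

def biweight_line_fit(x, y, c=4.685, n_iter=10):
    """Robust straight-line fit y = a + b*x using Tukey's
    bisquare weights on the residuals (IRLS)."""
    a, b = np.polyfit(x, y, 1)[::-1]          # ordinary LS seed
    for _ in range(n_iter):
        r = y - (a + b * x)
        s = np.median(np.abs(r)) / 0.6745     # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
        W = np.sum(w)
        xb, yb = np.sum(w * x) / W, np.sum(w * y) / W
        b = np.sum(w * (x - xb) * (y - yb)) / np.sum(w * (x - xb) ** 2)
        a = yb - b * xb
    return a, b

def pearson_cc(x, y):
    return np.corrcoef(x, y)[0, 1]

# e.g. a_l, b_l = biweight_line_fit(g_minus_r, log_gamma_g)
#      pcc = pearson_cc(g_minus_r, log_gamma_g)
\end{verbatim}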
As shown in Table~\ref{tab:Pearson}, using $u$-colors as a $\gamma^{*}$ estimator for our sample results in low PCCs ($<$ 0.5), presumably for two reasons. First, compared with the relatively redder $g$ and $r$ bands, the $u$ band is more affected by recently formed young stars, which cause larger perturbations to the average $\gamma^{*}$ in the $u$ band than in the $g$ and $r$ bands. Second, the SDSS $u$-band images are reported to have scattered-light problems that may cause relatively larger errors in the $u$-band fluxes than in the other SDSS bands and thus perturb the $u$-colors (see the `Caveats' on the SDSS websites). In this case, the $u$-colors do not seem to be good estimators of $\gamma^{*}$ for our sample. \citet{McGaugh2014} found that the solar-metallicity model of \citet{Schombert2009} changes in color as it ages from 1 to 12 Gyr by $\Delta$(B-V)=0.37 but only by $\Delta$(J-K)=0.03, demonstrating that NIR colors (such as $J-H$, $J-K$, and $H-K$ here) are much less sensitive indicators of $\gamma^{*}$ than optical colors. It is thus expected that using NIR colors as $\gamma^{*}$ estimators results in nearly zero PCCs in Table~\ref{tab:Pearson}; the $g-r$ and $g-i$ colors, with the greatest PCCs (mostly $>$0.5), are instead more sensitive indicators of the $\gamma^{*}$ in the $g$, $r$, $i$, and $z$ bands for our sample. The PCCs decline as the wavelength band goes redder. This is because the variation of $\gamma^{*}$ with color is expected to be minimized in the NIR, so the $\gamma^{*}$ in a NIR band is almost a constant, nearly independent of color. We therefore focus on using the $g-r$ and $g-i$ colors independently as estimators of the $\gamma^{*}$ measured in the $griz$ bands. We show the ``robust'' bi-square weighted line fits (black solid lines) of log $\gamma_{*}^{j}$ ($j$=$g$, $r$, $i$, and $z$) with $g-r$ color in the left column of Figure~\ref{fig:m2l_color_gr} and with $g-i$ color in the left column of Figure~\ref{fig:m2l_color_gi}. The coefficients of the bi-square weighted linear fits are tabulated in Table~\ref{tab:m2l_color}, which allows a direct comparison with tables in other published papers, such as Table 7 of \citet{Bell2003} and Table B1 of \citet{Zibetti2009}. The detailed comparison is presented in \S~\ref{sec:com}.
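In practice, the fitted MLCRs are applied as one-line estimators. With the $g-r$ coefficients for the $g$ band from Table~\ref{tab:m2l_color}, a minimal sketch (in Python) of predicting a stellar mass from a color and an absolute magnitude reads as follows; the solar magnitude is an illustrative value.

\begin{verbatim}
# MLCR from Table 3: log gamma*_g = -0.857 + 1.558 * (g - r)
A_G, B_G = -0.857, 1.558
M_SUN_G = 5.11          # illustrative solar AB magnitude in g

def log_mstar_from_mlcr(g_minus_r, abs_mag_g):
    log_gamma = A_G + B_G * g_minus_r          # log (M*/L_g)
    log_lum = -0.4 * (abs_mag_g - M_SUN_G)     # log (L_g/L_sun)
    return log_gamma + log_lum                 # log (M*/M_sun)

print(log_mstar_from_mlcr(0.3, -16.0))  # blue dwarf: ~8.05
\end{verbatim}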
\begin{table*} \caption{Pearson correlation coefficients for the MLCRs.} \label{tab:Pearson} \begin{center} \begin{tabular}{lccccccc} \hline \hline color & PCC$_{g}$ & PCC$_{r}$ &PCC$_{i}$ & PCC$_{z}$ &PCC$_{J}$ & PCC$_{H}$ &PCC$_{K}$ \\ \hline \hline $u-g$& 0.02& 0.05& 0.01& 0.24& 0.00& 0.04& 0.06\\ $u-r$& 0.25& 0.24& 0.20& 0.40& 0.20& 0.21& 0.09\\ $u-i$& 0.26& 0.28& 0.19& 0.47& 0.21& 0.22& 0.13\\ $u-z$& 0.48& 0.47& 0.49& 0.21& 0.38& 0.37& 0.30\\ $g-r$& 0.65& 0.49& 0.56& 0.39& 0.49& 0.45& 0.36\\ $g-i$& 0.67& 0.61& 0.48& 0.59& 0.51& 0.42& 0.37\\ $g-z$& 0.49& 0.46& 0.52& 0.02& 0.43& 0.38& 0.39\\ $r-i$& 0.11& 0.17& 0.03& 0.29& 0.07& 0.06& 0.04\\ $r-z$& 0.28& 0.31& 0.34& 0.16& 0.22& 0.25& 0.29\\ $J-H$& 0.09& 0.04& 0.03& 0.02& 0.29& 0.17& 0.08\\ $J-K$& 0.00& 0.06& 0.06& 0.02& 0.16& 0.03& 0.27\\ $H-K$& 0.09& 0.10& 0.11& 0.02& 0.07& 0.11& 0.30\\ \hline \hline \end{tabular} \end{center} \end{table*}

\begin{table*} \caption{The fitting parameters for the MLCRs in the form of log($\gamma^{*}$)=$a_{\lambda}$+($b_{\lambda}\times$color).} \label{tab:m2l_color} \begin{center} \begin{tabular}{lcccccccc} \hline \hline color & $a_{g}$ & $b_{g}$ &$a_{r}$ & $b_{r}$ &$a_{i}$ & $b_{i}$ &$a_{z}$ & $b_{z}$\\ \hline \hline $g-r$&-0.857&1.558&-0.700&1.252&-0.756&1.226&-0.731&1.128\\ $g-i$&-1.152&1.328&-0.947&1.088&-0.993&1.057&-0.967&0.996\\ \hline \hline \end{tabular} \end{center} \end{table*}

\subsection{Comparison with other color - $\gamma^{*}$ relations}\label{sec:com}
The stellar mass-to-light ratios of the galaxies in this work are derived by fitting stellar population synthesis (SPS) models to the observed SEDs, so the derived stellar mass-to-light ratios (and hence the stellar mass-to-light ratio versus color relation, the MLCR) depend on the SPS models, since different SPS models do not usually adopt the same prescriptions for the basic ingredients, including the assumed initial mass function (IMF), the stellar evolution theory in the form of isochrones, the treatment of the star formation history (SFH), the metallicity distribution, and the TP-AGB phase. Therefore, we compare our MLCRs with several other representative MLCRs: \citet[hereafter B03]{Bell2003}, \citet[hereafter Z09]{Zibetti2009}, \citet[hereafter IP13]{Into2013}, \citet{Roediger2015} based on the BC03 model (hereafter RC15(BC03)) and on the FSPS model (hereafter RC15(FSPS)), and \citet[hereafter H16]{Herrmann2016}. Since our relations (and those of Z09, RC15, and H16) are based on a \citet{Chabrier2003} stellar IMF, while B03 is based on a `diet' Salpeter IMF and IP13 on a \citet{Kroupa1998} IMF, we have applied zero-point offsets to the MLCRs of B03 and IP13 for a better comparison. B03 noted that 0.15, 0.0, -0.1, -0.15, -0.15, -0.15, or -0.35 dex should be added to their log$\gamma_{*}$ to convert from their `diet' Salpeter IMF to the \citet{Salpeter1955}, \citet{Gould1997}, \citet{Scalo1986}, \citet{Kroupa1993}, \citet{Kroupa2002}, \citet{Kennicutte1983}, or Bottema 63$\%$ maximal IMFs, respectively. We follow \citet{Gallazzi2008} and \citet{Zibetti2009} in applying a -0.093 dex offset to the B03-predicted log$\gamma_{*}$ to convert from the `diet' Salpeter IMF to a \citet{Chabrier2003} IMF. Following H16, we add 0.057 dex to the IP13-predicted log$\gamma_{*}$, which is based on a Kroupa IMF, to adjust it to a Chabrier IMF.
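These IMF conversions are pure zero-point shifts in log $\gamma_{*}$; a minimal sketch (in Python) of how we apply them before the comparison, using the offsets quoted above:

\begin{verbatim}
# dex offsets added to log gamma* to convert to a Chabrier IMF
IMF_TO_CHABRIER = {
    "diet_salpeter": -0.093,   # B03 (Gallazzi+08, Z09)
    "kroupa98":      +0.057,   # IP13 (following H16)
    "chabrier":       0.0,     # Z09, RC15, H16, this work
}

def to_chabrier(log_gamma, imf):
    """Shift a published log gamma* to the Chabrier IMF scale."""
    return log_gamma + IMF_TO_CHABRIER[imf]

# e.g. to_chabrier(b03_log_gamma_g, "diet_salpeter")
\end{verbatim}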
With all the literature MLCRs adjusted (where needed) to a Chabrier IMF, we overplot the relations of log $\gamma_{*}^{j}$ ($j$=$g$, $r$, $i$, and $z$) with $g-r$ color in the left column of Figure~\ref{fig:m2l_color_gr} and with $g-i$ color in the left column of Figure~\ref{fig:m2l_color_gi} as dashed lines of different colors. Among them, the B03 MLCR (red line) has the shallowest slope and the Z09 MLCR (blue line) the steepest slope in each panel, and our linear fitting line (black solid line) for the LSBG data lies among these literature MLCRs in each panel. The B03 MLCRs (red dashed lines) do not appear to fit our LSBG data well and overestimate the stellar mass-to-light ratios of our LSBGs in all panels. Quantitatively, we show the distributions of both the B03 MLCR-based (red) and our MLCR-based (black) $\gamma_{*}$ for our LSBGs in Figure~\ref{fig:offset_Bell}, and roughly derive a systematic offset of $\Delta$log$\gamma_{*}^{j}$= 0.26~dex for $j$=$g$ and $r$ (predicted from the $g-r$ color) between the B03 and our MLCRs. The IP13 (green dashed lines) and RC15(FSPS) (grey dashed lines) MLCRs are slightly higher in all panels, whereas the Z09 (blue dashed lines) and RC15(BC03) (cyan dashed lines) MLCRs fit our LSBG data well and match our fitting lines (except for being a little steeper in most cases). Such differences between the MLCRs from various studies are probably due to the variety of galaxy samples, the line fitting techniques, and the SED model ingredients, mainly consisting of the SP model, IMF, and SFH. All of these factors are discussed in $\S$~\ref{sec:discuss}.

\section{Discussion for variance between MLCRs}\label{sec:discuss}
\subsection{Variety of galaxy samples}
The B03 MLCR is based on a sample of mostly bright (13 $\leq r \leq$17.5~mag) and high surface brightness (HSB; $\mu_{r} <$21~mag~arcsec$^{-2}$) galaxies; the vast majority of their galaxies have 0.4 $< g$-$r<$1.0~mag in color (as shown in Figure 5 of the B03 paper). The RC15 MLCR is based on a representative sample of nearby bright galaxies with apparent $B$-band magnitudes $>$16~mag from the Virgo cluster. IP13 and Z09 are purely theoretical MLCRs. The IP13 MLCR is based on a combination of simple stellar populations (SSPs) from the isochrone data set of the Padova group, which includes a revised (new) prescription for the thermally pulsing asymptotic giant branch (TP-AGB), composite stellar populations (CSPs) generated by convolving SSPs with exponentially declining (or increasing) star formation histories (SFHs), and the disc galaxy models of \citet{Portinari2004}. The Z09 MLCR is based on a Monte Carlo library of 50,000 SSPs from the 2007 version of \citet{Bruzual2003} (CB07), assuming a two-component SFH of a continuous, exponentially declining mode with random bursts superimposed, and it also includes a revised (new) prescription for the TP-AGB phase. Although these studies strive to construct representative samples of the whole galaxy population, they are all biased toward HSB and redder galaxies to varying degrees. For example, the B03 sample lacks sufficient LSBGs and bluer galaxies ($g$-$r <$0.4~mag), which leaves the MLCR weakly constrained at the bluer colors and constrained mostly by the redder galaxies with $g-r >$0.4~mag. In contrast, our MLCR is calibrated on a sample of LSBGs (most of which have $\mu_{r} >$21~mag~arcsec$^{-2}$ and $r >$17.5~mag), 73$\%$ of which are bluer than $g$-$r$=0.4~mag, with the rest between 0.4 $< g$-$r<$ 1.0~mag.
To our knowledge, this work is the first attempt to test the MLCR on LSBGs, which have been proposed to be potentially different from HSB galaxies. For this reason, we additionally overplot the H16 MLCR (magenta dashed lines), which is based on a sample of 34 dwarf irregular (dIrr) galaxies, in the corresponding panels of the left column of Figure~\ref{fig:m2l_color_gr}, since H16 only gives the MLCR of log $\gamma_{*}^{j}$ ($j$=$g$ and $r$) with $g-r$ color. The H16 MLCR appears even much flatter than the B03 MLCR. Therefore, differences in sample properties may lead to subtle or significant differences between the MLCRs. In addition, both our and the B03 MLCRs are based on photometry measured on SDSS images, but our photometric methods differ. B03 adopt the SDSS Petrosian (SDSS Petro) magnitudes, measured within a circular aperture that is twice the radius at which the local surface brightness is 1/5 of the mean surface brightness within that radius. As mentioned in $\S$~\ref{sec:phot}, the SDSS Petro magnitudes have been reported to be underestimated for bright galaxies by $\sim$0.2~mag \citep{He2013} and for dwarf galaxies by up to $\sim$0.5~mag \citep{Lisker2007}, because the SDSS sky estimation algorithm tends to subtract the LSB parts of galaxies as part of the sky background. To correct this problem, B03 roughly subtract 0.1~mag from the SDSS Petro magnitudes of their sample galaxies. For our LSBGs, however, we apply a more accurate sky estimation method \citep{Zheng1999,Wu2002,Du2015} in advance to generate an ``unbiased'' 2-D sky background for each of the SDSS images. The sky-subtracted images are then fed into the SExtractor code to measure the magnitudes of our LSBGs within the Kron elliptical aperture (SEx AUTO; recommended by SExtractor), which is distinct from the SDSS Petrosian circular aperture that B03 adopt. \citet{Hill2011} tested the mean (median) difference between the SDSS Petrosian circular aperture magnitude and the SExtractor Kron elliptical aperture magnitude for the same galaxy sample and found offsets of 0.04 (-0.01), 0.03 (0.01), 0.02 (0.01), 0.04 (0.01), and 0.02 (-0.01)~mag in the $u$, $g$, $r$, $i$, and $z$ bands, respectively. Obviously, the magnitude offsets arising from the difference between the two aperture definitions are small. Here we also test the difference between the SEx AUTO and SDSS Petro (from the SDSS DR7 photometric catalogue) magnitudes for our sample. As shown in Figure~\ref{fig:mag_SDSSour}, the mean (median) offset of SEx AUTO from SDSS Petro magnitudes is 0.18 (0.26), 0.16 (0.23), 0.09 (0.14), 0.31 (0.37), and 0.52 (0.62)~mag in the $u$, $g$, $r$, $i$, and $z$ bands, respectively. Compared to the results of \citet{Hill2011}, these offsets for our LSBGs are larger, which is mainly due to our correction for the underestimation of the SDSS Petro magnitudes by using a different and better sky-subtraction recipe ($\S$~\ref{sec:phot}) than the SDSS one. These offsets in magnitudes for our sample are reasonably consistent with the expected sky-subtraction corrections provided in the previous literature \citep[e.g.][]{He2013,Lisker2007}, but they also cause offsets in the zero-points (of color and log $\gamma_{*}$) between the B03 and our MLCRs, because B03 subtract only 0.1~mag from the SDSS Petro magnitudes of their galaxies to correct the sky-subtraction problem of the SDSS photometry.
For our sample, the mean correction values for the SDSS Petro magnitudes in the $g$, $r$, $i$, and $z$ bands are -0.16, -0.09, -0.31, and -0.52~mag, respectively, while according to the B03 method the correction would be -0.1~mag for all four bands. Our correction therefore results in mean offsets of $\sim$ -0.07~mag in the $g-r$ and 0.15~mag in the $g-i$ color zero-points from the B03 correction for our sample, which implies that our sample would systematically shift redwards by $\Delta$($g$-$r$)$\sim$0.07~mag in all panels of the left column of Figure~\ref{fig:m2l_color_gr}, but bluewards by $\Delta$($g$-$i$)$\sim$0.15~mag in all panels of the left column of Figure~\ref{fig:m2l_color_gi}, if the B03 correction were applied to our sample. In a similar way, we can derive that our sample would shift upwards by $\Delta$log$\gamma_{*}^{j} \sim$ 0.02, 0.0, 0.08, and 0.17 dex for $j$=$g$, $r$, $i$, and $z$ in the corresponding panels of Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}, if the B03 correction were applied to our sample. So, in most panels, the systematic shifts of our sample bluewards in color and especially upwards in stellar mass-to-light ratio would reduce the offset between our sample and the B03 MLCR, although they cannot completely eliminate the disparity between our sample and the B03 MLCR evident in Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}.

\subsection{Fitting techniques}\label{sec:fitting_method}
The MLCRs are given by fitting straight lines to the galaxy data. In $\S$~\ref{sec:ms}, we fit linear MLCRs (black solid lines in the left columns of Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}) to our LSBG data using a bi-square weighting technique (`biweight'), which uses the distance perpendicular to the bisecting line of the data to calculate the weights of the data points and was also adopted for the B03 MLCRs. Since different fitting techniques may result in different coefficients for the fitted line, we show fits obtained with two other line fitting techniques for our LSBG data in the right columns of Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}. The dark green line in each panel is given by MPFITEXY, an IDL fitting procedure that finds the best-fitting straight line through the data, taking the errors in both coordinates into account when calculating the weight of each data point. The orange line in each panel is given by a direct fitting method, which fits the LSBG data to a linear model by minimizing the $\chi^{2}$ with no weights. For a clear comparison, the black solid line previously given by the `biweight' fitting technique is also overplotted in each panel of the right columns of Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}, and the coefficients of the MLCRs given by the three line fitting methods for our LSBG data are tabulated in Table~\ref{tab:m2l_color_methods}. In each panel, the `mpfitexy' (dark green line) and `direct' (orange line) techniques give closely consistent fitting lines, which are much flatter than the `biweight' fitting line (black solid line) and agree much better with the flatter B03 MLCR in slope in each panel of the right columns of Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}.
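The difference between these estimators is easy to see in code. The following is a minimal sketch (in Python) of the `direct' $\chi^{2}$ fit and of an errors-in-both-coordinates fit of the kind MPFITEXY performs; the effective-variance iteration here is a simplified stand-in for the actual IDL routine, not its implementation.

\begin{verbatim}
import numpy as np

def direct_fit(x, y):
    """Unweighted least squares: minimize sum (y - a - b*x)^2."""
    b, a = np.polyfit(x, y, 1)
    return a, b

def fit_xy_errors(x, y, sx, sy, n_iter=20):
    """Straight-line fit with errors in both coordinates:
    minimize sum (y - a - b*x)^2 / (sy^2 + b^2 sx^2)
    (effective-variance weights, iterated to self-consistency)."""
    b, a = np.polyfit(x, y, 1)
    for _ in range(n_iter):
        w = 1.0 / (sy**2 + b**2 * sx**2)
        W = np.sum(w)
        xb, yb = np.sum(w * x) / W, np.sum(w * y) / W
        b = np.sum(w * (x - xb) * (y - yb)) / np.sum(w * (x - xb) ** 2)
        a = yb - b * xb
    return a, b
\end{verbatim}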
This reveals that the resulting fitting line depends to some extent on the fitting technique, and we cannot guarantee that our fitting technique is exactly the same as those used for the other literature MLCRs.

\begin{table*} \caption{The coefficients for the MLCRs from three different line fitting methods for our LSBG data in the form of log($\gamma^{*}$)=$a_{\lambda}$+($b_{\lambda}\times$color).} \label{tab:m2l_color_methods} \begin{center} \begin{tabular}{lcccccccc} \hline \hline color & $a_{g}$ & $b_{g}$ &$a_{r}$ & $b_{r}$ &$a_{i}$ & $b_{i}$ &$a_{z}$ & $b_{z}$\\ \hline \hline \multicolumn{9}{l}{\bf{biweighted (black solid line)}}\\ $g-r$&-0.857&1.558&-0.700&1.252&-0.756&1.226&-0.731&1.128\\ $g-i$&-1.152&1.328&-0.947&1.088&-0.993&1.057&-0.967&0.996\\ \hline \multicolumn{9}{l}{\bf{direct (orange solid line)}}\\ $g-r$&-0.709&1.072&-0.526&0.672&-0.607&0.732&-0.537&0.490\\ $g-i$&-0.924&0.932&-0.719&0.691&-0.691&0.530&-0.741&0.606\\ \hline \multicolumn{9}{l}{\bf{mpfitexy (dark green solid line)}}\\ $g-r$&-0.743&1.141&-0.556&0.732&-0.635&0.775&-0.550&0.522\\ $g-i$&-0.964&0.976&-0.744&0.708&-0.723&0.563&-0.762&0.633\\ \hline \end{tabular} \end{center} \end{table*}

\subsection{SED model ingredients}
The basis of using the SED-fitting method to estimate the stellar mass or the stellar mass-to-light ratio is that a galaxy can be viewed as a convolution of simple stellar populations (SSPs) of different ages and metallicities, according to a specific star formation and chemical evolution history (SFH). There are various stellar population (SP) models, e.g., \citet[hereafter the Vazdekis model]{Vazdekis1996}, \citet[hereafter the PEGASE model]{Fioc1997}, \citet[hereafter the BC03 model]{Bruzual2003}, the 2007 version of BC03 (hereafter CB07), and \citet[hereafter the FSPS model]{Conroy2009}. These models are based on various stellar evolutionary tracks or isochrones, stellar libraries, and prescriptions for the late stages of evolution, such as the thermally pulsing asymptotic giant branch (TP-AGB) phase, assuming various initial mass functions (IMFs) and SFHs. Among the MLCRs in the figures, RC15 use two independent SP models, BC03 (RC15(BC03)) and FSPS (RC15(FSPS)); as shown in the MLCR figures, the FSPS model (grey dashed line) always gives a flatter MLCR than the BC03 model (cyan dashed line) in each panel. This demonstrates that the choice of SP model matters a lot and can result in distinctly different MLCRs, especially in slope. B03 adopt the PEGASE model, incorporating an `old' prescription \citep{Girardi2000, Girardi2002} for TP-AGB stars. IP13 used the new Padova model (an SSP model generated from the isochrones of the Padova group), while Z09, H16, and our MLCRs are all based on the CB07 model; these all incorporate a `new' prescription \citep{Marigo2007,Marigo2008} for TP-AGB stars. The `new' treatment of the TP-AGB phase includes a larger contribution from TP-AGB stars, which changes little the optical luminosity dominated by main-sequence stars but greatly enhances the predicted redder/near-infrared luminosity of galaxies, and finally leads to lower $\gamma^{*}$ in the redder/near-infrared bands than in the bluer bands. This is why the disparity between the B03 (red dashed line) and our (black solid line) MLCRs in the MLCR figures becomes larger in the panels of the redder $i$ and $z$ bands than in those of the $g$ and $r$ bands for our sample. The IMF is critical for determining the $\gamma^{*}$ of a galaxy from the SED-fitting method.
The extant IMFs are mostly determined for the Milky Way in the solar neighborhood, while the IMFs of external galaxies are in principle unknown \citep{Courteau2014}. The various IMFs, e.g., the Salpeter IMF \citep{Salpeter1955}, the Kroupa IMF \citep{Kroupa1998, Kroupa2001}, and the Chabrier IMF \citep{Chabrier2003}, differ mainly in their slopes for low-mass stars, which serve mostly to change the mass without much changing the luminosity or color. So an IMF that includes more low-mass stars, such as the Salpeter IMF, yields a higher $\gamma^{*}$ at a given color, since the large number of low-mass stars greatly increases the stellar mass but hardly affects the luminosities or colors of galaxies, as these stars are too faint. Hence, different IMFs result in zero-point offsets of the MLCRs, even though the relation slopes may remain unchanged \citep{Bell2001, Bell2003, Courteau2014}. In our modeling, we assumed a Chabrier IMF, which gives a small number of low-mass stars. However, the B03 MLCR assumed a ``diet'' \citet{Salpeter1955} IMF, which gives a larger number of stars at the low-mass end of the IMF and thus yields a higher $\gamma^{*}$ at a given color than the Chabrier IMF, and the IP13 MLCR uses a \citet{Kroupa1998} IMF. We have therefore converted both the B03 and IP13 MLCRs from their original IMFs to a Chabrier IMF by adding a correction value to the originally predicted log $\gamma_{*}$ for the comparison in the left columns of the MLCR figures (Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}). With the correction values chosen to be -0.093 dex for the B03 MLCR, following \citet{Gallazzi2008,Zibetti2009}, and 0.057 dex for IP13, following \citet{Herrmann2016}, the IMF-corrected B03 MLCR (red dashed line) shown in the MLCR figures is still higher than our MLCR, whereas the IMF-corrected IP13 MLCR (green dashed line) appears more consistent with our MLCR in zero-point in each panel of the MLCR figures. If the correction values changed, the zero-points of both MLCRs would shift accordingly, so the zero-points of the IMF-converted B03 and IP13 MLCRs in the figures still depend on the uncertainties of the correction values, which we do not know for sure. Besides, we are not certain whether the correction values used here are accurate, as -0.093 dex is used in H16 while -0.15 dex is used in RC15. Like our MLCR, the Z09 (blue dashed line) and RC15(BC03) (cyan dashed line) MLCRs also use a Chabrier IMF, so, as expected, they are more consistent with our MLCR in zero-point than the other literature MLCRs in the figures. Unlike RC15(BC03), the RC15(FSPS) MLCR (grey dashed line) has a relatively larger offset from our MLCR because it uses the different stellar population model FSPS, although it also uses a Chabrier IMF, as discussed above. Although the H16 MLCR (magenta dashed lines in the first two panels) uses a Chabrier IMF, it is much higher (like B03) than our MLCR in each panel; this should be due to the different line fitting technique (as discussed in \S~\ref{sec:fitting_method}) and to other different SPS model ingredients, such as the SFH discussed below. The SFH regulates the star formation and chemical enrichment with time. The choice of the SFH, in particular whether it is rising, declining, or bursty, can significantly change the best-fit stellar mass, by perhaps as much as 0.6 dex in extreme cases \citep{Pforr2012,Conroy2013}. IP13 consider an exponentially declining (or increasing) SFH of the form $\Psi(t)=e^{-t/\tau}$.
The declining SFHs are modelled with e-folding time-scales $\tau$ ranging from 1.55 Gyr to $\infty$ (constant star formation rate, SFR), and the increasing ones with negative values of $\tau$ ranging from -50.00 to -1.00. The H16 MLCR is based on a library of multi-component complex SFHs created for dwarf irregular galaxies by \citet{Zhang2012}, which is totally different from the commonly used two-component SFH models in the literature. Our MLCRs, the RC15 MLCRs, and the Z09 MLCRs are all based on the two-component SFH model (an exponentially declining SFH with random bursts superimposed), which incorporates a variety of bursty events and allows young ages (of a few Gyr). By contrast, B03 consider relatively smooth SFHs starting 12 Gyr in the past and limit the strength of the bursty events (which are simultaneously constrained to happen only in the last 2 Gyr) to $\leqslant$ 10 percent by stellar mass. As a recent burst of star formation will lower the mass-to-light ratio of the total stellar system relative to a smooth star formation model dramatically, by up to 0.5 dex \citep{Courteau2014}, omitting or underweighting burst components in the SFH models biases the B03 MLCR-based stellar mass-to-light ratios to higher values \citep{Roediger2015}.

\subsection{Effect of surface brightness on fitting relations}
Since a lower surface brightness is the characteristic that distinguishes LSBGs from HSBGs (mostly normal spiral galaxies), we here check whether the MLCRs depend on galaxy surface brightness, besides the external factors discussed above. We divide our LSBG data into three central surface brightness bins ($\mu_{0,B}\leqslant$23.5, 23.5 $<\mu_{0,B}<$24.5, and $\mu_{0,B}\geqslant$24.5). These $B$-band central surface brightnesses were all measured in \citet{Du2015}. Then, we fit the MLCR for the galaxies in each bin using the `biweight' line fitting technique. The MLCR for each bin is overplotted (as three brown lines of different styles) in each panel of the right columns of Figures~\ref{fig:m2l_color_gr} and \ref{fig:m2l_color_gi}. The MLCRs flatten slightly towards lower central surface brightness, as shown in Figure~\ref{fig:slope_mu}. However, the change of the MLCR due to the change of central surface brightness (represented by the differences between the black and brown lines) is small, far less than that due to the change of fitting technique (shown by the larger offsets between the black and dark green/orange lines) in the right columns of the MLCR figures. In this case, we think the minor differences between our MLCR (for LSBGs) and the literature MLCRs (for the whole galaxy population, HSBGs, or dwarfs) are caused more by the combination of differences in the SED fitting models (IMF, SFH, and SPS), the photometric zero-points of the data, and the line fitting techniques than by the central surface brightness of the galaxies themselves. As shown in the left columns of the MLCR figures, our MLCR (black solid line) for LSBGs lies among the literature MLCRs in each panel, so it is expected to be generally consistent with those literature MLCRs for various samples once the main factors of difference in photometric methods, line fitting methods, and models (IMF, SFH, and SPS) are taken into account. This is further evidenced by the close consistency between our MLCR and the Z09 MLCR (blue dashed line), especially in Figure~\ref{fig:m2l_color_gi}, which is based on the same IMF, SPS model (CB07), and SFH as our MLCR, except that it is based on a theoretical library of model galaxies.
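To make the two-component SFH concrete, the following is a minimal sketch (in Python) of the kind of SFH realization used by Z09 and in our fits: an exponentially declining base with random bursts superimposed. The burst number, strengths, and durations here are illustrative assumptions, not the exact MAGPHYS priors.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def two_component_sfh(t, tau, n_bursts=2, t_form=12.0):
    """SFR(t): exponential decline since formation plus random
    top-hat bursts. t is in Gyr since formation."""
    sfr = np.exp(-t / tau)                  # continuous mode
    for _ in range(n_bursts):
        t0 = rng.uniform(0.0, t_form)       # burst onset
        dt = rng.uniform(0.03, 0.3)         # burst duration
        amp = rng.uniform(0.5, 2.0) / dt    # burst strength
        sfr += amp * ((t >= t0) & (t < t0 + dt))
    return sfr

t = np.linspace(0.0, 12.0, 1200)
sfr = two_component_sfh(t, tau=4.0)
\end{verbatim}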
\section{Comparison with Huang12's stellar masses for dwarf galaxies}
In this section, we assess our MLCRs by comparing the predicted stellar masses with those estimated in an independent paper, \citet[hereafter Huang12]{Huang2012}, for dwarf galaxies with comparable properties. Huang12 defined a sample of gas-rich dwarf galaxies, which have log $M_{H{\sc{i}}}<$7.7 and $W$50 $<$ 80 km s$^{-1}$, from the $\alpha$.40 H{\sc{i}} survey catalogue. They give stellar masses for the sample galaxies by fitting their SEDs, consisting of the $GALEX$ (FUV, NUV) and SDSS ($ugriz$) photometric bands, to the BC03 model, assuming a \citet{Chabrier2003} IMF and a continuous SFH with random bursts superimposed, which are the same as our assumptions. Besides the SED fitting results, Huang12 also predicted the stellar masses according to the B03 MLCR ($i$ vs. $g-r$). In comparison, their SED fitting yields a median log $M_{*}/M_{\odot}$=7.45, while the B03 MLCR (converted to a Chabrier IMF) gives a considerably higher median of 7.73, overestimating the stellar mass by $\sim$0.28 dex. Using Huang12's criteria, we derive a sample of dwarf galaxies from our LSBGs. Our MLCR ($i$ vs. $g-r$) yields a median log $M_{*}/M_{\odot}$=7.36, while the B03 MLCR (converted to a Chabrier IMF) gives a considerably higher median of 7.69 for our dwarf galaxies, overestimating the stellar mass by $\sim$0.33 dex. This result is very similar to Huang12's in terms of both the stellar mass value and the offset from the B03 predictions, which gives confidence in using our MLCR, especially for dwarf galaxies. The offset should be mainly caused by the ingredients discussed in $\S$~\ref{sec:discuss}; Huang12 claimed that it is mainly due to the different SFH adopted by B03, which does not fully account for the impact of bursty behavior in dwarf galaxies.

\section{Summary and Conclusion}
We obtained the stellar masses, $M_{*}$, and stellar mass-to-light ratios, $\gamma_{*}$, for a sample of low surface brightness galaxies (LSBGs) by fitting their spectral energy distributions (SEDs), covering eleven ultraviolet, optical, and near-infrared photometric bands, to the stellar population synthesis (SPS) model using the MAGPHYS code \citep{da Cunha2008}. The derived $M_{*}$ for this sample spans log$M_{*}/M_{\odot}$= 7.1 to 11.1, with a mean log $M_{*}/M_{\odot}$ = 8.47 and a median of 8.48, showing that these LSBGs have systematically lower $M_{*}$ than normal galaxies. The $\gamma^{*}$ for this sample slightly decreases from the $r$ band to redder wavelength bands, similar to the declining trend of $\gamma^{*}$ from shorter to longer wavelengths for normal star-forming galaxies; the $\gamma^{*}$ values vary little with absolute magnitude but slightly increase with $M_{*}$ for our LSBG sample. This increasing trend is stronger in the bluer bands, with the steepest slope in $u$ and a nearly flat slope in the $K$ band. We then fitted the stellar mass-to-light ratio versus color relation (MLCR) for the LSBG sample. The log $\gamma_{*}^{j}$ ($j$=$g$, $r$, $i$, and $z$) show the tightest relations with the optical colors $g-r$ and $g-i$ for our LSBG data. Compared with the literature MLCRs, our MLCRs lie consistently among those literature MLCRs that are converted to the same IMF. The minor differences are more likely due to the differences in the SED models (IMF, SFH, and SPS model), photometric zero-points, and line fitting techniques, and depend little on the galaxy surface brightness.
This may imply that most of our LSBGs share generally similar star formation and evolution properties with normal galaxies.

\acknowledgements We thank the anonymous referee for his/her constructive comments, which strengthened this paper. DW is supported by the National Natural Science Foundation of China (NSFC) grant Nos. U1931109 and 11733006, the National Key R$\&$D Program of China grant No. 2017YFA0402704, and the Young Researcher Grant funded by the National Astronomical Observatories, Chinese Academy of Sciences (NAOC). CC is supported by the NSFC grant No. 11803044, the Young Researcher Grant funded by NAOC, and in part by the Chinese Academy of Sciences (CAS) through a grant to the CAS South America Center for Astronomy (CASSACA) in Santiago, Chile. ZZ is supported by the NSFC grant No. 11703036. WH is supported by the National Key R$\&$D Program of China grant No. 2017YFA0402704 and the NSFC grant No. 11733006. \vspace{5mm} \facilities{GALEX, SDSS, UKIDSS} \software{SExtractor \citep{Bertin1996}, MAGPHYS \citep{da Cunha2008} }
\section{Introduction} Quantum field theory is compactly written in terms of path integrals. Path integrals have been especially useful in dealing with symmetries, quantizing gauge theories, etc. At some point, however, we need to calculate the path integrals. The problem is that the only path integrals we know how to solve correspond to free theories (Gaussian integrals). We would be in a very sorry state if we didn't have a generic approximation scheme at our disposal. The semi-classical or loop expansion is just such an approximation scheme. In fact, much of what we know about quantum field theory comes from one-loop results. The one-loop result is obtained by Taylor expanding the action around the classical fields and disregarding cubic and higher terms. In this way the path integral is approximated by a Gaussian. In this paper we will consider another Gaussian approximation to the path integral. Unlike the one-loop result, here we will Taylor expand about the average field $\varphi\equiv\langle\phi\rangle$. We shall show that this leads to an improved approximation given in terms of a recursive relation. \section{The Gaussian Approximation} The central object in quantum field theory is the generating functional $Z[J]$. Functional derivatives of $Z[J]$ with respect to the external fields $J(x)$ give the Green's functions of the theory. The generating functional is determined from the (Euclidean) action $S[\phi]$ through the path integral \begin{equation} Z[J]=\int [d\phi]\,e^{-\frac{1}{\hbar} \left(S[\phi]-\int dx\,J(x)\,\phi(x)\right)}\ . \end{equation} The integration measure is, formally, simply \begin{equation} [d\phi]=\prod_{x\in\mathbb{R}^d}d\phi(x)\ , \end{equation} where $d$ is the dimension of space-time. We are interested in looking at a set of approximations to the above path integral. The approximations are valid in all $d$. In this letter, however, we are going to look at the simpler case of $d=0$ theories, where it is easy to compare our results with exact numerical calculations. In $d=0$, functionals become functions, and the path integral reverts to a single definite integral over the whole real line. \begin{equation} Z(J)=\int d\phi\,e^{-\frac{1}{\hbar} \left(S(\phi)-J\,\phi\right)}\ .\label{z} \end{equation} An even more useful object is $W(J)$ --- the generator of connected diagrams, defined by \begin{equation} Z(J)=Z(0)\,e^{-\frac{1}{\hbar}W(J)}\ . \end{equation} In statistical mechanics parlance this is the free energy. The quantum average of the field $\phi$ is \begin{equation} \varphi\equiv\langle\phi\rangle=-\,\frac{\partial}{\partial J}\,W(J) \ .\label{average} \end{equation} In the Gaussian approximation, we Taylor expand the action in the path integral around some reference point $\phi_\mathrm{ref}$ and keep terms that are at most quadratic in $\phi-\phi_\mathrm{ref}$. Thus, we use \begin{equation} S(\phi)\approx S(\phi_\mathrm{ref})+S'(\phi_\mathrm{ref})\, (\phi-\phi_\mathrm{ref})+ \frac{1}{2}\,S''(\phi_\mathrm{ref})\,(\phi-\phi_\mathrm{ref})^2\ . \end{equation} The integral in (\ref{z}) is now Gaussian and we find, up to an unimportant constant, that \begin{equation} W_\mathrm{Gauss}(J,\phi_\mathrm{ref})=S(\phi_\mathrm{ref})- J\,\phi_\mathrm{ref}+ \frac{\hbar}{2}\,\ln S''(\phi_\mathrm{ref})- \frac{1}{2}\,\frac{(S'(\phi_\mathrm{ref})-J)^2}{S''(\phi_\mathrm{ref})}\ . \label{w-g} \end{equation} For this approximation to make sense, the integral must get its dominant contribution from the vicinity of the reference point $\phi_\mathrm{ref}$.
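Since we work in $d=0$, everything above can be checked against direct numerical integration. The following is a minimal sketch (in Python) of the exact $W(J)$ obtained by quadrature and of the general Gaussian formula (\ref{w-g}), written out for the $\phi^4$ action used below; the integration grid is an illustrative choice.

\begin{verbatim}
import numpy as np

g, hbar = 1.0, 1.0

def S(phi):                        # phi^4 action used below
    return 0.5 * phi**2 + g * phi**4 / 24.0

phi = np.linspace(-10, 10, 4001)   # integration grid (illustrative)

def W_exact(J):
    """W(J) = -hbar ln( Z(J)/Z(0) ), with Z by quadrature."""
    Z_J = np.trapz(np.exp(-(S(phi) - J * phi) / hbar), phi)
    Z_0 = np.trapz(np.exp(-S(phi) / hbar), phi)
    return -hbar * np.log(Z_J / Z_0)

def W_gauss(J, phi_ref):
    """The Gaussian approximation W_Gauss(J, phi_ref)."""
    S1 = phi_ref + g * phi_ref**3 / 6.0    # S'(phi_ref)
    S2 = 1.0 + g * phi_ref**2 / 2.0        # S''(phi_ref)
    return (S(phi_ref) - J * phi_ref
            + 0.5 * hbar * np.log(S2) - 0.5 * (S1 - J)**2 / S2)
\end{verbatim}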
The standard Gaussian approximation corresponds to the choice $\phi_\mathrm{ref}=\phi_\mathrm{class}\,(J)$, where $\phi_\mathrm{class}$ is the solution of the classical equation of motion $S'=J$. The classical solution is the maximum of the integrand in (\ref{z}). This specific choice of $\phi_\mathrm{ref}$ gives us the standard one-loop result \begin{equation} W(J)\approx W_1(J)\equiv S(\phi_\mathrm{class})-J\,\phi_\mathrm{class}+ \frac{\hbar}{2}\,\ln S''(\phi_\mathrm{class})\ . \end{equation} As is well known, the loop expansion is just an expansion in powers of $\hbar$. The one-loop result gives us the first quantum correction to classical physics. From now on we set $\hbar=1$. \section{Improving the Gaussian Approximation} In this section we will choose a \emph{different} expansion point $\phi_\mathrm{ref}$ for our general Gaussian formula (\ref{w-g}). The idea is to expand around the average field $\varphi$. Although the classical solution gives the maximum of the integrand, expansion around $\varphi$ gives a better approximation for the area under the curve. This is in particular true for large values of $J$. We will work with $\phi^4$ theory in $d=0$, whose action is given by \begin{equation} S(\phi)=\frac{1}{2}\,\phi^2+\frac{1}{4!}\,g\,\phi^4\ .\label{phi4} \end{equation} The classical equation of motion is now a cubic algebraic equation. We easily find its unique real solution. In this way we get a closed-form expression for the one-loop approximation $W_1(J)$. The Gaussian approximation around the average field $\varphi$ is simply \begin{equation} S(\varphi)-J\,\varphi+ \frac{1}{2}\,\ln S''(\varphi)- \frac{1}{2}\,\frac{(S'(\varphi)-J)^2}{S''(\varphi)} \ .\label{w_q} \end{equation} To be able to calculate this \emph{in closed form} we would need to know $\varphi(J)$, which is tantamount to knowing how to solve the theory exactly, since $\varphi$ and its derivatives give all the connected Green's functions. Equation (\ref{w_q}) becomes useful when one solves it iteratively. We use equations (\ref{average}) and (\ref{w-g}) as the basis for the following iterative process \begin{equation} \varphi_{n+1}(J)=-\,\frac{d}{dJ}\,W_\mathrm{Gauss}(J,\varphi_n(J))\ . \label{phi-n} \end{equation} Differentiating (\ref{w-g}) we find \begin{eqnarray} \lefteqn{ \varphi_{n+1}= \varphi_n-\,\frac{S'(\varphi_n)-J}{S''(\varphi_n)}\,-}\nonumber\\ & &{}\quad\quad-\,\frac{S'''(\varphi_n)}{2}\, \left(\left(\frac{S'(\varphi_n)-J} {S''(\varphi_n)}\right)^2+ \,\frac{1}{S''(\varphi_n)}\right)\frac{d\varphi_n}{dJ}\ . \label{recurse} \end{eqnarray} For the seed of this iteration we choose the classical field, i.e. $\varphi_0=\phi_\mathrm{class}$. In this way we obtain a sequence of points $\varphi_0,\varphi_1,\varphi_2,\ldots$ or, equivalently, of approximations to the connected generating functional $W_1,W_2,W_3,\ldots$ given by $W_{n+1}(J)=W_\mathrm{Gauss}(J,\varphi_n(J))$. The idea behind this is obvious --- we want to obtain a sequence of $W_n(J)$'s that give better and better approximations to $W(J)$. The following two figures show that this really works. \begin{figure}[!ht] \centering \includegraphics[height=5.5cm]{4k.eps} \caption{Plots of $\varphi-\varphi_0$ (dotted line), $\varphi-\varphi_1$ (dashed line), $\varphi-\varphi_2$ (thin line) and $\varphi-\varphi_\infty$ (thick line) as functions of $J$. Here we had $g=1$.} \label{4k} \end{figure} From Figure~\ref{4k} we see that the sequence of $\varphi_n$'s converges to $\varphi_\infty$ \footnote{In iterating (\ref{recurse}) we necessarily discretize the $J$'s.
The coarseness of this discretization affects the speed of convergence of the $\varphi_n$'s. The standard way around this problem is to introduce a small mixing parameter $\epsilon$: instead of the recursive relation $\varphi_{n+1}=f(\varphi_n)$ one then considers $\varphi_{n+1}=(1-\epsilon)\varphi_n+\epsilon f(\varphi_n)$.}. Note that $\varphi_\infty\ne \varphi$. The reason for this is obvious: we used the Gaussian approximation $W_\mathrm{Gauss}$ in defining our recursive relation, and there is no reason to expect that this converges to the exact result. It does, however, converge, and $\varphi_\infty$ represents an excellent approximation to $\varphi$. To see this, in Figure~\ref{relinv} we have plotted the ratio $|\frac{\varphi-\varphi_0}{\varphi-\varphi_\infty}|$. This ratio is a direct measure of the improvement in going from the one-loop result $W_1(J)=W_\mathrm{Gauss}(J,\varphi_0(J))$ to our improved Gaussian result $W_\infty(J) =W_\mathrm{Gauss}(J,\varphi_\infty (J))$. \begin{figure}[!ht] \centering \includegraphics[height=5.5cm]{relinv.eps} \caption{Plot of the ratio $|(\varphi-\varphi_0)\,/\,(\varphi-\varphi_\infty)|$ as a function of $J$. This is a direct measure of how the new approximation outperforms the standard one-loop result. Here we had $g=1$.} \label{relinv} \end{figure} As we have already mentioned, our new approximation was tailored to work well for large $J$. This can be read off directly from Figure~\ref{relinv}. For example, for $J=15$ the improved Gaussian is about one hundred times better than the one-loop result. The new approximation is poorest for $J\approx 2$, but even there it beats the old approximation by a factor of seven. Most of the time we are interested in working with small or zero external fields. In the vicinity of $J=0$ the new approximation is fourteen times better than the old one.
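For completeness, the following is a minimal sketch (in Python) of iterating (\ref{recurse}) on a discretized grid of $J$'s, including the mixing parameter $\epsilon$ of the footnote; the grid, $\epsilon$, and the iteration count are illustrative choices.

\begin{verbatim}
import numpy as np

g = 1.0
Sp   = lambda p: p + g * p**3 / 6.0     # S'(phi)
Spp  = lambda p: 1.0 + g * p**2 / 2.0   # S''(phi)
Sppp = lambda p: g * p                  # S'''(phi)

def phi_class(j):
    """Real root of the cubic S'(phi) = j."""
    roots = np.roots([g / 6.0, 0.0, 1.0, -j])
    return min(roots, key=lambda z: abs(z.imag)).real

J = np.linspace(-15.0, 15.0, 3001)
phi = np.array([phi_class(j) for j in J])   # seed: phi_0

eps = 0.2
for _ in range(200):                    # the recursion for phi_{n+1}
    dphi = np.gradient(phi, J)          # d(phi_n)/dJ on the grid
    r = (Sp(phi) - J) / Spp(phi)
    f = phi - r - 0.5 * Sppp(phi) * (r**2 + 1.0 / Spp(phi)) * dphi
    phi = (1.0 - eps) * phi + eps * f   # mixing step
# phi now approximates phi_infinity(J)
\end{verbatim}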
In practice, this mean value is estimated using a finite number $N_\mathrm{mc}$ of Monte Carlo samples, and the error of such an estimate is itself estimated to be $\sigma_{f/p}=\sqrt{\sigma_{f/p}^2}\,$, where the variance equals \begin{equation} \sigma^2_{f/p}=\frac{\left\langle \left(\frac{f}{p}\right)^2\right\rangle_p- \left\langle \frac{f}{p}\right\rangle^2_p}{N_\mathrm{mc}-1}\ . \end{equation} The central limit theorem guarantees that the Monte Carlo algorithm converges to $\int f\,d\phi$ for an arbitrary choice of distribution $p$. The only condition that must be met is $\sigma^{2}_{f/p}<\infty$. This freedom in the choice of $p$ is used to speed up the convergence of the algorithm. The speed of convergence is measured by the efficiency $\cal E$, given by \begin{equation} {\cal E}=\frac{1}{T\,\sigma^{2}_{f/p}}\ , \end{equation} where $T$ represents the total computation time. Note that a hundred-fold increase of efficiency corresponds to one extra significant figure in the final result. In our calculation we chose $p(\phi)$ to be the Gaussian normal distribution \begin{equation} p(\phi)=\frac {1}{\sqrt{2\pi\sigma^2}}\, \exp\left(-\,\frac{(\phi-a)^2}{2\sigma^2}\right)\ , \end{equation} where $a$ and $\sigma$ completely determine the distribution. There are two reasons for using this distribution. First, the function we are integrating can be approximated by a Gaussian over a wide range of parameters $J$ and $g$. A good choice of $a$ and $\sigma$ makes $f/p$ almost constant over the range of integration, thus making the variance small. Second, there exists a specific algorithm for generating random numbers conforming to a Gaussian distribution. The Box-Muller algorithm \cite{ptwf} is much more efficient than the standard Metropolis algorithm \cite{mrrtt} since it doesn't give rise to autocorrelations of the generated numbers. In the Metropolis algorithm autocorrelations can be pronounced, and their removal substantially slows down the simulation. The choice of probability distribution has a great effect on the efficiency. For example, the efficiency corresponding to the uniform distribution on the interval $\phi\in [-100,100]$ is $3.5\times 10^{10}$ times smaller than the efficiency achieved by the Gaussian distribution centered at $a=\phi_\mathrm{class}$ with the optimal choice of width $\sigma$. Having chosen $p$ to be a Gaussian, the computation time $T$ depends only on the number of Monte Carlo samples $N_\mathrm{mc}$. Therefore, in our case, maximization of the efficiency is equivalent to minimization of the variance $\sigma_{f/p}^2$. In the previous sections we saw that it is even better to expand around $\varphi$. In the Monte Carlo setting this should translate into a further increase in efficiency. This is \emph{precisely} what we see. By varying the center of the Gaussian $a$ (always using the optimal width for that given $a$), we find maximum efficiency precisely at $a=\varphi$, as can be seen in Figure~\ref{min}. \begin{figure}[!ht] \centering \includegraphics[height=5.5cm]{min.eps} \caption{The variance as a function of $a$. The plot is for $g=10$, $J=1$. The variance is minimized for $\varphi(1)=0.376799$ (black dot). The classical field is $\phi_\mathrm{class}(1)=0.614072$ (grey dot).} \label{min} \end{figure} Figure~\ref{eff} compares the efficiencies ${\cal E}_C$ of simulations about $\phi_\mathrm{class}$ and ${\cal E}_Q$ of simulations about $\varphi$ for various values of $J$. We see a two-fold improvement in efficiency.
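The following is a minimal sketch (in Python) of this importance sampling for the $d=0$ integrand $f(\phi)=e^{-(S(\phi)-J\phi)}$ at $g=10$, $J=1$; the width $\sigma$ and the sample size are illustrative choices rather than the optimal values used in the figures.

\begin{verbatim}
import numpy as np

g, J = 10.0, 1.0
S = lambda p: 0.5 * p**2 + g * p**4 / 24.0
f = lambda p: np.exp(-(S(p) - J * p))

def mc_estimate(a, sigma, n=100_000, seed=1):
    """Importance sampling with Gaussian p(phi) ~ N(a, sigma^2):
    returns the estimate of Z(J) = mean of f/p and its variance."""
    rng = np.random.default_rng(seed)
    phi = rng.normal(a, sigma, n)
    p = np.exp(-0.5 * ((phi - a) / sigma) ** 2) \
        / np.sqrt(2.0 * np.pi * sigma**2)
    w = f(phi) / p
    var = (np.mean(w**2) - np.mean(w) ** 2) / (n - 1)
    return np.mean(w), var

# compare the variances for the two centers quoted in Figure 4:
print(mc_estimate(0.614072, 0.8))   # a = phi_class(1)
print(mc_estimate(0.376799, 0.8))   # a = phi(1)
\end{verbatim}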
This two-fold gain may not seem spectacular, and in $d=0$ it really is not. However, once we consider theories in $d>0$ we are dealing with true path integrals. If we approximate the path integral with $N$ integrals, then the expansion around $\varphi$ gives a jump in efficiency of $2^N$. Even for a modest simulation with $N=20$ this corresponds to an increase of six orders of magnitude. \begin{figure}[!ht] \centering \includegraphics[height=5.5cm]{eff.eps} \caption{The ratio ${\cal E}_Q/{\cal E}_C$ as a function of $J$ for $g=1$.} \label{eff} \end{figure} The problem with this calculation is that we would already need to have the exact result for $\varphi$ in order to get the stated increase in speed. The way out is obvious and is reminiscent of the step we made in the previous section in going from (\ref{w_q}) to (\ref{phi-n}). We start the Monte Carlo with a Gaussian distribution centered about $\phi_\mathrm{class}$. After a while this gives us an approximation to $\varphi$, say $\varphi^\mathrm{mc}_1$. Using this as the center of a new probability distribution we obtain $\varphi^\mathrm{mc}_2$, etc. Unlike the series $\varphi_0$, $\varphi_1$, $\varphi_2,\ldots$ of the previous section, this one necessarily converges to the exact result --- even ordinary Monte Carlo does that. The improved Monte Carlo scheme, however, can be tailored to yield an efficiency very near the ideal value ${\cal E}_Q$ \cite{next}. \section{Conclusion} We have looked at two different ways in which one can take advantage of the (rather intuitive) fact that in quantum field theory Gaussians are best centered about the average field $\varphi\equiv\langle \phi\rangle$. We cast the Gaussian approximation about $\varphi$ as a recursive relation. Working with $\phi^4$ theory in $d=0$, we have shown that the iterates of this relation present better and better approximations to $W(J)$. This sequence of approximations ends with $W_\infty(J)$, i.e. the best Gaussian approximation. The first iterate in this sequence is the standard one-loop result. The second ($W_2$) is not much more complicated, and is already much better than the one-loop approximation. In looking at theories in $d>0$ it may be possible to use $W_2$ to get a better analytic approximation to (say) the effective potential. The second use for the newly centered Gaussians is in Monte Carlo calculations. The Monte Carlo algorithm is most efficient when one generates random numbers through Gaussian probability distributions centered about $\varphi$. We have shown that this can substantially speed up the algorithm. For an $N$-fold integral the speed-up is roughly $2^N$. We are currently working on applying the improved Monte Carlo algorithm to models in $d\ge 1$.
\section{Introduction} Nowadays, researchers are increasingly tasked by funders and publishers with outlining their research for the public by writing a lay summary. Therefore, it is essential to automatically generate lay summaries, both to reduce the workload for researchers and to build a bridge between the public and science. Previous studies have investigated scientific article summarization, especially for research papers \cite{cohan2018discourse, lev2019talksumm, yasunaga2019scisummnet}. However, less work has been done on generating lay summaries. Recently, the First Workshop on Scholarly Document Processing \cite{Chandrasekaran2020Overview}, Lay Summary Task\footnote{https://ornlcda.github.io/SDProc/index.html} (LaySumm 2020) first proposed the task of Lay Summary Generation. The task aims to generate summaries that are representative of the content, comprehensible, and interesting to a lay audience. After examining the dataset that the task provides, we observe that many of the sentences in lay summaries have corresponding sentences in the original papers. Inspired by this observation, we hypothesize that deriving binary sentence labels for extractive summarization and utilizing them as extra supervision signals can help the model generate better summaries. Therefore, we use the BART \cite{lewis2019bart} encoder to build sentence representations and train extractive summarization jointly with abstractive summarization. Experimental results show that leveraging sentence labels can improve lay summary generation performance. In the LaySumm 2020 competition, our model achieves a 46.00\% Rouge1-F1 score. The code will be released on Github\footnote{https://github.com/TysonYu/Laysumm}. \section{Related Work} \paragraph{Text Summarization} Text summarization aims to produce a condensed representation of input text that captures the core meaning of the original text. Recently, neural network-based approaches have achieved remarkable performance on news article summarization \cite{see2017get, liu2019text, zhang2019pegasus}. Compared with news articles, scientific papers are typically longer and contain more complex concepts and technical terms. \paragraph{Scientific Paper Summarization} Existing approaches for scientific paper summarization include extractive models that perform sentence selection \cite{qazvinian2013generating,cohan2017scientific,cohan2018scientific} and hybrid models that select the salient text first and then summarize it \cite{ subramanian2019extractive}. Besides, \citet{cohan2018discourse} built the first model for abstractive summarization of single, longer-form documents (e.g., research papers). In order to train neural models for this task, several datasets have been introduced. The arXiv and PubMed datasets \cite{cohan2018discourse} were created using open access articles from the corresponding popular repositories. \citet{yasunaga2019scisummnet} developed and released the first large-scale manually-annotated corpus for scientific papers (on computational linguistics). \paragraph{Large Pre-trained Language Model} Large pre-trained language models, such as BERT \cite{devlin2018bert}, UniLM \cite{dong2019unified} and BART \cite{lewis2019bart}, have shown great performance on a variety of downstream tasks including summarization. For example, BART achieved state-of-the-art performance on the CNN/DM \cite{hermann2015teaching} news summarization dataset. \begin{figure*} \centering {\includegraphics[width=0.82\linewidth]{images/Laysumm_model.pdf}} \caption{Multi-label summarization model.
The left part is based on a bidirectional encoder and the right part is an autoregressive decoder.} \label{fig:model} \end{figure*} \section{Datasets} We use two datasets for this work: the CL-LaySumm 2020 dataset and ScisummNet \cite{yasunaga2019scisummnet}. In this section, we introduce their details and the pre-processing methods we used. \subsection{CL-LaySumm 2020 Dataset} \label{CL-LaySumm 2020 Dataset} The CL-LaySumm 2020 dataset is released by the CL-LaySumm Shared Task, which aims to produce lay summaries of scientific texts. A lay summary refers to a textual summary intended for a non-technical audience. There are 572 samples in the dataset for training, and each sample contains a full-text paper with a lay summary. To test the summarization model, we need to generate lay summaries of at most 150 words for 37 papers. Since the original papers are very long and the task requires us to generate relatively short summaries, it is crucial to extract the important parts of papers before feeding them to large pre-trained models. Given our own experience of how papers are written, we start with the assumption that the Abstract, Introduction and Conclusion are most likely to convey the topic and the contributions of the paper. So, we use different combinations of these three sections as input to our model. \subsection{ScisummNet Dataset} ScisummNet is the first large-scale, human-annotated Scisumm dataset. The dataset provides 1009 papers with their citation networks as well as their manual summaries. The gold summaries are written by annotators based on the abstract and selected citation sentences that also convey the contributions of the papers. We take the abstract and the annotator-selected citation sentences as our models' input. \subsection{Data Pre-processing} As mentioned above, we first represent the document using the sentences in its Abstract, Introduction and Conclusion. Then we use two approaches to pre-process the text. The first pre-processing approach is removing tags and outliers. The original text of the LaySumm dataset contains many tags such as TITLE, SECTION and PARAGRAPH. We remove all of these tags. Besides, some samples of the LaySumm dataset do not contain an Abstract or an Introduction. We regard these samples as outliers and exclude them when training the model. The total number of outliers is 23. Then, we truncate all input text to a maximum length of 1024 tokens, the input limit of the BART model.
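A minimal sketch of this pre-processing step (tag stripping, outlier filtering, and truncation) is shown below; the field names, the tag list, and the whitespace-based truncation are simplifying assumptions of ours (in practice the 1024-token limit applies to BART subword tokens).
\begin{verbatim}
import re

TAGS = ("TITLE", "SECTION", "PARAGRAPH")  # tag names assumed for illustration

def preprocess(sample, max_tokens=1024):
    """Strip tags, drop outlier samples, and truncate to the BART limit."""
    # Outliers: samples missing an Abstract or an Introduction are dropped.
    if not sample.get("abstract") or not sample.get("introduction"):
        return None
    text = " ".join([sample["abstract"],
                     sample["introduction"],
                     sample.get("conclusion", "")])
    for tag in TAGS:
        text = text.replace(tag, "")
    text = re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace
    words = text.split()
    return " ".join(words[:max_tokens])
\end{verbatim}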
\section{Methodology} \begin{table*} \centering \resizebox{0.82\textwidth}{!}{% \begin{tabular}{lrrrrrr} \hline \multicolumn{1}{l}{Model} & Rouge1-F1 & Rouge1-Recall & Rouge2-F1 & Rouge2-Recall & RougeL-F1 & RougeL-Recall \\ \hline BART (Abs) & 0.4350 & 0.4697 & 0.1807 & 0.1968 & 0.2722 & 0.2934 \\ BART (Abs+Intro) & 0.4518 & 0.4923 & 0.1977 & 0.2135 & 0.2820 & 0.3061 \\ BART (Abs+Intro$_{all}$) & 0.4443 & 0.4816 & 0.1991 & 0.2142 & 0.2825 & 0.3040 \\ BART (Abs+Intro+Con) & 0.4536 & \textbf{0.5171} & 0.2016 & \textbf{0.2271} & 0.2864 & \textbf{0.3243} \\ BART (Data augmentation) & 0.4490 & 0.4887 & 0.1972 & 0.2136 & 0.2895 & 0.3139 \\ BART + Two-stage & 0.4529 & 0.4882 & 0.2067 & 0.2224 & \textbf{0.2929} & 0.3140 \\ BART + Multi-label & \textbf{0.4600} & 0.5013 & \textbf{0.2070} & 0.2223 & 0.2876 & 0.3104 \\ \hline \end{tabular} } \caption{Our results on the CL-LaySumm 2020 shared task.} \label{tab: results} \end{table*} \subsection{Baseline} We use BART, a denoising autoencoder for pretraining sequence-to-sequence models \cite{lewis2019bart}, as our baseline. BART is based on the standard Transformer model \cite{vaswani2017attention}, and can be regarded as generalizing BERT (due to the bidirectional encoder) and GPT (due to the left-to-right decoder). It is pre-trained on the same corpus as RoBERTa \cite{liu2019roberta} with two tasks: text infilling and sentence permutation. For text infilling, 30\% of the tokens in each document are masked and the model is trained to recover them at the output. For sentence permutation, all sentences are permuted at the input and the model is supposed to generate the output sentences in the correct order. BART obtains strong performance on the summarization task. We use the BART model fine-tuned on the CNN/DailyMail dataset \cite{hermann2015teaching} to initialize our model. \subsection{Multi-Label Summarization Model} \label{Multi-Label Summarization Model} There are two canonical strategies for summarization: extractive summarization, which concatenates selected sentences into the summary, and abstractive summarization, which generates novel sentences for the summary. Inspired by the observation that many of the sentences in human-written lay summaries have corresponding sentences in the original papers, we use an unsupervised approach to convert the abstractive summaries to extractive labels and train abstractive summarization together with extractive summarization. To make the ground-truth sentence-level binary labels for extractive summarization, which we call ORACLE, we use a greedy algorithm introduced by \cite{nallapati2017summarunner}. The approach is based on the idea that the selected sentences from the input should be the ones that maximize the Rouge score \cite{lin2003automatic} with respect to the gold summary. The architecture of our model is shown in Figure \ref{fig:model}, which follows the BART model's structure. The input document is fed into the bidirectional encoder, and the contextual embedding of the $i^{th}$ [CLS] symbol is used as the representation of the $i^{th}$ sentence. After a feedforward neural network, each sentence representation produces a binary distribution over whether the sentence belongs to the extractive summary. As for the abstractive summary, it is generated by the autoregressive decoder. The overall loss $L$ is calculated by $L = w_{e}L_{e} + L_{a}$. Here $L_{e}$ and $L_{a}$ refer to the cross-entropy losses of the extractive and abstractive summaries, respectively, and $w_{e}$ weights the extractive term.
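The greedy ORACLE construction can be sketched as follows; \texttt{rouge\_1} here is a simple unigram-overlap stand-in for the actual Rouge implementation, so the exact scores and the stopping rule are illustrative assumptions.
\begin{verbatim}
from collections import Counter

def rouge_1(candidate, reference):
    """Unigram-overlap F1, a simplified stand-in for Rouge-1."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def greedy_oracle(sentences, gold_summary, max_sents=5):
    """Greedily select sentences maximizing Rouge with the gold summary;
    return binary extractive labels, one per input sentence."""
    labels = [0] * len(sentences)
    selected, best = [], 0.0
    for _ in range(max_sents):
        gains = [(rouge_1(" ".join(selected + [s]), gold_summary), i)
                 for i, s in enumerate(sentences) if labels[i] == 0]
        if not gains:
            break
        score, i = max(gains)
        if score <= best:  # stop when no sentence improves the score
            break
        best, labels[i] = score, 1
        selected.append(sentences[i])
    return labels
\end{verbatim}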
\subsection{Data Augmentation} Data augmentation has been an effective technique to create new training instances when the training data is not enough, as demonstrated in computer vision as well as in many NLP tasks \cite{chen2017reading, yang2019data, yuan2017machine}. Existing data augmentation approaches in NLP can be categorized into retrieval-based methods \cite{chen2017reading, yang2019data} and generation-based methods \cite{yuan2017machine, buck2017ask}. However, none of these suits our situation, since external sources or auxiliary training data are still required. So we adopted a method similar to that of \cite{nema2017diversity}. A pre-defined vocabulary of 24,822 words was used, in which each word had been associated with a synonym. Then, for each training instance, a certain ratio (in our case, 1/9) of the words in each document was randomly selected (excluding stop words and numerical values) and replaced with the synonyms found in the vocabulary. If a selected word was not found in the vocabulary, it was added there together with the most similar word found based on cosine similarity in the GloVe \cite{pennington2014glove} vocabulary. For each training instance, this process was repeated 9 times to create 9 new documents, while the summary of the original instance was reused in the newly generated instances (see the sketch at the end of this section). \subsection{Two-Stage Fine-tuning} \label{two stage fine-tuning} To make use of the ScisummNet dataset, we adopt a two-stage fine-tuning method. In the first stage, we fine-tune the pre-trained BART model on the ScisummNet dataset. We use the Abstract and the annotator-selected citation sentences as the input and the gold summary as the output. The model is fine-tuned for 20000 iterations before being saved. In the second stage, we use the same settings as when directly fine-tuning on the CL-LaySumm 2020 dataset. \section{Experiments} During the training phase, we randomly select 90\% of the CL-LaySumm 2020 Dataset for training and 10\% for validation. If a data sample does not contain an Abstract or an Introduction, we do not include it in training or validation. To find the optimal architecture for this task within the models we have, we set up seven different experiments. \textit{BART (Abs)}: We only use the Abstract as the input to the BART model. \textit{BART (Abs+Intro)}: We use the Abstract and the first paragraph of the Introduction as the input to the BART model. \textit{BART (Abs+Intro$_{all}$)}: We use the Abstract and the whole Introduction as the input to the BART model. \textit{BART (Abs+Intro+Con)}: We use the Abstract, the first paragraph of the Introduction, and the Conclusion (if the paper has one) as the input to the BART model. \textit{BART (Data augmentation)}: We use the same data as BART (Abs+Intro+Con). For each training sample, we create 9 new input documents by synonym data augmentation. \textit{BART + Two-stage}: We use the same data as BART (Abs+Intro+Con) as the input to the BART model. The two-stage fine-tuning method is introduced in Section \ref{two stage fine-tuning}. \textit{BART + Multi-label}: We use the same data as BART (Abs+Intro+Con). In addition, for each sentence in the input, we add a [CLS] token at the beginning. As for the hyperparameters, we use a dynamic learning rate that warms up for 1000 iterations and decays afterward. We set the batch size to 1 because of the limitation of GPU memory. Gradients are accumulated over every ten iterations, and we train all models for 6000 iterations on 1 GPU (GTX 1080 Ti).
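As a concrete illustration of the synonym-replacement augmentation described earlier in this section, the following sketch generates the 9 augmented copies of a document; the tiny synonym and stop-word dictionaries are placeholders for the 24,822-word vocabulary, and the GloVe-based fallback is omitted.
\begin{verbatim}
import random

# Tiny stand-ins for the full synonym vocabulary and stop-word list.
SYNONYMS = {"paper": "article", "method": "approach", "result": "outcome"}
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is"}

def augment(document, ratio=1/9, copies=9, seed=0):
    """Create `copies` documents, each with ~`ratio` of the eligible
    words replaced by synonyms; the gold summary is reused unchanged."""
    rng = random.Random(seed)
    words = document.split()
    eligible = [i for i, w in enumerate(words)
                if w.lower() not in STOPWORDS and not w.isdigit()]
    if not eligible:
        return [document] * copies
    k = max(1, int(len(eligible) * ratio))
    out = []
    for _ in range(copies):
        new = list(words)
        for i in rng.sample(eligible, k):
            new[i] = SYNONYMS.get(new[i].lower(), new[i])
        out.append(" ".join(new))
    return out
\end{verbatim}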
We save the best model, i.e. the one with the highest Rouge1-F1 score on the validation set. For the BART model, we use the implementation from huggingface\footnote{https://github.com/huggingface/transformers}. We use the BART large model pre-trained on the CNN/DailyMail dataset. \section{Result Analysis} The results are shown in Table \ref{tab: results}, and we analyze them from three aspects. Besides, we also generate a lay summary of our own paper, which is presented in Appendix~\ref{our own paper}. \paragraph{Different inputs to the model.} The experimental results of BART (Abs), BART (Abs+Intro), and BART (Abs+Intro+Con) show that by adding the Introduction and Conclusion to the input, the models' performance improves consistently. However, comparing the results of BART (Abs+Intro) and BART (Abs+Intro$_{all}$), using the whole Introduction rather than its first paragraph decreases the Rouge1 score. We think this is because the CL-LaySumm 2020 task requires generating a relatively short summary of less than 150 words. If the input is too long, summarization becomes harder for the model because longer input contains more noise. Since the CL-LaySumm 2020 dataset is also small, the model does not have enough samples to learn the task. \paragraph{Two-stage fine-tuning and Data Augmentation.} The experimental results show that two-stage fine-tuning does not help to improve the model's performance. After checking the details of ScisummNet, we find that the corpus comes from the ACL Anthology Network (AAN) \cite{radev2013acl}, which means all of its data relates to computational linguistics. In contrast, the CL-LaySumm 2020 dataset uses papers from a variety of domains including biology and medicine. The statistical differences between these two datasets make it hard for the model to learn prior knowledge that can be utilized in the CL-LaySumm 2020 task. As for the data augmentation, the model performance also does not increase as we expected, which contradicts the results of the original paper \cite{nema2017diversity}. However, the same method also fails in \cite{laskar2020query}, which likewise adopted a large pre-trained model as a starting point for fine-tuning. So we think the possible reason might be that large pre-trained models are less robust to noisy input. Our synonym replacement method is simple and unsupervised. On one hand, it can increase the vocabulary diversity of the training data without changing the semantic meaning much, but on the other hand, the quality, and especially the grammar, of the generated instances cannot be guaranteed. Thus, some noise might be introduced when we augment the data, which decreases the model performance. \paragraph{Multi-label summarization.} Comparing the BART (Abs+Intro+Con) and BART + Multi-label models, we find that with multi-label training the Rouge1-F1 score is better but the recall is lower, which means that the precision increases considerably. We think that with the extra supervision of sentence labels, the model can learn a better understanding of the sentences. As a result, the model is able to extract important content from the input, which helps raise the F1 and precision scores. \section{Conclusion} In this paper, we showcased how different inputs, data augmentation, the training strategy, and sentence labels influence the lay summarization task. We introduced a new method that utilizes sentence labels as an additional supervision signal while training a BART-based model.
Experimental results show that our models can generate better summaries, as evaluated by the Rouge1-F1 score. \normalem \bibliographystyle{acl_natbib}
\section{Introduction} The \emph{syntactic complexity}~\cite{BrYe11} $\sigma(L)$ of a regular language $L$ is defined as the size of its syntactic semigroup~\cite{Pin97}. It is known that this semigroup is isomorphic to the transition semigroup of the quotient automaton ${\mathcal D}$ and of a minimal deterministic finite automaton accepting the language. The number $n$ of states of ${\mathcal D}$ is the \emph{state complexity} of the language~\cite{Yu01}, and it is the same as the \emph{quotient complexity}~\cite{Brz10} (number of left quotients) of the language. The \emph{syntactic complexity of a class} of regular languages is the maximal syntactic complexity of languages in that class, expressed as a function of the quotient complexity~$n$. Syntactic complexity is related to the Myhill equivalence relation \cite{Myh57}, and it counts the number of classes of non-empty words in a regular language which act distinctly. It provides a natural bound on the time and space complexity of algorithms working on the transition semigroup. For example, a simple algorithm checking whether a language is \emph{star-free} just enumerates all transformations and verifies that none of them contains a non-trivial cycle \cite{McSe71}. Syntactic complexity does not refine state complexity, but used as an additional measure it can distinguish particular subclasses of regular languages from the class of all regular languages, whereas state complexity alone cannot. For example, the state complexity of basic operations in the class of star-free languages is the same as in the class of all regular languages (except for reversal, where the tight upper bound is $2^{n-1}-1$; see \cite{BrSz15Aperiodic}). Finally, the largest transition semigroups play an important role in the study of \emph{most complex} languages~\cite{Brz13} in a given subclass. These are languages that meet all the upper bounds on the state complexities of Boolean operations, product, star, and reversal, and also have maximal syntactic semigroups and most complex atoms~\cite{BrTa14}. In particular, the results from this paper enabled the study of most complex bifix-free languages \cite{FS2017ComplexityBifixFree}. A language is \emph{prefix-free} if no word in the language is a proper prefix of another word in the language. Similarly, a language is \emph{suffix-free} if no word in the language is a proper suffix of another word in the language. A language is \emph{bifix-free} if it is both prefix-free and suffix-free. Prefix-, suffix-, and bifix-free languages are important classes of codes, which have numerous applications in such fields as cryptography and data compression. Codes have been studied extensively; see~\cite{BPR09} for example. Syntactic complexity has been studied for a number of subclasses of regular languages (e.g.,~\cite{BrLi15,BLL12,BLY12,BrSz15Aperiodic,HoKo04,IvNa14}). For bifix-free languages, the lower bound $(n-1)^{n-3}+(n-2)^{n-3}+(n-3)2^{n-3}$ on the syntactic complexity for $n \ge 6$ was established in~\cite{BLY12}. The values for $n \le 5$ were also determined. The problem of establishing a tight upper bound on the syntactic complexity can be quite challenging, depending on the particular subclass. For example, it is easy for prefix-free languages and right ideals, while much more difficult for suffix-free languages and left ideals. The case of bifix-free languages studied in this paper requires an even more involved proof, as the structure of the maximal transition semigroups is more complicated.
Our main contributions in this paper are as follows: \begin{enumerate} \item We prove that $(n-1)^{n-3}+(n-2)^{n-3}+(n-3)2^{n-3}$ is also an upper bound on the syntactic complexity for $n \ge 8$. To do this, we apply the general method of constructing an injective function (cf. \cite{BrSz14a} and \cite{BrSz15SyntacticComplexityOfSuffixFree}). The construction here is much more involved than in the previous cases, and uses a number of tricks for ensuring injectivity. \item We prove that the transition semigroup meeting this bound is unique for every $n \ge 8$. \item We refine the witness DFA meeting the bound by reducing the size of the alphabet to $(n-2)^{n-3} + (n-3)2^{n-3} - 1$, and we show that it cannot be any smaller. \item Using a dedicated algorithm, we verify by computation that the two semigroups $\mathbf{W}^{\le 5}_{\mathrm{bf}}$ and $\mathbf{W}^{\ge 6}_{\mathrm{bf}}$ (defined below) are the unique largest transition semigroups of a minimal DFA of a bifix-free language, for $n=5$ and $n=6,7$ respectively (whereas they coincide for $n=3,4$). \end{enumerate} In summary, for every $n$ we have determined the syntactic complexity, the unique largest semigroups, and the minimal sizes of the alphabets required; this completely solves the problem for bifix-free languages. \section{Preliminaries} Let $\Sigma$ be a non-empty finite alphabet, and let $L \subseteq \Sigma^*$ be a language. If $w \in \Sigma^*$ is a word, $L.w$ denotes the \emph{left quotient}, or simply quotient, of $L$ by $w$, which is defined by $L.w = \{u \mid wu \in L\}$. We denote the set of quotients of $L$ by $K=\{K_0,\dots,K_{n-1}\}$, where $K_0=L=L.\varepsilon$ by convention. The number of quotients of $L$ is its \emph{quotient complexity}~\cite{Brz10} $\kappa(L)$. By the Myhill-Nerode Theorem, a language is regular if and only if its set of quotients is finite. A \emph{deterministic finite automaton (DFA)} is a tuple ${\mathcal D}=(Q, \Sigma, \delta, q_0,F)$, where $Q$ is a finite non-empty set of \emph{states}, $\Sigma$ is a finite non-empty \emph{alphabet}, $\delta\colon Q\times \Sigma\to Q$ is the \emph{transition function}, $q_0\in Q$ is the \emph{initial} state, and $F\subseteq Q$ is the set of \emph{final} states. We extend $\delta$ to a function $\delta\colon Q\times \Sigma^*\to Q$ as usual. The \emph{quotient DFA} of a regular language $L$ with $n$ quotients is defined by ${\mathcal D}=(K, \Sigma, \delta_{\mathcal D}, K_0,F_{\mathcal D})$, where $\delta_{\mathcal D}(K_i,w)=K_j$ if and only if $K_i.w=K_j$, and $F_{\mathcal D}=\{K_i\mid \varepsilon \in K_i\}$. Without loss of generality, we assume that $Q=\{0,\dots,n-1\}$. Then ${\mathcal D}=(Q, \Sigma, \delta, 0,F)$, where $\delta(i,w)=j$ if $\delta_{\mathcal D}(K_i,w)=K_j$, and $F$ is the set of subscripts of quotients in $F_{\mathcal D}$. A state $q \in Q$ is \emph{empty} if its quotient $K_q$ is empty. The quotient DFA of $L$ is isomorphic to every complete minimal DFA of $L$. The number of states in the quotient DFA of $L$ (the quotient complexity of $L$) is therefore equal to the state complexity of $L$. In any DFA ${\mathcal D}$, each letter $a\in \Sigma$ induces a transformation on the set $Q$ of $n$ states. We let ${\mathcal T}_n$ denote the set of all $n^n$ transformations of $Q$; then ${\mathcal T}_n$ is a monoid under composition. The \emph{image} of $q\in Q$ under transformation $t$ is denoted by $qt$, and the \emph{image} of a subset $S \subseteq Q$ is $St = \{qt \mid q \in S\}$.
If $s,t \in {\mathcal T}_n$ are transformations, their composition is denoted by $st$ and defined by $q(st)=(qs)t$. The identity transformation is denoted by $\mathbf{1}$, and we have $q\mathbf{1} = q$ for all $q \in Q$. By $(S \to q)$, where $S \subseteq Q$ and $q \in Q$, we denote a \emph{semiconstant} transformation that maps all the states from $S$ to $q$ and behaves as the identity function on the states in $Q \setminus S$. A \emph{constant} transformation is the semiconstant transformation $(Q \to q)$, where $q \in Q$. A \emph{unitary} transformation is $(\{p\} \to q)$, for some distinct $p,q \in Q$; this is denoted by $(p \to q)$ for simplicity. The \emph{transition semigroup} of ${\mathcal D}$ is the semigroup of all transformations generated by the transformations induced by $\Sigma$. Since the transition semigroup of a minimal DFA of a language $L$ is isomorphic to the syntactic semigroup of $L$~\cite{Pin97}, the syntactic complexity of $L$ is equal to the cardinality of the transition semigroup of ${\mathcal D}$. The \emph{underlying digraph} of a transformation $t \in {\mathcal T}_n$ is the digraph $(Q,E)$, where $E = \{(q,qt) \mid q \in Q\}$. We identify a transformation with its underlying digraph and use the usual graph terminology for transformations: The \emph{in-degree} of a state $q \in Q$ is the cardinality $|\{p \in Q \mid pt = q\}|$. A \emph{cycle} in $t$ is a cycle of length at least 2 in its underlying digraph. A \emph{fixed point} in $t$ is a self-loop in its underlying digraph. An \emph{orbit} of a state $q \in Q$ in $t$ is the connected component containing $q$ in its underlying digraph, that is, the set $\{p \in Q \mid pt^i = qt^j\text{ for some }i,j \ge 0\}$. Note that every orbit contains either exactly one cycle or one fixed point. The \emph{distance} in $t$ from a state $p \in Q$ to a state $q \in Q$ is the length of the path from $p$ to $q$ in the underlying digraph of $t$, that is, $\min\{i \in \mathbb{N} \mid pt^i = q\}$; it is undefined if no such path exists. If a state $q$ does not lie in a cycle, then the tree of $q$ is the underlying digraph of $t$ restricted to the states $p$ such that there is a path from $p$ to $q$. \subsection{Bifix-free languages and semigroups} Let ${\mathcal D}_n=(Q, \Sigma, \delta, 0,F)$, where $Q = \{0,\ldots,n-1\}$, be a minimal DFA accepting a bifix-free language $L$, and let $T(n)$ be its transition semigroup. We also define $Q_M = \{1,\ldots,n-3\}$ (the set of the ``middle'' states). The following properties of bifix-free languages, slightly adapted to our terminology, are well known~\cite{BLY12}: \begin{lemma}\label{lem:bifix-free} A minimal DFA ${\mathcal D}_n=(Q, \Sigma, \delta, 0,F)$ of a bifix-free language $L$ satisfies the following properties: \begin{enumerate} \item There is an empty state, which is $n-1$ by convention. \item There exists exactly one final quotient, which is $\{\varepsilon\}$, and whose state is $n-2$ by convention, so $F=\{n-2\}$. \item For $u,v\in \Sigma^+$, if $L.v\neq \emptyset$, then $L.v\neq L.uv$. \item In the underlying digraph of every transformation of $T(n)$, there is a path starting at $0$ and ending at $n-1$. \end{enumerate} \end{lemma} Items~(1) and~(2) follow from the properties of prefix-free languages, while~(3) and~(4) follow from the properties of suffix-free languages.
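Before proceeding, a small worked example of the transformation notation may be helpful (the example is our own illustration). Take $Q = \{0,\dots,5\}$, $s = (\{1,2\} \to 3)$ and $t = (3 \to 5)$. Then $$1(st) = (1s)t = 3t = 5, \qquad 4(st) = (4s)t = 4t = 4,$$ and altogether $st = (\{1,2,3\} \to 5)$, which is again a semiconstant transformation.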
Following \cite{BrSz15SyntacticComplexityOfSuffixFree}, we say that an (unordered) pair $\{p,q\}$ of distinct states in $Q_M$ is \emph{colliding} (or $p$ \emph{collides} with $q$) in $T(n)$ if there is a transformation $t \in T(n)$ such that $0t = p$ and $rt = q$ for some $r \in Q_M$. A pair of states is \emph{focused by} a transformation $u \in {\mathcal T}_n$ if $u$ maps both states of the pair to a single state $r \in Q_M \cup \{n-2\}$. We then say that $\{p,q\}$ is \emph{focused to the state $r$}. By Lemma~\ref{lem:bifix-free}, it follows that if $\{p,q\}$ is colliding in $T(n)$, then there is no transformation $u \in T(n)$ that focuses $\{p,q\}$. Hence, in the case of bifix-free languages, colliding states can be mapped to a single state only if that state is $n-1$. In contrast with suffix-free languages, we do not consider the pairs from $Q_M \times \{n-2\}$ as colliding, since they cannot be focused. For $n \ge 2$ we define the set of transformations \begin{eqnarray*} \mathbf{B}_{\mathrm{bf}}(n) = \{t \in {\mathcal T}_n & \mid & 0 \not\in Qt\text{, }(n-1)t=n-1\text{, }(n-2)t=n-1\text{, and for all }j\ge 1,\\ & & 0t^j = n-1\text{ or }0t^j \neq qt^j~~\forall q, \; 0 < q < n-1\}. \end{eqnarray*} In~\cite{BLY12} it was shown that the transition semigroup $T(n)$ of the minimal DFA of a bifix-free language must be contained in $\mathbf{B}_{\mathrm{bf}}(n)$. The set $\mathbf{B}_{\mathrm{bf}}(n)$ consists of all transformations $t$ that fix $n-1$, map $n-2$ to $n-1$, omit $0$ from their image, and whose powers do not focus any pair made colliding by $t$ itself. Since $\mathbf{B}_{\mathrm{bf}}(n)$ is not a semigroup, no transition semigroup of a minimal DFA of a bifix-free language can contain all transformations from $\mathbf{B}_{\mathrm{bf}}(n)$. Therefore, its cardinality is not a tight upper bound on the syntactic complexity of bifix-free languages. A lower bound on the syntactic complexity was established in~\cite{BLY12}. We study the following two semigroups that play an important role for bifix-free languages. \subsubsection{Semigroup $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$.} For $n \ge 3$ we define the semigroup: \begin{eqnarray*} \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n) & = & \{t \in \mathbf{B}_{\mathrm{bf}}(n) \mid 0t \in \{n-2,n-1\}\text{, or}\\ & & 0t \in Q_M\text{ and }qt \in \{n-2,n-1\}\text{ for all }q \in Q_M\}.
\end{eqnarray*} The following remark summarizes the transformations of $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ (illustrated in Fig.~\ref{fig:Wbf_transformations}): \begin{remark}\label{rem:Wbf_transformations} $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ contains all transformations that: \begin{enumerate} \item map $\{0,n-2,n-1\}$ to $n-1$, and $Q_M$ into $Q \setminus \{0\}$, \item map $0$ to $n-2$, $\{n-2,n-1\}$ to $n-1$, and $Q_M$ into $Q \setminus \{0,n-2\}$, \item map $0$ to a state $q \in Q_M$, and $Q_M$ into $\{n-2,n-1\}$.\hfill$\blacksquare$ \end{enumerate} \end{remark} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(14,16)(0,-4) \node[Nframe=n](name)(1,10){(1):} \node(0)(2,4){0}\imark(0) \node(1)(6,8){$1$} \node[Nframe=n,Nw=2,Nh=2](dots)(6,4){$\dots$} \node[Nw=3.5,Nh=11.5,Nmr=1.25,dash={.5 .25}{.25}](QM)(6,4){} \node(n-3)(6,0){$n$-$3$} \node(n-2)(10,4){$n$-$2$}\rmark(n-2) \node(n-1)(10,0){$n$-$1$} \drawedge[curvedepth=-6,sxo=-.2,exo=.2](0,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawloop[loopangle=0,syo=3](QM){} \drawedge(QM,n-2){} \drawedge(QM,n-1){} \end{picture}\begin{picture}(14,14)(0,-4) \node[Nframe=n](name)(1,10){(2):} \node(0)(2,4){0}\imark(0) \node(1)(6,8){$1$} \node[Nframe=n,Nw=2,Nh=2](dots)(6,4){$\dots$} \node[Nw=3.5,Nh=11.5,Nmr=1.25,dash={.5 .25}{.25}](QM)(6,4){} \node(n-3)(6,0){$n$-$3$} \node(n-2)(10,4){$n$-$2$}\rmark(n-2) \node(n-1)(10,0){$n$-$1$} \drawedge[curvedepth=8,sxo=-.5,exo=.5](0,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawloop[loopangle=0,syo=3](QM){} \drawedge(QM,n-1){} \end{picture}\begin{picture}(14,14)(0,-4) \node[Nframe=n](name)(1,10){(3):} \node(0)(2,4){0}\imark(0) \node(1)(6,8){$1$} \node[Nframe=n,Nw=2,Nh=2](dots)(6,4){$\dots$} \node[Nw=3.5,Nh=11.5,Nmr=1.25,dash={.5 .25}{.25}](QM)(6,4){} \node(n-3)(6,0){$n$-$3$} \node(n-2)(10,4){$n$-$2$}\rmark(n-2) \node(n-1)(10,0){$n$-$1$} \drawedge(0,dots){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(QM,n-2){} \drawedge(QM,n-1){} \end{picture}\end{center} \caption{The three types of transformations in $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ from Remark~\ref{rem:Wbf_transformations}.}\label{fig:Wbf_transformations} \end{figure} The cardinality of $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ is $(n-1)^{n-3}+(n-2)^{n-3}+(n-3)2^{n-3}$, where the three summands count the transformations of types~(1), (2), and~(3), respectively: a transformation of type~(1) maps each of the $n-3$ states of $Q_M$ into the $(n-1)$-element set $Q \setminus \{0\}$, a transformation of type~(2) maps them into the $(n-2)$-element set $Q \setminus \{0,n-2\}$, and a transformation of type~(3) chooses one of the $n-3$ states of $Q_M$ as the image of $0$ and maps each state of $Q_M$ into the two-element set $\{n-2,n-1\}$. \begin{proposition}\label{pro:Wbf_unique} $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ is the unique maximal transition semigroup of a minimal DFA ${\mathcal D}_n$ of a bifix-free language in which there are no colliding pairs of states. \end{proposition} \begin{proof} Since for any pair $p,q \in Q_M$ the transformation $(0 \to n-1)(\{p,q\} \to n-2)(n-2 \to n-1)$ is in the semigroup, the pair $\{p,q\}$ cannot be colliding. Therefore, there are no colliding pairs in $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. Let $T(n)$ be a transition semigroup in which there are no colliding pairs of states. Consider $t \in T(n)$. If $0t = n-1$, then $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, as it is a transformation of Type~1 from Remark~\ref{rem:Wbf_transformations}. If $0t = n-2$, then $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, as it is a transformation of Type~2 from Remark~\ref{rem:Wbf_transformations}. If $0t \in Q_M$, then $qt \in \{n-2,n-1\}$ for every $q \in Q_M$, as otherwise $\{0t,qt\}$ would be a colliding pair, so $t$ is a transformation of Type~3 from Remark~\ref{rem:Wbf_transformations}.
Therefore, $T(n)$ is a subsemigroup of $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, and so $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ is the unique maximal one. \end{proof} In~\cite{BLY12} it was shown that for $n \ge 5$ there exists a witness DFA of a bifix-free language whose transition semigroup is $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ over an alphabet of size $(n-2)^{n-3}+(n-3)2^{n-3}+2$ (and 18 if $n=5$). Now we slightly refine the witness from \cite[Proposition~31]{BLY12} by reducing the size of the alphabet to $(n-2)^{n-3} + (n-3)2^{n-3} - 1$, and then we show that it cannot be any smaller. \begin{definition}[Bifix-free witness] For $n \ge 4$, let ${\mathcal W}(n) = (Q,\Sigma,\delta,0,\{n-2\})$, where $Q = \{0,\ldots,n-1\}$ and $\Sigma$ contains the following letters: \begin{enumerate} \item $b_i$, for $1 \le i \le n-3$, inducing the transformations $(0 \to n-1)(i \to n-2)(n-2 \to n-1)$, \item $c_i$, for every transformation of type~(2) from Remark~\ref{rem:Wbf_transformations} that is different from $(0 \to n-2)(Q_M \to n-1)(n-2 \to n-1)$, \item $d_i$, for every transformation of type~(3) from Remark~\ref{rem:Wbf_transformations} that is different from $(0 \to q)(Q_M \to n-1)(n-2 \to n-1)$ for some state $q \in Q_M$. \end{enumerate} Altogether, we have $|\Sigma| = (n-3) + ((n-2)^{n-3}-1) + (n-3)(2^{n-3}-1) = (n-2)^{n-3} + (n-3)2^{n-3} - 1$. For $n=4$ three letters suffice, since the transformation of $b_1$ is induced by $c_i d_i$, where $c_i\colon (0 \to 2)(2 \to 3)$ and $d_i\colon (0 \to 1)(1 \to 2)(2 \to 3)$. \end{definition} \begin{proposition} The transition semigroup of ${\mathcal W}(n)$ is $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. \end{proposition} \begin{proof} Consider a transformation $t$ of type~(1) from Remark~\ref{rem:Wbf_transformations}. Let $S \subseteq Q_M$ be the set of states that are mapped to $n-2$ by $t$. If $S = \emptyset$, then $t = s x$, where $s = (0 \to n-2)(n-2 \to n-1)$ is induced by a $c_i$, and $x$ is the transformation induced by a $c_i$ that maps $Q_M$ in the same way as $t$. If $S \neq \emptyset$, then let $q \in Q_M$ be a state such that $q \not\in Q_M t$. Let $x$ be the transformation induced by a $c_i$ that maps the states from $S$ to $q$ and the states from $Q_M \setminus S$ in the same way as $t$. Then $t = x b_q$, since $0 x b_q = n-1$, $S x b_q = q b_q = n-2$, and for $p \in (Q_M \setminus S)$ we have $p x b_q = p t$. Hence, we obtain all transformations of type~(1) in $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. It remains to show how to generate the two missing transformations of type~(2) and type~(3) that do not have corresponding generators $c_i$ and $d_i$, respectively. Let $u = (0 \to q)(Q_M \to n-2)(n-2 \to n-1)$, which is induced by a $d_i$. Consider the transformation $t = (0 \to q)(Q_M \to n-1)(n-2 \to n-1)$. Then $t = u v$, where $v = (0 \to n-1)(n-2 \to n-1)$ is of type~(1). Consider the transformation $t = (0 \to n-2)(Q_M \to n-1)(n-2 \to n-1)$. Then $t = u v$, where $v = (0 \to n-1)(q \to n-2)(n-2 \to n-1)$ is of type~(1). \end{proof} \begin{proposition}\label{pro:Wbf_alphabet_lower_bound} For $n \ge 5$, at least $(n-2)^{n-3} + (n-3)2^{n-3} - 1$ generators are necessary to generate $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. \end{proposition} \begin{proof} Consider a transformation $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ of type~(2) from Remark~\ref{rem:Wbf_transformations} that is different from $(0 \to n-2)(Q_M \to n-1)(n-2 \to n-1)$.
If $t$ were the composition of two transformations from $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, then either $t$ would map $0$ to $n-1$, or $t$ would map $Q_M$ into $\{n-2,n-1\}$. Since neither is the case, $t$ must be a generator. There are $(n-2)^{n-3} - 1$ such generators. Consider a transformation $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ of type~(3) from Remark~\ref{rem:Wbf_transformations} that is different from $(0 \to q)(Q_M \to n-1)(n-2 \to n-1)$ for some $q \in Q_M$. Note that to generate $t$, a transformation of type~(3) must be used, but the composition of such a transformation with any other transformation from $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ maps every state from $Q_M$ to $n-1$. Hence, $t$ must be used as a generator, and there are $(n-3)(2^{n-3}-1)$ such generators. Consider a transformation $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ of type~(1) from Remark~\ref{rem:Wbf_transformations} of the form $(0 \to n-1)(q \to n-2)(n-2 \to n-1)$ for some $q \in Q_M$. Note that to generate $t$, transformations of type~(3) cannot be used, because $Q_M$ is not mapped into $\{n-2,n-1\}$ if $|Q_M| \ge 3$. Let $t = g_1 \dots g_k$, where the $g_i$ are generators. Since a transformation of type~(2) does not map $q$ to $n-2$, $g_k$ cannot be of type~(2), and so must be of type~(1). Moreover, $Q_M g_1 \dots g_{k-1} = Q_M$, as otherwise $t$ would map a state $p \in Q_M$ to $n-1$. Hence, $Q_M g_k = Q_M \setminus \{q\}$, and for every selection of $q$ there exists a different $g_k$. There are $n-3$ such generators. \end{proof} \subsubsection{Semigroup $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$.} For $n \ge 3$ we define the semigroup \begin{eqnarray*} \mathbf{W}^{\le 5}_{\mathrm{bf}}(n) = \{t \in \mathbf{B}_{\mathrm{bf}}(n) & \mid & \text{for all }p,q \in Q_M\text{ where }p \neq q, pt = qt = n-1\text{ or }pt \neq qt\}. \end{eqnarray*} \begin{proposition}\label{pro:Vbf_unique} $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ is the unique maximal transition semigroup of a minimal DFA ${\mathcal D}_n$ of a bifix-free language in which all pairs of states from $Q_M$ are colliding. \end{proposition} \begin{proof} Let $p,q \in Q_M$ be two distinct states. Then $\{p,q\}$ is colliding because of the transformation $(0 \to p)(p \to n-1)(n-2 \to n-1) \in \mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$. Therefore, all pairs of states from $Q_M$ are colliding. Let $T(n)$ be a transition semigroup in which all pairs of states from $Q_M$ are colliding. Consider $t \in T(n)$. Then for any distinct $p,q \in Q_M$, we have $pt \neq qt$ or $pt = qt = n-1$, as otherwise $\{p,q\}$ would be focused. By the definition of $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$, all such transformations $t$ belong to $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$. Therefore, $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ is the unique maximal one. \end{proof} In~\cite{BLY12} it was shown that for $n \ge 2$ there exists a DFA of a bifix-free language whose transition semigroup is $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ over an alphabet of size $(n-2)!$. We prove that this is an alphabet of minimal size that generates this transition semigroup. \begin{proposition}\label{pro:Vbf_alphabet_lower_bound} To generate $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$, at least $(n-2)!$ generators must be used. \end{proposition} \begin{proof} First we show that the composition of any two transformations $t,t' \in \mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ maps a state from $Q_M \cup \{0\}$ to the state $n-1$. If $t$ maps some state of $Q_M \cup \{0\}$ to $n-1$, then so does $tt'$. Suppose then that $t$ does not map any state of $Q_M \cup \{0\}$ to $n-1$. If $0t = n-2$, then $0tt' = n-1$.
If $0t \in Q_M$, then some state $q \in Q_M$ must be mapped either to $n-2$ or to $n-1$ (otherwise the $n-2$ states of $Q_M \cup \{0\}$ would be mapped injectively into the $n-3$ states of $Q_M$, which is impossible), and again $qtt' = n-1$. Consider all transformations $t \in \mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ that map $Q_M \cup \{0\}$ onto $Q_M \cup \{n-2\}$. There are $(n-2)!$ such transformations, and since they cannot be generated by compositions, they must be generators. \end{proof} \section{Upper bound on syntactic complexity of bifix-free languages} Our main result shows that the lower bound $(n-1)^{n-3}+(n-2)^{n-3}+(n-3)2^{n-3}$ on the syntactic complexity of bifix-free languages is also an upper bound for $n \ge 8$. We consider a minimal DFA ${\mathcal D}_n=(Q,\Sigma,\delta,0,\{n-2\})$ of an arbitrary bifix-free language, where $Q=\{0,\ldots,n-1\}$ and the empty state is $n-1$. Let $T(n)$ be the transition semigroup of ${\mathcal D}_n$. We will show that $T(n)$ is not larger than $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. Note that the semigroups $T(n)$ and $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ share the set $Q$, and in both of them $0$, $n-2$, and $n-1$ play the roles of the initial, final, and empty state, respectively. When we say that a pair of states from $Q$ is \emph{colliding}, we always mean that it is colliding in $T(n)$. First, we state the following lemma, which generalizes some arguments that we use frequently in the proof of the main theorem. \begin{lemma}\label{lem:orbits} Let $t,\e{t} \in T(n)$ and $s \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ be transformations. Suppose that: \begin{enumerate} \item All states from $Q_M$ whose mapping is different in $t$ and $s$ belong to $C$, where $C$ is either an orbit in $s$ or is the tree of a state in $s$. \item All states from $Q_M$ whose mapping is different in $\e{t}$ and $s$ belong to $\e{C}$, where $\e{C}$ is either an orbit in $s$ or is the tree of a state in $s$. \item The transformation $s^i t^j$, for some $i,j \ge 0$, focuses a colliding pair whose states are in $C$. \end{enumerate} Then either $C \subseteq \e{C}$ or $\e{C} \subseteq C$. In particular, if $C$ and $\e{C}$ are both orbits or both trees rooted in a state mapped by $s$ to $n-1$, then $C = \e{C}$. \end{lemma} \begin{proof} First observe that if $q \in (Q_M \cup \{n-2,n-1\}) \setminus C$, then $qs = qt$, since by~(1) state $q$ is mapped in the same way by $t$ as by $s$. Also, $qs \in (Q_M \cup \{n-2,n-1\}) \setminus C$, since if $qs$ were in $C$, then $q$ would be in $C$, because $C$ is an orbit or a tree and $qs$ is reachable from $q$. Hence, for any $g=g_1 \dots g_k$, where $g_i = t$ or $g_i = s$, by a simple induction we have $q g = q s^k = q t^k \in (Q_M \cup \{n-2,n-1\}) \setminus C$. The same claim holds symmetrically for $\e{C}$. Let $\{p_1,p_2\}$ be the colliding pair that is focused by $s^i t^j$ from~(3). Suppose that $C \cap \e{C} = \emptyset$. Since $p_1,p_2 \in C$, we have $p_1,p_2 \in (Q_M \cup \{n-2,n-1\}) \setminus \e{C}$. By the claim above for $\e{C}$, $p_1 s^i t^j = p_1 \e{t}^i t^j$, and $p_2 s^i t^j = p_2 \e{t}^i t^j$. But this means that $\e{t}^i t^j$ focuses $\{p_1,p_2\}$, and hence $t$ and $\e{t}$ cannot both be present in $T(n)$. So it must be that $C \cap \e{C} \neq \emptyset$, and since $C$ and $\e{C}$ are orbits or trees, we have either $C \subseteq \e{C}$ or $\e{C} \subseteq C$. \end{proof} \begin{theorem}\label{thm:bifix-free_upper_bound} For $n\ge 8$, the syntactic complexity of the class of bifix-free languages with $n$ quotients is $(n-1)^{n-3}+(n-2)^{n-3}+(n-3)2^{n-3}$.
\end{theorem} \begin{proof} We construct an injective mapping $\varphi \colon T(n) \to \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. Since $\varphi$ will be injective, this will prove that $|T(n)| \le |\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)| = (n-1)^{n-3}+(n-2)^{n-3}+(n-3)2^{n-3}$. The mapping $\varphi$ is defined by 23 (sub)cases covering all possibilities for a transformation $t \in T(n)$. Let $t$ be a transformation of $T(n)$, and let $s$ be the assigned transformation $\varphi(t)$. In every (sub)case we prove \emph{external injectivity}, which means that there is no other transformation $\e{t}$ that fits into one of the previous (sub)cases and results in the same $s$, and we prove \emph{internal injectivity}, which means that no other transformation $\e{t}$ that fits into the same (sub)case results in the same $s$. All states and variables related to $\e{t}$ are always marked by a hat. In every (sub)case we observe some properties of the defined transformations $s$: Property~(a) always says that a colliding pair is focused by a transformation of the form $s^i t^j$. Property~(b) describes the orbits and trees of the states which are mapped differently by $t$ and $s$; this is often used when applying Lemma~\ref{lem:orbits}. Property~(c) concerns the existence of cycles in $s$. See the appendix for a list and a map of all (sub)cases. \textbf{Supercase~1}: $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$.\\ Let $s = t$. This is obviously injective. \noindent For all the remaining cases let $p = 0t$. Note that all $t$ with $p \in \{n-2,n-1\}$ fit in Supercase~1. Let $k \ge 0$ be the maximal integer such that $pt^k \not\in \{n-2,n-1\}$. Then $pt^{k+1}$ is either $n-1$ or $n-2$, and we have two supercases covering these situations. \textbf{Supercase~2}: $t \not\in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ and $pt^{k+1} = n-1$.\\ Here we have the chain $$0 \stackrel{t}{\rightarrow} p \stackrel{t}{\rightarrow} pt \stackrel{t}{\rightarrow} \dots \stackrel{t}{\rightarrow} pt^k \stackrel{t}{\rightarrow} n-1.$$ Within this supercase, we will always assign transformations $s$ focusing a colliding pair, and this will make them different from the transformations of Supercase~1. Also, we will always have $0 s = n-1$. We have the following cases covering all possibilities for $t$: \textbf{Case~2.1}: $t$ has a cycle.\\ Let $r$ be the minimal state among the states that appear in cycles of $t$, that is, $$r = \min\{q\in Q \mid q\text{ is in a cycle of }t\}.$$ Let $s$ be the transformation illustrated in Fig.~\ref{fig:case2.1} and defined by: \begin{center} $0 s = n-1$, $p s = r$,\\ $(p t^i) s = pt^{i-1}$ for $1\le i\le k$,\\ $q s = q t$ for the other states $q\in Q$.
\end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,13)(0,-4) \node[Nframe=n](name)(0,7){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(8,0){$p$} \node[Nframe=n](pdots)(14,0){$\dots$} \node(pt^k)(20,0){$pt^k$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(z)(12,4){$z$} \node(r)(14,7){$r$} \node[Nframe=n](rdots)(16,4){$\dots$} \drawedge(0,p){} \drawedge(p,pdots){} \drawedge(pdots,pt^k){} \drawedge(pt^k,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=1](z,r){} \drawedge[curvedepth=1](r,rdots){} \drawedge[curvedepth=1](rdots,z){} \end{picture} \begin{picture}(28,13)(0,-4) \node[Nframe=n](name)(0,7){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(8,0){$p$} \node[Nframe=n](pdots')(14,0){$\dots$} \node(pt^k')(20,0){$pt^k$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(z')(12,4){$z$} \node(r')(14,7){$r$} \node[Nframe=n](rdots')(16,4){$\dots$} \drawedge[linecolor=red,dash={.5 .25}{.25},curvedepth=-3](0',n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pdots',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pt^k',pdots'){} \drawedge[linecolor=red,dash={.5 .25}{.25},curvedepth=3.5](p',r'){} \drawedge[curvedepth=1](z',r'){} \drawedge[curvedepth=1](r',rdots'){} \drawedge[curvedepth=1](rdots',z'){} \drawloop[loopangle=270](n-1'){} \drawedge(n-2',n-1'){} \end{picture}\end{center} \caption{Case~2.1.}\label{fig:case2.1} \end{figure} Let $z$ be the state from the cycle of $t$ such that $zt = r$. We observe the following properties: \begin{enumerate} \item[(a)] Pair $\{p,z\}$ is a colliding pair focused by $s$ to state $r$ in the cycle, which is the smallest state of all states in cycles. This is the only colliding pair which is focused to a state in a cycle. \noindent\textit{Proof}: Note that $p$ collides with any state in a cycle of $t$, in particular, with $z$. The property follows because $s$ differs from $t$ only in the mapping of states $pt^i$ ($0 \le i \le k$) and $0$, and the only state mapped to a cycle is $p$. \item[(b)] All states from $Q_M$ whose mapping is different in $t$ and $s$ belong to the same orbit in $s$ of a cycle. Hence, all colliding pairs that are focused by $s$ consist only of states from this orbit. \item[(c)] $s$ has a cycle. \item[(d)] For each $i$ with $1 \le i < k$, there is precisely one state $q$ colliding with $pt^{i-1}$ and mapped by $s$ to $pt^i$, and that state is $q=pt^{i+1}$. \noindent\textit{Proof}: Clearly $q=pt^{i+1}$ satisfies this condition. Suppose that $q \neq pt^{i+1}$. Since $pt^{i+1}$ is the only state mapped to $pt^i$ by $s$ and not by $t$, it follows that $qt = qs = pt^i$. So $q$ and $pt^{i-1}$ are focused to $pt^i$ by $t$; since they collide, this is a contradiction. \end{enumerate} \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this case and results in the same $s$; we will show that $\e{t}=t$. From~(a), there is the unique colliding pair $\{p,z\}$ focused to a state in a cycle, hence $\{\e{p},\e{z}\} = \{p,z\}$. Moreover, $p$ and $\e{p}$ are not in this cycle, so $\e{p}=p$ and $\e{z}=z$, which means that $0t = 0\e{t} = p$. Since there is no state $q \neq 0$ such that $qt=p$, the only state mapped to $p$ by $s$ is $pt$, hence $p\e{t} = pt$. From~(d) for $i=1,\ldots,k-1$, state $pt^{i+1}$ is uniquely determined, hence $p\e{t}^{i+1} = pt^{i+1}$. 
Finally, for $i=k$ there is no state colliding with $pt^{k-1}$ and mapped to $pt^k$, hence $p\e{t}^{k+1} = pt^{k+1} = n-1$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t}=t$. \textbf{Case~2.2}: $t$ has no cycles, but $k \ge 1$.\\ Let $s$ be the transformation illustrated in Fig.~\ref{fig:case2.2} and defined by: \begin{center} $0 s = n-1$, $p s = p$,\\ $(p t^i) s = p t^{i-1}$ for $1\le i\le k$,\\ $q s = q t$ for the other states $q\in Q$. \end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(8,0){$p$} \node[Nframe=n](pdots)(14,0){$\dots$} \node(pt^k)(20,0){$pt^k$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \drawedge(0,p){} \drawedge(p,pdots){} \drawedge(pdots,pt^k){} \drawedge(pt^k,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \end{picture} \begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(8,0){$p$} \node[Nframe=n](pdots')(14,0){$\dots$} \node(pt^k')(20,0){$pt^k$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pdots',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pt^k',pdots'){} \drawloop(p'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \end{picture}\end{center} \caption{Case~2.2.}\label{fig:case2.2} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] $\{p,pt\}$ is a colliding pair focused by $s$ to a fixed point of in-degree 2. This is the only pair among all colliding pairs focused to a fixed point. \noindent\textit{Proof}: This follows from the definition of $s$, since any colliding pair focused by $s$ contains $pt^i$ ($0 \le i \le k$), and only $pt$ is mapped to $p$, which is a fixed point. Also, no state except $0$ can be mapped to $p$ by $t$ because this would violate suffix-freeness; so only $p$ and $pt$ are mapped by $s$ to $p$, and $p$ has in-degree 2. \item[(b)] All states from $Q_M$ whose mapping is different in $t$ and $s$ belong to the same orbit in $s$ of a fixed point. \item[(c)] $s$ does not have any cycles, but has a fixed point $f \neq n-1$ with in-degree $\ge 2$, which is $p$. \item[(d)] For each $i$ with $1 \le i < k$, there is precisely one state $q$ colliding with $pt^{i-1}$ and mapped to $pt^i$, and that state is $q=pt^{i+1}$. This follows exactly as Property~(d) of Case~2.1. \end{enumerate} \textit{External injectivity}: Here $s$ does not have a cycle, in contrast with the transformations of Case~2.1. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this case and results in the same $s$. From~(a) there is the unique colliding pair $\{p,pt\}$ focused to the fixed point $p$, hence $\e{p} = p$ and $p\e{t} = pt$. Then, from~(d), for $i=1,\ldots,k-1$ state $pt^{i+1}$ is uniquely defined, hence $p\e{t}^{i+1} = pt^{i+1}$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Case~2.3}: $t$ does not fit in any of the previous cases, but there exist at least two fixed points of in-degree 1.\\ Let the two smallest valued fixed points of in-degree 1 be the states $f_1$ and $f_2$, that is, $$f_1 = \min\{q\in Q \mid q t = q, \forall_{q'\in Q \setminus \{q\}}\ q' t \neq q\},$$ $$f_2 = \min\{q\in Q\setminus\{f_1\} \mid q t = q, \forall_{q'\in Q \setminus \{q\}}\ q' t \neq q\}.$$ Let $s$ be the transformation illustrated in Fig.~\ref{fig:case2.3} and defined by \begin{center} $0 s = n-1$, $f_1 s = f_2$, $f_2 s = f_1$, $p s = f_2$,\\ $q s = q t$ for the other states $q\in Q$. \end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(f1)(10,4){$f_1$} \node(f2)(18,4){$f_2$} \drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawloop(f1){} \drawloop(f2){} \end{picture} \begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(f1')(10,4){$f_1$} \node(f2')(18,4){$f_2$} \drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=1,linecolor=red,dash={.5 .25}{.25}](f1',f2'){} \drawedge[curvedepth=1,linecolor=red,dash={.5 .25}{.25}](f2',f1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',f2'){} \end{picture}\end{center} \caption{Case~2.3.}\label{fig:case2.3} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] $\{p,f_1\}$ is a colliding pair focused by $s$ to $f_2$. This is the only pair among all colliding pairs that are focused. \item[(b)] All states from $Q_M$ whose mapping is different in $t$ and $s$ belong to the same orbit of a cycle in $s$. \item[(c)] $s$ has exactly one cycle, namely $(f_1,f_2)$, and it is of length 2. Moreover, one state in the cycle, which is $f_1$, has in-degree 1, and the other one, which is $f_2$, has in-degree 2. \end{enumerate} \textit{External injectivity}: To see that $s$ is distinct from the transformations of Case~2.1, observe that in $s$ the only colliding pair is focused to $f_2$, which lies in a cycle but is not the smallest state of the states of cycles. On the other hand, from~(a) of Case~2.1 the transformations of that case have only one colliding pair focused to a state in a cycle, and this is the smallest state from the states of cycles. Since $s$ has a cycle, it is different from the transformations of Case~2.2. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this case and results in the same $s$. From~(c), there is a single state in the unique cycle that has in-degree 2, and this is $f_2$. Hence $\e{f}_2 = f_2$, and so $\e{f}_1 = f_1$. From~(a), the unique focused colliding pair is $\{p,f_1\}$, so $\{\e{p},\e{f}_1\}=\{p,f_1\}$ and $\e{p} = p$. Hence $0\e{t} = 0t$, $p\e{t} = pt = n-1$, $f_1 t = f_1 \e{t} = f_1$, and $f_2 t = f_2 \e{t} = f_2$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Case~2.4}: $t$ does not fit in any of the previous cases, but there exists $x \in Q\setminus \{0\}$ of in-degree $0$ such that $xt \not\in \{x,n-2,n-1\}$.\\ Among the states satisfying these conditions, let $x$ be chosen so that $\ell$, the largest integer with $\ell \ge 1$ and $xt^\ell \not\in \{xt^{\ell-1},n-2,n-1\}$, is maximal, and so that $x$ is the smallest such state. Since $xt \not\in \{x,n-2,n-1\}$ and $t$ does not have a cycle, $x$ and $\ell$ are well defined. We have that $xt^{\ell+1} \in \{xt^\ell,n-2,n-1\}$, and $x$ has in-degree $0$. Within this case we have the following subcases covering all possibilities for $t$: \textbf{Subcase~2.4.1}: $\ell\ge 2$ and $xt^{\ell+1} = n-1$.\\ Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.4.1} and defined by \begin{center} $0 s = n-1$, $p s = xt^\ell$,\\ $q s = q t$ for the other states $q\in Q$. \end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(x)(6,4){$x$} \node(xt)(10,4){$xt$} \node[Nframe=n](xdots)(14,4){$\dots$} \node(xt^ell)(18,4){$xt^\ell$} \drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(x,xt){} \drawedge(xt,xdots){} \drawedge(xdots,xt^ell){} \drawedge(xt^ell,n-1){} \end{picture} \begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(x')(6,4){$x$} \node(xt')(10,4){$xt$} \node[Nframe=n](xdots')(14,4){$\dots$} \node(xt^ell')(18,4){$xt^\ell$} \drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',xt^ell'){} \drawedge(x',xt'){} \drawedge(xt',xdots'){} \drawedge(xdots',xt^ell'){} \drawedge(xt^ell',n-1'){} \end{picture}\end{center} \caption{Subcase~2.4.1.}\label{fig:subcase2.4.1} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] $\{xt^{\ell-1}, p\}$ is a colliding pair focused by $s$ to $xt^\ell$. \item[(b)] $p$ is the only state from $Q_M$ whose mapping is different in $t$ and $s$, and $p$ is mapped to a state mapped to $n-1$. \item[(c)] $s$ does not have any cycles. \end{enumerate} \textit{External injectivity}: Since $s$ does not have any cycles, $s$ is different from the transformations of Case~2.1 and Case~2.3. From~(a), we have a focused colliding pair in the orbit of $n-1$. Thus, $s$ is different from the transformations of Case~2.2, where all states in focused colliding pairs are in the orbit of a fixed point different from $n-1$ (Property~(b) of Case~2.2). \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. From~(b), all colliding pairs that are focused contain $p$. If there are at least two such pairs, then $p$ is uniquely determined as the unique common state. If there is only one such pair, then by~(a) it is $\{xt^{\ell-1}, p\}$, and $p$ is determined as the state of in-degree $0$, since $xt^{\ell-1}$ has in-degree $\ge 1$. Hence, $\e{p} = p$, and since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Subcase~2.4.2}: $\ell=1$, $xt^2 = n-1$, and $xt$ has in-degree $>1$.\\ Let $y$ be the smallest state different from $x$ and such that $yt = xt$. Note that $y$ has in-degree $0$, as otherwise it would contradict the choice of $x$, since there would be a state satisfying the conditions for $x$ with a larger $\ell$. Also, $x < y$, as otherwise we would choose $y$ as $x$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.4.2} and defined by \begin{center} $0 s = n-1$, $p s = y$,\\ $(xt) s = x$, $x s = y$,\\ $q s = q t$ for the other states $q\in Q$. \end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,14)(0,-4) \node[Nframe=n](name)(0,8){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(x)(10,4){$x$} \node(xt)(18,4){$xt$} \node(y)(10,8){$y$} \drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(x,xt){} \drawedge(xt,n-1){} \drawedge(y,xt){} \end{picture} \begin{picture}(28,14)(0,-4) \node[Nframe=n](name)(0,8){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(x')(10,4){$x$} \node(xt')(18,4){$xt$} \node(y')(10,8){$y$} \drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=6,linecolor=red,dash={.5 .25}{.25}](p',y'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xt',x'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](x',y'){} \drawedge(y',xt'){} \end{picture}\end{center} \caption{Subcase~2.4.2.}\label{fig:subcase2.4.2} \end{figure} We observe the following properties. \begin{enumerate} \item[(a)] $\{p,x\}$ is a colliding pair focused by $s$ to $y$. \item[(b)] All states from $Q_M$ whose mapping is different in $t$ and $s$ belong to the same orbit of a cycle of length $3$ in $s$. \item[(c)] $s$ contains exactly one cycle, namely $(x,y,xt)$. Furthermore, $y$ has in-degree $2$ and is preceded in this cycle by $x$ of in-degree $1$. \end{enumerate} \textit{External injectivity}: To see that $s$ is different from the transformations of Case~2.1, observe that by~(a) we have a colliding pair focused to $y$, which is from a cycle, but is not the smallest state from the states in cycles since $x < y$. On the other hand, in Case~2.1 all colliding pairs focused to a state in a cycle are focused to the smallest state of all states in cycles (Property~(a) of Case~2.1). Since $s$ has a cycle, it is different from the transformations of Case~2.2, Case~2.3, and Subcase~2.4.1. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. From~(c), in $s$ we have a unique cycle of length 3, and this cycle is $(x,y,xt)$. Since $y$ is uniquely determined as the state of in-degree 2 preceded in the cycle by the state of in-degree 1, we have $\e{y} = y$. Then also $\e{x} = x$ and $x\e{t} = xt$. State $p$ is the only state outside the cycle mapped to $y$, hence $\e{p} = p$. We have $0t = 0\e{t} = p$, $pt = p\e{t} = n-1$, and $xt^2 = x{\e{t}}^2 = n-1$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$. \textbf{Subcase~2.4.3}: $\ell=1$, $xt^2 = n-1$, and $xt$ has in-degree $1$.\\ We split the subcase into two subsubcases: (i) $p < xt$ and (ii) $p > xt$. 
Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.4.3} and defined by
\begin{center}
$0 s = n-1$, $p s = x$,\\
$(xt) s = x$,\\
$x s = n-2$ (i), $x s = n-1$ (ii),\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,10)(0,-4)
\node[Nframe=n](name)(0,4){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(x)(10,4){$x$} \node(xt)(18,4){$xt$}
\drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(x,xt){} \drawedge(xt,n-1){}
\end{picture}
\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,4){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(x')(10,4){$x$} \node(xt')(18,4){$xt$}
\drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',x'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xt',x'){} \drawedge[curvedepth=2,linecolor=red,dash={.1 .1}{.1}](x',n-2'){(i)} \drawedge[curvedepth=-.5,linecolor=red,dash={.1 .1}{.1},ELside=r](x',n-1'){(ii)}
\end{picture}\end{center}
\caption{Subcase~2.4.3.}\label{fig:subcase2.4.3}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,xt\}$ is a colliding pair focused by $s$ to $x$. Both states from this pair have in-degree 0.
\item[(b)] All states from $Q_M$ whose mapping is different in $t$ and $s$ are from the orbit of $n-1$, and $p$ and $xt$ are the only such states that are not mapped to $n-2$ nor to $n-1$.
\item[(c)] $s$ does not have any cycles.
\end{enumerate}
\textit{External injectivity}: Since $s$ does not have any cycles, it is different from the transformations of Case~2.1, Case~2.3, and Subcase~2.4.2. By~(b) all colliding pairs that are focused have states from the orbit of $n-1$, whereas the transformations of Case~2.2 focus a colliding pair to a fixed point. Let $\e{t}$ be a transformation that fits in Subcase~2.4.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so $x = \e{x}\e{t}^{\e{\ell}}$. But in $s$, to $x$ only states of in-degree 0 are mapped, whereas to $\e{x}\e{t}^{\e{\ell}}$ the state $\e{x}\e{t}^{\e{\ell}-1}$ is mapped, which has in-degree at least 1.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. From~(a) and~(b), $\{p,xt\}$ is the unique colliding pair focused to a state different from $n-2$; hence $\{p,xt\} = \{\e{p},\e{x}\e{t}\}$. The pair is focused to $x$, hence $\e{x} = x$. If $x$ is mapped to $n-2$, then we have subsubcase~(i) and $p$ is the smaller state in the colliding pair. If $x$ is mapped to $n-1$, then we have subsubcase~(ii) and $p$ is the larger state in the colliding pair. Hence $p = \e{p}$ and $xt = x\e{t}$. We have $0t = 0\e{t} = p$ and $(xt)t = (xt)\e{t} = n-1$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Subcase~2.4.4}: $\ell\ge 1$ and $xt^{\ell+1} = n-2$.\\
Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.4.4} and defined by
\begin{center}
$0 s = n-1$, $p s = n-2$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(x)(6,4){$x$} \node(xt)(10,4){$xt$} \node[Nframe=n](xdots)(14,4){$\dots$} \node(xt^ell)(18,4){$xt^\ell$} \drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(x,xt){} \drawedge(xt,xdots){} \drawedge(xdots,xt^ell){} \drawedge(xt^ell,n-2){} \end{picture} \begin{picture}(28,10)(0,-4) \node[Nframe=n](name)(0,4){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(x')(6,4){$x$} \node(xt')(10,4){$xt$} \node[Nframe=n](xdots')(14,4){$\dots$} \node(xt^ell')(18,4){$xt^\ell$} \drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge(x',xt'){} \drawedge(xt',xdots'){} \drawedge(xdots',xt^ell'){} \drawedge(xt^ell',n-2'){} \drawedge[curvedepth=-.5,linecolor=red,dash={.5 .25}{.25}](p',n-2'){} \end{picture}\end{center} \caption{Subcase~2.4.4.}\label{fig:subcase2.4.4} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] $\{xt^\ell, p\}$ is a colliding pair focused by $s$ to $n-2$. \item[(b)] $p$ is the only state from $Q_M$ whose mapping is different in $t$ and $s$. \item[(c)] $s$ does not contain any cycles. \end{enumerate} \textit{External injectivity}: Since $s$ does not contain any cycles, it is different from the transformations of Case~2.1, Case~2.3, and Subcase~2.4.2. From~(b), all focused colliding pairs contain $p$ and so are mapped to $n-2$ in $s$. Hence, $s$ is different from the transformations of Case~2.2, Subcase~2.4.1, and Subcase~2.4.3. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. If there are two focused colliding pairs, then $p$ is uniquely determined as the common state in these pairs. If there is only one such pair, then $p$ is the state of in-degree $0$, as the other state is $xt^\ell$, which has in-degree $\ge 1$. Hence, $\e{p} = p$. We have $0t = 0\e{t} = p$ and $pt = p\e{t} = n-1$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$. \textbf{Subcase 2.4.5}: $\ell\ge 1$ and $xt^{\ell+1} = xt^\ell$.\\ Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.4.5} and defined by \begin{center} $0 s = n-1$, $p s = xt^\ell$,\\ $q s = q t$ for the other states $q\in Q$. 
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,6){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(x)(6,4){$x$} \node(xt)(10,4){$xt$} \node[Nframe=n](xdots)(14,4){$\dots$} \node(xt^ell)(18,4){$xt^\ell$}
\drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(x,xt){} \drawedge(xt,xdots){} \drawedge(xdots,xt^ell){} \drawloop(xt^ell){}
\end{picture}
\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,6){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(x')(6,4){$x$} \node(xt')(10,4){$xt$} \node[Nframe=n](xdots')(14,4){$\dots$} \node(xt^ell')(18,4){$xt^\ell$}
\drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge(x',xt'){} \drawedge(xt',xdots'){} \drawedge(xdots',xt^ell'){} \drawloop(xt^ell'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',xt^ell'){}
\end{picture}\end{center}
\caption{Subcase~2.4.5.}\label{fig:subcase2.4.5}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p, xt^\ell\}$ is a colliding pair focused by $s$ to the fixed point $xt^\ell$, which has in-degree at least $3$.
\item[(b)] $p$ is the only state from $Q_M$ whose mapping is different in $t$ and $s$.
\item[(c)] $s$ does not contain any cycles.
\end{enumerate}
\textit{External injectivity}: Since $s$ does not contain any cycles, it is different from the transformations of Case~2.1, Case~2.3, and Subcase~2.4.2. Let $\e{t}$ be a transformation that fits in Case~2.2 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so $xt^\ell = \e{p}$. But $xt^\ell$ has in-degree at least 3, whereas $\e{p}$ has in-degree 2, which yields a contradiction. Since the orbits from Properties~(b) of the transformations of Subcase~2.4.1, Subcase~2.4.3, and Subcase~2.4.4 contain $n-1$, by Lemma~\ref{lem:orbits} they are different from $s$.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so we obtain that $xt^\ell = \e{x}\e{t}^{\e{\ell}}$. If $t \neq \e{t}$, then by~(b) $p \neq \e{p}$, and also $p\e{t} = \e{p}t = xt^\ell$, as otherwise $t$ and $\e{t}$ would not result in the same $s$. Then, $\{\e{p},xt^\ell\}$ is a colliding pair because of $\e{t}$. But $\e{p}t = (xt^{\ell})t = xt^\ell$, so this colliding pair is focused by $t$. Hence, it must be that $t = \e{t}$.
\textbf{Case~2.5}: $t$ does not fit in any of the previous cases.\\
First we observe that there exists exactly one fixed point $f \neq n-1$, and every state $q \in Q \setminus \{0,f\}$ is mapped either to $n-2$ or $n-1$: All transformations with a cycle or with $p t \neq n-1$ are covered in Cases~2.1 and~2.2. Furthermore, if there is also no state $x \in Q \setminus \{0\}$ of in-degree $0$ such that $xt \not\in \{x,n-2,n-1\}$ (Case~2.4), then, since $t$ has no cycle, there is no state $x \in Q \setminus \{0\}$ with $xt \not\in \{x,n-2,n-1\}$ at all, and so every state $q\in Q\setminus \{0\}$ must either be a fixed point or be mapped to $n-2$ or $n-1$.
If there are at least two fixed points different from $n-1$, then $t$ is covered by Case~2.3, and if there is no such fixed point, then $t \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ (a transformation of type~3 from Remark~\ref{rem:Wbf_transformations}) and so falls into Supercase~1.
\textbf{Subcase~2.5.1}: There are at least two states in $Q_M \setminus \{p\}$ that are mapped to $n-1$ by $t$.\\
Let $r_1 < r_2 < \dots < r_u$, where $u \ge 2$, be all the states from $Q_M \setminus \{p\}$ such that $r_i t = n-1$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.5.1} and defined by
\begin{center}
$0 s = n-1$, $p s = f$,\\
$r_i s = r_{i+1}$ for $1\le i\le u-1$,\\
$r_u s = r_1$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,6){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(f)(8,4){$f$} \node(r1)(14,4){$r_1$} \node[Nframe=n](rdots)(18,4){$\dots$} \node(ru)(22,4){$r_u$}
\drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawloop(f){} \drawedge[curvedepth=-.2](r1,n-1){} \drawedge[curvedepth=0,exo=.2](rdots,n-1){} \drawedge[curvedepth=0,exo=.5](ru,n-1){}
\end{picture}
\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,6){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(f')(8,4){$f$} \node(r1')(14,4){$r_1$} \node[Nframe=n](rdots')(18,4){$\dots$} \node(ru')(22,4){$r_u$}
\drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawloop(f'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',f'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](r1',rdots'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](rdots',ru'){} \drawedge[curvedepth=-2,linecolor=red,dash={.5 .25}{.25}](ru',r1'){}
\end{picture}\end{center}
\caption{Subcase~2.5.1.}\label{fig:subcase2.5.1}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,f\}$ is a colliding pair focused by $s$ to the fixed point $f$. This is the only colliding pair that is focused by $s$.
\item[(c)] $s$ contains exactly one cycle.
\end{enumerate}
\textit{External injectivity}: Since $s$ has a cycle, it is different from the transformations of Case~2.2, Subcase~2.4.1, Subcase~2.4.3, Subcase~2.4.4, and Subcase~2.4.5. From~(a) and~(c), $s$ has a cycle and focuses a colliding pair to a state whose orbit is not the orbit of a cycle. Hence, $s$ is different from the transformations of Case~2.1, Case~2.3, and of Subcase~2.4.2, where all colliding pairs that are focused by these transformations have states from the orbit of a cycle (Properties~(b) of these (sub)cases).
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. By~(a), $\{p,f\}$ is the unique colliding pair that is focused to the fixed point $f$, so $\e{p}=p$ and $\e{f}=f$. Also, there is exactly one cycle formed by the states $r_i$, so $(r_1,r_2,\ldots,r_u) = (\e{r_1},\e{r_2},\ldots,\e{r_u})$. It follows that $0t = 0\e{t} = p$, $ft = f\e{t} = f$, and $r_i t = \e{r_i} \e{t} = n-1$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
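The following Python sketch (illustrative only, not part of the proof) restates this construction; \texttt{rs} is the increasing list $[r_1,\dots,r_u]$ of all states of $Q_M \setminus \{p\}$ mapped to $n-1$ by $t$.
\begin{verbatim}
# Illustrative sketch (not part of the proof): the s of Subcase 2.5.1.
def build_s_subcase_2_5_1(t, p, f, rs):
    n = len(t)
    s = list(t)                        # q s = q t for the other states
    s[0] = n - 1                       # 0 s = n-1
    s[p] = f                           # p s = f
    u = len(rs)
    for i in range(u):                 # the cycle (r_1, ..., r_u)
        s[rs[i]] = rs[(i + 1) % u]
    return tuple(s)
\end{verbatim}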
\textbf{Subcase~2.5.2}: $t$ does not fit in Subcase~2.5.1.\\
Because $n \ge 8$, we have that $Q \setminus \{0,p,f,n-2,n-1\}$ contains at least three states. Since $t$ does not fit in Subcase~2.5.1, at least two states from $Q_M \setminus \{p\}$ are mapped to $n-2$ by $t$. Let $q_1 < q_2 < \dots < q_v$, where $v \ge 2$, be all the states from $Q_M \setminus \{p\}$ such that $q_i t = n-2$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase2.5.2} and defined by
\begin{center}
$0 s = n-1$, $p s = f$,\\
$q_i s = q_{i-1}$ for $2\le i\le v$,\\
$q_1 s = q_v$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,6){\normalsize$t\colon$} \node(0)(2,0){0}\imark(0) \node(p)(14,0){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \node(f)(8,4){$f$} \node(q1)(14,4){$q_1$} \node[Nframe=n](qdots)(18,4){$\dots$} \node(qv)(22,4){$q_v$}
\drawedge(0,p){} \drawedge(p,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawloop(f){} \drawedge[curvedepth=-3,exo=1](q1,n-2){} \drawedge[curvedepth=-2](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){}
\end{picture}
\begin{picture}(28,11)(0,-4)
\node[Nframe=n](name)(0,6){\normalsize$s\colon$} \node(0')(2,0){0}\imark(0') \node(p')(14,0){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,4){$n$-$2$}\rmark(n-2') \node(f')(8,4){$f$} \node(q1')(14,4){$q_1$} \node[Nframe=n](qdots')(18,4){$\dots$} \node(qv')(22,4){$q_v$}
\drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](0',n-1'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawloop(f'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',f'){} \drawedge[curvedepth=2,linecolor=red,dash={.5 .25}{.25}](q1',qv'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](qdots',q1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](qv',qdots'){}
\end{picture}\end{center}
\caption{Subcase~2.5.2.}\label{fig:subcase2.5.2}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p, f\}$ is a colliding pair focused by $s$ to the fixed point $f$. This is the only colliding pair that is focused by $s$.
\item[(c)] $s$ contains exactly one cycle.
\end{enumerate}
\textit{External injectivity}: In the same way as in Subcase~2.5.1, $s$ is different from the transformations of Cases~2.1--2.4. Now suppose that the same transformation $s$ is obtained in Subcase~2.5.1. Since the unique cycles in both subcases go in opposite directions, if they are equal then they must be of length 2. But then, since $n \ge 8$, there is at least one state in $Q_M$ that is mapped to $n-1$ in $t$, and hence also in $s$. However, in a transformation obtained in Subcase~2.5.1 no state from $Q_M$ is mapped to $n-1$, which yields a contradiction.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$. It follows in the same way as in Subcase~2.5.1 that we have $0t = 0\e{t} = p$, $ft = f\e{t} = f$, and $q_i t = \e{q_i} \e{t} = n-2$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
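For comparison with Subcase~2.5.1, the following Python sketch (illustrative only, not part of the proof) restates the construction of this subcase; \texttt{qs} is the increasing list $[q_1,\dots,q_v]$ of all states of $Q_M \setminus \{p\}$ mapped to $n-2$ by $t$. Note that the cycle is oriented oppositely to the one of Subcase~2.5.1, which is what the external injectivity argument above exploits.
\begin{verbatim}
# Illustrative sketch (not part of the proof): the s of Subcase 2.5.2.
def build_s_subcase_2_5_2(t, p, f, qs):
    n = len(t)
    s = list(t)                        # q s = q t for the other states
    s[0] = n - 1                       # 0 s = n-1
    s[p] = f                           # p s = f
    v = len(qs)
    for i in range(v):                 # the reversed cycle: q_i -> q_{i-1}
        s[qs[i]] = qs[(i - 1) % v]
    return tuple(s)
\end{verbatim}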
\textbf{Supercase 3:} $t \not\in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ and $pt^{k+1} = n-2$.\\
Here we have the chain
$$0 \stackrel{t}{\rightarrow} p \stackrel{t}{\rightarrow} pt \stackrel{t}{\rightarrow} \dots \stackrel{t}{\rightarrow} pt^k \stackrel{t}{\rightarrow} n-2 \stackrel{t}{\rightarrow} n-1.$$
We will always assign transformations $s$ such that $s$ together with $t$ generates a transformation that focuses a colliding pair, which distinguishes such transformations $s$ from those of Supercase~1. Moreover, we will always have $0 s = n-2$, to distinguish $s$ from the transformations of Supercase~2. For all the cases of Supercase~3, let $q_1 < q_2 < \dots < q_v$ be all the states from $Q_M \setminus \{pt^k\}$ such that $q_i t = n-2$. In contrast with Supercase~2, there is an additional difficulty in the construction of $s$: no state except $0$ can be mapped to $n-2$. On the other hand, the chains going through a state $q_i$ and ending in $n-2$ are of length at most $k+1$. We have the following cases covering all possibilities for $t$:
\textbf{Case~3.1}: $k=0$ and $t$ has a cycle.\\
Let $r$ be the minimal state among the states that appear in cycles of $t$, that is,
$$r = \min\{q\in Q \mid q \text{ is in a cycle of } t\}.$$
Let $s$ be the transformation illustrated in Fig.~\ref{fig:case3.1} and defined by
\begin{center}
$0 s = n-2$, $p s = r$,\\
$q_i s = p$ for $1\le i\le v$,\\
$qs = qt$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,12)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(z)(12,-1){$z$} \node(r)(14,2){$r$} \node[Nframe=n](rdots)(16,-1){$\dots$} \node(q1)(18,4){$q_1$} \node[Nframe=n](qdots)(20.5,4){$\dots$} \node(qv)(23,4){$q_v$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawedge[curvedepth=1](z,r){} \drawedge[curvedepth=1](r,rdots){} \drawedge[curvedepth=1](rdots,z){}
\end{picture}
\begin{picture}(28,12)(0,-1)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(z')(12,-1){$z$} \node(r')(14,2){$r$} \node[Nframe=n](rdots')(16,-1){$\dots$} \node(q1')(18,4){$q_1$} \node[Nframe=n](qdots')(20.5,4){$\dots$} \node(qv')(23,4){$q_v$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',r'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=-.2,linecolor=red,dash={.5 .25}{.25}](q1',p'){} \drawedge[curvedepth=-.3,syo=.5,linecolor=red,dash={.5 .25}{.25}](qdots',p'){} \drawedge[curvedepth=-.8,linecolor=red,dash={.5 .25}{.25}](qv',p'){} \drawedge[curvedepth=1](z',r'){} \drawedge[curvedepth=1](r',rdots'){} \drawedge[curvedepth=1](rdots',z'){}
\end{picture}\end{center}
\caption{Case~3.1.}\label{fig:case3.1}
\end{figure}
Let $z$ be the state from the cycle of $t$ such that $zt = r$. We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,z\}$ is a colliding pair focused by $s$ to state $r$ in the cycle, which is the smallest state in a cycle.
This is the only colliding pair which is focused to a state in a cycle.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the orbit of the cycle containing $r$, and so to the orbit of a cycle.
\item[(c)] $s$ has a cycle.
\end{enumerate}
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this case and results in the same $s$; we will show that $\e{t}=t$. From~(a), there is the unique colliding pair $\{p,z\}$ focused to a state in a cycle, hence $\{\e{p},\e{z}\} = \{p,z\}$. Moreover, $p$ and $\e{p}$ are not in the cycle, whereas $z$ and $\e{z}$ are, so $\e{p}=p$ and $\e{z}=z$. Since there is no state $q \neq 0$ such that $qt=p$, the only states mapped to $p$ by $s$ are $q_i$, hence $q_i = \e{q_i}$ for all $i$. We have that $0t = 0\e{t} = p$, and $q_i t = q_i \e{t} = n-2$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have that $\e{t}=t$.
\textbf{Case~3.2}: $t$ does not fit into any of the previous cases, $k=0$, and there exists a state $x \in Q \setminus \{0\}$ such that $xt \not\in \{x,n-1,n-2\}$.\\
Let $x$ be the smallest state among the states satisfying the conditions, and let $\ell\ge 1$ be the largest integer such that $xt^\ell \not\in \{xt^{\ell-1},n-2,n-1\}$. By the conditions of the case and since $t$ does not have a cycle, $x$ is well defined, and $\ell \ge 1$ is finite. Note that $xt^{\ell+1} \neq n-2$, because $xt^\ell$ collides with $p$. We have that $xt^{\ell+1} \in \{xt^\ell,n-1\}$, and $x$ has in-degree 0. Also note that, since $k=0$, all $q_i$ are of in-degree 0. We have the following subcases covering all possibilities for $t$:
\textbf{Subcase~3.2.1}: $\ell \ge 2$ and $xt^{\ell+1} = n-1$.\\
We have the following two subsubcases: (i) there exists $i$ such that $q_i < x$, and (ii) there is no such $i$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.2.1} and defined by
\begin{center}
$0 s = n-2$, $p s = xt^\ell$,\\
$(xt^\ell) s = xt^\ell$ (i), $(xt^\ell) s = n-1$ (ii),\\
$q_i s = p$ for $1\le i\le v$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,12)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(x)(2,0){$x$} \node(xt)(6,0){$xt$} \node[Nframe=n](xdots)(10,0){$\dots$} \node(xtl)(14,0){$xt^\ell$} \node(q1)(18,4){$q_1$} \node[Nframe=n](qdots)(20.5,4){$\dots$} \node(qv)(23,4){$q_v$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawedge(x,xt){} \drawedge(xt,xdots){} \drawedge(xdots,xtl){} \drawedge(xtl,n-1){}
\end{picture}
\begin{picture}(28,12)(0,-1)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(q1')(18,4){$q_1$} \node[Nframe=n](qdots')(20.5,4){$\dots$} \node(qv')(23,4){$q_v$} \node(x')(2,0){$x$} \node(xt')(6,0){$xt$} \node[Nframe=n](xdots')(10,0){$\dots$} \node(xtl')(14,0){$xt^\ell$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=-.2,linecolor=red,dash={.5 .25}{.25}](q1',p'){} \drawedge[curvedepth=-.3,syo=.5,linecolor=red,dash={.5 .25}{.25}](qdots',p'){} \drawedge[curvedepth=-.8,linecolor=red,dash={.5 .25}{.25}](qv',p'){} \drawedge[curvedepth=-3,linecolor=red,dash={.5 .25}{.25}](p',xtl'){} \drawedge(x',xt'){} \drawedge(xt',xdots'){} \drawedge(xdots',xtl'){} \drawloop[ELpos=80,linecolor=red,dash={.1 .1}{.1}](xtl'){(i)} \drawedge[linecolor=red,dash={.1 .1}{.1}](xtl',n-1'){(ii)}
\end{picture}\end{center}
\caption{Subcase~3.2.1.}\label{fig:subcase3.2.1}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p, xt^{\ell-1}\}$ is a colliding pair focused by $s$ to $xt^\ell$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the tree of $xt^\ell$, which is either a fixed point (i) or a state mapped to $n-1$ (ii).
\item[(c)] $s$ does not contain any cycles.
\end{enumerate}
\textit{External injectivity}: Since $s$ does not have any cycles, $s$ is different from the transformations of Case~3.1.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By Lemma~\ref{lem:orbits} the trees from~(b) of $t$ and $\e{t}$ must be the same, so $xt^\ell = \e{x}\e{t}^{\e{\ell}}$. Also, the subsubcase is determined by $(xt^\ell)s$ and thus is the same for both $t$ and $\e{t}$. Consider all colliding pairs focused by $s$ to $xt^\ell$ that do not contain $xt^\ell$. All of them contain $p$, so if there are two or more such pairs, then $\e{p} = p$. Suppose that there is only one such pair $\{p,xt^{\ell-1}\} = \{\e{p},\e{x}\e{t}^{\e{\ell}-1}\}$. Note that $\ell = \e{\ell}$, as this is the length of a longest path in $s$ ending at $xt^{\ell} = \e{x}\e{t}^{\e{\ell}}$. Also, to $p$ only states $q_i$ are mapped, which have in-degree 0. If $\ell > 2$, then $p$ is distinguished from $xt^{\ell-1}$, since to $xt^{\ell-1}$ there is mapped $xt^{\ell-2}$ of in-degree $>0$; hence $p = \e{p}$. Consider $\ell = 2$. Let $U$ be the set of states that are mapped either to $p$ or to $xt^{\ell-1}$; then $\e{U} = U$.
The smallest state in $U$ is either a state $q_i$ or $x$ (by the choice of $x$). If the subsubcase is (i), then the smallest state in $U$ is $q_i$ and so is mapped to $p$, while in subsubcase (ii) it is $x$ mapped to $xt^{\ell-1}$. Hence, the smallest state distinguishes $p$ from $xt^{\ell-1}$, and we have that $p = \e{p}$ and $xt^{\ell-1} = \e{x}\e{t}^{\ell-1}$. Then also $q_i = \e{q_i}$ for all $i$, since these are precisely the states mapped to $p = \e{p}$. Summarizing, we have that $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $(xt^\ell)t = (\e{x}\e{t}^{\ell})\e{t} = n-1$, and $q_i t = q_i \e{t} = n-2$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have that $\e{t}=t$.
\textbf{Subcase~3.2.2}: $\ell=1$, $xt^2 = n-1$, and $xt$ has in-degree at least 2.\\
Let $y$ be the smallest state such that $yt = xt$ and $y \neq x$. Note that $x < y$ and $y$ has in-degree 0. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.2.2} and defined by
\begin{center}
$0 s = n-2$, $p s = y$,\\
$(xt) s = x$, $x s = y$,\\
$q_i s = p$ for all $i$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,12)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(x)(8,0){$x$} \node(xt)(14,0){$xt$} \node(y)(8,4){$y$} \node(q1)(18,4){$q_1$} \node[Nframe=n](qdots)(20.5,4){$\dots$} \node(qv)(23,4){$q_v$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawedge(x,xt){} \drawedge(xt,n-1){} \drawedge(y,xt){}
\end{picture}
\begin{picture}(28,12)(0,-1)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(q1')(18,4){$q_1$} \node[Nframe=n](qdots')(20.5,4){$\dots$} \node(qv')(23,4){$q_v$} \node(x')(8,0){$x$} \node(xt')(14,0){$xt$} \node(y')(8,4){$y$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=-.2,linecolor=red,dash={.5 .25}{.25}](q1',p'){} \drawedge[curvedepth=-.3,syo=.5,linecolor=red,dash={.5 .25}{.25}](qdots',p'){} \drawedge[curvedepth=-.8,linecolor=red,dash={.5 .25}{.25}](qv',p'){} \drawedge(y',xt'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',y'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xt',x'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](x',y'){}
\end{picture}\end{center}
\caption{Subcase~3.2.2.}\label{fig:subcase3.2.2}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,xt\}$ is a colliding pair focused by $st$ to $xt$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the same orbit of a cycle.
\item[(c)] $s$ contains exactly one cycle, namely $(xt,x,y)$.
\end{enumerate}
\textit{External injectivity}: Since all colliding pairs focused by $s$ must belong to the orbit from~(b), and the smallest state in the cycle of the orbit from~(b) is $x$ of in-degree 1, $s$ does not map a colliding pair to it and so is different from the transformations of Case~3.1. Since $s$ has a cycle, it is different from the transformations of Subcase~3.2.1.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. All colliding pairs that are focused have states from the orbit of the cycle from Property~(b), hence $(xt,x,y) = (\e{x}\e{t},\e{x},\e{y})$. Since $x$ and $\e{x}$ are the smallest states in the cycle, we have $x = \e{x}$, $y = \e{y}$, and $xt = \e{x}\e{t}$. Since $y$ has in-degree 0 in $t$, $p$ is the only state outside the cycle that is mapped to $y$ in $s$; hence $p = \e{p}$. Also, all states mapped to $p$ by $s$ are precisely the states $q_i$; hence $q_i = \e{q_i}$ for all $i$. We have that $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $xt = x\e{t}$, $(xt)t = (xt)\e{t} = n-1$, and $q_i t = q_i \e{t} = n-2$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have that $\e{t}=t$.
\textbf{Subcase~3.2.3}: $\ell=1$, $xt^2 = n-1$, and $xt$ has in-degree 1.\\
We split the subcase into the following two subsubcases: (i) there exists $q_1$ ($v \ge 1$) or $p < xt$; (ii) there is no $q_1$ and $p > xt$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.2.3} and defined by
\begin{center}
$0 s = n-2$, $p s = x$,\\
$(xt) s = x$,\\
$x s = x$ (i), $x s = n-1$ (ii),\\
$q_i s = p$ for all $i$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,12)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(x)(8,0){$x$} \node(xt)(14,0){$xt$} \node(q1)(18,4){$q_1$} \node[Nframe=n](qdots)(20.5,4){$\dots$} \node(qv)(23,4){$q_v$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawedge(x,xt){} \drawedge(xt,n-1){}
\end{picture}
\begin{picture}(28,13)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(q1')(18,4){$q_1$} \node[Nframe=n](qdots')(20.5,4){$\dots$} \node(qv')(23,4){$q_v$} \node(x')(8,0){$x$} \node(xt')(14,0){$xt$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=-.2,linecolor=red,dash={.5 .25}{.25}](q1',p'){} \drawedge[curvedepth=-.3,syo=.5,linecolor=red,dash={.5 .25}{.25}](qdots',p'){} \drawedge[curvedepth=-.8,linecolor=red,dash={.5 .25}{.25}](qv',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',x'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xt',x'){} \drawloop[linecolor=red,dash={.1 .1}{.1}](x'){(i)} \drawedge[linecolor=red,dash={.1 .1}{.1},ELside=r,curvedepth=-2](x',n-1'){(ii)}
\end{picture}\end{center}
\caption{Subcase~3.2.3.}\label{fig:subcase3.2.3}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p, xt\}$ is a colliding pair focused by $s$ to $x$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the tree of $x$, which is either a fixed point (i) or a state mapped to $n-1$ (ii).
\item[(c)] $s$ does not contain any cycles.
\end{enumerate}
\textit{External injectivity}: Since $s$ does not have any cycles, it is different from the transformations of Case~3.1 and Subcase~3.2.2.
Let $\e{t}$ be a transformation that fits in Subcase~3.2.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the trees from~(b) of both $t$ and $\e{t}$ must be the same, so $x = \e{x}\e{t}^{\e{\ell}}$. It follows that the subsubcases, which are determined by $xs$, are the same for both $t$ and $\e{t}$. The states mapped to $x$ by $s$, other than $x$ itself, form the pair $\{p,xt\} = \{\e{p},\e{x}\e{t}^{\e{\ell}-1}\}$, where $xt$ has in-degree $0$ and $\e{x}\e{t}^{\e{\ell}-1}$ has in-degree at least 1. If the subsubcase is~(i), then $\e{p}$ has in-degree at least 1, and so both the states of the pair have in-degree at least 1, which contradicts the in-degree of $xt$. If the subsubcase is~(ii), then $p$ has in-degree 0, and so both the states have in-degree 0, which contradicts the in-degree of $\e{x}\e{t}^{\e{\ell}-1}$.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By Lemma~\ref{lem:orbits} we have that $x = \e{x}$, and so also $\{p,xt\} = \{\e{p},x\e{t}\}$. The subsubcase for both $t$ and $\e{t}$ is determined by $xs$ and so must be the same. If the subsubcase is~(i), then $p$ has in-degree $\ge 1$ or it is smaller than $xt$; hence $p = \e{p}$ and $xt = x\e{t}$. If the subsubcase is~(ii), then both $p$ and $xt$ have in-degree $0$ and $p$ is larger than $xt$; hence again $p = \e{p}$ and $xt = x\e{t}$. Also, $q_i = \e{q_i}$ as these are precisely all the states mapped to $p$ by $s$. We have that $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $(xt)t = (xt)\e{t} = n-1$, and $q_i t = q_i \e{t} = n-2$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have that $\e{t}=t$.
\textbf{Subcase~3.2.4}: $xt^{\ell} = xt^{\ell+1}$.\\
Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.2.4} and defined by
\begin{center}
$0 s = n-2$, $p s = xt^\ell$,\\
$(x t^i) s = x t^{i-1}$ for $1\le i\le \ell$,\\
$x s = p$,\\
$q_i s = x$ for $1\le i\le v$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,12)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(x)(2,0){$x$} \node(xt)(6,0){$xt$} \node[Nframe=n](xdots)(10,0){$\dots$} \node(xtl)(14,0){$xt^\ell$} \node(q1)(18,4){$q_1$} \node[Nframe=n](qdots)(20.5,4){$\dots$} \node(qv)(23,4){$q_v$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawedge(x,xt){} \drawedge(xt,xdots){} \drawedge(xdots,xtl){} \drawloop(xtl){}
\end{picture}
\begin{picture}(28,16)(0,-5)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(q1')(18,4){$q_1$} \node[Nframe=n](qdots')(20.5,4){$\dots$} \node(qv')(23,4){$q_v$} \node(x')(2,0){$x$} \node(xt')(6,0){$xt$} \node[Nframe=n](xdots')(10,0){$\dots$} \node(xtl')(14,0){$xt^\ell$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=6.5,sxo=.5,linecolor=red,dash={.5 .25}{.25}](q1',x'){} \drawedge[curvedepth=7,sxo=.5,exo=-.5,linecolor=red,dash={.5 .25}{.25}](qdots',x'){} \drawedge[curvedepth=7.5,sxo=.5,exo=-1,linecolor=red,dash={.5 .25}{.25}](qv',x'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',xtl'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xtl',xdots'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xdots',xt'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](xt',x'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](x',p'){}
\end{picture}\end{center}
\caption{Subcase~3.2.4.}\label{fig:subcase3.2.4}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,xt^\ell\}$ is a colliding pair focused by $st$ to $xt^\ell$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the same orbit of a cycle.
\item[(c)] $s$ contains exactly one cycle, namely $(p,xt^\ell,xt^{\ell-1},\ldots,x)$.
\end{enumerate}
\textit{External injectivity}: Let $\e{t}$ be a transformation that fits in Case~3.1 and results in the same $s$. Then $\e{t}$ must have the cycle $(p,xt^\ell,xt^{\ell-1},\ldots,x)$, since it exists in $s$ and the construction of Case~3.1 does not introduce any new cycles. But then $0t\e{t}t = xt^{\ell}$ and $(xt^{\ell})t\e{t}t = xt^{\ell}$. Since $p$ collides with $xt^{\ell}$, $t$ and $\e{t}$ cannot be both in $T(n)$. Since $s$ has a cycle, it is different from the transformations of Subcase~3.2.1 and Subcase~3.2.3. Now let $\e{t}$ be a transformation that fits in Subcase~3.2.2 and results in the same $s$. Since $s$ contains exactly one cycle, it must be that $\ell=1$ and $(p,xt,x) = (\e{x},\e{y},\e{x}\e{t})$. We have the following three possibilities: If $p = \e{x}$, $xt = \e{y}$, and $x = \e{x}\e{t}$, then $\e{t}$ focuses the colliding pair $\{p,xt\} = \{\e{x},\e{y}\}$; hence $t$ and $\e{t}$ cannot be both in $T(n)$. If $p = \e{y}$, then we have a contradiction, since $p$ has in-degree 1 whereas $\e{y}$ has in-degree 2. Finally, suppose that $p = \e{x}\e{t}$, $xt = \e{x}$, and $x = \e{y}$. Then $x = \e{y}$ must have in-degree $2$, and there is $q_1 = \e{p}$ (and $v = 1$).
But $\{\e{p},\e{x}\e{t}\} = \{q_1,p\}$ is a colliding pair because of $\e{t}$, and it is focused to $n-2$ by $t$; hence $t$ and $\e{t}$ cannot be both in $T(n)$. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By~(c) we have that $\ell = \e{\ell}$ and $(p,xt^\ell,xt^{\ell-1},\ldots,x) = (\e{p},\e{x}\e{t}^{\ell},\e{x}\e{t}^{\ell-1},\ldots,\e{x})$. First suppose that $p = \e{p}$. Then also $x = \e{x}$, $xt^\ell = x\e{t}^{\ell}$, $xt^{\ell-1} = x\e{t}^{\ell-1}$, and so on for the states of the cycle. We have that $q_i = \e{q_i}$ for all $i$. Hence, $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $xt^i = x\e{t}^i$ for all $i$, and $q_i t = q_i\e{t} = n-2$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have that $\e{t}=t$. Now suppose that $p \neq \e{p}$. So $p = \e{x}\e{t}^i$ for some $i$. Note that $p$ collides with all states $xt,\ldots,xt^\ell$, and $\e{p}$ collides with all states $\e{x}\e{t},\ldots,\e{x}\e{t}^{\ell}$. If $\ell \ge 2$, then there exists $\e{x}\e{t}^j$ with $j \ge 1$ that is different from $p$ and collides with $p$. But then $\e{t}^\ell$ focuses both these states to $\e{x}\e{t}^\ell$. Finally consider $\ell = 1$. If $p = \e{x}$ then $\{x,xt\} = \{\e{p},\e{x}\e{t}\}$, which is a colliding pair because of $\e{t}$ that is focused by $t$ to $xt$. On the other hand, if $p = \e{x}\e{t}$, then $xt = \e{x}$, and so $\{p,xt\} = \{\e{x}\e{t},\e{x}\}$ is a colliding pair because of $t$ that is focused by $\e{t}$ to $\e{x}\e{t}$. Hence, $t$ and $\e{t}$ cannot be both in $T(n)$. \textbf{Case~3.3}: $t$ does not fit into any of the previous cases, $k=0$, and there exist at least two fixed points of in-degree 1.\\ Let the two smallest fixed points of in-degree 1 be the states $f_1$ and $f_2$, that is, $$f_1 = \min\{q\in Q \mid q t = q, \forall_{q'\in Q \setminus \{q\}}\ q' t \neq q\},$$ $$f_2 = \min\{q\in Q\setminus\{f_1\} \mid q t = q, \forall_{q'\in Q \setminus \{q\}}\ q' t \neq q\}.$$ Let $s$ be the transformation illustrated in Fig.~\ref{fig:case3.3} and defined by \begin{center} $0 s = n-2$, $f_1 s = f_2$, $f_2 s = f_1$, $p s = f_2$,\\ $q_i s = p$ for $1\le i\le v$,\\ $q s = q t$ for the other states $q\in Q$. 
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,12)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(q1)(18,4){$q_1$} \node[Nframe=n](qdots)(20.5,4){$\dots$} \node(qv)(23,4){$q_v$} \node(f1)(8,0){$f_1$} \node(f2)(14,0){$f_2$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawloop(f1){} \drawloop(f2){}
\end{picture}
\begin{picture}(28,14)(0,-1)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(q1')(18,4){$q_1$} \node[Nframe=n](qdots')(20.5,4){$\dots$} \node(qv')(23,4){$q_v$} \node(f1')(8,0){$f_1$} \node(f2')(14,0){$f_2$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[curvedepth=-.2,linecolor=red,dash={.5 .25}{.25}](q1',p'){} \drawedge[curvedepth=-.3,syo=.5,linecolor=red,dash={.5 .25}{.25}](qdots',p'){} \drawedge[curvedepth=-.8,linecolor=red,dash={.5 .25}{.25}](qv',p'){} \drawedge[curvedepth=1,linecolor=red,dash={.5 .25}{.25}](f1',f2'){} \drawedge[curvedepth=1,linecolor=red,dash={.5 .25}{.25}](f2',f1'){} \drawedge[curvedepth=0,linecolor=red,dash={.5 .25}{.25}](p',f2'){}
\end{picture}\end{center}
\caption{Case~3.3.}\label{fig:case3.3}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,f_1\}$ is a colliding pair focused by $s$ to $f_2$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the same orbit of a cycle.
\item[(c)] $s$ contains exactly one cycle, namely $(f_1,f_2)$.
\end{enumerate}
\textit{External injectivity}: Since $\{p,f_1\}$ is the only colliding pair that is focused by $s$ to a state in a cycle, and $f_2$ is not the minimal state in the cycle, $s$ is different from the transformations of Case~3.1. Since $s$ has a cycle, it is different from the transformations of Subcase~3.2.1 and Subcase~3.2.3. Also, since $s$ has exactly one cycle of length 2, it is different from the transformations of Subcase~3.2.2 and Subcase~3.2.4, which have a cycle of length at least 3.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this case and results in the same $s$; we will show that $\e{t}=t$. From~(c), we have that $(f_1,f_2) = (\e{f_1},\e{f_2})$, and since $f_1$ has in-degree 1 and $f_2$ has in-degree 2 in $s$, we have that $f_1 = \e{f_1}$ and $f_2 = \e{f_2}$. Also $p = \e{p}$, as $p$ is the only state outside the cycle that is mapped to $f_2$. Then $q_i = \e{q_i}$ for all $i$, since these are precisely the states mapped to $p$ in $s$. Hence $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $f_1 t = f_1 \e{t} = f_1$, $f_2 t = f_2 \e{t} = f_2$, and $q_i t = q_i \e{t} = n-2$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Case~3.4}: $t$ does not fit into any of the previous cases and $k=0$.\\
In $t$, there is neither a cycle (covered by Case~3.1) nor a state $x \in Q_M$ such that $xt \not\in \{x,n-1,n-2\}$ (covered by Case~3.2). Hence, because $t \not\in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, there must be a fixed point $f$ of in-degree 1.
Because of Case~3.3, there is exactly one such fixed point. Let $q_1 < \ldots < q_v$ be all the states from $Q_M \setminus \{p,f\}$ such that $q_i t = n-2$. Let $r_1 < \ldots < r_u$ be all the states from $Q_M \setminus \{p,f\}$ such that $r_i t = n-1$. All states $q_i$ and $r_i$ have in-degree 0 (covered by Case~3.2), and they are all the states besides $0,p,f,n-2,n-1$. Because $n \ge 8$, we have that $v+u \ge 3$. We have the following subcases that cover all possibilities for $t$:
\textbf{Subcase~3.4.1}: $v \ge 2$.\\
Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.4.1} and defined by
\begin{center}
$0 s = n-2$, $p s = f$,\\
$q_i s = q_{i+1}$ for $1\le i\le v-1$,\\
$q_v s = q_1$,\\
$r_i s = q_v$ for $1\le i\le u$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,14)(0,-2)
\node[Nframe=n](name)(0,13){\normalsize$t\colon$} \node(0)(2,10){0}\imark(0) \node(p)(14,10){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,10){$n$-$2$}\rmark(n-2) \node(q1)(17,7){$q_1$} \node[Nframe=n](qdots)(20,7){$\dots$} \node(qv)(23,7){$q_v$} \node(r1)(17,3){$r_1$} \node[Nframe=n](rdots)(20,3){$\dots$} \node(ru)(23,3){$r_u$} \node(f)(8,3){$f$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.5](q1,n-2){} \drawedge[curvedepth=.6,sxo=-.5,exo=1.5](qdots,n-2){} \drawedge[curvedepth=0](qv,n-2){} \drawedge[curvedepth=-.5](r1,n-1){} \drawedge[curvedepth=-.6,sxo=-.5,exo=1.5](rdots,n-1){} \drawedge(ru,n-1){} \drawloop(f){}
\end{picture}
\begin{picture}(28,15)(0,-2)
\node[Nframe=n](name)(0,13){\normalsize$s\colon$} \node(0')(2,10){0}\imark(0') \node(p')(14,10){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,10){$n$-$2$}\rmark(n-2') \node(q1')(17,7){$q_1$} \node[Nframe=n,Nh=2,Nw=2,Nmr=1](qdots')(20,7){$\dots$} \node(qv')(23,7){$q_v$} \node(r1')(17,3){$r_1$} \node[Nframe=n](rdots')(20,3){$\dots$} \node(ru')(23,3){$r_u$} \node(f')(8,3){$f$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',f'){} \drawloop(f'){} \drawedge[curvedepth=-1.2,linecolor=red,dash={.5 .25}{.25}](q1',qdots'){} \drawedge[curvedepth=-1.2,linecolor=red,dash={.5 .25}{.25}](qdots',qv'){} \drawedge[curvedepth=-1.2,linecolor=red,dash={.5 .25}{.25}](qv',q1'){} \drawedge[curvedepth=-.8,exo=.5,linecolor=red,dash={.5 .25}{.25}](r1',qv'){} \drawedge[curvedepth=-.5,exo=.5,linecolor=red,dash={.5 .25}{.25}](rdots',qv'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](ru',qv'){}
\end{picture}\end{center}
\caption{Subcase~3.4.1.}\label{fig:subcase3.4.1}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,f\}$ is a colliding pair focused by $s$ to $f$. This is the only colliding pair that is focused by $s$ to a fixed point.
\item[(c)] $s$ contains exactly one cycle, namely $(q_1,\ldots,q_v)$.
\end{enumerate}
\textit{External injectivity}: Observe that all states in the unique cycle have in-degree 1 except possibly $q_v$. Thus, no colliding pair of states is focused to the smallest state $q_1$ in the cycle. This distinguishes $s$ from the transformations of Case~3.1. Since $s$ has a cycle, it is different from the transformations of Subcase~3.2.1 and Subcase~3.2.3.
Also, $s$ is different from the transformations of Subcase~3.2.2, Subcase~3.2.4, and Case~3.3, which do not focus a colliding pair to a fixed point, because the orbits from their Properties~(b) do not have a fixed point.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By~(c), we have that $\e{q_i} = q_i$ for all $i$. Then all the states mapped by $s$ to $q_v$, other than $q_{v-1}$, are precisely the states $r_i$; hence $\e{r_i} = r_i$ for all $i$. By~(a) and since the fixed point is distinguished in the colliding pair, we obtain that $\e{p} = p$ and $\e{f} = f$. We have that $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $q_i t = q_i \e{t} = n-2$ and $r_i t = r_i \e{t} = n-1$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Subcase~3.4.2}: $v = 1$.\\
We have $u \ge 2$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.4.2} and defined by
\begin{center}
$0 s = n-2$, $p s = f$,\\
$q_1 s = f$,\\
$r_i s = p$ for $1\le i\le u$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,14)(0,-2)
\node[Nframe=n](name)(0,13){\normalsize$t\colon$} \node(0)(2,10){0}\imark(0) \node(p)(14,10){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,10){$n$-$2$}\rmark(n-2) \node(q1)(17,7){$q_1$} \node(r1)(17,3){$r_1$} \node[Nframe=n](rdots)(20,3){$\dots$} \node(ru)(23,3){$r_u$} \node(f)(8,3){$f$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=.3](q1,n-2){} \drawedge[curvedepth=-.5](r1,n-1){} \drawedge[curvedepth=-.6,sxo=-.5,exo=1.5](rdots,n-1){} \drawedge[curvedepth=-0](ru,n-1){} \drawloop(f){}
\end{picture}
\begin{picture}(28,15)(0,-2)
\node[Nframe=n](name)(0,13){\normalsize$s\colon$} \node(0')(2,10){0}\imark(0') \node(p')(14,10){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,10){$n$-$2$}\rmark(n-2') \node(q1')(17,7){$q_1$} \node(r1')(17,3){$r_1$} \node[Nframe=n](rdots')(20,3){$\dots$} \node(ru')(23,3){$r_u$} \node(f')(8,3){$f$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',f'){} \drawloop(f'){} \drawedge[curvedepth=-4.5,linecolor=red,dash={.5 .25}{.25}](r1',p'){} \drawedge[curvedepth=-4.5,linecolor=red,dash={.5 .25}{.25}](rdots',p'){} \drawedge[curvedepth=-4.5,eyo=.5,linecolor=red,dash={.5 .25}{.25}](ru',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](q1',f'){}
\end{picture}\end{center}
\caption{Subcase~3.4.2.}\label{fig:subcase3.4.2}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,f\}$ is a colliding pair focused by $s$ to $f$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the same orbit of the fixed point $f$.
\item[(c)] $s$ does not contain any cycles.
\end{enumerate}
\textit{External injectivity}: Since $s$ does not have any cycles, it is different from the transformations of Case~3.1, Subcase~3.2.2, Subcase~3.2.4, Case~3.3, and Subcase~3.4.1. Let $\e{t}$ be a transformation that fits in Subcase~3.2.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so the subsubcase for $\e{t}$ is~(i), and necessarily $f = \e{x}\e{t}^{\e{\ell}}$.
We have that the states $\e{p}$ and $\e{x}\e{t}^{\e{\ell}-1}$ are mapped to $f$ and have in-degree at least 1. This contradicts the fact that $p$ and $q_1$ are the only states mapped to $f$ besides $f$ itself, and that $q_1$ has in-degree 0. Let $\e{t}$ be a transformation that fits in Subcase~3.2.3 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so the subsubcase for $\e{t}$ is~(i), and necessarily $f = \e{x}$. So $\{p,q_1\} = \{\e{p},\e{x}\e{t}\}$, but this is a colliding pair because of $\e{t}$, which is focused to $n-2$ by $t$; hence, $t$ and $\e{t}$ cannot be both present in $T(n)$.
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so we obtain that $f=\e{f}$. So we have that $\{p,q_1\} = \{\e{p},\e{q_1}\}$. Since $q_1$ and $\e{q_1}$ have in-degree 0, and $p$ and $\e{p}$ have in-degree at least 2, we have that $q_1 = \e{q_1}$ and $p = \e{p}$. Then $r_i = \e{r_i}$ for all $i$, as these are precisely the states mapped to $p$. We have that $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, $q_1 t = q_1 \e{t} = n-2$, and $r_i t = r_i \e{t} = n-1$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$.
\textbf{Subcase~3.4.3}: $v = 0$.\\
Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.4.3} and defined by
\begin{center}
$0 s = n-2$, $p s = f$,\\
$r_1 s = p$,\\
$r_i s = f$ for $2\le i\le u$,\\
$q s = q t$ for the other states $q\in Q$.
\end{center}
\begin{figure}[ht]
\unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5}
\begin{center}\begin{picture}(28,14)(0,-2)
\node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(14,7){$p$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(f)(8,0){$f$} \node(r1)(17,3){$r_1$} \node[Nframe=n](rdots)(20,3){$\dots$} \node(ru)(23,3){$r_u$}
\drawedge(0,p){} \drawedge(p,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge[curvedepth=-.5](r1,n-1){} \drawedge[curvedepth=-.6,sxo=-.5,exo=1.5](rdots,n-1){} \drawedge[curvedepth=0](ru,n-1){} \drawloop(f){}
\end{picture}
\begin{picture}(28,12)(0,-1)
\node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(14,7){$p$} \node(n-1')(26,0){$n$-$1$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(f')(8,0){$f$} \node(r1')(17,3){$r_1$} \node[Nframe=n](rdots')(20,3){$\dots$} \node(ru')(23,3){$r_u$}
\drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',f'){} \drawloop(f'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](r1',p'){} \drawedge[curvedepth=2,sxo=.5,eyo=.5,linecolor=red,dash={.5 .25}{.25}](rdots',f'){} \drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](ru',f'){}
\end{picture}\end{center}
\caption{Subcase~3.4.3.}\label{fig:subcase3.4.3}
\end{figure}
We observe the following properties:
\begin{enumerate}
\item[(a)] $\{p,f\}$ is a colliding pair focused by $s$ to $f$.
\item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the same orbit of the fixed point $f$, which has in-degree $u+1 \ge 4$.
\item[(c)] $s$ does not contain any cycles.
\end{enumerate} \textit{External injectivity}: Since $s$ does not have any cycles, it is different from the transformations of Case~3.1, Subcase~3.2.2, Subcase~3.2.4, Case~3.3, and Subcase~3.4.1. Let $\e{t}$ be a transformation that fits in Subcase~3.2.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so the subsubcase for $\e{t}$ must be~(i), and necessarily $f = \e{x}\e{t}^{\e{\ell}}$. We have that the states $\e{p}$ and $\e{x}\e{t}^{\e{\ell}-1}$ are mapped by $s$ to $f$ and have in-degree at least 1. On the other hand, all states mapped to $f$ (except $f$ itself) are $p$ and $r_2,\ldots,r_u$, where the states $r_i$ have in-degree 0, which yields a contradiction. To distinguish $s$ from the transformations of Subcase~3.2.3 and of Subcase~3.4.2, observe that if they focus a colliding pair to a fixed point, then this fixed point has in-degree 3, but $s$ focuses a colliding pair to the fixed point $f$ of in-degree at least 4. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, so we obtain that $f=\e{f}$. We have that $p = \e{p}$, as this is the unique state of in-degree 1 that is mapped to $f$. Then $r_1 = \e{r_1}$, as this is the unique state mapped to $p$. All states of in-degree 0 that are mapped to $f$ are precisely $r_2,\ldots,r_u$; hence $r_i = \e{r_i}$ for all $i$. We have that $0t = 0\e{t} = p$, $pt = p\e{t} = n-2$, and $r_i t = r_i \e{t} = n-1$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $\e{t} = t$. \textbf{Case~3.5}: $k \ge 1$.\\ Let $q_1 < \ldots < q_v$ be all the states from $Q_M \setminus \{pt^k\}$ such that $q_i t = n-2$. We split the case into the following three subcases covering all possibilities for $t$: \textbf{Subcase~3.5.1}: $v = 0$ and $pt^k$ has in-degree 1.\\ Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.5.1} and defined by \begin{center} $0 s = n-2$, $p s = p$,\\ $p t^i s = p t^{i-1}$ for $1\le i\le k$,\\ $q s = qt$ for the other states $q\in Q$.
\end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,11)(0,-1) \node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(8,7){$p$} \node[Nframe=n](pdots)(14,7){$\dots$} \node(pt^k)(20,7){$pt^k$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(n-1)(26,2){$n$-$1$} \drawedge(0,p){} \drawedge(p,pdots){} \drawedge(pdots,pt^k){} \drawedge(pt^k,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \end{picture} \begin{picture}(28,10)(0,0) \node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(8,7){$p$} \node[Nframe=n](pdots')(14,7){$\dots$} \node(pt^k')(20,7){$pt^k$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(n-1')(26,2){$n$-$1$} \drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawloop[loopangle=270,linecolor=red,dash={.5 .25}{.25}](p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pdots',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pt^k',pdots'){} \end{picture}\end{center} \caption{Subcase~3.5.1.}\label{fig:subcase3.5.1} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] Pair $\{p,pt\}$ is a colliding pair focused by $s$ to $p$. \item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the orbit of fixed point $p$, which has in-degree 2. \end{enumerate} \textit{External injectivity}: Since the orbits from Properties~(b) for the transformations of Case~3.1, Subcase~3.2.2, Subcase~3.2.4, and Case~3.3 have cycles, and the orbit from~(b) of this subcase has a fixed point, by Lemma~\ref{lem:orbits} $s$ is different from these transformations. Similarly, the orbits from Properties~(b) for the transformations of Subcase~3.2.1, Subcase~3.2.3, Subcase~3.4.2, and Subcase~3.4.3 have a fixed point of in-degree at least 3 or they are orbits of $n-1$, so by Lemma~\ref{lem:orbits} $s$ is different from these transformations. Let $\e{t}$ be a transformation that fits in Subcase~3.4.1 and results in the same $s$. Since $\{\e{f},\e{p}\}$ is the only colliding pair that is focused to a fixed point, it must be that $p = \e{f}$ and $pt = \e{p}$. States $\e{q_i}$ form a cycle in $s$, and since it is in a different orbit from that from~(b), the cycle must be also present in $t$. Hence, states $\e{q_i}$ collide with $pt = \e{p}$, and, in particular, $\{\e{q_1},\e{p}\}$ is a colliding pair focused to $n-2$ by $\e{t}$, and so $t$ and $\e{t}$ cannot be both present in $T(n)$. \textit{Internal injectivity}: This follows exactly in the same way as in Case~2.2. \textbf{Subcase~3.5.2}: $v = 0$ and $p t^k$ has in-degree at least 2.\\ Let $y$ be the smallest state such that $yt = pt^k$ and $y \neq pt^{k-1}$.\\ Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.5.2} and defined by \begin{center} $0 s = n-2$, $p s = y$,\\ $y s = n-1$,\\ $pt^i s = pt^{i-1}$ for $1\le i\le k$,\\ $q s = q t$ for the other states $q\in Q$. 
\end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,11)(0,-1) \node[Nframe=n](name)(0,9){\normalsize$t\colon$} \node(0)(2,7){0}\imark(0) \node(p)(8,7){$p$} \node[Nframe=n](pdots)(14,7){$\dots$} \node(pt^k)(20,7){$pt^k$} \node(y)(20,2){$y$} \node(n-2)(26,7){$n$-$2$}\rmark(n-2) \node(n-1)(26,2){$n$-$1$} \drawedge(0,p){} \drawedge(p,pdots){} \drawedge(pdots,pt^k){} \drawedge(pt^k,n-2){} \drawedge(y,pt^k){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \end{picture} \begin{picture}(28,11)(0,-1) \node[Nframe=n](name)(0,9){\normalsize$s\colon$} \node(0')(2,7){0}\imark(0') \node(p')(8,7){$p$} \node[Nframe=n](pdots')(14,7){$\dots$} \node(pt^k')(20,7){$pt^k$} \node(y')(20,2){$y$} \node(n-2')(26,7){$n$-$2$}\rmark(n-2') \node(n-1')(26,2){$n$-$1$} \drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](y',n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',y'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pdots',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pt^k',pdots'){} \end{picture}\end{center} \caption{Subcase~3.5.2.}\label{fig:subcase3.5.2} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] Pair $\{p,pt^k\}$ is a colliding pair focused by $st$ to $pt^k$. \item[(b)] All states from $Q_M$ whose mapping is different in $t$ and $s$ belong to the tree of $y$ in $s$, where $y$ is mapped to $n-1$. \end{enumerate} \textit{External injectivity}: Since the orbits from Properties~(b) for the transformations of Case~3.1, Subcase~3.2.2, Subcase~3.2.4, and Case~3.3 have cycles, and the orbit from~(b) of this subcase is the orbit of $n-1$, by Lemma~\ref{lem:orbits} $s$ is different from these transformations. Similarly, the orbits from Properties~(b) for the transformations of Subcase~3.2.1~(i), Subcase~3.2.3~(i), Subcase~3.4.2, Subcase~3.4.3, and Subcase~3.5.1 have a fixed point from $Q_M$, so by Lemma~\ref{lem:orbits} $s$ is different from these transformations. Since the transformations of Subcase~3.4.1 focus a colliding pair to a fixed point, they are also different from $s$. Let $\e{t}$ be a transformation from Subcase~3.2.1~(ii) that results in the same $s$. By Lemma~\ref{lem:orbits}, the trees from Properties~(b) for both $t$ and $\e{t}$ must be the same, and so it must be that $y = \e{x}\e{t}^{\e{\ell}}$. First observe that $p \neq \e{p}$, because otherwise $p$ and $pt=\e{q_i}$ for some $i$ would form a colliding pair because of $t$, which is focused by $\e{t}$ to $n-2$. So $p$ must be another state mapped by $s$ to $y = \e{x}\e{t}^{\e{\ell}}$, and so also by $\e{t}$. It follows that all states $p,pt,\ldots,pt^k$ are mapped by $\e{t}$ in the same way as by $s$. But then $p \e{t} t = p s t = pt^k$ and $(pt^k)\e{t} t = (pt^k)s t = pt^k$, so the colliding pair $\{p,pt^k\}$ is focused by $\e{t}t$, which yields a contradiction. Let $\e{t}$ be a transformation from Subcase~3.2.3~(ii) that results in the same $s$. By Lemma~\ref{lem:orbits}, the trees from Properties~(b) for both $t$ and $\e{t}$ must be the same, and so $\e{x} = y$. But $\e{p}$ and $\e{x}\e{t}$ are the only states mapped to $y$ in $s$, and they both have in-degree 0, whereas $p$ is also mapped to $y$ in $s$ and has in-degree 1, which yields a contradiction. 
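The injectivity arguments of this supercase repeatedly apply powers and compositions of transformations and test whether a pair is focused by the result. The following minimal Python sketch shows these primitives; it is purely illustrative, assuming that a transformation of $Q=\{0,\ldots,n-1\}$ is stored as a tuple $t$ with $t[q]=qt$, and that the focus test simply checks that both states of the pair have the same image, as in the arguments above.
\begin{verbatim}
def apply_power(t, q, k):
    # image of state q under t^k
    for _ in range(k):
        q = t[q]
    return q

def compose(s, t):
    # the transformation "first s, then t", written st
    return tuple(t[q] for q in s)

def focuses(u, p, q):
    # u focuses the pair {p, q}: both states share one image
    return u[p] == u[q]

# e.g., the check that {p, p t^k} is focused by s t:
#   focuses(compose(s, t), p, apply_power(t, p, k))
\end{verbatim}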
\textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By Lemma~\ref{lem:orbits}, the trees from Property~(b) must be the same, so $y = \e{y}$. Since in $s$ all the states besides $p$ that are mapped to $y$ are also mapped to $y$ in $t$, it follows that $p \e{t} = y$ and $\e{p} t = y$. Note that for $i$ with $0 \le i \le \min\{k,\e{k}\}$, the distance in $s$ from $p t^i$ and from $\e{p} \e{t}^i$ to $y$ is $i+1$. Hence, if $i \neq j$ then $p t^i \neq \e{p} \e{t}^j$. \textbf{Subcase~3.5.3}: $v \ge 1$.\\ For all $i \in \{1,\ldots,v\}$ we define $c_i$ to be the largest distance in $t$ from a state $q \in Q$ to $q_i$, that is, $$c_i = \max\{d \in \mathbb{N} \mid \exists q\in Q\text{ such that }q t^d = q_i\}.$$ Let $c = \max\{c_i\}$. Notice that $c \le k$. Define $$x = \min\{q\in Q \mid q t^c = q_i\text{ for some }i\},$$ that is, $x$ is the smallest state among the furthest states from some $q_i$. Let $q_m$ be the first state $q_i$ on the path from $x$. Notice that if all $q_i$ have in-degree 0, then $c = 0$ and $x = q_m = q_1$. Let $s$ be the transformation illustrated in Fig.~\ref{fig:subcase3.5.3} and defined by \begin{center} $0 s = n-2$, $p s = x$,\\ $p t^i s = p t^{i-1}$ for $1\le i \le k$,\\ $q_i s = q_{i+1}$ for $1\le i\le v-1$,\\ $q_v s = q_1$,\\ $q s = q t$, for the other states $q\in Q$. \end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,15)(0,-1) \node[Nframe=n](name)(0,13){\normalsize$t\colon$} \node(0)(2,11){0}\imark(0) \node(p)(8,11){$p$} \node[Nframe=n](pdots)(14,11){$\dots$} \node(pt^k)(20,11){$pt^k$} \node(q1)(14,6){$q_1$} \node[Nframe=n](qdots1)(16.5,6){$\dots$} \node(qi)(19,6){$q_m$} \node[Nframe=n](qdots2)(21.5,6){$\dots$} \node(qv)(24,6){$q_v$} \node(x)(8,2){$x$} \node[Nframe=n](xdots)(14,2){$\dots$} \node(n-2)(26,11){$n$-$2$}\rmark(n-2) \node(n-1)(26,2){$n$-$1$} \drawedge(0,p){} \drawedge(p,pdots){} \drawedge(pdots,pt^k){} \drawedge(pt^k,n-2){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(x,xdots){} \drawedge[curvedepth=-2,exo=.2](xdots,qi){} \drawedge[curvedepth=.6](q1,n-2){} \drawedge[curvedepth=.3,sxo=-1](qdots1,n-2){} \drawedge[curvedepth=.2,sxo=-.5](qi,n-2){} \drawedge[curvedepth=.1](qdots2,n-2){} \drawedge[curvedepth=0](qv,n-2){} \end{picture} \begin{picture}(28,14)(0,0) \node[Nframe=n](name)(0,14){\normalsize$s\colon$} \node(0')(2,11){0}\imark(0') \node(p')(8,11){$p$} \node[Nframe=n](pdots')(14,11){$\dots$} \node(pt^k')(20,11){$pt^k$} \node(q1')(14,6){$q_1$} \node[Nframe=n,Nh=2,Nw=2,Nmr=1](qdots1')(16.5,6){$\dots$} \node(qi')(19,6){$q_m$} \node[Nframe=n,Nh=2,Nw=2,Nmr=1](qdots2')(21.5,6){$\dots$} \node(qv')(24,6){$q_v$} \node[Nframe=n](xdots')(14,2){$\dots$} \node(x')(8,2){$x$} \node(n-2')(26,11){$n$-$2$}\rmark(n-2') \node(n-1')(26,2){$n$-$1$} \drawedge[curvedepth=3,linecolor=red,dash={.5 .25}{.25}](0',n-2'){} \drawedge(n-2',n-1'){} \drawloop[loopangle=270](n-1'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pdots',p'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](pt^k',pdots'){} \drawedge[linecolor=red,dash={.5 .25}{.25}](p',x'){} \drawedge(x',xdots'){} \drawedge[curvedepth=-2,exo=.2](xdots',qi'){} \drawedge[curvedepth=-1.5,linecolor=red,dash={.5 .25}{.25}](q1',qdots1'){} \drawedge[curvedepth=-1.5,linecolor=red,dash={.5 .25}{.25}](qdots1',qi'){} \drawedge[curvedepth=-1.5,linecolor=red,dash={.5 .25}{.25}](qi',qdots2'){}
\drawedge[curvedepth=-1.5,linecolor=red,dash={.5 .25}{.25}](qdots2',qv'){} \drawedge[curvedepth=-2,linecolor=red,dash={.5 .25}{.25}](qv',q1'){} \end{picture}\end{center} \caption{Subcase~3.5.3.}\label{fig:subcase3.5.3} \end{figure} We observe the following properties: \begin{enumerate} \item[(a)] $\{p, pt\}$ is a colliding pair focused by $s^{c+2}t$ to $n-2$. \item[(b)] All states from $Q_M$ whose mapping is different in $s$ and $t$ belong to the same orbit of a cycle (if $v \ge 2$) or a fixed point (if $v=1$). \item[(d)] Every longest path in $s$ from some state not in a cycle to the first reachable $q_i$ contains both $p$ and $x$, and this $q_i$ is $q_m$. \textit{Proof}: If such a path did not contain $x$, then it would not contain $p,\ldots,pt^k$, and so it would also exist in $t$. But then, by the choice of $x$, its length could be at most $c$, whereas the path from $pt^k$ to $q_m$ is of length $k+c$. Thus, every such path contains $x$, and so $p$ (since $x$ has in-degree 1), and ends in $q_m$. \end{enumerate} \textit{External injectivity}: Let $\e{t}$ be a transformation that fits in Case~3.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same. Let $y$ be the state mapped to $q_m$ in the path in $s$ from $p$ to $q_m$. If $\e{p} \neq y$, then by the construction of $s$ in Case~3.1, all states in the tree of $y$ are mapped in $s$ in the same way as in $\e{t}$. Hence, $\{p,pt\}$ is focused by $\e{t}^{c+2}t$ to $n-2$, which yields a contradiction. If $\e{p} = y$, then $p=\e{p}$, since to $\e{p}$ only the states $\e{q_i}$ are mapped, which have in-degree 0, and $p$ has in-degree 1. Hence $k=1$, $p=y=\e{p}$, and $pt = \e{q_i}$ for some $i$. However, $\{p,pt\} = \{\e{p},\e{q_i}\}$ is a colliding pair because of $t$ that is focused by $\e{t}$ to $n-2$, which yields a contradiction. Let $\e{t}$ be a transformation that fits in Subcase~3.2.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily the subsubcase for $\e{t}$ must be~(i) and $v = 1$. Since $\e{x}\e{t}^{\e{\ell}}$ has in-degree $\ge 3$ in $s$, it cannot be $x$, because $x$ can have in-degree at most 2. Thus $\e{p}$ and $\e{x}\e{t}^{\e{\ell}-1}$ are mapped in $s$ in the same way as in $t$. But $\{\e{p},\e{x}\e{t}^{\e{\ell}-1}\}$ is a colliding pair because of $\e{t}$, which is focused by $t$ to $\e{x}\e{t}^{\e{\ell}}$, which yields a contradiction. Let $\e{t}$ be a transformation that fits in Subcase~3.2.2 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily $v=3$ and $\{\e{x},\e{y},\e{x}\e{t}\} = \{q_1,q_2,q_3\}$. Observe that among the states mapped by $s$ to a state in the cycle $(\e{x},\e{y},\e{x}\e{t})$, only $\e{p}$ can have in-degree larger than 0. It follows that $\e{p} = p$, and we obtain a contradiction exactly as for Case~3.1. Let $\e{t}$ be a transformation that fits in Subcase~3.2.3 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily the subsubcase for $\e{t}$ must be~(i) and $v=1$. Since $\e{x}\e{t}$ has in-degree 1 in $\e{t}$, it has in-degree 0 in $s$, so it cannot be $p$. Therefore $p = \e{p}$, but then we obtain a contradiction exactly as for Case~3.1. Let $\e{t}$ be a transformation that fits in Subcase~3.2.4 and results in the same $s$.
By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily $(\e{p},\e{x}\e{t}^{\e{\ell}},\ldots,\e{x})$ is the cycle formed by all states $q_i$. But $\{\e{p},\e{x}\e{t}^{\e{\ell}}\}$ is a colliding pair because of $\e{t}$, which is focused by $t$ to $n-2$; this yields a contradiction. Let $\e{t}$ be a transformation that fits in Case~3.3 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily $v=2$ and $(f_1,f_2) = (q_1,q_2)$. Then $p = \e{p}$, and again we obtain a contradiction exactly as for Case~3.1. Let $\e{t}$ be a transformation that fits in Subcase~3.4.1 and results in the same $s$. In $s$ there is exactly one orbit of a fixed point from $Q_M$ and exactly one orbit of a cycle. But neither of them can be the orbit from~(b) of this subcase, since $\e{p}$ and the states $\e{r_i}$ have in-degree 0 in $s$, so they cannot be $p$; this yields a contradiction. Let $\e{t}$ be a transformation that fits in either Subcase~3.4.2 or Subcase~3.4.3 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily $v=1$ and $\e{f} = q_1 = q_m$. Then $p = \e{p}$, as $\e{p}$ is the only state with non-zero in-degree in $s$ that is mapped to $\e{f}$. So also $x = \e{f}$. But there is another state mapped by $s$ to $p$ ($\e{q_1}$ or $\e{r_2}$, depending on the subcase), and it is mapped to $x$ also by $t$. However, this contradicts the fact that $x$ has in-degree 0 in $t$. Let $\e{t}$ be a transformation that fits in Subcase~3.5.1 and results in the same $s$. By Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ are the same, so necessarily $v=1$ and $\e{p} = q_1 = q_m$. Consider the following path in $s$, which contains all the states from $Q_M$ that are mapped differently in $t$ and $s$: $$p t^k \stackrel{s}{\rightarrow} p t^{k-1} \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} p \stackrel{s}{\rightarrow} x \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} q_1.$$ Consider the second path in $s$, which contains all the states from $Q_M$ that are mapped differently in $\e{t}$ and $s$: $$q_1 \e{t}^{\e{k}} \stackrel{s}{\rightarrow} q_1 \e{t}^{\e{k}-1} \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} q_1.$$ Let $y$ be the first common state in these paths; $y$ exists since both paths end up in $q_1$. Note that $\e{t}$ reverses the second path. We consider all possibilities for $y$, depending on where it occurs in the first chain: \begin{itemize} \item $y = p t^k$. Then $y = q_1 \e{t}^j$ for some $j \ge 1$, so $\{y,\e{p}\} = \{pt^k,q_1\}$ is a colliding pair because of $\e{t}$, which is focused by $t$ to $n-2$. \item $y = p t^h$ for $1 \le h \le k-1$. Then $(p t^h) s = p t^{h-1}$, so $(p t^{h-1})\e{t} = p t^h$, since $pt^{h-1}$ is in the second path and $\e{t}$ reverses it. Also, $(p t^{h+1})\e{t} = (p t^{h+1})s = p t^h$, since $p t^{h+1}$ does not belong to the second path. But then $\{p t^{h-1},p t^{h+1}\}$ is a colliding pair because of $t$, which is focused by $\e{t}$ to $pt^h$. \item $y = p$. Since in $s$ only state $pt$ is mapped to $p$ and $p\e{t} \neq pt$, it must be that $p\e{t} = n-2$, as otherwise $(p\e{t})s = p$. Therefore $p = q_1 \e{t}^{\e{k}}$. But $q_1 \e{t}^{\e{k}}$ has in-degree 1 in $\e{t}$ from the conditions of Subcase~3.5.1, so it has in-degree 0 in $s$, which contradicts the in-degree 1 of $p$ in $s$.
\item $y$ is a state in the path in $s$ from $x$ to $q_1$. Then $q_1 \e{t}^j = y$ for some $j \le c$. Recall that $c \le k$, so $j \le k$. Since $y \not\in \{p,pt,\ldots,pt^k\}$, the distance in $s$ from $pt^k$ to $y$ is at least $k+1 \ge j+1$. It follows that there is a state $z$ from the first chain such that $z s^{j+1} = z \e{t}^{j+1} = y$. However, we also have that $0\e{t}^{j+1} = q_1 \e{t}^j = y$, hence $\e{t}$ cannot be in $T(n)$. \end{itemize} We obtained a contradiction in every case, so $t$ and $\e{t}$ cannot be both in $T(n)$. Let $\e{t}$ be a transformation that fits in Subcase~3.5.2 and results in the same $s$. However, by Lemma~\ref{lem:orbits}, the orbits from Properties~(b) for both $t$ and $\e{t}$ must be the same, but for $\e{t}$ this is an orbit of $n-1$, which yields a contradiction. \textit{Internal injectivity}: Let $\e{t}$ be any transformation that fits in this subcase and results in the same $s$; we will show that $\e{t}=t$. By Lemma~\ref{lem:orbits}, the orbits from~(b) must be the same for both $t$ and $\e{t}$, hence $v = \e{v}$ and the sets of $q_i$ states are the same. By~(d), both $p$ and $\e{p}$ are in every longest path to the first reachable $q_i$, so $q_m = \e{q_m}$. Without loss of generality, state $\e{p}$ occurs not later than $p$, that is, we have $\e{p} s^j = p$ for some $j \ge 0$. Since the path from $\e{x}$ to $q_m$ is the same in both $s$ and $\e{t}$, we have that $\e{x} \e{t}^i = \e{x} s^i$ for all $i \ge 0$. Consider the following path $P$ in $s$: $$P = pt^k \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} p \stackrel{s}{\rightarrow} x \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} q_m.$$ First suppose that $P$ does not contain $\e{p}$. Then also no state $\e{p} \e{t}^i$ for $1 \le i \le \e{k}$ would be in this path: let $\e{p} \e{t}^i$ be such a state with the smallest $i$; then $(\e{p} \e{t}^i) s = \e{p} \e{t}^{i-1}$ would also be in this path, which is a contradiction. Hence, by the construction of $s$, this path is also present in $\e{t}$. By the choice of $\e{x}$, the distance in $s$ from $\e{x}$ to $q_m$ is not smaller than the length of this path. So we have $\e{c} \ge k+1+c$, which yields $\e{k} > k$ (because $\e{k} \ge \e{c}$). Now observe that since in $s$ state $p$ is reachable from $\e{p}$, we have the following path in $s$: $$\e{p} \e{t}^{\e{k}} \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} \e{p} \stackrel{s}{\rightarrow} \dots \stackrel{s}{\rightarrow} pt^i,$$ where $i$ is the smallest possible. Then, by the construction of $s$, we have the following path in $t$: $$\e{p} \e{t}^{\e{k}} \stackrel{t}{\rightarrow} \dots \stackrel{t}{\rightarrow} \e{p} \stackrel{t}{\rightarrow} \dots \stackrel{t}{\rightarrow} pt^i \stackrel{t}{\rightarrow} \dots \stackrel{t}{\rightarrow} pt^k.$$ This path has length at least $\e{k}+1 > k+1$. Hence, there exists a state $y \neq p$ in this path such that $y t^{k+1} = p t^k$. This means that $\{p,yt\}$ is a colliding pair because of $t$, which is focused by $t^k$ to $p t^k$. There remains the case where $P$ contains $\e{p}$. Since $\e{p}$ must occur before $p$ in $P$, we have $p t^h = \e{p}$ for some $h \ge 0$. We claim that $p t^{h+i} = \e{p} \e{t}^{i}$ for all $i \ge 0$, which also implies that $k = h+\e{k}$. We use induction on $i$: This holds for $i = 0$, and also for $i=1$, because $\e{k} \ge 1$ and the in-degree of $\e{p}$ is 1 in $s$. For $i \ge 2$, assume that $p t^{h+j} = \e{p} \e{t}^{j}$ for all $j = 0,\ldots,i-1$. Suppose for a contradiction that $p t^{h+i} \neq \e{p} \e{t}^i$.
If $\e{p} \e{t}^i \neq n-2$, then $(\e{p} \e{t}^i)t = (\e{p} \e{t}^i)s = \e{p} \e{t}^{i-1} = p t^{h+i-1}$, because in $s$, among the states mapped to $p t^{h+i-1}$, only $p t^{h+i}$ is mapped differently than in $t$. Then, however, $\{\e{p} \e{t}^i,\e{p} \e{t}^{i-2}\}$ is a colliding pair because of $\e{t}$ that is focused by $t$ to $\e{p} \e{t}^{i-1}$. If $\e{p} \e{t}^i = n-2$, then, dually, $(p t^{h+i}) \e{t} = (p t^{h+i}) s = p t^{h+i-1} = \e{p} \e{t}^{i-1}$, because in $s$, among the states mapped to $\e{p} \e{t}^{i-1}$, only $\e{p} \e{t}^i$ is mapped differently than in $\e{t}$. Then, however, $\{p t^{h+i},p t^{h+i-2}\}$ is a colliding pair because of $t$ that is focused by $\e{t}$ to $p t^{h+i-1}$. Hence, the claim follows. Suppose that $h \ge 1$. Since the path in $s$ from $p$ to $q_m$ occurs also in $\e{t}$ and $t$, and is of length $c+1$, we have that $p \e{t}^{c+2} = n-2$. Note that $\e{k} \ge \e{c} = c+h \ge c+1$. So there exists a state $\e{p} \e{t}^{\e{k}-c-1} = p t^{h+\e{k}-c-1} \neq p$. But this state collides with $p$ because of $t$, and the pair $\{p,\e{p} \e{t}^{\e{k}-c-1}\}$ is focused by $\e{t}^{c+2}$ to $n-2$. Finally, if $h = 0$, then $0 t^i = 0 \e{t}^i$ for all $i \ge 0$, and $q_i t = q_i \e{t} = n-2$ for all $i$. Since the other transitions in $s$ are defined exactly as in $t$ and $\e{t}$, we have $t = \e{t}$. \end{proof} \section{Uniqueness of maximal semigroups} Here we show that $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ for $n \ge 6$ and $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ for $n \in \{3,4,5\}$ (whereas $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)=\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ for $n \in \{3,4\}$) have not only the maximal sizes, but are also the unique largest semigroups up to renaming the states in a minimal DFA ${\mathcal D}_n = (Q,\Sigma,\delta,0,\{n-2\})$ of a bifix-free language. \begin{theorem}\label{thm:uniqueness} If $n \ge 8$, and the transition semigroup $T(n)$ of a minimal DFA ${\mathcal D}_n$ of a bifix-free language has at least one colliding pair, then $$|T(n)| < |\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)| = (n-1)^{n-3} + (n-2)^{n-3} + (n-3)2^{n-3}.$$ \end{theorem} \begin{proof} Let $\varphi$ be the injective function from the proof of Theorem~\ref{thm:bifix-free_upper_bound}. Assume that there is a colliding pair $\{p_1,p_2\}$ with $p_1,p_2 \in Q_M$. Since $n \ge 8$, there must be at least three states $r_1,r_2,r_3 \in Q_M \setminus \{p_1,p_2\}$. Let $s \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ be the transformation illustrated in Fig.~\ref{fig:uniqueness} and defined by: \begin{center} $0s = n-1$, $p_1 s = p_2$, $r_1 s = p_2$, $r_2 s = r_3$, $r_3 s = r_2$,\\ $q s = q$, for the other states $q \in Q$. \end{center} \begin{figure}[ht] \unitlength 8pt\scriptsize \gasset{Nh=2.5,Nw=2.5,Nmr=1.25,ELdist=0.3,loopdiam=1.5} \begin{center}\begin{picture}(28,8)(0,-2) \node(0)(2,0){0}\imark(0) \node(p1)(6,4){$p_1$} \node(p2)(10,0){$p_2$} \node(r1)(14,4){$r_1$} \node(r2)(18,4){$r_2$} \node(r3)(22,4){$r_3$} \node(n-1)(26,0){$n$-$1$} \node(n-2)(26,4){$n$-$2$}\rmark(n-2) \drawedge[curvedepth=-3](0,n-1){} \drawedge(n-2,n-1){} \drawloop[loopangle=270](n-1){} \drawedge(p1,p2){} \drawloop(p2){} \drawedge(r1,p2){} \drawedge[curvedepth=1](r2,r3){} \drawedge[curvedepth=1](r3,r2){} \end{picture}\end{center} \caption{The transformation $s$ in the proof of Theorem~\ref{thm:uniqueness}.}\label{fig:uniqueness} \end{figure} Since $\{p_1,p_2\}$ is focused by $s$ to $p_2$, $s$ is different from the transformations of Supercase~1.
Since $0s = n-1$, it is also different from the transformations of Supercase~3. To see that it is different from all transformations of Supercase~2, notice that only the transformations of Case~2.1, Case~2.3, Subcase~2.4.2, Subcase~2.5.1, and Subcase~2.5.2 have a cycle. The transformations of Case~2.1, Case~2.3, and Subcase~2.4.2 have a cycle with a state with in-degree at least 2, whereas the single cycle $(r_2,r_3)$ in $s$ has both states of in-degree 1. In the transformations of Subcase~2.5.1 and Subcase~2.5.2 there is only one fixed point from $Q_M$, and it has in-degree 2, whereas the single fixed point $p_2$ in $s$ has in-degree 3. Thus, since $\varphi$ is injective, $\varphi(T(n)) \subseteq \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, $s \in \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, and $s \not\in \varphi(T(n))$, it follows that $\varphi(T(n)) \subsetneq \mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, so $|T(n)| < |\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)|$. \end{proof} \begin{corollary} For $n \ge 8$, the transition semigroup $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ is the unique largest transition semigroup of a minimal DFA of a bifix-free language. \end{corollary} \begin{proof} From Theorem~\ref{thm:uniqueness}, a transition semigroup that has a colliding pair cannot be largest. From Proposition~\ref{pro:Wbf_unique}, $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ is the unique maximal transition semigroup that does not have colliding pairs of states. \end{proof} The following theorem solves the remaining cases of small semigroups: \begin{theorem} For $n \in \{6,7\}$, the largest transition semigroup of minimal DFAs of bifix-free languages is $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$ and it is unique. For $n = 5$, the largest transition semigroup of minimal DFAs of bifix-free languages is $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ and it is unique. For $n \in \{3,4\}$, $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)=\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ is the unique largest transition semigroup of minimal DFAs of bifix-free languages. \end{theorem} \begin{proof} We have verified this with the help of computation, based on the idea of conflicting pairs of transformations from~\cite[Theorem~20]{BLY12}. We say that two transformations $t_1,t_2 \in \mathbf{B}_{\mathrm{bf}}(n)$ \emph{conflict} if they cannot be both present in the transition semigroup of a minimal DFA ${\mathcal D}$ of a bifix-free language, or they imply that all pairs of states from $Q_M$ are either colliding or focused. In the latter case, by Proposition~\ref{pro:Vbf_unique} and Proposition~\ref{pro:Wbf_unique} we know that a transition semigroup containing these transformations must be a subsemigroup of $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ or $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, respectively. Hence, we know that two conflicting transformations cannot be present in a transition semigroup of size at least $\max\{|\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)|,|\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)|\}$ that is different from $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ and $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. Given a set of transformations $B$, the \emph{graph of conflicts} is the graph $(B,E)$, where there is an edge $(t_1,t_2) \in E$ if and only if $t_1$ conflicts with $t_2$. Given an $n$, our algorithm is as follows: We keep a subset $B_i \subseteq \mathbf{B}_{\mathrm{bf}}(n)$ of transformations that can potentially be present in a largest transition semigroup.
Starting with $B_0=\mathbf{B}_{\mathrm{bf}}(n)$, we iteratively compute $B_{i+1} \subset B_i$, where $B_{i+1}$ is obtained from $B_i$ by removing some transformations. This is done for $i=0,1,\ldots$ until we obtain $|B_{i+1}| = 0$. If $B_{i+1} = B_i$ then the algorithm fails. Given $B_i$, we compute $B_{i+1}$ by checking every transformation $t \in B_i$ and estimating how many pairwise non-conflicting transformations we can add to the set $\{t\}$. Let $B' \subseteq B_i \setminus \{t\}$ be the set of all transformations that do not conflict with $t$. The maximal number of pairwise non-conflicting transformations in $B'$ is the size of a largest independent set in the graph of conflicts of $B'$. We only compute an upper bound for it, since the problem is computationally hard. Let $M$ be a maximal matching in the graph of conflicts of $B'$; this can be computed by a simple greedy algorithm in $O(|B'|^2)$ time. Then $|B'|-|M|$ is an upper bound for the size of a largest independent set in $B'$, and so $1+|B'|-|M|$ is an upper bound for the cardinality of a maximal transition semigroup containing $t$ that is different from $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ and $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$. If this bound is smaller than $\max\{|\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)|,|\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)|\}$, then we do not take $t$ into $B_{i+1}$; otherwise we keep $t$. When $|B_i| = 0$, all transformations are rejected, which means that there are no transformations that can be present in a transition semigroup of size at least $\max\{|\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)|,|\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)|\}$ that is different from $\mathbf{W}^{\le 5}_{\mathrm{bf}}(n)$ and $\mathbf{W}^{\ge 6}_{\mathrm{bf}}(n)$, so there are no such semigroups. For $n=7$, two iterations were sufficient, and we obtained $|B_0| = 3653$, $|B_1|=1176$, and $|B_2|=0$; the computation took less than one minute. \end{proof} Since the largest transition semigroups are unique, from Propositions~\ref{pro:Wbf_alphabet_lower_bound} and~\ref{pro:Vbf_alphabet_lower_bound} we obtain the sizes of the alphabet required to meet the bound for the syntactic complexity. \begin{corollary} To meet the bound for the syntactic complexity of bifix-free languages, $(n-2)^{n-3} + (n-3)2^{n-3} - 1$ letters are required and sufficient for $n \ge 6$, and $(n-2)!$ letters are required and sufficient for $n \in \{3,4,5\}$. \end{corollary} \section{Conclusions} We have solved the problem of syntactic complexity of bifix-free languages and identified the largest semigroups for every number of states $n$. In the main theorem, we used the method of injective function (cf.~\cite{BrSz14a,BrSz15SyntacticComplexityOfSuffixFree}) with new techniques and tricks for ensuring injectivity (in particular, Lemma~\ref{lem:orbits} and the constructions in Supercase~3). This stands as a universal method for solving similar problems concerning maximality of semigroups. Our proof required an extensive analysis of 23 (sub)cases and much more complicated injectivity arguments than those for suffix-free (12 cases), left ideals (5 subcases) and two-sided ideals (8 subcases). The difficulty of applying the method grows quickly when the characterization of the class of languages gets more involved.
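To make the computational part of the above verification concrete, the matching-based bound from the last proof can be sketched as follows. This is an illustrative sketch only: the predicate \texttt{conflicts} (the conflict test on pairs of transformations) is assumed to be given, and the names are ours, not those of the actual implementation.
\begin{verbatim}
def independent_set_bound(Bp, conflicts):
    # Greedy maximal matching M in the graph of conflicts of Bp.
    # Any set of pairwise non-conflicting transformations contains
    # at most one endpoint of each edge of M, so its size is at
    # most len(Bp) - len(M).  Runs in O(|Bp|^2) time.
    matched = set()
    matching_size = 0
    for i in range(len(Bp)):
        if i in matched:
            continue
        for j in range(i + 1, len(Bp)):
            if j not in matched and conflicts(Bp[i], Bp[j]):
                matched.update((i, j))
                matching_size += 1
                break
    return len(Bp) - matching_size
\end{verbatim}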
It may be surprising that we need a witness with $(n-2)^{n-3} + (n-3)2^{n-3} - 1$ letters (for $n \ge 6$) to meet the bound for the syntactic complexity of bifix-free languages, whereas in the case of prefix- and suffix-free languages only $n+1$ and five letters suffice, respectively (see \cite{BLY12,BrSz15SyntacticComplexityOfSuffixFree}). Finally, our results enabled establishing the existence of most complex bifix-free languages \cite{FS2017ComplexityBifixFree}. \bibliographystyle{plain} \providecommand{\noopsort}[1]{}
1,941,325,221,196
arxiv
\section{Introduction} \IEEEPARstart{T}{he} performance of video compression has been continuously improved with the development from H.264/AVC\cite{h264}, H.265/HEVC\cite{h265} to H.266/VVC\cite{h266}. These standards share a similar hybrid video coding framework, which adopts prediction \cite{lainema2012intra,lin2013motion}, transformation \cite{nguyen2013transform}, quantization \cite{crave2010robust}, and context adaptive binary arithmetic coding (CABAC)\cite{marpe2003context}. Owing to modules like quantization and flexible partitioning, some unavoidable artifacts are produced that degrade video quality, such as the blocking effect, the Gibbs effect, and ringing. To compensate for those artifacts, many advanced filtering tools have been designed, for instance, de-blocking (DB \cite{norkin2012hevc}), sample adaptive offset (SAO \cite{fu2012sample}), and adaptive loop filter (ALF \cite{tsai2013adaptive}). These tools reduce the artifacts effectively with acceptable complexity. In the past decades, learning-based methods have made great progress in both low-level and high-level computer vision tasks\cite{duan2019centernet,zhao2019object,liu2019recent,liu2019auto,hu2019meta,soh2019natural}, such as object detection\cite{duan2019centernet,zhao2019object}, semantic image segmentation\cite{liu2019recent,liu2019auto}, and super resolution\cite{hu2019meta,soh2019natural}. By virtue of their powerful non-linear capability, learning-based tools have also been utilized to replace existing modules in video coding and have shown great potential, for instance, in intra prediction\cite{li2018fully,hu2019progressive,sun2020enhanced}, inter prediction\cite{liu2018one,zhao2019enhanced}, and entropy coding\cite{song2017neural,ma2019neural}. Learning-based models, especially CNNs, have achieved excellent performance for the in-loop filter of video coding \cite{IFCNN,VRCNN,dai2018cnn,liu2019dual,MMNN,RHCNN,SJNN,RSNN,jia2019content}. Dai \textit{et al}. \cite{VRCNN,dai2018cnn} proposed VR-CNN, which adopts a variable filter size technique to obtain different receptive fields within a single layer and achieves excellent performance with relatively low complexity. Zhang \textit{et al.} \cite{RHCNN} proposed a 13-layer RHCNN for both intra and inter frames. The relatively deep network has a strong mapping capability to learn the difference between the original and the reconstructed inter frames. To further adapt to the image content, Jia \textit{et al.} \cite{jia2019content} designed a multi-model filtering mechanism and proposed a content-aware CNN with a discriminative network. This method uses the discriminative network to select the most suitable deep learning model for each region. Most of the learning-based filters can achieve considerable BD-rate \cite{bdrate} savings over the H.265/HEVC anchor. However, real-world applications often require lightweight models. High memory usage and computing resource consumption make it difficult to apply complex models to various hardware platforms. Therefore, designing a light network is essential for popularizing learning-based in-loop filters. Considering this, model compression methods that reduce the model complexity while maintaining performance are needed.
In recent years, some famous methods have been proposed, including lightweight layers\cite{sifre2014rigid,howard2017mobilenets}, knowledge transfer \cite{Hinton2015Distilling,zagoruyko2016paying,huang2017like}, low-bit quantization\cite{yang2019quantization,nagel2019data}, and network pruning\cite{molchanov2019importance,zhao2019variational}. DSC \cite{sifre2014rigid,howard2017mobilenets} is one of the best-known lightweight layers. It preserves the essential features of standard convolution while greatly reducing the complexity by using grouped convolution\cite{sifre2014rigid}. In this paper, we build our learning-based filter with DSC instead of the standard convolution, and knowledge transfer is used to aid the initialization of the trainable parameters without increasing the complexity. Besides the learning-based filter itself, we also need a lightweight mechanism for the filtering of inter frames. Some inter blocks fully inherit the texture from their reference blocks and have almost no residuals. If the learning-based filter is used for each frame, those blocks will be repeatedly filtered, which causes over-smoothing in inter blocks\cite{jia2019content, RRCNN}. One solution to this problem is training a specific filter for inter frames \cite{RHCNN}. However, the coding of intra and inter frames shares some of the same modules in H.265/HEVC, like transformation, quantization, and block partitioning. This means the learning-based filter trained with intra frames can also be used for inter frames to some extent. Considering this, previous works \cite{dai2018cnn, jia2019content, STResNet,jvetCtu, RRCNN, SDLA} designed a syntax element control flag to indicate whether an inter CTU uses the learning-based filter or not, which amounts to a selective filtering strategy for each CTU. We compare this strategy with frame-level control in Section \ref{analysisCtuFrame} and find that CTU-level control may lead to artificial boundaries between neighboring CTUs. Therefore, we propose to use a frame-level based filter to avoid unnecessary artificial boundaries. In order to improve the performance of frame-level based filtering, we propose a novel module called residual mapping (RM) in this paper. In summary, we propose a novel light CNN-based in-loop filter for both intra and inter frames based on \cite{sun2020image,liu2020learning}. Experimental results show this model achieves excellent performance in terms of both video quality and complexity. Specifically, our contributions are as follows. \begin{itemize} \item A CNN-based lightweight in-loop filter is designed for H.265/HEVC. Low-complexity DSC merged with BN is used as the backbone of this model. Besides, we use attention transfer to pre-train it to help the initialization of parameters. \item For the filtering of inter frames, we analyze and build our CNN filter at the frame level to avoid the artificial boundaries caused by CTU-level control. Besides, a novel post-processing module, RM, is proposed to improve the generalization ability of the frame-level based model and enhance the subjective and objective quality. \item We integrate the proposed method into the HEVC and VVC reference software, and significant performance has been achieved by our proposed method. Besides, we conduct extensive experiments, such as ablation studies, to prove the effectiveness of our proposed methods. \end{itemize} The rest of this paper is organized as follows.
In Section II, we present the related works, including the in-loop filter in video coding and lightweight network design. Section III elaborates on the proposed network, including the network structure and its loss function. Section IV focuses on the proposed RM module and provides an analysis of different control strategies. Experiment results and ablation studies are shown in Section V. In Section VI, we conclude this paper with future work. \section{Related Works} \subsection{In-loop Filters in Video Coding} \subsubsection{DB, SAO, and ALF} DB, SAO, and ALF, which are adopted in the latest video coding standard H.266/VVC\cite{h266}, are aimed at removing the artifacts in video coding. De-blocking \cite{norkin2012hevc} has been used to reduce the discontinuity at block boundaries since the publication of the coding standard H.263+\cite{cote1998h}. Depending on the boundary strength and the reconstructed average luminance level, DB chooses different coding parameters to filter the distorted boundaries. Meanwhile, by classifying the reconstructed samples into various categories, SAO \cite{fu2012sample} gives each category a different offset to compensate for the error between the reconstructed and original pixels. Based on the Wiener filter, ALF \cite{tsai2013adaptive} derives the filter coefficients by minimizing the squared error between the original and reconstructed pixels. The filter coefficients need to be signaled to the decoder side to ensure consistency between the encoder and the decoder. All these aforementioned filters can effectively alleviate the various artifacts in reconstructed images. However, there is still much room for improvement. \subsubsection{Learning-based Filter} Recently, learning-based filters have far outperformed DB, SAO, and ALF in terms of both objective and subjective quality. Different from SAO and ALF, they hardly need extra bits but can compensate for errors adaptively as well. Most of them are based on CNNs and have achieved great success in this field. For the filtering of intra frames, Park \textit{et al.} \cite{IFCNN} first proposed a CNN-based in-loop filter, IFCNN, for video coding. Dai \textit{et al.} \cite{VRCNN} proposed VR-CNN as post-processing to replace DB and SAO in HEVC. Based on Inception, Liu \textit{et al.} \cite{liu2019dual} proposed a CNN-based filter with 475,233 trainable parameters. Meanwhile, Kang \textit{et al.} \cite{MMNN} proposed a multi-modal/multi-scale neural network with up to 2,298,160 parameters. Considering the coding unit (CU) size information, He \textit{et al.} \cite{SJNN} proposed a partition-masked CNN with a dozen residual blocks. Sun \textit{et al.} \cite{sun2020image} proposed a learning-based filter with ResNet\cite{resnet} for the VTM. Liu \textit{et al.} \cite{liu2020learning} proposed a lightweight learning-based filter based on DSC. Apart from what was mentioned above, Zhang \textit{et al.} \cite{RRCNN} proposed a residual convolution neural network with a recursive mechanism. Different from training the filter for intra samples, training the filter with inter samples needs to consider the problem of repeated filtering \cite{jia2019content, SDLA}. Jia \textit{et al.} \cite{jia2019content} proposed a content-aware CNN-based in-loop filtering method that applies multiple CNN models and a discriminative network in H.265/HEVC. This discriminative network can be used to judge the degree of distortion of the current block and select the most appropriate filter for it.
However, since the discriminative network requires additional complexity and memory usage, some researchers \cite{STResNet,dai2018cnn} proposed to use block-level syntax elements to replace it. This method requires extra bit consumption but obtains a more accurate judgment on whether to use the learning-based filter. Similarly, some researchers \cite{IFCNN,wang2018dense} proposed to use frame-level syntax elements to control the filtering of inter frames. Besides, complicated models \cite{STResNet,RHCNN,soh2018reduction} like spatial-temporal networks are also useful for solving this problem. Jia \textit{et al.} \cite{STResNet} proposed a spatial-temporal residue network (STResNet) with CTU-level control to suppress visual artifacts. RHCNN, which is trained for both intra and inter frames, was proposed by Zhang \textit{et al.} \cite{RHCNN}. Filtering on the decoder side \cite{RSNN,li2017cnn,zhang2020enhancing} can also solve the problem of repeated enhancement well. For example, DS-CNN was designed by Yao \textit{et al.} \cite{RSNN} to achieve quality enhancement as well. Li \textit{et al.} \cite{li2017cnn} adopted a 20-layer deep CNN to improve the filtering performance. Zhang \textit{et al.} \cite{zhang2020enhancing} proposed a post-processing network for VTM 4.0.1. In summary, filtering in inter frames is more challenging than that of intra frames. In most cases, a CNN-based in-loop filter with higher complexity can achieve better performance on intra frames. But for the filtering of inter frames, the existing methods have their own problems. For example, frame-level control may lead to an over-smoothing problem, CTU-level control will cause additional artificial boundaries, out-loop filters cannot use the filtered image as a reference, and adding a discriminative network or a complex model may lead to excessive complexity and be impractical. Therefore, we should pay attention to a more effective method for this task. \subsection{Lightweight Network Design} \subsubsection{Depthwise Separable Convolution} \begin{figure}[!tbp] \centering \includegraphics[scale=0.55]{./dsconv.pdf} \caption{The depthwise separable convolution, where "Conv." indicates convolution.}\label{dsconv} \end{figure} As a novel neural network layer, DSC has achieved great success in practical applications because of its low complexity. It was initially introduced in \cite{sifre2014rigid} and subsequently used in MobileNets \cite{howard2017mobilenets}. As shown in Fig. \ref{dsconv}, DSC divides the calculation of standard convolution into two parts: depthwise convolution and pointwise convolution. Different from standard convolution, depthwise convolution decomposes the calculation of standard convolution into grouped convolutions to reduce the complexity. Meanwhile, the pointwise convolution is the same as the standard convolution with a $1\times 1$ kernel. In other words, depthwise convolution is used to convolve the separate features, whereas pointwise convolution is utilized to combine them to get the output feature maps. These two parts together form a complete DSC. \subsubsection{Knowledge Distillation and Transfer} Previous studies \cite{Hinton2015Distilling,huang2017like,zagoruyko2016paying} have shown that the "knowledge" in pre-trained models can be transferred to another model. Hinton \textit{et al.} \cite{Hinton2015Distilling} proposed a distillation method that uses a teacher model to get a "soft target", which helps a student model that has a similar structure perform better in the classification task.
Besides softening the target in classification tasks, some other methods \cite{zagoruyko2016paying,huang2017like} use the intermediate representations of the pre-trained model to transfer the "knowledge". For example, Zagoruyko \textit{et al.} \cite{zagoruyko2016paying} devised a method called attention transfer (AT) that improves the student model's performance by letting it mimic the attention maps of a teacher model. Meanwhile, Huang \textit{et al.} \cite{huang2017like} designed a loss function by minimizing the maximum mean discrepancy (MMD) metric between the distributions of the teacher and the student model, where MMD is a distance metric for probability distributions \cite{gretton2012kernel}. \section{Proposed CNN-based Filter} \begin{figure*}[!tbp] \centering \includegraphics[scale=0.65]{./model_dsc.pdf} \caption{The architecture of the teacher model and the proposed model, where "Rec." indicates "reconstructed pixels". The top-right and bottom-right are the teacher model and the proposed student model, respectively. The rectangle on the right implies the knowledge transfer.}\label{model} \end{figure*} \subsection{Network Structure} As shown in Fig. \ref{model}, we design a network structure that is shared by both the teacher model and the proposed model. This structure is composed of convolutions, BN layers, and the activation ReLU \cite{ReLU}. The backbone of this structure is $K$ layers of DSC with dozens of feature maps $F$. The input to this structure is the HM reconstruction without filtering, and the output is the filtered reconstructed samples. The last part is a standard convolution with only one feature map, and we add the reconstructed samples to the output, inspired by residual learning \cite{resnet}. The depthwise and the standard convolution kernels are both $3\times 3$. Every convolution is followed by a ReLU except for the last one. The reason we choose ReLU instead of other advanced activation functions is that ReLU has lower complexity while providing considerable nonlinearity. In our implementation, the values of $K$ and $F$ are 24 and 64 for the teacher model, and 9 and 32 for the proposed model. The description of the parameters in the proposed model is shown in Table \ref{paranum}. We use the BN layer in the training phase; this layer improves the back-propagation of the gradients. What's more, both BN and convolution are linear computations for the tensors in the proposed model. Therefore, the BN can be merged into the convolution to further reduce the computation during the inference phase. As shown in (\ref{dwconv}), the depthwise convolution output $\bm{\chi}_{dwConv}$ can be formulated as: \begin{equation}\label{dwconv} \bm{\chi}_{dwConv} = \bm{w}_{dwConv}* \bm{\chi} \end{equation} where $*$ indicates the convolution operation, $\bm{w}_{dwConv}$ is the kernel and $\bm{\chi}$ is the depthwise convolution input. Similarly, the pointwise convolution output $\bm{\chi}_{pwConv}$ can be written as: \begin{equation}\label{pwconv} \bm{\chi}_{pwConv} = \bm{w}_{pwConv}* \bm{\chi}_{dwConv} + \bm{b}_{pwConv} \end{equation} where $\bm{w}_{pwConv}$ and $\bm{b}_{pwConv}$ denote the kernel and bias. It is noticeable in (\ref{dwconv}) that the depthwise convolution has no bias; this is because the bias $\bm{b}_{dwConv}$ can be merged into $\bm{b}_{pwConv}$ when there is no activation between the depthwise and pointwise convolutions. After convolution, the output of BN can be obtained by (\ref{bnb}).
(We use the $*$ operation here because, after simplification, the calculation of BN is equivalent to that of a depthwise convolution.) \begin{equation}\label{bnb} \bm{\chi}_{bn} = \bm{\gamma} * \left( \frac{\bm{\chi}_{pwConv}-\bm{mean}}{\sqrt{\bm{var}+\bm{\epsilon}}} \right)+\bm{\beta} \end{equation} Substituting (\ref{pwconv}) into (\ref{bnb}), we obtain (\ref{bnn}) as follows: \begin{equation}\label{bnn} \bm{\chi}_{bn} = \bm{\widehat{w}}_{pwConv}* \bm{\chi}_{dwConv} + \bm{\widehat{b}}_{pwConv} \end{equation} where $\bm{\widehat{w}}_{pwConv}$ and $\bm{\widehat{b}}_{pwConv}$ in (\ref{bnn}) are: \begin{align} \bm{\widehat{w}}_{pwConv} &= \frac{\bm{\gamma} * \bm{w}_{pwConv}}{\sqrt{\bm{var}+\bm{\epsilon}}} \label{wpwConv} \\ \bm{\widehat{b}}_{pwConv} &= \frac{\bm{\gamma} * (\bm{b}_{pwConv}-\bm{mean})}{\sqrt{\bm{var}+\bm{\epsilon}}}+\bm{\beta} \label{bpwConv} \end{align} In (\ref{wpwConv}) and (\ref{bpwConv}), $\bm{\gamma}$ and $\bm{\beta}$ are trainable parameters of BN, and $\bm{mean}$ and $\bm{var}$ are non-trainable parameters of BN. The hyper-parameter $\bm{\epsilon}$ represents a small positive number that prevents division-by-zero errors. In the inference phase, we use $\bm{\widehat{w}}_{pwConv}$ and $\bm{\widehat{b}}_{pwConv}$ to replace the weight $\bm{w}_{pwConv}$ and bias $\bm{b}_{pwConv}$ in the pointwise convolution, thus merging the BN into the DSC and reducing the model complexity. \begin{table}[!tbp] \begin{threeparttable} \centering \caption{Description of the Parameters in the Proposed Model}\label{paranum} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{l|c|c|c|c|c} \Xhline{1.0pt} Index &Block1&Block2&Block3&Std Conv.\tnote{a} &Sum \bigstrut\\ \hline Parameters &$73+2\times1,344$ &$3\times1,344$ &$3\times1,344$ &$289$ &11,114 \bigstrut\\ \Xhline{1.0pt} \end{tabular}} \begin{tablenotes} \item[a] Standard Convolution. \end{tablenotes} \end{threeparttable} \end{table} \subsection{Standard Convolution of the Proposed Structure} In this subsection, the last part of the proposed structure is detailed. It is worth noting that the last convolution of the proposed model is a standard convolution, which is not consistent with the backbone of the proposed model; this is because the standard convolution uses fewer calculations than DSC when the number of convolution output channels is only one. The DSC consists of two steps: depthwise convolution and pointwise convolution. The depthwise convolution is a simplification of the standard convolution that reduces the amount of computation while preserving the ability to convolve the input feature maps. Meanwhile, the pointwise convolution is equivalent to the standard convolution with a $1\times 1$ kernel; it is utilized to fuse the different depthwise convolution outputs. According to their computing methods, the ratio $r$ of the calculation of the DSC to that of the standard convolution is calculated as: \begin{equation}\label{ratio} r= \frac{K_W K_H C_I W H +C_I C_O W H }{K_W K_H C_I C_O W H}=\frac{1}{C_O}+\frac{1}{K_WK_H} \end{equation} where $W$ and $H$ are the width and height of the input frame, respectively; $K_W$ and $K_H$ are the width and height of the convolution kernel, respectively; and $C_I$ and $C_O$ are the numbers of feature maps for the convolution input and output, respectively. In our proposed model, $C_O=1$ and $K_W=3, K_H=3$. So $r =\frac{1}{C_O}+\frac{1}{K_WK_H}=\frac{10}{9}$, which is greater than $1$. This means that DSC consumes more computing resources than standard convolution in this case.
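As a quick numeric check of (\ref{ratio}), the following sketch counts multiplications for one layer; it is an illustration only, and the chosen feature-map sizes are arbitrary assumptions.
\begin{verbatim}
def mults(kw, kh, ci, co, w, h, separable):
    if separable:  # depthwise + pointwise
        return kw*kh*ci*w*h + ci*co*w*h
    return kw*kh*ci*co*w*h  # standard convolution

kw = kh = 3; ci, w, h = 32, 64, 64
r = mults(kw, kh, ci, 1, w, h, True) / mults(kw, kh, ci, 1, w, h, False)
print(r)  # 1.111..., i.e., 1/C_O + 1/(K_W K_H) with C_O = 1
\end{verbatim}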
The extra calculation is caused by the pointwise convolution, which is utilized to combine feature maps. However, the standard convolution can also combine features, which makes the extra calculation of the pointwise convolution unnecessary. Therefore, we choose the standard convolution at the end of the model to avoid meaningless calculations. \subsection{Proposed Initialization and Training Scheme} In this subsection, we will introduce the training process and loss functions of the proposed network. In most cases, a suitable initialization of parameters can help the model converge better to the minimum. Inspired by transfer learning, a pre-trained teacher model is used to guide the initialization of the parameters in the proposed model. By using such an initialization, we hope the proposed model can obtain an output similar to that of the teacher model before the real training begins. The teacher model is pre-trained with the mean squared error (MSE) loss between the output $Y_T$ of the teacher model and the original pixels $Y_O$. \begin{equation}\label{lt} \mathcal{L}_T = \frac{1}{N}\sum_{i=1}^{N}\|Y_T^i-Y_O^i\|^2_2 \end{equation} After the training of the teacher model, we use its intermediate outputs to guide the parameter initialization of the proposed model. This process is denoted by the bold lines in Fig. \ref{model}. Because the vanishing of gradients may lead to insufficient training of shallow layers, the teacher model is divided into differently-sized blocks to produce the intermediate hints. For the metric of the distance between the teacher and the proposed student model, we try two forms: MMD \cite{huang2017like} and attention loss \cite{zagoruyko2016paying}. The loss function $\mathcal{L}_{MMD^2}(F_T,F_S)$ with a linear kernel function ($k(x, y)=x^Ty$) can be written as follows: \begin{equation}\label{mmdloss} \mathcal{L}_{MMD^2}(F_T,F_S) = \|\frac{1}{C_T}\sum_{i=1}^{C_T}\frac{f_T^i}{\|f_T^i\|_2} -\frac{1}{C_S}\sum_{j=1}^{C_S}\frac{f_S^j}{\|f_S^j\|_2}\|^2_2 \end{equation} where $F$ represents the attention map, $f$ indicates a single feature map, $C$ is the number of feature maps, and the subscripts $T$ and $S$ identify the teacher and student model. Meanwhile, the loss function $\mathcal{L}_{AT}(F_T,F_S)$ of attention transfer (AT)\cite{zagoruyko2016paying} can be written as follows: \begin{equation}\label{at} \mathcal{L}_{AT}(F_T,F_S) = \|\frac{\sum_{i=1}^{C_T}|f_T^i|^p}{\|\sum_{i=1}^{C_T}|f_T^i|^p\|_2} -\frac{\sum_{j=1}^{C_S}|f_S^j|^p}{\|\sum_{j=1}^{C_S}|f_S^j|^p\|_2}\|^2_2 \end{equation} We set $p$ to 2 in our implementation, because these two methods are similar except for their normalization methods when $p=1$\cite{huang2017like}. After the initialization, we start the real training process, using the MSE loss $\mathcal{L}_S$ in (\ref{ls}) to train the proposed model, where $Y_S$ indicates the output of the proposed model. \begin{equation}\label{ls} \mathcal{L}_S =\frac{1}{N}\sum_{i=1}^{N} \|Y_S^i-Y_O^i\|^2_2 \end{equation} In summary, the whole process can be divided into the following steps.
\begin{algorithm}[htb] \normalsize \caption{ The process of building the trained proposed model.} \label{alg:Framwork} \begin{algorithmic}[1] \Require The dataset pair of HM reconstruction samples $X$ and original samples $Y_O$; \Ensure The trained proposed model; \State Constructing the teacher model $T$ and training it for $n_1$ epochs with the MSE loss $\mathcal{L}_T$; \label{code:fram:ConstructingT} \State Extracting the attention maps $F_T$ from the trained $T$; \label{code:fram:Extracting} \State Constructing the student model $S$ with BN and training it for $n_2$ epochs with $\mathcal{L}_{AT}(F_T,F_S)$ or $\mathcal{L}_{MMD^2}(F_T,F_S)$; \label{code:fram:ConstructingSB} \State Training $S$ with the MSE loss $\mathcal{L}_S$ for $n_3$ epochs; \label{code:fram:Training} \State Calculating $\bm{\widehat{w}}_{pwConv}$ and $\bm{\widehat{b}}_{pwConv}$ for $S$; \label{code:fram:Calculating} \State Removing the BN from $S$; \label{code:fram:classify} \State Using $\bm{\widehat{w}}_{pwConv}$ and $\bm{\widehat{b}}_{pwConv}$ to replace the weight $\bm{w}_{pwConv}$ and bias $\bm{b}_{pwConv}$ in the pointwise convolution of $S$; \label{code:fram:select} \\ \Return $S$; \end{algorithmic} \end{algorithm} \section{Proposed Residual Mapping for the CNN-based Filtering} \subsection{Analysis of CTU-level and Frame-level Control}\label{analysisCtuFrame} \begin{figure}[tbp] \centering \subfigure[Convolution with valid padding]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[scale=0.9]{pixpad.pdf} \end{minipage}% }% \subfigure[Convolution with same padding]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[scale=0.9]{zeropad.pdf} \end{minipage}% } \subfigure{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[scale=0.8]{padlegend.pdf} \end{minipage}% }% \centering \caption{The diagrams of convolution with different padding methods.}\label{padway} \end{figure} According to the size of the filtered samples, filtering methods can be divided into CTU-level (block-level) and frame-level methods. Compared with CTU-level control, frame-level control has two main advantages in CNN-based filter design, concerning the required computational resources and the video quality. In this subsection, the difference is analyzed from the perspectives of the padding methods and the filter kernels. \begin{table}[!tbp] \centering \caption{Complexity Comparison of CTU-level Control Between Valid Padding and Same Padding }\label{cp_pad} \begin{threeparttable}[b] \setlength{\tabcolsep}{1mm}{ \begin{tabular}{l|c|c|c|c|c|c} \Xhline{1.0pt} Items &\multicolumn{2}{c|}{RHCNN\cite{RHCNN}} &\multicolumn{2}{c|}{Jia \textit{et al.} \cite{jia2019content}} & \multicolumn{2}{c}{VR-CNN\cite{VRCNN}} \bigstrut \\ \hline Padding type &Valid&Same&Valid&Same&Valid&Same \bigstrut\\ \hline \hline Flops\tnote{a} (G)&16.21&10.89&2.02&1.49&0.25&0.22 \bigstrut\\ \hline Madd\tnote{b} (G)&32.38&21.76&4.04&2.97&0.49&0.44 \bigstrut\\ \hline Memory\tnote{c} (MB) &91.06&60.11&25.43&18.02&5.84&5.02\bigstrut\\ \hline MemR+W\tnote{d} (MB)&193.05&130.36&55.09&39.42&13.88&11.99 \bigstrut\\ \Xhline{1.0pt} \end{tabular}} \begin{tablenotes} \item[a] Theoretical amount of floating-point operations. \item[b] Theoretical amount of multiply-adds. \item[c] Memory usage. \item[d] Memory read/write. \end{tablenotes} \end{threeparttable} \end{table} Firstly, to keep the size of the input frames unchanged, the CNN-based filter needs to pad the boundaries of the input with some samples.
There are usually two padding methods: valid padding (padding with reconstructed samples) and same padding (padding with zero samples). In one case, if the CTUs are padded with reconstructed pixels to maintain the same accuracy as frame-level filtering, most networks need to pad the input block with a large number of pixels, which requires considerable calculation. Fig. \ref{padway} intuitively shows the difference in the amount of calculation between valid and same padding. The quantitative results \cite{OpCounter} are given in Table \ref{cp_pad} (we assume that the output size of the filtered samples is $64\times 64$ in both cases); it can be found that valid padding (see the ``Valid'' columns) leads to a considerable complexity increase over same padding (see the ``Same'' columns) for all of the networks \cite{VRCNN, RHCNN, jia2019content}. In the other case, if same padding is selected, it will cause calculation errors around the boundaries, as shown in Fig. \ref{padcontrol}. We assume that the size of the block control is $h \times h$ and that the width of the boundary area affected by the padding is $a$. The proportion $p_{fc}$ of affected pixels under frame-level control is calculated as follows: \begin{equation}\label{perimeter1} p_{fc}=1 - \frac{(W-2a)(H-2a)}{WH}=\frac{2a(W+H-2a)}{WH} \end{equation} Similarly, the proportion $p_{bc}$ of affected pixels under block-level control can be approximated as follows (no incomplete CTUs are considered): \begin{equation}\label{perimeter2} p_{bc}=\frac{4a(h-a)}{h^2}\approx \frac{4a}{h} \end{equation} It can be found from (\ref{perimeter2}) that the area affected by same padding is approximately proportional to the perimeter of the filtered samples. Therefore, frame-level control, with its higher area-to-perimeter ratio, is less affected than block-level control. According to (\ref{perimeter1}) and (\ref{perimeter2}), for the HEVC test sequences, the same padding of our network affects an average of 45\% of the pixels under CTU-level control, whereas the proportion under frame-level control is only 3\%. Therefore, choosing frame-level control lays a solid foundation for the application of the CNN-based filter.
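The figures above can be reproduced with a short sketch of (\ref{perimeter1}) and (\ref{perimeter2}). The boundary width $a$ below is an assumption on our part (set to 8 pixels, on the order of the network's receptive-field radius), chosen purely for illustration:
\begin{verbatim}
def p_frame(W, H, a):
    # eq. (perimeter1): proportion affected under frame-level control
    return 2 * a * (W + H - 2 * a) / (W * H)

def p_block(h, a):
    # eq. (perimeter2) before the approximation 4a/h
    return 4 * a * (h - a) / h ** 2

a = 8                          # assumed affected-boundary width (pixels)
print(p_frame(1920, 1080, a))  # ~0.023 for a 1080p frame, about 2-3%
print(p_block(64, a))          # ~0.44 for a 64x64 CTU, about 44%
\end{verbatim}
These values are consistent in magnitude with the 45\% and 3\% averages quoted above.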
Secondly, the frames filtered with frame-level control have the property of integrity. Frame-level control uses the same kernels to filter the entire frame, whereas CTU-level control may use different kernels for two consecutive CTUs, which may introduce artificial errors along their boundaries. As shown in Fig. \ref{padcontrol}, two consecutive CTUs with different filtering strategies exhibit errors along their common boundary because of the different kernels used in the filtering, especially when one of the CTUs uses the learning-based filter while the other one does not. This further demonstrates the advantages of frame-level control. \begin{figure}[tbp] \centering \subfigure[CTU-level control]{ \begin{minipage}[t]{.475\linewidth} \centering \includegraphics[scale=0.58]{ctupad.pdf} \end{minipage}% } \subfigure[Frame-level control]{ \begin{minipage}[t]{.475\linewidth} \centering \includegraphics[scale=0.58]{framepad.pdf} \end{minipage}% }% \centering \caption{The diagrams of convolution with different control methods.}\label{padcontrol} \end{figure} In summary, for the design of lightweight CNN-based filters, frame-level control has some advantages over block-level control. On the one hand, compared with frame-level control, CTU-level control leads to extra calculation cost with valid padding or to calculation errors with same padding. On the other hand, frame-level control has the property of integrity, and it brings better subjective quality. To reduce both the padding error brought by the multi-layer neural network and the complexity, we build our CNN-based in-loop filter with frame-level control. However, directly using frame-level control is of limited power because it only has the two states of using or not using the filter, so we need additional methods to improve its performance. \begin{figure}[tbp] \centering \subfigure[Org. frame]{ \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[width=2.7cm]{wiener_org.pdf} \end{minipage}% }% \subfigure[Distortion]{ \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[width=2.7cm]{wiener_distort.pdf} \end{minipage}% }% \subfigure[Learned residual]{ \begin{minipage}[t]{0.32\linewidth} \centering \includegraphics[width=2.7cm]{wiener_residue.pdf} \end{minipage}% }% \centering \caption{A frame from the CLIC dataset \cite{clic} is coded with HM-16.16 and QP 37. The original frame, the distortion, and the learned residual of this frame are shown in (a), (b), and (c).}\label{odl} \end{figure} \subsection{Residual Mapping} \begin{figure}[!tbp] \centering \includegraphics[width=8.3cm]{./rm_test.png} \caption{The comparison of different filtering mechanisms (``RaceHorses\_416x240'', QP 22, LDP configuration). Linear, quadratic, and cubic denote linear, quadratic, and cubic mapping functions, respectively. We can see in the red box that using the CNN filter directly is not satisfactory and can even lead to a decrease in PSNR.}\label{rm_fig} \end{figure} In this subsection, a novel post-processing module, RM, is proposed to improve the performance of the frame-level-control-based CNN filter. It can effectively alleviate the over-smoothing problem \cite{jia2019content, SDLA} of inter frames. Besides, we find in Section \ref{rm_intra_ab} that it also brings a considerable improvement for intra frames. Most trained neural networks are fitted to a certain training set. Since the distribution of the training data is often very complicated, the training is actually a trade-off over the data set. For a specific image, the trained filter may therefore be under-fitted or over-fitted, which can cause distortion or blur for a learning-based filter. What is more, if we want to use a neural network trained with intra samples for the filtering of inter samples, this phenomenon becomes more serious because of the difference between the distributions of the intra and inter datasets. With this in mind, we propose to use a parametric RM after the learning-based filter, which is a kind of non-parametric filter, in order to improve its generalization ability. Inspired by the apparent correlation between the distortion and the learned residual shown in Fig. \ref{odl}, we approach this filter from the perspective of restoring the distortion from the learned residual, which is equivalent to improving the quality of the distorted frames. The distortion $R_O$ is defined as the difference between the original samples $Y_O$ and the de-blocked reconstruction $X$: \begin{equation}\label{RO} R_O = Y_O - X \end{equation} Similarly, the learned residual $R_S$ is defined as the difference between the output of the learning-based filter and $X$: \begin{equation}\label{RS} R_S = Y_S - X \end{equation} A function $f_{\lambda}(\cdot)$ with parameters $\lambda$ is designed as the parametric filter that maps $R_S$ to $R_O$.
We choose MSE as the metric: \begin{equation}\label{lambda} \lambda = \mathop{\arg\min}_{\lambda}(f_{\lambda}(R_S)-R_O)^2 \end{equation} We should use a model with a small number of parameters to construct $f_{\lambda}(\cdot)$, so that it is convenient to encode the parameters $\lambda$ into the bitstream to ensure the consistency of encoding and decoding. For the form of $f_{\lambda}(\cdot)$, we have tried linear and polynomial functions, as shown in Fig. \ref{rm_fig}. From the red box on the left, it can be found that using only the CNN filter (see the red dotted line) may lead to a decrease in coding performance, which shows that directly using CNN filters for inter frames may degrade the video quality. The performance is improved after adopting RM. It is noticeable that there is little difference in performance between the different polynomial functions, so we choose a simple linear function to build RM: \begin{equation}\label{lambda2} \lambda = \mathop{\arg\min}_{\lambda}(\lambda R_S-R_O)^2 \end{equation} We then add $X$ to the output $\hat{R}_S$ of RM to obtain the filtered frame $\hat{Y}_S$; after sending it to SAO, the entire filtering process is completed. \begin{equation}\label{rm_output} \hat{Y}_S = X + \hat{R}_S = X + \lambda R_S \end{equation} We quantize the candidate $\lambda$ with $n$ bits for each component, i.e.\ $\lambda=i/(2^n-1)$, $i \in \{0,1,\ldots,2^n-1\}$. In the implementation, the number of required bits $n$ is set to $5$, so each frame needs 15 bits for the RM module. A rate-distortion optimization (RDO) process is designed to find the best $\lambda$, and the regular mode of CABAC is used to code $\lambda$. RM needs neither specific models for inter frames nor additional classifiers for each CTU. What is more, it is independent of the proposed network and can be combined with other learning-based filters to alleviate the over-smoothing problem as well.
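A minimal sketch of the selection of $\lambda$ follows. For simplicity it searches the $2^n$ quantized candidates by mean squared error only, ignoring the rate term of the full RDO process, and the synthetic residuals are purely illustrative:
\begin{verbatim}
import numpy as np

def best_lambda(R_S, R_O, n_bits=5):
    # eq. (lambda2): exhaustive search over the quantized candidates
    candidates = np.arange(2 ** n_bits) / (2 ** n_bits - 1)
    errors = [np.mean((lam * R_S - R_O) ** 2) for lam in candidates]
    return candidates[int(np.argmin(errors))]

# Illustrative residuals: the learned residual over-shoots the true
# distortion, so the selected lambda lies below 1.
R_O = np.random.randn(64, 64)
R_S = 1.5 * R_O + 0.1 * np.random.randn(64, 64)
print(best_lambda(R_S, R_O))   # close to 1/1.5, i.e. about 0.67
\end{verbatim}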
\begin{figure}[tbp] \centering \subfigure[Serial structure]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[scale=0.5]{rm1.pdf} \end{minipage}% }% \subfigure[Parallel structure]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[scale=0.5]{rm2.pdf} \end{minipage}% }% \subfigure[Proposed structure]{ \begin{minipage}[t]{\linewidth} \centering \includegraphics[scale=0.5]{rm3.pdf} \end{minipage}% }% \centering \caption{The schemes of the different frameworks with CNN-based filter. }\label{rmfig} \end{figure} Different from the previous strategy \cite{SDLA} of choosing between traditional filtering and learning-based filtering, RM uses a serial structure and makes full use of both kinds of filtering, as shown in Fig. \ref{rmfig}. From the perspective of reconstructed frames, the proposed RM can be interpreted as a post-processing module that fully utilizes the advantages of both the distorted reconstruction and the learned filtered output, and this full use of the two aspects gives RM its excellent performance. For example, assume that the reference frame has been filtered by a learning-based filter; if the current frame and the reference frame are almost identical, the current frame does not need to use all the filters. Conversely, if the current frame and the reference frame are completely different, artificial imprints are easily produced because of the distorted residue, so the filters should be used in this case. For a specific frame, however, it is often difficult to obtain an accurate judgment about whether to use the filters from its encoded information, such as residuals or motion vectors. Considering the good generalization ability of the traditional filters, we keep them active and focus on the CNN filter. We therefore introduce the parametric module RM, which uses an RDO process to give the CNN filter an appropriate filtering strength. From (\ref{lambda}), it can be observed that the filtering strength varies with $\lambda$, so we can traverse all of the candidate values of $\lambda$ and code the one with the smallest reconstruction error into the bitstream. Alternatively, we could differentiate the objective function to obtain the optimal parameters and code the quantized parameters into the bitstream. In this case, the influence of parameter quantization must be considered: mapping functions that are sensitive to quantization noise, such as high-order polynomials, should be abandoned, since they may result in larger quantization errors in the decoded frames. \section{Experimental Results} \begin{table}[!tbp] \centering \caption{Experimental Environment }\label{testcondition} \begin{tabular}{l|l} \Xhline{1.0pt} Items & Specification \bigstrut\\ \hline \hline Optimizer & Adam \cite{kingma2014adam} \bigstrut\\ \hline Processor & Intel Xeon Gold 6134 at 3.20 GHz\bigstrut\\ \hline GPU & NVIDIA GeForce RTX 2080 \bigstrut\\ \hline Operating system & CentOS Linux release 7.6.1810 \bigstrut\\ \hline HM version & 16.16 \bigstrut\\ \hline DNN framework & Keras 2.2.4 \cite{chollet2015keras} and TensorFlow 1.12.0 \cite{tensorflow} \bigstrut\\ \Xhline{1.0pt} \end{tabular} \end{table} \subsection{Experimental Setting} For the experiments, we mainly focus on objective quality, subjective quality, complexity, and ablation studies to illustrate the performance of our model. Nine hundred pictures from DIV2K \cite{DIV2K} are cropped to a resolution of $1024\times 1024$ and then down-sampled to $512\times 512$. These two sets of pictures are spliced into two videos as our training sets. Only the luminance component is used for training, although the chrominance components are also tested with the proposed model. The patch size in training is $32\times 32$ for H.265/HEVC and $64 \times 64$ for H.266/VVC, which is consistent with the largest size of the TU. Considering that reconstructed images with different QPs often show different degrees of distortion and artifacts, the whole QP range is divided into four bands: below 24, 25 to 29, 30 to 34, and above 35. Four dedicated models are then trained, one for each QP band. The parameters of both the teacher model and the proposed model are initialized from a normal distribution \cite{he2015delving}. The training epochs $n_1$ and $n_3$ are both set to 50. We use more training epochs in the special initialization phase for the models with higher QPs, because there are often more artifacts in the reconstructed images at higher QPs; specifically, $n_2$ is set to $10$ for the lower QPs and $20$ for the higher QPs. After the training phase, we save the trained model and invoke it for inference in the HEVC reference software (HM) and the VVC test model (VTM). In the test phase, the first 64 frames of the HEVC test sequences are used to evaluate the generalization ability of our model. We test four different configurations with default settings, namely all-intra (AI), low-delay-B (LDB), low-delay-P (LDP), and random-access (RA), for the H.265/HEVC anchor.
For the H.266/VVC anchor, we test the default AI and RA configurations. Four typical QPs from the common test conditions are tested: 22, 27, 32, and 37. The other important test conditions are shown in Table \ref{testcondition}. For a fair comparison with previous works, we use the coding results reported in their original papers. The complexities of the reference methods are measured on our local server to avoid the influence of different hardware platforms. \subsection{Experiment on H.265/HEVC}\label{hmtest} \begin{table*}[tbp] \centering \caption{BD-rate Reduction of the Proposed Method over the HM-16.16 Anchor} \begin{tabular}{c|c|r|r|r|r|r|r|r|r|r|r|r|r} \Xhline{1.0pt} \multicolumn{2}{c|}{\multirow{2}[4]{*}{Sequences}} & \multicolumn{3}{c|}{AI} & \multicolumn{3}{c|}{LDB} & \multicolumn{3}{c|}{LDP} & \multicolumn{3}{c}{RA} \bigstrut\\ \cline{3-14} \multicolumn{2}{c|}{} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c}{V} \bigstrut\\ \hline \hline \multirow{2}[4]{*}{ClassA} & \multicolumn{1}{l|}{Traffic} & -7.3\% & -3.4\% & -4.7\% & -4.6\% & -2.4\% & -0.9\% & -4.3\% & -3.4\% & -1.5\% & -6.4\% & -3.6\% & -2.7\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{PeopleOnStreet} & -6.8\% & -7.1\% & -6.9\% & -4.5\% & -0.6\% & -0.9\% & -3.1\% & -4.4\% & -2.4\% & -6.1\% & -4.6\% & -5.0\% \bigstrut\\ \hline \multirow{5}[10]{*}{ClassB} & \multicolumn{1}{l|}{Kimono} & -4.9\% & -2.6\% & -2.5\% & -4.6\% & -7.5\% & -4.7\% & -7.3\% & -11.5\% & -6.7\% & -4.2\% & -5.7\% & -3.4\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{ParkScene} & -5.5\% & -3.2\% & -2.3\% & -1.9\% & -0.3\% & -0.7\% & -1.5\% & -0.5\% & -0.7\% & -3.8\% & -0.6\% & -0.3\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{Cactus} & -5.3\% & -4.1\% & -10.1\% & -4.4\% & -3.7\% & -4.4\% & -5.4\% & -5.6\% & -5.8\% & -6.8\% & -9.5\% & -7.2\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{BasketballDrive} & -4.3\% & -8.9\% & -11.7\% & -3.4\% & -4.2\% & -7.8\% & -6.0\% & -9.2\% & -11.7\% & -4.4\% & -4.4\% & -8.9\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{BQTerrace} & -3.7\% & -4.3\% & -4.8\% & -6.1\% & -2.1\% & -2.4\% & -10.6\% & -4.5\% & -3.9\% & -8.8\% & -3.8\% & -3.3\% \bigstrut\\ \hline \multirow{4}[8]{*}{ClassC} & \multicolumn{1}{l|}{BasketballDrill} & -8.0\% & -11.7\% & -14.1\% & -2.8\% & -4.9\% & -4.6\% & -3.4\% & -5.6\% & -6.2\% & -4.2\% & -8.0\% & -9.7\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{BQMall} & -6.0\% & -6.3\% & -7.2\% & -3.8\% & -3.2\% & -4.7\% & -4.6\% & -4.5\% & -5.6\% & -5.1\% & -4.7\% & -5.1\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{PartyScene} & -3.7\% & -4.8\% & -5.7\% & -0.8\% & -0.1\% & -0.2\% & -1.8\% & -0.4\% & -0.4\% & -1.7\% & -1.4\% & -2.0\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{RaceHorses} & -3.9\% & -6.9\% & -12.0\% & -4.1\% & -6.6\% & -11.3\% & -4.2\% & -7.9\% & -12.7\% & -4.7\% & -9.7\% & -14.2\% \bigstrut\\ \hline \multirow{4}[8]{*}{ClassD} & \multicolumn{1}{l|}{BasketballPass} & -6.5\% & -7.3\% & -10.3\% & -4.4\% & -3.3\% & -4.6\% & -4.3\% & -4.8\% & -5.8\% & -3.9\% & -4.7\% & -6.1\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{BQSquare} & -4.2\% & -3.0\% & -6.8\% & -2.4\% & -1.6\% & -2.8\% & -4.1\% & -1.8\% & -2.9\% & -2.4\% & -1.0\% & -2.9\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{BlowingBubbles} & -5.3\% & -9.3\% & -9.8\% & -3.6\%
& -5.8\% & -2.1\% & -3.9\% & -5.4\% & -1.8\% & -4.0\% & -6.1\% & -4.4\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{RaceHorses} & -7.5\% & -10.5\% & -14.6\% & -6.3\% & -5.2\% & -10.3\% & -6.7\% & -7.4\% & -10.8\% & -6.8\% & -9.4\% & -12.2\% \bigstrut\\ \hline \multirow{6}[12]{*}{ClassE} & \multicolumn{1}{l|}{Vidyo1} & -8.9\% & -8.7\% & -10.5\% & -6.7\% & -9.0\% & -9.6\% & -7.4\% & -9.4\% & -8.9\% & -8.1\% & -8.4\% & -9.7\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{Vidyo3} & -7.0\% & -5.2\% & -5.3\% & -4.0\% & -5.9\% & -3.1\% & -4.6\% & -6.3\% & -2.5\% & -6.5\% & -4.1\% & -5.1\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{Vidyo4} & -6.3\% & -10.1\% & -10.8\% & -3.8\% & -11.5\% & -10.9\% & -3.9\% & -12.1\% & -11.2\% & -5.6\% & -9.8\% & -10.1\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{FourPeople} & -9.4\% & -8.1\% & -9.0\% & -8.6\% & -9.2\% & -9.4\% & -9.0\% & -9.7\% & -10.8\% & -9.4\% & -7.7\% & -8.1\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{Johnny} & -8.3\% & -12.3\% & -11.0\% & -7.0\% & -11.4\% & -9.1\% & -9.6\% & -13.1\% & -10.7\% & -8.3\% & -10.9\% & -9.7\% \bigstrut\\ \cline{2-14} & \multicolumn{1}{l|}{KristenAndSara} & -8.6\% & -10.2\% & -11.1\% & -7.7\% & -8.3\% & -8.6\% & -8.3\% & -10.1\% & -11.2\% & -8.2\% & -8.9\% & -9.6\% \bigstrut\\ \hline \multicolumn{2}{c|}{Average} & \textbf{-6.3\%} & \textbf{-7.0\%} & \textbf{-8.6\%} & \textbf{-4.5\%} & \textbf{-5.1\%} & \textbf{-5.4\%} & \textbf{-5.4\%} & \textbf{-6.6\%} & \textbf{-6.4\%} & \textbf{-5.7\%} & \textbf{-6.1\%} & \textbf{-6.6\%} \bigstrut\\ \Xhline{1.0pt} \end{tabular}% \label{results}% \end{table*}% \begin{table*}[tbp] \centering \caption{BD-rate Reduction and Complexity (GPU) of the Proposed Method compared with Previous Works \cite{jia2019content,VRCNN} in AI Configuration} \setlength{\tabcolsep}{1.5mm}{ \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \Xhline{1.0pt} \multirow{2}[4]{*}{Sequences} & \multicolumn{5}{c|}{Jia et al. 
\cite{jia2019content}} & \multicolumn{5}{c|}{VR-CNN \cite{VRCNN}} & \multicolumn{5}{c}{Proposed model} \bigstrut\\ \cline{2-16} & Y & U & V & $\Delta T_{enc}$ & $\Delta T_{dec}$ & Y & U & V & $\Delta T_{enc}$ & $\Delta T_{dec}$ & Y & U & V & $\Delta T_{enc}$ & $\Delta T_{dec}$ \bigstrut\\ \hline \hline ClassA & \multicolumn{1}{r|}{-4.7\%} & \multicolumn{1}{r|}{-3.3\%} & \multicolumn{1}{r|}{-2.6\%} & \multicolumn{1}{r|}{108.1\%} & \multicolumn{1}{r|}{734.9\%} & \multicolumn{1}{r|}{-5.5\%} & \multicolumn{1}{r|}{-4.7\%} & \multicolumn{1}{r|}{-4.9\%} & \multicolumn{1}{r|}{108.3\%} & \multicolumn{1}{r|}{561.1\%} & \multicolumn{1}{r|}{-7.1\%} & \multicolumn{1}{r|}{-5.4\%} & \multicolumn{1}{r|}{-5.9\%} & \multicolumn{1}{r|}{105.8\%} & \multicolumn{1}{r}{281.0\%} \bigstrut\\ \hline ClassB & \multicolumn{1}{r|}{-3.5\%} & \multicolumn{1}{r|}{-2.8\%} & \multicolumn{1}{r|}{-3.0\%} & \multicolumn{1}{r|}{109.0\%} & \multicolumn{1}{r|}{659.8\%} & \multicolumn{1}{r|}{-3.3\%} & \multicolumn{1}{r|}{-3.2\%} & \multicolumn{1}{r|}{-3.7\%} & \multicolumn{1}{r|}{110.3\%} & \multicolumn{1}{r|}{505.3\%} & \multicolumn{1}{r|}{-4.8\%} & \multicolumn{1}{r|}{-4.8\%} & \multicolumn{1}{r|}{-6.4\%} & \multicolumn{1}{r|}{106.2\%} & \multicolumn{1}{r}{265.2\%} \bigstrut\\ \hline ClassC & \multicolumn{1}{r|}{-3.4\%} & \multicolumn{1}{r|}{-3.5\%} & \multicolumn{1}{r|}{-5.0\%} & \multicolumn{1}{r|}{113.1\%} & \multicolumn{1}{r|}{894.9\%} & \multicolumn{1}{r|}{-5.0\%} & \multicolumn{1}{r|}{-5.5\%} & \multicolumn{1}{r|}{-6.9\%} & \multicolumn{1}{r|}{113.0\%} & \multicolumn{1}{r|}{685.1\%} & \multicolumn{1}{r|}{-5.4\%} & \multicolumn{1}{r|}{-7.5\%} & \multicolumn{1}{r|}{-9.9\%} & \multicolumn{1}{r|}{106.5\%} & \multicolumn{1}{r}{326.3\%} \bigstrut\\ \hline ClassD & \multicolumn{1}{r|}{-3.2\%} & \multicolumn{1}{r|}{-4.7\%} & \multicolumn{1}{r|}{-6.0\%} & \multicolumn{1}{r|}{128.9\%} & \multicolumn{1}{r|}{1406.0\%} & \multicolumn{1}{r|}{-5.4\%} & \multicolumn{1}{r|}{-6.4\%} & \multicolumn{1}{r|}{-8.1\%} & \multicolumn{1}{r|}{121.6\%} & \multicolumn{1}{r|}{1047.1\%} & \multicolumn{1}{r|}{-5.9\%} & \multicolumn{1}{r|}{-7.8\%} & \multicolumn{1}{r|}{-10.5\%} & \multicolumn{1}{r|}{114.4\%} & \multicolumn{1}{r}{548.0\%} \bigstrut\\ \hline ClassE & \multicolumn{1}{r|}{-5.8\%} & \multicolumn{1}{r|}{-4.1\%} & \multicolumn{1}{r|}{-5.2\%} & \multicolumn{1}{r|}{112.3\%} & \multicolumn{1}{r|}{1110.2\%} & \multicolumn{1}{r|}{-6.5\%} & \multicolumn{1}{r|}{-5.5\%} & \multicolumn{1}{r|}{-5.6\%} & \multicolumn{1}{r|}{111.1\%} & \multicolumn{1}{r|}{836.7\%} & \multicolumn{1}{r|}{-8.1\%} & \multicolumn{1}{r|}{-9.2\%} & \multicolumn{1}{r|}{-9.7\%} & \multicolumn{1}{r|}{107.2\%} & \multicolumn{1}{r}{401.1\%} \bigstrut\\ \hline Average & \multicolumn{1}{r|}{-4.1\%} & \multicolumn{1}{r|}{-3.7\%} & \multicolumn{1}{r|}{-4.4\%} & \multicolumn{1}{r|}{114.3\%} & \multicolumn{1}{r|}{961.2\%} & \multicolumn{1}{r|}{-5.1\%} & \multicolumn{1}{r|}{-5.1\%} & \multicolumn{1}{r|}{-5.8\%} & \multicolumn{1}{r|}{112.9\%} & \multicolumn{1}{r|}{727.0\%} & \multicolumn{1}{r|}{\textbf{-6.3\%}} & \multicolumn{1}{r|}{\textbf{-7.0\%}} & \multicolumn{1}{r|}{\textbf{-8.6\%}} & \multicolumn{1}{r|}{\textbf{108.0\%}} & \multicolumn{1}{r}{\textbf{364.3\%}} \bigstrut\\ \hline FLOPs & \multicolumn{5}{c|}{334.84G } & \multicolumn{5}{c|}{50.39G } & \multicolumn{5}{c}{\textbf{10.51G}} \bigstrut\\ \hline Parameters & \multicolumn{5}{c|}{362,753} & \multicolumn{5}{c|}{54,512} & \multicolumn{5}{c}{\textbf{11,114}} \bigstrut\\ \hline Model size & \multicolumn{5}{c|}{1.38MB } & 
\multicolumn{5}{c|}{220KB } & \multicolumn{5}{c}{\textbf{58KB}} \bigstrut\\ \Xhline{1.0pt} \end{tabular}% } \label{comparsion}% \end{table*}% \begin{table*}[tbp] \centering \caption{Overall BD-rate Comparison of Previous Methods \cite{jia2019content,IFCNN,STResNet} in the LDB, LDP, and RA Configurations} \begin{tabular}{rrrrrrrrrr} \Xhline{1.0pt} \multicolumn{1}{c|}{\multirow{2}[4]{*}{Methods}} & \multicolumn{3}{c|}{LDB} & \multicolumn{3}{c|}{LDP} & \multicolumn{3}{c}{RA} \bigstrut\\ \cline{2-10} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c}{V} \bigstrut\\ \hline \hline \multicolumn{1}{c|}{Jia et al. \cite{jia2019content}} & \multicolumn{1}{r|}{\textbf{-6.0\%}} & \multicolumn{1}{r|}{-2.9\%} & \multicolumn{1}{r|}{-3.5\%} & \multicolumn{1}{r|}{-4.7\%} & \multicolumn{1}{r|}{-1.0\%} & \multicolumn{1}{r|}{-1.2\%} & \multicolumn{1}{r|}{\textbf{-6.0\%}} & \multicolumn{1}{r|}{-3.2\%} & -3.8\% \bigstrut\\ \hline \multicolumn{1}{c|}{Our network + RM} & \multicolumn{1}{r|}{-4.5\%} & \multicolumn{1}{r|}{\textbf{-5.1\%}} & \multicolumn{1}{r|}{\textbf{-5.4\%}} & \multicolumn{1}{r|}{\textbf{-5.4\%}} & \multicolumn{1}{r|}{\textbf{-6.6\%}} & \multicolumn{1}{r|}{\textbf{-6.4\%}} & \multicolumn{1}{r|}{-5.7\%} & \multicolumn{1}{r|}{\textbf{-6.1\%}} & \textbf{-6.6\%} \bigstrut\\ \hline \multicolumn{1}{c|}{Our network + Frame control\cite{IFCNN} } & \multicolumn{1}{r|}{-3.7\%} & \multicolumn{1}{r|}{-3.3\%} & \multicolumn{1}{r|}{-3.2\%} & \multicolumn{1}{r|}{-4.4\%} & \multicolumn{1}{r|}{-4.6\%} & \multicolumn{1}{r|}{-3.9\%} & \multicolumn{1}{r|}{-4.6\%} & \multicolumn{1}{r|}{-4.8\%} & -5.1\% \bigstrut\\ \hline \multicolumn{1}{c|}{Our network + CTU control \cite{STResNet}} & \multicolumn{1}{r|}{-4.1\%} & \multicolumn{1}{r|}{-4.4\%} & \multicolumn{1}{r|}{-4.9\%} & \multicolumn{1}{r|}{-4.6\%} & \multicolumn{1}{r|}{-5.8\%} & \multicolumn{1}{r|}{-5.9\%} & \multicolumn{1}{r|}{-4.5\%} & \multicolumn{1}{r|}{-5.1\%} & -5.8\% \bigstrut\\ \Xhline{1.0pt} \end{tabular}% \label{pbFilter}% \end{table*}% \subsubsection{Objective Evaluation} In this subsection, an objective evaluation of the proposed model is conducted. The experimental results compared with the HM-16.16 anchor are shown in Table \ref{results}. For the luminance component, the proposed model achieves 6.3\%, 4.5\%, 5.4\%, and 5.7\% BD-rate reductions compared with the HEVC baseline under the AI, LDB, LDP, and RA configurations, respectively. For the chrominance components, the proposed model achieves even larger BD-rate reductions than for the luminance component. This demonstrates the generalization ability of the proposed model, because only the luminance component of intra samples is used for training. Furthermore, comparisons with the previous works \cite{jia2019content, VRCNN} are conducted, and the BD-rate reductions are shown in Table \ref{comparsion}. It can be seen that our model achieves a larger BD-rate reduction for the AI configuration. For the performance evaluation of the inter configurations, we compare our proposed model with frame-level control \cite{IFCNN}, CTU-level control \cite{STResNet}, and Jia \textit{et al.} \cite{jia2019content}, as shown in Table \ref{pbFilter}. For a fair comparison, we select same padding and use the proposed model to test the different control methods.
From the experimental results, it can be seen that our proposed model achieves about 1\% extra BD-rate reduction over both CTU-level and frame-level control for all inter configurations. Compared with Jia \textit{et al.} \cite{jia2019content}, our model achieves a comparable BD-rate reduction in the inter configurations. For the chrominance components, our model achieves about 3\% extra BD-rate reduction, which further demonstrates its generalization ability. \begin{figure*}[tbp] \centering \subfigure[HM Rec.]{ \begin{minipage}[t]{0.325\linewidth} \centering \includegraphics[width=5.6cm]{comp_intra_rec.png} \end{minipage}% }% \subfigure[Jia \textit{et al.} \cite{jia2019content}]{ \begin{minipage}[t]{0.325\linewidth} \centering \includegraphics[width=5.6cm]{comp_intra_cann.png} \end{minipage}% }% \subfigure[Proposed model]{ \begin{minipage}[t]{0.325\linewidth} \centering \includegraphics[width=5.6cm]{comp_intra_dsc.png} \end{minipage}% }% \centering \caption{Visual quality comparison of Jia \textit{et al.} \cite{jia2019content} and the proposed model for the AI configuration. The test QP is 37, and this is the 1st frame of FourPeople (anchor HM-16.9). }\label{subjective1} \end{figure*} \begin{figure*}[tbp] \centering \subfigure[HM Rec.]{ \begin{minipage}[t]{0.325\linewidth} \centering \includegraphics[width=5.6cm]{comp_inter_rec.png} \end{minipage}% }% \subfigure[CTU-level control \cite{dai2018cnn}]{ \begin{minipage}[t]{0.325\linewidth} \centering \includegraphics[width=5.6cm]{comp_inter_ctu.png} \end{minipage}% }% \subfigure[Proposed model]{ \begin{minipage}[t]{0.325\linewidth} \centering \includegraphics[width=5.6cm]{comp_inter_wiener.png} \end{minipage}% }% \centering \caption{Visual quality comparison of CTU-level control \cite{dai2018cnn} and the proposed model for the RA configuration. The test QP is 37, and this is the 16th frame of RaceHorses (anchor HM-16.16).}\label{subjective2} \end{figure*} \subsubsection{Subjective Evaluation} We also conduct a subjective evaluation, as shown in Fig. \ref{subjective1} and Fig. \ref{subjective2}. It can be seen from the results that our model has a strong de-artifact capability. First, we re-deploy the proposed model in HM-16.9 for a fair subjective comparison with Jia \textit{et al.} \cite{jia2019content}. From Fig. \ref{subjective1}, it can be found that the various kinds of artifacts in (a) are eliminated by the proposed model, and the man's face looks smoother and plumper. At the same time, some vertical blocky artifacts are produced by Jia \textit{et al.} \cite{jia2019content}, probably because it uses different filters for consecutive CTUs, while our proposed model uses the same filter for the whole image and introduces no additional boundaries. Besides, the man's eyes seem to be blurred by \cite{jia2019content}, leading to a degradation of visual quality. Second, the subjective evaluation for the inter frames is conducted in Fig. \ref{subjective2}. The default HM and HM with CTU-level control \cite{dai2018cnn} are used as the anchors. As shown in Fig. \ref{subjective2}, the contouring and blocky artifacts on the number are eliminated by the proposed model. For the CTU-level-control-based filtering \cite{dai2018cnn}, the subjective quality of this frame is reduced by the artificial boundaries on the knee, whereas our proposed model produces no such boundaries and achieves better visual quality.
To sum up, because our proposed method makes full use of the frame-level filtering strategy, it achieves significantly better visual effects than the previous CTU-based methods. \begin{table*}[tbp] \centering \caption{BD-rate Reduction and Computational Complexity (GPU) of the Proposed Method over the VTM-6.3 Anchor} \begin{tabular}{c|c|r|r|r|r|r|r|r|r|r|r} \Xhline{1.0pt} \multicolumn{2}{c|}{\multirow{2}[4]{*}{Sequences}} & \multicolumn{5}{c|}{AI} & \multicolumn{5}{c}{RA} \bigstrut\\ \cline{3-12} \multicolumn{2}{c|}{} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{$\Delta T_{enc}$} & \multicolumn{1}{c|}{$\Delta T_{dec}$} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{$\Delta T_{enc}$} & \multicolumn{1}{c}{$\Delta T_{dec}$} \bigstrut\\ \hline \hline \multirow{2}[4]{*}{ClassA} & \multicolumn{1}{l|}{Traffic} & -1.6\% & -0.2\% & -0.4\% & 100.7\% & 234.1\% & -1.1\% & -0.7\% & -0.5\% & 99.6\% & 357.0\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{PeopleOnStreet} & -1.3\% & -0.4\% & -0.3\% & 98.0\% & 225.4\% & -0.9\% & -0.1\% & -0.2\% & 99.7\% & 266.3\% \bigstrut\\ \hline \multirow{5}[10]{*}{ClassB} & \multicolumn{1}{l|}{Kimono} & -0.3\% & 0.1\% & -0.3\% & 104.9\% & 317.5\% & -0.2\% & 0.0\% & -0.4\% & 99.8\% & 319.3\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{ParkScene} & -1.9\% & 0.1\% & -0.1\% & 108.3\% & 232.4\% & -1.4\% & 0.3\% & -0.2\% & 101.2\% & 302.1\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{Cactus} & -1.3\% & -0.5\% & -0.8\% & 100.4\% & 244.5\% & -1.4\% & -1.8\% & -1.7\% & 103.1\% & 343.9\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{BasketballDrive} & -0.3\% & -0.8\% & -1.0\% & 103.7\% & 282.7\% & -0.4\% & -0.8\% & -0.5\% & 100.5\% & 321.0\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{BQTerrace} & -1.0\% & -0.6\% & -0.6\% & 101.6\% & 228.4\% & -1.9\% & -1.6\% & -1.5\% & 101.6\% & 313.0\% \bigstrut\\ \hline \multirow{4}[8]{*}{ClassC} & \multicolumn{1}{l|}{BasketballDrill} & -2.7\% & -3.8\% & -5.5\% & 101.6\% & 219.2\% & -1.6\% & -3.4\% & -2.9\% & 102.9\% & 270.8\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{BQMall} & -2.2\% & -0.8\% & -0.7\% & 100.4\% & 220.6\% & -2.0\% & -1.1\% & -0.5\% & 104.6\% & 285.1\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{PartyScene} & -1.8\% & -1.1\% & -1.5\% & 100.5\% & 198.7\% & -1.3\% & -1.6\% & -1.8\% & 101.7\% & 242.7\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{RaceHorses} & -0.9\% & -1.1\% & -2.3\% & 101.6\% & 233.5\% & -1.1\% & -1.5\% & -2.5\% & 99.5\% & 243.8\% \bigstrut\\ \hline \multirow{4}[8]{*}{ClassD} & \multicolumn{1}{l|}{BasketballPass} & -2.1\% & -1.4\% & -4.7\% & 99.4\% & 407.5\% & -1.2\% & -2.4\% & -1.0\% & 98.0\% & 406.1\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{BQSquare} & -3.0\% & -0.2\% & -1.0\% & 103.0\% & 319.1\% & -3.6\% & -1.0\% & -1.6\% & 105.6\% & 421.0\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{BlowingBubbles} & -2.1\% & -1.4\% & -1.0\% & 101.1\% & 352.0\% & -1.6\% & -2.3\% & -2.4\% & 101.0\% & 371.1\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{RaceHorses} & -2.8\% & -2.7\% & -4.6\% & 99.6\% & 366.6\% & -2.4\% & -3.1\% & -6.5\% & 98.6\% & 312.1\% \bigstrut\\ \hline \multirow{6}[12]{*}{ClassE} & \multicolumn{1}{l|}{Vidyo1} & -1.3\% & -0.1\% & -0.3\% & 101.5\% & 340.3\% & -1.0\% & -0.1\% & 0.4\% & 102.6\% & 486.6\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{Vidyo3} & -1.1\% & 0.2\% & -0.2\% & 101.8\% & 298.8\% & -1.2\% & 1.5\% & 0.9\% & 102.0\% & 457.4\% \bigstrut\\ \cline{2-12}
& \multicolumn{1}{l|}{Vidyo4} & -0.8\% & -0.3\% & -0.2\% & 106.4\% & 291.0\% & -1.0\% & 0.3\% & -1.4\% & 101.3\% & 425.5\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{FourPeople} & -2.1\% & -0.5\% & -0.5\% & 99.2\% & 263.3\% & -1.8\% & -0.8\% & -0.7\% & 106.2\% & 449.3\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{Johnny} & -1.3\% & -0.4\% & -0.6\% & 99.9\% & 317.9\% & -2.6\% & -1.2\% & -1.1\% & 100.1\% & 452.4\% \bigstrut\\ \cline{2-12} & \multicolumn{1}{l|}{KristenAndSara} & -1.7\% & -0.6\% & -0.7\% & 101.1\% & 344.5\% & -1.6\% & -0.4\% & -1.4\% & 100.2\% & 428.0\% \bigstrut\\ \hline \multicolumn{2}{c|}{Average} & \textbf{-1.6\%} & \textbf{-0.8\%} & \textbf{-1.3\%} & \textbf{101.6\%} & \textbf{282.8\%} & \textbf{-1.5\%} & \textbf{-1.0\%} & \textbf{-1.3\%} & \textbf{101.4\%} & \textbf{355.9\%} \bigstrut\\ \Xhline{1.0pt} \end{tabular}% \label{vtmtest}% \end{table*}% \subsubsection{Complexity Analysis} As shown in Table \ref{comparsion}, we compare the complexity of Jia \textit{et al.} \cite{jia2019content}, VR-CNN \cite{VRCNN}, and our proposed model from two aspects: computational complexity and storage consumption. Firstly, for the coding complexity evaluation, we use the following equation to calculate $\Delta T$: \begin{equation}\label{delta} \Delta T=\frac{T'}{T} \end{equation} where $T'$ and $T$ denote the HM coding time with and without the learning-based filter, respectively. The FLOPs in Table \ref{comparsion} are measured for a frame with 720p resolution. Compared with VR-CNN \cite{VRCNN}, the FLOPs of our model are reduced by 79.1\%. The decoding complexity is reduced by approximately 50\%, and the encoding complexity is reduced by 4\%. The processing time of the proposed model itself is almost the same for the encoder and the decoder; the difference in the relative time arises because the network inference time accounts for a small proportion of the encoding complexity but a comparatively large proportion of the decoding complexity. In terms of storage consumption, compared with \cite{VRCNN}, the number of trainable parameters in the proposed model is reduced by 79.6\%. This is almost the same as the reduction in model size, because we use the same precision (float32) to save the models. The main reason why our model has relatively few parameters is that its design focuses more on complexity than on performance. For example, we use the DSC as the backbone of the proposed model, whereas previous works \cite{VRCNN,jia2019content} utilize the standard convolution. Meanwhile, we also use several techniques to limit the model size while maintaining the performance, including the BN merge and the special initialization of the parameters. What is more, our proposed model needs only one learning-based network for both intra and inter frames, so no additional models are needed in practical applications. Compared with previous works that need multiple models or classifiers, our proposed method effectively reduces the required storage, benefiting from the RM module. \subsection{Experiment on H.266/VVC} To further evaluate the performance of our proposed model, we use the same test conditions to test it in VTM-6.3. The only difference is that we use the entire DIV2K dataset instead of the down-sampled one to train the proposed model. From the experimental results shown in Table \ref{vtmtest}, it can be found that our model achieves about 1.6\% and 1.5\% BD-rate reductions on the luminance component for the AI and RA configurations, respectively.
For the chrominance components, it achieves similar BD-rate reductions. In terms of complexity, the proposed method introduces a negligible increase on the encoding side and roughly triples the complexity on the decoding side. \begin{table}[tbp] \centering \caption{Ablation Study of RM (AI, VTM-6.3)} \begin{tabular}{c|r|r|r|r|r|r} \Xhline{1.0pt} \multirow{2}[4]{*}{Sequences} & \multicolumn{3}{c|}{Our network} & \multicolumn{3}{c}{Our network+RM} \bigstrut\\ \cline{2-7} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c|}{V} & \multicolumn{1}{c|}{Y} & \multicolumn{1}{c|}{U} & \multicolumn{1}{c}{V} \bigstrut\\ \hline \hline ClassA & -0.5\% & -0.1\% & 0.4\% & -1.5\% & -0.3\% & -0.4\% \bigstrut\\ \hline ClassB & 0.3\% & 1.3\% & 0.1\% & -1.0\% & -0.3\% & -0.5\% \bigstrut\\ \hline ClassC & -1.6\% & -1.8\% & -2.8\% & -1.9\% & -1.7\% & -2.5\% \bigstrut\\ \hline ClassD & -2.6\% & -2.2\% & -3.6\% & -2.5\% & -1.4\% & -2.8\% \bigstrut\\ \hline ClassE & -0.3\% & 2.6\% & 1.5\% & -1.4\% & -0.3\% & -0.4\% \bigstrut\\ \hline Average & -0.9\% & 0.3\% & -0.7\% & \textbf{-1.6\%} & \textbf{-0.8\%} & \textbf{-1.3\%} \bigstrut\\ \Xhline{1.0pt} \end{tabular}% \label{ablation1}% \end{table}% \begin{table}[tbp] \centering \caption{Ablation Study of Parameter Initialization (AI, VTM-6.3)}\label{ablation2} \begin{tabular}{c|c|c|c} \Xhline{1.0pt} \multirow{2}[4]{*}{Methods} & \multicolumn{3}{c}{$\Delta$PSNR(dB)} \bigstrut\\ \cline{2-4} & Y &U&V \bigstrut \\ \hline \hline Student &0.310&0.231&0.295 \bigstrut \\ \hline Student + MMD \cite{huang2017like}&0.320&0.245&0.313 \bigstrut \\ \hline Student + AT \cite{zagoruyko2016paying}& \textbf{0.329}&\textbf{0.256}&\textbf{0.328} \bigstrut \\ \Xhline{1.0pt} \end{tabular} \end{table} \subsection{Ablation Study} \label{rm_intra_ab} \subsubsection{RM for intra frames} RM can effectively improve the generalization ability of learning-based filters. The experiments on RM for inter frames were carried out in Section \ref{hmtest}. Here, based on VTM, we further conduct ablation experiments on intra frames to illustrate the performance of RM; the test setting is the same as before. From the experiment shown in Table \ref{ablation1}, we can find that about 0.8\% BD-rate reduction is contributed by the RM module. Regarding the performance on class B, using only the proposed CNN filter can even have a negative effect, leading to a 0.3\% BD-rate increase, but its performance is much improved after using RM. For most of the other classes, the performance is also improved to some extent by RM. \subsubsection{The initialization of parameters} The first frame of every HEVC test sequence is tested, and the overall PSNR increments are shown in Table \ref{ablation2}, where the student model without transfer learning is indicated by the ``Student'' row. MMD and AT in Table \ref{ablation2} represent the different transfer-learning methods applied to the student model. By comparing the ``Student'' row with the other rows, we can find that the PSNR of the student model is improved by both MMD and AT. What is more, the improvements on the chrominance components are more pronounced than that on the luminance component. \section{Conclusion} In this paper, a low-complexity CNN-based filter is proposed for video coding. The lightweight DSC merged with batch normalization is used as the backbone. Based on transfer learning, attention transfer is utilized to initialize the parameters of the proposed network.
By adding the novel parametric module RM after the CNN filter, the generality of the CNN filter is improved, and it can also handle the filtering of inter frames. What is more, RM is independent of the proposed network and can also be combined with other learning-based filters to alleviate the over-smoothing problem. The experimental results show that our proposed model achieves excellent performance in terms of both BD-rate and complexity. For the HEVC test sequences, our proposed model achieves about 1.2\% additional BD-rate reduction with 79.1\% fewer FLOPs compared with the VR-CNN anchor. Compared with Jia \textit{et al.} \cite{jia2019content}, our model achieves a comparable BD-rate reduction with much lower complexity. Finally, the experiments on H.266/VVC and the ablation studies further demonstrate the effectiveness of the model. Our future work aims at further performance improvement of the learning-based filter in video coding. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Intersecting D-branes provide an attractive, bottom-up route to standard-like model building \cite{Lust:2004ks}. In these models one starts with two stacks, $a$ and $b$ with $N_a=3$ and $N_b=2$, of D6-branes wrapping the three large spatial dimensions plus 3-cycles of the six-dimensional internal space (typically a torus $T^6$ or a Calabi-Yau 3-fold) on which the theory is compactified. These generate the gauge group $U(3) \times U(2) \supset SU(3) _c \times SU(2)_L$, and the non-abelian component of the standard model gauge group is immediately assured. Further, (four-dimensional) fermions in bifundamental representations $({\bf N} _a, \overline{\bf N}_b)= ({\bf 3}, \overline{\bf 2})$ of the gauge group can arise at the multiple intersections of the two stacks. These are precisely the representations needed for the quark doublets $Q_L$ of the Standard Model. In general, intersecting branes yield a non-supersymmetric spectrum, so that, to avoid the hierarchy problem, the string scale associated with such models must be low, no more than a few TeV. Then, the high energy (Planck) scale associated with gravitation does not emerge naturally. Nevertheless, it seems that these problems can be surmounted \cite{Blumenhagen:2002vp,Uranga:2002pg}, and indeed an attractive model having just the spectrum of the standard model has been constructed \cite{Ibanez:2001nd}. It uses D6-branes that wrap 3-cycles of an orientifold $T^6/\Omega$, where $\Omega$ is the world-sheet parity operator. The advantage and, indeed, the necessity of using an orientifold stems from the fact that for every stack $a,b, ...$ there is an orientifold image $a',b', ...$. At intersections of $a$ and $b$ there are chiral fermions in the $({\bf 3}, \overline{\bf 2})$ representation of $U(3) \times U(2)$, where the ${\bf 3}$ has charge $Q_a=+1$ with respect to the $U(1)_a$ in $U(3)=SU(3)_c \times U(1)_a$, and the $\overline{\bf 2}$ has charge $Q_b=-1$ with respect to the $U(1)_b$ in $U(2)=SU(2)_L \times U(1)_b$. However, at intersections of $a$ and $b'$ there are chiral fermions in the $({\bf 3},{\bf 2})$ representation, where the ${\bf 2}$ has $U(1)_b$ charge $Q_b=+1$. In general, besides gauge bosons, stacks of D-branes on orientifolds also have chiral matter in the symmetric $\mathbf{S}$ and antisymmetric $\mathbf{A}$ representations of the relevant gauge group; both have charge $Q=2$ with respect to the relevant $U(1)$. For the stack $a$ with $N_a=3$, $\mathbf{S}_a=\mathbf{6}$ and $\mathbf{A}_a=\overline{\mathbf{3}}$. The former must be excluded on phenomenological grounds, but the latter could be quark-singlet states $q^c_L$. Similarly, for the stack $b$ with $N_b=2$, $\mathbf{S}_b=\mathbf{3}$ and $\mathbf{A}_b=\mathbf{1}$. Again, the former must be excluded on phenomenological grounds, but the latter could be lepton-singlet states $\ell^c_L$. Suppose that the number of intersections $a \cap b$ of the stack $a$ with $b$ is $p$, the number of intersections $a \cap b'$ of the stack $a$ with $b'$ is $q$, and the number of copies of $\mathbf{A}_a=\overline{\mathbf{3}}$ is $r$. The standard model has 3 quark doublets $Q_L$, so that to get just the standard-model spectrum we must have $p+q=3$. The standard model also has a total of 6 quark-singlet states. To get just the standard model spectrum we also require that $6-r$ of the quark singlets arise from intersections of $a$ with other stacks $c,d,...$ having just a single D6-brane. 
These belong to the representation $({\bf 1}, \overline{\bf 3})$ of $U(1) \times U(3)$ and each has charge $Q_a=-1$. Ramond-Ramond (RR) tadpole cancellation requires that overall $Q_a$ sums to zero. Thus \beq 2p+2q+2r-(6-r)=0 \eeq Hence $r=0$ and we must also exclude the representations $\mathbf{A}_a=\overline{\mathbf{3}}$. Tadpole cancellation also requires that $Q_b$ sums to zero overall. To get just the standard model spectrum we require that there are 3 lepton doublets $L$ arising from intersections of $b$ with other stacks having just a single D6-brane. All have $Q_b=+1$ or $Q_b=-1$. Suppose the number of copies of $\mathbf{A}_b=\mathbf{3}$ is $s$. Then overall cancellation of $Q_b$ requires that \beq -3p+3q +2s \pm 3=0 \eeq Hence $s=0 \bmod 3$. In the case that $s=0$ the solutions are $(p,q)=(1,2)$ or $(2,1)$, whereas when $s=\pm 3$ the solutions $(p,q)=(3,0)$ or $(0,3)$ are also allowed \cite{Blumenhagen:2001te}. (Models with $|s|>3$ will obviously have non-standard model spectra.) However, states arising as the antisymmetric representation of $U(2)$ do not have the standard-model Yukawa couplings to the Higgs multiplet. Consequently we are only interested in models such as that in \cite{Ibanez:2001nd} with $(a \cap b ,a \cap b')=(1,2)$ or $(2,1)$. Despite the attractiveness of that model, there remain serious problems in the absence of supersymmetry. A generic feature of intersecting brane models is that flavour changing neutral currents are generated by four-fermion operators induced by string instantons \cite{Abel:2003yh}. The severe experimental limits on these processes require that the string scale is rather high, of order $10^4$ TeV. This makes the fine-tuning problem very severe, and the viability of such models highly questionable. Further, in non-supersymmetric theories, such as these, the cancellation of RR tadpoles does not ensure Neveu Schwarz-Neveu Schwarz (NSNS) tadpole cancellation. NSNS tadpoles are simply the first derivative of the scalar potential with respect to the scalar fields, specifically the complex structure and K\"ahler moduli and the dilaton. A non-vanishing derivative of the scalar potential signifies that such scalar fields are not even solutions of the equations of motion. Thus a particular consequence of the non-cancellation is that the complex structure moduli are unstable \cite{Blumenhagen:2001mb}. One way to stabilise these moduli is for the D6-branes to wrap 3-cycles of an orbifold $T^6/P$, where $P$ is a point group, rather than a torus $T^6$. The FCNC problem can be solved and the complex structure moduli stabilised when the theory is supersymmetric. First, a supersymmetric theory is not obliged to have the low string scale that led to problematic FCNCs induced by string instantons. Second, in a supersymmetric theory, RR tadpole cancellation ensures cancellation of the NSNS tadpoles \cite{Cvetic:2001tj,Cvetic:2001nr}. An orientifold is then constructed by quotienting the orbifold with the world-sheet parity operator $\Omega$. (As explained above, an orientifold is necessary to allow the possibility of obtaining just the spectrum of the supersymmetric standard model.) Several attempts have been made to construct the MSSM \cite{Blumenhagen:2002gw, Honecker:2003vq, Honecker:2004np, Honecker:2004kb} using an orientifold with point group $P=\mathbf{Z}_4$, $\mathbf{Z}_4 \times \mathbf{Z}_2$ or $\mathbf{Z}_6$.
The most successful attempt to date is the last of these \cite{Honecker:2004kb, Ott:2005sa}, which uses D6-branes intersecting on a $\mathbf{Z}_6$ orientifold to construct an $\mathcal{N}=1$ supersymmetric standard-like model using 5 stacks of branes. We shall not discuss this beautiful model in any detail except to note that the intersection numbers for the stacks $a$, which generates the $SU(3)_c$ group, and $b$, which generates the $SU(2)_L$ group, are $(a \cap b,a \cap b')=(0,3)$. In this case it is impossible to obtain lepton singlet states $\ell ^c_L$ as antisymmetric representations of $U(2)$. Further, it was shown, quite generally, that it is impossible to find stacks $a$ and $b$ such that $(a \cap b,a \cap b')=(2,1)$ or $(1,2)$. Thus, as explained above, it is impossible to obtain exactly the spectrum of the (supersymmetric) standard model. The question then arises as to whether the use of a different orientifold could circumvent this problem. Here we address this question for the $\mathbf{Z}_6'$ orientifold. We do not attempt to construct a standard(-like) MSSM. Instead, we merely see whether there are any stacks $a,b$ that simultaneously satisfy the supersymmetry constraints, have no chiral matter in symmetric representations of the gauge groups (see below), do not have too much chiral matter in antisymmetric representations of the gauge groups, and have $(a \cap b,a \cap b')=(2,1)$ or $(1,2)$. Further details of this work may be found in reference \cite{Bailin:2006zf}. \section{$\mathbf{Z}_6'$ orientifold} We assume that the torus $T^6$ factorises into three 2-tori $T^2_1 \times T^2_2 \times T^2_3$. The 2-tori $T^2_k \ (k=1,2,3)$ are parametrised by complex coordinates $z_k$. The action of the generator $\theta$ of the point group $\mathbf{Z}_6'$ on the coordinates $z_k$ is given by \beq \theta z_k = e^{2\pi i v_k} z_k \eeq where \beq (v_1,v_2,v_3) = \frac{1}{6} (1,2,-3) \label{z61vk} \eeq The point group action must be an automorphism of the lattice, so in $T^2_{1,2}$ we may take an $SU(3)$ lattice. Specifically, we define the basis 1-cycles by $\pi _1$ and $\pi _2 \equiv e^{i\pi /3} \pi _1$ in $T^2_1$, and $\pi_3$ and $\pi _4 \equiv e^{i\pi /3} \pi _3$ in $T^2_2$. Thus the complex structure of these tori is given by $U_1=e^{i\pi /3}=U_2$. The orientation of $\pi _{1,3}$ relative to the real and imaginary axes of $z_{1,2}$ is arbitrary. Since $\theta$ acts as a reflection in $T^2_3$, the lattice, with basis 1-cycles $\pi _5$ and $\pi _6$, is arbitrary. The point group action on the basis 1-cycles is then \bea \theta \pi _1 = \pi _2 \quad &{\rm and}& \quad \theta \pi _2 =\pi _2-\pi _1 \label{theta12} \\ \theta \pi _3 =\pi _4 -\pi _3 \quad &{\rm and}& \quad \theta \pi _4 =-\pi _3 \label{theta34} \\ \theta \pi _5=-\pi _5 \quad &{\rm and}& \quad \theta \pi _6 =-\pi _6 \label{thetaZ6} \eea We consider ``bulk'' 3-cycles of $T^6$ which are linear combinations of the 8 3-cycles $\pi_{i,j,k} \equiv \pi _i \otimes \pi _j \otimes \pi _k$ where $i=1,2, \ j=3,4, \ k=5,6$. The basis of 3-cycles that are {\em invariant} under the action of $\theta$ contains 4 elements $\rho _{1,3,4,6}$, where \bea \rho _1 &=& 2(\pi _{1,3,5} + \pi _{2,3,5} +\pi _{1,4,5} - 2 \pi_{2,4,5}) \label{rho1} \\ \rho _3 &=& 2(-2\pi _{1,3,5} + \pi _{2,3,5} +\pi _{1,4,5} + \pi_{2,4,5}) \label{rho3} \eea and similarly for $\rho _{4,6}$, replacing $\pi _5$ by $\pi _6$ in $\rho_{1,3}$ respectively.
Then the general $\mathbf{Z}_6'$-invariant bulk 3-cycle with (co-prime) wrapping numbers $(n_k,m_k)$ of the cycles $(\pi _{2k-1},\pi _{2k})$ on $T^2_k$ is \beq \Pi _a=A_1\rho _1+ A_3 \rho_3 +A_4\rho _4+ A_6 \rho_6 \label{genbulk} \eeq where \bea A_1= (n_1n_2+n_1m_2+ m_1n_2)n_3 \label{A1}\\ A_3= (m_1m_2+n_1m_2+ m_1n_2)n_3 \\ A_4= (n_1n_2+n_1m_2+ m_1n_2)m_3 \\ A_6= (m_1m_2+n_1m_2+ m_1n_2)m_3 \label{A6} \eea are the ``bulk coefficients''. If $\Pi _a$ has wrapping numbers $(n^a_k, m^a_k)$ $(k=1,2,3)$, and $\Pi _b$ has wrapping numbers $(n^b_k, m^b_k)$, then, in an obvious notation, the intersection number of the orbifold-invariant 3-cycles is \bea \Pi _a \cap \Pi _b = -4(A^a_1A^b_4-A^a_4A^b_1)&+& 2(A^a_1A^b_6-A^a_6A^b_1) +2(A^a_3A^b_4-A^a_4A^b_3) - \nonumber \\ &-&4(A^a_3A^b_6-A^a_6A^b_3) \label{pia0pib} \eea which is always even. Besides these (untwisted) 3-cycles, there are also exceptional 3-cycles associated with (some of) the twisted sectors of the orbifold. They arise in twisted sectors in which there is a fixed torus, and consist of a collapsed 2-cycle at a fixed point times a 1-cycle in the invariant plane. We shall only be concerned with those that arise in the $\theta ^3$ sector, which has $T^2_2$ as the invariant plane. There is a $\mathbf{Z}_2$ symmetry acting in $T^2_1$ and $T^2_3$, and this has sixteen fixed points $f_{i,j}$, where $i,j=1,4,5,6$. There are then 32 independent exceptional cycles given by $f_{i,j} \otimes \pi _{3,4}$, from which 8 independent $\mathbf{Z}_6'$-invariant combinations may be formed. They are \bea \epsilon _j\equiv (f_{6,j}-f_{4,j}) \otimes \pi _3+ (f_{4,j}-f_{5,j}) \otimes \pi _4 \label{epsj}\\ \tilde{\epsilon} _j\equiv(f_{4,j}-f_{5,j}) \otimes \pi _3+ (f_{5,j}-f_{6,j}) \otimes \pi _4 \label{epstilj} \eea The non-zero intersection numbers for the invariant combinations are given by \beq \epsilon _j \cap \tilde{\epsilon} _k=-2 \delta _{jk} \label{eps0jk} \eeq and again these are always even. The relation between the fixed points $f_{i,j}$ and the invariant exceptional cycles is given in Table \ref{FPex}. \begin{table} \begin{tabular}{cc} \hline \tablehead{1}{c}{b}{Fixed point $\otimes$ 1-cycle} & \tablehead{1}{c}{b}{Invariant exceptional 3-cycle} \\ \hline $f_{1,j} \otimes (n_2 \pi _3 +m_2 \pi _4)$ & 0 \\ $f_{4,j} \otimes (n_2 \pi _3 +m_2 \pi _4)$ & $m_2 \epsilon _j + (n_2+m_2) \tilde{\epsilon}_j$ \\ $f_{5,j} \otimes (n_2 \pi _3 +m_2 \pi _4)$ & $-(n_2+m_2) \epsilon _j-n_2 \tilde{\epsilon}_j $ \\ $f_{6,j} \otimes (n_2 \pi _3 +m_2 \pi _4)$ & $n_2 \epsilon _j-m_2 \tilde{\epsilon}_j $ \\ \hline \end{tabular} \caption{ \label{FPex} Relation between fixed points and exceptional 3-cycles.} \end{table}
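To make the bulk algebra concrete, the following minimal sketch evaluates the bulk coefficients (\ref{A1})--(\ref{A6}) and the intersection number (\ref{pia0pib}); the wrapping numbers are illustrative choices, not stacks taken from any model discussed here:
\begin{verbatim}
def bulk_coeffs(n1, m1, n2, m2, n3, m3):
    # (A1, A3, A4, A6) of eqs. (A1)-(A6)
    u = n1 * n2 + n1 * m2 + m1 * n2
    v = m1 * m2 + n1 * m2 + m1 * n2
    return (u * n3, v * n3, u * m3, v * m3)

def bulk_intersection(A, B):
    # eq. (pia0pib) for two invariant bulk 3-cycles
    A1, A3, A4, A6 = A
    B1, B3, B4, B6 = B
    return (-4 * (A1 * B4 - A4 * B1) + 2 * (A1 * B6 - A6 * B1)
            + 2 * (A3 * B4 - A4 * B3) - 4 * (A3 * B6 - A6 * B3))

Pa = bulk_coeffs(1, 0, 1, 0, 1, 0)   # (1,0)(1,0)(1,0) -> (1, 0, 0, 0)
Pb = bulk_coeffs(0, 1, 0, 1, 1, 1)   # (0,1)(0,1)(1,1) -> (0, 1, 0, 1)
print(bulk_intersection(Pa, Pb))     # 2: even, as noted in the text
\end{verbatim}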
The difference is that in {\bf A} the 1-cycle $\pi _6$ is aligned along the Im $z_3$ axis, whereas in {\bf B} it is inclined such that its real part is one half that of $\pi _5$. In both cases the imaginary part is arbitrary, and so therefore is the imaginary part of the complex structure $U_3$ of $T^2_3$. It is then straightforward to determine the action of $\mathcal{R}$ on the bulk 3-cycles $\rho _p\ (p=1,3,4,6)$ and on the exceptional cycles $\epsilon _j$ and $\tilde{\epsilon} _j$. In particular, requiring that a bulk 3-cycle $\Pi _a = \sum _p A_p \rho _p$ be invariant under the action of $\mathcal{R}$ gives 2 constraints on the bulk coefficients $A_p$, so that just 2 of the 4 independent bulk 3-cycles are $\mathcal{R}$-invariant. Which 2 depends upon the lattice. The twist (\ref{z61vk}) ensures that the closed-string sector is supersymmetric. In order to avoid supersymmetry breaking in the open-string sector, the D6-branes must wrap special Lagrangian cycles. Then the stack $ \Pi _a$ with wrapping numbers $(n^a_k,m^a_k) \ (k=1,2,3)$ is supersymmetric if \beq \sum _{k=1}^3 \phi^a _k= 0 \bmod 2\pi \label{sumphik} \eeq where $\phi^a _k$ is the angle that the 1-cycle in $T^2_k$ makes with the Re $z_k$ axis. Defining \beq Z^a \equiv \prod _{k=1}^3\pi_{2k-1}(n^a_k+m^a_kU_k) \equiv X^a+iY^a \eeq where $U_k$ is the complex structure on $T^2_k$, the condition (\ref{sumphik}) that $\Pi_a$ is supersymmetric may be written as \beq X^a>0, \ Y^a=0 \label{XaYa} \eeq (A stack with $Y^a=0$ but $X^a<0$, so that $\sum _k \phi^a _k= \pi \bmod 2\pi$, corresponds to a (supersymmetric) stack of anti-D-branes.) In our case $T^2_{1,2}$ are $SU(3)$ lattices, and $U_1=e^{i\pi /3}=U_2$, as already noted. Thus \beq Z^a= \pi_1\pi_3\pi_5[A^a_1-A^a_3+U_3(A^a_4-A^a_6)+e^{i\pi /3}(A^a_3+A^a_6U_3)] \eeq It is then straightforward to evaluate $X^a$ and $Y^a$ for the different lattices. The results for the cases in which $T^2_3$ is of {\bf B} type are given in Table \ref{susy3cycle}. \begin{table} \begin{tabular}{ccc} \hline \tablehead{1}{c}{b}{Lattice} & \tablehead{1}{c}{b}{ $X^a$} & \tablehead{1}{c}{b}{$Y^a$} \\ \hline {\bf AAB}&$ 2A^a_1-A^a_3+A^a_4-\frac{1}{2}A^a_6-A^a_6\sqrt{3}{\rm Im} \ U_3 $ & $\sqrt{3}(A^a_3+\frac{1}{2}A^a_6)+(2A^a_4-A^a_6){\rm Im} \ U_3 $ \\ {\bf ABB} and {\bf BAB} &$\sqrt{3}(A^a_1+\frac{1}{2}A^a_4)+(A^a_4-2A^a_6){\rm Im} \ U_3 $& $2A^a_3-A^a_1+A^a_6-\frac{1}{2}A^a_4+A^a_4\sqrt{3}{\rm Im} \ U_3$ \\ {\bf BBB} &$(A^a_3+A^a_1+\frac{1}{2}A^a_6+\frac{1}{2}A^a_4)+$& $\sqrt{3}(A^a_3-A^a_1+\frac{1}{2}A^a_6-\frac{1}{2}A^a_4)$ \\ &$+(A^a_4-A^a_6)\sqrt{3}{\rm Im} \ U_3 $ & $+(A^a_4+A^a_6){\rm Im} \ U_3$ \\ \hline \end{tabular} \caption{ \label{susy3cycle} The functions $X^a$ and $Y^a$. (An overall positive factor is omitted.) A stack $a$ of D6-branes is supersymmetric if $X^a>0$ and $Y^a=0$.} \end{table} The (single) requirement that $Y^a=0$ means that 3 independent combinations of the 4 invariant bulk 3-cycles may be chosen to be supersymmetric. Of these, 2 are the $\mathcal{R}$-invariant combinations. However, unlike in the case of the $\mathbf{Z}_6$ orientifold, in this case there is a third, independent, supersymmetric bulk 3-cycle that is {\em not} $\mathcal{R}$-invariant. We noted earlier that the intersection numbers of both the bulk 3-cycles $\rho _p \ (p=1,3,4,6)$ and of the exceptional cycles $\epsilon _j, \tilde{\epsilon} _j \ (j=1,4,5,6)$ are always even. However, in order to get just the (supersymmetric) standard-model spectrum, either $a \cap b$ or $a \cap b'$ must be odd.
It is therefore necessary to use fractional branes of the form \beq a= \frac{1}{2} \Pi _a^{\rm bulk}+ \frac{1}{2} \Pi _a^{\rm ex} \label{pifrac} \eeq where $\Pi _a^{\rm bulk}=\sum _p A_p \rho _p$ is an invariant bulk 3-cycle, associated with wrapping numbers $(n^a_1,m^a_1)(n^a_2,m^a_2)(n^a_3,m^a_3)$, as shown in (\ref{genbulk}). The exceptional branes (in the $\theta ^3$ sector) are associated with the fixed points $f_{i,j}, \ (i,j=1,4,5,6)$ in $T^2_1 \otimes T^2_3$, as shown in (\ref{epsj}) and (\ref{epstilj}). If $\Pi _a^{\rm bulk}$ is a supersymmetric bulk 3-cycle, then the fractional brane $a$, defined in (\ref{pifrac}), preserves supersymmetry provided that the exceptional part $\Pi _a ^{\rm ex}$ arises only from fixed points traversed by the bulk 3-cycle. Since the wrapping numbers $(n^a_1,m^a_1)$ on $T^2_1$ are integers, the 1-cycle on $T^2_1$ traverses either zero fixed points or two. In the latter case we denote the fixed points by $(i^a_1,i^a_2)$. Similarly for the 1-cycle on $T^2_3$, where the two fixed points are denoted by $(j^a_1,j^a_2)$. Thus, supersymmetry requires that the exceptional part $\Pi _a^{\rm ex}$ of $a$ derives from four fixed points, $f_{i^a_1j^a_1},f_{i^a_1j^a_2},f_{i^a_2j^a_1},f_{i^a_2j^a_2}$. The choice of Wilson lines affects the relative signs with which the contributions from the four fixed points are combined to determine $\Pi _a ^{\rm ex}$. The rule is that \beq (i^a_1,i^a_2)(j^a_1,j^a_2) \rightarrow (-1)^{\tau^a _0} \left[ f_{i^a_1j^a_1}+(-1)^{\tau^a _2}f_{i^a_1j^a_2}+(-1)^{\tau^a _1}f_{i^a_2j^a_1} +(-1)^{\tau^a _1+\tau^a _2}f_{i^a_2j^a_2} \right] \eeq where $\tau^a _{0,1,2}=0,1$ with $\tau^a _1 =1$ corresponding to a Wilson line in $T^2_1$ and likewise for $\tau^a _2$ in $T^2_3$. Each fixed point $f_{i^a,j^a}$, with 1-cycle $n^a_2\pi _3+m^a_2\pi _4$, is then associated with the orbifold-invariant exceptional cycle as shown in Table \ref{FPex}. In general, besides the chiral matter in bifundamental representations that occurs at the intersections of brane stacks $a,b,...$, with each other or with their orientifold images $a', b',...$, there is also chiral matter in the symmetric ${\bf S}_a$ and antisymmetric ${\bf A}_a$ representations of the gauge group $U(N_a)$, and likewise for $U(N_b)$. Orientifolding induces topological defects, O6-planes, which are sources of RR charge. The number of multiplets in the ${\bf S}_a$ and ${\bf A}_a$ representations is \bea \#({\bf S}_a)=\frac{1}{2}(a \cap a' -a \cap \Pi _{\rm O6}) \\ \#({\bf A}_a)=\frac{1}{2}(a \cap a' +a \cap \Pi _{\rm O6}) \eea where $\Pi _{\rm O6}$ is the total O6-brane homology class; it is $\mathcal{R}$-invariant. If $a \cap \Pi _{\rm O6}=\frac{1}{2} \Pi _a ^{\rm bulk} \cap \Pi _{\rm O6}\neq 0$, then copies of one or both representations are inevitably present. Since we require supersymmetry, $\Pi _a ^{\rm bulk}$ is necessarily supersymmetric. However, we have observed above that this does not require $\Pi _a ^{\rm bulk}$ to be $\mathcal{R}$-invariant, as $\Pi _{\rm O6}$ is. Thus, unlike the $\mathbf{Z}_6$ case, in this case $a \cap \Pi _{\rm O6}$ is generally non-zero. We noted in the Introduction that we must exclude the appearance of the representations ${\bf S}_a$ and ${\bf S}_b$.
Consequently, we impose the constraints \bea a \cap a'&=& a \cap \Pi _{\rm O6} \label{aa'o6}\\ b \cap b'&=& b \cap \Pi _{\rm O6} \label{bb'o6} \eea We also showed that demanding that the $U(1)$ charges $Q_a$ and $Q_b$ sum to zero overall requires that $\#({\bf A}_a)=0=\#({\bf A}_b)$, at least if we also demand standard-model Yukawa couplings. However, for the moment we proceed more conservatively. With the constraint (\ref{aa'o6}) the number of multiplets in the antisymmetric representation ${\bf A}_a$ is $a \cap \Pi _{\rm O6}$. For the present we require only that \beq |a \cap \Pi _{\rm O6}|\leq 3 \label{aopio6} \eeq since otherwise there would again be non-minimal vector-like quark singlet matter. Similarly, using just (\ref{bb'o6}), we only require that \beq |b \cap \Pi _{\rm O6}|\leq 3 \label{bopio6} \eeq to avoid unwanted vector-like lepton singlets. \section{Results and conclusions} We have shown \cite{Bailin:2006zf} that, unlike the $\mathbf{Z}_6$ orientifold, at least on some lattices, the $\mathbf{Z}_6'$ orientifold {\em can} support supersymmetric stacks $a$ and $b$ of D6-branes with intersection numbers satisfying $(a \cap b,a \cap b')=(2,1)$ or $(1,2)$. Stacks having this property are an indispensable ingredient in any intersecting brane model that has {\em just} the matter content of the (supersymmetric) standard model. By construction, in all of our solutions there is no matter in symmetric representations of the gauge groups on either stack. However, some of the solutions {\em do} have matter, 2 quark singlets $q^c_L$ or 2 lepton singlets $\ell ^c_L$, in the antisymmetric representation of the gauge group on one of the stacks. This is not possible on the $\mathbf{Z}_6$ orientifold because all supersymmetric D6-branes wrap the same bulk 3-cycle as the O6-planes. In contrast, on the $\mathbf{Z}_6'$ orientifold there exist supersymmetric 3-cycles that do not wrap the O6-planes. Thus, there is more latitude in this case, and the solutions with antisymmetric matter exploit this feature. Unfortunately, however, none of the solutions of this nature that we have found can be enlarged to give just the standard-model spectrum, since the overall cancellation of the relevant $U(1)$ charge cannot be achieved with this matter content. Nevertheless, some of our solutions have no antisymmetric (or symmetric) matter on either stack. We shall attempt in a future work to construct a realistic (supersymmetric) standard model using one of these solutions. The presence of singlet matter on the branes in some, but not all, of our solutions is an important feature of our results. It is clear that different orbifold point groups produce different physics, as our results also illustrate for the reasons just given. The point group must act as an automorphism of the lattice used, but it is less obvious that realising a given point group symmetry on different lattices produces different physics. Our results show that it can. The observation that the lattice does affect the physics suggests that other lattices are worth investigating in both the $\mathbf{Z}_6$ and $\mathbf{Z}_6'$ orientifolds. In particular, since $\mathbf{Z}_6$ can be realised on a $G_2$ lattice, as well as on an $SU(3)$ lattice, one or more of all three $SU(3)$ lattices in the $\mathbf{Z}_6$ case, and of the two on $T^2_{1,2}$ in the $\mathbf{Z}_6' $ case, could be replaced by a $G_2$ lattice. We shall explore this avenue too in future work.
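As an aside, for readers who wish to experiment with such scans, the arithmetic content of eqs. (\ref{A1})-(\ref{A6}) and (\ref{pia0pib}), and of the {\bf AAB} entry of Table \ref{susy3cycle}, is easily coded. The following Python sketch is our own illustration of these formulae, not the program used in \cite{Bailin:2006zf}; the function names are ours, and ${\rm Im}\, U_3$ is treated as a free input modulus.
\begin{verbatim}
import math

def bulk_coefficients(n, m):
    # Bulk coefficients (A1, A3, A4, A6) for co-prime wrapping numbers
    # n = (n1, n2, n3), m = (m1, m2, m3), as defined in Section 2.
    n1, n2, n3 = n
    m1, m2, m3 = m
    p = n1*n2 + n1*m2 + m1*n2   # common factor of A1 and A4
    q = m1*m2 + n1*m2 + m1*n2   # common factor of A3 and A6
    return (p*n3, q*n3, p*m3, q*m3)

def bulk_intersection(Aa, Ab):
    # Intersection number of two invariant bulk 3-cycles; always even.
    A1a, A3a, A4a, A6a = Aa
    A1b, A3b, A4b, A6b = Ab
    return (-4*(A1a*A4b - A4a*A1b) + 2*(A1a*A6b - A6a*A1b)
            + 2*(A3a*A4b - A4a*A3b) - 4*(A3a*A6b - A6a*A3b))

def susy_AAB(A, im_u3):
    # (X, Y) on the AAB lattice (overall positive factor omitted);
    # a stack is supersymmetric if and only if Y = 0 and X > 0.
    A1, A3, A4, A6 = A
    x = 2*A1 - A3 + A4 - 0.5*A6 - A6*math.sqrt(3)*im_u3
    y = math.sqrt(3)*(A3 + 0.5*A6) + (2*A4 - A6)*im_u3
    return x, y
\end{verbatim}
A brute-force scan over small wrapping numbers then amounts to solving $Y^a=0$ for ${\rm Im}\, U_3>0$ and retaining only stacks with $X^a>0$.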
The construction of a realistic model will, of course, entail adding further stacks of D6-branes $c,d,\ldots$, with just a single brane in each stack, arranging that the matter content is just that of the supersymmetric standard model, the whole set satisfying the condition for RR tadpole cancellation. In a supersymmetric orientifold, RR tadpole cancellation ensures that NSNS tadpoles are also cancelled, but some moduli (some of the complex structure moduli, the K\"ahler moduli, and the dilaton) remain unstabilised. Recent developments have shown how such moduli may be stabilised using RR, NSNS and metric fluxes \cite{Derendinger:2004jn,Kachru:2004jr,Grimm:2004ua,Villadoro:2005cu,DeWolfe:2005uu}, and indeed C\'amara, Font \& Ib\'a\~nez \cite{Camara:2005dc, Aldazabal:2006up} have shown how models similar to the ones we have been discussing can be uplifted into ones with stabilised K\"ahler moduli using a ``rigid corset''. In general, such fluxes contribute to tadpole cancellation conditions and might make them easier to satisfy, in which case one or other of our solutions with antisymmetric matter might be used to obtain just the standard-model spectrum. In contrast, the rigid corset can be added to any RR tadpole-free assembly of D6-branes in order to stabilise all moduli. Thus our results represent an important first step to obtaining a supersymmetric standard model from intersecting branes with all moduli stabilised.
\section{Introduction} Turbulence is a ubiquitous phenomenon in nature, arising in neutral fluids such as Earth's atmosphere and ocean as well as in astrophysical plasmas. The study of plasma turbulence is important because it is deeply related to fundamental nonlinear plasma physics and is crucial in understanding various important processes in astrophysics, such as the heating and acceleration of the solar wind and the acceleration of high-energy particles. In the solar wind, direct measurements have shown that fluctuations in the velocity and magnetic field display properties of well-developed turbulence \citep[e.g.,][]{Coleman1968}. One important feature of these fluctuations that appears to contradict well-developed turbulence is their high Alfv\'enicity, that is to say the strong correlation between velocity and magnetic field fluctuations, invariably displaying the properties of Alfv\'en waves propagating away from the Sun \citep[e.g.,][]{BelcherandDavis1971}, even though the solar wind is propagating much faster than any wave speed and should therefore advect fluctuations outward irrespective of the direction of their propagation. This Alfv\'enic turbulence is most prevalent in high-speed solar wind streams, and the Alfv\'enic property appears to decay with distance from the Sun and survive out to distances much greater than 1 AU only in the polar heliosphere at solar minimum \citep{bavassano1998cross}. Alfv\'enic turbulence is also nearly incompressible. Radial evolution of the power spectra and other quantities, such as cross-helicity (defined below), seems to confirm ongoing nonlinear dynamics \citep{Bavassanoetal1982,bavassano1998cross}, which for incompressible Alfv\'enic turbulence requires the interaction between colliding counter-propagating wave packets \citep{Iroshnikov1964,Kraichnan1965}. \begin{figure*} \centering \includegraphics[angle=-90,width=0.8\textwidth]{overview_Encounter_1_4_5.png} \caption{Overview of Encounter 01, 04, and 05. Row (a) shows the magnetic field with blue, orange, and green curves being radial, tangential, and normal components (RTN coordinates) and the black curve being the magnitude. Row (b) shows the radial ion flow speed (blue) and the ion thermal speed (orange). Row (c) shows the ion density (blue) and radial distance of PSP to the Sun (orange). Row (d) shows the spectral slopes of the magnetic field in Alfv\'en speed (blue) and velocity (orange). The two dashed lines mark the values $3/2$ and $5/3$ for reference. Rows (e)-(h) show $\sigma_c$, $\sigma_r$, $E_b$, and $E_v$, respectively, as defined in Section \ref{sec:inst_data}. In each panel of these four rows, three curves are plotted, corresponding to wave bands 2 (blue), 5 (purple), and 8 (yellow), respectively. All quantities were averaged or calculated through Fourier analysis in the $2048\times 0.874\,\mathrm{s} \approx 30$ min time window as described in Section \ref{sec:inst_data}.} \label{fig:overview} \end{figure*} Much theoretical work has been devoted to understanding the nature of nonlinear interactions in incompressible magnetohydrodynamic (MHD) turbulence.
The early statistically isotropic phenomenological models of \citet{Iroshnikov1964} and \citet{Kraichnan1965} were extended to include parallel and perpendicular wave-number anisotropy by \citet{Goldreich1995}; the concept of dynamical alignment \citep{dobrowolny1980properties} was introduced to explain the dominance of outwardly propagating Alfv\'enic turbulence in the solar wind as a nonlinear phenomenon, and this was shown to lead to different spectra for inward and outward fluctuations by \citet{Grappinetal1990}. A phenomenology of anisotropic turbulence with a preferred sense of propagation was presented in \citet{lithwick2003imbalanced}. The above models predict different spectral slopes and energy cascade rates, each under specific assumptions. However, solar wind observations cannot be fully explained by any one of these models, and this may be due to the inapplicability of assumptions such as homogeneity and incompressibility: The wind expands spherically, slowing nonlinear interactions and providing a quasi-scale free energy loss in the turbulence; different velocity streams with significant shear are present at meso-scales; and compressible processes such as parametric decay may occur \citep{primavera2019parametric,tenerani2020magnetic}, not to mention the potential role of particle distribution function anisotropies. Two important problems that stand out are the different spectral slopes for the magnetic field and velocity \citep[e.g.,][]{Grappinetal1991,boldyrev2011spectral,Chenetal2013} and the observed excess of magnetic energy over the kinetic energy \citep[e.g.,][]{roberts1987nature,MarschandTu1990,Grappinetal1991}. It is observed beyond 0.3 AU that the velocity spectrum, whose slope is around $-3/2$, is shallower than the magnetic field spectrum, whose slope is close to $-5/3$. Evidence shows that beyond 1 AU, the velocity spectrum steepens toward a $-5/3$ slope, implying an active nonlinear process \citep{roberts2007agu,bruno2013solar}. This nonlinear process, however, is not captured by the Alfv\'enic turbulence models. The observed magnetic energy excess is potentially a natural result of MHD turbulence evolution \citep[e.g.,][]{grappin1983dependence,boldyrev2009spectrum}, but it may also be explained by the convective magnetic structures in the solar wind \citep{tu1991case,tu1993model}. As mentioned before, the solar wind, instead of being a homogeneous medium, is radially stratified due to the spherical expansion. This inhomogeneity linearly couples the outward and inward propagating waves through wave reflection \citep[e.g.,][]{Vellietal1991,Velli1993}. In addition, the radial expansion generates a new anisotropy with respect to the radial direction which mixes with the anisotropy with respect to the background magnetic field direction \citep{GrappinandVelli1996,dong2014evolution,TeneraniandVelli2017,Shietal2020}. Parker Solar Probe (PSP), launched on August 12, 2018, has completed its first five orbits, with a closest approach of $\sim 27.9$ solar radii ($R_s$) to the Sun in Encounters 4 and 5 (E4 and E5), which is much closer than the previous record held by Helios B at $\sim 62.4$ $R_s$. Thus, its data provide a unique opportunity to study solar wind turbulence in its early stage of evolution. Initial PSP data have revealed many interesting phenomena, among which the omnipresence of the so-called magnetic switchbacks may be especially important \citep{Baleetal2019,deWitetal2020,McManusetal2020,tenerani2020magnetic}.
These are fluctuations in the solar magnetic field of sufficient magnitude to invert the local field direction with respect to the Sun, that is to say they switch the field backward locally into a fold. Intriguingly, these folds retain some features typical of Alfv\'enic turbulence, among which the strong correlation between velocity and magnetic field fluctuations as well as a nearly constant magnitude of the total magnetic field. The velocity-magnetic field correlation and the outward sense of propagation from the Sun reveal themselves through the presence of radial velocity outward jets superimposed on the background solar wind flow \citep{matteini2014dependence,kasper2019alfvenic,horbury2020sharp}. \citet{reville2020role}, via MHD simulations compared with the PSP data, show that the Alfv\'enic fluctuations provide sufficient power to accelerate the measured slow solar wind streams. \citet{chen2020evolution} surveyed the PSP data from the first two orbits and analyzed the Alfv\'enicity of the MHD turbulence in the solar wind. They show that the dominance of the outward-propagating wave decreases with radial distance to the Sun, which is consistent with previous observations made beyond 0.3 AU \citep{roberts1987origin,bavassano1998cross}. In addition, a steepening of the magnetic field spectrum from a slope around $-1.5$ toward $-1.67$ is also observed. In this study, we make use of the PSP data from the first five orbits and conduct a statistical analysis of the MHD fluctuations in the solar wind. We show how the properties of the turbulence vary with both radial distance to the Sun and the wind speed. The wind speed in combination with the radial distance controls the turbulence spectra via the useful concept of turbulence ``age'' \citep{Grappinetal1991}. The Alfv\'enicity has a complicated behavior. In general, the fast wind is more Alfv\'enic than the slow wind, and the Alfv\'enicity, if defined by the relative amplitude of the outward and inward propagating Alfv\'en waves, gradually decreases with radial distance. However, the magnetic energy seems to be much larger than the kinetic energy close to the Sun and gradually relaxes to similar levels as the wind propagates. In addition, the Alfv\'enicity of streams with similar speeds can be very different. We discuss several factors that possibly influence the turbulence properties, including fast-slow stream shears, the heliospheric current sheet, and the different origin of the solar wind streams at the Sun. \section{Instruments \& data processing}\label{sec:inst_data} There are four instrument suites onboard PSP. Here we make use of the Level-2 magnetometer (MAG) data from the Fields Experiment (FIELDS) and Level-3 Solar Probe Cup (SPC) data from the Solar Wind Electrons Alphas and Protons investigation (SWEAP). We refer to the five orbits of PSP as ``Encounter 1, 2, 3, 4, 5,'' respectively, or ``E 1, 2, 3, 4, 5'' for short, as high-resolution data are only produced near the perihelia of the orbits ($R \leq 0.3-0.4$ AU). During the encounters, SPC measures the proton spectrum at a cadence of 0.218-0.874 s, and the time resolution of FIELDS is better than 13.7 ms. The exact time periods that are analyzed in this study are listed in Table \ref{tab:time_periods}. \begin{table*}[t] \caption{Time periods selected for analysis of the PSP data.
The third column shows PSP perihelion dates and the fourth column shows the distance of each perihelion to the Sun.} \label{tab:time_periods} \centering \begin{tabular}{c c c c} \hline Encounter \# & Period & Perihelion date & Perihelion to the Sun\\ \hline 1 & Oct 31-Nov 11, 2018 & Nov 06, 2018 & 35.7$R_s$\\ 2 & Mar 30-Apr 11, 2019 & Apr 04, 2019 & 35.7$R_s$\\ 3 & Aug 22-Aug 31, 2019 & Sep 01, 2019 & 35.7$R_s$\\ 4 & Jan 16-Feb 04, 2020 & Jan 29, 2020 & 27.9$R_s$\\ 5 & May 30-Jun 13, 2020 & Jun 07, 2020 & 27.9$R_s$\\ \hline \end{tabular} \end{table*} We first resampled the measurements of the magnetic field, proton density, velocity, and thermal speed to a time resolution of 0.874 s, which is sufficient for analyzing MHD-scale turbulence. Then we binned the data into 2048-point time windows and filled the data gaps using linear interpolation. Windows with a data gap ratio larger than 10\% were discarded. We determined the polarity of the radial magnetic field by averaging $B_r$ inside each time window and defined the two Els\"asser variables \begin{equation}\label{eq:definition_Zoi} \mathbf{Z_{o,i}} = \mathbf{U} \mp \mathrm{sign}(B_{r0}) \frac{\mathbf{B}}{\sqrt{\mu_0 \rho}} \end{equation} where subscripts ``o'' and ``i'' represent ``outward'' and ``inward,'' respectively, and $B_{r0}$ is the averaged $B_r$. We note that to have well-defined outward and inward propagating waves, the angle between the background magnetic field and the radial direction should not be too large. One can estimate that for a solar wind speed of 300 km/s, the spiral angle of the magnetic field is approximately 20 degrees at 60 solar radii, which is sufficiently small. In Eq. (\ref{eq:definition_Zoi}), the density is the averaged value in each half-hour window. As is shown in Fig. \ref{fig:statistic_n_fluc_T_vr}, the relative density fluctuation $\Delta n /n$ is mostly small, with values around 0.05-0.10. This density fluctuation introduces a small, negligible uncertainty of around $(2.5-5)\%$ in the calculated Alfv\'en speed. Fourier transforms were applied to $\mathbf{U}$, $\mathbf{V_A} = \mathbf{B}/\sqrt{\mu_0 \rho}$, and $\mathbf{Z}_{o,i}$ to obtain power spectra. We then fit the power spectra over modes 5-60, which correspond to periods $T \in [30\,{\rm s},360\,{\rm s}]$, well within the inertial range of the turbulence. Similar to \citet{Grappinetal1991}, we divided the Fourier modes into ten logarithmic bands, such that band $i$ includes modes $[2^{i-1},2^i)$. Inside each band, integrated wave energies $E_{b}, E_v, E_o,$ and $E_i$ were calculated. Then we defined the normalized cross helicity \begin{equation} \sigma_c = \frac{E_o - E_i}{E_o + E_i} ,\end{equation} which measures the relative amplitude of outward and inward Alfv\'en wave energies, and the normalized residual energy \begin{equation} \sigma_r = \frac{E_v - E_b}{E_v + E_b} ,\end{equation} which measures the relative amplitude of kinetic and magnetic energies. We note that $\sigma_c = \pm 1$ corresponds to purely outward and inward propagating Alfv\'enic fluctuations, while $\sigma_r = \pm 1$ corresponds to fluctuations that are either purely in the velocity field (kinetic) or magnetic. For purely outwardly-propagating Alfv\'enic fluctuations, we expect $\sigma_c=1$ and $\sigma_r=0$. \begin{figure} \centering \includegraphics[width=\hsize]{draft_3D_trajectory_E04.png} \caption{Measurements of radial magnetic field (top panel), radial flow speed (second panel), and proton number density (third panel) during Encounter 4.
The values were plotted on a radius-(Carrington longitude) grid, i.e., in the reference frame corotating with the Sun. The bottom panel shows PSP's orbit with the $z$-axis being the Carrington latitude. We note that the variation in latitude is small. The colors represent time such that PSP moves from the light-colored end to the dark-colored end.} \label{fig:3D_trajectory_E04} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{draft_Vr_carr_longitude_E04.png} \caption{Top: Radial solar wind speed varying with Carrington longitude of PSP measured during Encounter 4. The colors represent the time and PSP travels from the light-colored end toward the dark-colored end as indicated by the red arrows. Red circles connected by dashed lines mark several selected structures that were observed at different radial distances. Bottom: Radial wind speed as a function of radial distance to the Sun. Each line corresponds to one single structure marked by the connected red circles in the top panel. The structures are numbered as 1, 2, 3, 4, 5, and 6, as annotated in the top panel.} \label{fig:Vr_carr_long_E04} \end{figure} \section{Results} \subsection{Overview of Encounter 1, 4, \& 5}\label{sec:overview} In Fig. \ref{fig:overview} we present the overview plot of Encounter 1 (left), 4 (middle), and 5 (right). We did not plot Encounters 2 \& 3 owing to the limited figure size and the poorer data coverage during those encounters. All quantities were calculated, either averaged or Fourier-analyzed, in the $2048\times 0.874 \mathrm{s} \approx 30$ min time window as described in Section \ref{sec:inst_data}. Consequently, the magnetic switchbacks, whose typical time scales are several minutes long \citep{deWitetal2020}, are absent from the magnetic field plot. We note that the large gaps in the last four rows ($\sigma_c$, $\sigma_r$, $E_b$, and $E_v$) of the middle and right columns do not mean that the original SPC and MAG data have large gaps. The reason is, as mentioned in Section \ref{sec:inst_data}, that we discarded the half-hour time windows with data gap ratios larger than 10\%. Thus these large gaps are actually a result of frequent small data gaps in Encounters 4 \& 5. \begin{figure*} \centering \includegraphics[width=\textwidth]{statistical_n_proton_fluc_vthermal_vr.png} \caption{Relative density fluctuation $\Delta n / n$ (left) and ion temperature (right) expressed in thermal speed squared as functions of radial solar wind speed. Each dot corresponds to a single half-hour window and the colors represent the radial distance to the Sun. Squares on solid curves are median values of the dots binned according to $V_r$, and squares on dashed curves are the other two quartiles.} \label{fig:statistic_n_fluc_T_vr} \end{figure*} There are several points that are worth underscoring here. (1) From Rows (g)\&(h), we can see that both magnetic and kinetic energies in the waves decrease as we move away from the Sun. This is a natural result, mainly of the spherical expansion of the solar wind but also of the energy cascade of the turbulence. A similar trend of $E_{o}$ and $E_i$ was reported by \citet{chen2020evolution}. (2) The streams measured by PSP during Encounters 4\&5 are mostly of a very low speed (Row (b)). As is shown in Section \ref{sec:acceleration_SW}, the streams are actually still accelerating radially.
(3) The ion thermal speed (Row (b)), or equivalently the square root of the ion temperature, is highly correlated with the radial flow speed. This is a well-known phenomenon that has already been observed by other satellites \citep{Grappinetal1990,demoulin2009temperature}, and the PSP measurements show that this correlation exists at radial distances down to $\sim 28$ solar radii. A statistical analysis of this point is presented in Section \ref{sec:density_fluc_T_Vr}. (4) The density profile (Row (c)), except for a slow variation with the radial distance, shows strong structures near the perihelion during Encounters 4\&5. For example, a short plasma sheet crossing was observed on January 30, 2020 and a long plasma sheet crossing was observed on June 8, 2020. These structures have significant impacts on the turbulence properties, as is discussed in detail in Section \ref{sec:discussion}. (5) The slopes of the magnetic field and velocity spectra (Row (d)) fluctuate strongly and show a dependence on the stream properties. It can be observed from Fig. \ref{fig:overview} that the magnetic field spectrum is usually steeper than the velocity spectrum, especially far from the Sun. Near perihelion, the two slopes seem to be close to each other. A statistical survey of spectral variability is presented in Section \ref{sec:evolution_spectra}. (6) Usually, the normalized cross helicity $\sigma_c$ (Row (e)) is close to 1, implying dominance of outward-propagating Alfv\'en waves. However, there are periods when $\sigma_c$ oscillates and becomes negative, for example, November 10, 2018, January 17-21, 2020, and June 8, 2020. As is shown in Section \ref{sec:discussion}, these periods correspond to PSP observing heliospheric large-scale inhomogeneous structures, such as velocity shears and the heliospheric current sheet. Furthermore, $\sigma_c$ is also found to be significantly smaller throughout E5 when compared to E1 \& E4, and the reason for this is also discussed in Section \ref{sec:discussion}. (7) We note that $\sigma_r$ (Row (f)) shows interesting behavior: During Encounter 1, its value is very close to zero, indicating balanced magnetic and kinetic energies, as expected for Alfv\'enic turbulence. However, during Encounters 4\&5, most of the time it is negative, especially close to perihelion. This suggests the possibility that the turbulence is magnetically dominated at its origin inside certain types of streams. Statistical analyses are presented later in Section \ref{sec:evolution_spectra}. \subsection{Evidence of accelerating solar wind streams}\label{sec:acceleration_SW} \begin{figure*} \centering \includegraphics[width=\textwidth]{statistical_spectral_slopes_2D.png} \caption{Spectral slopes of the magnetic field in Alfv\'en unit $\mathbf{B}/\sqrt{\mu_0 \rho}$ (top left), velocity (top right), outward Els\"asser variable (bottom left), and inward Els\"asser variable (bottom right) as functions of the radial solar wind speed $V_r$ and radial distance to the Sun $R$. The data points were binned according to $V_r$ and $R$, and the median value inside each bin was calculated, which is reflected in the colors and written in the plot. The bracketed numbers in the plots are the number of data points inside each bin. Bins with no more than 15 data points were discarded.
} \label{fig:spectral_slopes_2D} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{draft_average_spectrum.png} \caption{Averaged power spectra of magnetic field (in Alfv\'en speed) and velocity for different $R$ and $V_r$. Left: $35 \leq R/R_s \leq 45$ and 300km/s$\leq V_r \leq$350km/s. Middle: $65 \leq R/R_s \leq 75$ and 300km/s$\leq V_r \leq$350km/s. Right: $35 \leq R/R_s \leq 45$ and 200km/s$\leq V_r \leq$250km/s. The spectra were fitted over $2.8\times 10^{-3} s^{-1} \leq f \leq 1.7 \times 10^{-2} s^{-1}$ as shown by the dotted (for magnetic field) and dashed (for velocity) lines and the fitted slopes are written in the legend.} \label{fig:average_spectrum} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{statistical_correlation_slopes_Vr.png} \caption{Left: Spectral slope of outward Els\"asser variable $S_o$ as a function of the spectral slope of magnetic field $S_b$. Right: Spectral slope of velocity $S_v$ as a function of the spectral slope of magnetic field $S_b$. Vertical lines mark $S_b = 5/3$. The horizontal line in the left panel marks $S_o = 5/3$ and the horizontal line in the right panel marks $S_v = 3/2$. The colors represent the radial speed of the solar wind.} \label{fig:correlation_slopes} \end{figure*} As PSP travels to a sufficiently low altitude above the Sun, its longitudinal speed relative to the rotating solar surface changes sign when it crosses a critical height. That is to say, in the reference frame corotating with the Sun, PSP moves toward the west first as its altitude lowers, then it retrogrades to the east near the perihelion and finally moves back toward the west as it goes away from the perihelion. This unique feature of PSP's orbit makes it possible to conduct a better analysis of the spatial structures in the solar wind, as PSP may measure streams from the same region on the solar surface two or three times at different radial distances to the Sun during one encounter. In Fig. \ref{fig:3D_trajectory_E04}, we show the measurements of the radial magnetic field, radial flow speed, and proton number density during Encounter 4 in the top three panels. Instead of plotting these quantities against time, we plotted them on a radius-(Carrington longitude) grid so that the projection of the curves on the grid is approximately the trajectory of PSP in the reference frame corotating with the Sun. We note that the inclination of PSP's orbit is very low: The Carrington latitude of PSP during Encounter 4 varies between $\sim \pm 4^\circ$. In the bottom panel, we plotted the trajectory of PSP for reference purposes, with the $z$-axis being the Carrington latitude. The colors of the curves represent the time such that PSP travels from the light-colored end toward the dark-colored end. From the $V_r$ plot, we can clearly see that the stream measured near perihelion (the dark branch of the curve) contains structures similar to those observed in the stream further away from the Sun (the light branch of the curve). This similarity can also be seen in the $B_r$ and $n_p$ plots, implying that the satellite observed streams coming from the same region of the Sun twice as it traveled inward and outward during the encounter. In the top panel of Fig. \ref{fig:Vr_carr_long_E04}, we show a 2D $V_r$-longitude plot so that we can better compare the measurements made as PSP traveled back-and-forth in longitude. Despite some deformation, the two curves show very similar variations.
We marked the identified similar structures at different radial distances by the red circles connected by dashed lines, and the radial wind speed at these circles is plotted against the radial distance in the bottom panel, where each line corresponds to one single structure, numbered 1, 2, 3, 4, 5, and 6 as annotated in the top panel. We can see that the wind is accelerated at a rate of $1-3$ km/s/$R_s$ for most of these structures, and overall the streams are accelerated from $\sim (200-300)$ km/s near perihelion ($\sim 30 R_s$) to $\sim (300-400)$ km/s beyond $60 R_s$. Thus, these kinds of measurements made uniquely by PSP can be used to quantify the acceleration of the solar wind in the future as the data volume increases. A couple of caveats should be noted here. First, in the corotating frame of the Sun, there is an additional longitudinal speed of the wind. Thus, the stream measured by PSP at a certain longitude should come from a region located at a larger longitude on the solar surface. For example, a 300 km/s wind drifts along longitude by $\sim 17^\circ$ after it propagates $50R_s$. Clearly, streams of different velocities measured at different distances drift by different amounts in longitude. In addition, because the wind is accelerating, estimating this longitudinal drift is even more complicated. Second, the variation of the latitude of PSP, though not very large, may also account for the deformation of the longitudinal profiles of the measured streams. \begin{figure*} \centering \includegraphics[width=\textwidth]{statistical_sigma_c_sigma_r_05_2D.png} \caption{Normalized cross helicity $\sigma_c$ (left) and normalized residual energy $\sigma_r$ (right) of wave band 5 ($T \approx 112-56 $ s) as functions of the radial distance to the Sun $R$ and the radial speed of solar wind $V_r$. The color of each block represents the median value of the binned data. Text on each block shows the value of the block and the number of data points (in brackets) in the block. Bins with no more than 15 data points were discarded.} \label{fig:sigma_c_sigma_r} \end{figure*} \subsection{Dependence of density fluctuations and ion temperature on the wind speed}\label{sec:density_fluc_T_Vr} In Fig. \ref{fig:statistic_n_fluc_T_vr}, we show the scatter plot of the relative density fluctuation $\Delta n / n$ and temperature $V_{th}^2$ versus the radial flow speed $V_r$. Each single dot represents a half-hour time window and the color of each dot shows the radial distance to the Sun. We note that $V_{r}$, $V_{th}$, and $n$ are averaged over the time window while $\Delta n $ is the standard deviation of $n$ inside the window. To better show the trend, we binned the dots into 50 km/s $V_r$ intervals and calculated the three quartiles inside each bin, which are shown as the blue squares. \begin{figure*} \centering \includegraphics[width=\textwidth]{draft_blowup_plots.png} \caption{Blow-ups of the time periods marked by the shaded regions in Fig. \ref{fig:overview}. Row (a) shows the radial magnetic field $B_r$ (blue) and the magnitude of magnetic field $|B|$ (black). Row (b) shows the radial flow speed $V_r$ (blue) and the ion thermal speed $V_{th}$ (orange). Row (c) shows the ion density $n_p$. Row (d) shows the relative ion density fluctuation $\Delta n/n$ (blue) and the plasma beta $\beta$ (orange), defined as the ion thermal pressure $p_{th}=n_p m_i V_{th}^2$ divided by magnetic pressure $p_{mag}=B^2/2\mu_0$.
Row (e) shows the spectral slopes of the magnetic field in Alfv\'en speed (blue) and velocity (orange). The two dashed lines mark $3/2$ and $5/3$ for reference. Row (f) shows $\sigma_c$ (blue) and $\sigma_r$ (orange). Row (g) shows the energies in magnetic field fluctuations $E_b$ (blue) and velocity fluctuations $E_v$ (orange).} \label{fig:blow_up} \end{figure*} \begin{figure} \centering \includegraphics[width=\hsize]{Encounter_05_SPC_QTN.png} \caption{Comparison between the ion and electron densities during Encounter 5. Blue: Ion density measured by the Faraday cup (SPC). Orange: Electron density calculated using the quasi thermal noise (QTN) measurements made by the Radio Frequency Spectrometer Low Frequency Receiver (RFS/LFR).} \label{fig:compare_SPC_QTN} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{E5_quiet_sun.png} \caption{Left: SDO/HMI image taken on Jun 16, 2020, corresponding to Encounter 5 of PSP. The grid is in Carrington degrees. One can see that during Encounter 5, the visible side of the Sun was very quiet. Right: Magnetic pressure map at $R=1.2R_s$ calculated by the PFSS model with the source surface at $R_{ss}=2.5R_s$ and SDO/HMI data on Jun 16, 2020 as input. The blue diamond is the direct radial projection of PSP to the source surface. The blue crosses are the foot points of the magnetic field lines connected to PSP on the source surface. Different blue crosses correspond to a prediction using varying wind speeds, from $(230-80)$ km/s to $(230+80)$ km/s. The blue circles are on the surface $R=1.2R_s$ and are magnetically connected to the blue crosses. The thick black lines are the neutral lines at $R=1.2R_s$, and colored regions are the open magnetic field regions with blue being negative polarity and red being positive polarity.} \label{fig:connection_to_corona} \end{figure*} Although the value of $\Delta n/n$ is quite scattered, it is in general small, mostly smaller than 0.2, and it decreases with $V_r$. We note that the rise of the blue squares at $V_r \in [450,500]$ km/s is very likely a result of the small number of data points there. From the colors of the dots, we cannot see a clear relation between $\Delta n/n$ and $R$. Thus, we conclude that the density fluctuation is larger in the slow streams than in the fast streams and that it does not evolve significantly as the solar wind propagates. The ion temperature is less scattered than $\Delta n / n$, and the $V_{th}^2 - V_r$ relation shows very good linearity, as already mentioned in Section \ref{sec:overview}. This strong $T-V$ correlation is a well-known phenomenon observed at 1 AU \citep[e.g.,][]{elliott2005improved,matthaeus2006correlation,demoulin2009temperature}, and PSP data show that this correlation is already well established as close as 30 $R_s$. This may be a clue as to the origin of this $T-V$ correlation. \citet{matthaeus2006correlation} proposed that this correlation is a result of the fact that the transport equation of temperature with a constant radial speed $V$ has a solution of the form $T=T(R/V)$. Thus, supposing $T$ is a decreasing function, we expect that a larger radial speed leads to a slower decay of $T$ with $R$, resulting in the observed positive $T-V$ correlation. On the other hand, \citet{demoulin2009temperature} argued that this correlation is a requirement of the momentum equation, as a higher temperature is needed to accelerate the solar wind to a higher speed.
Since the measurements made during the encounters of PSP are likely in the accelerating solar wind streams, as pointed out in Section \ref{sec:acceleration_SW}, it is reasonable to say that the origin of the positive $T-V$ correlation is related to the acceleration mechanism of the solar wind. A better modeling of the solar wind heating and acceleration is necessary to fully understand this issue. Last, from the right panel of Fig. \ref{fig:statistic_n_fluc_T_vr}, it seems that very close to the Sun (light yellow dots), the slope of the $T-V$ relation is larger than that further away from the Sun (dark red dots). If it is true that the $T-V$ slope changes radially, it implies that the adiabatic cooling rate is a function of the solar wind speed, which is true for electrons \citep{maksimovic2020anticorrelation}. However, we should be cautious in making this conclusion because solar conditions might be very different during different encounters. \subsection{Evolution of the turbulence spectra and Alfv\'enicity}\label{sec:evolution_spectra} As already described in Section \ref{sec:inst_data}, we calculated the spectral slopes over a period range $T \in [30s, 360s]$ for the magnetic field in Alfv\'en speed units, the velocity, and the outward and inward Els\"asser variables. The statistical results of these slopes are presented in Fig. \ref{fig:spectral_slopes_2D}. We binned the data points according to the radial solar wind speed $V_r$ and the radial distance to the Sun and then calculated the median value inside each bin. The median values are reflected by the colors of the blocks in Fig. \ref{fig:spectral_slopes_2D} and are also written in the blocks. The bracketed numbers in the plots are the number of data points, and we discarded the bins with no more than 15 data points (values were set to N/A). We first compared the top two panels of Fig. \ref{fig:spectral_slopes_2D}, that is to say the spectral slopes of the magnetic field ($S_b$) and velocity ($S_v$). There is no clear $V_r$-dependence of $S_v$, while a negative $S_b$-$V_r$ correlation is observed in the range of $R \in [35, 65]R_s$. Close to the Sun ($R<45R_s$), the difference between $S_b$ and $S_v$ is small. Both the magnetic field and velocity spectra are shallower than the Kolmogorov prediction of $-5/3$, with slopes around $-1.5$. As the radial distance increases, the magnetic field spectrum steepens toward a $-5/3$ slope while the velocity spectrum slope remains quite constant. In Fig. \ref{fig:average_spectrum}, we plotted the power spectra of the magnetic field (in Alfv\'en speed) in blue and the velocity in orange, averaged over all half-hour windows that fall into a specific radial distance range and wind speed range. We fit the spectra over the period range $T\in[30,360]s$ and the fitted slopes are written in the plot. The left panel is for $R\in[35,45]R_s$ and $V_r \in[300,350]$km/s, and the two spectra have nearly identical slopes close to $-1.5$. The middle panel is for $R\in[65,75]R_s$ and $V_r \in[300,350]$km/s, and it shows that as $R$ increases to around 0.3 AU, the magnetic field spectrum becomes close to the Kolmogorov spectrum while the velocity spectrum still follows the Iroshnikov-Kraichnan slope. The right panel is for $R\in[35,45]R_s$ and $V_r \in[200,250]$km/s, and by comparing it with the left panel, we can see that at the same radial distance $R$, the slower wind has a steeper magnetic field spectrum.
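For completeness, the slope estimates used throughout this section reduce, per half-hour window, to a linear least-squares fit of $\log P$ versus $\log f$ over the chosen range of Fourier modes. The following minimal Python sketch illustrates the procedure of Section \ref{sec:inst_data} (our illustration, not the actual analysis code); it assumes a uniformly sampled, gap-filled scalar series, and for vector fields the component spectra are summed before fitting.
\begin{verbatim}
import numpy as np

def spectral_slope(x, dt=0.874, modes=(5, 60)):
    # Power spectrum of a 2048-point window sampled at dt seconds,
    # fitted over Fourier modes 5-60, i.e., periods of about 30-360 s.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # remove mode 0 (background)
    power = np.abs(np.fft.rfft(x))**2       # one-sided power spectrum
    freq = np.fft.rfftfreq(x.size, d=dt)
    k = np.arange(modes[0], modes[1] + 1)   # inertial-range modes
    slope, _ = np.polyfit(np.log10(freq[k]), np.log10(power[k]), 1)
    return slope                            # e.g., about -5/3 or -3/2
\end{verbatim}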
It has long been observed outside 0.3 AU that the magnetic field spectrum is steeper than the velocity spectrum \citep[e.g.,][]{Grappinetal1991}. Figures \ref{fig:spectral_slopes_2D}\&\ref{fig:average_spectrum} suggest that very close to the Sun, the two spectra may have the same slope. The anticorrelation between $S_b$ and $V_r$ and the positive correlation between $S_b$ and $R$ imply the existence of a ``turbulence age'' which determines the level of the turbulence development. A recent work analyzing Helios, Wind, and Ulysses data reveals similar ``aging'' of turbulence radially beyond 0.3 AU \citep{weygand2019jensen}. From the bottom two panels of Fig. \ref{fig:spectral_slopes_2D}, the spectral slope of $\mathbf{Z_{o}}$ shows a similar evolution to that of the magnetic field, but is shallower. The spectral slope of $\mathbf{Z_i}$, on the other hand, resembles the velocity, that is to say, it does not show significant radial evolution and is even smaller than the velocity slope. In Fig. \ref{fig:correlation_slopes} we show the correlation between $S_o$ and $S_b$ in the left panel and the correlation between $S_v$ and $S_b$ in the right panel. Both correlations are strong, especially that between $S_o$ and $S_b$. The $S_v-S_b$ correlation is weaker than the $S_o - S_b$ correlation due to the fact that $S_b$ varies with $V_r$ and $R$, while $S_v$ is quite constant. For reference purposes, we marked $S_b = 5/3$ by the vertical lines and $S_o = 5/3$, $S_v = 3/2$ by the two horizontal lines. We see that, on average, $S_o$ is close to $S_b$, though slightly smaller, while $S_v$ is clearly smaller than $S_b$ such that $S_b = 5/3$ corresponds to an $S_v$ around 1.55-1.6. This result is similar to that reported by \citet{Grappinetal1991} (see their Figure 7), although the data used here are mainly within 0.3 AU, while \citet{Grappinetal1991} analyzed Helios data that were collected outside 0.3 AU. Figures \ref{fig:spectral_slopes_2D}\&\ref{fig:average_spectrum} reveal that in the very young solar wind, the magnetic field and velocity spectra have the same slope; furthermore, as the turbulence evolves, the magnetic field spectrum steepens while the velocity spectrum has an invariant slope. This poses a challenge in understanding the nature of the MHD turbulence in the solar wind. Most turbulence theories \citep[e.g.,][]{Kraichnan1965,Goldreich1995,lithwick2003imbalanced,Zanketal2017} describe the turbulence based on the two Els\"asser variables, and thus cannot directly capture the differential evolution of the magnetic field and velocity spectra. \citet{boldyrev2011spectral} conducted 3D incompressible MHD simulations based on the reduced equation set of Els\"asser variables, and they reproduced the different magnetic field and velocity spectra statistically. However, how this final state is established remains unclear from the simulations. In addition, PSP data show that the steepening of the magnetic field spectrum is quite slow. The top-left panel of Fig. \ref{fig:spectral_slopes_2D} implies that the steepening from $-3/2$ to $-5/3$ takes the time needed for the wind to travel from $R\sim 30R_s$ to $R \sim 70 R_s$. This is much longer than the nonlinear time, that is the ``eddy-turnover'' time or the Alfv\'en crossing time, of the turbulence.
Thus, it is possible that in the solar wind, the differential evolution of $B$ and $V$ is controlled by some external mechanisms, such as stream shears and the spherical-expansion effect, which lead to different decay rates of the magnetic energy and kinetic energy \citep{GrappinandVelli1996}. In Fig. \ref{fig:sigma_c_sigma_r}, we present the $(V_r,R)$ variation of the normalized cross helicity $\sigma_c$ (left panel) and the normalized residual energy $\sigma_r$ (right panel), in a similar manner to Fig. \ref{fig:spectral_slopes_2D}. Here the values were calculated for wave band 5, corresponding to wave periods $T \approx 112-56$ s; the other wave bands show features similar to those in Fig. \ref{fig:sigma_c_sigma_r}. For $\sigma_c$, an overall positive $\sigma_c$-$V_r$ correlation is observed, at least for $R \leq 65 R_s$, indicating that the fast wind is generally more Alfv\'enic than the slow wind. The lack of a definite $\sigma_c-V_r$ correlation for $R>65 R_s$ might be due to the scarcity of data points: the value in a single block then mainly reflects the turbulence inside one stream rather than multiple streams, increasing the uncertainty. The $\sigma_c-R$ correlation is clearly negative in the range of $R \ge 35R_s $ and $V_r \in [300,400]$km/s, implying that the dominance of the outward propagating wave declines with the radial distance, which was already reported in previous works \citep[e.g.,][]{chen2020evolution}. However, this correlation is not well defined in other regions of parameter space. In particular, for measurements made below $35R_s$ and in very slow wind ($V_r \leq 250$ km/s), $\sigma_c$ is much lower than in the neighboring blocks of $V_r-R$ space. This is caused by the non-Alfv\'enic, or low-Alfv\'enic, slow wind measured by PSP during Encounter 5 (see right column of Fig. \ref{fig:overview}). For $\sigma_r$, we can see that it is in general negative, that is to say the magnetic energy exceeds the kinetic energy, which is a well-known phenomenon that is not fully understood yet. For $R \leq 65 R_s$, $\sigma_r$ is also positively correlated with $V_r$. That is to say, in the fast wind, the magnetic and kinetic energies are more balanced, which is consistent with the high $\sigma_c$ values, which imply a highly Alfv\'enic state. The radial evolution of $\sigma_r$, however, shows a surprising result: inside $65R_s$, $\sigma_r$ clearly increases with radial distance, meaning that the turbulence relaxes from a magnetically dominated state toward a more balanced one. Actually, by examining the middle and right columns of Fig. \ref{fig:overview}, one can find that $\sigma_r$ is clearly an increasing function of $R$ from January 21-29, 2020 and from June 1-6, 2020, which is consistent with the statistical result here. Even for Encounter 1 (left column of Fig. \ref{fig:overview}), a slight increase in $\sigma_r$ with $R$ is observed from November 6-9, 2018. Outside $65R_s$, the evolution is not very clear, but it seems that $\sigma_r$ may start to drop with $R$. Similar to $\sigma_c$, the values of $\sigma_r$ are extremely low for $R\leq 35R_s$ and for $V_r \leq 250 $km/s. As mentioned before, this region in the parameter space corresponds to the very low Alfv\'enic streams observed during Encounter 5.
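Schematically, the band-integrated quantities behind Fig. \ref{fig:sigma_c_sigma_r} can be reproduced per half-hour window as in the following minimal Python sketch, which simply transcribes the definitions of Section \ref{sec:inst_data} and Eq. (\ref{eq:definition_Zoi}) (our illustration, not the actual analysis code).
\begin{verbatim}
import numpy as np

def band_energy(w, band):
    # Power of an (N, 3) vector series summed over components and over
    # the Fourier modes [2**(band-1), 2**band) of logarithmic band `band`.
    spec = np.fft.rfft(w - w.mean(axis=0), axis=0)
    k = np.arange(2**(band - 1), 2**band)
    return np.sum(np.abs(spec[k, :])**2)

def sigma_c_sigma_r(u, va, sgn, band=5):
    # Normalized cross helicity and residual energy in one wave band.
    # u, va: (N, 3) velocity and Alfven-unit magnetic field arrays;
    # sgn: sign of the window-averaged radial magnetic field B_r0.
    e_o = band_energy(u - sgn*va, band)     # outward Elsasser energy
    e_i = band_energy(u + sgn*va, band)     # inward Elsasser energy
    e_v = band_energy(u, band)              # kinetic energy
    e_b = band_energy(va, band)             # magnetic energy
    return (e_o - e_i)/(e_o + e_i), (e_v - e_b)/(e_v + e_b)
\end{verbatim}
For a 2048-point window at 0.874 s cadence, band 5 covers modes 16-31, that is, periods of roughly $112-56$ s, as in Fig. \ref{fig:sigma_c_sigma_r}.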
\section{Discussion}\label{sec:discussion} From Section \ref{sec:evolution_spectra}, we draw the following conclusions: (1) During the evolution of the solar wind turbulence, the magnetic field spectrum steepens from a $-3/2$ slope toward a $-5/3$ slope while the velocity spectrum slope remains $-3/2$. (2) The fast solar wind is in general more Alfv\'enic than the slow solar wind, with $\sigma_c$ closer to 1 and $\sigma_r$ closer to 0. However, we should emphasize here that the ``fast'' solar wind in this study is not the typical fast wind that originates from large-scale polar coronal hole open regions, because during the first five encounters, PSP did not observe any long-lasting fast solar wind of this type. Thus, the ``fast'' wind here should probably be classified as examples of ``faster'' Alfv\'enic slow wind \citep{DAmicisandBruno2015,panasenco2020exploring}. (3) Closer to the Sun, $\sigma_c$ increases toward 1, confirming that the turbulence is dominated by outward propagating Alfv\'en waves in the young solar wind. However, there are periods where $\sigma_c$ is quite low even at very close distances to the Sun (below $35R_s$). (4) For some solar wind streams, for example, those observed during Encounters 4\&5, $\sigma_r$ evolves from negative values toward 0 at close distances, suggesting that the turbulence is actually magnetically dominated at its origin and then gradually relaxes to a more balanced state in these streams. The above conclusions are based on the statistical results using all high-resolution data from PSP's first five encounters. While they help us depict an average picture of the evolution of solar wind turbulence, it is still necessary to examine the turbulence from different time periods so that we can have deeper insight into how the turbulence varies in different streams. In fact, as one may have noticed in Fig. \ref{fig:overview}, fluctuations in streams of a similar radial wind speed can have significantly different Alfv\'enicity. For example, during E1 from November 3-7, the solar wind speed is around 300km/s and the fluctuations are highly Alfv\'enic, while during E5 from June 1-6, the solar wind speed is also around 300km/s but the Alfv\'enicity of the fluctuations is quite low. A more detailed analysis is presented later in this section. In Fig. \ref{fig:blow_up}, we present the blow-ups of Fig. \ref{fig:overview} over three short time periods marked by the shaded regions in Fig. \ref{fig:overview}. Compared with Fig. \ref{fig:overview}, the top three rows of Fig. \ref{fig:blow_up} present data at a time resolution of 0.874 s instead of a half hour. In addition, in the bottom two rows of Fig. \ref{fig:blow_up}, $\sigma_c$, $\sigma_r$, $E_{b}$, and $E_v$ were calculated by integrating over all wave modes except mode 0, that is to say the background field. \subsection{Alfv\'enic turbulence and the effect of velocity shear} The left column of Fig. \ref{fig:blow_up} shows the time period from 12:00 November 9 to 00:00 November 11, 2018 during Encounter 1. Before 08:00 November 10, PSP was inside a fast stream with a radial speed of $V_r \sim 500-600$km/s. Between 08:00 and 13:00 November 10, PSP crossed a fast-slow stream shear region, marked by the shaded region, after which the wind speed dropped to less than 400km/s. Inside the fast stream, a large number of switchbacks were observed with nearly constant $|B|$ and $n_p$, as well as $\sigma_c \approx 1$ and $\sigma_r \approx 0$.
These parameters imply that the turbulence is highly Alfv\'enic, with very little inward propagating wave component. Inside the shear region, a decrease in $\sigma_c$ and an increase in $\sigma_r$ were observed, and the wave energies were dissipated right after the shear. From Panel (a1), we can see that inside and shortly after the shear region, no switchbacks are observed, implying a strong dissipation of the wave energies. These results are consistent with the 2D MHD simulations \citep{Robertsetal1992,Shietal2020}, which showed that near the fast-slow stream interaction region, the wave energy is dissipated quickly because the shear transfers energies from long wavelengths to short wavelengths rapidly. They also found that inside the stream interaction region, the outward wave dominance is destroyed and kinetic energy exceeds the magnetic energy at small scales, which is consistent with the drop in $\sigma_c$ and increase in $\sigma_r$ observed by PSP. The positive $\sigma_r$ indicates that the velocity shear efficiently transfers kinetic energies from large to small scales. Thus, the velocity shear may play an important role in the turbulence evolution and is a good candidate to explain the observed negative $\sigma_c-R$ relation and positive $\sigma_r-R$ relation as discussed in Section \ref{sec:evolution_spectra}. \subsection{Non(low)-Alfv\'enic turbulence} The middle column of Fig. \ref{fig:blow_up} shows the time period from 12:00 June 5 to 12:00 June 6, 2020 during Encounter 5. During this time period, and for most of Encounter 5 shown in Fig. \ref{fig:overview}, the turbulence property is ``abnormal.'' From Panel (a2), we can see that the magnetic field strength $|B|$ is quite constant and a lot of switchbacks are present. In addition, Panels (c2)\&(d2) show that the plasma density is quite constant with very small fluctuations. These features normally indicate a highly Alfv\'enic state of the turbulence. However, we can see from Panel (f2) that $\sigma_c$ is systematically small, around 0.5, as is $\sigma_r$, which is around $-0.75$. That is to say, in this time period, there is a non-negligible inward propagating wave component, while the magnetic energy significantly exceeds the kinetic energy, despite the near incompressibility. One can see from Fig. \ref{fig:overview} that actually during most of Encounter 5, the turbulence has low Alfv\'enicity and the wind speed is slow. In examining the middle column of Fig. \ref{fig:overview}, we noticed that in Encounter 4, from the heliospheric current sheet crossing on February 1 until February 4, the solar wind was also quite slow and had relatively low $\sigma_c$ and $\sigma_r$, which is similar to what PSP observed in Encounter 5. Thus, the observed non-Alfv\'enic, or low-Alfv\'enic, turbulence is possibly related to the sources of the very slow solar wind. One thing that we should point out is that the ion density measured by the Faraday cup (SPC) seems to be lower than the electron density derived using the quasi thermal noise (QTN) measurements made by the Radio Frequency Spectrometer Low Frequency Receiver (RFS/LFR) \citep{moncuquet2020first}. In Fig. \ref{fig:compare_SPC_QTN}, we plotted these two quantities for Encounter 5, where blue is the SPC ion density $n_p$ and orange is the QTN electron density $n_e$. We can see that $n_p$ is systematically lower than $n_e$ and the difference can be as large as $\sim 30\%$ for some time periods.
As we expect the QTN measurements to be more accurate than the SPC measurements, this indicates that the real ion density is larger than the SPC data used in the current study. As a result, the magnetic energy density $E_b = b^2/\mu_0 \rho$ calculated here is larger than the real value, leading to an overestimate of the magnetic energy excess over the kinetic energy. Thus, we used the QTN-derived density to redo the calculation of the magnetic energy, $\sigma_c$, and $\sigma_r$. The result is not presented here, but we confirm that the effect of this density difference is not significant and does not change the low Alfv\'enicity in E5. In Fig. \ref{fig:connection_to_corona}, we show the SDO/HMI image of the whole disk of the Sun taken on June 16, 2020. During most of Encounter 5, PSP was flying over this side of the Sun, which is very quiet as can be seen from the image. We note that this image was not taken during the period over which the PSP data were analyzed (May 30-June 13, 2020): PSP was not on the Sun-Earth line during E5, so there is a time lag between the encounter and the time when SDO viewed the solar surface over which PSP flew. In the right panel of Fig. \ref{fig:connection_to_corona}, we show the map of magnetic pressure at $R=1.2R_s$, which was calculated using the PFSS model with the source surface set to $R_{ss}= 2.5R_s$ and the SDO/HMI measurements as input. The blue diamond is the direct radial projection of PSP to the source surface and the blue crosses are the foot points of the magnetic field lines connected to PSP on the source surface. Different crosses correspond to predictions using varying wind speeds, from 230-80km/s to 230+80km/s. The blue circles are on the surface $R=1.2R_s$ and are magnetically connected to the blue crosses according to the PFSS model results. The detailed procedure to create this plot can be found in \citet{panasenco2020exploring} and Velli et al. (2021, this issue). We can see that in this time period PSP was connected to the boundary of the northern polar coronal hole without any activity nearby, neither active regions nor pseudo-streamers, which were shown to be crucial in generating the Alfv\'enic slow wind observed in Encounter 1 \citep{panasenco2020exploring}. For most of E5, PSP was magnetically connected to the boundaries of either the northern or southern polar coronal hole (Velli et al., 2021, this issue). This may be relevant in explaining why the slow wind observed during Encounter 5 is non-Alfv\'enic despite the nearly incompressible fluctuations. One possibility is the different ion compositions in the slow wind originating from different regions. For example, if the slow wind that originates near the boundaries of polar coronal holes comprises more helium or heavier ions, which are not considered in the current study, the real plasma density should be larger than our estimate. As a result, the real magnetic energy density should be smaller than our calculation. If so, $\sigma_r$ should be closer to 0 and $\sigma_c$ should be closer to 1, that is, the Alfv\'enicity of the wind should be larger than our estimate. Further analysis of the ion composition is necessary, but this is beyond the scope of the current study. Other mechanisms are also possible. For example, if the Alfv\'en waves in the slow wind originating near the polar coronal holes experience strong reflection due to a large inhomogeneity of the background Alfv\'en velocity, the resulting Alfv\'enicity will be low.
Modeling the propagation of Alfv\'en waves in different regions of the Sun will be a topic of future work. We conclude here that the coronal magnetic structures play a key role in the Alfv\'enic properties of the solar wind. \subsection{Effect of the heliospheric current sheets} The right column of Fig. \ref{fig:blow_up} shows the time period from 18:00 June 7 to 00:00 June 9, 2020 during Encounter 5. In this time period, PSP crossed a plasma sheet, inside which the ion density, speed, and temperature were all enhanced while the magnetic field strength was weakened with multiple polarity reversals. These measurements imply that PSP crossed the heliospheric current sheet, which is typically embedded inside a plasma sheet \citep{Smith2001}, multiple times. The turbulence properties inside this plasma sheet are very different compared with those in the normal solar wind streams. First, the spectra of both the magnetic field and velocity become steeper, with slopes close to $-2$, because of the frequent discontinuities. Second, $\sigma_c$ is on average close to 0, that is, there are no well-defined Alfv\'enic fluctuations, or the outward and inward propagating Alfv\'en waves are strongly mixed. Third, $\sigma_r$ is close to -1, implying magnetic-dominant fluctuations. During Encounter 4, from January 17 to January 20, 2020, PSP also crossed current sheets multiple times, and one can observe from the middle column of Fig. \ref{fig:overview} that in this time period, $\sigma_c$ was frequently negative and $\sigma_r$ was very low. These measurements suggest that current sheets may also play an important role in generating the low $\sigma_c$ and $\sigma_r$ fluctuations observed in slow streams such as that shown in the middle column of Fig. \ref{fig:blow_up}. \citet{malara1996gompressive}, via 2.5D MHD simulations of Alfv\'en waves on top of a current sheet, showed that the initially large $\sigma_c$ is rapidly destroyed in the vicinity of the current sheet, supporting our observation. If we assume that these fluctuations in the slow wind are strongly affected by current sheets, such that they are non-Alfv\'enic at their origins, then we need to explain why the magnitude of the magnetic field is still nearly constant. The firehose instability may play a key role here, as \citet{tenerani2018nonlinear} showed that magnetic field fluctuations in high-$\beta$ plasma naturally relax to a constant-$|B|$ state due to the firehose instability. \section{Conclusions} In this study, we have analyzed data from the first five orbits of PSP. We focus on the properties of the MHD-scale turbulence and how they vary with the large-scale solar wind streams. A general nonlinear steepening of the magnetic field spectrum from a $-3/2$ slope toward a $-5/3$ slope is observed statistically. The progress of this steepening depends on both the wind speed and the radial distance to the Sun, suggesting the existence of a ``turbulence age'' that controls the steepening process (see Fig. \ref{fig:spectral_slopes_2D}). The slope of the velocity spectrum, on the contrary, remains almost constant at $-3/2$. The observed spectral evolution indicates that, on average, the magnetic field and velocity have similar spectra in the very young solar wind and their spectra evolve differently. Better theoretical models are still needed to explain this differential evolution of the velocity and magnetic field spectra; this will be a future research topic.
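For concreteness, the following minimal sketch (our illustration, with placeholder sampling parameters; the actual analysis integrates over the wave modes described above rather than using a plain periodogram) shows how a spectral slope of this kind can be estimated from a fluctuation time series:
\begin{verbatim}
import numpy as np

def spectral_slope(y, dt, f_lo, f_hi):
    """Estimate the power-law slope of the trace power spectrum of a
    fluctuation time series y (shape (N, 3)) between f_lo and f_hi [Hz],
    via an FFT periodogram and a log-log least-squares fit."""
    N = y.shape[0]
    f = np.fft.rfftfreq(N, d=dt)
    psd = sum(np.abs(np.fft.rfft(y[:, i]))**2 for i in range(3)) * 2 * dt / N
    m = (f >= f_lo) & (f <= f_hi)
    slope, _ = np.polyfit(np.log(f[m]), np.log(psd[m]), 1)
    return slope  # close to -5/3 or -3/2 in the inertial range
\end{verbatim}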
We investigated the Alfv\'enicity of the turbulence through two widely used diagnostics, namely the normalized cross helicity $\sigma_c$, which measures the relative abundance of outward and inward propagating Alfv\'en wave energies, and the normalized residual energy $\sigma_r$, which measures the relative abundance of magnetic and kinetic energies. Statistically, turbulence in the fast solar wind is more ``Alfv\'enic'' than that in the slow wind, as $\sigma_c$ is closer to 1 and $\sigma_r$ is closer to 0 in the fast wind. During radial evolution, in general, the dominance of outward propagating waves gradually weakens, manifested as a decreasing $\sigma_c$ (see left panel of Fig. \ref{fig:sigma_c_sigma_r}). The magnetic-kinetic energy comparison is surprising, as our result shows that the magnetic energy significantly exceeds the kinetic energy close to the Sun and gradually relaxes to a balanced state. This is in contrast to the commonly accepted idea that the magnetic energy excess is a result of the dynamic evolution of MHD turbulence \citep[e.g.,][]{grappin1983dependence}. A similar result was reported by \citet{bavassano1998cross}, who analyzed Ulysses data and showed that the least evolved high-latitude stream has the strongest imbalance between magnetic and kinetic energies compared with more evolved mid- and low-latitude streams. They attributed this phenomenon to the abundance of pickup ions in the polar region, which modifies the normalization of the fluctuations in Alfv\'en units. However, other mechanisms, such as the contribution of heavy ions and the effect of the velocity shears, may also play important roles. We note that the above results are all based on a statistical analysis. In practice, individual streams can be quite different from each other and one cannot simply infer the turbulence properties from the wind speed. For example, from Figs. \ref{fig:overview} \& \ref{fig:blow_up}, we observe that slow streams with a similar speed ($\sim 300$km/s) can be either highly Alfv\'enic (Encounter 1) or non-Alfv\'enic (Encounters 4\&5). To fully understand the cause of these differences, we must examine the origin of each individual solar wind stream, because the location of the origin can significantly impact the Alfv\'enicity of the slow wind \citep{DAmicisandBruno2015,panasenco2020exploring}. In addition, it is possible that the large-scale structures, such as the heliospheric current sheets and velocity shears, greatly modify the turbulence properties at a very early stage \citep[e.g.,][]{Robertsetal1992,Shietal2020}. \begin{acknowledgements} This research was funded in part by the FIELDS experiment on the Parker Solar Probe spacecraft, designed and developed under NASA contract NNN06AA01C and the NASA Parker Solar Probe Observatory Scientist grant NNX15AF34G. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Cosmological structure formation is a nonlinear process. It can be described either in Eulerian space, where the evolution equations contain terms quadratic in the deviations from a Friedmann-Robertson-Walker (FRW) Universe (the density contrast, the peculiar velocity, and all higher moments of the matter distribution functions), or in Lagrangian space, where the force is a nonlinear functional of the displacement field. These nonlinear terms are, in general, always present, even at the earliest stages of the perturbation growth. However, their effect is small at early times, and therefore it is believed that they can be treated perturbatively at sufficiently early times and on sufficiently large scales (for reviews on cosmological perturbation theory (PT) see \cite{PT} and \cite{Bernardeau:2013oda}). Indeed, nonlinear and nonperturbative are not synonyms, and one can expect that nonlinear quantities, such as, for instance, the variance of the density field, its power spectrum (PS), and so on, can be represented by a perturbative expansion in some parameter. On the other hand, it is well known that PT expansions, both Eulerian and Lagrangian, are doomed to fail at some point. In Eulerian PT, one truncates the full Boltzmann hierarchy to the continuity and the Euler equations, by setting to zero the velocity dispersion. This {\em single stream approximation} corresponds to assigning a unique velocity value to any space-time point, and therefore breaks down after {\em shell-crossing}, when multiple streams of matter coexist in the same region of space. In Lagrangian coordinates, the force becomes intrinsically nonlocal after shell-crossing, and therefore it cannot be represented as a local expansion in the displacement field and its derivatives. In order to extend the PT approaches beyond shell-crossing, some information on the effect of multistreaming has to be provided. This is the approach followed in the ``Coarse Grained PT'' of \cite{Pietroni:2011iz, Manzotti:2014loa}, recently implemented in the Time Renormalization Group (TRG) evolution equations \cite{Peloso:2016qdr, Noda:2017tfh,Nishimichi:2017gdq}. The idea there is to keep the full Boltzmann hierarchy, and to treat the higher moments as external sources to be computed from N-body simulations. These sources carry the fully nonperturbative information encoded in the small (UV) scales, including the shell-crossing effects. Earlier work along similar lines can be found in \cite{Pueblas:2008uv}. A related approach is the Effective Field Theory of the Large Scale Structure \cite{Carrasco:2012cv}, in which the source terms are expanded in terms of the long wavelength fields via a derivative expansion, whose coefficients are then fit by comparing with simulations. Besides these ``effective approaches'', the impact of shell-crossing, also in relation to the range of validity of PT approaches (including resummations), deserves more consideration. In \cite{Valageas:2010rx} the effect of shell-crossing on the PS was estimated by comparing the predictions of two dynamics which were exactly coincident before shell-crossing and drastically different afterwards. Assuming gaussian initial conditions and Zel'dovich dynamics before shell-crossing, the difference is due to field configurations populating the tails of the distribution, and is therefore nonperturbative in the PT expansion parameter, the variance of the field.
Valageas showed that these nonperturbative effects are subdominant with respect to PT corrections in a sizable range of wavenumbers $k$, rapidly increasing with redshift. More recently, cosmological PT and shell-crossing have been studied in the context of 1+1 dimensional gravity \cite{McQuinn:2015tva,Taruya:2017ohk,McDonald:2017ths,Pajer:2017ulp, Rampf:2017jan}. This setup has the advantage that the Zel'dovich dynamics is exact before shell-crossing and therefore one can hope to single out and model the effects of multistreaming more neatly. In particular, in \cite{McQuinn:2015tva} and \cite{Pajer:2017ulp} the behavior of the PT expansion was considered, and it was shown that it converges to the Zel'dovich approximation before shell-crossing. The importance of the nonperturbative corrections was emphasized in both papers. In \cite{Taruya:2017ohk} and \cite{McDonald:2017ths} semi-analytical approaches to go beyond shell-crossing were proposed, showing a clear improvement with respect to PT after first shell-crossing. It would be very interesting to extend the regime of validity of these approaches beyond second shell-crossing, possibly by some sort of resummation as envisaged in \cite{Taruya:2017ohk}. Finally, in \cite{Rampf:2017jan}, the analysis was extended beyond 1+1 dimensions in a controlled perturbative expansion valid up to shell-crossing. In this paper we further investigate structure formation in 1+1 gravity providing, we believe, some insight into two aspects. First, we study the impact of shell-crossing on the Eulerian PT expansion. We regulate the divergence of the density contrast induced by the singularity of the Lagrangian to Eulerian mapping after shell-crossing and study the analyticity properties of the density field with respect to the regulator. Then, by taking cosmological averages of the density field, we show the crucial role of the divergent behavior in linking the PT expansion with the nonperturbative contributions. The two sectors are then, perhaps surprisingly, intimately related, at least in our toy example. We then turn to the real dynamics, and consider the regime of deep multistreaming, with the aim of finding a ``resummation'' of the shell-crossing effects. We show explicitly that the evolution equations have an attractor in that regime. For simple initial conditions it coincides with the prescription of the ``adhesion model'' for structure formation \cite{Gurbatov:1989az,Dubrulle:1994psg,Bernardeau:2009ab,Valageas:2010uh}, while for cosmological initial conditions this identification is not so immediate. We show how to compute this attractor from the Zel'dovich approximation, and discuss how to implement this procedure in a cosmological context, for instance, to predict halo positions and sizes. The paper is organised as follows. In section \ref{npterms} we classify the divergences induced by shell-crossing on the Eulerian density contrast, and then focus on the statistical distribution of maxima of this field. We derive an expansion for this quantity, which can be written as the sum of perturbative and nonperturbative terms, including a divergent one, and show that it converges to the exact result well beyond the PT range. In section \ref{exdyn} we introduce the exact equations of motion for the displacement field, and show the non-local nature of the force term after shell-crossing.
Then, in section \ref{numsol} we solve the equation numerically for a simple set of initial conditions, and in section \ref{pattr} we show analytically the existence of an attractor in the multistreaming regime, and discuss its properties. In section \ref{cosmosym} we present a cosmological simulation (in 1+1 dimensions!) and implement an algorithm to simulate the attractor solution starting from the Zel'dovich approximation. We show how this algorithm can be used to predict the location and sizes of ``haloes'' in the real simulation. Finally, in section \ref{out} we summarize our findings and discuss some possible future developments. \section{Shell-crossing and non-perturbative terms} \label{npterms} \subsection{Density contrast after shell-crossing} The mapping between the initial (Lagrangian) position of a given particle, $q$, and its later (Eulerian) position $x$ at a conformal time $\tau$ is given by \begin{equation} x(q,\tau)=q+\Psi(q,\tau)\,, \end{equation} where $\Psi(q,\tau)$ is the displacement field, with $\Psi(q,\tau=0)=0$. Assuming a uniform density at $\tau=0$, the density contrast in Eulerian space, $\delta(x,\tau)=-1+\rho(x,\tau)/\bar\rho$, is given by \begin{equation} \delta(x,\tau)=\int dq\;\delta_D\left(q+\Psi(q,\tau) - x\right)-1\,. \label{0map} \end{equation} Shell-crossing induces divergences in Eulerian space. These are, however, unphysical, as they are regulated in any physical realization, either by pressure, if dark matter is not perfectly cold, or by the finite spatial resolution. Therefore, in order to discuss the impact of shell-crossing on the Eulerian quantities, and on the PT expansion, it is more physical to consider a smoothed version of \re{0map}, \begin{equation} \bar\delta(\bar x,\tau;\sigma)\equiv \int\frac{dx}{\sqrt{2\pi\sigma^2}}\; e^{-\frac{(x-\bar x)^2}{2 \sigma^2}} \,\delta(x,\tau)= \int\frac{dq}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{( q+\Psi(q,\tau)-\bar x)^2}{2 \sigma^2}} -1\,, \label{Gmap} \end{equation} and to study the analyticity properties in the $\sigma^2\to 0$ limit. The integral in \re{Gmap} is dominated by regions of $q$ around the values $q_i(\bar x)$, such that \begin{equation} x(q,\tau)-\bar x= q+\Psi(q,\tau)-\bar x=0\,. \label{S} \end{equation} We can now classify the different types of small $\sigma^2$ behavior by considering the derivatives of $\Psi(q,\tau)$ with respect to $q$ at the roots of eq.~\re{S}, which we will indicate with $q_i(\bar x)$, and which we will number in increasing order by the natural index $i$, that is, $q_1(\bar x)\le q_2(\bar x)\le q_3(\bar x)\le \cdots$. The first case to be considered is when (primes denote derivatives with respect to $q$),\\ {\bf 1) $ x'(q_i(\bar x),\tau) \neq 0$, for any $q_i(\bar x)$ (as in Fig.~\ref{configs} a) and b)):} \noindent Around each of the $q_i(\bar x)$'s, we can compute the integral in the steepest descent approximation, \begin{eqnarray} &&\bar\delta(\bar x,\tau;\sigma)\simeq \sum_i \int\frac{dq}{\sqrt{2\pi\sigma^2}}\; e^{-\frac{\left(1+ \Psi'(q_i(\bar x),\tau) \right) ^2}{2 \sigma^2} \left(q-q_i(\bar x)\right)^2} -1\,,\nonumber\\ &&\qquad\quad\;\;\,= \sum_i \frac{1}{\left| 1+ \Psi'(q_i(\bar x),\tau) \right|}-1\,. \label{1d} \end{eqnarray} The approximation at the first line is exact in the $\sigma^2\to 0$ limit, that is, when $\sigma$ is much smaller than the distance between roots, and leads to the $\sigma^2$-independent result of the last line.
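As a quick numerical illustration (a sketch of ours, with hypothetical parameter values, not results used elsewhere in the paper), one can check eq.~\re{1d} against a direct quadrature of \re{Gmap} for a cubic displacement field with three streams at $\bar x=0$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Hypothetical parameters: Psi(q) = -A*q + q**3/qbar**2 with A > 1
# gives three streams at xbar = 0.
A, qbar, sigma, xbar = 1.5, 1.0, 1e-3, 0.0
x = lambda q: q*(1.0 - A) + q**3/qbar**2

# smoothed density contrast, eq. (Gmap), by direct quadrature
q = np.linspace(-3.0, 3.0, 400001)
gauss = np.exp(-(x(q) - xbar)**2/(2*sigma**2))/np.sqrt(2*np.pi*sigma**2)
delta_quad = np.trapz(gauss, q) - 1.0

# sum over the roots of x(q) = xbar, eq. (1d)
roots = [brentq(lambda s: x(s) - xbar, a, b)
         for a, b in [(-2, -0.1), (-0.1, 0.1), (0.1, 2)]]
delta_roots = sum(1.0/abs(1.0 - A + 3*r**2/qbar**2) for r in roots) - 1.0

print(delta_quad, delta_roots)  # both close to 3 for these values
\end{verbatim}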
The requirement that $x(q,\tau)$ is a continuous function of $q$ taking all values from $-\infty$ to $+\infty$ guarantees that for any $\bar x$ there is an odd number of roots $q_i(\bar x)$, corresponding to the number of streams in $\bar x$, and that, by ordering the $q_i(\bar x)$'s in increasing values, the signs of $x'(q_i(\bar x),\tau)= 1+ \Psi'(q_i(\bar x),\tau)$ are alternating, starting from positive. In the {\it single stream} case, Fig.~\ref{configs} a), there is only one root, $q(\bar x)$, therefore $1+ \Psi'(q(\bar x),\tau) >0$ and the density contrast is given by \begin{equation} \bar\delta(\bar x,\tau;\sigma) = -\frac{ \Psi'(q(\bar x),\tau)}{1+ \Psi'(q(\bar x),\tau)}\,. \label{ssc} \end{equation} The denominator can be expanded perturbatively in $\Psi'(q(\bar x),\tau)$, and the series is guaranteed to converge since $\left| \Psi'(q(\bar x),\tau)\right| <1$. Moreover, the single stream hypothesis ensures that the mapping between Lagrangian and Eulerian space can be inverted perturbatively, giving the PT expansion for $\bar\delta(\bar x,\tau;\sigma)$ \cite{McQuinn:2015tva,Pajer:2017ulp}. Next, we consider the case in which at one of the $q_i(\bar x)$ roots one has \\ {\bf 2) $ x'(q_i(\bar x),\tau)=0$, $x''(q_i(\bar x),\tau)\neq 0$ (see root $q_1$ in Fig.~\ref{configs} c)):} \noindent Around that point, we have \begin{eqnarray} &&\bar\delta(\bar x,\tau;\sigma)\simeq \int\frac{dq}{\sqrt{2\pi\sigma^2}}\; e^{-\frac{\left(\Psi''(q_i(\bar x),\tau) \right) ^2}{8\, \sigma^2} \left(q-q_i(\bar x)\right)^4} +``\mathrm{other\;roots}"-1\,,\nonumber\\ &&\qquad\quad\;\;\, = \frac{\Gamma\left(\frac{1}{4}\right)}{\pi^{1/2}|\Psi''(q_i(\bar x),\tau)|^{1/2} \left(8 \,\sigma^2\right)^{1/4}}+``\mathrm{other\;roots}"-1\,, \label{d2} \end{eqnarray} where $\Gamma(x)$ is Euler's gamma function, and with ``other roots'' we indicate the contribution from the remaining (at least one) roots. Comparing with \re{1d} we notice some very relevant differences. First of all, the divergence in the $\sigma^2 \to 0$ limit signals that the mapping is singular at $q_i(\bar x)$, whereas it is invertible on a finite interval around any of the roots in \re{1d}. Moreover, the non-analyticity in $\sigma^2$ signals the onset of non-locality in Lagrangian space, as the present situation occurs when two new streams are reaching $\bar x$. Finally, eq.~\re{d2} clearly does not admit a PT expansion in powers of $\Psi$ or its derivatives. The third case we will discuss is: \\ {\bf 3) $x'(q_i(\bar x),\tau)=x''(q_i(\bar x),\tau)= 0$, $x^{\prime\prime\prime}(q_i(\bar x),\tau)\neq 0$ (as in Fig.~\ref{configs} d))}: \noindent This case (if $x^{\prime\prime\prime}(q_i(\bar x),\tau)>0$) corresponds to $q_i(\bar x)$ being a point of first shell-crossing. Now, \begin{eqnarray} &&\bar\delta(\bar x,\tau;\sigma)\simeq \int\frac{dq}{\sqrt{2\pi\sigma^2}}\; e^{-\frac{\left(\Psi'''(q_i(\bar x),\tau) \right) ^2}{72\, \sigma^2} \left(q-q_i(\bar x)\right)^6} +``\mathrm{other\;roots}"-1\,,\nonumber\\ &&\qquad\quad\;\;\, = \frac{\Gamma\left(\frac{1}{6}\right)}{\pi^{1/2}|\Psi'''(q_i(\bar x),\tau)|^{1/3} \left(9 \,\sigma^2\right)^{1/3}}+``\mathrm{other\;roots}"-1\,, \label{d4} \end{eqnarray} which, as expected, is more singular than \re{d2} in the $\sigma^2\to 0$ limit. Increasing the degree of ``flatness'' of $x(q,\tau)$ in Lagrangian space leads to more and more singular behavior for the density contrast in Eulerian space, which, in turn, dictates the failure of the Eulerian PT expansion.
\begin{figure}[t] \centering \includegraphics[width=.65\textwidth,clip]{graphs.pdf} \caption{Various examples of correspondence between Lagrangian and Eulerian space. $a)$ shows a situation in which there is a single stream everywhere in Eulerian space. In $b)$ there are three streams in $\bar x$, corresponding to the three roots $q_i(\bar x)$. In $c)$, $x'(q,\tau)=0$ in $q_1$: this is the border between case $a)$ and $b)$. Finally, in panel $d)$, point $q_1(\bar x)$ corresponds to a point of first shell-crossing: $x'(q_1(\bar x),\tau)=x''(q_1(\bar x),\tau)=0$, $x'''(q_1(\bar x),\tau)>0$.} \label{configs} \end{figure} In order to perform a concrete computation we will consider the case in which the point $x=q=0$ is a maximum of the initial density field, \begin{equation} \left.\frac{\partial \delta(x,\tau_{in})}{\partial x}\right|_{x=0} =0\,,\qquad \left.\frac{\partial^2 \delta(x,\tau_{in})}{\partial x^2}\right|_{x=0} < 0\,. \end{equation} In this case, the initial displacement field has vanishing second derivative in $q=0$ and can be written as (see eq.~\re{ssc}), \begin{equation} \Psi(q,\tau_{in}) = -qA_{in}+\frac{q^3}{\bar q^2} \,, \end{equation} where $\bar q$ is a fixed length scale and, when $A_{in} \ll 1$, we can identify, \begin{eqnarray} && A_{in}= \delta(x=0,\tau_{in})\,,\nonumber\\ && \frac{1}{\bar q^2}= -\frac{1}{6}\delta''(x=0,\tau_{in})\,. \label{2dc} \end{eqnarray} In the Zel'dovich approximation (see next section) the displacement field evolves as \begin{equation} \Psi(q,\tau)=\frac{D(\tau)}{D(\tau_{in})}\Psi(q,\tau_{in}) = - A(\tau)q+ \frac{q^3}{\bar q^2(\tau)}\,, \label{qpsi} \end{equation} where $D(\tau)$ is the linear growth factor ($D(\tau)=a(\tau)$ in the Einstein de Sitter cosmology) and we have defined \begin{eqnarray} &&A(\tau)=\frac{D(\tau)}{D(\tau_{in})} A_{in} = \delta_L(x=0,\tau) \,,\nonumber\\ &&\frac{1}{\bar q^2(\tau)} =\frac{D(\tau)}{D(\tau_{in})}\frac{1}{\bar q^2} = -\frac{1}{6}\delta_L''(x=0,\tau)\,, \label{At} \end{eqnarray} where $\delta_L(x=0,\tau) $ is the linearly evolved density field. The root equation \re{S} in $x=0$ has now one real solution, $q_1(0)=0$, for $A(\tau)<1$, and three solutions, \begin{equation} q_{1,3}(0)= \mp\; \bar q(\tau) \,\sqrt{A(\tau)-1}\,,\qquad\qquad q_2(0)=0\,, \end{equation} for $A(\tau)>1$. For $A=1$ the three real solutions coincide. The regularized density contrast, \re{Gmap}, is now \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\bar\delta(\bar x, \tau;\sigma)=\int\frac{dq}{\sqrt{2 \pi \sigma^2}} \;e^{-\frac{1}{2\sigma^2}\left(q(1-A) + \frac{q^3}{\bar q^2}-\bar x\right)^2}-1 = \int\frac{dy}{\sqrt{2 \pi}} \;e^{-\frac{1}{2}\left(y(1-A) + \epsilon^2 y^3-\bar x\right)^2}-1\,,\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\,\,\equiv\Delta(\bar x, A(\tau);\epsilon(\tau)) \label{intA} \end{eqnarray} where we have defined the effective regularization parameter, given by the ratio between the smoothing scale in Eulerian space, $\sigma^2$, and the scale setting the distance between the roots in Lagrangian space, $\bar q^2$, \begin{equation} \epsilon^2(\tau)\equiv \sigma^2/\bar q^2(\tau)\,. \end{equation} Notice that, if one sends the scale $\bar q$ to infinity, the density contrast in $\bar x=0$ diverges, at $A=1$, even if the smoothing parameter, $\sigma^2$, is kept fixed. In other words, the would-be short-distance (local) divergence in Eulerian space around $\bar x=0$ is controlled by the large-distance (nonlocal) structure in Lagrangian space.
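A direct numerical evaluation of \re{intA} makes this behavior explicit. The following sketch (ours, with an arbitrary small value of $\epsilon$) reproduces the three regimes that are derived analytically right below:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Evaluate Delta(xbar=0, A; eps) of eq. (intA) by quadrature.
def Delta(A, eps):
    f = lambda y: np.exp(-0.5*(y*(1.0 - A) + eps**2*y**3)**2)/np.sqrt(2*np.pi)
    val, _ = quad(f, -np.inf, np.inf, limit=400)
    return val - 1.0

for A in (0.5, 1.0, 1.5):
    print(A, Delta(A, eps=1e-2))
# A < 1 and A > 1 approach A/(1-A) and (3-A)/(A-1) respectively,
# while at A = 1 the result grows like eps**(-2/3).
\end{verbatim}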
Performing the integral in \re{intA} in the $\epsilon\to 0$ limit gives, in $\bar x=0$, \begin{eqnarray} &&\!\!\!\!\!\qquad\qquad\quad\,\frac{1}{1-A} -1 = \frac{A}{1-A}\qquad\qquad \qquad\qquad\qquad\quad\;\;\;(A<1)\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\! \Delta(\bar x=0, A;\epsilon)= \,\quad\frac{\Gamma\left(\frac{1}{6}\right)}{ 2^{1/3} 3 \,\pi^{1/2}} \frac{1}{(\epsilon^2)^{1/3}}-1 \qquad\qquad\qquad\qquad\qquad\quad\;\; (A=1)\nonumber\\ &&\qquad\quad\quad\;\,\frac{1}{2(A-1)}+\frac{1}{A-1}+\frac{1}{2(A-1)}-1=\frac{3-A}{A-1}\qquad (A>1)\,, \label{ds} \end{eqnarray} where, for the $A>1$ case, we have shown explicitly the contribution of each root. Before shell-crossing ($A<1$) the solution can be expanded perturbatively in the ``time'' parameter $A$ around $A=0$, and the series is guaranteed to converge as it has convergence radius $|A|=1$. For $A=1$ we recover the result \re{d4}, and the singular dependence $( \Psi^{\prime\prime\prime}(0,\tau_{sc})\sigma^2)^{-1/3}\sim(\epsilon^2) ^{-1/3}$. \subsection{Non-perturbative cosmological expansion} To see how the singular behavior at shell-crossing manifests itself in cosmological averages, and its impact on the cosmological PT expansion, we perform a Gaussian average of \re{intA} with respect to $A$. The quantity we obtain corresponds to the average of the nonlinear field evaluated at positions corresponding to a maximum of the linear density field. Moreover, to simplify the computation, we keep the second derivative of the field fixed at the value \re{At}. We get, \begin{eqnarray} &&\langle \Delta(\bar x=0, A;\epsilon) \rangle_{\sigma_A}=\int\frac{d A}{\sqrt{2\pi \sigma_A^2}}\;e^{-\frac{A^2}{2\sigma_A^2}} \int\frac{dy}{\sqrt{2 \pi}} \;e^{-\frac{1}{2}\left(y(1-A) + \epsilon^2 y^3\right)^2}-1\,,\nonumber\\ &&\qquad\qquad\quad\quad\;\;\;\;\,\,\,= \int\frac{dy}{\sqrt{2 \pi}} \frac{e^{-\frac{y^2 (1+\epsilon^2 y^2)^2}{2\left(1+y^2 \sigma_A^2\right)}}}{\sqrt{1+y^2 \sigma_A^2}}-1\,, \label{it} \end{eqnarray} where, from \re{At}, we have \begin{equation} \sigma_A^2 = \langle \delta_L^2(\tau) \rangle\,, \end{equation} which will play the role of the PT expansion parameter in what follows. Being constrained to run over maxima of the density field, the average \re{it} does not vanish. A power series in $\sigma_A^2$, \begin{equation} \sum_{n=1}^{N_\mathrm{max}} c_{2n}(\epsilon) \sigma_A^{2n}\,, \label{PTd} \end{equation} clearly is not enough to represent \re{it}. By Taylor expanding in $\sigma_A^2$ around $\sigma_A^2=0$, one easily realizes that the $c_{2n}(\epsilon)$'s are all finite for $\epsilon \to 0$, as the corresponding integrals are all cut off by the $e^{-y^2/2}$ factor. On the other hand, eq.~\re{it} diverges logarithmically for $\epsilon=0$, with a coefficient which is nonperturbative (and non-analytic) in $\sigma_A^2$. This can be seen by considering the regime $\epsilon^2\ll \sigma_A^2$, and evaluating the contribution to the integral from the region $ 1/\sigma_A^2 \ll y^2 \ll 1/\epsilon^2$, in which it can be approximated as \begin{equation} \sim 2 e^{-\frac{1}{2 \sigma_A^2}} \int_{1/\sigma_A}^{1/\epsilon}\frac{dy}{\sqrt{2 \pi y^2 \sigma_A^2}}\sim \log(\sigma_A^2/\epsilon^2)\frac{e^{-\frac{1}{2 \sigma_A^2}} }{\sqrt{2\pi \sigma_A^2}}\,.
\label{logdiv1} \end{equation} Indeed, as shown in \ref{transerie}, the integral can be represented with a {\it transseries} with the following structure (for an introduction to transseries and resurgence, see \cite{Aniceto:2018bis} and references therein), \begin{eqnarray} && \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\! \langle \Delta(\bar x=0, A;\epsilon) \rangle_{\sigma_A}\simeq \sum_{n=1}^{N_{\mathrm{max}}} c_{2n}(\epsilon) \sigma_A^{2n} \nonumber\\ &&\qquad\quad +\frac{e^{-\frac{1}{2 \sigma_A^2}}}{ \sqrt{2\pi \sigma_A^2}} \left(\log(\sigma_A^2/\epsilon^2)+ C_0(\epsilon)+\sum_{m=1}^{N_{\mathrm{max}}} d_{2m}(\epsilon)\sigma_A^{2m}\right) +\cdots\,, \label{tss} \end{eqnarray} where all the $c_{2n}(\epsilon)$ and $d_{2m}(\epsilon)$ are regular in the $\epsilon \to 0$ limit. The perturbative coefficients are, for $\epsilon=0$, \begin{equation} c_{2n}(0)=(2n-1)!!\,. \label{c2n} \end{equation} The remaining terms are all nonperturbative, as, due to the $\exp(-1/2\sigma_A^2)$ factor, they go to zero faster than any power of $\sigma_A^2$ as $\sigma_A^2\to 0$, and therefore cannot be captured at any order of the Taylor expansion. Among the nonperturbative coefficients, $C_0(\epsilon)$ depends on the particular $\epsilon$-regularization procedure and cannot be computed in the general case (as it can be reabsorbed in the logarithmic term by a redefinition of $\epsilon$). On the other hand, the other coefficients can be computed analytically by taking the appropriate number of derivatives with respect to $\sigma_A$, evaluated in $\sigma_A=0$, of the expression, \begin{equation} \sum_{m=1}^{N_{\mathrm{max}}} d_{2m}(0) \sigma_A^{2m}=-\,e^{\frac{1}{2\sigma_A^2}}\sum_{n=0}^{N_{\mathrm{max}}} ( 2 \sigma_A^2)^{n+1/2}\;\Gamma\left(n+\frac{1}{2},\frac{1}{2\,\sigma_A^2}\right)\,, \label{largecoeff} \end{equation} where $\Gamma(n,x)=\int_x^\infty dt \;t^{n-1}e^{-t}$ is the incomplete Gamma function. The dots in eq.~\re{tss} represent nonperturbative terms that are more suppressed than $\exp(-1/2\sigma_A^2)$, see \ref{transerie}. Notice that the perturbative and the nonperturbative sectors of the expansion are intimately related, as they originate from the same integral (see eq.~\re{A5}). Indeed, the relation can be made more explicit by writing them collectively as \begin{eqnarray} &&\sum_{n=1}^{N_{\mathrm{max}}} \left(c_{2n}(0) +\frac{e^{-\frac{1}{2 \sigma_A^2}}}{ \sqrt{2\pi \sigma_A^2}} d_{2n}(0)\right)\sigma_A^{2n} \nonumber\\ &&= \frac{1}{\sqrt{\pi}} \sum_{n=0}^{N_{\mathrm{max}}} (2\sigma_A^2)^n\left[\Gamma\left(n+\frac{1}{2}\right) -\Gamma\left(n+\frac{1}{2},\frac{1}{2\,\sigma_A^2}\right) \right]-1\,, \label{sumnpt} \end{eqnarray} with $\Gamma(n+1/2)=\sqrt{\pi}(2n-1)!!/2^n$. As $\Gamma(n,0)=\Gamma(n)$, we see that the coefficients of the nonperturbative series coincide with those of the perturbative one in the $\sigma_A\to \infty$ limit. The reason for this remarkable connection is clear: in the $\epsilon/\sigma_A \to 0$ limit the expression \re{tss} has to diverge as the full integral \re{it}, that is, as dictated by the logarithm in \re{logdiv1} or \re{tss}, and the remaining terms in the expansion should conspire so as to give a finite quantity. In other words, the divergences induced by shell-crossing provide a bridge between the perturbative and the nonperturbative sectors. The PT expansion misses the latter, and therefore is doomed to fail after shell-crossing.
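To make the failure of the truncated power series tangible, here is a minimal numerical sketch (ours; the parameter values are illustrative) comparing the full average, eq.~\re{it}, with the purely perturbative truncation built from the coefficients \re{c2n}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial2

def full_average(sA, eps):
    # the exact Gaussian-averaged density at a maximum, eq. (it)
    f = lambda y: (np.exp(-0.5*y**2*(1.0 + eps**2*y**2)**2/(1.0 + y**2*sA**2))
                   /np.sqrt(2*np.pi*(1.0 + y**2*sA**2)))
    val, _ = quad(f, -np.inf, np.inf, limit=500)
    return val - 1.0

def pt_truncation(sA, Nmax):
    # sum of c_{2n}(0) sigma_A^{2n} with c_{2n}(0) = (2n-1)!!
    return sum(factorial2(2*n - 1)*sA**(2*n) for n in range(1, Nmax + 1))

for sA in (0.2, 0.4, 0.6):
    print(sA, full_average(sA, eps=1e-4),
          [pt_truncation(sA, N) for N in (2, 4, 8)])
\end{verbatim}
For small $\sigma_A$ the truncations track the integral, while for larger $\sigma_A$ the asymptotic series stalls and then grows with $N_\mathrm{max}$, missing the nonperturbative (logarithmic in $\epsilon$) piece entirely.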
This perturbative/nonperturbative connection is an example of {\it resurgent} behavior of a perturbative expansion, which appears almost ubiquitously in physical problems \cite{Aniceto:2018bis}. In order to explore this point further, we ask ourselves to what extent we can recover information on the full function, eq.~\re{it}, from the simple knowledge of its PT expansion, namely \re{PTd}, by using methods typical of the resurgence approach, such as Borel summation. We anticipate that this attempt will be unsuccessful; therefore, the reader not interested in the details of the Borel procedure can safely skip this part and jump to the text after eq.~\re{CCB}. As a first step, we define \begin{equation} \tilde g(z)\equiv \sum_{n=0}^\infty c_{2n}(0)\; z^{-n-1}\,, \end{equation} where the coefficients are given in \re{c2n}. $\tilde g(z)$ is related to the PT expansion \re{PTd} for $\epsilon=0$ by \begin{equation} \sum_{n=1}^\infty c_{2n}(0) \sigma_A^{2n}= \sigma_A^{-2}\,\tilde g(\sigma_A^{-2})-1\,. \end{equation} We then define the Borel transform of $\tilde g(z)$ as \begin{equation} \hat g(\xi) = \sum_{n=0}^\infty c_{2n}(0)\frac{\xi^n}{n!}\,, \end{equation} which, due to the $1/n!$ factors, has a finite radius of convergence, in which it converges to \begin{equation} \hat g(\xi) = \frac{1}{\sqrt{1-2\,\xi}}\,. \end{equation} Then, we can try to define a meaningful summation for the initial divergent series by transforming $\hat g(\xi)$ back via the directional Laplace transform \begin{equation} {\cal L}^\theta[\hat g](z) = \int_0^{e^{i\theta}\infty}\;d\xi\;e^{-z\xi}\,\hat g(\xi)\,, \end{equation} where the integral is taken on a half-line starting from the origin and making an angle $\theta$ with the positive real axis. Since the integrand is singular in $\xi=1/2$, the procedure has an ambiguity, in the form of a nonperturbative imaginary part emerging from the discontinuity of the directional Laplace transform as the $\theta\to 0$ limit is taken from above or from below, \begin{equation} \lim_{\theta\to 0^\pm} {\cal L}^\theta[\hat g](z) = e^{-z/2}\sqrt{\frac{\pi}{2 z}}\left(\mathrm{Erfi}\left(\sqrt{\frac{z}{2}}\right)\pm i \right)\,, \end{equation} which leads to the possible identification \begin{equation} \sum_{n=1}^\infty c_{2n}(0) \sigma_A^{2n}\simeq e^{-\frac{1}{2\sigma_A^2}}\sqrt{\frac{\pi}{2\sigma_A^2}}\left(\mathrm{Erfi}\left(\sqrt{\frac{1}{2\sigma_A^2}}\right)+ i\, C \right) -1\,, \label{CCB} \end{equation} where the $C$ constant contains the ambiguity of the procedure. It can be checked that the PT expansion of the expression at the RHS reproduces \re{PTd} at all orders, and moreover that it is finite for any value of $\sigma_A^2$. In Fig.~\ref{Ratioeps001} we show the ratios between different approximations to the full integral \re{it} and the integral itself, evaluated numerically. The non-convergence of the standard PT expansion (the first sum in \re{sumnpt}) is clear from the behavior of the dashed lines for increasing values of the truncation order, that is, of the value of $N_\mathrm{max}$ in the sums. On the other hand, the full result \re{tss} not only shows convergence, but also convergence to the correct function. In order to obtain these lines we had to tune the parameter $C_0(0)$ in \re{tss}, which cannot be extracted from our considerations in \ref{transerie}. However, this is done once and for all, as we checked that it depends neither on $\epsilon$ (as long as $\epsilon \ll \sigma_A$) nor on the truncation order of the full sum \re{sumnpt}.
We also show (by the brown dash-dotted line) the result of the Borel summation procedure described above, in which we also took the liberty of fine-tuning the $C$ parameter in \re{CCB} to imaginary values. As we see, this procedure gives a non-diverging result, but does not appear to converge to the true function \re{it}. This was to be expected, as the only input we gave was the perturbative coefficients in the $\epsilon \to 0$ limit, $c_{2n}(0)$, and therefore all the information on the logarithmic divergence of the integral was lost from the beginning. It would be interesting to see if this information can, at least in part, be recovered by giving instead the $c_{2n}(\epsilon)$'s, or at least some perturbative truncation in $\epsilon$ of these coefficients. \begin{figure}[t] \centering \includegraphics[width=.65\textwidth,clip]{Ratiodeltaepsilon001.pdf} \caption{The ratio between $\Delta$ as obtained in different approximations and the full one, obtained by integrating eq.~\re{it} numerically. The order refers to the value of $N_\mathrm{max}$ in the summations of eqs.~\re{PTd} and \re{tss}. For the dashed lines, the nonperturbative coefficients $d_{2n}$ have been set to zero, leaving only the SPT expansion. The dash-dotted line represents the Borel summation of eq.~\re{CCB}.} \label{Ratioeps001} \end{figure} \subsection{Fourier space} Before closing this section, we briefly outline how the post shell-crossing features discussed above should manifest themselves in Fourier space. Going back to the density contrast of eq.~\re{Gmap} and taking the Fourier transform, we get, \begin{equation} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\int d\bar x\,e^{i k \bar x}\left(-1+ \int\frac{dq}{\sqrt{2\pi \sigma^2}} e^{-\frac{(q+\Psi(q)-\bar x)^2}{2\sigma^2}} \right)=-2\pi\delta_D(k) +e^{-\frac{k^2 \sigma^2}{2}}\int dq\; e^{i k\left(q+\Psi(q)\right)}\,, \end{equation} of which we can safely take the $\sigma\to 0$ limit. Indeed, we could have directly started from the unregularized density contrast, eq.~\re{0map}, and we would have obtained the same result. The point is that the Fourier transformation acts as a regulator, where the role of the spatial resolution $\sigma$ of \re{Gmap} is played by $\sim 2\pi/k$. Following our case study of the cubic displacement field, \re{qpsi}, we expect that the role of $\epsilon$ is played, in Fourier space, by \begin{equation} \epsilon \to \frac{2 \pi}{k\,\bar q}\,. \end{equation} This relation summarizes very well the nature of the divergences for $\epsilon\to 0$, that is, for \begin{equation} k\,\bar q\gg 2\pi\,, \end{equation} of both \re{ds} and \re{tss}. They manifest themselves as Eulerian UV divergences (large $k$) induced by Lagrangian IR effects (non-locality on scales $O(\bar q)$). Our result \re{tss} indicates that, when considering statistically averaged quantities, shell-crossing should give rise to non-perturbative terms which are at most logarithmically divergent in $k$ for large $k$'s. \section{Exact dynamics in 1+1 dimensions} \label{exdyn} In this section we take into account the exact dynamics in one spatial dimension, and discuss the properties of the equation of motion and its solutions after shell-crossing.
The equation of motion for the displacement field is \begin{equation} \ddot \Psi(q,\tau)+{\cal H}\, \dot \Psi(q,\tau) = -\partial_x\Phi(x(q,\tau),\tau)\,, \label{eom1} \end{equation} where dots denote derivatives with respect to conformal time $\tau$ and ${\cal H}=\dot a/a$, where $a(\tau)$ is the scale factor. The gravitational potential satisfies the Poisson equation \begin{eqnarray} \partial^2_x \Phi(x,\tau) &=& \frac{3}{2}{\cal H}^2 \delta(x,\tau)\nonumber\\ & =& \frac{3}{2}{\cal H}^2\, \int d q\left(\delta_D\left(x-q- \Psi(q,\tau)\right)-\delta_D\left(x-q\right)\right)\,, \end{eqnarray} where we have used the relation \re{0map} for the density contrast. The force is therefore obtained by integration, \begin{equation} -\partial_x\Phi(x,\tau) = -\frac{3}{2}{\cal H}^2\, \int d q\left(\Theta\left(x-q- \Psi(q,\tau)\right)-\Theta\left(x-q\right)\right) + c(\tau)\,, \label{forcef} \end{equation} where $\Theta(x)$ is the Heaviside theta function, and the possibly time-dependent quantity $c(\tau)$ is zero in the ``CMB rest frame'' in which the force vanishes everywhere when all the particles are in their unperturbed positions, i.e. $\Psi(q,\tau)=0$. In the following, we will assume we are in that frame and will set $c(\tau)=0$. Eq.~\re{forcef} has a very simple physical interpretation. Since in one spatial dimension the force is independent of the distance, to compute the force at $x$ due to a segment of matter of infinitesimal length $dq$ that was initially at $q$, say, to the left (right) of $x$, it suffices to know whether this segment is still to the left (right), in which case there is no change in the force, or whether it has moved to the right (left), in which case $x$ will feel an excess infinitesimal force towards the right (left) given by \begin{equation} + (-) \frac{3}{2}{\cal H}^2\, dq\,. \end{equation} Therefore, counting the total amount of matter crossing $x$ from its initial position is exactly what the $\Theta$ functions in \re{forcef} do, keeping track of the sign of the crossing. The expression for the force can be written in a more useful form in terms of the roots $q_i(x,\tau)$ of eq.~\re{S}. Let's start from the case in which there is a single root, $q_1(x,\tau)$, as in panel $a)$ of Fig.~\ref{configs}. Then \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\int d q\left(\Theta\left(x-q- \Psi(q,\tau)\right)-\Theta\left(x-q\right)\right) =\lim_{L\to\infty} \int_{-L/2}^{q_1(x,\tau)}dq-\int_{-L/2}^{x}dq\nonumber\\ && \!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!= q_1(x,\tau)+\frac{L}{2}-\left(x+\frac{L}{2}\right)= -\Psi(q_1(x,\tau),\tau)\,\qquad\qquad \mathrm{(one\;stream\;in\;}x), \end{eqnarray} where we have introduced a finite box of length $L$, and then sent $L\to\infty$. Then, let's consider the case in which the point $x$ is in a region in which first shell-crossing has occurred, as in panel $b)$ of Fig.~\ref{configs}, and there are three coexisting streams. In this case, eq.~\re{S} has three solutions, $q_i(x,\tau)$ ($i=1,\cdots,3$). We then get, \begin{eqnarray} &&\lim_{L\to \infty}\int_{-L/2}^{L/2} d q\left(\Theta\left(x-q- \Psi(q,\tau)\right)-\Theta\left(x-q\right)\right) \nonumber\\ &&=\lim_{L\to \infty}\left[ \left(q_1(x,\tau) +\frac{L}{2}\right) + \left(q_3(x,\tau)-q_2(x,\tau)\right) -\left(x+\frac{L}{2}\right)\right]\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\! =-\Psi(q_1(x,\tau),\tau)+\Psi(q_2(x,\tau),\tau)-\Psi(q_3(x,\tau),\tau)\,\;\;\qquad\quad \mathrm{(three\;streams\;in\;}x).
\end{eqnarray} At this point, one realises that, considering an arbitrary (odd) number of roots of Eq.~\re{S}, $N_s(x,\tau)$, gives \begin{eqnarray} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \lim_{L\to \infty}\int_{-L/2}^{L/2} d q\left(\Theta\left(x-q- \Psi(q,\tau)\right)-\Theta\left(x-q\right)\right) =-\sum_{i=1}^{N_s(x,\tau)} (-1)^{i+1} \Psi(q_i(x,\tau),\tau)\,,\nonumber\\ &&\;\;\qquad\quad\qquad\quad\qquad\quad\qquad\quad\qquad\quad\qquad( N_s(x,\tau)\;\mathrm{streams\;in\;}x). \label{fullstream} \end{eqnarray} The exact equation of motion \re{eom1} can therefore also be written as, \begin{equation} \ddot \Psi(q,\tau)+{\cal H}\, \dot \Psi(q,\tau) = \frac{3}{2}{\cal H}^2 \sum_{i=1}^{N_s(x(q,\tau),\tau)} (-1)^{i+1} \Psi(q_i,\tau)\,, \label{eom2} \end{equation} where, for each $q$, the roots $q_i$ are the solutions of \[ q_i+\Psi(q_i,\tau)=q+\Psi(q,\tau)=x(q,\tau)\,. \] One of the $q_i(x)$ roots clearly coincides with $q$. Each root contributes to the RHS with a sign given by the sign of $1+\Psi'(q_i,\tau)=x'(q_i,\tau)$. In the case in which at point $x$ there is no shell-crossing, $N_s(x(q,\tau),\tau)=1$, the only root is $q$ itself, and eq.~\re{eom2} reduces to the equation of motion in the Zel'dovich approximation \begin{equation} \ddot \Psi_Z(q,\tau)+{\cal H}\, \dot \Psi_Z(q,\tau) = \frac{3}{2}{\cal H}^2 \Psi_Z(q,\tau)\,, \label{eomZ} \end{equation} which is then exact in the absence of multistreaming. When multistreaming is present, the Zel'dovich approximation is unable to reproduce the backreaction on the force term, and it departs from the exact dynamics, see next section, and in particular Figs.~\ref{force} and~\ref{force2}. The exact equation of motion, eq.~\re{eom2}, is manifestly not problematic at shell-crossing. At fixed $x$, shell-crossing occurs when two new real roots of \re{S}, $q_j(x,\tau)$ and $q_{j+1}(x,\tau)$, appear. Since at the time of shell-crossing, $\tau_x$, one has $q_j(x,\tau_x)=q_{j+1}(x,\tau_x)$, the contribution of the new couple to the RHS of \re{eom2} vanishes for $\tau \le \tau_x$ and is continuous in $\tau_x$ as $\lim_{\tau\to\tau_x^+}(q_j(x,\tau)-q_{j+1}(x,\tau))=0$. On the other hand, it is clear from eq.~\re{eom2} that the force term is nonlocal in Lagrangian space after shell-crossing, and any attempt to expand it in terms of the Zel'dovich displacement field evaluated at $q$ appears unjustified. ``Naive'' PT expansion schemes, both in Eulerian and in Lagrangian space, are therefore doomed to failure due to multistreaming. \section{Numerical solution} \label{numsol} Using as ``time'' variable the logarithm of the scale factor, \begin{equation} \eta=\log\frac{a}{a_0}=-\log (1+z)\,, \end{equation} the equation of motion \re{eom2} can be written as the system \begin{eqnarray} &&\partial_\eta \Psi(q,\eta) = \chi(q,\eta)\,,\nonumber\\ &&\partial_\eta \chi(q,\eta) = -\frac{1}{2} \chi(q,\eta)+ \frac{3}{2} \sum_{i=1}^{N_s(x,\eta)} (-1)^{i+1} \Psi(q_i(x,\eta),\eta)\,. \label{syst} \end{eqnarray} The initial condition is given at an early redshift at which we assume the linear theory growing mode, namely \begin{equation} \Psi(q,\eta_{in}) = \chi(q,\eta_{in}) = \frac{v(q,\eta_{in})}{{\cal H}(\eta_{in})} \,, \end{equation} where $v$ is the peculiar velocity. The solution of the above system of equations can then be computed by a straightforward algorithm, which requires just a few lines of code.
At each time-step, for each $x$ we identify the subset of Lagrangian points $\{q_i(x,\eta)\}$ containing all the real roots of the equation $x-q-\Psi(q,\eta)=0$. Then, for each $q$, we compute the corresponding $x=q+\Psi(q,\eta)$, and then the increments of $\Psi(q,\eta)$ and $\chi(q,\eta)$, which involve, through the sum in \re{syst}, the previously identified subset $\{q_i(x,\eta)\}$ (which, of course, also includes $q$). In order to familiarise ourselves with the solution, and to follow the example of \cite{Taruya:2017ohk}, where similar tests were performed with the particle-mesh code presented in that paper, one can apply the algorithm to some simple initial conditions. The first one is a single initial gaussian overdensity, see the red curve in Fig.~\ref{feat}, \begin{equation} \delta(q,\eta_{in})=\frac{A}{\sqrt{2\pi \sigma^2}} e^{-\frac{q^2}{2\,\sigma^2}} + C\,, \label{deltain} \end{equation} on a periodic segment bounded by $-\frac{L}{2}<q\le\frac{L}{2}$. The constant $C$ is tuned so that the integral of the overdensity over the full $q$ range vanishes. Later we will also consider a modified initial condition, in which we add a gaussian feature on top of \re{deltain}, by multiplying it by $1+B \exp(-(q-q_0)^2/\sigma_1^2)$, see the blue line in Fig.~\ref{feat}. \begin{figure}[t] \centering \includegraphics[height=.24\textwidth]{deltainwf} \includegraphics[height=.24\textwidth]{psiinwf.pdf} \caption{ Gaussian initial condition for the density contrast with and without feature (left). The corresponding initial displacement field (right).} \label{feat} \end{figure} Using again linear theory, we find that the initial condition for $\Psi(q,\eta_{in})$ is obtained by integrating \re{deltain}, \begin{equation} \Psi(q,\eta_{in})=- \int_{-L/2}^q dq'\, \delta(q',\eta_{in}) \,. \label{psidelta} \end{equation} We set $A=0.2$, $\sigma=0.12$, $L=1$, $z_{in}=99$, and integrate the equations on a grid with $1200$ points and in $100$ logarithmic steps in time. We show in Fig.~\ref{gaussia} the results of the integration in phase space, namely, in the $(x,\chi)$ plane. The first shell-crossing occurs at about $a=0.154$ in $x=0$ and the second one at $a=0.24$. Before first shell-crossing, the full solution coincides with the Zel'dovich one everywhere, but the two rapidly diverge afterwards. Around the position $x=0$ we count $3$, $9$, and $13$ streams, respectively, in the snapshots taken at $a=0.18,\,0.63$, and $1$. Notice that, even when high-order multistreaming occurs, the Zel'dovich solution is recovered very fast for $x$'s outside the multistreaming region. So, while multistreaming is very non-local in Lagrangian space, it is a local effect in Eulerian space (see also Fig.~\ref{elshift}), where it manifests itself through the emergence of higher moments of the distribution function. Moreover, the Zel'dovich approximation greatly overestimates the extension of the multistreaming region in Eulerian space (compare the solid and the dashed lines both in Fig.~\ref{gaussia} and \ref{elshift}) and it fails to give the number of streams after second shell-crossing.
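A minimal sketch of such an algorithm is given below (ours: a plain explicit Euler update with a grid search and linear interpolation for the roots; the grid sizes and the actual integrator used for the figures may differ):
\begin{verbatim}
import numpy as np

def multistream_force(Psi, q):
    """RHS sum of eqs. (syst): for each Lagrangian point q_j, locate the
    grid intervals where x(r) = r + Psi(r) crosses x_j, interpolate Psi
    there linearly, and sum with alternating signs (roots ordered in q,
    first sign positive)."""
    x = q + Psi
    F = np.empty_like(q)
    for j in range(q.size):
        g = x - x[j]
        idx = np.flatnonzero(g[:-1]*g[1:] < 0.0)   # strict sign changes
        w = g[idx]/(g[idx] - g[idx + 1])           # linear interpolation
        Psi_r = np.append(Psi[idx] + w*(Psi[idx + 1] - Psi[idx]), Psi[j])
        pos = np.append(idx + w, float(j))         # root positions (grid units)
        order = np.argsort(pos)
        signs = (-1.0)**np.arange(order.size)
        F[j] = 1.5*np.sum(signs*Psi_r[order])
    return F

def euler_step(Psi, chi, q, deta):
    """One explicit Euler step of eqs. (syst)."""
    dchi = -0.5*chi + multistream_force(Psi, q)
    return Psi + deta*chi, chi + deta*dchi
\end{verbatim}
Periodic boundaries, adaptive stepping, and the treatment of roots falling exactly on grid points are omitted for brevity.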
\begin{figure}[tbp] \centering \includegraphics[width=.45\textwidth,clip]{pspacea013.pdf} \includegraphics[width=.45\textwidth,clip]{pspacea015.pdf} \includegraphics[width=.45\textwidth,clip]{pspacea018.pdf} \includegraphics[width=.45\textwidth,clip]{pspacea024.pdf} \includegraphics[width=.45\textwidth,clip]{pspacea063.pdf} \includegraphics[width=.45\textwidth,clip]{pspacea100.pdf} \caption{Phase space at different epochs obtained from the initial conditions of eq.~\re{deltain}. Continuous red lines are obtained with the exact dynamics, black dashed ones with the Zel'dovich one.} \label{gaussia} \end{figure} \begin{figure} \centering \includegraphics[height=.25\textwidth]{xqgwfa015.pdf} \includegraphics[height=.25\textwidth]{xqgwfa024.pdf} \includegraphics[height=.25\textwidth]{xqgwfa063.pdf} \includegraphics[height=.32\textwidth]{xqgwf.pdf} \includegraphics[height=.32\textwidth]{xqgwfzoom.pdf} \caption{The mapping between Lagrangian and Eulerian space evolved from the initial conditions of eq.~\re{deltain} without (red) and with (blue) the feature shown in Fig.~\ref{feat}. Continuous lines are obtained with the exact dynamics, dashed ones with the Zel'dovich one.} \label{elshift} \end{figure} To better visualize the origin of the failure of the Zel'dovich approximation, in Fig.~\ref{force} we plot the force, namely the right hand side of eq.~\re{eom2}, and compare it to the right hand side of the Zel'dovich equation of motion, eq.~\re{eomZ}. Before shell-crossing, as expected, the Zel'dovich right hand side coincides with the full force term; however, as soon as shell-crossing happens, $O(1)$ deviations take place. They are to be expected: after crossing each other, the mutual attraction between two particles changes sign, whereas in the Zel'dovich approximation they proceed along their ballistic paths. While the amplitude of the Zel'dovich ``force'' grows in time as $\Psi(q,\tau)$, namely, proportionally to the linear growth factor, the amplitude of the full force stays approximately constant. These results clearly show that expanding around the Zel'dovich solution is not a good option to explore the post shell-crossing regime, apart from, possibly, a very short time after the first shell-crossing, along the lines explored recently in \cite{Taruya:2017ohk, McDonald:2017ths}. \begin{figure}[tbp] \centering \includegraphics[width=.45\textwidth]{force1.pdf} \includegraphics[width=.45\textwidth]{force2.pdf} \includegraphics[width=.45\textwidth]{force3.pdf} \includegraphics[width=.45\textwidth]{force4.pdf} \caption{The exact force term, RHS of eq.~\re{eom2} (continuous red lines) compared to the RHS of the Zel'dovich equations of motion, eq.~\re{eomZ} (dashed black lines).} \label{force} \end{figure} \section{Post shell-crossing attractor} \label{pattr} The relation between the Lagrangian and Eulerian positions is shown in Fig.~\ref{elshift}. While before shell-crossing the Zel'dovich mapping is exact, soon after the first shell-crossing the exact mapping deviates appreciably from the Zel'dovich one. As time passes, it flattens out over the whole shell-crossing region. This behavior does not depend on the particular initial condition, but is a generic attractor feature of the equations of motion, as we now show. The equations of motion \re{syst} can be written in terms of $x(q,\eta) = q+\Psi(q,\eta)$ as \begin{eqnarray} \!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!
\!\!\!\partial_\eta^2 x(q,\eta)+\frac{1}{2}\partial_\eta x(q,\eta)=-\frac{3}{2} \int dr\left[\Theta(x(q,\eta)-r-\Psi(r,\eta))-\Theta(x(q,\eta)-r)\right]\,, \label{att1} \end{eqnarray} taking the first derivative with respect to $q$, we get the equation for \begin{equation} x'(q,\eta)= 1+\Psi'(q,\eta)\,, \label{xq} \end{equation} \begin{eqnarray} &&\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! \partial_\eta^2 x'(q,\eta)+\frac{1}{2}\partial_\eta x'(q,\eta)\nonumber\\ &&\;\;\;\;\;= -\frac{3}{2} x'(q,\eta) \int dr\left[\delta(x(q,\eta)-r-\Psi(r,\eta))-\delta(x(q,\eta)-r)\right]\,,\nonumber\\ &&\;\;\;\;\;=-\frac{3}{2} x'(q,\eta)\left(\sum_{i=1}^{N_s(x(q))}\frac{1}{|x'(q_i,\eta)|}-1\right)\,, \label{att2} \end{eqnarray} where the summation is made over all the roots in $x(q,\eta)$. In the absence of shell-crossing, $N_s(x(q))=1$, the RHS gives \begin{equation} -\frac{3}{2}\left(1-x'(q,\eta)\right) = \frac{3}{2} \Psi'(q,\eta)\,, \end{equation} which, using \re{xq}, coincides with the equation of motion for the first derivative of the displacement field in the Zel'dovich approximation, see eq.~\re{eomZ}. Now, assume that first shell-crossing takes place in $q_{sc}$ at $\eta_{sc}$, that is, $x'(q_{sc},\eta_{sc})=0$. The RHS of eq.~\re{att2} is negative for $\eta\leq\eta_{sc}$. Soon after shell-crossing, the term inside parentheses on the RHS of \re{att2} gives \begin{equation} \left(\frac{1}{|x'(q_1,\eta_{sc}^{+})|}+\frac{1}{|x'(q_3,\eta_{sc}^{+})|}+\frac{1}{|x'(q_{sc},\eta_{sc}^{+})|}-1\right) >0\,, \end{equation} where $q_1$ and $q_3$ are the two new roots. The term inside parentheses is positive, as $ |x'(q_{sc},\eta)|\ll 1$ close to shell-crossing. As a consequence, $x'(q_{sc},\eta)$, which is negative soon after shell-crossing, starts to increase, as the RHS of eq.~\re{att2} is globally positive. As it crosses zero again, two new streams are generated, whose contribution to the parentheses on the RHS is still positive, and therefore the latter is now globally negative. As a result, $x'(q_{sc},\eta)$ oscillates around zero with decreasing amplitude, and is driven asymptotically to zero. Notice that the value $x'(q_{sc},\eta)=0$ can be safely reached from both positive and negative time directions; there is no divergence there, and therefore $x'(q_{sc},\eta)=0$ represents a fixed point of the post-shell-crossing evolution. One can now investigate the fate of the {\em second derivative} of $x(q_{sc},\eta)$, $x''(q_{sc},\eta)$, by taking the derivative of eq.~\re{att2} with respect to $q$, finding \begin{eqnarray} &&\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! \partial_\eta^2 x''(q_{sc},\eta)+\frac{1}{2}\partial_\eta x''(q_{sc},\eta)\nonumber\\ && \!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\! =-\frac{3}{2} x''(q_{sc},\eta)\left(\sum_{i=1}^{N_s(x(q))}\frac{1}{|x'(q_i,\eta)|}-1\right)-\frac{3}{2}(x'(q_{sc},\eta))^2\sum_{i=1}^{N_s(x(q))}\frac{ x''(q_i,\eta) }{|x'(q_i,\eta)|^3} \,.\nonumber\\ && \label{att3} \end{eqnarray} The second term on the RHS goes to zero due to the behavior of eq.~\re{att2} discussed above, and the first term provides the same fixed-point mechanism for $ x''(q_{sc},\eta)$. The same holds for all the higher order derivatives, so we conclude that, deep into the shell-crossing regime, the Lagrangian to Eulerian mapping function $x(q)$ approaches the attractor solution \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\!\!\! x(q,\eta)=x_Z(q,\eta), \qquad\qquad\qquad\qquad\qquad\quad \,\; \;\mathrm{for}\; N_s(x(q,\eta),\eta)=1\,,\nonumber\\ && \!\!\!\!\!\!\!\!\!\!\!\!\!
x'(q_{sc},\eta)=x''(q_{sc},\eta)=x'''(q_{sc},\eta)\cdots \to 0, \qquad \mathrm{at\;shell-crossing\;points}\,, \label{fpoint} \end{eqnarray} where $x_Z(q,\eta)$ is the Zel'dovich solution.

We still have to determine the value $x(q_{sc},\eta)$ at shell-crossing points. It represents the coordinate of the center of mass of the matter falling into the multi-streaming region. Therefore, it can be determined by the requirement that the force at $\bar x(q,\eta)$ due to all the mass contained in the multi-streaming region vanishes, \begin{eqnarray} &&\int_{q_1(\bar x)}^{q_{N_s}(\bar x)} dr\left[\Theta(\bar x-r-\Psi(r,\eta))-\Theta(r+\Psi(r,\eta)-\bar x)\right]\nonumber\\ &&=-2\left[\sum_{i=1}^{N_s(\bar x)} (-1)^{i+1} \Psi(q_i(\bar x),\eta) -\frac{\Psi(q_1(\bar x),\eta) +\Psi(q_{N_s}(\bar x),\eta) }{2}\right]=0\,. \label{barx} \end{eqnarray} In other words, we find that the asymptotic configuration is that in which all the particles in the multi-streaming regions form a thin shock at their center of mass, whose position $\bar x$ is determined by solving eq.~\re{barx}. This asymptotic configuration is similar to the one predicted by the `adhesion model' for structure formation \cite{Gurbatov:1989az,Dubrulle:1994psg}. In particular, introducing the potential $\varphi(q,\eta)$, defined by \begin{equation} x(q,\eta)=\frac{\partial \varphi(q,\eta) }{\partial q}\,, \end{equation} we find that it becomes convex everywhere and flat in the multistreaming regions. The ``geometrical adhesion model'' of \cite{Bernardeau:2009ab,Valageas:2010uh} gives a similar prescription, by imposing that the nonlinear potential is obtained as the convex hull of the Zel'dovich one. As we will discuss later, this prescription does not coincide with our attractor, as the actual position of the shock cannot be inferred correctly from the Zel'dovich solution when it predicts a number of streams larger than three. Moreover, the attractor is reached only asymptotically, in the deep multi-streaming regime, and deviations from it are present at any epoch.

To visualize the attractor behavior we added a feature on top of the gaussian initial condition of eq.~\re{deltain}, to get the blue line in Fig.~\ref{feat}, again keeping the integral of the function over the full spatial interval equal to zero. The resulting evolution of the Lagrangian to Eulerian mapping is shown in Fig.~\ref{elshift} at $a=1$ (lower row). The red lines are obtained with pure gaussian overdensities, while the blue lines are obtained with the feature added on top of it (continuous lines are for the full dynamics, dashed ones for Zel'dovich). Flattening in the multistreaming Lagrangian region is evident from the left plot and, by zooming inside (right plot), one sees that it starts from the point of first shell-crossing, at $q=0$, and propagates outwards, as higher and higher derivatives approach the fixed points \re{fpoint}. The memory of the feature at $q=0.1$ is barely noticeable from the comparison between the shapes of the blue and red solid curves. A stronger effect is a shift in the position $\bar x$ of the plateau in the two cases. This shift is entirely explained by the shift in the position of the center of mass due to the feature, and can be reproduced by computing $\bar x$ from eq.~\re{barx} {\em using the Zel'dovich solutions} for the two different initial conditions. 
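Note that eq.~\re{barx} simply states that, within the multi-streaming interval, the Lagrangian mass mapped to the left of $\bar x$ equals the mass mapped to its right; since the Lagrangian measure is uniform in $q$, $\bar x$ is the median of the Eulerian positions $x(r)=r+\Psi(r,\eta)$ over that interval. A minimal numerical sketch of this construction is given below (grid-level only; the function names and the crude bracketing of the multi-streaming interval are illustrative, not the implementation used for the figures):
\begin{verbatim}
import numpy as np

def multistream_interval(q, psi):
    """Crude bracket of the multi-streaming Lagrangian interval:
    the stretch between the outermost turning points of the
    mapping x(q) = q + psi(q), where dx/dq changes sign."""
    x = q + psi
    turn = np.flatnonzero(np.diff(np.sign(np.diff(x))))
    if turn.size < 2:
        return None                      # single stream everywhere
    return q[turn[0]], q[turn[-1] + 1]

def xbar_from_displacement(q, psi, qa, qb):
    """Solve eq. (barx) on the interval [qa, qb]: with uniform
    Lagrangian mass, the zero-force condition makes xbar the
    median of the Eulerian positions x(q) = q + psi(q)."""
    sel = (q >= qa) & (q <= qb)
    return np.median(q[sel] + psi[sel])
\end{verbatim}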
Indeed, shifting the solution of the exact equations evolved from the initial condition with the feature by $\delta \bar x = \bar x(\mathrm{no\; feature})- \bar x(\mathrm{feature})$, computed in the Zel'dovich approximation, we get the dash-dotted line. The fact that it overlaps nearly perfectly with the solution for the featureless case shows that one can use the Zel'dovich approximation to identify the center of mass of the particles that have fallen into the multi-streaming region. On the other hand, the same effect can be obtained by any other initial condition which perturbs the gaussian one by shifting the center of mass by the same amount, and therefore it carries no information on the details of the feature that we have imposed.

We repeat the same operation in phase space, see Fig.~\ref{pswf}. Shifting both $x$ and $\chi$ by the same amount as we do in Fig.~\ref{elshift}, we get the blue dash-dotted line, which has a nearly perfect overlap with the featureless red curve. Notice the remnant of the initial feature, in the form of the little spiral in the blue line at $x\sim 0.02$, $\chi\sim 0.03$ at $a=0.32$ and at $x\sim 0.02$, $\chi\sim -0.04$ at $a=1$: the information is still there, but it is diving deeper and deeper in the multistreaming region and becoming more and more difficult to recover.

\begin{figure}[t] \centering \includegraphics[width=.35\textwidth]{pPSwfa32zoom.pdf} \includegraphics[width=.35\textwidth]{pPSwfa100zoom.pdf} \caption{Phase space solution for the initial condition of eq.~\re{deltain} without (red) and with (blue) the added feature. At $a=1$ we also show the effect of shifting the blue curve by the deviation of the center of mass position induced by the feature, obtained according to Eq.~\re{barx} from the Zel'dovich solution (dash-dotted blue). } \label{pswf} \end{figure}

As a second test we consider initial conditions given by a pair of gaussian overdensities, as the red lines in Fig.~\ref{feature2}, on top of which we added gaussian features asymmetrically (blue lines). \begin{figure} \centering \includegraphics[height=.25\textwidth]{deltainwfg2} \includegraphics[height=.25\textwidth]{psiinwfg2.pdf} \caption{Initial condition with two gaussians without (red) and with (blue) features.} \label{feature2} \end{figure} The evolution in phase space and the evolution of the force in the featureless case are given in Figs.~\ref{gauss} and \ref{force2}, respectively. The flattening inside the shell-crossing region is evident also in this case, see Fig.~\ref{eulag2}, and the main difference induced by the features is, again, a shift in the position of the plateau. In order to compute this shift, we apply eq.~\re{barx} to the Zel'dovich solution at the time at which the featureless initial conditions evolve to shell-crossing at $q=0$, namely at $a=0.34$, and compare it with the Zel'dovich solution with features at the same redshift. In this way, the information on the shift of the center of mass induced by the features is preserved, and the latter is given by the difference between the horizontal dotted lines in the lower-left plot in Fig.~\ref{eulag2}. Had we computed the shift from the Zel'dovich solution at $z=0$ we would have got it wrong, as the multiple streams predicted by this approximation are completely unrealistic and cannot account for the true position of the center of mass of the shock. On the right we show that this effect accounts for the shift between the exact solutions at $a=1$. 
The same is shown in phase space in Fig.~\ref{pswf2}, where, again, we see that the direct deformation induced by the features on the phase-space curves is progressively swallowed by the multistreaming spirals.

\begin{figure} \centering \includegraphics[width=.45\textwidth,clip]{pGspacea0237.pdf} \includegraphics[width=.45\textwidth,clip]{pGspacea43.pdf} \includegraphics[width=.45\textwidth,clip]{pGspacea0630.pdf} \includegraphics[width=.45\textwidth,clip]{pGspacea100.pdf} \caption{Phase-space snapshots of the evolution of the initial conditions of Fig.~\ref{feature2} without features, in the exact dynamics (continuous red) and in Zel'dovich approximation (dashed black). } \label{gauss} \end{figure}

\begin{figure} \centering \includegraphics[width=.48\textwidth,clip]{forceG1.pdf} \includegraphics[width=.48\textwidth,clip]{forceG2.pdf} \caption{The evolution of the exact force (continuous red) and the Zel'dovich one (dashed black) for the initial conditions of Fig.~\ref{feature2} without features. } \label{force2} \end{figure}

\begin{figure} \centering\includegraphics[height=.29\textwidth]{EL2Ga.pdf} \includegraphics[height=.29\textwidth]{EL2G.pdf} \includegraphics[height=.29\textwidth]{EL2Gazoom.pdf} \includegraphics[height=.29\textwidth]{EL2Gzoom.pdf} \caption{Lagrangian to Eulerian space mapping for the initial conditions of Fig.~\ref{feature2} without (red) and with features (blue). Solid lines are for the exact dynamics and dashed ones for Zel'dovich. In the lower left panel we also show the effect of shifting the blue line by the center of mass deviation induced by the features, evaluated in the Zel'dovich solution before second shell-crossing. } \label{eulag2} \end{figure}

\begin{figure}[t] \centering \includegraphics[height=.32\textwidth]{pPSwfa034zoom.pdf} \includegraphics[height=.32\textwidth]{pPSwfa1zoom.pdf} \caption{Phase space solution for the initial condition of Fig.~\ref{feature2} without (red) and with (blue) the added feature. At $a=1$ we also show the effect of shifting the blue curve by the deviation of the center of mass position induced by the feature, obtained according to Eq.~\re{barx} from the Zel'dovich solution before second shell-crossing (dash-dotted blue).} \label{pswf2} \end{figure}

\section{Attractors in a cosmological setting} \label{cosmosym} Finally, we describe what a cosmological simulation would look like if the world were 1+1 dimensional. We will consider a power spectrum $P_{1D}(k)$ related to the $3D$ one by \begin{equation} P_{1D}(k)=\frac{k^2}{2\pi}P_{3D}(k)\,, \end{equation} where the linear 3D PS has been obtained with the CLASS Boltzmann code \cite{Blas:2011rf}. The relation above results in the same variance in the density per interval in $k$ as in 3D CDM, as well as the same linear-order parallel RMS displacement \cite{McQuinn:2015tva}. We choose $n_s=0.966$, $\Omega_b h^2=0.02269$, $\Omega_m h^2=0.134$, $h=0.703$, and scalar amplitude $A_s=2.42\cdot10^{-9}$. The linear PS is scaled back to $z=99$ using the growth factor computed from the above cosmological parameters, and then used to give the gaussian initial conditions in Fourier space \begin{equation} \Psi(q,\eta_{in})=\chi(q,\eta_{in})=\frac{1}{L}\sum_{m=1}^{N_p} c_m \cos\left(q \,p_m+\phi_m\right)\,, \end{equation} with $p_m=2\pi\, m L^{-1}$. 
The $c_m$'s are taken from a Rayleigh distribution with \begin{equation} \sigma_m=\sqrt{ \frac{L P_{1D}(p_m)}{2 p_m^2} }\,, \end{equation} where the $1/p_m^2$ factor comes from the relation \re{psidelta}, which in Fourier space reads \begin{equation} \tilde\Psi(p_m,\eta_{in}) = i\frac{\tilde\delta(p_m,\eta_{in})}{p_m}\,. \end{equation} The phases $\phi_m$ are extracted randomly from the $[0,2\pi)$ interval. We perform a run on a one-dimensional line of $L=1\,\mathrm{Gpc\,h^{-1}}$, sampled at 4000 points. We run from $a_{in}=1/100$ to $a=1$ in 100 logarithmic steps. The evolution forward in time assumes an Einstein de Sitter cosmology.

In Fig.~\ref{cosmosim} we show our results on the Lagrangian to Eulerian mapping, zooming in on a portion of space of $100\,\mathrm{Mpc \;h^{-1}}$. We confirm the general trend observed for simple initial conditions in the previous sections: the Zel'dovich approximation is able to roughly identify the regions undergoing multistreaming in Lagrangian space, but greatly overestimates their extension in Eulerian space. The failure of the Zel'dovich approximation after shell-crossing is even more evident in phase space, see Fig.~\ref{cosmosimphase}.

Following the discussion of the previous section, we now investigate to what extent the attractor behavior can describe the displacement field at late times. First, we implement a simple algorithm to ``flatten out'' the Lagrangian to Eulerian mapping inside multistreaming regions, in order to reproduce the asymptotic state. To do so, we consider the Zel'dovich approximation at $a=1$ and find the points $\bar x$ such that eq.~\re{barx} is fulfilled. This procedure has a number of shortcomings, which should be addressed in order to improve it. First of all, as we discussed in the two-gaussians case, in order to properly evaluate $\bar x$, the Zel'dovich approximation should be used in each region only between the first and the second shell-crossing, as, after that, it becomes unreliable in estimating the extension of the Lagrangian region falling in a given Eulerian multistreaming region. Therefore, we should implement the plateau-finding algorithm \re{barx} in time, extracting the position of each ``halo'' from the Zel'dovich approximation after the first shell-crossing epoch for that particular halo, and not at the same epoch ($a=1$ in our case) for all halos. Second, and connected to the previous point, we have to artificially cut off the maximum possible length of the multistreaming intervals, otherwise our operation on the final Zel'dovich solution would end up producing a large number of supermassive ``halos''. Finally, the attractor is exact only asymptotically, therefore the flattening is never completely accomplished at finite times. Nevertheless, our flattened displacement field (red lines in Fig.~\ref{eulag}) provides a reasonable localization of real ``halos'', failing mainly for halos corresponding to a large number of streams in the Zel'dovich approximation. A complementary view of this result is given in Fig.~\ref{halos}, where the circles and squares represent the positions and sizes of halos obtained by a friends-of-friends algorithm for the real dynamics and the flattened result, respectively. The flattening prescription is able to reproduce the location and sizes of the halos of the real simulation. It typically overestimates the size of the halos, mainly due to the exactly flat limit. This problem can be mitigated by introducing some form of smoothing on the halo profiles of the flattened Zel'dovich solution. 
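For concreteness, the generation of the initial conditions described above can be sketched in a few lines. The snippet below is illustrative only (the function names and the externally supplied $P_{1D}$ are assumptions, not the actual code used for the run): it draws Rayleigh-distributed mode amplitudes and uniform phases and sums the modes on the grid.
\begin{verbatim}
import numpy as np

def gaussian_ic_1d(P1d, L=1000.0, n_grid=4000, n_modes=2000, seed=0):
    """Draw Psi(q, eta_in) = chi(q, eta_in) from the linear 1D power
    spectrum P1d(k) (assumed already scaled back to the initial
    redshift), following the mode sum given in the text."""
    rng = np.random.default_rng(seed)
    q = np.linspace(0.0, L, n_grid, endpoint=False)
    m = np.arange(1, n_modes + 1)
    p = 2.0 * np.pi * m / L                       # wavevectors p_m
    # Rayleigh scale sigma_m; the 1/p^2 comes from Psi = i delta / p
    sigma = np.sqrt(L * P1d(p) / (2.0 * p**2))
    c = rng.rayleigh(scale=sigma)                 # amplitudes c_m
    phi = rng.uniform(0.0, 2.0 * np.pi, n_modes)  # random phases
    psi = (c * np.cos(np.outer(q, p) + phi)).sum(axis=1) / L
    return q, psi
\end{verbatim}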
\begin{figure}[t] \centering \includegraphics[width=.48\textwidth,clip]{Pxqa040.pdf} \includegraphics[width=.48\textwidth,clip]{Pxqa071.pdf} \includegraphics[width=.48\textwidth,clip]{Pxqa1.pdf} \caption{Lagrangian to Eulerian mapping for a portion of our cosmological simulation. The purple line is the initial condition, the orange one the Zel'dovich solution, and the blue line is obtained from the exact dynamics. } \label{cosmosim} \end{figure}

\begin{figure} \centering \includegraphics[height=.30\textwidth]{Pspacea040.pdf} \includegraphics[height=.30\textwidth]{Pspacea071.pdf} \includegraphics[height=.30\textwidth]{Pspacea1.pdf} \caption{Phase space for a portion of our cosmological simulation. The orange line is the Zel'dovich solution, and the blue line is obtained from the exact dynamics. } \label{cosmosimphase} \end{figure}

\begin{figure} \centering \includegraphics[width=.62\textwidth]{EL.pdf} \caption{Again, Lagrangian to Eulerian mapping for a portion of our cosmological simulation. We also show the result of the ``flattening'' algorithm discussed in the text (red line), constructed from the Zel'dovich approximation at $z=0$. } \label{eulag} \end{figure}

\begin{figure} \centering \includegraphics[width=.45\textwidth]{halos.pdf}\\ \includegraphics[width=.45\textwidth]{phsp.pdf} \caption{(Upper panel) Location and sizes of the ``halos'' identified from the cosmological simulation with the exact dynamics (blue circles) and from the flattened Zel'dovich approximation discussed in the text (red squares). (Lower panel) Phase space in the same portion of Eulerian space, obtained from the exact dynamics. } \label{halos} \end{figure}

\section{Outlook} \label{out} What is the link between the first and the second parts of this paper? In section \ref{npterms} we showed, using a toy model (Zel'dovich dynamics in 1+1 dimensions), that the origin of the failure of the PT expansion, namely, shell-crossing, manifests itself as a divergent contribution, which acts as a ``bridge'' between the perturbative and the nonperturbative sectors. Concretely, eq.~\re{sumnpt} shows that one can obtain the terms of the PT expansion, at any order, by taking the $\sigma_A\to \infty$ limit of the corresponding term in the nonperturbative expansion. Of course, for practical reasons, one would prefer to be in the inverse situation (to get the nonperturbative expansion from the perturbative one); nevertheless, the connection exhibited by this simple computation is remarkable.

In the second part of this work we considered the real dynamics and showed that, inside multistreaming regions, the mapping between Lagrangian and Eulerian spaces becomes flatter and flatter, that is, more and more singular. Therefore, it is not unreasonable that the statistical averages of the exact density field can also be represented by transseries, and that eq.~\re{tss} would then be just one of the simplest examples of a large class. In this class of expansions, the presence of singularities forces a correlation between PT and nonperturbative terms. Exploring these connections and considering nonperturbative expansions can, in principle, shed light on how to extend the range of validity of analytic methods beyond shell-crossing.

Another pressing question is, of course, the extension of these 1+1 results to the 3+1 world. We believe that some of our results are generic, even if the possibility to express them in manageable analytical terms in 3+1 dimensions is still to be proven. 
A generic result is that post shell-crossing effects cannot be expressed as power laws in the PT expansion parameter, which is, ultimately, proportional to the linear growth factor. Therefore, one cannot expect a power-law dependence on the linear growth factor for fully nonperturbative quantities, such as the UV sources of coarse grained PT \cite{Pietroni:2011iz,Manzotti:2014loa}, or the counterterms of the effective field theory of the large scale structure \cite{Carrasco:2012cv}. Indeed, as shown in \cite{Manzotti:2014loa}, the time dependence of these source terms appears to be much steeper than predicted in PT and cannot be expressed as a power law. It would be of great interest to see if it can be expressed by nonperturbative terms such as $\sim \exp(-\alpha/D^2)$, which become sizable only at low redshifts.

The other generic feature is, probably, the existence of attractors in the multistreaming regime. A complete characterization of these should be obtained in phase space, also for the 1+1 case. In this connection, our examples of modifications of the initial conditions by adding ``features'' show that, as expected, most of the information gets practically lost as the attractor is approached. The language of the renormalization group could be fruitfully employed to tell apart the `relevant' content of the initial conditions (in our example, the position of the center of mass before second shell-crossing) from the `irrelevant' one (such as the detailed shape of the added features). This type of analysis could be of practical use to extend the reconstruction procedures presently used to recover linear information from the nonlinear configurations.

\section*{Acknowledgments} It is a pleasure to thank D. Comelli, L. Griguolo, S. Matarrese and C. Pavlidis for many useful discussions and comments. The author acknowledges support from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements InvisiblesPlus RISE No. 690575, Elusives ITN No. 674896 and Invisibles ITN No. 289442.
\section{Finite size effects in observing the phase transition} The quantum phase transition of the Dicke model only truly emerges in the thermodynamic limit $N\to\infty$ \cite{Emary2003_PRL,Emary2003_PRE}. It is thus important to consider the relevance of finite size effects, specifically pertaining to the number of ions $N$ and thus the collective spin length $S=N/2$. In this spirit, we plot the order parameter $\langle ( \hat{a} + \hat{a}^{\dagger} )\hat{S}_z \rangle$ and the energy gap $\Delta$ between the ground-state and excited state in the same parity sector, for various ion numbers, as a function of the transverse field strength $B$ in Fig.~\ref{fig:NPlot}. A minimum in the energy gap, as a function of $B$, emerges for $N\gtrsim5$. This minimum is associated with the crossover between the normal and superradiant phases, and thus we predict that features of the crossover should be observable for $N\gtrsim 5$. This is consistent with the increasingly sharp transition observable in the order parameter for $N\gtrsim5$. Similarly, calculation of the spin observables $\vert S_z \vert$ and $S_x$ from dynamical ramps [plotted as a function of $B(t)$, parameters taken as per Fig.~2b of the manuscript] indicates that the crossover between normal and superradiant phases is evident for $N\gtrsim5$, which is easily satisfied by the experimentally considered crystal of $N\sim70$. \begin{figure*}[!] \includegraphics[width=16cm]{RefRespvsN_OP-crop} \includegraphics[width=16cm]{RefRespvsN_LIN-crop} \caption{Key quantities and observables of the Dicke model as a function of atom number $N$ and transverse field $B$. The energy gap $\Delta$ is between the ground-state and excited state in the same parity sector. Magnetization $\vert S_z \vert$ and mean spin projection $S_x$ are computed for a LIN ramp with $t=2$\,ms and other parameters taken as per Fig.~2b of the manuscript. \label{fig:NPlot}} \end{figure*} \section{Effect of the resonance on the energy gap \label{app:gap}} As discussed in the main text, the Dicke Hamiltonian features a spin-boson resonance at $B=\vert \delta \vert $. At this field strength, the states $\vert m \rangle\vert -N/2 \rangle_x $ and $\vert m-k\rangle\vert -N/2 + k \rangle_x$, with $k$ a positive integer, become nearly degenerate and can be resonantly coupled. The location of this resonance, relative to the critical field strength $B_c$, can greatly affect the energy spectrum of the Dicke model and in particular the magnitude of the energy gap $\Delta$ between the ground-state and excited states in the same parity sector. In this context, we can separate the effects of the resonance into two cases, defined by the position of the resonance relative to the critical field strength: \begin{itemize} \item {Case (i): $|\delta| \gg B_c$.} In this regime the resonance $B\simeq |\delta|$ is well separated from the critical point. The ground-state $\vert \psi^{\rm Nor}_{0, N/2} \rangle = \vert 0 \rangle\vert -N/2 \rangle_x$ is decoupled from other states at resonance. Thus, the dynamics can be affected by resonant couplings to other states (as above) only if excited states have become occupied during the quench. \item{Case (ii): $|\delta| \sim B_c$.} If the resonance is in the proximity of the quantum critical point then the low-lying excitations near the critical point of the Dicke Hamiltonian are non-trivial superpositions of spin and phonon excitations. 
A radical consequence of this complex interplay is the relative reduction of the energy gap between the ground and the first excited states of the same parity at the critical point. We illustrate this in Fig.~\ref{fig:norm_gap} as a function of detuning $\delta$, with the spin-phonon coupling $g_0$ scaled such that the critical field strength $B_c = g_0^2/\delta$ is held fixed. \end{itemize} \begin{figure}[!hbt] \includegraphics[width=8cm]{Normalizedgap} \caption{The size of the gap as a function of the detuning from the COM mode for $N=40$. As the size of the detuning increases, the resonant region of the Dicke model moves away from the quantum critical point separating the normal and superradiant phases. The energy gap at the critical point eventually saturates to a maximum value $\Delta E_{\rm gap}^{\rm max}$. } \label{fig:norm_gap} \end{figure} \section{Additional sequence to disentangle the spin cat-state \label{app:cat}} In the main text, we briefly outline a procedure to disentangle the pure spin-cat state from adiabatic preparation of the ground-state of the Dicke Hamiltonian. Here, we expand upon this discussion and give the appropriate details to verify this step. In the weak-field limit, $B \ll B_c$, the ground-state of the Dicke Hamiltonian is the spin-phonon cat-state: \begin{equation} \vert \psi^{S}_{0,N/2} \rangle = \frac{1}{\sqrt{2}}\Big(\vert\alpha_0,0\rangle\vert N/2 \rangle_z \pm \vert-\alpha_0,0\rangle\vert -N/2 \rangle_z\Big) , \label{eqn:Supp_SpinPhCat} \end{equation} where $\alpha_0 = g_0\sqrt{N}/(2\delta)$. The choice of the sign in the superposition state Eq.~(\ref{eqn:Supp_SpinPhCat}) is dictated by the spin-phonon parity symmetry of the Dicke Hamiltonian. Specifically, $\hat{H}$ is preserved under the simultaneous transformation of $\hat S_z\to -\hat S_z$, $\hat{S}_y \to -\hat{S}_y$ and $\hat a \to -\hat a$, and the associated conserved quantity of the Hamiltonian is the generator of the symmetry $\hat{\Pi} \equiv e^{i\pi(\hat{a}^{\dagger}\hat{a} + \hat{S}_x + \frac{N}{2})}$. This symmetry dictates that when ramping from high to low field, the state $\vert \psi_{0, N/2}^{Nor} \rangle$ will adiabatically connect to the superposition $\vert \psi_{0,N/2}^{S} \rangle$, to conserve the parity $\langle \hat{\Pi} \rangle = e^{i\pi N}$. Specifically, for even $N$ the ground-state will be the symmetric superposition with $\langle \hat{\Pi} \rangle = 1$, whilst for odd $N$ the ground-state is the anti-symmetric superposition with $\langle \hat{\Pi} \rangle = -1$. Without loss of generality, we assume for the following that $N$ is even and thus we fix the sign of the superposition to be positive. Since the spin and phonon degrees of freedom are entangled in the ground-state [Eq.~(\ref{eqn:Supp_SpinPhCat})], the state obtained by tracing over the phonon degree of freedom is characterized by the reduced density operator \begin{multline} \hat{\rho}_s = \frac{1}{2}\Big[ |N/2\rangle_z\langle N/2|_z + |-N/2\rangle_z\langle -N/2|_z \Big] \notag \\ + \frac{e^{-|\alpha_0|^2}}{2} \Big[|-N/2\rangle_z\langle N/2|_z + |N/2\rangle_z\langle -N/2|_z \Big] . 
\end{multline} As the displacement amplitude $|\alpha_0|$ is increased, the reduced density matrix exponentially loses any information about the coherences exhibited in the spin-phonon superposition state. As a concrete example, the ground-states of the main text typically have a mean phonon occupation $|\alpha_0|^2 \sim 2$--$30$ depending on the chosen parameters (i.e., detuning and spin-phonon coupling), leading to $e^{-|\alpha_0|^2}\lesssim 0.1$. To fully probe the available coherences via only the spin degree of freedom, we must first transform Eq.~(\ref{eqn:Supp_SpinPhCat}) to a spin and phonon product state, \begin{equation} \vert \psi_{{\rm SB}} \rangle = \vert \phi\rangle \otimes \frac{1}{\sqrt{2}} \Big(\vert N/2 \rangle_z + \vert -N/2 \rangle_z\Big) , \label{eqn:Supp_SpinPhProduct} \end{equation} where $|\phi\rangle$ is some arbitrary state characterizing the phonon degree of freedom. A possible procedure to achieve this decomposition is the following: At the conclusion of the ramp protocol, we fix the transverse field at $B=0$ and quench the detuning $\delta \rightarrow \delta^{\prime} = 2\delta$. The spin-phonon state is then allowed to evolve for a duration $t_{d} = \pi/\delta^{\prime}$. In the interaction picture, the initial spin-phonon superposition state evolves as \begin{eqnarray} \vert \psi_{{\rm SB}} \rangle = \hat{U}(t) \vert \psi_{0,N/2}^{S} \rangle , \end{eqnarray} where \begin{eqnarray} \hat{U}(t) & = & \hat{U}_{\rm SB}(t)\hat{U}_{\rm SS}(t) , \\ \hat{U}_{\rm SS}(t) & = & \exp\left( -i \frac{J}{N} \hat S_z^2 t\right) , \\ \hat{U}_{\rm SB}(t)& = & \hat D( \beta(t,\delta') S_z) . \end{eqnarray} Here, $\hat U(t)$ is the propagator corresponding to the Dicke Hamiltonian with $B=0$ [Eq.~1 of the main text]. The propagator comprises two parts, the spin-spin propagator $\hat U_{\rm SS}(t)$ and the spin-phonon propagator $\hat U_{\rm SB}(t)$, where $\beta(t,\delta)= -g_0(1-e^{-i \delta t})/(2\delta\sqrt{N})$ (see \cite{Wall2017} for a more detailed discussion). If at the end of the ramp we quench the detuning to $\delta^{\prime} = 2\delta$ and apply $\hat U(t)$ for $t_d=\pi/\delta^{\prime}$, such that $\beta(t_d,\delta')\,S_z$ takes the value $\mp\alpha_0$ on the two branches, it is then clear that $\hat{U}_{\rm SB}$ will displace the phonon coherent states (in a direction dependent on the sign of the $S_z$ component) back to vacuum, $|\pm\alpha_0,0\rangle \rightarrow |0\rangle$. We illustrate this displacement in Fig.~\ref{fig:Supp_disentangle}. Note that the action of $\hat{U}_{\rm SS}$ on the spin component of the ground-state imprints an irrelevant global phase $\varphi = JNt_d/2$ on the decoupled state Eq.~(\ref{eqn:Supp_SpinPhProduct}). \begin{figure}[!] \includegraphics[width=8cm]{disentangle} \caption{Schematic of the disentangling protocol to extract a pure spin cat-state from the spin-phonon ground-state $\vert \psi_{0,N/2}^{S} \rangle$. At the end of the ramp, we quench the detuning $\delta \rightarrow 2\delta$ and evolve the system for an additional duration $t_d = \pi/|2\delta|$ at fixed $B=0$. The phonon states start at opposing coherent amplitudes and undergo a spin-dependent coherent displacement which maps them to the phonon vacuum state.} \label{fig:Supp_disentangle} \end{figure} An alternative, but closely related, procedure to disentangle the spin-phonon state is to drive the spin-phonon coupling on resonance, $\delta \to \delta^{\prime} = 0$. 
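The elementary step underlying both this and the preceding protocol is simply that a coherent displacement maps $|\pm\alpha_0\rangle$ back to the vacuum. A minimal numerical check of this step for a single $S_z$ branch in a truncated Fock space is sketched below (the function names and the truncation are illustrative assumptions, not part of the actual analysis):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

def coherent(alpha, dim):
    """Coherent state |alpha> in a Fock space truncated at dim levels."""
    n = np.arange(dim)
    return np.exp(-0.5 * abs(alpha)**2 + n * np.log(alpha + 0j)
                  - 0.5 * gammaln(n + 1))

def displacement(beta, dim):
    """D(beta) = exp(beta a^dag - beta^* a) as a dense matrix."""
    a = np.diag(np.sqrt(np.arange(1, dim)), k=1)   # annihilation operator
    return expm(beta * a.conj().T - np.conj(beta) * a)

dim, alpha0 = 40, 1.5
psi = coherent(alpha0, dim)
phi = displacement(-alpha0, dim) @ psi   # kick on the S_z = +N/2 branch
print(abs(phi[0])**2)                    # vacuum overlap, ~1 up to truncation
\end{verbatim}
We now return to the resonant-drive variant introduced above.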
In this case, one must shift the phase of the drive by $\pi/2$ such that the spin-phonon coupling transforms as $\frac{g_0}{\sqrt{N}}(\hat{a} + \hat{a}^{\dagger})\hat{S}_z \to \frac{ig_0}{\sqrt{N}}(\hat{a} - \hat{a}^{\dagger})\hat{S}_z$, and subsequently evolve the system for a duration $t_d = 1/|\delta|$. Following this procedure results in a spin-dependent coherent displacement of the phonon state back to vacuum, $|\pm\alpha_0,0\rangle \rightarrow |0\rangle$, in a manner similar to the previously discussed protocol. We make one further point regarding the disentangling protocols. In the experimental system we generally characterize the initial state of the phonons as a thermal ensemble $\hat{\rho}_{\bar n}$ while the spin degree of freedom is prepared in a pure state, such that the initial spin-phonon state is $\hat{\rho}_{SB}(0) = \hat{\rho}_{\bar n} \otimes \vert -N/2 \rangle_x \langle -N/2 \vert_x $. If the protocol is adiabatic and there is no coupling between the excited energy levels, then not only is the ground-state component of this initial ensemble mapped to the weak-field ground-state of the Dicke Hamiltonian, but the excited fraction due to the thermal distribution is also mapped identically. This implies that the final state at the end of the ramp protocol will be a mixture of the true ground-state and the low-lying excitations, which, if $\delta^2 < g^2 N$, can be characterized as displaced Fock states $\vert \pm\alpha_0, n \rangle$ where $n$ corresponds to the number of phonon excitations above the true ground-state. The action of this protocol on these states is to identically displace the phonon state such that $\vert \pm\alpha_0, n \rangle \rightarrow |n\rangle$. This maps the spin-phonon excited states to the form of a product state identical to Eq.~(\ref{eqn:Supp_SpinPhProduct}). Hence, tracing the phonons out of these excited states also recovers the spin cat-state. \section{Qualitative effects of initial phonon occupation} In the main text we comment that the oscillations in the spin observable $\langle \vert \hat{S}_z \vert \rangle$ at short times are an indication of a non-negligible initial thermal occupation of the phonon mode (Fig.~2 of main text). Here, we support this conclusion by comparing results of theoretical calculations with different initial phonon occupation. Taking relevant parameters as per Fig.~2 of the main text and considering only the EXP ramp for simplicity, we plot the theoretical results for the evolution of $\langle \vert \hat{S}_z \vert \rangle$ in Fig.~\ref{fig:PhononOsc}. We observe that if the phonons are taken to be initially in a vacuum state, the short-time dynamics displays only extremely weak signs of oscillations. In contrast, when the phonons are taken to be initially described by a thermal ensemble with mean occupation $\bar{n} = 3$--$9$, there are significant oscillations at short times, consistent with the observed experimental data. Moreover, the final magnetization at the conclusion of the ramp protocol is much larger than that predicted from the vacuum case. The various values of $\bar{n}$ plotted give relatively similar agreement with the experimental data. However, $\bar{n} = 6$ is chosen in the main text as this is consistent with the estimated limit from Doppler cooling in the experiment. \begin{figure}[!] 
\includegraphics[width=8cm]{PhononOsc_nthExp-crop} \caption{Comparison of magnetization $\langle |\hat{S}_z|\rangle$ from experimental data and theoretical calculations for different initial thermal occupation $\langle \hat{a}^{\dagger}\hat{a}\rangle = \bar{n}$ of the phonon mode. The amplitude of the oscillations at $t \lesssim 1$ clearly increases with $\bar{n}$, whilst the frequency appears to remain comparatively fixed. Data is for an EXP ramp, with all other parameters taken as per Fig.~2b of the manuscript. \label{fig:PhononOsc}} \end{figure} \section{Inference of spin-phonon correlations \label{app:spinphonon}} As detailed in the main text, we infer the presence of spin-phonon correlations from the time evolution of the spin observable $\langle \hat{S}_x \rangle$. Specifically, starting from the Lindblad master equation for the density matrix of the spin-phonon system $\hat{\rho}$, \begin{equation} \frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar} \left[ \hat{H}^{\mathrm{Dicke}}, \hat{\rho} \right] +\frac{\Gamma_{el}}{2}\sum_{i=1}^N \left( \hat{\sigma}^z_i \hat{\rho} \hat{\sigma}^z_i - \hat{\rho} \right) , \end{equation} wherein we have assumed single-particle dephasing is the dominant decoherence mechanism, it then follows that \begin{equation} \frac{d\langle \hat{S}_x \rangle}{dt} = \frac{g_0}{\sqrt{N}}\langle \left( \hat{a} + \hat{a}^{\dagger} \right) \hat{S}_y \rangle - \Gamma_{el}\langle \hat{S}_x \rangle . \end{equation} From here it is straightforward to rearrange for the relation between the spin-phonon correlation and the evolution of $\langle \hat{S}_x \rangle$: \begin{equation} \mathcal{C}_{\mathrm{sp-ph}} \equiv \langle \left( \hat{a} + \hat{a}^{\dagger} \right) \hat{S}_y \rangle = \frac{\sqrt{N}}{g_0} \left( \Gamma_{el}\langle \hat{S}_x \rangle + \frac{d\langle \hat{S}_x \rangle}{dt} \right) . \label{eqn:SpinPhCorr} \end{equation} We emphasize that evaluation of this spin-phonon correlation directly from either ground-state $\vert \psi^{\mathrm{Nor}}_{0,N/2} \rangle$ or $\vert \psi^S_{0,N/2} \rangle$ yields $\mathcal{C}_{\mathrm{sp-ph}} = 0$, and this result has been confirmed numerically for all transverse field strengths $B$ for the systems considered in the main text. This directly implies that the finite value reported in the main text is due to contributions from excited states. Such contributions may come from diabatic excitations created throughout the ramping protocol or from the initial thermal phonon ensemble. In the main text, we extract the spin-phonon correlation from the experimental data using the RHS of Eq.~(\ref{eqn:SpinPhCorr}) and evaluating the time-derivative numerically with a one-sided derivative. We model dephasing using $\langle \hat{S}_x \rangle_{\Gamma} = \langle \hat{S}_x \rangle_{\Gamma = 0} e^{-\Gamma t}$ in our theoretical calculations, and extract the theoretically predicted spin-phonon correlation in an identical manner. \section{Experimental optimization of ramp protocols \label{app:RampOpt}} To experimentally optimize the ramp protocols demonstrated in this work, we used the total magnetization $\langle |\hat{S}_z| \rangle$ at the end of the ramp as the figure of merit. For the EXP ramp, we compared approximately $20$ different ramp profiles that utilized different exponential decay rates. Specifically, we would perform an experiment where the effective transverse field was ramped from the initial field $B(t=0)$ at a fixed decay rate to $B\approx0$, where we then measured the spin-projection $M_z^{\mathrm{exp}}$ along the $\hat{z}$-axis. 
This experiment was repeated, typically $500$--$700$ times, to gather statistics on the resulting distribution and obtain a measurement of $\langle|\hat{S}_z|\rangle$ from the histogram of $M_z^{\mathrm{exp}}$ measurements. We then picked a ramp profile with a different exponential decay rate, and repeated this procedure. After identifying the exponential decay rate that optimized the final magnetization $\langle|\hat{S}_z|\rangle$, we performed experiments that measured the magnetization distribution $P(M_z^{\mathrm{exp}})$ when stopping the ramp at different times, as discussed in the main text. \begin{figure}[!] \includegraphics[width=8cm]{offset_freq_scan-crop.pdf} \caption{Balancing the $P(M_z^{\mathrm{exp}})$ distributions. (a) $P(M_z^{\mathrm{exp}})$ distribution functions extracted from experimental measurements of the spin-projection $M_z^{\mathrm{exp}}$ at the end of an EXP ramp of the transverse magnetic field to zero. The distribution functions are plotted as a function of frequency offset of the microwaves that generate the effective transverse magnetic field from the spin-flip resonance in the absence of the spin-dependent force. (b) Plot of the average magnetization $\langle\hat{S}_z\rangle$ from (a) as a function of the microwave offset frequency. An offset frequency that balanced the distributions at the end of the ramping sequence, as quantified by $\langle\hat{S}_z\rangle$, was used in studies described in the main text that measured the spin-projection distribution when stopping the ramp at different times.} \label{fig:Supp_Exp} \end{figure} When performing these ramp sequences and observing the distributions of $M_z^{\mathrm{exp}}$, in some cases the distributions would be biased towards positive or negative spin-projection. This can be observed in the distribution of Fig.~\ref{fig:Supp_Exp}(a) at zero offset frequency. Such an effect can be explained by a small longitudinal magnetic field that breaks the symmetry of the ground state. The small longitudinal field was likely due to imperfect nulling of the Stark shift from the off-resonant laser beams that generate the spin-dependent force \cite{Bohnet2016}. We observed that this effect varied from day to day. To compensate for this effect, during the ramp we would apply a small frequency offset to the microwaves that provided the effective transverse field. For each frequency offset, we would measure the distribution of measurements $M_z^{\mathrm{exp}}$ at the end of the transverse field ramp as shown in Fig.~\ref{fig:Supp_Exp}(a). For the appropriate offset, the distribution would be balanced, with large, separated peaks at positive and negative values of $M_z^{\mathrm{exp}}$. To choose the optimum, we plot $\langle\hat{S}_z\rangle$ as a function of the frequency offset and extract the zero crossing, as shown in Fig.~\ref{fig:Supp_Exp}(b). \end{document}
\section{Introduction} Efficient energy transfer (ET) at the nanoscale is one of the major goals in the rapidly developing field of plasmonics. F\"{o}rster resonance energy transfer (FRET) \cite{forster-ap48,dexter-jcp53} between spatially separated donor and acceptor fluorophores, e.g., dye molecules or semiconductor quantum dots (QD), underpins diverse phenomena in biology, chemistry and physics such as photosynthesis, exciton transfer in molecular aggregates, interaction between proteins \cite{lakowicz-book,andrews-book} or, more recently, energy transfer between QDs and in QD-protein assemblies \cite{willard-prl01,klimov-prl02,clark-jpcc07}. FRET spectroscopy is widely used, e.g., in studies of protein folding \cite{deniz-pnas00,lipman-science03}, live cell protein localization \cite{selvin-naturesb00,sekar-jcb03}, biosensing \cite{gonzalez-bj95,medintz-naturemat03}, and light harvesting \cite{andrews-lp11}. During the past decade, significant advances were made in ET enhancement and control by placing molecules or QDs in microcavities \cite{hopmeier-prl99,andrew-science00,finlayson-cpl01} or near plasmonic materials such as metal films and nanoparticles (NPs) \cite{leitner-cpl88,lakowicz-jf03,andrew-science04,lakowicz-jpcc07-1,lakowicz-jpcc07-2,krenn-nl08,rogach-apl08,bodreau-nl09,yang-oe09,an-oe10,lunz-nl11,zhao-jpcc12,west-jpcc12,lunz-jpcc12}. While F\"{o}rster transfer is efficient only for relatively short donor-acceptor separations $\sim$10 nm \cite{lakowicz-book}, a plasmon-mediated transfer channel supported by metal NPs \cite{nitzan-cpl84,nitzan-jcp85,druger-jcp87,dung-pra02,stockman-njp08,pustovit-prb11}, films and waveguides \cite{dung-pra02,moreno-nl10} or doped monolayer graphene \cite{velizhanin-prb12}, can significantly increase the transition rate at larger distances between donor and acceptor. At the same time, dissipation in the metal and plasmon-enhanced radiation reduce the fraction of the donor's energy available for transfer to the acceptor. In the closely related phenomenon of plasmon-enhanced fluorescence from a single fluorophore \cite{feldmann-nl05,novotny-prl06,sandoghdar-prl06,halas-nl07}, the interplay between dissipation and radiation channels, which determines the fluorophore's quantum efficiency, depends sensitively on its distance to the metal surface \cite{nitzan-jcp81,ruppin-jcp82}. A nearby acceptor will absorb some of the donor fluorophore's energy via three main transfer channels: the F\"{o}rster channel, the non-radiative plasmon-mediated channel, and the plasmon-enhanced radiative channel, the latter being dominant for intermediate distances \cite{pustovit-prb11}. The fraction of the donor energy absorbed by the acceptor is then determined by an interplay between transfer, radiation and dissipation channels, so that an increase of ET efficiency implies either an increase of the transfer rate or a reduction of the dissipation and/or radiation rates. Here we describe a novel \textit{cooperative amplification} mechanism for ET from an \textit{ensemble} of donors to acceptors that takes advantage of the subtle balance between energy flow channels in a plasmonic system. In a typical experimental setup, a large number of donors are deposited on top of a silica shell around a gold or silver core of a spherical core-shell NP, while the acceptors are attached to the NP surface via linker molecules (see schematic in Fig.~\ref{fig:rad20}). 
If the donors' separation from the metal surface is not so small that dissipation is the dominant channel, then the donors' coupling through NP plasmons gives rise to new system eigenstates -- superradiant and subradiant states \cite{dicke-pr54}, which, in the presence of a NP, are considerably more robust due to a strong plasmonic enhancement of radiative coupling \cite{pustovit-prl09,pustovit-prb10}. In this case, ET to an acceptor takes place from these collective states rather than from each of many individual donors. Importantly, the energy flows in a system in the cooperative regime differ dramatically from those in a system of individual donors. While a few superradiant states carry only a small fraction of the donors' energy, their large matrix element with external electric fields leads to a huge decay rate that scales with the system size. In a similar manner, the large coupling of superradiant states with the electric field of an acceptor spatially separated from the donor layer increases the transfer rate and ensures, as we demonstrate below, a much more efficient plasmon-assisted ET than from the same number of individual donors. \begin{figure}[tb] \begin{center} \includegraphics[width=0.85\columnwidth]{fret-rad20} \end{center} \caption{\label{fig:rad20} ET for $N=100$ donors on top of a spherical core-shell NP with Ag core radius $R_{c}=20$ nm and SiO$_{2}$ shell thickness 5 nm (a), 20 nm (b), 30 nm (c), 40 nm (d), and 50 nm (e), and acceptor at a distance $d=10$ nm from the NP surface. Full calculations for two donor-acceptor sets with spectral bands tuned to dipole (set 1) and high-\textit{l} (set 2) plasmon resonances are compared to the individual-donor approximation. } \end{figure} On the other hand, the multitude of subradiant states, which store nearly the entire system energy, are characterized by a much slower decay rate than individual donors coupled to a NP \cite{pustovit-prl09,pustovit-prb10}, implying lower energy losses through dissipation and radiation channels. As we show below, this reduction of losses in the cooperative regime leads to a dramatic ET amplification relative to ET from individual donors. Importantly, here the reduction of losses does not require continuous pumping to sustain loss compensation by a gain medium, but takes place ``naturally'' due to plasmon exchanges by the donors. To pinpoint the origin of cooperative amplification, consider ET from a single donor to an acceptor near a metal nanostructure. The fraction of transferred energy, $W_{ad}$, per unit frequency interval relative to the total energy stored in the donor, $W_{d}$, is given by \cite{pustovit-prb11} \begin{align} \label{fret-new} \frac{1}{W_{d}}\frac{dW_{ad}}{d\omega}=\frac{9\tilde{\sigma}_{a}(\omega) }{8\pi k^4} \, \tilde{f}_{d}(\omega) \left |\tilde{D}_{ad}(\omega)\right |^{2}\dfrac{\gamma_{d}^{r}}{\Gamma_{d}(\omega)}, \end{align} where $\gamma_{d}^{r}$ is the donor \textit{free space} radiative decay rate \cite{novotny-book}, $\Gamma_{d}$ is its \textit{full} decay rate that depends on its distance to the metal surface, $\tilde{f}_{d}$ and $\tilde{\sigma}_{a}$ are, respectively, the donor's spectral function and the acceptor's absorption cross section modified by the metal, $\tilde{D}_{da}$ is the donor-acceptor coupling that includes direct Coulomb as well as plasmon-mediated channels, and $k=\omega/c$ is the light wavevector. 
The ET efficiency is mainly governed by the competition between the factor $|\tilde{D}_{ad}(\omega) |^{2}$ that determines the plasmon-enhanced transition rate and the quenching factor $\gamma_{d}^{r}/\Gamma_{d}$. In the absence of the metal, only the Coulomb interaction contributes to the transition rate, i.e., $|D_{ad}|^{2}\propto r_{ad}^{-6}$, where $r_{ad}$ is the donor-acceptor separation, and there is no quenching, i.e., $\gamma_{d}^{r}/\Gamma_{d}=1$, so that frequency integration of Eq.~(\ref{fret-new}) yields the standard F\"{o}rster transfer rate $\left(r_{F}/r_{ad}\right)^{6}$, where $r_{F}$ is the F\"{o}rster radius determined by the overlap of donor and acceptor spectral bands. In the case of many individual donors (i.e., not interacting with each other), the r.h.s.\ of Eq.~(\ref{fret-new}) is summed over all donor positions. For an ensemble of donors, ET takes place from the system eigenstates, hereafter labeled by $J$, and Eq.~(\ref{fret-new}) holds for each eigenstate, while the ET spectral density is obtained by summation over all eigenstates $J$, \begin{equation} \label{dw-final} \frac{1}{W_{d}}\dfrac{dW_{\rm ens}}{d\omega} =\frac{9\tilde{\sigma}_{a}(\omega)}{8 \pi k^4}\sum_{J}\tilde{f}_{J}(\omega)\left |\tilde{D}_{aJ}(\omega)\right |^{2} \dfrac{\gamma_{d}^{r}}{\Gamma_{J}(\omega)}, \end{equation} where $\Gamma_{J}$, $\tilde{f}_{J}(\omega)$, and $\tilde{D}_{aJ}(\omega)$ are, respectively, the eigenstate $J$ decay rate, spectral function, and coupling strength to the acceptor. The full energy absorbed by an acceptor is obtained by frequency integration of Eq.~(\ref{dw-final}). The derivation of Eq.~(\ref{dw-final}) is provided in the Supplemental Material, and here we discuss its implications and present our numerical results. \begin{figure}[tb] \begin{center} \includegraphics[width=0.85\columnwidth]{fret-rad30} \end{center} \caption{\label{fig:rad30} Same as Fig.~\ref{fig:rad20} for core radius $R_{c}=30$ nm and shell thickness 5 nm (a) and 20 nm (b), 40 nm (c), 60 nm (d), and 80 nm (e). } \end{figure} In the cooperative regime, the superradiant states are strongly coupled to the electric field of an acceptor, i.e., $D_{aJ}\gg D_{ad}$, while subradiant states decay much slower than individual donors, i.e., $\Gamma_{J}\ll \Gamma_{d}$. In both cases, the result is a significant amplification of ET. This is illustrated in Figs.~\ref{fig:rad20} and \ref{fig:rad30} for an ensemble of 100 donors randomly distributed on the surface of a spherical core-shell NP of radius $R$ immersed in water, with Ag core radius $R_{c}$ and silica shell thickness $L=R-R_{c}$, and an acceptor at distance $d=10$ nm from the NP surface (see inset in Fig.~\ref{fig:rad20}). We assume that the donors' and acceptor's dipole orientations are all normal to the NP surface and their respective emission and absorption bands are Lorentzians of width 0.1 eV centered at energies $\omega_{d}$ and $\omega_{a}$. We expect that in the cooperative regime, the superradiant states are dominant at energies near the dipole plasmon resonance, while subradiant states are best developed at energies close to those of $l=2$ and $l=3$ plasmons (note that, at a given distance, the dipole-NP interaction falls rapidly with increasing $l$). Accordingly, we use two sets of donors and acceptors: set 1 has $\omega_{d}$ lying in the dipole plasmon band (at 3.15 eV), and set 2 has $\omega_{d}$ lying in the higher-\textit{l} plasmon region (at 3.85 eV); in both cases, $\omega_{a}$ is redshifted by 0.1 eV from the corresponding $\omega_{d}$. 
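For reference, the free-space F\"{o}rster radius entering Eq.~(\ref{fret-new}) is controlled by the overlap of the donor emission and acceptor absorption bands. A minimal sketch of this overlap integral for the two Lorentzian band choices above is given below (prefactors and material factors are omitted; the snippet is purely illustrative):
\begin{verbatim}
import numpy as np

def lorentzian(w, w0, gamma):
    """Unit-area Lorentzian of full width gamma centered at w0 (eV)."""
    return (gamma / (2.0 * np.pi)) / ((w - w0)**2 + (gamma / 2.0)**2)

w = np.linspace(2.0, 5.0, 20001)
dw = w[1] - w[0]
for wd in (3.15, 3.85):       # set 1 and set 2 donor band centers
    wa = wd - 0.1             # acceptor band redshifted by 0.1 eV
    overlap = np.sum(lorentzian(w, wd, 0.1) * lorentzian(w, wa, 0.1)) * dw
    print(wd, overlap)        # bare spectral overlap entering r_F
\end{verbatim}
Since both sets use equal widths and shifts, the bare overlaps coincide; the difference between sets 1 and 2 arises entirely from the metal-modified quantities $\tilde{f}$ and $\tilde{\sigma}_{a}$ in Eqs.~(\ref{fret-new}) and (\ref{dw-final}).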
Note that high-\textit{l} plasmons with energies above 4.0 eV are damped by electronic interband transitions in Ag. In all calculations, we used the experimental Ag permittivity and included angular momenta up to $l_{\rm max}=75$. \begin{figure}[tb] \begin{center} \includegraphics[width=0.85\columnwidth]{ampl} \end{center} \caption{\label{fig:ampl} Amplification factor of frequency-integrated ET relative to that for individual donors vs. shell/core ratio. } \end{figure} In Fig.~\ref{fig:rad20} we plot the energy dependence of normalized ET, given by Eq.~(\ref{dw-final}), for Ag core radius $R_{c}=20$ nm and SiO$_{2}$ shell thickness in the range from 5 nm to 50 nm (i.e., overall NP radius $R$ between 25 and 70 nm). ET for each donor-acceptor set is compared to ET calculated for individual donors, and all curves are normalized per donor. For relatively thin shells, the individual donor approximation yields significantly higher ET for set 1 since it neglects plasmon exchanges between the donors and hence underestimates the dissipation, while ET for set 2 is suppressed by very strong dissipation in the high-\textit{l} plasmon region. With increasing shell thickness, as donors move away from the metal core, the system transitions to the cooperative regime \cite{pustovit-prl09,pustovit-prb10} and ET from \textit{superradiant} states (set 1) overtakes ET from individual donors due to the much stronger coupling to the acceptor, as discussed above. At the same time, ET from \textit{subradiant} states emerges (set 2) and, for thicker shells, significantly exceeds ET from individual donors due to the much slower decay rates. Remarkably, the superradiant and subradiant amplification mechanisms can be utilized independently in different energy domains. For the larger NP, the ET amplification sharply increases (see Fig.~\ref{fig:rad30}) due to a stronger plasmon-enhanced radiative coupling between the donors \cite{pustovit-prb11} that leads to a more robust cooperative regime \cite{pustovit-prl09,pustovit-prb10}. The peak enhancement for both set 1 and set 2 relative to individual donors reaches $\sim 10$ for the largest NP. The crossover to the cooperative regime is determined by the ratio of shell thickness to core radius ($L/R_{c}$) rather than by shell thickness alone, and therefore ET for the $R_{c}=30$ nm core NP overtakes ET from individual donors at thicker shells (compare Figs.~\ref{fig:rad20} and \ref{fig:rad30}). Importantly, the evolution of cooperative ET and of ET from individual donors with increasing shell thickness show opposite trends: the former \textit{increases} with thickness while the latter is reduced. The role of cooperative effects is most pronounced in frequency-integrated ET relative to that for individual donors (amplification factor). This shows a dramatic ET increase for larger NPs at similar values of the shell/core ratio (Fig.~\ref{fig:ampl}). The onset of the cooperative regime corresponds to an amplification factor $\sim 1$ and takes place at a shell/core ratio $\sim 1$. With increasing shell thickness, the amplification factor reaches $\sim 5$ for the 20 nm core NP and $\sim 20$ for the 30 nm core NP. With further increase of the NP size, ET amplification should eventually saturate and, as the system size exceeds the radiation wavelength $\lambda$ ($\sim 400$ nm in our case), scale back to $\sim 1$. 
Indeed, as the system size approaches $\lambda$, the plasmonic field enhancement weakens due to retardation effects, and the cooperative regime is destroyed by dissipation effects \cite{pustovit-prl09,pustovit-prb10}. The optimal NP size for cooperative amplification of ET is expected to be similar to that for plasmon-enhanced fluorescence \cite{feldmann-nl05,novotny-prl06,sandoghdar-prl06,halas-nl07}. \begin{acknowledgments} This work was performed while the first author held a National Research Council Research Associateship Award at AFRL. Support by the AFRL Materials and Manufacturing Directorate Applied Metamaterials Program is also acknowledged. Work at JSU was supported through NSF under Grant DMR-1206975, CREST Center, and EPSCoR program. \end{acknowledgments}
\section{Proof of Lemma~\ref{lem:stability}}\label{app:A} Consider $T$ iterations of $\mathcal{A}_{\sf NSGD}$. Let $\mathbf{G}_0, \ldots, \mathbf{G}_{T-1}$ denote the noise vectors and $\mathcal{I}_0, \ldots, \mathcal{I}_{T-1} \in [n]^m$ denote the \emph{index} sets of the mini-batches selected in the $T$ iterations. Consider any pair of datasets $S=(z_1, \ldots, z_k, \ldots, z_n)$ and $S'=(z_1, \ldots, z'_k, \ldots, z_n)$ differing in exactly one data point $z_k\neq z'_k$ for some fixed $k\in [n]$. Let $\mathbf{w}_0, \mathbf{w}_1, \ldots, \mathbf{w}_T$ and $\mathbf{w}_0, \mathbf{w}'_1, \ldots, \mathbf{w}'_T$ denote the trajectories of $\mathcal{A}_{\sf NSGD}$ corresponding to input datasets $S$ and $S'$, respectively. For any $t\in [T],$ let $\xi_t\triangleq \mathbf{w}_t-\mathbf{w}'_t$. We follow the proof technique of \cite[Lemma~4.3]{feldman2019high}. We prove the following claim via induction on $t$: $$\ex{}{\norm{\xi_t}}\leq 2\,L\,\frac{\eta\, t}{n},$$ where the expectation is taken over $\mathcal{I}_0,\ldots,\mathcal{I}_{t-1}, \mathbf{G}_0,\ldots,\mathbf{G}_{t-1}$. First, it is trivial to see that the claim is true for $t=0$. Suppose the claim holds for all $t\leq \tau$. Fix the randomness in $\mathbf{G}_{\tau}$ and $\mathcal{I}_{\tau}$. Let $r$ denote the number of occurrences of the index $k$ (where $S$ and $S'$ differ) in $\mathcal{I}_{\tau}$. By the non-expansiveness property of the gradient update step, we have \begin{align*} \norm{\xi_{\tau+1}}&\leq \norm{\xi_{\tau}}+2\,L\,\eta\,\frac{r}{m} \label{ineq:recurs} \end{align*} We now invoke the randomness in $\mathbf{G}_{\tau}$ and $\mathcal{I}_{\tau}$. Note that $r$ is a Binomial random variable with mean $m/n$. Hence, by taking expectations and using the induction hypothesis, we end up with \begin{align*} \ex{\substack{\mathcal{I}_0,\ldots,\mathcal{I}_{\tau}\\\mathbf{G}_0,\ldots,\mathbf{G}_{\tau}}}{\norm{\xi_{\tau+1}}}&\leq 2\,L\,\frac{\eta\,(\tau+1)}{n} \end{align*} This proves the claim. Now, let $\overline{\mathbf{w}}_T=\frac{1}{T}\sum_{t=1}^T\mathbf{w}_t$ and $\overline{\mathbf{w}}'_T=\frac{1}{T}\sum_{t=1}^T\mathbf{w}'_t$. Since $\ell$ is $L$-Lipschitz, for every $z\in\mathcal{Z}$ we have \begin{align*} \ex{\substack{\mathcal{I}_0,\ldots,\mathcal{I}_{T-1}\\\mathbf{G}_0,\ldots,\mathbf{G}_{T-1}}}{\ell(\overline{\mathbf{w}}_T, ~z)-\ell(\overline{\mathbf{w}}'_T, ~z)}&\leq L \ex{\substack{\mathcal{I}_0,\ldots,\mathcal{I}_{T-1}\\\mathbf{G}_0,\ldots,\mathbf{G}_{T-1}}}{\norm{\overline{\mathbf{w}}_T-\overline{\mathbf{w}}'_T}}\leq L \frac{1}{T}\sum_{t=1}^T \ex{}{\norm{\xi_t}}\hspace{1cm}\\ &\leq 2 L^2\,\frac{\eta}{n\,T}\frac{T(T+1)}{2}=L^2\, \frac{\eta\,(T+1)}{n} \end{align*} This completes the proof. \section{Proof of Lipschitz property of Moreau envelope (Lemma~\ref{lem:prop_moreau})}\label{app:B} Fix any $\mathbf{w}\in \mathcal{W}$. We will show that $\norm{\nabla f_{\beta}(\mathbf{w})}\leq 2L.$ Define $g(\mathbf{v})\triangleq f(\mathbf{v})+\frac{\beta}{2}\norm{\mathbf{v}-\mathbf{w}}^2,~ \mathbf{v}\in\mathcal{W}$. Note that ${\sf prox}_{f/\beta}(\mathbf{w})=\arg\min\limits_{\mathbf{v}\in\mathcal{W}}g(\mathbf{v}).$ Let $\mathbf{v}^*$ denote ${\sf prox}_{f/\beta}(\mathbf{w})$. 
\section{Proof of Lipschitz property of Moreau envelope (Lemma~\ref{lem:prop_moreau})}\label{app:B} Fix any $\mathbf{w}\in \mathcal{W}$. We will show that $\norm{\nabla f_{\beta}(\mathbf{w})}\leq 2L.$ Define $g(\mathbf{v})\triangleq f(\mathbf{v})+\frac{\beta}{2}\norm{\mathbf{v}-\mathbf{w}}^2,~ \mathbf{v}\in\mathcal{W}$. Note that ${\sf prox}_{f/\beta}(\mathbf{w})=\arg\min\limits_{\mathbf{v}\in\mathcal{W}}g(\mathbf{v}).$ Let $\mathbf{v}^*$ denote ${\sf prox}_{f/\beta}(\mathbf{w})$. Now, observe that \begin{align*} 0&\leq g(\mathbf{w})-g(\mathbf{v}^*)=f(\mathbf{w})-f(\mathbf{v}^*)-\frac{\beta}{2}\norm{\mathbf{w}-\mathbf{v}^*}^2 \end{align*} Thus, we have \begin{align*} \frac{\beta}{2}\norm{\mathbf{w}-\mathbf{v}^*}^2&\leq f(\mathbf{w})-f(\mathbf{v}^*)\leq L\,\norm{\mathbf{w}-\mathbf{v}^*}, \end{align*} where the last inequality follows from the fact that $f$ is $L$-Lipschitz. Thus, we get $\norm{\mathbf{w}-\mathbf{v}^*}\leq 2\,L/\beta.$ By property~\ref{prop:moreau_3}, we have $\norm{\nabla f_{\beta}(\mathbf{w})}=\beta\,\norm{\mathbf{w}-\mathbf{v}^*}$. This together with the above bound gives the desired result. \section{Optimality of Our Bounds}\label{sec:lower} Our upper bounds in Sections~\ref{sec:smooth} and \ref{sec:non-smooth} are tight (up to logarithmic factors in $1/\delta$). In particular, our bounds match a lower bound of $\Omega\left(M\,L\cdot \max\left(\frac{1}{\sqrt{n}},~\frac{\sqrt{d}}{n}\right)\right)$ on the excess population loss. The first term is simply the known lower bound on the excess population loss in the non-private setting. The second term follows from the lower bound in \cite{bassily2014differentially} on excess empirical loss, and the fact that a lower bound on excess empirical loss implies nearly the same lower bound on the excess population loss. We elaborate on this below. \paragraph{Reduction from Private ERM to Private SCO:} For any $\gamma>0$, suppose there is an $\left(\frac{\epsilon}{4\,\log(2/\delta)}, \frac{e^{-\epsilon}\delta}{8\,\log(2/\delta)}\right)$-differentially private algorithm $\mathcal{A}$ such that for any distribution $\mathcal{D}$ on a domain $\mathcal{Z}$, when $\mathcal{A}$ is given a sample $T\sim\mathcal{D}^n,$ it yields expected excess population loss $\Delta\mathcal{L}(\mathcal{A};~\mathcal{D}) \leq \gamma$. Then, there is an $(\epsilon, \delta)$-differentially private algorithm $\mathcal{B}$ that, when given any dataset $S\in\mathcal{Z}^n$, yields expected excess empirical loss $\Delta\widehat{\mathcal{L}}(\mathcal{B}; ~S)\triangleq \ex{\mathcal{B}}{\widehat{\mathcal{L}}\left(\mathcal{B}(S); S\right)}-\min\limits_{\mathbf{w}}\widehat{\mathcal{L}}(\mathbf{w}; S)\leq \gamma$. Fix any $\gamma>0$. Suppose an algorithm $\mathcal{A}$ as described above exists. We construct algorithm $\mathcal{B}$ as follows: \begin{enumerate} \item Given input dataset $S\in\mathcal{Z}^n,$ let $\mathcal{D}_S$ be the empirical distribution induced by $S$. \item Sample $T\sim\mathcal{D}_S^n$. \label{step:unif-samp} \item Return $\mathcal{A}(T)$. \end{enumerate} First, note that $\Delta\widehat{\mathcal{L}}(\mathcal{B}; S)\leq \gamma$. This easily follows from the fact that for any $\mathbf{w}$, $\mathcal{L}(\mathbf{w}; \mathcal{D}_S)=\widehat{\mathcal{L}}(\mathbf{w}; S)$. In particular, observe that \begin{align*} &\ex{\mathcal{B}}{\widehat{\mathcal{L}}\left(\mathcal{B}(S); S\right)}-\min\limits_{\mathbf{w}}\widehat{\mathcal{L}}(\mathbf{w}; S)=\ex{T\sim\mathcal{D}_S^n, \mathcal{A}}{\mathcal{L}\left(\mathcal{A}(T); ~\mathcal{D}_S\right)}-\min\limits_{\mathbf{w}}\mathcal{L}(\mathbf{w};~\mathcal{D}_S)\\ &=\Delta\mathcal{L}\left(\mathcal{A}; \mathcal{D}_S\right)\leq \gamma. \end{align*} Next, we show that $\mathcal{B}$ is $(\epsilon, \delta)$-differentially private. Let $S=(z_1, \ldots, z_k, \ldots, z_n), S'=(z_1, \ldots, z'_k, \ldots, z_n)$ be neighboring datasets differing in a single point whose index is $k\in [n]$.
Let $T, T'$ be the samples obtained by running $\mathcal{B}$ on $S, S',$ respectively, with the same set of random coins in Step~\ref{step:unif-samp}. More precisely, let $R$ denote the random sampling procedure used in Step~\ref{step:unif-samp}, and define $T=R(S)$ and $T'=R(S')$. Let $r$ be the number of times the $k$-th point of the input dataset is sampled by $R$. Hence, $r=\lvert T\Delta T'\rvert$, i.e., $r$ is the number of points where $T$ and $T'$ differ. By Chernoff's bound, $r\leq 4\,\log(2/\delta)$ with probability at least $1-\delta/2$. Let $\mathcal{V}$ be any measurable subset of the range of $\mathcal{B}$. Observe that \begin{align*} \pr{\mathcal{B}}{\mathcal{B}(S)\in\mathcal{V}}&=\pr{\mathcal{A}, R}{\mathcal{A}(T)\in\mathcal{V}}\\ &\leq \pr{\mathcal{A}, R}{\mathcal{A}(T)\in\mathcal{V}\vert ~r\leq 4\,\log(2/\delta)}\cdot\pr{}{~r\leq 4\,\log(2/\delta)} +\delta/2\\ &\leq e^{\frac{r\,\epsilon}{4\,\log(2/\delta)}}\cdot \pr{\mathcal{A}, R}{\mathcal{A}(T')\in\mathcal{V}\vert~ r\leq 4\,\log(2/\delta)}\cdot\pr{}{~r\leq 4\,\log(2/\delta)} +\frac{\delta}{2}+r\,e^{\frac{r\,\epsilon}{4\,\log(2/\delta)}}\frac{e^{-\epsilon}\delta}{8\,\log(2/\delta)}\\ &\leq e^{\epsilon}\cdot \pr{\mathcal{A}, R}{\mathcal{A}(T')\in\mathcal{V}} +\delta\\ &=e^{\epsilon}\cdot\pr{\mathcal{B}}{\mathcal{B}(S')\in\mathcal{V}}+\delta, \end{align*} where the second inequality follows from the fact that $\mathcal{A}$ is $\left(\frac{\epsilon}{4\,\log(2/\delta)}, \frac{e^{-\epsilon}\delta}{8\,\log(2/\delta)}\right)$-differentially private together with group differential privacy (e.g.~\cite{dwork2013algorithmic}), and the third inequality uses the bound $r\leq 4\,\log(2/\delta)$. This shows that $\mathcal{B}$ is $(\epsilon, \delta)$-differentially private, proving the reduction, and hence, the lower bound.
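The wrapper $\mathcal{B}$ is short enough to state in code; the sketch below is an illustration we add here, where \texttt{dp\_sco} is a hypothetical black-box routine satisfying the scaled-down privacy guarantee assumed of $\mathcal{A}$:

\begin{verbatim}
import numpy as np

def erm_from_sco(S, dp_sco, rng=np.random.default_rng()):
    """Wrapper B: feed a DP-SCO routine an i.i.d. sample from the empirical
    distribution D_S (sampling with replacement, Step 2), then return its
    output; privacy of B follows by the group-privacy argument above."""
    n = len(S)
    T = [S[i] for i in rng.integers(0, n, size=n)]   # T ~ D_S^n
    return dp_sco(T)                                 # Step 3: post-processing
\end{verbatim}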
\section{Introduction}\label{sec:intro} Many fundamental problems in machine learning reduce to the problem of minimizing the expected loss (also referred to as {\em population loss}) $\mathcal{L}(\mathbf{w}) = \ex{z \sim \mathcal{D}}{\ell(\mathbf{w},z)}$ for convex loss functions of $\mathbf{w}$ given access to i.i.d.~samples $z_1, \ldots, z_n$ from the data distribution $\cal D$. This problem arises in various settings, such as estimating the mean of a distribution, least squares regression, or minimizing a convex surrogate loss for a classification problem. It is commonly referred to as {\em stochastic convex optimization} (SCO) and has been the subject of extensive study in machine learning and optimization. In this work we study this problem with the additional constraint of differential privacy with respect to the samples \cite{DMNS06}.

A natural approach toward solving SCO is minimization of the empirical loss $\widehat{\mathcal{L}}(\mathbf{w}) = \frac{1}{n} \sum_i \ell(\mathbf{w},z_i)$, referred to as empirical risk minimization (ERM). The problem of ERM with differential privacy (DP-ERM) has been well-studied, and asymptotically tight upper and lower bounds on excess loss\footnote{{\em Excess loss} refers to the difference between the achieved loss and the true minimum.} are known \cite{CM08,CMS,jain2012differentially,kifer2012private,ST13sparse,song2013stochastic,DuchiJW13,ullman2015private,JTOpt13,bassily2014differentially,talwar2015nearly,smith2017interaction,wu2017bolt,wang2017differentially,iyengartowards}. A standard approach for deriving bounds on the population loss is to appeal to {\em uniform convergence} of empirical loss to population loss, namely an upper bound on $\sup_{\mathbf{w}} (\mathcal{L}(\mathbf{w}) - \widehat{\mathcal{L}}(\mathbf{w}))$. This approach can be used to derive optimal bounds on the excess population loss in a number of special cases, such as regression for generalized linear models. However, in general, it leads to suboptimal bounds. It is known that there exist distributions over loss functions on $\mathbb{R}^d$ for which the best bound on uniform convergence is $\Omega(\sqrt{d/n})$ \cite{feldman2016generalization}. In contrast, in the same setting, DP-ERM can be solved with excess loss of $O(\frac{\sqrt{d}}{\epsilon n})$ and the optimal excess population loss achievable without privacy is $O(\sqrt{1/n})$. As a result, in the high-dimensional settings often considered in modern ML (when $n = \Theta(d)$), bounds based on uniform convergence are $\Omega(1)$ and do not lead to meaningful bounds on population loss.

The first work to address the population loss for SCO with differential privacy (DP-SCO) is \cite{bassily2014differentially}. It gives bounds based on two natural approaches. The first approach is to use the generalization properties of differential privacy itself to bound the gap between the empirical and population losses \cite{dwork2015preserving, bassily2016algorithmic}, and thus derive bounds for SCO from bounds on ERM. This approach leads to a suboptimal bound (specifically\footnote{For clarity, in the introduction we focus on the dependence on $d$, $n$, and $\epsilon$ for $(\epsilon, \delta)$-DP. We suppress the dependence on $\delta$ and on parameters of the loss function such as the Lipschitz constant and the constraint set radius.}, $\approx \max\left(\tfrac{d^{\frac{1}{4}}}{\sqrt{n}}, \tfrac{\sqrt{d}}{\epsilon n}\right)$ \cite[Sec. F]{bassily2014differentially}). For the important case when $d=\Theta(n)$ and $\epsilon=\Theta(1)$ this results in a bound of $\Omega(n^{-\frac 1 4})$ on the excess population loss. The second approach relies on generalization properties of stability to bound the gap between the empirical and population losses \cite{bousquet2002stability,SSSS}. Stability is ensured by adding a strongly convex regularizer to the empirical loss \cite{SSSS}. This technique also yields a suboptimal bound on the excess population loss of $\approx d^{\frac{1}{4}}/\sqrt{\epsilon\,n}$.

There are two natural lower bounds that apply to DP-SCO. The lower bound of $\Omega(\sqrt{1/n})$ for the excess loss of non-private SCO applies to DP-SCO. Further, it is not hard to show that lower bounds for DP-ERM translate to essentially the same lower bounds for DP-SCO, leading to a lower bound of $\Omega(\frac{\sqrt{d}}{\epsilon n})$ (see Appendix~\ref{sec:lower} for the proof). \subsection{Our contribution} In this work, we address the gap between the known bounds for DP-SCO. Specifically, we show that the optimal rate of $O\left(\sqrt\frac{1}{n} + \frac{\sqrt{d}}{\epsilon n}\right)$ is achievable, matching the known lower bounds. In particular, we obtain the statistically optimal rate of $O(1/\sqrt{n})$ whenever $d=O(n)$. This is in contrast to the situation for DP-ERM, where the cost of privacy grows with the dimension for all $n$.

In our first result we show that, under relatively mild smoothness assumptions, this rate is achieved by a variant of the standard noisy mini-batch SGD. The classical analyses for non-private SCO depend crucially on making only one pass over the dataset. However, a single-pass noisy SGD is not sufficiently accurate, as we need a non-trivial amount of noise in each step to carry out the privacy analysis.
We rely instead on generalization properties of {\em uniform stability} \cite{bousquet2002stability}. Unlike in \cite{bassily2014differentially}, our analysis of stability is based on an extension of the recent stability analysis of SGD \cite{hardt2015train,feldman2019high} to noisy SGD. In this analysis, the stability parameter degrades with the number of passes over the dataset, while the empirical loss decreases as we make more passes. In addition, the batch size needs to be sufficiently large to ensure that the noise added for privacy is small. To satisfy all these constraints the parameters of the scheme need to be tuned carefully. Specifically, we show that $\approx \min(n, n^2\epsilon^2/d)$ steps of SGD with a batch size of $\approx \max(\sqrt{\epsilon n}, 1)$ are sufficient to get all the desired properties.

Our second contribution is to show that the smoothness assumptions can be relaxed at essentially no additional increase in the rate. We use a general smoothing technique based on the Moreau-Yosida envelope operator that allows us to derive the same asymptotic bounds as in the smooth case. This operator cannot be implemented efficiently in general, but for algorithms based on gradient steps we exploit the well-known connection between the gradient step on the smoothed function and the proximal step on the original function. Thus our algorithm is equivalent to (stochastic, noisy, mini-batch) proximal descent on the unsmoothed function. We show that our analysis in the smooth case is robust to inaccuracies in the computation of the gradient. This allows us to show that a sufficient approximation to the proximal steps can be implemented in polynomial time given access to the gradients of the $\ell(\mathbf{w},z_i)$'s.

Finally, we show that {\em Objective Perturbation} \cite{CMS,kifer2012private} also achieves optimal bounds for DP-SCO. However, objective perturbation is only known to satisfy privacy under some additional assumptions, most notably, the Hessian having rank at most $1$ at all points in the domain. The generalization analysis in this case is based on the uniform stability of the solution to strongly convex ERM. Aside from extending the analysis of this approach to population loss, we show that it can lead to algorithms for DP-SCO that use only a near-linear number of gradient evaluations (whenever these assumptions hold). In particular, we give a variant of objective perturbation in conjunction with stochastic variance-reduced gradient descent (SVRG) that uses only $O(n\log n)$ gradient evaluations. We remark that the known lower bounds for uniform convergence \cite{feldman2016generalization} hold even under those additional assumptions invoked in objective perturbation. Finding algorithms with near-linear running time in the general setting of SCO is a natural avenue for future research.

Our work highlights the importance of uniform stability as a tool for analysis of this important class of problems. We believe it should have applications to other differentially private statistical analyses. \subsection*{Acknowledgements} We thank Adam Smith, Thomas Steinke and Jon Ullman for the insightful discussions of the problem at the early stages of this project. We are also grateful to Tomer Koren for bringing the Moreau-Yosida smoothing technique to our attention. \newpage \bibliographystyle{alpha} \section{Private SCO for Non-smooth Losses}\label{sec:non-smooth} In this section, we consider the setting where the convex loss is non-smooth.
First, we show a generic reduction to the smooth case by employing the smoothing technique known as \emph{Moreau-Yosida regularization} (a.k.a. Moreau envelope smoothing) \cite{nesterov2005smooth}. Given an appropriately smoothed version of the loss, we obtain the optimal population loss w.r.t. the original non-smooth loss function. Computing the smoothed loss via this technique is generally computationally inefficient. Hence, we move on to describe a computationally efficient algorithm for the non-smooth case with essentially optimal population loss. Our construction is based on an adaptation of our noisy SGD algorithm $\mathcal{A}_{\sf NSGD}$ (Algorithm~\ref{Alg:NSGD-smooth}) that exploits some useful properties of the Moreau-Yosida smoothing technique that stem from its connection to proximal operations. \begin{defn}[Moreau envelope]\label{defn:moreau} Let $f:\mathcal{W}\rightarrow \mathbb{R}$ be a convex function, and $\beta>0$. The $\beta$-Moreau envelope of $f$ is a function $f_{\beta}:\mathcal{W}\rightarrow\mathbb{R}$ defined as $$f_{\beta}(\mathbf{w})=\min\limits_{\mathbf{v}\in\mathcal{W}}\left(f(\mathbf{v})+\frac{\beta}{2}\norm{\mathbf{w}-\mathbf{v}}^2\right), \quad \mathbf{w}\in\mathcal{W}.$$ \end{defn} The Moreau envelope has a direct connection with the proximal operator of a function, defined below. \begin{defn}[Proximal operator]\label{defn:prox} The prox operator of $f:\mathcal{W}\rightarrow\mathbb{R}$ is defined as $${\sf prox}_f(\mathbf{w})=\arg\min\limits_{\mathbf{v}\in\mathcal{W}}\left(f(\mathbf{v})+\frac{1}{2}\norm{\mathbf{w}-\mathbf{v}}^2\right), \quad \mathbf{w}\in\mathcal{W}.$$ \end{defn} It follows that the Moreau envelope $f_{\beta}$ can be written as $$f_{\beta}(\mathbf{w})=f\left({\sf prox}_{f/\beta}\left(\mathbf{w}\right)\right)+\frac{\beta}{2}\norm{\mathbf{w}-{\sf prox}_{f/\beta}\left(\mathbf{w}\right)}^2.$$ The following lemma states some useful, known properties of the Moreau envelope. \begin{lem}[See \cite{nesterov2005smooth, candes_opt_notes}]\label{lem:prop_moreau} Let $f:\mathcal{W}\rightarrow \mathbb{R}$ be a convex, $L$-Lipschitz function, and let $\beta>0$. The $\beta$-Moreau envelope $f_{\beta}$ satisfies the following: \begin{enumerate} \item $f_{\beta}$ is convex, $2L$-Lipschitz, and $\beta$-smooth.\label{prop:moreau_1} \item $\forall \mathbf{w}\in \mathcal{W}\quad f_{\beta}(\mathbf{w})\leq f(\mathbf{w})\leq f_{\beta}(\mathbf{w})+\frac{L^2}{2\,\beta}.$\label{prop:moreau_2} \item $\forall \mathbf{w}\in\mathcal{W} \quad \nabla f_{\beta}(\mathbf{w})=\beta\, \left(\mathbf{w} - {\sf prox}_{f/\beta}(\mathbf{w})\right).$\label{prop:moreau_3} \end{enumerate} \end{lem} The convexity and $\beta$-smoothness, together with properties~\ref{prop:moreau_2} and \ref{prop:moreau_3}, are fairly standard, and the proofs can be found in the aforementioned references. The fact that $f_{\beta}$ is $2L$-Lipschitz follows easily from property~\ref{prop:moreau_3}. We include the proof of this fact in Appendix~\ref{app:B} for completeness.
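As a quick illustration of these definitions (added here for exposition; the closed forms below are standard and not part of the original text): for the scalar function $f(v)=L\,\lvert v\rvert$ on $\mathcal{W}=\mathbb{R}$, the prox operator is soft-thresholding and $f_{\beta}$ is the Huber function. The sketch below checks property~\ref{prop:moreau_3} numerically:

\begin{verbatim}
import numpy as np

L, beta = 1.0, 5.0

def prox(w):               # prox_{f/beta} for f(v) = L*|v|: soft-thresholding
    return np.sign(w) * np.maximum(np.abs(w) - L / beta, 0.0)

def f_beta(w):             # resulting Moreau envelope is the Huber function
    v = prox(w)
    return L * np.abs(v) + 0.5 * beta * (w - v) ** 2

w = np.linspace(-2.0, 2.0, 401)
grad_numeric = np.gradient(f_beta(w), w)      # finite-difference d/dw f_beta
grad_prox = beta * (w - prox(w))              # property 3 of the lemma
print(np.max(np.abs(grad_numeric - grad_prox)))  # small, except near the kinks
\end{verbatim}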
Let $\ell:\mathcal{W}\times\mathcal{Z}\rightarrow\mathbb{R}$ be a convex, $L$-Lipschitz loss. For any $z\in\mathcal{Z},$ let $\ell_{\beta}(\cdot, z)$ denote the $\beta$-Moreau envelope of $\ell(\cdot,~z).$ For a dataset $S=(z_1, \ldots, z_n)\in\mathcal{Z}^n,$ let $\widehat{\mathcal{L}}_{\beta}(\cdot;~ S)\triangleq \frac{1}{n}\sum_{i=1}^n\ell_{\beta}(\cdot,~z_i)$ be the empirical risk w.r.t. the $\beta$-smoothed loss. For any distribution $\mathcal{D}$, let $\mathcal{L}_{\beta}(\cdot; \mathcal{D})\triangleq \ex{z\sim \mathcal{D}}{\ell_\beta(\cdot,~z)}$ denote the corresponding population loss. The following theorem asserts that, with an appropriate setting for $\beta,$ running $\mathcal{A}_{\sf NSGD}$ over the $\beta$-smoothed losses $\ell_{\beta}(\cdot, z_i), ~i\in [n]$ yields the optimal population loss w.r.t.~the original non-smooth loss $\ell$. \begin{thm}[Excess population loss for non-smooth losses via smoothing]\label{thm:pop_risk_non-smooth_generic} Let $\mathcal{D}$ be any distribution over $\mathcal{Z}$. Let $S=(z_1, \ldots, z_n)\sim\mathcal{D}^n$. Let $\beta = \frac{L}{M}\cdot \min\left(\frac{\sqrt{n}}{4}, \frac{\epsilon\,n}{8\sqrt{d\,\log(1/\delta)}}\right).$ Suppose we run $\mathcal{A}_{\sf NSGD}$ (Algorithm~\ref{Alg:NSGD-smooth}) over the $\beta$-smoothed version of $\ell$ associated with the points in $S$: $\left\{\ell_{\beta}(\cdot, z_i),~i\in[n]\right\}$. Let $\eta$ and $T$ be set as in Theorem~\ref{thm:pop_risk_Ansgd}. Then, the excess population loss of the output of $\mathcal{A}_{\sf NSGD}$ w.r.t. $\ell$ satisfies \begin{align*} \Delta\mathcal{L}\left(\mathcal{A}_{\sf NSGD}; \mathcal{D}\right)&\leq 24\,M\,L\cdot\max\left(\frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n},~\frac{1}{\sqrt{n}}\right) \end{align*} \end{thm} \begin{proof} Let $\overline{\mathbf{w}}_T$ be the output of $\mathcal{A}_{\sf NSGD}$. Using property~\ref{prop:moreau_1} of Lemma~\ref{lem:prop_moreau} together with Theorem~\ref{thm:pop_risk_Ansgd}, we have $$\ex{S\sim\mathcal{D}^n, \mathcal{A}_{\sf NSGD}}{\mathcal{L}_{\beta}(\overline{\mathbf{w}}_T; \mathcal{D})}-\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}_{\beta}(\mathbf{w};~\mathcal{D})\leq 20\,M\,L\cdot\max\left(\frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n},~\frac{1}{\sqrt{n}}\right).$$ Now, by property~\ref{prop:moreau_2} of Lemma~\ref{lem:prop_moreau} and the setting of $\beta$ in the theorem statement, for every $\mathbf{w}\in \mathcal{W}$, we have $$\mathcal{L}_{\beta}(\mathbf{w};~\mathcal{D})\leq \mathcal{L}(\mathbf{w};~\mathcal{D})\leq \mathcal{L}_{\beta}(\mathbf{w};~\mathcal{D})+2\,M\,L\cdot\max\left(\frac{1}{\sqrt{n}}, ~\frac{2 \sqrt{d\,\log(1/\delta)}}{\epsilon\, n}\right).$$ Putting these together gives the stated result. \end{proof} \subsection*{Computationally efficient algorithm $\mathcal{A}_{\sf ProxGD}$ ({\sf NSGD} + {\sf Prox})} Computing the Moreau envelope of a function is computationally inefficient in general. However, by property~\ref{prop:moreau_3} of Lemma~\ref{lem:prop_moreau}, the gradient of the Moreau envelope at any point can be obtained by evaluating the proximal operator of the function at that point. Evaluating the proximal operator is equivalent to minimizing a strongly convex function (see Definition~\ref{defn:prox}). This can be approximated efficiently, e.g., via gradient descent. Since our $\mathcal{A}_{\sf NSGD}$ algorithm (Algorithm~\ref{Alg:NSGD-smooth}) requires only sufficiently accurate gradient evaluations, we can hence use an efficient, approximate proximal operator to approximate the gradient of the smoothed losses. The gradient evaluations in $\mathcal{A}_{\sf NSGD}$ will thus be replaced with such approximate gradients evaluated via the approximate proximal operator. The resulting algorithm, referred to as $\mathcal{A}_{\sf ProxGD}$, will approximately minimize the smoothed empirical loss without actually computing the smoothed losses.
\begin{defn}[Approximate ${\sf prox}$ operator]\label{defn:approx-prox} We say that $\widehat{\prox}_f$ is a $\xi$-approximate proximal operator of ${\sf prox}_f$ for a function $f:\mathcal{W}\rightarrow\mathbb{R}$ if $~\forall \mathbf{w}\in\mathcal{W},~ \norm{\widehat{\prox}_f(\mathbf{w})-{\sf prox}_f(\mathbf{w})}\leq \xi.$ \end{defn} \begin{fact}\label{fact:prox-approx-err-conv} Let $\mathcal{W}\subset\mathbb{R}^d$ be $M$-bounded. Let $f:\mathcal{W}\rightarrow\mathbb{R}$ be a convex, $L$-Lipschitz function. Suppose $\beta\geq \frac{L}{M}$. For all $\xi>0$, there is a $\xi$-approximate $\widehat{\prox}_{f/\beta}$ such that for each $\mathbf{w}\in\mathcal{W}$, computing $\widehat{\prox}_{f/\beta}(\mathbf{w})$ requires time that is equivalent to at most $\lceil\frac{8\,M^2}{\xi^2}\rceil$ gradient evaluations. \end{fact} This follows from the fact that ${\sf prox}_{f/\beta}(\mathbf{w})=\arg\min\limits_{\mathbf{v}\in\mathcal{W}}g_{\mathbf{w}}(\mathbf{v}),$ where $g_{\mathbf{w}}(\mathbf{v})\triangleq\frac{1}{\beta}\,f(\mathbf{v}) + \frac{1}{2}\norm{\mathbf{v} -\mathbf{w}}^2$. This is the minimization of a $1$-strongly convex, $2\,M$-Lipschitz function over $\mathcal{W}$; the Lipschitz constant follows from the fact that $\beta\geq L/M$. Hence, one can run ordinary gradient descent to obtain an approximate minimizer. From a standard result on the convergence of GD for strongly convex and Lipschitz functions \cite{bubeck2015convex}, in $\tau$ gradient steps we obtain an approximate minimizer $\mathbf{v}_{\tau}$ satisfying $g_{\mathbf{w}}(\mathbf{v}_{\tau})-g_{\mathbf{w}}(\mathbf{v}^*)\leq \frac{8\,M^2}{\tau}$, where $\mathbf{v}^*=\arg\min\limits_{\mathbf{v}\in\mathcal{W}}g_{\mathbf{w}}(\mathbf{v})$. Since $g_{\mathbf{w}}$ is $1$-strongly convex, we get $\norm{\mathbf{v}_{\tau}-\mathbf{v}^*}\leq \sqrt{\frac{8\,M^2}{\tau}}$. \paragraph{Description of $\mathcal{A}_{\sf ProxGD}$:} The algorithm description follows exactly the same lines as $\mathcal{A}_{\sf NSGD}$ except that: (i) the input loss $\ell$ is now non-smooth, and (ii) for each iteration $t$, the gradient evaluation $\nabla \ell(\mathbf{w}_t, z)$ for each data point $z$ in the mini-batch is replaced with the evaluation of an approximate gradient of the smoothed loss $\ell_{\beta}(\cdot, z)$. The approximate gradient, denoted as $\widehat{\nabla} \ell_{\beta}(\mathbf{w}_t, z)$, is computed using an approximate proximal operator. Namely, $$\widehat{\nabla} \ell_{\beta}(\mathbf{w}_t, z):= \beta\cdot\left(\mathbf{w}_t-\widehat{\prox}_{\ell_z/\beta}(\mathbf{w}_t)\right),$$ where $\ell_z\triangleq \ell(\cdot,~z)$. Here, we use a computationally efficient $\xi$-approximate $\widehat{\prox}_{\ell_z/\beta}$ like the one in Fact~\ref{fact:prox-approx-err-conv} with $\xi$ set as $$\xi:= 4\,\frac{M}{n}\cdot\max\left(\frac{2\,\sqrt{d\,\log(1/\delta)}}{\epsilon\,n},~\frac{1}{\sqrt{n}}\right).$$ Note that the approximation error in the gradient satisfies $\norm{\widehat{\nabla}\ell_{\beta}(\mathbf{w}_t, z)-\nabla\ell_{\beta}(\mathbf{w}_t, z)}\leq \beta\cdot\xi$, and that $\beta\cdot\xi=\frac{L}{n},$ where $L$ is the Lipschitz constant of $\ell$.
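To make this concrete, here is a minimal NumPy sketch of the approximate prox and the resulting approximate gradient of the Moreau envelope (added for illustration only; the function names and the $1/t$ step-size schedule for the $1$-strongly convex $g_{\mathbf{w}}$ are our own choices, and $\mathcal{W}$ is taken to be the $M$-ball):

\begin{verbatim}
import numpy as np

def project(v, M):                         # Euclidean projection onto the M-ball
    nrm = np.linalg.norm(v)
    return v if nrm <= M else v * (M / nrm)

def approx_prox(subgrad_f, w, beta, M, steps):
    """xi-approximate prox_{f/beta}(w): projected subgradient descent on the
    1-strongly convex g_w(v) = f(v)/beta + 0.5*||v - w||^2."""
    v = w.copy()
    for t in range(1, steps + 1):
        g = subgrad_f(v) / beta + (v - w)  # subgradient of g_w at v
        v = project(v - g / t, M)          # step size 1/t (strong convexity)
    return v

def approx_moreau_grad(subgrad_f, w, beta, M, steps=1000):
    # property 3 of the lemma: grad f_beta(w) = beta * (w - prox_{f/beta}(w))
    return beta * (w - approx_prox(subgrad_f, w, beta, M, steps))
\end{verbatim}

For instance, with $f(\mathbf{v})=L\,\norm{\mathbf{v}}$ (so \texttt{subgrad\_f} returns $L\,\mathbf{v}/\norm{\mathbf{v}}$ away from the origin), the output approaches the soft-thresholding map up to the optimization error.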
\paragraph{Running time of $\mathcal{A}_{\sf ProxGD}$:} If we use the approximate proximal operator in Fact~\ref{fact:prox-approx-err-conv}, then it is easy to see that $\mathcal{A}_{\sf ProxGD}$ requires a number of gradient evaluations that is a factor of $n^2\,T$ more than $\mathcal{A}_{\sf NSGD}$, where $T=O\left(\max\left(n,~\frac{\epsilon^2\,n^2}{d\,\log(1/\delta)}\right)\right).$ That is, the total number of gradient evaluations is $n^2\cdot T^2\cdot m,$ where $m=O\left(\max\left(\sqrt{\epsilon\,n},~\sqrt{\frac{d\,\log(1/\delta)}{\epsilon}}\right)\right)$ is the mini-batch size. We now argue that privacy, stability, and accuracy of the algorithm are preserved under the approximate proximal operator. \paragraph{Privacy:} Note that to bound the sensitivity of the approximate gradient of the mini-batch, it suffices to bound the norm of the approximate gradient. From the discussion above, and since $\ell_{\beta}(\cdot, z)$ is $2L$-Lipschitz (Lemma~\ref{lem:prop_moreau}), $\forall ~z, \forall~ \mathbf{w}\in\mathcal{W},$ we have $\norm{\widehat{\nabla}\ell_{\beta}(\mathbf{w}, z)}\leq \norm{\widehat{\nabla}\ell_{\beta}(\mathbf{w}, z)-\nabla\ell_{\beta}(\mathbf{w}, z)}+\norm{\nabla\ell_{\beta}(\mathbf{w}, z)}\leq 2\,L+\frac{L}{n}.$ Thus, the sensitivity remains of the same order as in the case where the algorithm is run with the exact gradients. Hence, the same privacy guarantee holds as in $\mathcal{A}_{\sf NSGD}$. \paragraph{Empirical error:} Note that the approximation error in the gradient of the mini-batch (due to the approximate proximal operation) can be viewed as a \emph{fixed} error term of magnitude at most $\frac{L}{n}$ that is added to the exact gradient of the smoothed loss. It is well-known and easy to see that the effect of this additional approximation error on the standard convergence bounds is that the excess empirical loss may grow by at most the error times the diameter of the domain (e.g.~\citep{nedic2010effect,FeldmanGV:15}). Hence, compared to the error bound in Lemma~\ref{lem:emp-risk}, the bound we get incurs an additional term of $2LM/n$. Clearly, this additional error is dominated by the other terms in the empirical loss bound in Lemma~\ref{lem:emp-risk}, and thus will have no significant impact on the final bound. \paragraph{Uniform stability:} This easily follows from the following facts. First, note that the additional approximation error due to gradient approximation is $\frac{L}{n}$. Second, the gradient update w.r.t.~the exact gradient of the smoothed loss is a non-expansive operation (which is the key fact in proving uniform stability of (stochastic) gradient methods \cite{hardt2015train, feldman2019high}), and hence the approximation error in the gradient is not going to be amplified by the gradient update step. Hence, for any trajectory of $T$ approximate gradient updates, the accumulated approximation error in the final output $\overline{\mathbf{w}}_T$ cannot exceed $\frac{T\,\eta\,L}{n}$. This cannot increase the final uniform stability bound by more than an additive term of $\frac{T\,\eta\,L^2}{n}$. Thus, we obtain essentially the same bound as in Lemma~\ref{lem:stability}. Putting these together, we have argued that $\mathcal{A}_{\sf ProxGD}$ is a computationally efficient algorithm that achieves the optimal population loss bound in Theorem~\ref{thm:pop_risk_non-smooth_generic}. \section{Private SCO via Objective Perturbation} In this section, we show that the technique known as objective perturbation \cite{CMS, kifer2012private} can be used to attain optimal \emph{population} loss for a large subclass of convex, smooth losses.
In objective perturbation, the empirical loss is first perturbed by adding two terms: a \emph{noisy} linear term and a regularization term. As shown in \cite{CMS, kifer2012private}, under some additional assumptions on the Hessian of the loss, an appropriate random perturbation ensures differential privacy. The excess \emph{empirical} loss of this technique for smooth convex losses was originally analyzed in the aforementioned works, and was shown to be optimal by the lower bound in \cite{bassily2014differentially}. We revisit this technique and show that the regularization term added for privacy can be used to attain the optimal excess population loss by exploiting the stability-inducing property of regularization. In addition to smoothness and convexity of $\ell$, as in \cite{CMS, kifer2012private}, we also make the following assumption on the loss function. \begin{assumption}\label{assump:twice-diff} For all $z\in\mathcal{Z},~ \ell(\cdot,~z)$ is twice-differentiable, and the rank of its Hessian $\nabla^2 \ell(\mathbf{w}, z)$ at any $\mathbf{w}\in\mathcal{W}$ is at most $1$. \end{assumption} The description of the objective perturbation algorithm $\mathcal{A}_{\sf ObjP}$ is given in Algorithm~\ref{Alg:ObjP-smooth}. The outline of the algorithm is the same as the one in \cite{kifer2012private} for the case of $(\epsilon, \delta)$-differential privacy. \begin{algorithm} \caption{$\mathcal{A}_{\sf ObjP}$: Objective Perturbation for convex, smooth losses} \begin{algorithmic}[1] \REQUIRE Private dataset: $S=(z_1, \ldots, z_n)\in \mathcal{Z}^n$, $L$-Lipschitz, $\beta$-smooth, convex loss function $\ell$, convex set $\mathcal{W}\subseteq \mathbb{R}^d$, privacy parameters $\epsilon \leq 1,\, \delta \leq 1/n^2$, regularization parameter $\lambda$. \STATE Sample $\mathbf{G}\sim \mathcal{N}\left(\mathbf{0}, \sigma^2\,\mathbb{I}_d\right),$ where $\sigma^2= \frac{10\,L^2\,\log(1/\delta)}{\epsilon^2}$. \RETURN $\widehat{\mathbf{w}}=\arg\min\limits_{\mathbf{w}\in\mathcal{W}} \widehat{\mathcal{L}}\left(\mathbf{w};~S\right)+\frac{\langle \mathbf{G}, ~\mathbf{w}\rangle}{n}+\lambda\norm{\mathbf{w}}^2,$ where $\widehat{\mathcal{L}}(\mathbf{w}; ~S)\triangleq \frac{1}{n}\sum_{i=1}^n\ell(\mathbf{w}, ~z_i).$ \end{algorithmic} \label{Alg:ObjP-smooth} \end{algorithm} \paragraph{Note:} The regularization term as it appears in $\mathcal{A}_{\sf ObjP}$ is scaled differently from the one in \cite{kifer2012private}. In particular, the regularization term in \cite{kifer2012private} is normalized by $n$, whereas here it is not. Hence, whenever the results from \cite{kifer2012private} are used here, the regularization parameter in their statements should be replaced with $n\lambda$. This presentation choice is more consistent with the literature on regularization. The privacy guarantee of $\mathcal{A}_{\sf ObjP}$ is given in the following theorem, which follows directly from \cite{kifer2012private}. \begin{thm}[Privacy guarantee of $\mathcal{A}_{\sf ObjP}$, restatement of Theorem~2 in \cite{kifer2012private}]\label{thm:priv_Aobj} Suppose that Assumption~\ref{assump:twice-diff} holds and that the smoothness parameter satisfies $\beta \leq \epsilon\,n\,\lambda$. Then, $\mathcal{A}_{\sf ObjP}$ is $(\epsilon, \delta)$-differentially private. \label{thm:privObjPert} \end{thm}
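For intuition, the following NumPy sketch instantiates $\mathcal{A}_{\sf ObjP}$ for the logistic loss, whose Hessian has rank one as required by Assumption~\ref{assump:twice-diff}. This example and all names are ours; moreover, the exact $\arg\min$ is approximated here by projected gradient descent, whereas the theorem above is stated for the exact minimizer (Section~\ref{sec:OracEff} treats approximate minimization rigorously):

\begin{verbatim}
import numpy as np

def obj_perturb(X, y, L, lam, eps, delta, M, steps=2000, eta=0.01,
                rng=np.random.default_rng()):
    """Sketch of A_ObjP with logistic loss l(w,(x,y)) = log(1+exp(-y<x,w>))."""
    n, d = X.shape
    sigma = np.sqrt(10 * np.log(1 / delta)) * L / eps
    G = rng.normal(scale=sigma, size=d)        # noisy linear term <G, w>/n
    w = np.zeros(d)
    for _ in range(steps):                     # projected GD on the objective
        margins = y * (X @ w)
        grad_emp = -(X * (y / (1 + np.exp(margins)))[:, None]).mean(axis=0)
        w = w - eta * (grad_emp + G / n + 2 * lam * w)
        nrm = np.linalg.norm(w)
        if nrm > M:                            # projection onto the M-ball W
            w = w * (M / nrm)
    return w
\end{verbatim}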
We now state our main result for this section, showing that, with an appropriate setting for $\lambda$, $\mathcal{A}_{\sf ObjP}$ yields asymptotically optimal excess population loss. \begin{thm}[Excess population loss of $\mathcal{A}_{\sf ObjP}$]\label{thm:pop_risk_Aobj} Let $\mathcal{D}$ be any distribution over $\mathcal{Z},$ and let $S\sim\mathcal{D}^n$. Suppose that Assumption~\ref{assump:twice-diff} holds. Suppose that $\mathcal{W}$ is $M$-bounded. In $\mathcal{A}_{\sf ObjP},$ set $\lambda= \frac{2\,L}{M}\sqrt{\frac{2}{n}+\frac{4\,d\,\log(1/\delta)}{\epsilon^2\, n^2}}.$ Then, we have \begin{align*} \Delta\mathcal{L}\left(\mathcal{A}_{\sf ObjP};~\mathcal{D}\right)&\leq 2\,M\,L\,\sqrt{\frac{2}{n}+\frac{4\,d\,\log(1/\delta)}{\epsilon^2\, n^2}}=O\left(M\,L\cdot \max\left(\frac{1}{\sqrt{n}},~ \frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n}\right)\right). \end{align*} \end{thm} \paragraph{Note:} According to Theorem~\ref{thm:priv_Aobj}, $(\epsilon, \delta)$-differential privacy of $\mathcal{A}_{\sf ObjP}$ entails the assumption that $\beta \leq \epsilon\, n\, \lambda.$ With the setting of $\lambda$ in Theorem~\ref{thm:pop_risk_Aobj}, it would suffice to assume that $\beta \leq \frac{2\,\epsilon\, L}{M}\sqrt{2\,n+4\,d\,\log(1/\delta)}.$ To prove the above theorem, we use the following lemmas. \begin{lem}[Excess empirical loss of $\mathcal{A}_{\sf ObjP}$, restatement of Theorem~26 in \cite{kifer2012private}]\label{lem:emp_risk_Aobj} Let $S\in\mathcal{Z}^n$. Under Assumption~\ref{assump:twice-diff}, the excess empirical loss of $\mathcal{A}_{\sf ObjP}$ satisfies \begin{align*} \ex{}{\widehat{\mathcal{L}}(\widehat{\mathbf{w}}; S)}-\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w}; S)&\leq \frac{16\, L^2\,d\,\log(1/\delta)}{n^2\,\epsilon^2\,\lambda}+\lambda\,M^2, \end{align*} where the expectation is taken over the Gaussian noise in $\mathcal{A}_{\sf ObjP}$. \end{lem} The next lemma states the well-known stability property of regularized empirical risk minimization. \begin{lem}[\cite{shalev2014understanding}]\label{lem:gen-error-regularize} Let $f:\mathcal{W}\times\mathcal{Z}\rightarrow \mathbb{R}$ be a convex, $\rho$-Lipschitz loss, and let $\lambda>0$. Let $S=(z_1, \ldots, z_n)\in\mathcal{Z}^n$. Let $\mathcal{A}$ be an algorithm that outputs $\widetilde{\mathbf{w}}=\arg\min\limits_{\mathbf{w}\in\mathcal{W}} \left(\widehat{\mathcal{F}}(\mathbf{w};~ S)+\lambda\,\norm{\mathbf{w}}^2\right),$ where $\widehat{\mathcal{F}}(\mathbf{w};~S)=\frac{1}{n}\sum_{i=1}^n f(\mathbf{w},~z_i).$ Then, $\mathcal{A}$ is $\frac{2\,\rho^2}{\lambda\,n}$-uniformly stable. \end{lem} \subsubsection*{Proof of Theorem~\ref{thm:pop_risk_Aobj}} Fix any realization of the noise vector $\mathbf{G}$. For every $\mathbf{w}\in\mathcal{W}, z\in\mathcal{Z},$ define $f_{\mathbf{G}}(\mathbf{w}, z)\triangleq \ell(\mathbf{w},~z)+\frac{\langle \mathbf{G}, \mathbf{w}\rangle}{n}.$ Note that $f_{\mathbf{G}}$ is $\left(L+\frac{\norm{\mathbf{G}}}{n}\right)$-Lipschitz. For any dataset $S=(z_1, \ldots, z_n)\in\mathcal{Z}^n,$ define $\widehat{\mathcal{F}}_{\mathbf{G}}(\mathbf{w}; S)\triangleq \frac{1}{n}\sum_{i=1}^n f_{\mathbf{G}}(\mathbf{w}, z_i).$ Hence, the output $\widehat{\mathbf{w}}$ of $\mathcal{A}_{\sf ObjP}$ on input dataset $S$ can be written as $\widehat{\mathbf{w}}=\arg\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{F}}_{\mathbf{G}}(\mathbf{w};~S)+\lambda\,\norm{\mathbf{w}}^2$.
Define $\mathcal{F}_{\mathbf{G}}(\mathbf{w};~\mathcal{D})\triangleq \ex{z\sim \mathcal{D}}{f_{\mathbf{G}}(\mathbf{w},~z)}.$ Thus, for any fixed $\mathbf{G},$ by combining Lemma~\ref{lem:gen-error-regularize} with Lemma~\ref{lem:gen_err_stability}, we have $\ex{S\sim\mathcal{D}^n}{\mathcal{F}_{\mathbf{G}}(\widehat{\mathbf{w}};~\mathcal{D})-\widehat{\mathcal{F}}_{\mathbf{G}}(\widehat{\mathbf{w}};~S)}\leq \frac{2\,\left(L+\frac{\norm{\mathbf{G}}}{n}\right)^2}{\lambda\,n}.$ On the other hand, note that for any dataset $S,$ we always have $\mathcal{F}_{\mathbf{G}}(\widehat{\mathbf{w}};~\mathcal{D})-\widehat{\mathcal{F}}_{\mathbf{G}}(\widehat{\mathbf{w}};~S)=\mathcal{L}(\widehat{\mathbf{w}};~\mathcal{D})-\widehat{\mathcal{L}}(\widehat{\mathbf{w}};~S)$ since the linear term cancels out. Hence, the expected generalization error (w.r.t. $S$) satisfies \begin{align} \ex{S\sim\mathcal{D}^n}{\mathcal{L}(\widehat{\mathbf{w}};~\mathcal{D})-\widehat{\mathcal{L}}(\widehat{\mathbf{w}};~S)}&\leq 2\,\frac{\left(L+\frac{\norm{\mathbf{G}}}{n}\right)^2}{\lambda\,n}\nonumber \end{align} Now, by taking expectation over $\mathbf{G}\sim\mathcal{N}\left(\mathbf{0}, \sigma^2\mathbb{I}_d\right)$ as well, we arrive at \begin{align} \ex{}{\mathcal{L}(\widehat{\mathbf{w}};~\mathcal{D})-\widehat{\mathcal{L}}(\widehat{\mathbf{w}};~S)}&\leq 2\,L^2\,\frac{\left(1+\frac{\sqrt{10\,d\,\log(1/\delta)}}{\epsilon\,n}\right)^2}{\lambda\,n}\leq 8\,\frac{L^2}{\lambda\,n}\label{ineq:gen-error-bound} \end{align} where we assume $\frac{\sqrt{10\,d\,\log(1/\delta)}}{\epsilon\, n}\leq 1$ (since otherwise the bound is trivial). Now, observe that: \begin{align*} \Delta\mathcal{L}\left(\mathcal{A}_{\sf ObjP}; \mathcal{D}\right)&=\ex{}{\mathcal{L}(\widehat{\mathbf{w}}; \mathcal{D})}-\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}(\mathbf{w};~\mathcal{D})\\ &\leq\ex{}{\widehat{\mathcal{L}}(\widehat{\mathbf{w}};~S)-\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w};~S)}+\ex{}{\mathcal{L}(\widehat{\mathbf{w}};~\mathcal{D})-\widehat{\mathcal{L}}(\widehat{\mathbf{w}};~S)}\\ &\leq \frac{8}{\lambda}\left(\frac{2\,L^2\,d\,\log(1/\delta)}{\epsilon^2\,n^2}+\frac{L^{\,2}}{n}\right) + \lambda\,M^2, \end{align*} where the second inequality follows from the fact that $\ex{S\sim\mathcal{D}^n}{\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w};~S)}\leq \min\limits_{\mathbf{w}\in\mathcal{W}}\ex{S\sim\mathcal{D}^n}{\widehat{\mathcal{L}}(\mathbf{w};~S)}=\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}(\mathbf{w};~\mathcal{D})$, and the last bound follows from combining (\ref{ineq:gen-error-bound}) with Lemma~\ref{lem:emp_risk_Aobj}. Optimizing this bound in $\lambda$ yields the setting of $\lambda$ in the theorem statement. Plugging that setting of $\lambda$ into the bound yields the stated bound on the excess population loss. \mypar{A note on the rank assumption} While in this section we presented our result under the assumption that the rank of $\nabla^2\ell(\mathbf{w},z)$ is at most one, one can extend the analysis (using an argument similar to that in \cite{iyengartowards}) to a rank of $\widetilde{O}\left(\frac{L\sqrt{n+d}}{\beta M}\right)$ without affecting the asymptotic population loss guarantees.
In general, to ensure differential privacy of $\mathcal{A}_{\sf ObjP}$, one only needs the following assumption involving the Hessian of individual losses: $\left|{\sf det}\left(\mathbb{I}+\frac{\nabla^2\ell(\mathbf{w},z)}{\lambda}\right)\right|\leq e^{\epsilon/2}$ for all $z\in\mathcal{Z}$ and $\mathbf{w}\in\mathcal{W}$, rather than a constraint on the rank. \subsection{Oracle Efficient Objective Perturbation} \label{sec:OracEff} The privacy guarantee of the standard objective perturbation technique is given only when the output is the exact minimizer \cite{CMS, kifer2012private}. In practice, we usually cannot attain the exact minimizer, but rather obtain an approximate minimizer via efficient optimization methods. Hence, in this section we focus on providing a practical version of algorithm $\mathcal{A}_{\sf ObjP}$, called \emph{approximate objective perturbation} (Algorithm $\mathcal{A}_{\sf ObjP-App}$), that i) is $(\epsilon,\delta)$-differentially private, ii) achieves nearly the same population loss as $\mathcal{A}_{\sf ObjP}$, and iii) only makes $O(n\log n)$ evaluations of the gradient $\nabla_\mathbf{w}\ell(\mathbf{w}, z)$ at any $\mathbf{w}\in\mathcal{W}$ and $z\in\mathcal{Z}$. The main idea in $\mathcal{A}_{\sf ObjP-App}$ is to first obtain a $\mathbf{w}_2$ that ensures $\mathcal{J}(\mathbf{w}_2;S)-\min\limits_{\mathbf{w}\in\mathcal{W}} \mathcal{J}(\mathbf{w};S)$ is at most $\alpha$, and then perturb $\mathbf{w}_2$ with Gaussian noise to ``fuzz'' the difference between $\mathbf{w}_2$ and the true minimizer. In this work, we use stochastic variance-reduced gradient descent (SVRG) \cite{johnson2013accelerating,xiao2014proximal} as the optimization algorithm. This leads to a construction that requires near-linear oracle complexity (i.e., number of gradient evaluations). In particular, $\mathcal{A}_{\sf ObjP-App}$ achieves oracle complexity of $O(n\log n)$ and asymptotically optimal excess population loss. \begin{algorithm} \caption{$\mathcal{A}_{\sf ObjP-App}$: Approximate Objective Perturbation for convex, smooth losses} \begin{algorithmic}[1] \REQUIRE Private dataset: $S=(z_1, \ldots, z_n)\in \mathcal{Z}^n$, $L$-Lipschitz, $\beta$-smooth, convex loss function $\ell$, convex set $\mathcal{W}\subseteq \mathbb{R}^d$, privacy parameters $\epsilon \leq 1,\, \delta \leq 1/n^2$, regularization parameter $\lambda$, optimizer $\mathcal{O}:\mathcal{F}\times[0,1]\rightarrow\mathcal{W}$ (where $\mathcal{F}$ is the class of objectives, and the other argument is the optimization accuracy), $\alpha\in[0,1]:$ optimization accuracy. \STATE Sample $\mathbf{G}\sim \mathcal{N}\left(\mathbf{0}, \sigma^2\,\mathbb{I}_d\right),$ where $\sigma^2= \frac{20\,L^2\,\log(1/\delta)}{\epsilon^2}$. \STATE Let $\mathcal{J}(\mathbf{w};S)=\widehat{\mathcal{L}}\left(\mathbf{w};~S\right)+\frac{\langle \mathbf{G}, ~\mathbf{w}\rangle}{n}+\lambda\norm{\mathbf{w}}^2,$ where $\widehat{\mathcal{L}}(\mathbf{w}; ~S)\triangleq \frac{1}{n}\sum_{i=1}^n\ell(\mathbf{w}, ~z_i).$ {\RETURN $\widehat{\mathbf{w}}=\mathsf{Proj}_\mathcal{W}\left[ \mathcal{O}\left(\mathcal{J},\alpha\right)+\mathbf{H}\right]$, where $\mathbf{H}\sim\mathcal{N}\left(\mathbf{0}, \sigma_H^2\,\mathbb{I}_d\right)$, and $\sigma_H^2= \frac{40\,\alpha\,\log(1/\delta)}{\lambda\,\epsilon^2}$\label{step:3}}. \end{algorithmic} \label{Alg:ObjP-smooth-app} \end{algorithm} \begin{thm}[Privacy guarantee of $\mathcal{A}_{\sf ObjP-App}$] Suppose that Assumption~\ref{assump:twice-diff} holds and that the smoothness parameter satisfies $\beta \leq \epsilon\,n\,\lambda$.
Then, Algorithm $\mathcal{A}_{\sf ObjP-App}$ is $(\epsilon, \delta)$-differentially private. \label{thm:privObjApp} \end{thm} \begin{proof} Let $\mathbf{w}_1=\arg\min\limits_{\mathbf{w}\in\mathcal{W}}\underbrace{\widehat{\mathcal{L}}\left(\mathbf{w};~S\right)+\frac{\langle \mathbf{G}, ~\mathbf{w}\rangle}{n}+\lambda\norm{\mathbf{w}}^2}_{\mathcal{J}(\mathbf{w},S)}$, and $\mathbf{w}_2=\mathcal{O}(\mathcal{J},\alpha)$, where $\mathcal{O}$ is the optimizer defined in Algorithm $\mathcal{A}_{\sf ObjP-App}$. Notice that one can compute $\widehat{\mathbf{w}}$ from the tuple $(\mathbf{w}_1,\mathbf{w}_2-\mathbf{w}_1+\mathbf{H})$ by simple post-processing. Furthermore, the algorithm that outputs $\mathbf{w}_1$ is $(\epsilon/2,\delta/2)$-differentially private by Theorem \ref{thm:privObjPert}. In the following, we will bound $\|\mathbf{w}_2-\mathbf{w}_1\|$ in order to make $(\mathbf{w}_2-\mathbf{w}_1 +\mathbf{H})$ differentially private, conditioned on the knowledge of $\mathbf{w}_1$. As $\mathcal{J}(\mathbf{w},S)$ is $\lambda$-strongly convex, $\mathcal{J}(\mathbf{w}_2,S)\geq \mathcal{J}(\mathbf{w}_1,S)+\frac{\lambda}{2}\|\mathbf{w}_2-\mathbf{w}_1\|^2$ so that \begin{align} \|\mathbf{w}_2-\mathbf{w}_1\|\leq\sqrt{\frac{2\cdot|\mathcal{J}(\mathbf{w}_2,S)-\mathcal{J}(\mathbf{w}_1,S)|}{\lambda}}\leq\sqrt{\frac{2\alpha}{\lambda}}. \label{eq:abc12} \end{align} From eq.~\eqref{eq:abc12} it follows that, conditioned on $\mathbf{w}_1$, $\mathbf{w}_2-\mathbf{w}_1$ has $\ell_2$-sensitivity of $\sqrt{\frac{8\alpha}{\lambda}}$. Therefore, by the standard analysis of the Gaussian mechanism~\cite{DR14}, it follows that $(\mathbf{w}_2-\mathbf{w}_1)+\mathbf{H}$ (with $\mathbf{H}$ sampled as in Step \ref{step:3} of Algorithm $\mathcal{A}_{\sf ObjP-App}$) satisfies $(\epsilon/2,\delta/2)$-differential privacy. Therefore by standard composition~\cite{DR14}, the tuple $(\mathbf{w}_1,\mathbf{w}_2-\mathbf{w}_1+\mathbf{H})$ (and hence $\widehat{\mathbf{w}}$) satisfies $(\epsilon,\delta)$-differential privacy. \end{proof} \begin{thm}[Excess population loss guarantee of $\mathcal{A}_{\sf ObjP-App}$] Let $\mathcal{D}$ be any distribution over $\mathcal{Z},$ and let $S\sim\mathcal{D}^n$. Suppose that Assumption~\ref{assump:twice-diff} holds and that $\mathcal{W}$ is $M$-bounded. In Algorithm $\mathcal{A}_{\sf ObjP-App}$, set $\lambda= \frac{2\,L}{M}\sqrt{\frac{2}{n}+\frac{4\,d\,\log(1/\delta)}{\epsilon^2\, n^2}}$, $\alpha=\frac{M^2\lambda}{n^2}$. Then, we have \begin{align*} \Delta\mathcal{L}\left(\mathcal{A}_{\sf ObjP-App};~\mathcal{D}\right)&\leq O\left(M\,L\cdot \max\left(\frac{1}{\sqrt{n}},~ \frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n}\right)\right). \end{align*} \label{thm:excessPop} \end{thm} \begin{proof} Let $\mathbf{w}_1=\arg\min\limits_{\mathbf{w}\in\mathcal{W}}{\widehat{\mathcal{L}}\left(\mathbf{w};~S\right)+\frac{\langle \mathbf{G}, ~\mathbf{w}\rangle}{n}+\lambda\norm{\mathbf{w}}^2}$. 
For $\widehat{\mathbf{w}}$ defined in Step \ref{step:3} of $\mathcal{A}_{\sf ObjP-App}$, notice that using Theorem \ref{thm:pop_risk_Aobj}, $$\Delta\mathcal{L}\left(\widehat{\mathbf{w}};~\mathcal{D}\right)\leq \Delta\mathcal{L}\left(\mathbf{w}_1;~\mathcal{D}\right)+L\cdot\mathbb{E}\left[\|\widehat{\mathbf{w}}-\mathbf{w}_1\|\right]\leq O\left(M\,L\cdot \max\left(\frac{1}{\sqrt{n}},~ \frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n}\right)\right)+L\cdot\mathbb{E}\left[\|\mathbf{H}\|\right],$$ where, by eq.~\eqref{eq:abc12} and the setting of $\alpha$, the contribution of $\|\mathbf{w}_2-\mathbf{w}_1\|\leq\sqrt{2\alpha/\lambda}=\sqrt{2}\,M/n$ is of lower order and is absorbed into the first term. Now, $$\mathbb{E}\left[\|\mathbf{H}\|\right]=O\left(\sqrt{\frac{d\,\alpha\,\log(1/\delta)}{\lambda\,\epsilon^2}}\right)=O\left(M\cdot\frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n}\right)$$ when $\alpha=\frac{M^2\lambda}{n^2}$. Therefore, $\Delta\mathcal{L}\left(\widehat{\mathbf{w}};~\mathcal{D}\right)\leq O\left(M\,L\cdot \max\left(\frac{1}{\sqrt{n}},~ \frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n}\right)\right)$, which completes the proof. \end{proof} \mypar{Oracle complexity} The population loss guarantee of Algorithm $\mathcal{A}_{\sf ObjP-App}$ is independent of the choice of the exact optimizer $\mathcal{O}$, as long as it produces a $\widehat{\mathbf{w}}\in\mathcal{W}$ for an objective function $\mathcal{J}$ such that $\mathcal{J}(\widehat{\mathbf{w}})-\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{J}(\mathbf{w})\leq \alpha$, where $\alpha=\frac{M^2\lambda}{n^2}$ (defined in Theorem \ref{thm:excessPop}). We will now show that if one uses SVRG (stochastic variance-reduced gradient descent) from \cite{johnson2013accelerating,xiao2014proximal,bubeck2015convex} as the optimizer $\mathcal{O}$, then one can achieve an error of at most $\alpha$ using $O\left((n+\beta/\lambda)\log(1/\alpha)\right)$ calls to the gradients of $\ell(\cdot,\cdot)$, for any $\alpha\in(0,1]$. The following theorem immediately gives this. Plugging in the value of $\alpha$ from Theorem \ref{thm:excessPop}, noticing from Theorem \ref{thm:priv_Aobj} that $\beta/\lambda\leq \epsilon n$, and considering $\epsilon, M$ and $L$ to be constants, we get the oracle complexity of Algorithm $\mathcal{A}_{\sf ObjP-App}$ to be $O(n\log n)$. \begin{thm}[Convergence of SVRG \cite{johnson2013accelerating,xiao2014proximal,bubeck2015convex}]\label{thm:SVRG} Let $f_1,\ldots,f_n$ be $\beta$-smooth, $\lambda$-strongly convex functions over $\mathcal{W}$, $\mathcal{F}(\mathbf{w})=\frac{1}{n}\sum\limits_{i=1}^n f_i(\mathbf{w})$, and $\mathbf{w}^* \triangleq \arg\min_{\mathbf{w}\in \mathcal{W}} \mathcal{F}(\mathbf{w})$. Let $\mathbf{y}^{(1)}\in\mathcal{W}$ be an arbitrary initial point. For $t\in\{1,2,\ldots\}$, let $\mathbf{w}^{(t)}_1=\mathbf{y}^{(t)}$. For $s\in[k]$, let $$\mathbf{w}^{(t)}_{s+1}=\mathsf{Proj}_\mathcal{W}\left[\mathbf{w}^{(t)}_{s}-\frac{1}{10\beta}\left(\nabla f_{i^{(t)}_s}\left(\mathbf{w}^{(t)}_{s}\right)-\nabla f_{i^{(t)}_s}\left(\mathbf{y}^{(t)}\right)+\nabla \mathcal{F}\left(\mathbf{y}^{(t)}\right)\right)\right],$$ where $i^{(t)}_s$ is drawn uniformly at random from $[n]$, and $\mathbf{y}^{(t+1)}=\frac{1}{k}\sum\limits_{s=1}^k \mathbf{w}^{(t)}_s$. Then, for $k=20\beta/\lambda$ it holds that: $$\mathbb{E}\left[\mathcal{F}\left(\mathbf{y}^{(t+1)}\right)\right]-\mathcal{F}\left(\mathbf{w}^*\right)\leq 0.9^t\left(\mathcal{F}\left(\mathbf{y}^{(1)}\right)-\mathcal{F}\left(\mathbf{w}^*\right)\right).$$ \end{thm}
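A compact NumPy rendering of this update may be helpful (an illustration we add; the helper names are ours, and the averaging of inner iterates is simplified by an off-by-one relative to the indexing in the theorem):

\begin{verbatim}
import numpy as np

def svrg(grad_i, n, w0, beta, lam, project, epochs,
         rng=np.random.default_rng(0)):
    """Projected SVRG: grad_i(i, w) returns grad f_i(w); step size 1/(10*beta)
    and epoch length k = 20*beta/lam follow the theorem statement."""
    k = int(np.ceil(20 * beta / lam))
    y = w0
    for _ in range(epochs):
        full = np.mean([grad_i(i, y) for i in range(n)], axis=0)  # grad F(y)
        w, acc = y, np.zeros_like(y)
        for _ in range(k):
            i = rng.integers(n)
            w = project(w - (grad_i(i, w) - grad_i(i, y) + full) / (10 * beta))
            acc += w / k
        y = acc                  # y^{(t+1)}: average of the inner iterates
    return y
\end{verbatim}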
\section{Preliminaries}\label{sec:prelim} \paragraph{Notation:} We use $\mathcal{W}\subset \mathbb{R}^d$ to denote the parameter space, which is assumed to be a convex, compact set. We denote by $M = \max\limits_{\mathbf{w}\in\mathcal{W}}\norm{\mathbf{w}}$ the $L_2$ radius of $\mathcal{W}$. We use $\mathcal{Z}$ to denote an arbitrary data domain and $\mathcal{D}$ to denote an arbitrary distribution over $\mathcal{Z}$. We let $\ell:\mathbb{R}^d \times \mathcal{Z}\rightarrow \mathbb{R}$ be a loss function that takes a parameter vector $\mathbf{w}\in\mathcal{W}$ and a data point $z\in\mathcal{Z}$ as inputs and outputs a real value. The \emph{empirical loss} of $\mathbf{w}\in\mathcal{W}$ w.r.t. loss $\ell$ and dataset $S=(z_1, \ldots, z_n)$ is defined as $\widehat{\mathcal{L}}(\mathbf{w};~ S)\triangleq \frac{1}{n}\sum_{i=1}^n \ell(\mathbf{w},z_i).$ The \emph{excess empirical loss} of $\mathbf{w}$ is defined as $\widehat{\mathcal{L}}(\mathbf{w};~ S)-\min\limits_{\widetilde{\mathbf{w}}\in\mathcal{W}}\widehat{\mathcal{L}}\left(\widetilde{\mathbf{w}}; ~S\right).$ The \emph{population loss} of $\mathbf{w}\in\mathcal{W}$ with respect to a loss $\ell$ and a distribution $\mathcal{D}$ over $\mathcal{Z}$ is defined as $\mathcal{L}(\mathbf{w}; \mathcal{D})\triangleq \ex{z\sim \mathcal{D}}{\ell(\mathbf{w},z)}.$ The \emph{excess population loss} of $\mathbf{w}$ is defined as $\mathcal{L}(\mathbf{w};~\mathcal{D})-\min\limits_{\widetilde{\mathbf{w}}\in\mathcal{W}}\mathcal{L}(\widetilde{\mathbf{w}};~\mathcal{D}).$ \begin{defn}[Uniform stability]\label{def:unif-stable} Let $\alpha>0$. A (randomized) algorithm $\mathcal{A}: \mathcal{Z}^n\rightarrow \mathcal{W}$ is $\alpha$-uniformly stable (w.r.t. loss $\ell:\mathcal{W}\times\mathcal{Z}\rightarrow \mathbb{R}$) if for any pair $S, ~S' \in\mathcal{Z}^n$ differing in at most one data point, we have $$\sup\limits_{z\in\mathcal{Z}}\,\ex{\mathcal{A}}{\ell\left(\mathcal{A}(S), z\right)-\ell\left(\mathcal{A}(S'), z\right)}\leq \alpha,$$ where the expectation is taken only over the internal randomness of $\mathcal{A}$. \end{defn} We will use the following simple generalization property of stability, which upper bounds the expected population loss. Our bounds on excess population loss can also be shown to hold (up to log factors) with high probability using the results from \cite{feldman2019high}. \begin{lem}[\cite{bousquet2002stability}]\label{lem:gen_err_stability} Let $\mathcal{A}:\mathcal{Z}^n\rightarrow\mathcal{W}$ be an $\alpha$-uniformly stable algorithm w.r.t. loss $\ell:\mathcal{W}\times\mathcal{Z}\rightarrow\mathbb{R}$. Let $\mathcal{D}$ be any distribution over $\mathcal{Z}$, and let $S\sim\mathcal{D}^n$. Then, \begin{align*} \ex{S\sim\mathcal{D}^n, \mathcal{A}}{\mathcal{L}\left(\mathcal{A}(S);~\mathcal{D}\right)-\widehat{\mathcal{L}}\left(\mathcal{A}(S); ~ S\right)}&\leq \alpha. \end{align*} \end{lem} \begin{defn}[Smooth function]\label{defn:smooth} Let $\beta>0$. A differentiable function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ is $\beta$-smooth over $\mathcal{W} \subseteq \mathbb{R}^d$ if for every $\mathbf{w}, \mathbf{v} \in\mathcal{W},$ we have $$f(\mathbf{v})\leq f(\mathbf{w})+\langle \nabla f(\mathbf{w}), \mathbf{v}-\mathbf{w}\rangle + \frac{\beta}{2} \,\norm{\mathbf{w}-\mathbf{v}}^2.$$ \end{defn} In the sequel, whenever we attribute a property (e.g., convexity, Lipschitz property, smoothness, etc.) to a loss function $\ell$, we mean that for every data point $z\in\mathcal{Z},$ the loss $\ell(\cdot, z)$ possesses that property over $\mathcal{W}$.
\paragraph{Stochastic Convex Optimization (SCO):} Let $\mathcal{D}$ be an arbitrary (unknown) distribution over $\mathcal{Z}$, and $S=(z_1, \ldots, z_n)$ be a sequence of i.i.d.~samples from $\mathcal{D}$. Let $\ell:\mathcal{W}\times\mathcal{Z}\rightarrow \mathbb{R}$ be a convex loss function. A (possibly randomized) algorithm $\mathcal{A}$ for SCO uses the sample $S$ to generate an (approximate) minimizer $\widehat{\mathbf{w}}_S$ for $\mathcal{L}(\cdot; ~\mathcal{D})$. We measure the accuracy of $\mathcal{A}$ by the \emph{expected} excess population loss of its output parameter $\widehat{\mathbf{w}}_S$, defined as: $$\Delta\mathcal{L}\left(\mathcal{A};~\mathcal{D}\right)\triangleq \ex{}{\mathcal{L}(\widehat{\mathbf{w}}_S;~\mathcal{D})-\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}(\mathbf{w}; ~\mathcal{D})},$$ where the expectation is taken over the choice of $S\sim\mathcal{D}^n$, and any internal randomness in $\mathcal{A}$. \paragraph{Differential privacy \cite{DMNS06, DKMMN06}:} A randomized algorithm $\mathcal{A}$ is $(\epsilon,\delta)$-differentially private if, for any pair of datasets $S$ and $S'$ differing in exactly one data point, and for all events $\mathcal{O}$ in the output range of $\mathcal{A}$, we have $$\pr{}{\mathcal{A}(S)\in \mathcal{O}} \leq e^{\epsilon} \cdot \pr{}{\mathcal{A}(S')\in \mathcal{O}} +\delta ,$$ where the probability is taken over the random coins of $\mathcal{A}$. For meaningful privacy guarantees, the typical settings of the privacy parameters are $\epsilon<1$ and $\delta \ll 1/n$. \paragraph{Differentially Private Stochastic Convex Optimization (DP-SCO):} An $(\epsilon, \delta)$-DP-SCO algorithm is an SCO algorithm that satisfies $(\epsilon, \delta)$-differential privacy. \section{Private SCO via Mini-batch Noisy SGD}\label{sec:smooth} In this section, we consider the setting where the loss $\ell$ is convex, Lipschitz, and smooth. We give a technique that is based on a mini-batch variant of the Noisy Stochastic Gradient Descent (NSGD) algorithm \cite{bassily2014differentially, abadi2016deep}, described in Algorithm~\ref{Alg:NSGD-smooth}. \begin{algorithm} \caption{$\mathcal{A}_{\sf NSGD}$: Mini-batch noisy SGD for convex, smooth losses} \begin{algorithmic}[1] \REQUIRE Private dataset: $S=(z_1, \ldots, z_n)\in \mathcal{Z}^n$, $L$-Lipschitz, $\beta$-smooth, convex loss function $\ell$, convex set $\mathcal{W}\subseteq \mathbb{R}^d$, step size $\eta$, mini-batch size $m$, ~\# iterations $T$, privacy parameters $\epsilon \leq 1,\, \delta \leq 1/n^2$.
\STATE Set noise variance $\sigma^2 := \frac{8\,T\,L^2\,\log(1/\delta)}{n^2\,\epsilon^2}.$ \STATE Set batch size $m:=\max\left(n\,\sqrt{\frac{\epsilon}{4\,T}},~ 1\right).$\label{step:batch-size} \STATE Choose arbitrary initial point $\mathbf{w}_0 \in \mathcal{W}.$ \FOR{$t=0$ to $T-1$\,} \STATE Sample a batch $B_t=\{z_{i_{(t, 1)}}, \ldots, z_{i_{(t, m)}}\}\leftarrow S$ uniformly with replacement.\label{step:sampling} \STATE $\mathbf{w}_{t+1} := \mathsf{Proj}_{\mathcal{W}}\left(\mathbf{w}_{t}-\eta\cdot\left(\frac{1}{m}\sum_{j=1}^m\nabla\ell(\mathbf{w}_{t}, z_{i_{(t,j)}})+\mathbf{G}_t\right)\right),$ where $\mathsf{Proj}_{\mathcal{W}}$ denotes the Euclidean projection onto $\mathcal{W}$, and $\mathbf{G}_t \sim \mathcal{N}\left(\mathbf{0}, \sigma^2 \mathbb{I}_d\right)$ is drawn independently in each iteration.\label{step:grad-step} \ENDFOR \RETURN $\overline{\mathbf{w}}_T=\frac{1}{T}\sum_{t=1}^{T}\mathbf{w}_t$ \end{algorithmic} \label{Alg:NSGD-smooth} \end{algorithm} \begin{thm}[Privacy guarantee of $\mathcal{A}_{\sf NSGD}$] Algorithm~\ref{Alg:NSGD-smooth} is $(\epsilon, \delta)$-differentially private. \end{thm} \begin{proof} The proof follows from \cite[Theorem~1]{abadi2016deep}, which gives a tight privacy analysis for mini-batch NSGD via the Moments Accountant technique and privacy amplification via sampling. We note that the setting of the mini-batch size in Step~\ref{step:batch-size} of Algorithm~\ref{Alg:NSGD-smooth} satisfies the condition in \cite[Theorem~1]{abadi2016deep} (we obtain here explicit values for the universal constants in the aforementioned theorem). We also note that the setting of the Gaussian noise in \cite{abadi2016deep} is not normalized by the mini-batch size, and hence the noise variance reported in \cite[Theorem~1]{abadi2016deep} is larger than our setting of $\sigma^2$ by a factor of $m^2$. \end{proof} The population loss attained by $\mathcal{A}_{\sf NSGD}$ is given by the next theorem. \begin{thm}[Excess population loss of $\mathcal{A}_{\sf NSGD}$]\label{thm:pop_risk_Ansgd} Let $\mathcal{D}$ be any distribution over $\mathcal{Z},$ and let $S\sim\mathcal{D}^n$. Suppose $\beta \leq \frac{L}{M}\cdot \min\left(\sqrt{\frac{n}{2}}, \frac{\epsilon\,n}{2\sqrt{2 d\log(1/\delta)}}\right)$. Let $T =\min\left(\frac{n}{8},~ \frac{\epsilon^2\,n^2}{32\,d\,\log(1/\delta)}\right)$ and $\eta= \frac{M}{L\,\sqrt{T}}$. Then, \begin{align*} \Delta\mathcal{L}\left(\mathcal{A}_{\sf NSGD};~\mathcal{D}\right)&\leq 10\,ML\cdot\max\left(\frac{\sqrt{d\,\log(1/\delta)}}{\epsilon\, n},~\frac{1}{\sqrt{n}}\right) \end{align*} \end{thm} Before proving the above theorem, we first state and prove the following useful lemmas. \begin{lem}\label{lem:emp-risk} Let $S\in \mathcal{Z}^n$. Suppose the parameter set $\mathcal{W}$ is convex and $M$-bounded. For any $\eta > 0,$ the excess empirical loss of $\mathcal{A}_{\sf NSGD}$ satisfies \begin{align*} \ex{}{\widehat{\mathcal{L}}(\overline{\mathbf{w}}_T; S)}-\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w}; S)&\leq \frac{M^2}{2\,\eta\, T} + \frac{\eta\,L^2}{2} \left(16\,\frac{T\,d\,\log(1/\delta)}{n^2\,\epsilon^2}+1\right), \end{align*} where the expectation is taken with respect to the sampled mini-batches (step~\ref{step:sampling}) and the independent Gaussian noise vectors $\mathbf{G}_0, \ldots, \mathbf{G}_{T-1}$.
\end{lem} \begin{proof} The proof follows from the classical analysis of the stochastic oracle model (see, e.g., \cite{shalev2014understanding}). In particular, we can show that $$\ex{}{\widehat{\mathcal{L}}(\overline{\mathbf{w}}_T; S)}-\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w}; S)\leq \frac{M^2}{2\,\eta\, T} + \frac{\eta\,L^2}{2} +\eta\,\sigma^2\, d,$$ where the last term captures the additional empirical error due to privacy. The statement now follows from the setting of $\sigma^2$ in Algorithm~\ref{Alg:NSGD-smooth}. \end{proof} The following lemma is a simple extension of the results on uniform stability of GD methods that appeared in \cite{hardt2015train} and \cite[Lemma~4.3]{feldman2019high} to the case of \emph{mini-batch noisy} SGD. For completeness, we provide a proof in Appendix~\ref{app:A}. \begin{lem}\label{lem:stability} In $\mathcal{A}_{\sf NSGD}$, suppose $\eta \leq \frac{2}{\beta},$ where $\beta$ is the smoothness parameter of $\ell$. Then, $\mathcal{A}_{\sf NSGD}$ is $\alpha$-uniformly stable with $\alpha= L^2\frac{T\,\eta}{n}$. \end{lem} \subsubsection*{Proof of Theorem~\ref{thm:pop_risk_Ansgd}} By Lemma~\ref{lem:gen_err_stability}, $\alpha$-uniform stability implies that the expected population loss is upper bounded by $\alpha$ plus the expected empirical loss. Hence, by combining Lemma~\ref{lem:emp-risk} with Lemma~\ref{lem:stability}, we have \begin{align} \ex{S\sim\mathcal{D}^n,~\mathcal{A}_{\sf NSGD}}{\mathcal{L}(\overline{\mathbf{w}}_T;~\mathcal{D})}-\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}(\mathbf{w}; ~\mathcal{D})&\leq \ex{S\sim\mathcal{D}^n,~\mathcal{A}_{\sf NSGD}}{\widehat{\mathcal{L}}(\overline{\mathbf{w}}_T; S)}-\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}(\mathbf{w}; ~\mathcal{D})+L^2\,\frac{\eta\,T}{n}\nonumber\\ &\leq \ex{S\sim\mathcal{D}^n,~\mathcal{A}_{\sf NSGD}}{\widehat{\mathcal{L}}(\overline{\mathbf{w}}_T; S)-\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w}; S)}+L^2\,\frac{\eta\,T}{n}\label{ineq:emp-less-pop}\\ &\leq \frac{M^2}{2\,\eta\, T} + \frac{\eta\,L^2}{2} \left(16\,\frac{T\,d\,\log(1/\delta)}{n^2\,\epsilon^2}+1\right)+ L^2\,\frac{\eta\,T}{n},\nonumber \end{align} where (\ref{ineq:emp-less-pop}) follows from the fact that $\ex{S\sim\mathcal{D}^n}{\min\limits_{\mathbf{w}\in\mathcal{W}}\widehat{\mathcal{L}}(\mathbf{w}; S)}\leq \min\limits_{\mathbf{w}\in\mathcal{W}}\ex{S\sim\mathcal{D}^n}{\widehat{\mathcal{L}}(\mathbf{w}; S)}=\min\limits_{\mathbf{w}\in\mathcal{W}}\mathcal{L}(\mathbf{w}; ~\mathcal{D})$. Optimizing the above bound in $\eta$ and $T$ yields the values in the theorem statement for these parameters, as well as the stated bound on the excess population loss.
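To complement the pseudocode, here is a self-contained NumPy sketch of $\mathcal{A}_{\sf NSGD}$ with the parameter settings of Theorem~\ref{thm:pop_risk_Ansgd} (added for illustration; the $M$-ball stands in for a generic convex $\mathcal{W}$, and the sketch assumes data points and parameters share the dimension $d$):

\begin{verbatim}
import numpy as np

def noisy_sgd(S, grad_loss, L, M, eps, delta, rng=np.random.default_rng()):
    """Mini-batch noisy projected SGD; grad_loss(w, z) returns grad_w l(w, z).
    T, eta, m, sigma follow the settings in the theorem and the algorithm."""
    n, d = S.shape
    T = max(1, int(min(n / 8, eps**2 * n**2 / (32 * d * np.log(1 / delta)))))
    eta = M / (L * np.sqrt(T))
    m = max(int(n * np.sqrt(eps / (4 * T))), 1)
    sigma = L * np.sqrt(8 * T * np.log(1 / delta)) / (n * eps)
    w, avg = np.zeros(d), np.zeros(d)
    for _ in range(T):
        batch = S[rng.integers(0, n, size=m)]       # sample with replacement
        g = np.mean([grad_loss(w, z) for z in batch], axis=0)
        w = w - eta * (g + rng.normal(scale=sigma, size=d))
        nrm = np.linalg.norm(w)
        if nrm > M:                                 # projection onto W
            w = w * (M / nrm)
        avg += w / T
    return avg
\end{verbatim}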
\section{Introduction} The World Wide Web has provided a means for people to share their opinions online through various text-based channels, such as online reviews and social media. Being able to use these opinions to accurately assess what people think of a certain product, person, or place is highly valuable in many industries. Restaurants can adjust their menus based on online food reviews, companies can improve their products based on what consumers exactly want, and the effects of political campaigns can be evaluated by analysing social media posts. As such, with the rise of the Internet, the task of sentiment analysis has become more and more significant \cite{pang2008opinion}. Sentiment analysis is the task of extracting and analysing people's sentiments towards certain entities from text documents \cite{liu2012sentiment}. Sentiment analysis is sometimes also referred to as opinion mining in the literature. Yet, a distinction must be made between ``sentiments" and ``opinions". Namely, opinions indicate a person's views on a specific matter, while sentiment indicates a person's feelings towards something. Yet, the two concepts are highly related and opinion words can typically be used to extract sentiments \cite{hu2004mining, 10.5555/1597148.1597269}. In sentiment analysis, the goal is to assign sentiment polarities based on a body of text. Although the words ``polarity" and ``sentiment" are often used interchangeably, a distinction can be made again. Namely, a sentiment indicates a feeling, while the polarity expresses the orientation (e.g., \textit{``positive"}, \textit{``neutral"}, or \textit{``negative"}). The granularity of a sentiment analysis task can be described using three separate characteristics: the sentiment type, the level of the task, and the target. First, the type of sentiment output must be defined. Namely, the task can entail a simple binary classification (\textit{``positive"} or \textit{``negative"} labels), but there could also be additional labels (\textit{``neutral"}), or even a sentiment intensity or score that must be predicted. Next, the level of the task determines at which level of the text sentiment is extracted. For example, in document-level sentiment analysis, the sentiment is analysed for the entire document. For sentence-level sentiment analysis, the sentiment is analysed for each sentence within the document. There are many other levels, such as words, paragraphs, groups of sentences, or chunks of text. Lastly, the target of the task determines the focus of the sentiment. First, there may be no sentiment target, which means that the task entails assigning a sentiment score or label to the text itself. On the other hand, one may want to know the sentiment regarding specific topics, entities, or aspects within the text. To illustrate, suppose we analyse a product review. If we do not define a target, we simply consider the sentiment of the entire text. Yet, knowing exactly which aspects of the product the consumer is satisfied or dissatisfied with can be highly useful, as it provides detailed information necessary for many applications \cite{laskari2016aspect}. The task of identifying aspects and analysing their sentiments in texts is known as aspect-based sentiment analysis (ABSA). ABSA is a relatively new field of research that has gained significant popularity due to the many useful applications \cite{6468032}. 
However, ABSA, and sentiment analysis in general, are difficult tasks due to the complex and highly varied structures of sentences and writing styles \cite{schouten2015survey}. In \cite{nazir2020issues}, a comprehensive overview is provided of the issues and challenges present in ABSA. Like general sentiment analysis tasks, ABSA can be performed at multiple levels of which the document level and sentence level are typically the most popular \cite{pontiki2016semeval}. Document-level ABSA methods are concerned with finding general aspects linked to a certain entity in the document and assigning sentiments to them. Sentence-level ABSA approaches, on the other hand, attempt to identify all aspects for each sentence individually, determine the sentiment associated with these aspects, and possibly aggregate sentiment at the review level \cite{pontiki2016semeval}. As such, document-level ABSA considers the general concepts that summarize the sentiment in the text, while sentence-level ABSA considers each mention of an aspect individually. The task of ABSA can be further split up into three tasks: aspect detection/extraction, sentiment classification, and sentiment aggregation \cite{schouten2015survey}. Aspect extraction involves identifying the aspects present in a text, the classification step consists of assigning a sentiment label or score to the extracted aspects, and the aggregation step amounts to summarizing the aspect sentiment classifications. In this survey, we focus on the sentiment classification step, which is generally referred to as aspect-based sentiment classification (ABSC). Various surveys on ABSA have already been published \cite{schouten2015survey, pavlopoulos2014aspect}. However, it seems that a survey focused specifically on ABSC would be more effective at providing an in-depth discussion and evaluation of ABSC models. The only survey dedicated solely to ABSC is \cite{zhou2019deep}, which provides an overview of deep learning techniques. While deep learning models are currently state-of-the-art for ABSC, we would argue that a broader scope would allow for a more effective evaluation of the current and future state of research in ABSC. For example, a significant part of ABSC is the research on approaches that incorporate knowledge bases into classification models. Furthermore, various important deep learning models, such as transformer-based models \cite{vaswani2017attention}, are missing from previous surveys. For these reasons, this survey presents a comprehensive overview of the state-of-the-art ABSC models. For this purpose, we propose a novel taxonomy of ABSC models, which is currently missing from the literature. The taxonomy categorizes ABSC models into three major categories: knowledge-based methods, machine learning models, and hybrid models. Based on the structure of this taxonomy, we discuss and compare the architectures of the various ABSC models using both technical and intuitive explanations. Furthermore, we also provide summarizing overviews of the performances of various ABSC models on a larger scale than previous surveys. Lastly, we identify trends in the research on ABSC and use these findings to discuss ways in which the field of ABSC can be advanced in the future. The sections of this survey are based on the main steps taken when designing ABSC methods. In Section \ref{sec:Inputs}, we discuss the different ways of representing the inputs for ABSC. In Section \ref{sec:Evaluation}, techniques for evaluating the model performance are reviewed.
In Section \ref{sec:Methods}, various ABSC models are presented according to a proposed taxonomy. We discuss how these models use the preprocessed inputs to produce the desired classification output, and compare the performances of various models found in the literature. In Section \ref{sec:Related}, additional topics related to ABSC are discussed. In Section \ref{sec:Conclusion}, we give our conclusions and discuss directions for future research. \section{Input Representation}\label{sec:Inputs} In this section, we explain in detail the input representations necessary for ABSC. We start by introducing some definitions in accordance with Schouten and Frasincar \cite{schouten2015survey}. Given a corpus $C$ containing the records $R_1, \dots, R_{n_R}$, ABSA can be formally defined as finding all quadruples $(y, a, h, t)$ \cite{liu2012sentiment} for each record $R_j$, where $y$ is the sentiment, $a$ represents the target aspect for the sentiment, $h$ is the holder, or the individual expressing the sentiment, and $t$ represents the time that the sentiment was expressed \cite{schouten2015survey}. A record is defined as an individual piece of text in the corpus, which can be a single phrase, a sentence, or a large body of text, like a document. Furthermore, in general, most methods are concerned with finding $(y, a)$, the aspects and the corresponding sentiments. Since this survey is only focused on the classification step of ABSA, we assume the target aspects $a$ are already identified in the text. As such, ABSC models are focused only on finding the sentiments $y$ corresponding to the given aspects $a$. For example, consider the following record that takes the form of a restaurant review: \textit{``The atmosphere was fantastic, but the food was bland"}. This sentence contains two aspects \textit{``atmosphere"} and \textit{``food"}, which we assume have been identified in a prior aspect extraction phase. The eventual goal is to use an ABSC model to determine a sentiment classification for each of these aspects. The correct sentiments, in this case, would be \textit{``positive"} and \textit{``negative"}, respectively. However, before one can attempt this task, input representations need to be constructed. Since a piece of text can generally not directly be used as input for a classification model, the aspect and corresponding context must be represented through numeric features. Note that, in aspect-based, or feature-based, sentiment analysis, the term `features' is sometimes used to describe the aspects of, for example, a product. However, `features' in this context refers not to the aspects themselves, but to the features of the data. Before an ABSC model can be implemented, a preprocessing phase is necessary to construct a numeric representation \cite{haddi2013preprocess, goldberg2017neural}. The features that are used as inputs are a crucial part of the classification process since they determine what information the ABSC model has access to. It is important to understand how to represent a piece of a text in such a way that the classification model has access to as much information as possible without including redundant or irrelevant features such that it can perform optimally. Input representations for ABSC generally consist of three characteristics: \textbf{context}, \textbf{dimensionality}, and \textbf{feature types}. We discuss these in Subsection \ref{sec:InputContext}, Subsection \ref{sec:InputDimensionality}, and Subsection \ref{sec:InputTypes}, respectively. 
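To make this output format concrete, the restaurant review above could be annotated as in the following sketch (a toy Python illustration; the field names are our own and do not correspond to a standard format):

```python
# Toy annotation of the example review as (sentiment, aspect) pairs;
# the holder h and time t are omitted, as most methods focus on (y, a).
record = "The atmosphere was fantastic, but the food was bland"
annotations = [
    {"aspect": "atmosphere", "sentiment": "positive"},
    {"aspect": "food", "sentiment": "negative"},
]
```

An ABSC model is then expected to fill in the \textit{sentiment} field for each given aspect.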
\subsection{Context}\label{sec:InputContext} Given a record $R_j$, we define the \textbf{context} as the subset of words that is considered as the input. If the text only contains a single aspect, then one can choose to simply consider all words. However, if the record contains multiple aspects, a representation needs to be developed for each aspect individually by, for example, taking a subset of the words for each aspect. The methods of representing the input for each aspect depend on whether a target phrase is present in the text. As such, we make a distinction in the input representation techniques for \textbf{explicit} and \textbf{implicit} aspects. Explicit aspects are generally the most common type of aspects. An example of such an aspect can be seen in the review sentence \textit{``The price of this phone is very high."}, where the aspect is explicitly stated to be \textit{``price"}. The simplest method of determining the context for such an explicit aspect is by employing a window around the target phrase and developing an input representation based only on the words in the window. For example, Guha et al. \cite{guha2015siel} only consider the aspect itself, the three words to the left of the aspect, and the three words to the right of the aspect. However, such methods based on the physical proximity may not be optimal since the words expressing the sentiment may be far removed from the aspect. As such, more robust methods do not rely on the physical distance between words. For example, one can use the grammatical dependencies to determine which words in the record are related to the aspect and should therefore be considered in the context \cite{thet2010aspect}. Another technique for determining the context is the use of text kernels to express the distances based on the relations between words. Nguyen and Shirai \cite{nguyen2015treekernel} implement tree kernels for relation extraction to determine which words they consider in the analysis. An example of a sentence with an implicit aspect is \textit{``This phone is really expensive."}, where the aspect is again \textit{``price"}, but the aspect is not directly mentioned in the text. Implicit aspects must be treated differently since we do not have a target phrase to center a window around or to determine distances to words with. If we assume the record contains only one aspect, such as in the given example sentence, then this is not a major problem as one can simply consider the whole sentence for the input representation. However, as Dosoula et al. \cite{dosoula2016sentiment} argue, reviews may often contain multiple implicit aspects, even within the same sentence. For instance, the previously given example can be extended with another implicit aspect: \textit{``This phone is really expensive, but also very fast"}. Dosoula et al. \cite{dosoula2016sentiment} present different methods based on aspect proxy words to determine the context for each aspect in a sentence. \subsection{Dimensionality}\label{sec:InputDimensionality} The \textbf{dimensionality} of the input representation is directly determined by the type of model used in the subsequent sentiment classification step. Some models, such as support vector machines and decision trees, work only with a single vector representation, whereas other models, such as recurrent neural networks, can process a set of vectors or a data matrix. Suppose record $R_j$ contains a single aspect and we wish to represent the entire record using numeric features. 
Then, depending on the ABSC models used, record $R_j$ can be represented using a single vector $\bm x_j \in \mathbb{R}^{d_x}$, or a matrix $\bm X_j \in \mathbb{R}^{d_x \times n_j}$, where $d_x$ represents the number of features used for the representation, and $n_j$ represents the number of words in record $R_j$. A single vector representation of the record can be interpreted as a record embedding where each element of the vector indicates the presence of a feature, for example, whether certain sentiment-indicating words or phrases are present. On the other hand, a data matrix can provide more detailed information since it does not have to summarize all information into a single vector. Each column of the matrix is an embedding containing features representing a part of the record, such as a sentence, word, or character. The dimensionality of the inputs generally influences the types of features used, as some feature types work better with a vector representation, whereas others can be used more effectively with a matrix representation. The next subsection discusses this in further detail. \subsection{Feature Types}\label{sec:InputTypes} When representing text as a single vector, classical text classification \textbf{feature types} can be used. Arguably one of the simplest and most popular ways of representing a sentence or piece of text is via a bag-of-words (BoW) representation. A BoW vector representation is a feature vector where each element represents a word. It is called a ``bag" because we simply congregate the words together and disregard the structure of the text. Again, suppose we wish to represent record $R_j$ using a single vector $\bm x_j$. We start by building a vocabulary that contains every unique word used in any of the records in our corpus $C$. For a BoW representation, each element $x_{i,j}$ in vector $\bm x_j$ represents a unique word from the vocabulary. The simplest BoW vector is a binary vector where each element $x_{i,j}$ in vector $\bm x_j$ is set equal to 1 if the corresponding word occurs in $R_j$, and 0 otherwise. A more popular version is to also consider the frequency of the words by setting each element $x_{i,j}$ equal to the number of times the corresponding word occurs in $R_j$ \cite{salton1968tf}. However, some words are simply used more often than others, which can cause a bias towards certain words in the word counts. Thus, instead of using the direct word counts, one can also implement the term frequency-inverse document frequency (TF-IDF) measure \cite{jones1972statistical}. This measure is essentially the word frequency in a record scaled by how often the word appears in all records. For word $w_i$ in the vocabulary, the TF-IDF can be calculated using Equation (\ref{eq:TF-IDF_BoW}): \begin{equation}\label{eq:TF-IDF_BoW} x_{i,j} = \frac{n_{i,j}}{n_j} \times \log\frac{n_R}{|\{R \in C: w_i \in R \}|}, \end{equation} where $n_{i,j}$ denotes the number of times word $w_i$ occurs in record $R_j$, $n_j$ represents the number of words in record $R_j$, $n_R$ denotes the total number of records, and $C$ indicates the set of records. As previously mentioned, for ABSC, one should preferably not use a singular text representation when multiple aspects are present in the record. As such, one can simply create a BoW representation based only on the words considered to be in the context.
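As an illustration, the sketch below computes the TF-IDF vector of Equation (\ref{eq:TF-IDF_BoW}) for a toy corpus (a minimal implementation of our own, shown only to make the notation concrete):

```python
import math
from collections import Counter

# toy corpus of tokenized records (hypothetical example data)
corpus = [
    "the atmosphere was fantastic".split(),
    "the food was bland".split(),
    "the food was fantastic".split(),
]

vocab = sorted({w for record in corpus for w in record})
n_R = len(corpus)
# number of records containing each word: the denominator of the IDF part
df = {w: sum(w in record for record in corpus) for w in vocab}

def tf_idf_vector(record):
    counts = Counter(record)  # n_{i,j} for every word w_i in this record
    n_j = len(record)
    return [counts[w] / n_j * math.log(n_R / df[w]) for w in vocab]

print(vocab)
print([round(x, 3) for x in tf_idf_vector(corpus[1])])
```

Note that words occurring in every record (here, \textit{``the"} and \textit{``was"}) receive a weight of zero, which is exactly the down-weighting of uninformative words that motivates TF-IDF.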
\begin{table} \caption{Overview of various types of features.} \label{Table:FeatureTypes} \begin{center} \resizebox{230pt}{!}{% \begin{tabular}{p{100pt}|p{150pt}} \textbf{Feature Types} & \textbf{Feature Examples} \\ \hline Word Features & Bag-of-words \cite{scott1998text}, word embeddings \cite{mikolov2013efficient}, $n$-grams \cite{pak2010twitter}\\ \hline Syntactic Features & Part-of-speech tagging \cite{ratnaparkhi1996maximum, dragoni2018combining} and dependency parsing \cite{dragoni2018combining, kubler2009dependency, mukherjee2012feature, schouten2015benefit} \\ \hline Proximity-Based Features & Proximity relative to sentiment words \cite{mullen2004sentiment, hasan2011proximity} or target words \cite{jebbara2016aspect} \\ \hline Semantic Features & Sentiment scores of context words \cite{hatzivassiloglou1997predicting, turney2002SO}\\ \hline Morphological Features & Lexemes and lemmas \cite{alsmadi2019enhancing, schouten2015benefit, abdul2014samar} \\ \end{tabular} } \end{center} \end{table} BoW features work well for ABSC but may come with certain problems that make the classification process difficult. Firstly, a large corpus can contain a substantial number of different words, meaning that the vocabulary and corresponding feature vectors will be very large if every single word were represented by an element. Thus, methods have been developed to select the most important words that can be used for features. For example, based on a list or dictionary, one can filter out the words that generally hold no meaningful or semantic value, such as stop words. Even though such techniques can significantly reduce the number of features, the vectors generally remain large. Because of this, an important characteristic of ABSC models is that they should typically be able to handle high-dimensional vectors well, which is discussed in more detail in Section \ref{sec:Methods}. Secondly, as previously mentioned, the structure of the words is completely disregarded, making it difficult to capture relations between the aspect and its context. One solution is to include $n$-grams, but this further exacerbates the previously mentioned high dimensionality problem. Another solution is to include features that define the relations between the words. Additional feature types are therefore often used as well and can take a variety of forms to incorporate different types of information. For example, Al-Smadi et al. \cite{alsmadi2019enhancing} enhance a TF-IDF BoW representation with additional morphological, syntactic, and semantic features. Similarly, Mullen and Collier \cite{mullen2004sentiment} implement semantic, syntactic, and proximity-based features. Table \ref{Table:FeatureTypes} provides an overview of a variety of feature types with the corresponding examples. Suppose we now wish to represent the record $R_j$ using a matrix $\bm X_j$. In this case, the idea is that each word in the text is represented by a vector, which is stored as a column in $\bm X_j$. A standard choice, in this case, is the use of word embeddings. For an overview of the many types of word embeddings, we refer to \cite{camacho2018word}. Some examples are: \textit{GloVe} \cite{pennington2014glove}, \textit{fastText} \cite{grave2018learning}, and \textit{Word2Vec} \cite{mikolov2017advances}. Each of these embedding types attempts to represent a word's meaning using a vector of limited size. While pre-trained word embeddings are often used, it is also possible to train word embeddings along with the ABSC model.
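To illustrate the matrix representation, the sketch below stacks word embeddings into the columns of $\bm X_j$; the embedding values here are toy numbers of our own, whereas in practice the vectors would be loaded from, e.g., pre-trained \textit{GloVe} files:

```python
import numpy as np

# toy 2-dimensional embeddings; real ones are loaded from pre-trained files
emb = {"the": [0.1, 0.3], "food": [0.8, -0.2], "was": [0.0, 0.1], "bland": [-0.7, -0.5]}
d_x = 2

def embed_record(tokens):
    # each column of X is the embedding of one word; unknown words map to zeros
    return np.stack([np.asarray(emb.get(t, [0.0] * d_x)) for t in tokens], axis=1)

X = embed_record("the food was bland".split())
print(X.shape)  # (2, 4): d_x features by n_j words
```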
Additionally, the previously mentioned embedding models are all non-contextual, which means that an embedding is produced for each word independently from the other words. Yet, it is known that a word can have multiple meanings depending on the context that it is used in. Therefore, another solution is to use contextual word embeddings, such as embeddings based on \textit{ELMo} \cite{peters2018deep} or \textit{BERT} \cite{devlin2018bert}. These word embedding models produce different vectors for words depending on the surrounding context. For example, the word ``playing" has a different meaning in ``playing tennis" than in ``playing piano" and will therefore get a different word embedding assigned to it. Furthermore, there are also word embeddings made specifically for sentiment analysis \cite{7296633}. Word embeddings are a powerful representation tool on their own, but one can still add additional feature types like the examples provided in Table \ref{Table:FeatureTypes}. For example, in \cite{alsmadi2018rnnvsvm}, word embeddings are enhanced with syntactic and semantic features. The input representations discussed in this section are the basis for the classification models discussed in Section \ref{sec:Methods}. In the rest of this paper, we assume that input representations have already been constructed beforehand. We only further elaborate on the input representation if the classification model modifies the input representations. Furthermore, in the rest of this paper, for clarity purposes, we do not use record-specific or aspect-specific subscripts. \section{Performance Evaluation}\label{sec:Evaluation} The goal of an ABSC model is to produce an output using the input representations. Thus, given a record $R$ containing an aspect $a$, an ABSC model produces a label $\hat{y} \in \mathbb{R}^{1}$ using the feature vector $\bm x \in \mathbb{R}^{d_x}$ or matrix $\bm X \in \mathbb{R}^{d_x \times n_x}$, where $d_x$ represents the number of features used, and $n_x$ indicates the number of words considered to be in the context. The effectiveness of ABSC models can be compared by evaluating the output $\hat{y}$ using a variety of performance measures. Each of these measures highlights certain strengths and weaknesses of the classification models. In this section, we present various techniques for evaluating ABSC models. These measures are used to compare and contrast the various ABSC approaches in Section \ref{sec:Methods}. The most commonly used performance measures for ABSC and ABSA, in general, are the well-known accuracy, precision, recall, and $F_1$-measure \cite{schouten2015survey}. These measures evaluate the sentiment classification $\hat{y}$ by comparing it to the true sentiment label $y$ of the aspect. Models that achieve high values for these performance measures will produce higher-quality predictions. The accuracy measures the ratio of correctly classified aspects to the total number of aspects in the dataset. This is intuitively clear, as it is a direct indication of how well the model predicts. In a binary classification setting, a simple baseline to use is a random classifier. A fully random classifier would obtain an expected accuracy of 0.5, which any predictive model should be able to outperform. Yet, the accuracy measure is often not a valid indicator of performance in imbalanced datasets. If a dataset contains two classes, where 90\% of the records are of one class, then always predicting that class will provide a high accuracy of 0.9.
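To make these measures concrete, the following sketch (a plain-Python implementation of our own, assuming integer-encoded labels) computes the accuracy together with the macro-averaged precision, recall, and F$_1$-measure discussed below; note that other F$_1$ aggregation variants exist:

```python
def evaluate(y_true, y_pred, labels=(-1, 0, 1)):
    n = len(y_true)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / n
    precisions, recalls = [], []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        predicted_c = sum(p == c for p in y_pred)
        actual_c = sum(t == c for t in y_true)
        precisions.append(tp / predicted_c if predicted_c else 0.0)
        recalls.append(tp / actual_c if actual_c else 0.0)
    # macro-averaging: take the mean of the per-class measures
    macro_p = sum(precisions) / len(labels)
    macro_r = sum(recalls) / len(labels)
    macro_f1 = 2 * macro_p * macro_r / (macro_p + macro_r) if macro_p + macro_r else 0.0
    return accuracy, macro_p, macro_r, macro_f1

print(evaluate(y_true=[1, 1, 0, -1], y_pred=[1, 0, 0, -1]))
```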
Because of this class imbalance problem, other measures like the precision and recall are used, as they provide more useful evaluations of classification performance in imbalanced datasets. One can aggregate the precision and recall for each class to obtain a measure of the model performance. This can be done by evaluating the mean of the measures (macro-averaging), or by aggregating based on the contributions of all classes (micro-averaging). A similar process can be used when aggregating the results from multiple different datasets. One can either take the mean of the performance measures from the various datasets (macro-averaging), or one can aggregate based on the contributions of the datasets (micro-averaging). As the precision and recall focus on different parts of the model performance, the F$_1$-measure is typically used to summarize the information. The accuracy, precision, recall, and F$_1$-measure are used in most of the works discussed in this survey. However, there are various alternative measures that are occasionally used as well. Examples include the mean squared error (MSE) and the ranking loss. Suppose we have a training dataset consisting of the $N$ feature vectors $\bm x_1, \dots, \bm x_N$ with the corresponding labels $y_1, \dots, y_N$. Each of the feature vectors $\bm x_1, \dots, \bm x_N$ represents an aspect with the corresponding context in a record, as explained in Section \ref{sec:Inputs}. The corresponding labels $y_1, \dots, y_N$ indicate the true sentiment expressed towards the aspects. An ABSC model is used to produce the label predictions $\hat{y}_1, \dots, \hat{y}_N$. Since the labels for sentiment analysis are generally considered to be ordinal, the MSE can be calculated as follows: \begin{equation}\label{equation:MSE} \setstackgap{L}{8pt} \text{MSE} = \frac{1}{N}\sum_{i=1}^{N}(\stackunder{\hat{y}_i}{\scriptscriptstyle 1 \times 1} - \stackunder{y_i}{\scriptscriptstyle 1 \times 1})^2. \end{equation} Due to the square, the MSE penalizes large errors more than small errors. An alternative is the ranking loss \cite{crammer2002pranking}, which penalizes small and large errors more equally. The ranking loss measure is closely related to the mean absolute error and can be calculated according to Equation (\ref{equation:MAE}): \begin{equation}\label{equation:MAE} \setstackgap{L}{8pt} \text{Ranking Loss} = \frac{1}{N}\sum_{i=1}^{N}|\stackunder{\hat{y}_i}{\scriptscriptstyle 1 \times 1} - \stackunder{y_i}{\scriptscriptstyle 1 \times 1}|. \end{equation} Both the ranking loss and the MSE are used to measure the errors in the classification predictions. As such, the lower the values attained for these measures, the better the model performs. A final performance measure is the area under the Receiver Operating Characteristics (ROC) curve, abbreviated as AUC. The ROC curve plots the true positive rate (which equals the recall) against the false positive rate. The area under the curve (AUC) is then a measure of how well the model can separate classes, meaning that the higher the AUC value, the better the performance of the model. \section{Sentiment Classification}\label{sec:Methods} ABSC models can generally be classified into three major categories: knowledge-based approaches, machine learning models, and hybrid models. In Figure \ref{fig:Taxonomy}, a taxonomy consisting of these categories and their sub-categories is presented.
The taxonomy categories are explained in more detail in Subsection \ref{sec:MethodsKnowledgeBased}, Subsection \ref{sec:MethodsMachineLearning}, and Subsection \ref{sec:MethodsHybrid}, respectively. Each of these subsections contains an overview of various prominent ABSC models for that model type in the form of a summarizing table. Each table consists of columns detailing the model, the types of data used, and the various performance measures reported for every model. A row corresponding to a model can consist of multiple entries if datasets from different domains are used. Additionally, we report results from other works that implement the same model using different datasets. When a model is reimplemented in another paper, we indicate this beneath the entry. Unavailable information is indicated by a ``-''. Since we are unable to include all results reported by each paper, we take several steps to summarize the results. First of all, we only include the results of the best model architecture presented in each paper. Secondly, when multiple datasets are used in the same domain, we average the results across those datasets. The only exception is when a model is reimplemented in another paper since we cannot guarantee that the model implementation is the same. We have included the references for all datasets in the tables for easier model comparisons. General characteristics of some ABSC datasets are provided in Table \ref{Table:Data}. Note that this table only includes the most used domains and languages from the listed datasets. For example, the SemEval-2016 dataset contains many additional languages, such as Dutch, Chinese, and Turkish. \begin{figure} \centering \includegraphics[scale=0.7]{ABSCtaxonomy_Wide.pdf} \caption{Taxonomy of ABSC methods.} \label{fig:Taxonomy} \end{figure} \begin{table*} \caption{Overview of datasets used for ABSC.} \label{Table:Data} \begin{center} \resizebox{430pt}{!}{% \begin{tabular}{p{105pt}|p{120pt}|p{60pt}|p{40pt}|p{40pt}|p{40pt}|p{40pt}} \textbf{Dataset} & \textbf{Domain(s)} & \textbf{Language(s)} & \textbf{Positives} & \textbf{Neutrals} & \textbf{Negatives} & \textbf{Total} \\ \hline SemEval-2014 \cite{pontiki-etal-2014-semeval} & Reviews (Electronics) \newline Reviews (Restaurants) & English \newline English & 1328 \newline 2894 & 629 \newline 829 & 994 \newline 1001 & 2951 \newline 4724 \\ \hline SemEval-2015 \cite{pontiki2015semeval} & Reviews (Electronics) \newline Reviews (Restaurants) \newline Reviews (Hotels) & English \newline English \newline English & 1644 \newline 1652 \newline 243 & 185 \newline 98 \newline 12 & 1094 \newline 749 \newline 84 & 2923 \newline 2499 \newline 339 \\ \hline SemEval-2016 \cite{pontiki2016semeval} & Reviews (Electronics) \newline Reviews (Restaurants) \newline Reviews (Hotels) & English \newline English \newline Arabic & 1540 \newline 1802\newline 7705 & 154 \newline 104\newline 852 & 869 \newline 623\newline 4556 & 2563 \newline 2529 \newline 13113 \\ \hline Twitter Data \cite{dong2014adaptive} & Social Media (Various) & English & 1735 & 3470 & 1735 & 6940 \\ \hline SentiHood \cite{saeidi-etal-2016-sentihood} & Social Media (Neighborhoods) & English & 2460 & 0 & 1216 & 3676 \\ \hline Hindi Reviews \cite{akhtar2016aspect_h} & Reviews (Various) & Hindi & 1986 & 1914 & 569 & 4469 \\ \hline Indonesian Marketplace \cite{8285850} & Reviews (Various) & Indonesian & 7982 & 0 & 1677 & 9659 \\ \hline FiQA-2018 \cite{10.1145/3184558.3192301} & Social Media (Finance) & English & 722 & 13 & 378 & 1113 \\ \hline \end{tabular} } 
\end{center} \end{table*} \begin{table*} \caption{Overview of prominent knowledge-based ABSC models and their reported performances. If no performance measures are displayed, then no quantitative analysis was provided for ABSC in the original paper.} \label{Table:KB} \begin{center} \resizebox{430pt}{!}{% \begin{tabular}{p{115pt}|p{65pt}|p{118pt}|p{40pt}|p{40pt}|p{28pt}|p{18pt}|p{50pt}} \textbf{Reference} & \textbf{Category} & \textbf{Domain(s)} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F$_1$} & \textbf{Alternative \newline Measures} \\ \hline Hu and Liu (2004) \cite{hu2004mining} & Dictionary-Based & Reviews (Electronics) \cite{amazon, cnet} & 0.842 & 0.642 & 0.693 & 0.667 & - \\ \hline Zhu et al. (2009) \cite{zhu2009multiaspect} & Dictionary-Based & Reviews (Restaurants) \cite{dianping} & - & 0.755 & 0.755 & 0.755 & - \\ \hline Moghaddam and Ester (2010) \cite{moghaddam2010opinion} & Dictionary-Based & Reviews (Electronics) \cite{epinions} & - & - & - & - & Ranking Loss: \newline 0.49 \\ \hline Eirinaki et al. (2012) \cite{eirinaki2012feature} & Dictionary-Based & Reviews (Various) \cite{sentdata} & - & - & - & - & - \\ \hline Zhou and Chaovalit (2008) \cite{zhou2008ontology} & Ontology-Based & Reviews (Movies) \cite{imdb} & 0.722 & - & - & - & - \\ \hline Kontopoulos et al.\newline (2013) \cite{kontopoulos2013ontology} & Ontology-Based & Social Media (Electronics) \cite{twitter} & - & - & - & - & - \\ \hline Nie et al. (2013) \cite{nie2013opinion} & Ontology-Based & Reviews (Electronics) \cite{jd} & - & 0.587 & 0.625 & 0.605 & - \\ \hline Hoogervorst et al. (2016) \cite{hoogervorst2016aspect} & Discourse-Based & Reviews (Electronics) \cite{pontiki2015semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval, pontiki2015semeval} & -\newline - & 0.670 \newline 0.670 & 0.670 \newline 0.670 & 0.670 \newline 0.670 & -\newline - \\ \hline Sanglerdsinlapachai et al. (2016) \cite{sanglerdsinlapachai2016exploring} & Discourse-Based & Reviews (Electronics) \cite{cnet} & 0.633 & - & - & - & - \\ \hline Dragoni et al. (2018) \cite{dragoni2018combining} & Discourse-Based & Reviews (Electronics) \cite{pontiki2015semeval}\newline Reviews (Restaurants) \cite{pontiki2015semeval} & 0.859\newline 0.779 & - \newline - & - \newline - & - \newline - & -\newline - \\ \end{tabular} } \end{center} \end{table*} \subsection{Knowledge-Based}\label{sec:MethodsKnowledgeBased} Knowledge-based methods, also known as symbolic AI, are approaches that make use of a knowledge base. Knowledge bases are generally defined as stores of information with underlying sets of rules, relations, and assumptions that a computer system can draw upon. Knowledge-based methods are heavily related to the input representation since these approaches often use a knowledge base to define features. One of the advantages of knowledge-based methods is interpretability. Namely, it is generally easy to identify the information used to produce the model output. The underlying mechanisms of knowledge-based methods are typically relatively simple, which allows for highly transparent approaches to ABSC. Knowledge-based methods require no training time, but the construction of knowledge bases can take considerable time. We discuss the following three knowledge-based approaches: dictionary-based, ontology-based, and discourse-based methods. The performances of various knowledge-based methods are displayed in Table \ref{Table:KB}. 
\subsubsection{Dictionary-Based}\label{sec:MethodsDictionaryBased} Early methods for ABSC were mostly based on dictionaries. Given a record $R$ containing an aspect $a$, dictionary-based methods construct a feature vector $\bm x$ using a dictionary, where each element $x_i$ represents a sentiment score, or orientation, of a word in the context regarding the aspect. Various dictionaries can be used for ABSC, such as \textit{WordNet} \cite{miller1995wordnet} and \textit{SentiWordNet} \cite{baccianella2010sentiwordnet}. These dictionaries define sets of words and the linguistic relations between them. For example, \textit{WordNet} \cite{miller1995wordnet} groups nouns, verbs, adjectives, and adverbs into so-called \textit{synsets}: sets of synonyms, or groups of words that have the same meaning. One can exploit these relations for ABSC since words generally portray the same sentiment as their synonyms \cite{hu2004mining}. As such, we start with a set of \textit{seed words} for which the sentiment is known. For example, one can use \textit{``good"}, \textit{``fantastic"}, and \textit{``perfect"} as positive seed words, and \textit{``bad"}, \textit{``boring"}, and \textit{``ugly"} as negative seed words. These seed words can then be used to determine the sentiment of words surrounding an aspect via the synsets defined in a dictionary, like \textit{WordNet} \cite{miller1995wordnet}. Words that are synonyms of a positive seed word, or antonyms of a negative seed word, will receive a positive sentiment score. \textit{SentiWordNet} \cite{baccianella2010sentiwordnet} is in part based on this idea. This dictionary generates a sentiment label (\textit{``positive"}, \textit{``neutral"}, or \textit{``negative"}) for each of the synsets using a combination of a seed set expansion approach and a variety of classification models. After determining sentiment scores for the context words, a method needs to be implemented to determine a sentiment output. For example, the authors of \cite{hu2004mining} examine whether there are mostly negative or positive words present in the context. In \cite{hu2004mining}, the sentiment scores for the context words are encoded as 1 for positive, -1 for negative, and 0 for neutral. Then, the sentiment classification can be determined by summing the elements of $\bm x$. If the sum is positive, a positive label is returned, and a negative label otherwise. Similarly, in \cite{eirinaki2012feature}, the context words are assigned a sentiment score in the range $[-4, 4]$. The sentiment polarity of the aspect is then determined based on the average score per opinion word relating to the aspect. \subsubsection{Ontology-Based}\label{sec:MethodsOntologyBased} An ontology is generally defined as an \textit{``explicit, machine-readable specification of a shared conceptualization"} \cite{gruber1995toward, studer1998knowledge}, and defines a set of entities and relations corresponding to their properties. The main difference between an ontology and a dictionary is that a dictionary captures linguistic relations between words, while an ontology represents relations between real entities. These relations can be used for determining which words in the record are important for determining the sentiment towards the aspect. For ABSC, one can either use an existing ontology or create one based on the domain at hand \cite{kontopoulos2013ontology}. 
Examples of existing ontologies are the semantically interlinked web communities (SIOC) ontology \cite{SIOC}, which is an ontology that captures data from online community websites, and the ontology of emotions proposed in \cite{roberts2012empatweet}. Yet, rather than relying on existing ones, most researchers choose to create their own ontology. This is because finding an ontology that relates well to a specific domain is difficult since many existing ontologies typically capture rather generic concepts. Designing ontologies is often done using specific methods, such as formal concept analysis \cite{obitko2004ontology} or the \textit{OntoClean} methodology \cite{guarino2002evaluating}. Ontologies are often manually created to ensure the accuracy of the captured relations \cite{mevskele2020aldonar}, but this process is rather time-consuming. As such, methods for creating ontologies semi-automatically or even completely automatically can be vital when creating larger ontologies. For example, in \cite{10.1007/978-3-319-12024-9_4}, an ontology for ABSC is created semi-automatically using a proposed semantic asset management workbench (SAMW). A method for fully automatically creating ontologies is proposed in \cite{1179189}. Ontologies capture the structure of objects in a domain. These relations can be used to determine the context corresponding to the aspect \cite{kontopoulos2013ontology}. Then, a method can be used to determine the sentiment label using the context obtained using the ontology. For example, by using a dictionary-based sentiment classifier \cite{zhou2008ontology}. While regular ontologies are useful tools for defining relations between objects, to further facilitate sentiment analysis, sentiment information can be incorporated into the ontology. Such sentiment ontologies specifically define sentiment relations between words or entities. For example, in \cite{nie2013opinion}, a sentiment ontology tree is constructed that links product aspects to opinion words with the corresponding sentiment scores. In \cite{wojcik2014ontology}, a sentiment ontology is constructed where the sentiment relations are defined using the \textit{SentiWordNet} dictionary \cite{baccianella2010sentiwordnet}. Zhuang et al. \cite{zhuang2019soba} propose a semi-automated ontology builder for ABSA that constructs an ontology based on sentiment relations. When classifying an aspect, one can link the words in the context to the concepts and relations in the sentiment ontology, and summarize the sentiment relations to produce a sentiment classification. The semi-automatic method proposed in \cite{zhuang2019soba} mainly focuses on word frequencies, due to the limited amount of domain data. In \cite{dera2020sasobus}, another semi-automatic ontology builder is proposed that is based on synsets from the \textit{WordNet} \cite{miller1995wordnet} dictionary. Lastly, ten Haaf et al. \cite{tenhaaf2021websoba} propose a semi-automatic ontology builder that is based on word embeddings produced using the \textit{word2vec} \cite{mikolov2017advances} method. \subsubsection{Discourse-Based}\label{sec:MethodsDiscourseBased} Another type of knowledge base that can be used for ABSC is a discourse tree based on rhetorical structure theory (RST) \cite{mann1988rhetorical}. RST can be used to define a hierarchical discourse structure within a record to categorize phrases into elementary discourse units (EDU). 
Similar to ontologies, discourse trees define sets of relations that can be used to determine which words are important when assigning a sentiment classification to the aspect. For a record $R$, a discourse tree is constructed that defines the hierarchical discourse relations in the record. To determine the context concerning an aspect $a$ in the record, the authors of \cite{hoogervorst2016aspect} use an RST-based method that produces a sub-tree of the discourse tree that contains the discourse relations specifically regarding the aspect. This sub-tree is named the context tree and can be used to determine the sentiment classification of the aspect. Since the context tree does not contain any sentiment information, the authors of \cite{hoogervorst2016aspect} use a dictionary-based method to label the leaf nodes of the context tree with sentiment orientation scores for sentence-level ABSC. To determine the sentiment classification of the aspect, one can simply evaluate the sum of the sentiment scores defined for the context tree. In \cite{sanglerdsinlapachai2016exploring}, a similar technique is used with a more sophisticated method of aggregating the scores. There are other types of discourse structure theories besides RST \cite{HOU2020113421}. One example of this is cross-document structure theory (CST) \cite{10.3115/1117736.1117745}, which can be used to analyse the structure and discourse relations between groups of documents. Cross-document structure analysis can be useful in social media analysis since social media posts are short documents that are typically highly interrelated. An interesting example of this is the SMACk system \cite{dragoni2016smack}, which analyses a cross-document structure based on abstract argumentation theory \cite{DUNG1995321}. For example, products often receive multiple reviews from various users that may respond to each other. The SMACk system analyses the relations between the different arguments presented in the reviews to improve the aggregation of the sentiment expressed towards the different aspects \cite{dragoni2018combining}. \subsection{Machine Learning}\label{sec:MethodsMachineLearning} As opposed to knowledge-based methods that exploit knowledge bases to achieve a sentiment classification, machine learning models, also known as subsymbolic AI, use a training dataset of feature vectors and corresponding correct labels. The machine learning model is trained to extract patterns from the data that can be used to distinguish between sentiment classes. There are many types of machine learning models, which can be grouped into the following categories: support vector machines, tree-based models, deep learning models, and attention-based deep learning models. The performances of various machine learning models are presented in Table \ref{Table:ML}. \begin{table*} \caption{Overview of prominent machine learning ABSC models and their reported performances. DT = ``Decision Tree" and RF = ``Random Forest".} \label{Table:ML} \begin{center} \resizebox{430pt}{!}{% \begin{tabular}{p{110pt}|p{100pt}|p{120pt}|p{40pt}|p{40pt}|p{28pt}|p{20pt}|p{45pt}} \textbf{Reference} & \textbf{Category} & \textbf{Domain(s)} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F$_1$} & \textbf{Alternative \newline Measures} \\ \hline Jiang et al. 
(2011) \cite{jiang2011target} & SVM & Social Media (Various) \cite{twitter}\newline Social Media (Various) \cite{dong2014adaptive} \newline (reimplemented in \cite{vo2015target}) & 0.682\newline 0.634 & -\newline - & -\newline - & -\newline 0.633 & -\newline - \\ \hline Yu et al. (2011) \cite{yu2011aspect} & SVM & Reviews (Electronics) \cite{cnet, viewpoints, reevoo, gsmarena} & - & - & - & 0.787 & - \\ \hline Pannala et al. (2016) \cite{pannala2016supervised} & SVM & Reviews (Electronics) \cite{pontiki2015semeval}\newline Reviews (Restaurants) \cite{pontiki2015semeval} & 0.732\newline 0.823 & -\newline - & -\newline - & -\newline - & -\newline - \\ \hline Akhtar et al. (2016) \cite{akhtar2016aspect} & SVM & Reviews (Electronics) \cite{akhtar2016aspect}\newline Reviews (Mobile Apps) \cite{akhtar2016aspect}\newline Reviews (Holidays) \cite{akhtar2016aspect}\newline Reviews (Movies) \cite{akhtar2016aspect} & 0.511\newline 0.421\newline 0.606\newline 0.916 & -\newline -\newline -\newline - & -\newline -\newline -\newline - & -\newline -\newline -\newline - & -\newline -\newline -\newline - \\ \hline Akhtar et al. (2016) \cite{akhtar2016aspect_h} & SVM & Reviews (Various) \cite{akhtar2016aspect_h} & 0.541 & - & - & - & - \\ \hline Hegde and Seema (2017) \cite{7972395} & SVM & Reviews (Various) \cite{7972395} & 0.758 & 0.762 & 0.755 & 0.758 & - \\ \hline De Fran\c{c}a Costa and da Silva (2018) \cite{10.1145/3184558.3191828} & SVM & Social Media (Finance) \cite{10.1145/3184558.3192301} & - & - & - & - & MSE = 0.151 \\ \hline Al-Smadi et al. (2019) \cite{alsmadi2019enhancing} & SVM & Reviews (Hotels) \cite{pontiki2016semeval} & 0.954 & - & - & - & AUC = 0.989 \\ \hline Akhtar et al. (2016) \cite{akhtar2016aspect} & Tree-Based (DT) & Reviews (Electronics) \cite{akhtar2016aspect}\newline Reviews (Mobile Apps) \cite{akhtar2016aspect}\newline Reviews (Holidays) \cite{akhtar2016aspect}\newline Reviews (Movies) \cite{akhtar2016aspect} & 0.545\newline 0.480\newline 0.652\newline 0.916 & -\newline -\newline -\newline - & -\newline -\newline -\newline - & -\newline -\newline -\newline - & -\newline -\newline -\newline - \\ \hline Hegde and Seema (2017) \cite{7972395} & Tree-Based (DT) & Reviews (Various) \cite{7972395} & 0.806 & 0.729 & 0.792 & 0.759 & - \\ \hline Al-Smadi et al. (2019) \cite{alsmadi2019enhancing} & Tree-Based (DT) & Reviews (Hotels) \cite{pontiki2016semeval} & 0.949 & - & - & - & AUC = 0.996 \\ \hline Gupta et al. (2014) \cite{gupta-ekbal-2014-iitp} & Tree-Based (RF) & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.671\newline 0.674 & -\newline - & -\newline - & -\newline - & -\newline - \\ \hline Tang et al. (2016) \cite{tang-etal-2016-effective} & Deep Learning (RNN) & Social Media (Various) \cite{dong2014adaptive} & 0.715 & - & - & 0.695 & - \\ \hline Ruder et al. (2016) \cite{ruder2016hierarchical} & Deep Learning (RNN) & Reviews (Electronics) \cite{pontiki2016semeval}\newline Reviews (Restaurants) \cite{pontiki2016semeval} \newline Reviews (Hotels) \cite{pontiki2016semeval}& 0.790 \newline 0.807 \newline 0.829 & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - \\ \hline Al-Smadi et al. (2018) \cite{alsmadi2018rnnvsvm} & Deep Learning (RNN) & Reviews (Hotels) \cite{pontiki2016semeval} & 0.870 & - & - & - & - \\ \hline Yang et al.
(2018) \cite{yang2018financial} & Deep Learning (RNN) & Social Media (Finance) \cite{10.1145/3184558.3192301} & - & - & - & - & MSE = 0.080 \\ \hline Dong et al. (2014) \cite{dong2014adaptive} & Deep Learning (RecNN) & Social Media (Various) \cite{dong2014adaptive} \newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} \newline (reimplemented in \cite{nguyen2015phrasernn}) & 0.663\newline 0.604 & -\newline 0.368 & -\newline 0.604 & 0.659\newline 0.457 & - \\ \hline Nguyen and Shirai (2015) \cite{nguyen2015phrasernn} & Deep Learning (RecNN) & Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.639 & 0.624 & 0.639 & 0.622 & - \\ \hline Ruder et al. (2016) \cite{ruder2016insight} & Deep Learning (CNN) & Reviews (Electronics) \cite{pontiki2016semeval}\newline Reviews (Restaurants) \cite{pontiki2016semeval} \newline Reviews (Hotels) \cite{pontiki2016semeval}& 0.781 \newline 0.765 \newline 0.827 & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - \\ \hline Jangid et al. (2018) \cite{10.1145/3184558.3191827} & Deep Learning (CNN) & Social Media (Finance) \cite{10.1145/3184558.3192301} & - & - & - & - & MSE = 0.112 \\ \hline Xue and Li (2018) \cite{xue-li-2018-aspect} & Deep Learning (CNN) & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.691 \newline 0.773 & -\newline - & -\newline - & -\newline - & -\newline - \\ \hline Zeng et al. (2019) \cite{zeng2019aspect} & Deep Learning (CNN) & Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.823 & - & - & - & - \\ \end{tabular} } \end{center} \end{table*} \subsubsection{Support Vector Machines}\label{sec:MethodsMachineLearningSVM} Support vector machine (SVM) models \cite{cortes1995support} have long been a popular choice for sentiment analysis and ABSC \cite{alsmadi2018rnnvsvm, pang2002thumbs, pannala2016supervised, varghese2013svm, mullen2004sentiment}. SVM models are classifiers that distinguish between categories by constructing a hyperplane that separates data vectors belonging to the different classes \cite{cortes1995support, varghese2013svm, burges1998tutorial}. In the case of ABSC, this comes down to separating aspects into sentiment classes (\textit{``positive"}, \textit{``neutral"}, and \textit{``negative"}) based on the feature vector $\bm x$. Suppose we have a training dataset consisting of the $N$ feature vectors $\bm x_1, \dots, \bm x_N$ with the corresponding labels $y_1, \dots, y_N$. Each of the feature vectors $\bm x_1, \dots, \bm x_N$ represents an aspect with the corresponding context in a record, as explained in Section \ref{sec:Inputs}. The corresponding labels $y_1, \dots, y_N$ indicate the true sentiment expressed towards the aspects. We first examine the case when only two sentiment classes are considered: \textit{``positive"} and \textit{``negative"}. The labels for aspects for which a positive sentiment is expressed are encoded as a 1, while aspects toward which a negative sentiment is expressed receive a label of -1. 
The SVM classifier can then be summarized as follows: \begin{equation}\label{equation:NonLinearSVM} \setstackgap{L}{8pt} \stackunder{\hat{y}}{\scriptscriptstyle 1 \times 1} = \text{sign}(\stackunder{\bm w^T}{\scriptscriptstyle 1 \times d_z} \times \stackunder{\phi(\bm x)}{\scriptscriptstyle d_z \times 1}+\stackunder{b}{\scriptscriptstyle 1 \times 1}), \end{equation} where $\bm w \in \mathbb{R}^{d_z}$ is a learned vector of weights, $b \in \mathbb{R}^{1}$ is a learned bias constant, and $\phi(\cdot): \mathbb{R}^{d_x} \to \mathbb{R}^{d_z}$ is a feature mapping (for a linear SVM, $\phi(\cdot)$ is the identity mapping and $d_z = d_x$). The weights are determined by constructing a hyperplane that maximally separates the training examples of the different classes, based on the feature vectors $\bm x_1, \dots, \bm x_N$ and the labels $y_1, \dots, y_N$. While some datasets can be separated using a linear function form, other problems may not be so easily separable. In such situations, the feature mapping $\phi(\cdot)$, implicitly defined via a kernel function, can be used to transform the feature vector $\bm x$ into a higher-dimensional space where the labels are separated more easily \cite{schlkopf2018learning}. The disadvantage of using a kernel function is that the learned coefficients become difficult to interpret due to the non-linearity. When an aspect can be classified into multiple sentiment categories, the SVM model must be adjusted. An example solution is a ``one-versus-all" implementation, where an SVM model is trained to separate each class versus all other classes. The final prediction is based on which decision function has the highest value. SVMs are known to generalize well and be robust towards noisy data \cite{xu2009robustness}. Furthermore, as mentioned in Section \ref{sec:Inputs}, feature vectors for ABSC can often consist of substantial amounts of features, which SVM models are known to handle well \cite{joachims1998text}. However, finding a kernel that works is generally a difficult task \cite{burges1998tutorial}, and hand-crafted features are required for the model to perform well \cite{alsmadi2018rnnvsvm}. \subsubsection{Tree-Based}\label{sec:MethodsMachineLearningTree} Tree-based approaches are methods based on the trainable decision tree \cite{breiman1984classification} model. A decision tree model consists of a tree-like structure where each internal node, or decision node, in the tree represents a condition based on a particular feature from the vector $\bm x$, and each leaf node represents a particular sentiment class. Given the feature vector $\bm x$, we start at the root node of the tree and examine the splitting condition. The condition and the corresponding feature in $\bm x$ determine toward which internal node we move next. We move down the tree until a leaf node is reached; the sentiment class assigned to that leaf node determines the prediction $\hat{y}$. A significant advantage of decision trees is their explainability. The decision rules are typically easy to interpret for humans and can therefore be used to discover knowledge and obtain new insights \cite{10.1007/978-3-030-65965-3_28}. The new insights can then also be used to, for example, improve the previously discussed knowledge bases. Although decision tree models have not been particularly popular for ABSC, there are some examples of works successfully implementing these models. In \cite{7972395}, an incremental decision tree model is proposed that outperforms an implementation of the previously discussed SVM model. Similarly, in \cite{akhtar2016aspect}, a decision tree is shown to outperform several other models, including an SVM, on a variety of datasets.
However, SVM models can outperform decision tree models for other problems \cite{alsmadi2019enhancing}. The main issue with decision trees is the problem of overfitting, which can be especially problematic for ABSC due to the substantial amounts of features that are often used. A solution to this problem is the use of a random forest model, which is an ensemble of decision trees \cite{breiman2001random}. A random forest consists of a large number of decision tree models. Each tree receives a limited number of features and a bootstrapped sample of the training data to train with. By randomly sampling data and restricting features, the individual decision trees tend to overfit less on specific features or data. Predictions are achieved via majority voting among the aggregated predictions of all individual decision tree models. In \cite{gupta-ekbal-2014-iitp}, a random forest model is implemented for the SemEval-2014 ABSC tasks, but mixed results are obtained for the different datasets. Similar results are presented in \cite{8126201}. Examples of other tree-based methods are the gradient boosted trees \cite{friedman2001greedy} and extra trees \cite{geurts2006extremely} classifiers. In \cite{bhoi2018various}, a comparison is made that indicates that these models may provide slight improvements compared to the random forest model. \subsubsection{Deep Learning}\label{sec:MethodsMachineLearningDeep} Deep learning models \cite{goodfellow2016deep} have revolutionized many fields of research \cite{sejnowski2018deep}, including sentiment analysis and ABSC \cite{zhou2019deep, do2019deep}. Significant amounts of research have been put into developing deep learning models for various types of data and learning tasks. One of the main disadvantages of deep learning models is the fact that they are highly difficult to interpret. Basic machine learning models, like decision trees and linear SVMs, can provide some useful model interpretations. Yet, although attempts have been made to explain the predictions produced by deep learning models \cite{guidotti2018survey, 10.1145/3359786}, these black-box methods are still regarded as practically not explainable. Furthermore, effectively training deep learning models requires large amounts of computational resources. This is because deep learning models require large amounts of data to train, which have not always been available for ABSC. However, as both deep learning research and the amount of publicly available data grow \cite{pontiki2016semeval, pontiki2015semeval, pontiki-etal-2014-semeval}, deep learning models become more and more popular due to their great predictive performances. Deep learning models used for ABSC include, but are not limited to, recurrent neural networks, recursive neural networks, and convolutional neural networks. \textbf{Recurrent neural network} (RNN) models \cite{hopfield1982neural} have in recent years become one of, if not the most popular choice for ABSC models. RNN models are powerful tools used for learning sequence-based data \cite{lipton2015critical}. They have achieved remarkable results for many language-based learning tasks, including ABSC, but also a wide variety of other sequence-based tasks. A basic RNN model is presented in Figure \ref{fig:RNN}. In language processing, the main idea behind RNN models is that a sequence of words is sequentially fed through a neural network model. 
The hidden state produced by the neural network based on one word is used as input for the neural network at the next step, such that information is carried through the sequence. This same concept can also be used to process sequences of images, time series, or other sequences. We consider again a record $R$ containing an aspect $a$ with a label $y$. However, compared to the previously discussed classification methods, we now consider the case where the numeric features are represented in a matrix $\bm X \in \mathbb{R}^{d_x \times n_x}$, where $d_x$ represents the number of features used, and $n_x$ indicates the number of words considered to be in the context. For any task, given an input matrix $\bm X$, a general RNN model can be defined as follows: \begin{equation}\label{equation:RNN} \def\scriptscriptstyle{\scriptscriptstyle} \setstackgap{L}{8pt} \defL{L} \stackunder{\bm h_t}{\scriptscriptstyle d_h \times 1} = f(\stackunder{\bm h_{t-1}}{\scriptscriptstyle d_h \times 1}, \stackunder{\bm x_t}{\scriptscriptstyle d_x \times 1}), \end{equation} where $\bm h_t \in \mathbb{R}^{d_h}$ is the hidden state vector at step $t$, $d_h$ is the pre-defined dimension of the hidden state vectors, and $\bm x_t \in \mathbb{R}^{d_x}$ is the $t$th column of $\bm X$, for $t = 1, \dots, n_x$. In the most basic RNN form, the function $f(.)$ represents a concatenation of the two vectors $\bm h_{t-1}$ and $\bm x_t$ that is then fed through a basic neural network model consisting of linear transformations and non-linear activation functions. Thus, at each step $t$, information from the previous words (contained in $\bm h_{t-1}$) is combined with the information contained in the current word (contained in $\bm x_t$). The final hidden state vector $\bm h_{n_x}$ should then contain all information from the words corresponding to the context of the aspect, processed from left to right. The last hidden state can then be put through an output layer to produce a label prediction. A typical example of an output layer is a linear transformation and a softmax function: \begin{equation}\label{equation:FinalLayer} \def\scriptscriptstyle{\scriptscriptstyle} \setstackgap{L}{8pt} \defL{L} \stackunder{\bm s}{\scriptscriptstyle d_y \times 1} = \text{softmax}(\stackunder{\bm W_{f}}{\scriptscriptstyle d_y \times d_h}\times \stackunder{\bm h_{n_x}}{\scriptscriptstyle d_h \times 1} + \stackunder{\bm b_{f}}{\scriptscriptstyle d_y \times 1}), \end{equation} where $\bm W_f \in \mathbb{R}^{d_y \times d_h}$ and $\bm b_f \in \mathbb{R}^{d_y}$ are, respectively, the trainable weight matrix and bias vector of the final layer, and $\bm s \in \mathbb{R}^{d_y}$ contains the probabilities of being the correct label for each sentiment class. These probabilities can also be interpreted as sentiment scores. A label prediction can then be obtained by selecting the label with the highest sentiment score, i.e., the highest probability of being correct. While basic RNN models work well with shorter sequences, problems occur when they are used to process longer and more complex sentences. Due to the multiplicative nature of neural networks, RNN models generally suffer heavily from the vanishing gradient problem, which in turn means that these models have trouble learning long-term dependencies \cite{hochreiter2001gradient, bengio1994learning}.
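As a concrete illustration of Equations (\ref{equation:RNN}) and (\ref{equation:FinalLayer}), the following minimal sketch, assuming NumPy and randomly initialized (untrained) parameters, runs a basic RNN forward pass over a context matrix $\bm X$ and produces sentiment scores; all names and dimensions are illustrative.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h, d_y, n_x = 50, 32, 3, 10   # feature, hidden, label, context sizes

# Untrained parameters; in practice these are learned from data.
W_h = 0.1 * rng.normal(size=(d_h, d_h + d_x))   # recurrent weights
b_h = np.zeros(d_h)
W_f = 0.1 * rng.normal(size=(d_y, d_h))         # final-layer weights
b_f = np.zeros(d_y)

X = rng.normal(size=(d_x, n_x))   # columns are the word vectors x_1..x_{n_x}
h = np.zeros(d_h)                 # initial hidden state

# Basic RNN cell: concatenate h_{t-1} and x_t, then a linear map and tanh.
for t in range(n_x):
    h = np.tanh(W_h @ np.concatenate([h, X[:, t]]) + b_h)

# Output layer: softmax over a linear transformation of the last hidden state.
logits = W_f @ h + b_f
s = np.exp(logits - logits.max())
s /= s.sum()
print(s, s.argmax())   # sentiment scores and the predicted class index
\end{verbatim}

The repeated application of the same weight matrix and the saturating $\tanh$ in this loop also hint at why gradients vanish over long sequences.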
As such, more advanced RNN models, such as long short-term memory (LSTM) \cite{hochreiter1997long} and gated recurrent unit (GRU) \cite{cho2014properties} models, improve the function $f(.)$ by incorporating a series of gates. These gates allow information to flow through the model without losing critical details. These types of RNN models have been proven to work for many different problems and have become the standard when implementing RNNs for ABSC. Further improvements can be made to RNN models by not only processing the words from left to right but also from right to left. These bi-directional RNN (Bi-RNN) models add a second pass in which the flow of information is reversed, which allows information at each end of the sequence of words to be preserved. \begin{figure*} \centering \includegraphics[scale=0.6]{RNN.pdf} \caption{An illustration of a basic RNN model for ABSC.} \label{fig:RNN} \end{figure*} RNN models are versatile models that can be applied to many tasks that involve sequence data, including ABSC. An implementation of an LSTM model for sentence-level ABSC is presented in \cite{alsmadi2018rnnvsvm}, where it is compared to an SVM model for ABSC of hotel reviews. The SVM model significantly outperforms the RNN model, which the authors attribute to the rich hand-crafted feature vectors used to train the SVM model. In contrast, in \cite{ruder2016hierarchical}, a hierarchical LSTM model is implemented and compared to other deep learning architectures using a collection of SemEval-2016 \cite{pontiki2016semeval} datasets. This hierarchical LSTM model leverages document-level information to perform sentence-level ABSC. The authors show that this model achieves competitive results compared to the best models of the competition, even though no rich hand-crafted features were used. In \cite{tang-etal-2016-effective}, a different approach is proposed that can be used when processing explicit aspects. Two LSTM models are used to model the left and right parts of the context relative to the target. This approach improves upon the basic LSTM model but performs similarly to advanced SVM models with rich features and pooled word embeddings \cite{vo2015target}. Nevertheless, RNN models seem to generally outperform SVMs, especially for more recent tasks (e.g., the FiQA-2018 task \cite{10.1145/3184558.3192301}). In recent research, deep learning models like RNNs are quickly outpacing SVM models in terms of predictive performance. This is the case for many language processing tasks, which is in part due to an increase in available training data. See, for example, the GLUE \cite{wang-etal-2018-glue} and SuperGLUE \cite{NEURIPS2019_4496bf24} benchmarks, where all the top-performing models are deep learning approaches. Yet, as previously mentioned, the increase in performance comes at a great computational cost, since deep learning models typically require significantly more resources to train than simpler models like SVMs. On the other hand, time is saved on designing handcrafted features. Therefore, when determining the optimal model choice for a certain task, one must take into consideration various factors besides predictive performance on benchmark datasets. Namely, there may not be enough computational resources available, or it could be too costly to obtain enough training data to be able to properly train a deep learning model. Conversely, there may not be enough time to design handcrafted features for a high-performance SVM.
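Returning to the RNN architectures themselves, the following is a minimal sketch, assuming the PyTorch library, of a bidirectional LSTM encoder with a softmax output layer of the kind described above; the class and dimension names are illustrative and do not correspond to any specific cited model.

\begin{verbatim}
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Bi-LSTM over the context words with a softmax output layer."""
    def __init__(self, d_x=50, d_h=32, d_y=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=d_x, hidden_size=d_h,
                            bidirectional=True, batch_first=True)
        # Forward and backward summary states are concatenated: 2 * d_h.
        self.out = nn.Linear(2 * d_h, d_y)

    def forward(self, x):              # x: (batch, n_x, d_x)
        states, _ = self.lstm(x)       # states: (batch, n_x, 2 * d_h)
        d_h = states.size(-1) // 2
        # Last forward state and first backward state summarize the sequence.
        h = torch.cat([states[:, -1, :d_h], states[:, 0, d_h:]], dim=-1)
        return torch.softmax(self.out(h), dim=-1)

model = BiLSTMClassifier()
x = torch.randn(1, 10, 50)   # one context of n_x = 10 word vectors
print(model(x))              # sentiment class probabilities
\end{verbatim}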
\begin{figure} \centering \includegraphics[scale=0.6]{RecNN.pdf} \caption{An illustration of a basic RecNN model for ABSC.} \label{fig:RecNN} \end{figure} \textbf{Recursive neural network} (RecNN) models \cite{goller1996learning} are a generalization of RNN models that process the words using a tree-like structure. Similar to RNNs, RecNNs are general models that can be applied to a variety of tasks. A basic RecNN model is presented in Figure \ref{fig:RecNN}. Similarly to RNN models, a function $f(.)$ is defined that is used to combine input vectors. This function is shared throughout the network and is therefore used in each processing step. As seen in Figure \ref{fig:RecNN}, RecNN trees consist of leaf nodes, which are the embedded input vectors, and internal nodes (depicted in blue) where vectors are combined using the function $f(.)$. Given a certain internal node with two child nodes, the output of the internal node can be defined as follows: \begin{equation}\label{equation:RecNN} \def\scriptscriptstyle{\scriptscriptstyle} \setstackgap{L}{8pt} \defL{L} \stackunder{\bm h}{\scriptscriptstyle d_x \times 1} = f(\stackunder{\bm c_{1}}{\scriptscriptstyle d_x \times 1}, \stackunder{\bm c_2}{\scriptscriptstyle d_x \times 1}), \end{equation} where $\bm h \in \mathbb{R}^{d_x}$ is the output of the internal node, and $\bm c_1 \in \mathbb{R}^{d_x}$ and $\bm c_2 \in \mathbb{R}^{d_x}$ are the outputs of the child nodes. By iteratively combining the word embeddings and internal node output vectors, the model can extract information from the words. After the final step, the output vector $\bm h_{4}$ in Figure \ref{fig:RecNN} should contain the information from the sentence or context. While the example we used considers a binary tree, which is the most common type in ABSC, trees with three or more children per internal node are also possible, although the function $f(.)$ would then have to be adjusted to process more than two inputs. In the most basic RecNN setup, the function $f(.)$ takes the form of a standard neural network. However, RecNN models using this basic setup may suffer from the same vanishing gradient problems as basic RNNs. As such, gated functions like the LSTM module have been adapted for RecNN models \cite{tai2015improved}. The main advantage of using RecNN models over RNNs is the fact that the model is no longer restricted to processing the words sequentially from left to right or right to left. The order in which the inputs are processed depends on the type of relations that define the tree. The tree may also be structured such that the model processes the words in the original order, meaning that it would be equivalent to a standard RNN. However, herein also lies one of the disadvantages of these models. Namely, choosing how to define the tree structure can be difficult. RecNN trees are generally constructed using a language parser, which analyses the structure and dependencies of a sentence and deconstructs the sequence of words. Parsing is an entirely separate field in which much research has been done \cite{kubler2009dependency}. Various parsers exist, but the tree in Figure \ref{fig:RecNN} has been generated using the popular Stanford neural network parser \cite{chen2014fast}. The previously discussed RecNN characteristics are general attributes that apply to any task. Yet, more specialized RecNN models have been proposed for ABSC. For example, trees can be constructed in a manner that conforms specifically to the task of ABSC.
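Before turning to these specialized variants, the recursive combination of Equation (\ref{equation:RecNN}) can be sketched as follows; this minimal illustration assumes NumPy, an untrained combination function, and a hand-specified binary tree in place of a parser-generated one.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 50                                  # embedding dimension d_x
W = 0.1 * rng.normal(size=(d, 2 * d))   # weights of the combination function
b = np.zeros(d)

def f(c1, c2):
    """Basic neural combination of two child vectors into one output."""
    return np.tanh(W @ np.concatenate([c1, c2]) + b)

# Word embeddings for a four-word context.
x1, x2, x3, x4 = (rng.normal(size=d) for _ in range(4))

# Hand-specified binary tree ((x1, x2), (x3, x4)); a language parser
# would normally supply this structure.
h1 = f(x1, x2)
h2 = f(x3, x4)
root = f(h1, h2)   # final representation of the whole context
print(root.shape)
\end{verbatim}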
In \cite{dong2014adaptive}, a technique is proposed that can be used to process explicit aspects. The trees in \cite{dong2014adaptive} are built towards the target word or phrase corresponding to the explicit aspect. This means that the model learns to forward-propagate the sentiment towards the aspect. Additionally, instead of using only one function $f(.)$, the proposed model implements multiple types of combination functions from which the model adaptively selects based on input vectors and linguistic characteristics. This RecNN specifically designed for ABSC is shown to significantly outperform previous SVM models \cite{jiang2011target}. Nguyen and Shirai \cite{nguyen2015phrasernn} expand on this idea by constructing a model that incorporates both dependency and constituent trees. Furthermore, they broaden the use of multiple combination functions. This model is shown to outperform the models presented in \cite{dong2014adaptive}. However, based on the reported results, the RecNN models we found still do not perform as well as other deep learning models. \begin{figure} \centering \includegraphics[width=0.6\linewidth]{CNN.pdf} \caption{An illustration of a basic CNN model for ABSC. Each square box represents a value or element of a vector or matrix.} \label{fig:CNN} \end{figure} \textbf{Convolutional neural network} (CNN) models \cite{lecun1999object} are yet another popular deep learning model for text analysis tasks, such as sentiment analysis and ABSA \cite{severyn2015twitter, xue-li-2018-aspect}. Originally, CNN models were used for processing images using three distinct layer types: convolutional layers, pooling layers, and fully-connected layers. A basic CNN model illustrating these layers is presented in Figure \ref{fig:CNN} in the context of language processing. The convolutional layer is generally described as a filter that slides across the input data matrix and produces linear combinations of the values within the window. Pooling layers can be used to reduce the dimensionality of the output and improve generalization by summarizing the information obtained from the convolutional layer. A global pooling layer is typically also used before the final layers to transform the feature matrix into a single vector. This vector is then processed using fully-connected layers to produce the output. The previously explained model layers form a general model architecture that can be used for many tasks. As such, CNN models have also been used for ABSC. In \cite{mulyo2018aspect}, CNN models are shown to produce superior performances compared to SVM models for ABSC. In \cite{ruder2016insight}, a CNN specifically designed for ABSC is implemented by including target embeddings so that the aspect can be modeled explicitly. The aspect embedding is concatenated with the input vectors to be used as input for the CNN. The reported results indicate that this approach allows for performances on par with the top models for the used datasets. A different approach presented in \cite{xue-li-2018-aspect} also uses aspect embeddings but in the form of a non-linear gating mechanism inserted between the convolutional and pooling layers. This model produces highly accurate predictions, outperforming many other models, including a random forest \cite{gupta-ekbal-2014-iitp}, RecNN models \cite{dong2014adaptive, nguyen2015phrasernn}, and even several attention-based deep learning models, which are discussed in Subsection \ref{sec:MethodsMachineLearningAttention}.
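The generic convolution-pooling pipeline described above can be sketched as follows; this is a minimal illustration assuming the PyTorch library, with illustrative filter counts, window size, and dimensions.

\begin{verbatim}
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """1D convolution over word vectors, global max pooling, dense output."""
    def __init__(self, d_x=50, n_filters=16, window=3, d_y=3):
        super().__init__()
        # The filter slides over the word positions, producing linear
        # combinations of the values within each window of `window` words.
        self.conv = nn.Conv1d(d_x, n_filters, kernel_size=window)
        self.out = nn.Linear(n_filters, d_y)

    def forward(self, x):                  # x: (batch, d_x, n_x)
        feats = torch.relu(self.conv(x))   # (batch, n_filters, n_x - window + 1)
        pooled = feats.max(dim=-1).values  # global max pooling
        return torch.softmax(self.out(pooled), dim=-1)

model = TextCNN()
x = torch.randn(1, 50, 10)   # one context matrix X with n_x = 10 words
print(model(x))
\end{verbatim}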
In \cite{zeng2019aspect}, the gated CNN of \cite{xue-li-2018-aspect} is further expanded and improved using a linguistic regularization term in the loss function. Although this model comes close to being the best-performing approach for the SemEval-2014 dataset, it was still outperformed by a hybrid model \cite{kiritchenko2014nrc}, which is discussed in Subsection \ref{sec:MethodsDictionaryEnhanced}. \begin{figure*} \centering \includegraphics[scale=0.6]{AB-RNN.pdf} \caption{An illustration of a basic attention-based RNN model for ABSC.} \label{fig:AB-RNN} \end{figure*} \subsubsection{Attention-Based Deep Learning}\label{sec:MethodsMachineLearningAttention} The attention mechanism \cite{bahdanau2014neural} is a highly effective extension to the deep learning models discussed in the previous subsection. In this subsection, we discuss attention-based deep learning (AB-DL) models for ABSC. The performances of various AB-DL models are displayed in Table \ref{Table:AB-DL}. We illustrate the attention mechanism using Figure \ref{fig:AB-RNN}, in which the example RNN model presented in Figure \ref{fig:RNN} is extended with a basic attention architecture. Consider the input representation matrix $\bm X \in \mathbb{R}^{d_x \times n_x}$ for which the columns are the vectors $\bm x_1, \dots, \bm x_{n_x} \in \mathbb{R}^{d_x}$. RNN units are used to encode the sequence of words to produce the hidden states $\bm h_1, \dots, \bm h_{n_x} \in \mathbb{R}^{d_h}$ that correspond to the $n_x$ words. For clarity, we concatenate the hidden states to create the hidden state matrix $\bm H \in \mathbb{R}^{d_h \times n_x}$. Due to the sequential nature of the processing of the input vectors, the last hidden state $\bm h_{n_x}$ ($\bm h_{5}$ in Figure \ref{fig:AB-RNN}) should contain information from the entire sequence. However, as previously mentioned, RNN models struggle to learn long-term dependencies, which can be addressed using the attention mechanism. \begin{table*} \caption{Overview of prominent attention-based deep learning ABSC models and their reported performances.} \label{Table:AB-DL} \begin{center} \resizebox{400pt}{!}{% \begin{tabular}{p{125pt}|p{140pt}|p{135pt}|p{40pt}|p{40pt}|p{28pt}|p{20pt}} \textbf{Reference} & \textbf{Attention Characteristics} & \textbf{Domain(s)} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F$_1$} \\ \hline Wang et al. (2016) \cite{wang2016attention} & Additive + Global + LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.687 \newline 0.772 & -\newline - & -\newline - & -\newline - \\ \hline Yang et al. (2017) \cite{yang2017attention} & General + Global + LSTM & Social Media (Various) \cite{dong2014adaptive} & 0.726 & - & - & 0.722 \\ \hline Liu et al. (2018) \cite{liu2018cabasc} & Additive + Global + GRU & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}\newline Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval}\newline (reimplemented in \cite{wallaart2019hybrid})& 0.751 \newline 0.809\newline 0.715\newline 0.806 & -\newline -\newline -\newline - & -\newline -\newline -\newline - & -\newline -\newline -\newline - \\ \hline Zhang et al.
(2019) \cite{Zhang2019MultilayerAB} & Additive + Global + CNN & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.754 \newline 0.795\newline 0.713& -\newline -\newline - & -\newline -\newline - &-\newline -\newline - \\ \hline Gan et al. (2020) \cite{GAN2020104827} & Additive + Global + CNN & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.736 \newline 0.814\newline 0.742& -\newline -\newline - & -\newline -\newline - &-\newline -\newline 0.732 \\ \hline He et al. (2018) \cite{he2018effective} & Activated General + Syntax-Based + LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki2016semeval, pontiki2015semeval, pontiki-etal-2014-semeval} & 0.719 \newline 0.823 & -\newline - & -\newline - & 0.692\newline 0.683 \\ \hline Ma et al. (2017) \cite{ma2017ian} & Activated General + Global + Co-attention + LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}\newline (reimplemented in \cite{YANG2019463}) & 0.721 \newline 0.786 \newline 0.698 & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - \\ \hline Gu et al. (2018) \cite{gu-etal-2018-position} & Activated General + Global + Co-attention + Bi-GRU & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.741 \newline 0.812 & -\newline - & -\newline - & -\newline - \\ \hline Fan et al. (2018) \cite{fan2018multi} & General + Global + Co-attention + Bi-LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.754 \newline 0.813\newline 0.725 & -\newline -\newline - & -\newline -\newline - & 0.725\newline 0.719\newline 0.708 \\ \hline Yang et al. (2019) \cite{YANG2019463} & Additive + Global + Co-attention + LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.735 \newline 0.788\newline 0.715 & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - \\ \hline Zheng and Xia (2018) \cite{zheng2018left} & Activated General + Global + Rotatory + Bi-LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}\newline Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} \newline (reimplemented in \cite{wallaart2019hybrid})& 0.752 \newline 0.813\newline 0.727\newline 0.827 & -\newline -\newline -\newline - & -\newline -\newline -\newline - & -\newline -\newline -\newline - \\ \hline Tang et al. (2016) \cite{tang2016aspect} & Additive + Global + Multi-hop + LSTM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}\newline (reimplemented in \cite{YANG2019463}) & 0.722 \newline 0.810 \newline 0.685 & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - \\ \hline Fan et al. 
(2018) \cite{10.1145/3209978.3210115} & Additive + Multi-Hop + Position-Based + Bi-GRU & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive} & 0.764 \newline 0.783 \newline 0.721 & -\newline -\newline - & -\newline -\newline - & 0.721\newline 0.684 \newline 0.708 \\ \hline Majumder et al. (2018) \cite{majumder-etal-2018-iarm} & Additive/Multiplicative + Global + Multi-Hop + GRU & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.738 \newline 0.800 & -\newline - & -\newline - & -\newline - \\ \hline Wallaart and Frasincar (2019) \cite{wallaart2019hybrid} & Activated General + Global + Rotatory + Multi-Hop + Bi-LSTM & Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} & 0.831 & - & - & - \\ \hline Gao et al. (2019) \cite{gao2019bert} & Transformer & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.784 \newline 0.846\newline 0.773 & -\newline -\newline - & -\newline -\newline - & 0.744\newline 0.796\newline 0.744 \\ \hline Xu et al. (2019) \cite{xu-etal-2019-bert} & Transformer & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.781 \newline 0.850 & -\newline - & -\newline - & 0.751\newline 0.770 \\ \hline Zeng et al. (2019) \cite{zeng2019lcf} & Transformer & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.825 \newline 0.871\newline 0.773 & -\newline -\newline - & -\newline -\newline - & 0.796\newline 0.817\newline 0.758 \\ \hline Sun et al. (2019) \cite{sun2019bert} & Transformer & Social Media (Neighborhoods) \cite{saeidi-etal-2016-sentihood} & 0.933 & - & - & - \\ \hline Karimi et al. (2020) \cite{Karimi2020AdversarialTF} & Transformer & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.794 \newline 0.860 & -\newline - & -\newline - & 0.765\newline 0.792 \\ \hline Xu et al. (2020) \cite{XU2020135} & Transformer + Co-attention & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki2016semeval, pontiki2015semeval, pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.781 \newline 0.843\newline 0.766 & -\newline -\newline - & -\newline -\newline - & 0.732\newline 0.712\newline 0.722 \\ \hline Ansar et al. (2021) \cite{ansar2021efficient} & Transformer & Reviews (Movies) \cite{imdb, times}\newline Social Media (Neighborhoods) \cite{saeidi-etal-2016-sentihood} & 0.952 \newline 0.924 & -\newline - & -\newline - & -\newline - \\ \hline Su et al. (2021) \cite{SU2021103477} & Transformer & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive}& 0.826 \newline 0.879\newline 0.776 & -\newline -\newline - & -\newline -\newline - & 0.793\newline 0.823\newline 0.765 \\ \end{tabular} } \end{center} \end{table*} The basic attention architecture consists of three operations: attention scoring, attention alignment, and a weighted averaging operation. 
In its most general form, the attention mechanism requires three inputs: \textbf{key} vectors, \textbf{value} vectors, and a \textbf{query} vector. These attention concepts and the corresponding notation were introduced in \cite{daniluk2017frustratingly} and further popularized in \cite{vaswani2017attention}. The keys and values are generally derived from the data matrix on which attention is calculated. In the example presented in Figure \ref{fig:AB-RNN}, the data matrix upon which we calculate the attention is the matrix of hidden vectors $\bm H = [\bm h_1, \dots, \bm h_{n_x}]$. Key vectors can be derived by linearly transforming the hidden state vectors using a trainable weight matrix $\bm W_K \in \mathbb{R}^{d_k \times d_h}$, which produces the key vectors $\bm k_1, \dots, \bm k_{n_x} \in \mathbb{R}^{d_k}$. A similar process can be performed to obtain the value vectors $\bm v_1, \dots, \bm v_{n_x} \in \mathbb{R}^{d_v}$ using the trainable weight matrix $\bm W_V \in \mathbb{R}^{d_v \times d_h}$. However, for simplicity's sake, the example in Figure \ref{fig:AB-RNN} simply uses the original hidden state vectors as both the keys and values. The keys are used in combination with the query vector $\bm q \in \mathbb{R}^{d_q}$ in the first step of the attention calculation: attention scoring, which involves calculating an attention score corresponding to each feature vector. This score directly determines how much attention is focused on each word. The attention score for the $t$th word depends on the corresponding key vector $\bm k_t$, the query vector $\bm q$, and a score function. The score function can take a variety of forms, but it is generally meant to determine a relation between the key and query vectors. One of the most common score functions is the \textbf{additive} score function \cite{bahdanau2014neural}, which is used in many ABSC models \cite{wang2016attention}: \begin{equation}\label{equation:AttentionScoring2} \def\scriptscriptstyle{\scriptscriptstyle} \setstackgap{L}{8pt} \defL{L} \stackunder{e_t}{\scriptscriptstyle 1 \times 1} = \stackunder{\bm w^T}{\scriptscriptstyle 1 \times d_w}\hspace{-3pt} \times \text{act}(\stackunder{\bm W_k}{\scriptscriptstyle d_w \times d_k} \hspace{-3pt}\times \hspace{-3pt}\stackunder{\bm k_t}{\scriptscriptstyle d_k \times 1} \hspace{-3pt}+\hspace{-3pt} \stackunder{\bm W_q}{\scriptscriptstyle d_w \times d_q}\hspace{-3pt}\times\hspace{-3pt} \stackunder{\bm q}{\scriptscriptstyle d_q \times 1} \hspace{-3pt}+\hspace{-3pt} \stackunder{\bm b}{\scriptscriptstyle d_w \times 1} \hspace{-5pt}), \end{equation} where $e_t \in \mathbb{R}^{1}$ is the attention score belonging to the $t$th word, $\bm w \in \mathbb{R}^{d_w}$, $\bm W_k \in \mathbb{R}^{d_w \times d_k}$, $\bm W_q \in \mathbb{R}^{d_w \times d_q}$, and $\bm b \in \mathbb{R}^{d_w}$ are trainable weight matrices and vectors, and $d_w$ is a predefined dimension parameter. Other types of score functions are often based on a product, such as the \textbf{multiplicative} \cite{luong2015effective}, \textbf{scaled-multiplicative} \cite{vaswani2017attention}, \textbf{general} \cite{luong2015effective}, and \textbf{activated general} \cite{ma2017ian} score functions. One can interpret attention scoring as determining which words contain the most important information with regard to determining the correct sentiment classification. The higher the score, the more important the information contained in the corresponding vector.
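As an illustration of the additive score function in Equation (\ref{equation:AttentionScoring2}), consider the following minimal sketch, assuming NumPy, randomly initialized (untrained) parameters, and the hidden states themselves as keys.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_h, d_q, d_w, n_x = 32, 32, 24, 10

# Untrained parameters of the additive score function.
w = 0.1 * rng.normal(size=d_w)
W_k = 0.1 * rng.normal(size=(d_w, d_h))
W_q = 0.1 * rng.normal(size=(d_w, d_q))
b = np.zeros(d_w)

H = rng.normal(size=(d_h, n_x))   # hidden states h_1..h_{n_x} as columns
q = rng.normal(size=d_q)          # query, e.g. an aspect representation

# One scalar score e_t per word; here the keys k_t are the h_t themselves.
e = np.array([w @ np.tanh(W_k @ H[:, t] + W_q @ q + b) for t in range(n_x)])
print(e)   # attention scores, to be normalized by an alignment function
\end{verbatim}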
As can be seen from the scoring calculation, the query $\bm q$ is essential in determining what information is important. The simplest method of defining this query is by setting it as a constant vector. This allows the model to learn a general way of defining which words to focus on. However, since aspects can differ widely in their characteristics, a general query may not be flexible enough. A more popular technique is to use a vector representation of the aspect as the query. As seen in \cite{yang2017attention}, one can take the hidden vector representation of the target (a pooled version of $\bm h_1$ and $\bm h_2$ in Figure \ref{fig:AB-RNN}) as the query. In this case, the attention score calculation determines which information is most important with regard to the aspect itself, which enhances the model's ability to focus on the most important words in the sequence. This is especially important when a sentence contains multiple aspects. The purpose of the attention scores is to be used as weights in a weighted average calculation. However, the weights are required to add up to one for this purpose. As such, an alignment function align$()$ is used on the attention scores $e_1, \dots, e_{n_x} \in \mathbb{R}^{1}$: \begin{equation}\label{equation:AttentionAlign} \def\scriptscriptstyle{\scriptscriptstyle} \setstackgap{L}{8pt} \defL{L} \stackunder{a_t}{\scriptscriptstyle 1 \times 1} = \text{align}(\stackunder{e_t}{\scriptscriptstyle 1 \times 1}; \stackunder{\bm e}{\scriptscriptstyle n_x \times 1}), \end{equation} where $a_t \in \mathbb{R}^{1}$ is the attention weight corresponding to the $t$th word, and we define the vector $\bm e \in \mathbb{R}^{n_x}$ as the vector containing all attention scores. A highly popular alignment function is known as \textbf{soft} or \textbf{global} alignment, which applies a softmax function to the attention scores. Other types of alignment are, for example, the \textbf{hard} \cite{xu2015show} and \textbf{local} \cite{luong2015effective} alignment functions, which provide a more focused attention alignment. Additionally, in \cite{he2018effective}, the \textbf{syntax-based} alignment function is introduced that is specifically designed for ABSC. It employs global alignment but scales the attention weights according to how far the corresponding words are from the target in a dependency tree. The attention weights $a_1, \dots, a_{n_x} \in \mathbb{R}^{1}$ are used in combination with the value vectors $\bm v_1, \dots, \bm v_{n_x} \in \mathbb{R}^{d_v}$ to calculate a so-called \textbf{context} vector $\bm c \in \mathbb{R}^{d_v}$. For clarity, in Figure \ref{fig:AB-RNN} the attention weights are represented by the vector $\bm a \in \mathbb{R}^{n_x}$. The context vector can be used in combination with the final hidden state in the output layer to obtain a label prediction, and is calculated as follows: \begin{equation}\label{equation:ContextVector} \def\scriptscriptstyle{\scriptscriptstyle} \setstackgap{L}{8pt} \defL{L} \stackunder{\bm c}{\scriptscriptstyle d_v \times 1} = \sum^{n_x}_{t=1} \stackunder{a_{t}}{\scriptscriptstyle 1 \times 1} \times \stackunder{\bm v_t}{\scriptscriptstyle d_v \times 1}. \end{equation} This context vector summarizes the most important information contained in the context corresponding to the aspect. Since the attention mechanism can pick and choose information from any of the hidden states, information can be retrieved independently of where it is positioned in the sequence.
This significantly improves the model's ability to capture long-term dependencies. In addition, the attention weights are a direct indication of the importance of certain words. As such, attention models offer some explainability, since one can analyse which words the model tends to focus on. This provides some interpretability for models that are typically considered black-box models, although the usefulness of interpreting attention weights is controversial \cite{jain2019attention, wiegreffe-pinter-2019-attention, mohankumar-etal-2020-towards}. The example we used incorporates the hidden states from an RNN model into the attention calculation. However, attention can also be applied in other models, such as a CNN. In the case of a CNN, the sets of feature maps that are obtained after the convolutional layer can be used as input for the attention mechanism. The rest of the attention calculation remains the same. The score and alignment functions are basic components that are required for every attention model. While the basic attention model we discussed can be used for the task of ABSC, extensions are often implemented for further improvements or to process other types of inputs. For example, \textbf{co-attention} \cite{NIPS2016_6202} can be used to calculate attention between multiple feature matrices if there are multiple model inputs. This can be the case for ABSC if the input text is split up into parts, such as the target words and the words left and right of the target. A similar approach is called \textbf{rotatory} attention \cite{zheng2018left}, which rotates attention between the inputs. A more general attention extension is \textbf{multi-head} attention \cite{vaswani2017attention}, which employs so-called attention `heads' to calculate multiple different types of attention in parallel. In contrast, \textbf{multi-hop} attention \cite{tran2018multihop} is an extension that allows for multiple attention calculations in sequence. Multi-head and multi-hop attention are vital parts of the \textbf{transformer} model \cite{vaswani2017attention}. This model is based purely on the attention mechanism and does not use a separate base model like an RNN or CNN. The transformer uses \textbf{self-attention} to calculate attention between the feature vectors and uses the extracted information to iteratively transform the input. Typical of this model type is the use of a scaled-multiplicative score function and a global alignment function, since these functions are computationally highly efficient. The transformer model has been proven to be highly effective for many tasks, including ABSC. Gao et al. \cite{gao2019bert} present a transformer model with ABSC extensions for properly processing the target. A transformer model incorporating the previously mentioned co-attention mechanism is proposed in \cite{XU2020135}. Zeng et al. \cite{zeng2019lcf} implement two transformer models in parallel to allow the model to process the global context and the local context concerning the aspect separately. All of these transformer models and extensions achieve significant improvements over previous models, including other attention models. When only considering predictive performance, transformer models tend to dominate most ABSC tasks and other modern language processing benchmarks, such as GLUE \cite{wang-etal-2018-glue} and SuperGLUE \cite{NEURIPS2019_4496bf24}. Yet, these models require large amounts of training data and computational resources to train from scratch, even compared to other deep learning models.
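To make the core computation concrete, the following is a minimal sketch of single-head self-attention with a scaled-multiplicative score function and global (softmax) alignment, assuming NumPy and randomly initialized projection matrices; it is an illustration of the mechanism, not an implementation of any cited model.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_h, d_k, n_x = 32, 16, 10

X = rng.normal(size=(n_x, d_h))           # one row per word representation
W_Q = 0.1 * rng.normal(size=(d_h, d_k))   # query projection
W_K = 0.1 * rng.normal(size=(d_h, d_k))   # key projection
W_V = 0.1 * rng.normal(size=(d_h, d_k))   # value projection

Q, K, V = X @ W_Q, X @ W_K, X @ W_V

# Scaled-multiplicative scores followed by global (softmax) alignment.
scores = Q @ K.T / np.sqrt(d_k)                  # (n_x, n_x)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # each row sums to one

# Each output row is a weighted average of the value vectors.
out = weights @ V                                # (n_x, d_k)
print(out.shape)
\end{verbatim}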
A portion of current research is focused on alleviating the resources necessary for training transformer models via more efficient architectures, such as the Linformer \cite{wang2020linformer}. Nevertheless, while transformer models can produce impressive results, they may not be suitable for problems where the required resources are not available. \subsection{Hybrid}\label{sec:MethodsHybrid} When limited data is available, machine learning models may not provide satisfactory results \cite{severyn2015twitter}. ABSA is a domain where datasets are typically considered to be small in size \cite{he-etal-2018-exploiting} (see Table \ref{Table:Data}), which makes training large language models a significant challenge. This problem is even more pronounced when attempting ABSC in some of its more niche sub-domains, such as sentiment analysis of texts for less popular products or topics. The problem is compounded by the fact that there is a vast number of different languages. English is a language for which there typically are extensive resources available. Yet, for other languages, like Arabic or Chinese, new datasets are slowly being introduced (see SemEval-2016 \cite{pontiki2016semeval}), but the amount of data is still limited \cite{do2019deep}. As such, large language models for ABSC can be hard to train. A solution to this problem is to incorporate additional knowledge from knowledge bases to compensate for the lack of data. We refer to models that incorporate both machine learning and knowledge bases as hybrid models. There are various ways in which knowledge bases can be combined with machine learning. A common approach is to use a knowledge base to define features that the machine learning algorithm uses to predict the sentiment. Another is to implement both a knowledge-based classifier and a machine learning classifier in sequence or in parallel. In this section, we discuss these various approaches and categorize them using three categories: dictionary-enhanced, ontology-enhanced, and discourse-enhanced models. The performances of different hybrid models are presented in Table \ref{Table:Hybrid}. \subsubsection{Dictionary-Enhanced Machine Learning}\label{sec:MethodsDictionaryEnhanced} Dictionary-enhanced machine learning methods are some of the more common hybrid approaches. Dictionaries are rich knowledge bases that can easily be used to define features for a machine learning algorithm. In \cite{varghese2013svm}, sentiment scores from the \textit{SentiWordNet} \cite{baccianella2010sentiwordnet} dictionary are included as features to improve the performance of an SVM model. In both \cite{devi2016feature} and \cite{7975227}, the same \textit{SentiWordNet}-based technique is used. Additionally, \textit{NRC-Canada} \cite{kiritchenko2014nrc}, the top-performing model of the SemEval-2014 ABSC task, is based on a similar concept. However, the authors generate their own sentiment lexicons from unlabeled review corpora. In \cite{vo2015target}, a different approach is presented. First, word embeddings are defined using \textit{word2vec} \cite{mikolov2017advances}. Then, embeddings for words that are not in the sentiment lexicons are filtered out. As such, only words that contain useful sentiment information are kept. Since an SVM model is used for classification, the authors employ several pooling functions to summarize the dense feature vectors into features that can be used by the SVM model. \begin{table*} \caption{Overview of prominent hybrid ABSC models and their reported performances.
``AB-DL = Attention-Based Deep Learning"} \label{Table:Hybrid} \begin{center} \resizebox{430pt}{!}{% \begin{tabular}{p{115pt}|p{112pt}|p{130pt}|p{40pt}|p{40pt}|p{28pt}|p{20pt}} \textbf{Reference} & \textbf{Category} & \textbf{Domain(s)} & \textbf{Accuracy} & \textbf{Precision} & \textbf{Recall} & \textbf{F$_1$} \\ \hline Varghese and Jayasree (2013) \cite{varghese2013svm} & Dictionary-Enhanced SVM & Reviews (Movies) \cite{epinions}\newline Reviews (Music) \cite{pitchfork} & 0.853\newline 0.865 & -\newline - & -\newline - & -\newline - \\ \hline Kiritchenko et al. (2014) \cite{kiritchenko2014nrc} & Dictionary-Enhanced SVM & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.705\newline 0.802 & -\newline - & -\newline - & -\newline - \\ \hline Vo et al. (2015) \cite{vo2015target} & Dictionary-Enhanced SVM & Social Media (Various) \cite{dong2014adaptive} & 0.711 & - & - & 0.699 \\ \hline Devi et al. (2016) \cite{devi2016feature} & Dictionary-Enhanced SVM & Reviews (Electronics) \cite{amazon, ebay, flipkart} & 0.881 & 0.872 & 0.898 & 0.884 \\ \hline Fachrina and Widyantoro (2017) \cite{8285850} & Dictionary-Enhanced SVM & Reviews (Various) \cite{8285850} & - & - & - & 0.858 \\ \hline Akhtar et al. (2016) \cite{akhtar-etal-2016-hybrid} & Dictionary-Enhanced CNN & Reviews (Various) \cite{akhtar2016aspect_h} \newline Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.660\newline 0.680\newline 0.772 & -\newline -\newline - & -\newline -\newline - & -\newline -\newline - \\ \hline Ma et al. (2018) \cite{ma2018sentic} & Dictionary-Enhanced RNN & Reviews (Restaurants) \cite{pontiki2015semeval}\newline Social Media (Neighborhoods) \cite{saeidi-etal-2016-sentihood} & 0.765 \newline 0.893 & -\newline - & -\newline - & -\newline - \\ \hline Ilmania et al. (2018) \cite{8629181} & Dictionary-Enhanced RNN & Reviews (Various) \cite{8285850} & - & - & - & 0.848 \\ \hline Bao et al. (2019) \cite{bao-etal-2019-attention} & Dictionary-Enhanced AB-DL & Reviews (Restaurants) \cite{pontiki-etal-2014-semeval} & 0.829 & - & - & - \\ \hline Schouten et al. (2017) \cite{schouten2017ontology} & Ontology-Enhanced SVM & Reviews (Restaurants) \cite{pontiki2016semeval} & - & - & - & 0.753 \\ \hline De Heij et al. (2017) \cite{10.1007/978-3-319-68786-5_27} & Ontology-Enhanced SVM & Reviews (Restaurants) \cite{pontiki2015semeval} & - & - & - & 0.808 \\ \hline De Kok et al. (2018) \cite{de2018aggregated} & Ontology-Enhanced SVM & Reviews (Restaurants) \cite{pontiki2016semeval} & - & - & - & 0.812 \\ \hline Schouten and Frasincar (2018) \cite{schouten2018ontology} & Ontology-Enhanced SVM & Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} & 0.842 & - & - & - \\ \hline García-Díaz et al. (2020) \cite{GARCIADIAZ2020641} & Ontology-Enhanced RNN & Social Media (Diseases) \cite{GARCIADIAZ2020641} & 0.542 & - & - & - \\ \hline Kumar et al. 
(2020) \cite{kumar2020aspect} & Ontology-Enhanced CNN & Reviews (Hotels) \cite{booking} & 0.885 & 0.943 & 0.856 & 0.860 \\ \hline Me{\v{s}}kel{\.e} and Frasincar (2019) \cite{mevskele2019aldona} & Ontology-Enhanced AB-DL & Reviews (Restaurants) \cite{pontiki2016semeval}\newline Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} \newline (reimplemented in \cite{mevskele2020aldonar}) & 0.863 \newline 0.834 & -\newline - & -\newline - & -\newline - \\ \hline Wallaart and Frasincar (2019) \cite{wallaart2019hybrid} & Ontology-Enhanced AB-DL & Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} & 0.843 & - & - & - \\ \hline Trusca et al. (2020) \cite{trusca2020hybrid} & Ontology-Enhanced AB-DL & Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} & 0.844 & - & - & - \\ \hline Me{\v{s}}kel{\.e} and Frasincar (2020) \cite{mevskele2020aldonar} & Ontology-Enhanced AB-DL & Reviews (Restaurants) \cite{ pontiki2016semeval, pontiki2015semeval} & 0.855 & - & - & - \\ \hline Wang et al. (2018) \cite{ijcai2018-617} & Discourse-Enhanced AB-DL & Reviews (Electronics) \cite{pontiki2015semeval}\newline Reviews (Restaurants) \cite{pontiki2015semeval} & 0.816\newline 0.809 & -\newline - & -\newline - & 0.667\newline 0.685 \\ \hline Wu et al. (2019) \cite{wu2019aspect} & Discourse-Enhanced AB-DL & Reviews (Electronics) \cite{pontiki-etal-2014-semeval}\newline Reviews (Restaurants) \cite{pontiki2016semeval, pontiki2015semeval, pontiki-etal-2014-semeval}\newline Social Media (Various) \cite{dong2014adaptive} & 0.734\newline 0.828\newline 0.698 & -\newline -\newline - & -\newline -\newline - & 0.691\newline 0.642\newline 0.675 \end{tabular} } \end{center} \end{table*} In \cite{bao-etal-2019-attention}, it is argued that attention models typically overfit when trained using small datasets. As such, \cite{bao-etal-2019-attention} proposes an attention-based LSTM model that incorporates lexicon features to improve the flexibility and robustness of attention-based deep learning models when trained with insufficient data. In \cite{akhtar-etal-2016-hybrid}, a hybrid model is presented specifically for resource-poor languages, such as Hindi. The proposed model combines trained CNN features and lexicon features. It is shown to produce better performances than the tested baselines that do not utilize the knowledge bases. In \cite{8629181}, a Bi-GRU model is proposed for ABSC of reviews from an Indonesian online marketplace. It is shown that performance is improved by incorporating lexicon features into the model. Lexicons and other knowledge bases can enhance machine learning models, but machine learning can also improve the production of knowledge bases. An important example of this is \textit{SenticNet} \cite{10.1145/3340531.3412003}, which is a knowledge base that uses linguistic patterns, first-order logic, and deep learning to discover relationships between entities, concepts, and primitives. Such ensembles of symbolic and subsymbolic tools can be highly useful for sentiment analysis and ABSA. For example, \cite{ma2018sentic} proposed the Sentic LSTM model, which is an attention-based LSTM that fully incorporates sentic knowledge into the deep learning architecture. It is shown that this ensemble application of symbolic and subsymbolic AI produces better results than either approach separately.
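As a simple illustration of lexicon-based feature engineering of the kind described above, the following sketch appends lexicon-derived scores to a bag-of-words feature vector; the toy lexicon and the chosen summary features are purely illustrative and do not correspond to any of the cited works.

\begin{verbatim}
import numpy as np

# Toy sentiment lexicon: word -> polarity score in [-1, 1].
lexicon = {"good": 0.8, "great": 0.9, "bad": -0.7, "slow": -0.4}

def lexicon_features(words):
    """Summarize lexicon scores of the context words into a few features."""
    scores = [lexicon[w] for w in words if w in lexicon]
    if not scores:
        return np.zeros(3)
    return np.array([sum(scores), max(scores), min(scores)])

def enhance(bow_vector, words):
    """Append lexicon-derived features to a bag-of-words vector."""
    return np.concatenate([bow_vector, lexicon_features(words)])

words = ["the", "battery", "is", "good", "but", "slow"]
bow = np.zeros(50)        # placeholder bag-of-words vector
x = enhance(bow, words)   # enhanced feature vector for, e.g., an SVM
print(x.shape)            # (53,)
\end{verbatim}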
\subsubsection{Ontology-Enhanced Machine Learning}\label{sec:MethodsOntologyEnhanced} Ontologies can be used both as a knowledge base for sentiment information and to define a structure between concepts. In \cite{de2018aggregated}, a restaurant-specific ontology is used to define features for a review-level ABSC SVM model. The authors use both the concepts and the sentiment information from the ontology to define features. Essentially, a ``bag-of-concepts'' is defined that includes a binary feature for each concept in the ontology. However, the value of a concept feature depends on whether it is related to the aspect itself, meaning that implicit information in the text can be encoded into the feature vector. In addition, features are defined that count how often sentiment polarity words from the ontology occur in the text. A second approach for ontology-enhanced models is a two-step method that sequentially employs a knowledge-based method and a machine learning classifier. First, an ontology-based model is employed to attempt to classify the aspect. When a conflicting answer is found for the sentiment label or when there is no information available regarding the sentiment, a backup model is employed in the form of a machine learning classifier. In \cite{wallaart2019hybrid}, sentence-level ABSC is performed by implementing a multi-hop rotatory attention model as the backup algorithm after an ontology-based classification model. The results presented indicate that the two-step method performs better than either of the classifiers individually. In \cite{mevskele2019aldona}, this two-step approach is used with a lexicalised domain ontology followed by an attention model. The attention model combines a sentence-level content attention mechanism and a bidirectional context attention mechanism. In \cite{mevskele2020aldonar}, this model is further extended using \textit{BERT} \cite{devlin2018bert} word embeddings, additional regularization, and adjustments to the training process. In \cite{trusca2020hybrid}, the two-step method using an ontology and a multi-hop rotatory attention model is extended with a hierarchical attention mechanism and deep contextual word embeddings. In \cite{schouten2018ontology}, both the two-step method and ontology-based features are used for sentence-level ABSC. First, an ontology-based classifier is used to try to determine a sentiment classification for the aspect. If no definitive answer can be found, an SVM is employed that uses a bag-of-words feature vector enhanced with ontology sentiment features. Ontology-enhanced methods can prove useful when training language models for ABSC in niche domains. In \cite{10.1007/978-3-319-68786-5_27}, an ontology-enhanced SVM model is tested on the SemEval-2016 dataset. It is shown that a model with ontology features attains a significantly higher $F_1$-score than models that do not have those features. Moreover, the model with ontology features can obtain equal performance with less than 60\% of the training data. In \cite{schouten2017ontology}, it is shown that an ontology-enhanced model can be highly robust to changes in dataset size. For the task of aspect extraction, the performance of the ontology-enhanced SVM model proposed in \cite{schouten2017ontology} drops by less than 10\% with only 20\% of the original training data, while the performances of the base methods drop significantly. For the task of ABSC, all models proposed in \cite{schouten2017ontology} appear to be robust to dataset size changes.
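The two-step setup described above can be summarized in a short sketch; the ontology-based step below is a hypothetical stand-in (a toy word list rather than a real ontology reasoner), and the backup classifier is assumed to expose a scikit-learn-style predict interface.

\begin{verbatim}
from typing import Optional

def ontology_predict(aspect, context) -> Optional[str]:
    """Toy stand-in for an ontology-based classifier: returns a label,
    or None when there is no evidence or the evidence is conflicting."""
    positive = {"good", "great"}   # stand-in for ontology knowledge
    negative = {"bad", "slow"}
    pos = any(w in positive for w in context)
    neg = any(w in negative for w in context)
    if pos == neg:                 # no evidence, or conflicting evidence
        return None
    return "positive" if pos else "negative"

def two_step_classify(aspect, context, backup_model, features):
    """Step 1: ontology-based prediction; step 2: ML backup classifier."""
    label = ontology_predict(aspect, context)
    if label is not None:
        return label
    return backup_model.predict([features])[0]   # e.g., a trained SVM
\end{verbatim}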
Yet, this robustness can be explained by the fact that all methods are also enhanced using another knowledge base in the form of sentiment dictionaries. In \cite{GARCIADIAZ2020641}, an ontology-enhanced ABSC method is employed in the niche domain of infodemiology in the Spanish language. Aspects are extracted from Twitter posts and are classified using a Bi-LSTM model and an ontology of the infectious disease domain. \subsubsection{Discourse-Enhanced Machine Learning}\label{sec:MethodsDiscourseEnhanced} As discussed previously, discourse trees function similarly to ontologies in the sense that they do not inherently contain sentiment information. However, similarly to ontologies, the structure of discourse trees can still be useful to combine with a machine learning classifier. Wang et al. \cite{ijcai2018-617} use the structure of a discourse tree to determine how words are processed via an attention-based deep learning model for sentence-level ABSC. For each clause, a Bi-LSTM and an attention layer are used to produce a clause vector represented by the context vector output of the attention layer. The clause representations are then processed through another Bi-LSTM and attention layer to create a hierarchical attention structure. The resulting context vector is used for prediction. While the individual clauses defined by the discourse tree are incorporated in this model, the relational structure of the tree is not. In \cite{wu2019aspect}, the relations between discourse clauses are incorporated through the use of conjunction rules. First, each clause identified by the discourse tree is processed separately using Bi-LSTM layers. However, clause representations are produced simply by averaging. These clause representations are then passed to the output layer, to a Bi-LSTM layer that extracts the relations between clauses, and to a layer where conjunction rules are used to extract additional features. Conjunctions indicate how clauses are connected. Words such as \textit{``and"} are called coordinate conjunctions and often indicate a shared sentiment between the connected clauses. When words such as \textit{``but"} are used, an adversative conjunction is present that generally indicates an opposite sentiment. This information is used to selectively summarize the clause representations. Using these techniques, the rhetorical structure of the context can be incorporated into the model. \section{Related Topics}\label{sec:Related} \subsection{Aspect Detection \& Aggregation}\label{sec:RelatedDet&Agg} In Section 1, ABSA was explained to consist of three steps: detection, classification, and aggregation. This survey discusses the classification step, also known as ABSC. Yet, the three steps cannot be completely separated from each other, since these issues are not independent \cite{schouten2015survey}. For example, it may be important to consider the design of the classification and aggregation steps jointly, since information extracted during the classification step can be useful during the aggregation step. Namely, one can use features such as sentence importance to create a weighted average, which typically outperforms simple averages for aggregation \cite{basiri2020effect}. Similarly, information from the aspect detection step can be useful in the classification step. For example, one can use features extracted during the aspect extraction phase to predict the sentiments \cite{brun-etal-2014-xrce}.
Yet, models have also been proposed that fully jointly perform the detection and classification steps. For example, in \cite{zhuang2006movie}, a dictionary-based model using \textit{WordNet} \cite{miller1995wordnet} is proposed that extracts and pairs aspects and opinion words to find aspect-level sentiments. In \cite{schmitt-etal-2018-joint}, the detection and classification steps are completely fused in a single end-to-end LSTM model. The proposed model jointly learns the two tasks and produces improved performance for both tasks. In \cite{10.1145/3308558.3313750}, a capsule attention model is proposed that also jointly learns the detection and classification tasks in an end-to-end manner. This model produces state-of-the-art results for both tasks. \subsection{Sarcasm \& Thwarting}\label{sec:RelatedSarcasmThwarting} Thwarting and sarcasm are two complex linguistic phenomena that pose significant challenges for sentiment analysis models. Thwarting is the concept of building expectations, only to then contrast them. In other words, the overall sentiment of a document differs from the sentiment expressed throughout the majority of the text \cite{ramteke-etal-2013-detecting}. As such, methods that simply rely on aggregating the sentiment expressed by individual words can easily miss the greater context of the thwarted text. Additionally, developing a model for detecting thwarting is a difficult task due to the lack of available training data \cite{ramteke-etal-2013-detecting}. Detecting thwarting can also be considered highly similar to the difficult task of recognizing sarcasm \cite{ramteke-etal-2013-detecting}. Sarcasm is the act of deliberately ridiculing or questioning subjects by using language that is counter to its meaning \cite{joshi2018investigations}. It is particularly common in social media texts, where sarcasm analysis often also involves emoticons and hashtags \cite{maynard-greenwood-2014-cares}. Recognizing sarcasm in a text is a task that is often difficult even for humans \cite{maynard-greenwood-2014-cares} and requires significant world knowledge, which is difficult to include in most sentiment models \cite{ramteke-etal-2013-detecting}. The two problems of thwarting and sarcasm make sentiment analysis non-trivial \cite{annett2008comparison}. Yet, they have received little attention in the literature on ABSC. For ABSC, thwarting or sarcasm means that the sentiment expressed towards the aspect via the language used throughout the majority or the entirety of the record is opposite to the true sentiment. In the literature on general sentiment analysis, some attempts to address these problems have been proposed, although sentiment analysis is often considered a separate task from thwarting or sarcasm detection. For example, in \cite{ramteke-etal-2013-detecting}, a domain ontology of camera reviews is used in combination with an SVM model to identify thwarted reviews of products. In \cite{8949523}, a multi-head attention-based LSTM model is used to detect sarcasm in social media texts. Yet, thwarting and sarcasm can also be handled in the context of sentiment analysis. For example, in \cite{mishra-etal-2016-leveraging}, the problems of thwarting and sarcasm in sentiment analysis are addressed using cognitive features obtained by tracking the eye movements of human annotators. It is theorized that the cognitive processes used to recognize thwarting and sarcasm are related to the eye movements of readers.
It is shown that the gaze features significantly improve the performance of sentiment analysis classifiers, such as an SVM. Furthermore, it is specifically shown that the gaze features help address the problem of analysing complex linguistic structures, such as thwarting and sarcasm. In \cite{el-mahdaouy-etal-2021-deep}, the tasks of sentiment analysis and sarcasm detection are considered jointly via a multi-task model. A transformer model is used to identify both sentiment and sarcasm in Arabic tweets. It is shown that the proposed multi-task model outperforms its single-task counterparts. Such methods could also be used for ABSC. Cognitive features could be combined with aspect features to address thwarting and sarcasm for the task of ABSC. Similarly, the task of ABSC can be considered jointly with the task of sarcasm detection. Yet, more specialized ABSC datasets would be required for such problems. \subsection{Emotions}\label{sec:RelatedEmotions} Emotions are an interesting subject in conjunction with ABSC, since sentiment analysis and emotion analysis are two highly related topics. In sentiment analysis, we typically assign polarity labels or scores to texts, whereas in emotion analysis, a wide range of emotions (e.g., ``\textit{joy}'', ``\textit{sadness}'', ``\textit{anger}'') is considered \cite{10.1109/MIS.2016.31}. For example, in \cite{topal2016movie}, movie reviews are analysed and assigned emotion scores based on the dimensions of the hourglass of the emotions \cite{cambria2012hourglass}. Like sentiment analysis, emotion analysis can be done at multiple levels \cite{hakak2017emotion}. The equivalent of ABSC for emotion analysis is called aspect-based emotion analysis. For example, in \cite{polignano2018emotion}, emotions towards aspects in social media posts are classified using word embedding centroids, lexicons, and emoticons. More extensive work on the usage of emoticons in sentiment analysis is available in \cite{10.1145/2480362.2480498, hogenboom2015exploiting}. In \cite{suciati2020aspect}, several models are tested to detect emotions towards aspects in restaurant reviews. Information about emotions can also help with the task of ABSC. For example, in \cite{kumar2021sentic}, features from the \textit{SenticNet} \cite{10.1145/3340531.3412003} knowledge base are used to enhance a model for ABSA. In \cite{ma2018sentic}, an attention-based LSTM model is proposed that incorporates emotion information from the \textit{AffectiveSpace} knowledge base \cite{cambria2015affectivespace} for improved performance in ABSC tasks. A similar technique is used in \cite{ma2018targeted}. Additionally, emotion analysis and sentiment analysis can also be performed jointly. For example, in \cite{7933922}, a model is proposed that can extract aspect-level affective knowledge that includes sentiment polarities and emotion categories. In \cite{wang2020multi}, an extensive framework for multi-level sensing of sentiments and emotions is proposed that incorporates knowledge bases and sarcasm handling. \section{Conclusion}\label{sec:Conclusion} In this survey, we have provided an overview of the current state-of-the-art models for ABSC. We have explained the process of ABSC according to its three main phases: input representation, aspect sentiment classification, and output evaluation. The input representation phase involves representing a body of text by a numeric vector or matrix such that a classification model can identify the correct polarity label of an aspect.
In the output evaluation phase, the quality of these polarity label predictions is assessed using performance measures. The quality of the predictions is determined by the architecture of the classification model. We have discussed a variety of state-of-the-art ABSC models using a proposed taxonomy and summarizing tables that serve as overviews of model performances. These ABSC models have been discussed and compared using intuitive explanations, technical details, and reported performances. We have also discussed a variety of important topics related to ABSC. ABSC is a relatively new task that has quickly gained popularity and is rapidly changing. A noticeable evolution in the field of ABSC concerns the datasets used. The authors of the early ABSC works often scraped and compiled their own datasets from the Web. While this practice makes sense given the vast amounts of public reviews available online, it makes performance comparisons difficult because most models are tested on different datasets. It was only after researchers started adopting the Twitter dataset from \cite{dong2014adaptive} and the review datasets from the SemEval challenge \cite{pontiki2016semeval, pontiki2015semeval, pontiki-etal-2014-semeval} that actual comparative analyses became more feasible. While this is a significant development for the domain of ABSC, a consequence is that ABSC models are mostly only implemented for restaurant reviews, electronics reviews, and Twitter data. Other types of data, such as hotel reviews, are mostly ignored even though the SemEval-2016 datasets \cite{pontiki2016semeval} contain a set of hotel reviews. To further advance the field of ABSC, it is desirable to test models in other domains. This requires more high-quality public datasets to be made available and adopted. Examples are the FiQA-2018 \cite{10.1145/3184558.3192301} and SentiHood \cite{saeidi-etal-2016-sentihood} datasets. Another example is the book review dataset produced by {\'A}lvarez-L{\'o}pez et al. \cite{alvarez2017book}. This dataset contains a subset of book reviews taken from the INEX Amazon/LibraryThing book corpus \cite{koolen2016overview} that was hand-annotated at the aspect level. Yet, this dataset has rarely been adopted by other researchers. New datasets are also required for languages other than English. As discussed, English is a language for which there is a relatively sizeable number of datasets available for the task of ABSC. This is not the case for most other languages, which makes training language models a difficult task. If it is too costly to obtain more training data, simpler models like SVMs are typically the better option. However, the incorporation of knowledge bases may help compensate for the lack of data, as evidenced by the hybrid models discussed in Subsection \ref{sec:MethodsHybrid}. As has been shown in various works \cite{ijcai2018-617, mevskele2019aldona, trusca2020hybrid}, knowledge bases are effective in enhancing state-of-the-art models to achieve higher performances. We believe that the further exploration of knowledge-enhanced methods will help improve on the current state-of-the-art. However, the incorporation of knowledge bases in current research is mostly still relatively basic. For example, in \cite{ijcai2018-617}, only the clauses extracted by the discourse tree are incorporated in the model.
The model presented in \cite{wu2019aspect} improves upon this by including conjunction rules, but this solution fails to exploit the many other discourse relations that can contain useful information for sentiment classification. Similarly, we have mainly seen domain ontologies in hybrid models as part of a two-step method where the ontology is separate from the machine learning model. Yet, ontologies contain many concepts and relations that can provide important features for the machine learning model. As such, we would advocate for further incorporation of knowledge bases and their structures. Furthermore, while we have only discussed three types of knowledge bases (dictionaries, ontologies, and discourse trees), we expect new types of knowledge bases to be explored and incorporated for ABSC. Additionally, improvements can be made in the construction of knowledge bases. This is exemplified by the recent research on the semi-automatic construction of domain ontologies for ABSC \cite{schouten2018ontology, zhuang2019soba, dera2020sasobus, tenhaaf2021websoba}. Another problem is that proper knowledge bases for resource-poor languages may be scarce as well \cite{akhtar2016aspect_h}. As such, the development of new knowledge resources other than labeled ABSC datasets is an important next step. Often, the problem is not a lack of data, but specifically a lack of labeled data. Unsupervised or weakly supervised methods can be highly useful in situations where labeled data is too expensive to obtain. For example, in \cite{10.1145/3397271.3401179}, a weakly supervised model is proposed for joint aspect extraction and sentiment classification. The proposed model involves a sentiment dictionary learned via an auto-encoder using attention. Furthermore, weakly supervised systems can even be used to produce new training data via labeling mechanisms based on expert knowledge \cite{ratner2017snorkel}. An alternative solution to the problem of a lack of data is the use of cross-lingual and multi-lingual models. Knowledge from a language with extensive resources available can be transferred to models for other languages to compensate for the lack of data \cite{barnes-etal-2016-exploring}. For example, in \cite{akhtar-etal-2018-solving}, an attempt is made to solve the problem of data sparsity in French and Hindi datasets via the use of a deep learning model built on top of bilingual word embeddings. These bilingual word embeddings were produced using English-French and English-Hindi parallel corpora created via standard machine translation methods. Training models that are more language-agnostic is an important step for language models in general. Another important step is the development of domain-agnostic models. The problem of data scarcity for ABSC can be alleviated by transferring knowledge from other language domains where data is more readily available in large quantities \cite{xu-etal-2019-bert, sun2019bert}. Pre-training large language models like BERT on large language datasets from other domains and then fine-tuning the model parameters using a small domain dataset is a popular technique to handle this problem. This approach can become increasingly useful as larger and more general language models continue to emerge \cite{brown2020language}. Early ABSC approaches were systems based almost purely on knowledge bases. As larger labeled datasets became available, machine learning models such as SVMs became the standard for ABSC.
Soon after, deep learning methods started becoming more popular, but often performed on par with the machine learning methods that used high-quality handcrafted features. Yet, with the introduction of attention, deep learning methods started rapidly outpacing other approaches. Until a new revolutionary innovation comes along, we foresee attention-based deep learning models being the future of ABSC. New attention models are rapidly being developed in and outside the field of ABSC. For example, multi-dimensional attention \cite{shen2018disan} is a general extension of the attention mechanism that allows for a more fine-grained attention computation. Yet, it has barely been explored for ABSA. Similarly, while multi-head attention is a general attention extension, it is typically only used in transformer-based architectures \cite{vaswani2017attention}. Indeed, the transformer model is highly successful, producing state-of-the-art performances in ABSC \cite{gao2019bert, xu-etal-2019-bert, Karimi2020AdversarialTF, XU2020135, zeng2019lcf}. Transformer models will undoubtedly remain relevant as new transformer-based architectures are proposed that can be used for ABSC, such as the \textit{transformer-XL} \cite{DBLP:conf/acl/DaiYYCLS19} and the \textit{reformer} model \cite{kitaev2020reformer}. Attention models provide an inherent type of interpretability via the attention weights, which is an important property that is missing from many black-box algorithms like deep learning models. Another example of more explainable models is the ensembles of symbolic and subsymbolic AI. Models like the Sentic LSTM \cite{ma2018sentic} may be the start of new explainable models for ABSA \cite{susanto2021ten}. Not only can the field of ABSC evolve through advancements in models and datasets, but also via research on the applications of ABSC. For example, most ABSC methods focus either on implicit or explicit aspects. Yet, texts can often contain multiple implicit and explicit aspects \cite{dosoula2016sentiment}. This poses a significant challenge for real-world applications of ABSA methods. As such, further research on methods that can handle both implicit and explicit aspects is required. Furthermore, when considering real-world applications of ABSA, users are typically interested in the sentiment expressed towards aspects aggregated over a review or sets of reviews. Yet, most ABSC methods are focused on sentence-level ABSC, after which the sentiment is aggregated at the review level. However, it has been shown that pure review-level ABSC can outperform sentence-level ABSC with aggregation \cite{de2018aggregated}. As such, more research on the application of review-level ABSC can also help move the field forward. An interesting new application for ABSC is the implementation of ABSA-based search engines. \textit{Smith} \cite{choi2015smith} is a specialized opinion-based search engine that returns restaurants based on sentiments expressed towards certain aspects in online reviews. Such specialized search engines and other new applications will drive the field of ABSC forward in the future. Another direction that is to be explored stems from the manner in which the task of ABSC is formulated. Currently, the task of ABSC is generally defined in the absence of a time element, meaning that sentiments are considered to be static over time. However, sentiments towards products are known to change over time \cite{moe2012online}. This concept has been considered in some sentiment analysis research.
For example, in \cite{wang2012system}, the overall sentiment expressed toward the presidential candidates in the 2012 U.S. election is tracked over time based on Twitter posts. Nonetheless, ABSA over time is an interesting problem that has not seen much attention, partly because it requires new datasets to be produced that are suitable for this task. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec_intro} Understanding low-metallicity astrochemistry is crucial to unveil chemical processes in the past universe, where the metallicity was significantly lower than in present-day galaxies. Observations of star-forming regions in nearby low-metallicity galaxies and comparative studies of their chemical compositions with Galactic counterparts play an important role for this purpose. Hot molecular cores are one of the early stages of star formation and they play a key role in the chemical processing of interstellar molecules, especially for complex molecular species. Physically, hot cores are defined as having small source size ($\leq$0.1 pc), high density ($\geq$10$^6$ cm$^{-3}$), and high gas/dust temperature ($\geq$100 K) \citep[e.g.,][]{vanD98,Kur00,vdT04}. Characteristic chemistry in hot cores starts from sublimation of ice mantles by stellar radiation and/or shock. This leads to the enrichment of gas-phase molecules, and parental species such as methanol (CH$_3$OH) and ammonia (NH$_3$) evolve into larger complex organic molecules (COMs) in warm and dense circumstellar environments \citep[e.g.,][]{NM04,Gar06,Her09,Bal15}. Grain surface chemistry also contributes to the formation of COMs upon heating of ice mantles. Consequently, hot cores show rich spectral lines in the radio regime. Detailed studies of hot core chemistry are thus crucial to understand chemical processes triggered by star-formation activities. The Large Magellanic Cloud (LMC) is an excellent target to study interstellar/circumstellar chemistry at low metallicity, thanks to its proximity \citep[49.97 $\pm$ 1.11 kpc,][]{Pie13} and the decreased metallicity environment \citep[$\sim$1/2--1/3 of the solar metallicity; e.g., ][]{Rus92, Duf82, Wes90, And02, Rol02}. The low dust-to-gas ratio makes the interstellar radiation field less attenuated, and thus photoprocessing of the interstellar medium could be more effective in the LMC than in our Galaxy. The environmental differences caused by the decreased metallicity would lead to a different chemical history of star- and planet-forming regions in the LMC and other low-metallicity galaxies. Hot core chemistry in the LMC should provide us with useful information to understand chemical complexity in the past metal-poor universe. Chemical compositions of interstellar molecules in the LMC have been studied extensively. Molecular-cloud-scale chemistry ($\lesssim$10 pc) has been investigated by radio single-dish observations \citep[e.g.,][]{Joh94, Chi97, Hei99, Wan09, Par14, Par16, Nis16, Tan17}. Interferometry observations in millimeter have probed distributions of dense molecular gas at a clump scale \citep[a few pc; e.g.,][]{Won06, Sea12, And14}. The Atacama Large Millimeter/submillimeter Array (ALMA) has provided us with an unprecedented sensitivity and spatial resolution to study physical properties of dense molecular gas around young stellar objects (YSOs) at a subparsec scale \citep[e.g.,][]{Ind13, Fuk15, Sai17, Nay18}. For solid state molecules, compositions of ice mantles have been probed by infrared spectroscopic observations towards embedded YSOs \citep[e.g.,][]{vanL05, ST, ST10, ST16, Oli09, Oli11, Sea11}. Chemistry of hot cores at low metallicity is now emerging with discoveries of extragalactic hot cores in the LMC with ALMA \citep{ST16b, Sew18}. The formation of organic molecules in low-metallicity environments is one of the important issues in recent astrochemical studies of low-metallicity star-forming regions.
\citet{ST16b} reported that organic molecules such as CH$_3$OH, H$_2$CO, and HNCO towards a hot molecular core in the LMC (ST11) are underabundant by 1--3 orders of magnitude compared to Galactic hot cores. On the other hand, \citet{Sew18} reported the detection of CH$_3$OH and even larger COMs toward two other hot molecular cores in the LMC (N113 A1 and B3). In contrast to the ST11 hot core, they found that the molecular abundances of COMs in the N113 hot cores scale with the metallicity of the LMC and are comparable to those found at the lower end of the range for Galactic hot cores. Because of the limited number of current samples and the limited frequency coverage, chemical processes to form organic molecules in low-metallicity hot cores are still an open question. Besides those observational approaches, astrochemical simulations of low-metallicity hot core chemistry are presented in \citet{Bay08,Ach18}. Observational efforts to identify and analyze further low-metallicity hot cores are important to constrain various uncertainties involved in astrochemical models. In this paper, we report the results of high spatial resolution submillimeter observations with ALMA towards a high-mass YSO in the LMC, and present the discovery of a new hot molecular core. Section \ref{sec_tarobsred} describes the details of the target source, observations, and data reduction. The obtained molecular line spectra and images are described in Section \ref{sec_res}. Derivation of physical quantities and molecular abundances from the present data is described in Section \ref{sec_ana}. Properties of the observed hot core and the comparison of the molecular abundances with those of other LMC and Galactic hot cores are discussed in Section \ref{sec_disc}, where astrochemical simulations of low-metallicity hot cores are also presented. Distributions of molecules in a rotating protostellar envelope and an outflow cavity, as well as isotope abundances of sulfur in the source, are also discussed in Section \ref{sec_disc}. The conclusions are given in Section \ref{sec_sum}. \begin{deluxetable*}{ l c c c c c c c c c} \tablecaption{Observation summary \label{tab_Obs}} \tablewidth{0pt} \tabletypesize{\footnotesize} \tablehead{ \colhead{} & \colhead{Observation} & \colhead{On-source} & \colhead{Mean} & \colhead{Number} & \multicolumn{2}{c}{Baseline} & \colhead{} & \colhead{} & \colhead{Channel} \\ \cline{6-7} \colhead{} & \colhead{Date} & \colhead{Time} & \colhead{PWV\tablenotemark{a}} & \colhead{of} & \colhead{Min} & \colhead{Max} & \colhead{Beam size\tablenotemark{b}} & \colhead{MRS\tablenotemark{c}} & \colhead{Spacing} \\ \colhead{} & \colhead{} & \colhead{(min)} & \colhead{(mm)} & \colhead{Antennas} & \colhead{(m)} & \colhead{(m)} & \colhead{($\arcsec$ $\times$ $\arcsec$)} & \colhead{($\arcsec$)} & \colhead{} } \startdata Band 6 & 2016 Nov 30 & 16.1 & 1.9--2.5 & 44 & 15.1 & 704.1 & 0.54 $\times$ 0.40 & 3.6 & 0.98 MHz \\ (250 GHz) & (Cycle 4) & & & & \multicolumn{2}{c}{(C40-4)} & & & (1.2 km s$^{-1}$) \\ Band 7 & 2018 Dec 4 & 35.8 & 0.5--0.6 & 46 & 15.1 & 783.5 & 0.37 $\times$ 0.32 & 3.3 & 0.98 MHz \\ (350 GHz) & (Cycle 6) & & & & \multicolumn{2}{c}{(C43-4)} & & & (0.85 km s$^{-1}$) \\ \enddata \tablecomments{ $^a$Precipitable water vapor. $^b$The average beam size achieved by TCLEAN with the Briggs weighting and the robustness parameter of 0.5. Note that we use a common circular restoring beam size of 0$\farcs$40 for Band 6 and 7 data to construct the final images. $^c$Maximum Recoverable Scale.
} \end{deluxetable*} \section{Target, observations, and data reduction} \label{sec_tarobsred} \subsection{Target} \label{sec_tar} The target of the present ALMA observations is an infrared source, IRAS 05195-6911 or ST16 (hereafter ST16), located near the N119 star-forming region in the LMC. Previous infrared spectroscopic studies have classified the source as an embedded high-mass YSO \citep{Sea09,ST16}. A spectral energy distribution (SED) of the source is shown in Figure \ref{sed} \citep[data are collected from available databases and the literature, including][]{Mei06, Mei13, Kat07, ST16, Kem10}. The bolometric luminosity of the source is estimated to be 3.1 $\times$ 10$^5$ L$_{\sun}$ by integrating the SED from 1 $\mu$m to 1200 $\mu$m. \begin{figure}[tp] \begin{center} \includegraphics[width=8.5cm]{f1.eps} \caption{ The spectral energy distribution of the observed high-mass young stellar object, ST16. The plotted data are based on IRSF/SIRIUS JHK$_\mathrm{s}$ photometry \citep[pluses, black,][]{Kat07}, {\it AKARI}/IRC spectroscopy \citep[solid line, blue,][]{ST10}, VLT/ISAAC spectroscopy \citep[solid line, light blue,][]{ST16}, {\it Spitzer}/IRAC and MIPS photometry \citep[open diamonds, light green,][]{Mei06}, {\it Spitzer}/MIPS spectroscopy \citep[solid line, green,][]{Kem10}, {\it Herschel}/PACS and SPIRE photometry \citep[filled diamonds, orange,][]{Mei13}, and ALMA 870 $\mu$m and 1200 $\mu$m continuum measurements obtained in this work (filled star, red). } \label{sed} \end{center} \end{figure} \subsection{Observations} \label{sec_obs} Observations were carried out with ALMA as a part of Cycle 4 (2016.1.00394.S) and Cycle 6 (2018.1.01366.S) programs (PI T. Shimonishi). A summary of the present observations is shown in Table \ref{tab_Obs}. The target high-mass YSO is located at RA = 05$^\mathrm{h}$19$^\mathrm{m}$12$\fs$31 and Dec = -69$^\circ$9$\arcmin$7$\farcs$3 (ICRS), based on the {\it Spitzer} SAGE infrared catalog \citep{Mei06}. The source's positional accuracy is about 0.3$\arcsec$. The pointing center of antennas is RA = 05$^\mathrm{h}$19$^\mathrm{m}$12$\fs$30 and Dec = -69$^\circ$9$\arcmin$6$\farcs$8 (ICRS), which roughly corresponds to the infrared center of the target. The total on-source integration time is 16.1 minutes for Band 6 data and 35.8 minutes for Band 7. Flux, bandpass, and phase calibrators are J0519-4546, J0635-7516, and J0526-6749 for Band 6, and J0519-4546, J0519-4546, and J0529-7245 for Band 7, respectively. Four spectral windows are used to cover the sky frequencies of 241.25--243.12, 243.61--245.48, 256.75--258.62, and 258.60--260.48 GHz for Band 6 and 336.97--338.84, 338.77--340.64, 348.85--350.72, and 350.65--352.53 GHz for Band 7. The channel spacing is 0.98 MHz, which corresponds to 1.2 km s$^{-1}$ for Band 6 and 0.85 km s$^{-1}$ for Band 7. The total number of antennas is 44 for Band 6 and 46 for Band 7. The minimum--maximum baseline lengths are 15.1--704.1 m for Band 6 and 15.1--783.5 m for Band 7. A full-width at half-maximum (FWHM) of the primary beam is about 25$\arcsec$ for Band 6 and 18$\arcsec$ for Band 7. \subsection{Data reduction} \label{sec_red} Raw data are processed with the \textit{Common Astronomy Software Applications} (CASA) package. For calibration, CASA 4.7.2 is used for Band 6 and CASA 5.4.0 is used for Band 7. For imaging, we use CASA 5.4.0 for all the data.
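For readers who wish to reproduce this workflow, the main steps described in this subsection (continuum subtraction with \textit{uvcontsub}, imaging with TCLEAN using the Briggs weighting with a robustness parameter of 0.5 and a common 0$\farcs$40 circular restoring beam, and primary-beam correction with \textit{impbcor}) can be summarized by the following minimal CASA sketch. This is an illustration only, not our actual reduction script: the file names, channel ranges, \textit{cell}, \textit{imsize}, and cleaning threshold below are hypothetical placeholders.
\begin{verbatim}
# Minimal CASA (5.x) imaging sketch for one spectral window.
# File names and imaging parameters are illustrative placeholders.
vis = 'st16_band7.ms'              # hypothetical calibrated measurement set

# Subtract the continuum in the uv plane using line-free channels.
uvcontsub(vis=vis,
          fitspw='0:5~100;400~475',  # hypothetical line-free channel ranges
          fitorder=1)

# Image the continuum-subtracted data with Briggs weighting (robust = 0.5)
# and restore with the common 0.40 arcsec circular beam used in this work.
tclean(vis=vis + '.contsub',
       imagename='st16_band7_cube',
       specmode='cube',
       deconvolver='hogbom',
       imsize=[512, 512], cell='0.06arcsec',   # hypothetical values
       weighting='briggs', robust=0.5,
       restoringbeam='0.40arcsec',
       niter=10000, threshold='3mJy')          # hypothetical threshold

# Correct the cleaned image for the primary-beam pattern.
impbcor(imagename='st16_band7_cube.image',
        pbimage='st16_band7_cube.pb',
        outfile='st16_band7_cube.image.pbcor')
\end{verbatim}
The continuum image follows the same TCLEAN step applied to line-free channels of the original (non-subtracted) visibilities in \textit{mfs} mode.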
With the Briggs weighting and the robustness parameter of 0.5, the synthesized beam sizes of 0$\farcs$52--0$\farcs$56 $\times$ 0$\farcs$39--0$\farcs$41 with a position angle of -23 degrees for Band 6 and 0$\farcs$36--0$\farcs$38 $\times$ 0$\farcs$31--0$\farcs$32 with a position angle of -17 degrees for Band 7 are achieved. In this paper, we have used a common circular restoring beam size of 0$\farcs$40 for Band 6 and 7 data, in order to accommodate the spectral analyses in separate frequency regions. This beam size corresponds to 0.097 pc at the distance of the LMC. The continuum image is constructed by selecting line-free channels from the four spectral windows. After the clean process, the images are corrected for the primary beam pattern using the \textit{impbcor} task in CASA. The self-calibration is not applied. The spectra and continuum flux are extracted from the 0$\farcs$42 (0.10 pc) diameter circular region centered at RA = 05$^\mathrm{h}$19$^\mathrm{m}$12$\fs$295 and Dec = -69$^\circ$9$\arcmin$7$\farcs$34 (ICRS), which corresponds to the 870 $\mu$m continuum center of ST16. The continuum emission is subtracted from the spectral data using the \textit{uvcontsub} task in CASA before the spectral extraction. \section{Results} \label{sec_res} \subsection{Spectra} \label{sec_spc} Figure \ref{spec} shows molecular emission line spectra extracted from the position of ST16. Spectral lines are identified with the aid of the Cologne Database for Molecular Spectroscopy\footnote{https://www.astro.uni-koeln.de/cdms} \citep[CDMS,][]{Mul01,Mul05} and the molecular database of the Jet Propulsion Laboratory\footnote{http://spec.jpl.nasa.gov} \citep[JPL,][]{Pic98}. The detection criteria adopted here are the 2.5$\sigma$ significance level and the velocity coincidence with the systemic velocity of nearby CO clouds \citep[between 260 km s$^{-1}$ and 270 km s$^{-1}$, estimated using the MAGMA data presented in][]{Won11}. Molecular emission lines of CH$_3$OH, H$_2$CO, CCH, H$^{13}$CO$^{+}$, CS, C$^{34}$S, C$^{33}$S, SO, $^{34}$SO, $^{33}$SO, SO$_2$, $^{34}$SO$_2$, $^{33}$SO$_2$, OCS, H$_2$CS, CN, NO, HNCO, H$^{13}$CN, CH$_3$CN, and SiO are detected from the observed region. Multiple high excitation lines (upper state energy $>$100 K) are detected for CH$_3$OH, SO$_2$, $^{34}$SO$_2$, $^{33}$SO$_2$, OCS, and CH$_3$CN. Complex organic molecules larger than CH$_3$OH are not detected. In total we have detected 90 transitions, out of which 30 lines are due to CH$_3$OH, and 27 lines are due to SO$_2$ and its isotopologues. Radio recombination lines are not detected, though moderately intense lines such as H36$\beta$ (260.03278 GHz) or H41$\gamma$ (257.63549 GHz) are covered in the observed frequency range. Line parameters are measured by fitting a Gaussian profile to observed lines. Based on the fitting, we estimate the peak brightness temperature, the FWHM, the LSR velocity, and the integrated intensity for each line. Measured line widths are typically 3--6 km s$^{-1}$. Full details of the line fitting can be found in Appendix A (Tables of measured line parameters) and Appendix B (Figures of fitted spectra). The tables also contain estimated upper limits on important non-detection lines. \begin{figure*}[ht] \centering \includegraphics[width=21cm,angle=90]{f2.eps} \caption{ALMA Band 6 and 7 spectra of ST16 extracted from a 0$\farcs$42 (0.10 pc) diameter region centered at the continuum and molecular emission peak. Detected emission lines are labeled. The source velocity of 264.5 km s$^{-1}$ is assumed.
} \label{spec} \end{figure*} \subsection{Images} \label{sec_img} Figures \ref{images1} and \ref{images2} show synthesized images of continuum and molecular emission lines observed toward the target region. The molecular line images are constructed by integrating spectral data in the velocity range where the emission is seen. For CH$_3$OH and SO$_2$, high-excitation line images ($E_{u}$ $>$100 K for CH$_3$OH and $>$80 K for SO$_2$) and low-excitation line images ($E_{u}$ $<$50 K for CH$_3$OH and 36 K for SO$_2$) are separately constructed, because these molecules show two different temperature components in their rotation diagrams (see Section \ref{sec_rd}). The continuum emission and most of the molecular emission lines, except for CCH and CN, are centered at the position of the high-mass YSO. SO, NO, and low-$E_{u}$ SO$_2$ show a secondary peak at the east side of the YSO. The distributions of continuum, CS, SO, H$_2$CO, CCH, CN, and possibly H$^{13}$CO$^{+}$ are elongated in the north-south direction. We have estimated the spatial extent of each emission around the YSO by fitting a two-dimensional Gaussian to the peak position. Compact distributions, i.e. FWHM = 0$\farcs$38--0$\farcs$46 (0.09--0.11 pc) that is comparable with the beam size, are seen in high-$E_{u}$ CH$_3$OH, low- and high-$E_{u}$ SO$_2$, $^{34}$SO$_2$, $^{33}$SO$_2$, OCS, $^{34}$SO, $^{33}$SO, CH$_3$CN, and HNCO. Slightly extended distributions, i.e. FWHM = 0$\farcs$52--0$\farcs$76 (0.13--0.18 pc), are seen in H$_2$CO, low-$E_{u}$ CH$_3$OH, SO, C$^{34}$S, C$^{33}$S, H$_2$CS, NO, H$^{13}$CN, SiO, and continuum. Among them, the distributions of H$^{13}$CN and H$_2$CS, and possibly C$^{34}$S, C$^{33}$S, and SiO, are marginally off the continuum center. The NO distribution seems to be patchy. The continuum emission has a sharp peak around the YSO position, but is also widely distributed within the observed field, as shown by the 5$\sigma$ contour in Figure \ref{images1}. Similar characteristics (sharp peak and extended plateau) are also seen in SO and H$_2$CO. Clearly extended distributions, i.e. FWHM = 1$\arcsec$--2$\arcsec$ (0.24--0.49 pc), are seen in CS, CCH, CN, and H$^{13}$CO$^{+}$. The distributions of CCH and CN are significantly different from those of other molecules and will be further discussed in Section \ref{sec_disc_CCH_CN}. \begin{figure*}[tp] \begin{center} \includegraphics[width=15.0cm]{f3.eps} \caption{ Flux distributions of the ALMA 870 $\mu$m continuum and integrated intensity distributions of molecular emission lines. For CCH, CN, NO, HNCO, and $^{34}$SO, the detected multiple transitions are averaged. Gray contours represent the continuum distribution and the contour levels are 5$\sigma$, 10$\sigma$, 20$\sigma$, 40$\sigma$, 100$\sigma$ of the rms noise (0.06 mJy/beam). Low signal-to-noise regions (S/N $<$2) are masked. The spectra discussed in the text are extracted from the region indicated by the thick black open circle. The blue open star represents the position of a high-mass YSO identified by infrared observations. The synthesized beam size is shown by the gray filled circle in each panel. North is up, and east is to the left. } \label{images1} \end{center} \end{figure*} \begin{figure*}[p] \begin{center} \includegraphics[width=15.0cm]{f4.eps} \caption{ Same as in Figure \ref{images1}. For CH$_3$OH, CH$_3$CN, SO$_2$, OCS, $^{34}$SO$_2$, $^{33}$SO$_2$, the detected multiple transitions are averaged.
CH$_3$OH and SO$_2$ are separated into high-$E_{u}$ ($>$100 K for CH$_3$OH and $>$80 K for SO$_2$) and low-$E_{u}$ ($<$50 K for CH$_3$OH and $=$ 36 K for SO$_2$) components. } \label{images2} \end{center} \end{figure*} \section{Analysis} \label{sec_ana} \subsection{Rotation diagram analyses} \label{sec_rd} Column densities and rotation temperatures of CH$_3$OH, SO$_2$, $^{34}$SO$_2$, $^{33}$SO$_2$, SO, $^{34}$SO, OCS, and CH$_3$CN are estimated with the aid of the rotation diagram analysis, because multiple lines with different excitation energies are detected (Figure \ref{rd}). We here assume an optically thin condition and the local thermodynamic equilibrium (LTE). The assumption of optically thin emission is mostly valid for the present source (see discussion in Sections \ref{sec_disc_molab}, \ref{sec_disc_isotop}, and \ref{sec_disc_CCH_CN}). We use the following formulae based on the standard treatment of the rotation diagram analysis \citep[e.g., ][]{Sut95, Gol99}: \begin{equation} \log \left(\frac{ N_{u} }{ g_{u} } \right) = - \left(\frac{\log e}{T_{\mathrm{rot}}} \right) \left(\frac{E_{u}}{k} \right) + \log \left(\frac{N}{Q(T_{\mathrm{rot}})} \right), \label{Eq_rd1} \end{equation} where \begin{equation} \frac{ N_{u} }{ g_{u} } = \frac{ 3 k \int T_{\mathrm{b}} dV }{ 8 \pi^{3} \nu S \mu^{2} }, \label{Eq_rd2} \end{equation} and $N_{u}$ is a column density of molecules in the upper energy level, $g_{u}$ is the degeneracy of the upper level, $k$ is the Boltzmann constant, $\int T_{\mathrm{b}} dV$ is the integrated intensity estimated from the observations, $\nu$ is the transition frequency, $S$ is the line strength, $\mu$ is the dipole moment, $T_{\mathrm{rot}}$ is the rotational temperature, $E_{u}$ is the upper state energy, $N$ is the total column density, and $Q(T_{\mathrm{rot}})$ is the partition function at $T_{\mathrm{rot}}$. All the spectroscopic parameters required in the analysis are extracted from the CDMS database. For CH$_3$OH and SO$_2$, a straight-line fit is separated into two temperature regimes, because different temperature components are clearly seen in the diagram. We use the transitions with $E_{u}$ $<$100 K to fit the lower-temperature component and the transitions with $E_{u}$ $>$100 K to fit the higher-temperature component, respectively. Derived column densities and rotation temperatures are summarized in Table \ref{tab_N}. These rotation analyses suggest that the line-of-sight towards ST16 harbors two temperature components; i.e. a hot gas component with $T_{\mathrm{rot}}$ $\sim$150 K (an average temperature of high-temperature CH$_3$OH, high-temperature SO$_2$, $^{34}$SO$_2$, and $^{33}$SO$_2$), along with a warm gas component with $T_{\mathrm{rot}}$ $\sim$50 K (an average temperature of low-temperature CH$_3$OH, low-temperature SO$_2$, $^{34}$SO, OCS, and CH$_3$CN). Note that the temperature and column density derived from the rotation diagram of SO would not be reliable, because the SO(6$_{6}$--5$_{5}$) and (8$_{7}$--7$_{6}$) lines are moderately optically thick ($\tau$ $\sim$0.3--1, with $T_{\mathrm{rot}}$ = 50--20 K and a beam filling factor of unity). Given a possible beam dilution effect for high excitation lines, their optical thickness would cause non-negligible uncertainty in the reliability of the rotation analysis. We thus derive the SO column density using the SO(3$_{3}$--2$_{3}$) line, which has an $S \mu^{2}$ value about a hundred times smaller than the above two transitions and is optically thin.
Here the rotation temperature is assumed to be the same as that of $^{34}$SO ($\sim$50 K). We have also estimated the rotational temperature of SO at the off-hot-core position (i.e. the 0$\farcs$40 diameter circular region centered at RA = 05$^\mathrm{h}$19$^\mathrm{m}$12$\fs$39 and Dec = -69$^\circ$9$\arcmin$6$\farcs$6). Here we derive $T_{\mathrm{rot}}$ = 24.5 $\pm$ 1.4 K and $N$ = 4.2 $\pm$ 0.6 $\times$ 10$^{14}$ cm$^{-2}$ (Figure \ref{rd}, lower right panel). Note that the SO lines at the off position are optically thin ($<$0.15), since the peak intensities are nearly five times lower than those at the hot core position. The derived $T_{\mathrm{rot}}$ would represent the temperature of the relatively cold and dense gas surrounding the hot core. \begin{figure*}[tp] \begin{center} \includegraphics[width=18cm]{f5.eps} \caption{ Rotation diagrams for CH$_3$OH, SO$_2$, $^{34}$SO$_2$, $^{33}$SO$_2$, SO, $^{34}$SO, OCS, and CH$_3$CN lines. Upper limit points are shown by the downward arrows. The solid lines represent the fitted straight lines. Derived column densities and rotation temperatures are shown in each panel. For CH$_3$OH and SO$_2$, the left line is fitted using the transitions with $E_{u}$ $<$100 K, while the right one is fitted using the transitions with $E_{u}$ $>$100 K. Note that A- and E-state CH$_3$OH are fitted simultaneously. Two SO$_2$ transitions, 10$_{6,4}$--11$_{5,7}$ and 26$_{3,23}$--25$_{4,22}$ (indicated by the open squares), are excluded from the fit, because they significantly deviate from other data points. In the lower-right panel, the rotation diagram of SO at the off-hot core position is shown. See Section \ref{sec_rd} for details. } \label{rd} \end{center} \end{figure*} \subsection{Column densities of other molecules} \label{sec_n} Column densities of molecular species other than those described in Section \ref{sec_rd} are estimated from Equation \ref{Eq_rd1} after solving it for $N$. For this purpose we need to assume their rotation temperatures. The rotation temperature of $^{34}$SO ($\sim$50 K) corresponds to the temperature of warm components in the line of sight. This temperature is applied to SO and $^{33}$SO considering the co-existence of isotopologues. CS shows an extended distribution similar to that of SO; we thus also apply $T_{\mathrm{rot}}$ = 50 K for CS, C$^{34}$S, and C$^{33}$S. Furthermore, we also assume $T_{\mathrm{rot}}$ = 50 K for CCH and CN, because they are clearly extended and not centered at the hot core region. For other molecules that are concentrated at the hot core, we assume $T_{\mathrm{rot}}$ = 100 K, which is an average temperature of the hot and warm gas components described in Section \ref{sec_rd}. We use the spectroscopic constants and partition functions extracted from the CDMS database except for HCOOCH$_3$, whose molecular data are extracted from the JPL database. Estimated column densities are summarized in Table \ref{tab_N}. We have also performed non-LTE calculation for column densities of selected species using RADEX \citep{vdT07}. For input parameters, we use the H$_2$ gas density of 3 $\times$ 10$^6$ cm$^{-3}$ according to our estimate in Section \ref{sec_h2_final} and the background temperature of 2.73 K. Kinetic temperatures are assumed to be the same as temperatures tabulated in Table \ref{tab_N}. The line widths used in the analysis are taken from the tables in Appendix A. The resultant column densities are summarized in Table \ref{tab_N}.
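For illustration, the straight-line fit of Equation (\ref{Eq_rd1}) reduces to a linear regression of $\log_{10}(N_u/g_u)$, computed from Equation (\ref{Eq_rd2}), against $E_u/k$. The following Python sketch shows the arithmetic with made-up example values (they are not our measured ST16 line parameters), and the partition function is a placeholder to be interpolated from the CDMS tables.
\begin{verbatim}
import numpy as np

k_B = 1.380649e-16          # Boltzmann constant [erg/K]

def upper_state_column(W_K_kms, nu_GHz, Smu2_D2):
    """N_u/g_u from Eq. (2): W is the integrated intensity [K km/s],
    nu the transition frequency [GHz], Smu2 the line strength times
    the squared dipole moment [Debye^2]. Returns cm^-2."""
    W = W_K_kms * 1.0e5                   # K km/s -> K cm/s
    nu = nu_GHz * 1.0e9                   # GHz -> Hz
    Smu2 = Smu2_D2 * 1.0e-36              # Debye^2 -> (esu cm)^2
    return 3.0 * k_B * W / (8.0 * np.pi**3 * nu * Smu2)

# Hypothetical example lines (NOT the measured ST16 values):
Eu_K   = np.array([35.0, 97.5, 190.4, 254.3])     # E_u/k [K]
W      = np.array([2.1, 1.5, 0.62, 0.30])         # [K km/s]
nu_GHz = np.array([241.7, 338.6, 338.4, 349.1])   # [GHz]
Smu2   = np.array([4.8, 5.3, 5.1, 4.9])           # [Debye^2]

y = np.log10(upper_state_column(W, nu_GHz, Smu2))
slope, intercept = np.polyfit(Eu_K, y, 1)

# From Eq. (1): slope = -log10(e)/T_rot, intercept = log10(N/Q).
T_rot = -np.log10(np.e) / slope
Q = 5000.0   # placeholder partition function at T_rot (from CDMS)
N_total = Q * 10.0**intercept
print(f"T_rot = {T_rot:.1f} K, N = {N_total:.2e} cm^-2")
\end{verbatim}
The uncertainties quoted in Table \ref{tab_N} follow from propagating the fit errors of the slope and intercept through these same relations.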
The calculated non-LTE column densities are reasonably consistent with the LTE estimations. \begin{deluxetable}{ l c c c} \tablecaption{Estimated rotation temperatures and column densities \label{tab_N}} \tablewidth{0pt} \tabletypesize{\footnotesize} \tablehead{ \colhead{Molecule} & \colhead{$T$$_{rot}$} & \colhead{$N$(X)} & \colhead{$N$(X) non-LTE\tablenotemark{d}} \\ \colhead{ } & \colhead{(K)} & \colhead{(cm$^{-2}$)} & \colhead{(cm$^{-2}$)} } \startdata CH$_3$OH\tablenotemark{a} ($E_{u} >$100 K) & 137$^{+8}_{-7}$ & (1.6 $^{+0.1}_{-0.1}$) $\times$ 10$^{15}$ & (3.4 $\pm$ 0.2) $\times$ 10$^{15}$ \\ CH$_3$OH\tablenotemark{a} ($E_{u} <$100 K) & 61$^{+2}_{-2}$ & (1.1 $^{+0.1}_{-0.1}$) $\times$ 10$^{15}$ & (1.6 $\pm$ 0.1) $\times$ 10$^{15}$ \\ SO$_2$\tablenotemark{a} ($E_{u} >$100 K) & 232$^{+27}_{-22}$ & (4.4 $^{+0.1}_{-0.1}$) $\times$ 10$^{15}$ & (3.5 $\pm$ 0.3) $\times$ 10$^{15}$ \\ SO$_2$\tablenotemark{a} ($E_{u} <$100 K) & 64$^{+5}_{-4}$ & (1.7 $^{+0.1}_{-0.1}$) $\times$ 10$^{15}$ & (2.0 $\pm$ 0.1) $\times$ 10$^{15}$ \\ $^{34}$SO$_2$\tablenotemark{a} & 86$^{+11}_{-9}$ & (4.0 $^{+0.8}_{-0.6}$) $\times$ 10$^{14}$ & \nodata \\ $^{33}$SO$_2$\tablenotemark{a} & 124$^{+37}_{-23}$ & (1.2 $^{+0.3}_{-0.2}$) $\times$ 10$^{14}$ & \nodata \\ $^{34}$SO\tablenotemark{a,b} & 47$^{+14}_{-14}$ & (3.2 $^{+0.9}_{-0.9}$) $\times$ 10$^{14}$ & \nodata \\ OCS\tablenotemark{a} & 53$^{+11}_{-8}$ & (6.5 $^{+7.2}_{-3.4}$) $\times$ 10$^{14}$ & (6.9 $\pm$ 0.5) $\times$ 10$^{14}$ \\ CH$_3$CN\tablenotemark{a} & 53$^{+10}_{-7}$ & (2.3 $^{+1.7}_{-1.0}$) $\times$ 10$^{13}$ & (2.5 $\pm$ 0.2) $\times$ 10$^{13}$ \\ \tableline SO\tablenotemark{c} & 50 & (7.3 $\pm$ 0.2) $\times$ 10$^{15}$ & (7.5 $\pm$ 0.1) $\times$ 10$^{15}$ \\ $^{33}$SO & 50 & (1.4 $\pm$ 0.1) $\times$ 10$^{14}$ & \nodata \\ CS & 50 & (1.3 $\pm$ 0.1) $\times$ 10$^{14}$ & (1.2 $\pm$ 0.1) $\times$ 10$^{14}$ \\ C$^{34}$S & 50 & (6.7 $\pm$ 0.4) $\times$ 10$^{12}$ & \nodata \\ C$^{33}$S & 50 & (1.8 $\pm$ 0.3) $\times$ 10$^{12}$ & \nodata \\ CCH & 50 & (1.3 $\pm$ 0.1) $\times$ 10$^{14}$ & \nodata \\ CN & 50 & (4.5 $\pm$ 0.5) $\times$ 10$^{13}$ & \nodata \\ H$_2$CO & 100 & (2.1 $\pm$ 0.1) $\times$ 10$^{14}$ & (1.3 $\pm$ 0.1) $\times$ 10$^{14}$ \\ H$^{13}$CO$^+$ & 100 & (1.4 $\pm$ 0.1) $\times$ 10$^{13}$ & (7.7 $\pm$ 0.3) $\times$ 10$^{12}$ \\ NO & 100 & (4.1 $\pm$ 0.8) $\times$ 10$^{15}$ & (3.8 $\pm$ 0.2) $\times$ 10$^{15}$ \\ HNCO & 100 & (3.7 $\pm$ 0.8) $\times$ 10$^{13}$ & (2.3 $\pm$ 0.2) $\times$ 10$^{13}$ \\ H$^{13}$CN & 100 & (8.8 $\pm$ 2.0) $\times$ 10$^{12}$ & (4.2 $\pm$ 0.2) $\times$ 10$^{12}$ \\ HC$_3$N & 100 & $<$1.1 $\times$ 10$^{13}$ & \nodata \\ H$_2$CS & 100 & (1.9 $\pm$ 0.4) $\times$ 10$^{13}$ & (1.3 $\pm$ 0.2) $\times$ 10$^{13}$ \\ SiO & 100 & (8.5 $\pm$ 3.2) $\times$ 10$^{12}$ & (6.0 $\pm$ 1.0) $\times$ 10$^{12}$ \\ HDO & 100 & $<$3.8 $\times$ 10$^{14}$ & \nodata \\ c-C$_3$H$_2$ & 100 & $<$2.4 $\times$ 10$^{14}$ & \nodata \\ C$_2$H$_5$OH & 100 & $<$4.1 $\times$ 10$^{14}$ & \nodata \\ C$_2$H$_5$CN & 100 & $<$4.5 $\times$ 10$^{14}$ & \nodata \\ CH$_3$OCH$_3$ & 100 & $<$2.5 $\times$ 10$^{14}$ & \nodata \\ HCOOCH$_3$ & 100 & $<$6.8 $\times$ 10$^{14}$ & \nodata \\ \textit{trans}-HCOOH & 100 & $<$7.5 $\times$ 10$^{13}$ & \nodata \\ \enddata \tablecomments{ Uncertainties and upper limits are of the 2 $\sigma$ level and do not include systematic errors due to adopted spectroscopic constants. See Section \ref{sec_rd} and \ref{sec_n} for details. $^a$Derived based on the rotation diagram analysis. 
$^b$An empirical 30 $\%$ uncertainty is assumed for $T_{\mathrm{rot}}$ and $N$ because the fitted data points are relatively few and scattered. $^c$Derived from the SO(3$_{3}$--2$_{3}$) line. $^d$The following lines are used for non-LTE calculation with RADEX; CH$_3$OH(7$_{2}$ A$^+$--6$_{2}$ A$^+$), CH$_3$OH(5$_{1}$ E--4$_{1}$ E), SO$_2$(5$_{3,3}$--4$_{2,2}$), SO$_2$(14$_{0,14}$--13$_{1,13}$), OCS(28--27), CH$_3$CN(19$_{0}$--18$_{0}$), SO($N_J$ = 6$_{6}$--5$_{5}$), CS(5--4), H$_2$CO(5$_{1,5}$--4$_{1,4}$), H$^{13}$CO$^+$(3--2), NO(J = $\frac{7}{2}$--$\frac{5}{2}$, $\Omega$ = $\frac{1}{2}$, F = $\frac{9}{2}$$^+$--$\frac{7}{2}$$^-$), HNCO(16$_{0,16}$--15$_{0,15}$), H$^{13}$CN(3--2), H$_2$CS(10$_{1,10}$--9$_{1,9}$), and SiO(6--5). } \end{deluxetable} \subsection{Column density of H$_2$} \label{sec_h2} A column density of molecular hydrogen ($N_{\mathrm{H_2}}$) is estimated by several methods to check their reliability. The H$_2$ column densities derived by different methods are summarized in Table \ref{tab_N_H2}. Details of each method are described below. \begin{deluxetable*}{ l c c c c c c c c c c} \tablecaption{Estimated H$_2$ column densities and $A_{V}$ \label{tab_N_H2}} \tablewidth{0pt} \tabletypesize{\scriptsize} \tablehead{ \colhead{ } & \multicolumn{6}{c}{ALMA continuum} & \colhead{ } & \multicolumn{2}{c}{SED fit} & \colhead{$\tau_{9.7}$} \\ \cline{2-7} \cline{9-10} \colhead{ } & \multicolumn{2}{c}{$T_{d}$ = 20 K} & \multicolumn{2}{c}{$T_{d}$ = 60 K} & \multicolumn{2}{c}{$T_{d}$ = 150 K} & \colhead{ } & \colhead{RW07\tablenotemark{a}} & \colhead{ZT18\tablenotemark{b}} & \colhead{ } \\ \cline{2-3} \cline{4-5} \cline{6-7} \colhead{ } & \colhead{870 $\mu$m} & \colhead{1200 $\mu$m} & \colhead{870 $\mu$m} & \colhead{1200 $\mu$m} & \colhead{870 $\mu$m} & \colhead{1200 $\mu$m} & \colhead{ } & \colhead{} & \colhead{ } & \colhead{} } \startdata $N_{\mathrm{H_2}}$ (10$^{23}$ cm$^{-2}$) & 21.0 $\pm$ 2.1 & 22.2 $\pm$ 2.2 & 5.64 $\pm$ 0.56 & 5.47 $\pm$ 0.55 & 2.12 $\pm$ 0.21 & 2.01 $\pm$ 0.20 & & 5.40 $\pm$ 1.12 & 5.71 $\pm$ 0.87 & 5.60 $\pm$ 0.56 \\ $A_{V}$ (mag) & 749 $\pm$ 75 & 792 $\pm$ 79 & 202 $\pm$ 20 & 195 $\pm$ 20 & 76 $\pm$ 8 & 72 $\pm$ 7 & & 193 $\pm$ 40 & 204 $\pm$ 31 & 200 $\pm$ 20 \\ \enddata \tablecomments{ In this work, we use $N_{\mathrm{H_2}}$ = (5.6 $\pm$ 0.6) $\times$ 10$^{23}$ cm$^{-2}$ as a representative value, which is the average of $N_{\mathrm{H_2}}$ derived by ALMA dust continuum (870 $\mu$m and 1200 $\mu$m with $T_{d}$ = 60 K), the SED model fits, and the 9.7 $\mu$m silicate dust absorption depth (see Section \ref{sec_h2} for details). Uncertainties do not include systematic errors due to adopted optical constants. \\ $^a$\citet{Rob07}; $^b$\citet{Zha18} } \end{deluxetable*} \subsubsection{$N_{\mathrm{H_2}}$ from the ALMA continuum} \label{sec_h2_dust} The present ALMA dust continuum data can be used for the $N_{\mathrm{H_2}}$ estimate.
The continuum brightness of ST16 is measured to be (3.37 $\pm$ 0.34) mJy/beam for 1200 $\mu$m and (10.55 $\pm$ 1.06) mJy/beam for 870 $\mu$m towards the same region as used for the spectral extraction.\footnote{A canonical uncertainty of 10 $\%$ for the absolute flux calibration of the ALMA Band 6 and 7 data is adopted (see ALMA Technical Handbook).} Based on the standard treatment of optically thin dust emission, we use the following equation to calculate $N_{\mathrm{H_2}}$: \begin{equation} N_{\mathrm{H_2}} = \frac{F_{\nu} / \Omega}{2 \kappa_{\nu} B_{\nu}(T_{d}) Z \mu m_{\mathrm{H}}} \label{Eq_h2}, \end{equation} where $F_{\nu}/\Omega$ is the continuum flux density per beam solid angle as estimated from the observations, $\kappa_{\nu}$ is the mass absorption coefficient of dust grains coated by thin ice mantles, taken from \citet{Oss94} (1.06 cm$^2$ g$^{-1}$ at 1200 $\mu$m and 1.89 cm$^2$ g$^{-1}$ at 870 $\mu$m), $T_{d}$ is the dust temperature and $B_{\nu}(T_{d})$ is the Planck function, $Z$ is the dust-to-gas mass ratio, $\mu$ is the mean atomic mass per hydrogen \citep[1.41, according to][]{Cox00}, and $m_{\mathrm{H}}$ is the hydrogen mass. We use the dust-to-gas mass ratio of 0.0027 for the LMC, which is obtained by scaling the Galactic value of 0.008 by the metallicity of the LMC ($\sim$1/3 Z$_{\sun}$). The dust temperature is a key assumption for the derivation of $N_{\mathrm{H_2}}$. We estimate $N_{\mathrm{H_2}}$ for three different dust temperatures, 20 K, 60 K, and 150 K, as shown in Table \ref{tab_N_H2}. We revisit the validity of these assumptions in Section \ref{sec_h2_final}, based on the comparison of $N_{\mathrm{H_2}}$ values by different methods. Note that consistent $N_{\mathrm{H_2}}$ values are derived from 870 $\mu$m and 1200 $\mu$m continuum, suggesting that the submillimeter continuum emission from ST16 is dominated by thermal emission from dust grains. \subsubsection{$N_{\mathrm{H_2}}$ from the SED fit} \label{sec_h2_sed} A model fit to the source's SED provides us with an alternative way to estimate the total gas column density in the line-of-sight. We have tested two SED models in this work; one by \citet{Rob07} and another by \citet{Zha18}. For input data, we use 1--1200 $\mu$m photometric and spectroscopic data of ST16 as shown in Figure \ref{sed}. We exclude the SPIRE 350 $\mu$m and 500 $\mu$m band data in the fit, because they are possibly contaminated by diffuse emission around the YSO due to their large point spread function (about 27$\arcsec$ and 41$\arcsec$ in FWHM, respectively). The distance to ST16 is assumed to be the same as that of the LMC. The model of \citet{Rob07} produces a large set of fitted SEDs that differ in $\chi^2$ values. To obtain a range of acceptable fits, we use a cutoff value for $\chi^2$ that is described in \citet{Rob07}. We select the fit results which have $(\chi^2 - \chi^2_{best})/N_{data}$ $\leqq$ 3, where $\chi^2_{best}$ is the $\chi^2$ value of the best fit model and $N_{data}$ is the total number of data points used for the fit. Then, a median value of the selected results is adopted as a representative value and their standard deviation is adopted as the uncertainty. The model of \citet{Zha18} computes the evolution of the protostar and its surrounding structures in a self-consistent way, based on the turbulent core accretion theory for massive star formation \citep{McK03}. The best models are selected based on $\chi^2$ values of the SED fits, as in the Robitaille model.
We use the best five models to estimate the final column density and uncertainty, i.e., their average value and standard deviation. In both models, the total visual extinction ($A_V$) from the protostar to the observer is derived from the best-fit SEDs. The value is doubled in order to compare with submillimeter data, which probe the total column density in the line of sight. To estimate $N_{\mathrm{H_2}}$ values from the derived $A_V$, we use a $N_{\mathrm{H_2}}$/$A_{V}$ conversion factor. \citet{Koo82} reported $N_{\mathrm{H}}$/$E(B-V)$ = 2.0 $\times$ 10$^{22}$ cm$^{-2}$ mag$^{-1}$ and \citet{Fit85} reported $N_{\mathrm{H}}$/$E(B-V)$ = 2.4 $\times$ 10$^{22}$ cm$^{-2}$ mag$^{-1}$ for the interstellar extinction in the LMC. Taking their average and adopting a slightly high $A_{V}$/$E(B-V)$ ratio of $\sim$4 for dense clouds \citep{Whi01b}, we obtain $N_{\mathrm{H_2}}$/$A_{V}$ = 2.8 $\times$ 10$^{21}$ cm$^{-2}$ mag$^{-1}$, where we assume that all the hydrogen atoms are in the form of H$_2$. The estimated $N_{\mathrm{H_2}}$ and $A_V$ are summarized in Table \ref{tab_N_H2}. For both SED models, consistent $N_{\mathrm{H_2}}$ values are obtained. \subsubsection{$N_{\mathrm{H_2}}$ from the 9.7 $\mu$m silicate dust absorption} \label{sec_h2_tau97} The mid-infrared spectrum of ST16 shows a deep absorption due to the silicate dust at 9.7 $\mu$m (Fig. \ref{sed}). The peak optical depth of the 9.7 $\mu$m silicate dust absorption band ($\tau_{\mathrm{9.7}}$) is estimated to be 2.44 from the spectrum. The relationship between the visual extinction ($A_V$) and $\tau_{\mathrm{9.7}}$ reported for Galactic dense cores is \begin{equation} A_V = \frac{\tau_{\mathrm{9.7}} - (0.12 \pm 0.05)}{0.21 \pm 0.02} \times 8.8 \label{Eq_Avtau} \end{equation} according to \citet{Boo11} (assuming $A_V$/$A_K$ = 8.8). Applying this relationship to ST16, we obtain $A_V$ = 100 $\pm$ 10 mag. Because the present infrared absorption spectroscopy probes only the foreground component relative to the central protostar, the above $A_V$ value should be doubled to compare with submillimeter data, which probe the total column density in the line of sight. Therefore, the total visual extinction expected from the 9.7 $\mu$m silicate band is $A_{V}$ = 200 $\pm$ 20 mag for ST16. Using the LMC's $N_{\mathrm{H_2}}$/$A_V$ ratio of 2.8 $\times$ 10$^{21}$ cm$^{-2}$ mag$^{-1}$ described in Section \ref{sec_h2_sed}, we obtain $N_{\mathrm{H_2}}$ = (5.60 $\pm$ 0.56) $\times$ 10$^{23}$ cm$^{-2}$. \subsubsection{Recommended H$_2$ column density, dust extinction, and gas mass} \label{sec_h2_final} The discussion in Section \ref{sec_h2_dust}--\ref{sec_h2_tau97} suggests that consistent $N_{\mathrm{H_2}}$ values are obtained by different methods; i.e., the SED fits with the models of \citet{Rob07} and \citet{Zha18}, and the $\tau_{\mathrm{9.7}}$--$A_V$ relation. In addition, the $N_{\mathrm{H_2}}$ estimates by the dust continuum with $T_{d}$ = 60 K result in $N_{\mathrm{H_2}}$ values consistent with those from the above methods. In this paper, we use $N_{\mathrm{H_2}}$ = (5.6 $\pm$ 0.6) $\times$ 10$^{23}$ cm$^{-2}$ as a representative value, which corresponds to the average of $N_{\mathrm{H_2}}$ derived by dust continuum (870 $\mu$m and 1200 $\mu$m with $T_{d}$ = 60 K), the SED model fits, and the 9.7 $\mu$m silicate dust absorption depth. This $N_{\mathrm{H_2}}$ corresponds to $A_V$ = 200 mag using the $N_{\mathrm{H_2}}$/$A_V$ factor described in Section \ref{sec_h2_sed}.
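As a concrete consistency check of the dust-continuum estimate in Section \ref{sec_h2_dust}, Equation (\ref{Eq_h2}) can be evaluated directly from the 870 $\mu$m numbers quoted above. The short Python sketch below is an illustration, not our analysis code; the Gaussian beam solid angle adopted for the 0$\farcs$40 circular restoring beam is an explicit assumption of this example.
\begin{verbatim}
import numpy as np

# Physical constants [CGS]
h, c, k_B, m_H = 6.62607e-27, 2.99792e10, 1.38065e-16, 1.6726e-24

def planck(nu, T):
    """Planck function B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

# 870 um continuum values quoted in Section 4.3.1
F_nu  = 10.55e-26        # 10.55 mJy/beam -> erg s^-1 cm^-2 Hz^-1
nu    = c / 870.0e-4     # frequency [Hz] corresponding to 870 um
kappa = 1.89             # dust mass absorption coefficient [cm^2 g^-1]
Z     = 0.0027           # LMC dust-to-gas mass ratio
mu    = 1.41             # mean atomic mass per hydrogen
T_d   = 60.0             # assumed dust temperature [K]

# Assumed Gaussian solid angle of the 0.40" circular restoring beam
theta = 0.40 * np.pi / (180.0 * 3600.0)          # FWHM [rad]
Omega = np.pi * theta**2 / (4.0 * np.log(2.0))   # [sr]

# Equation (3)
N_H2 = (F_nu / Omega) / (2.0 * kappa * planck(nu, T_d) * Z * mu * m_H)
print(f"N(H2) ~ {N_H2:.2e} cm^-2")
# ~5e23 cm^-2, consistent with the tabulated 870 um, T_d = 60 K value
\end{verbatim}
Dividing this column density by the $\sim$0.1 pc path length through the source, with a geometric correction factor of order unity for a uniform sphere, reproduces the average gas number density quoted below.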
Assuming a source diameter of 0.1 pc and a uniform spherical distribution of gas around the protostar, we estimate the average gas number density to be $n_{\mathrm{H_2}}$ = 3 $\times$ 10$^6$ cm$^{-3}$ and the total gas mass to be 100 M$_{\sun}$. Here we emphasize that the derived H$_2$ value corresponds to the total column density integrated over the whole line of sight, which includes various temperature components. Thus the assumed dust temperature ($T_{d}$ = 60 K) corresponds to the mass-weighted average temperature in the line of sight. Given the lower dust temperature compared with the gas temperature in the hot core region, the contribution from the low-temperature component would not be negligible. The situation is the same for Galactic hot core sources compared in this work, whose N$_{\mathrm{H_2}}$ values are derived by using low-J CO isotopologue lines, and thus the low-temperature component would have a non-negligible contribution. To selectively probe the total gas column density in the high-temperature hot core region, observations of high-J CO lines or H$_2$O lines will be important, which will be accessible with future far-infrared facilities. \subsection{Fractional abundances} \label{sec_x} Fractional abundances of molecules relative to H$_2$ are summarized in Table \ref{tab_X}, which are calculated by using the molecular column densities estimated in Section \ref{sec_n} and $N_{\mathrm{H_2}}$ estimated in Section \ref{sec_h2}. Abundances of HCN and HCO$^{+}$ are estimated from their isotopologues H$^{13}$CN and H$^{13}$CO$^{+}$, assuming $^{12}$C/$^{13}$C = 49 \citep{Wan09}. \begin{deluxetable}{ l c } \tablecaption{Estimated fractional abundances \label{tab_X}} \tablewidth{0pt} \tabletypesize{\small} \tablehead{ \colhead{Molecule} & \colhead{$N$(X)/$N_{\mathrm{H_2}}$\tablenotemark{$\dag$}} } \startdata H$_2$CO & (3.8 $\pm$ 0.6) $\times$ 10$^{-10}$ \\ CH$_3$OH\tablenotemark{a,b} & (4.8 $\pm$ 0.9) $\times$ 10$^{-9}$ \\ HCO$^+$\tablenotemark{c} & (1.2 $\pm$ 0.2) $\times$ 10$^{-9}$ \\ CCH & (2.3 $\pm$ 0.4) $\times$ 10$^{-10}$ \\ c-C$_3$H$_2$ & $<$4.3 $\times$ 10$^{-10}$ \\ CN & (8.0 $\pm$ 1.8) $\times$ 10$^{-11}$ \\ HCN\tablenotemark{d} & (7.7 $\pm$ 2.6) $\times$ 10$^{-10}$ \\ NO & (7.3 $\pm$ 2.2) $\times$ 10$^{-9}$ \\ HNCO & (6.6 $\pm$ 2.2) $\times$ 10$^{-11}$ \\ CH$_3$CN\tablenotemark{a} & (4.1 $\pm$ 3.0) $\times$ 10$^{-11}$ \\ HC$_3$N & $<$2.0 $\times$ 10$^{-11}$ \\ CS & (2.3 $\pm$ 0.4) $\times$ 10$^{-10}$ \\ H$_2$CS & (3.4 $\pm$ 1.1) $\times$ 10$^{-11}$ \\ SO & (1.3 $\pm$ 0.2) $\times$ 10$^{-8}$ \\ SO$_2$\tablenotemark{a,b} & (1.1 $\pm$ 0.2) $\times$ 10$^{-8}$ \\ OCS\tablenotemark{a} & (1.2 $\pm$ 0.7) $\times$ 10$^{-9}$ \\ SiO & (1.5 $\pm$ 0.7) $\times$ 10$^{-11}$ \\ HDO & $<$6.8 $\times$ 10$^{-10}$ \\ C$_2$H$_5$OH & $<$7.3 $\times$ 10$^{-10}$ \\ C$_2$H$_5$CN & $<$8.0 $\times$ 10$^{-10}$ \\ CH$_3$OCH$_3$ & $<$4.5 $\times$ 10$^{-10}$ \\ HCOOCH$_3$ & $<$1.2 $\times$ 10$^{-9}$ \\ \textit{trans}-HCOOH & $<$1.3 $\times$ 10$^{-10}$ \\ \enddata \tablecomments{ $\dag$Assuming $N_{\mathrm{H_2}}$ = (5.6 $\pm$ 0.6) $\times$ 10$^{23}$ cm$^{-2}$. Molecular column densities are summarized in Table \ref{tab_N}. $^a$Based on the rotation analysis. $^b$Sum of all temperature components. $^c$Estimated from H$^{13}$CO$^+$ with $^{12}$C/$^{13}$C = 49. $^d$Estimated from H$^{13}$CN.
} \end{deluxetable} \section{Discussion} \label{sec_disc} \subsection{Hot molecular core associated with ST16} \label{sec_disc_hc} A variety of molecular emission lines, including high-excitation lines of typical hot core tracers such as CH$_3$OH and SO$_2$, are detected from the line of sight towards a high-mass YSO, ST16 (L = 3 $\times$ 10$^5$ L$_{\sun}$, Fig. \ref{sed}). The source is associated with high-density gas, as the H$_2$ gas density is estimated to be $n_{\mathrm{H_2}}$ = 3 $\times$ 10$^6$ cm$^{-3}$ based on the dust continuum data (Section \ref{sec_h2_final}). According to the rotation analyses of CH$_3$OH and SO$_2$ (Fig. \ref{rd}), the temperature of molecular gas is estimated to be higher than 100 K, which is sufficient to trigger the sublimation of ice mantles. The size of the hot gas emitting region is as compact as $\sim$0.1 pc, according to integrated intensity maps shown in Figures \ref{images1} and \ref{images2}. Note that the line of sight towards ST16 also contains compact and warm ($\sim$50--60 K) gas components as seen in the rotation diagrams of CH$_3$OH, SO$_2$, $^{34}$SO, OCS, and CH$_3$CN. In addition, the source is surrounded by relatively extended and cold ($\sim$25 K) gas components, as represented by the rotation diagram of SO at the off-center position. The nature of ST16, namely (i) the compact source size, (ii) the high gas temperature, (iii) the high density, (iv) the association with a high-mass YSO, and (v) the presence of chemically rich molecular gas, strongly suggests that the source is associated with a hot molecular core. The temperature structure and the molecular distribution in ST16 are illustrated in Figure \ref{structure} based on the present observational results. \begin{figure}[t] \begin{center} \includegraphics[width=9.0cm]{f6.eps} \caption{ Schematic illustration of the temperature structure and the molecular distribution in the ST16 hot core. } \label{structure} \end{center} \end{figure} \subsection{Molecular abundances} \label{sec_disc_molab} Figure \ref{histo_ab1} shows a comparison of molecular abundances between the ST16 hot core and Galactic hot cores (Orion and W3 (H$_2$O)). Abundances for Galactic hot cores are collected from \citet{Bla87, Tur91, Ziu91, Sut95, Schi97, Hel97}. Typically, a scatter of about a factor of two in standard deviation is seen among the molecular abundances of Galactic hot cores \citep[e.g.,][]{Bis07,Ger14}. In general, fractional molecular abundances of ST16 are smaller than those of Galactic hot cores, but the degree of the abundance decrease varies depending on molecular species. Regarding carbon-bearing molecules shown in Figure \ref{histo_ab1}a, the H$_2$CO and CH$_3$OH abundances are lower in the ST16 hot core by an order of magnitude or more, as compared with Galactic hot cores. CCH shows a lower abundance in ST16, but the difference is only a factor of several relative to Galactic values. The HCO$^{+}$ abundance is comparable with those of Galactic sources. Nitrogen-bearing molecules are mostly less abundant than those in Galactic hot cores (Fig. \ref{histo_ab1}b). The degree of the abundance decrease is nearly a factor of five to ten. An exception is NO, whose abundance is comparable with Galactic values. For sulfur- and silicon-bearing molecules, CS, H$_2$CS, and SiO are significantly less abundant in ST16, by more than an order of magnitude, compared with Galactic hot cores (Fig. \ref{histo_ab1}c).
The SO$_2$ abundance is lower by a factor of $\sim$4, while that of SO is not significantly different from Galactic values. The OCS abundance in ST16 may be comparable with or lower than Galactic values, but the abundance uncertainty is very large. \begin{figure}[tp] \begin{center} \includegraphics[width=9.0cm]{f7.eps} \caption{ Comparison of molecular abundances between a LMC hot core (ST16: orange) and Galactic hot cores (Orion: cyan and W3 (H$_2$O): blue). Each panel shows (a) carbon- and oxygen-bearing molecules (HCO$^{+}$, CCH, H$_2$CO, and CH$_3$OH); (b) nitrogen-bearing molecules (CN, HCN, HNCO, CH$_3$CN, HC$_3$N, and NO); (c) sulfur- and silicon-bearing molecules (SO, SO$_2$, OCS, CS, H$_2$CS, and SiO). The areas with thin vertical lines indicate the error bars. Bars with a color gradient indicate upper limits. The plotted data are summarized in Table \ref{tab_X} for ST16, while those for Galactic hot cores are collected from the literature (see Section \ref{sec_disc_molab}). } \label{histo_ab1} \end{center} \end{figure} \begin{figure*}[btp] \begin{center} \includegraphics[width=18.0cm]{f8.eps} \caption{ Comparison of metallicity-scaled molecular abundances between three LMC hot cores (ST11: red, ST16: orange, and N113 A1: light yellow) and Galactic hot cores (Orion: cyan and W3 (H$_2$O): blue). Abundances of LMC hot cores are multiplied by three to correct for the metallicity difference relative to Galactic ones. The areas with thin vertical lines indicate the error bars. Bars with a color gradient indicate upper limits. The data for N113 A1 are adapted from \citet{Sew18}. See Section \ref{sec_disc_molab} for details. } \label{histo_ab2} \end{center} \end{figure*} Figure \ref{histo_ab2} compares metallicity-scaled fractional molecular abundances between LMC hot cores (ST11, ST16, N113 A1) and Galactic hot cores. The data for ST11 are adapted from \citet{ST16b} and those for N113 A1 are from \citet{Sew18}. The $N_{\mathrm{H_2}}$ value of N113 A1 is re-estimated using the same dust opacity data as in this work and the dust temperature assumption of 100 K; i.e. 5.3 $\times$ 10$^{23}$ cm$^{-2}$. Though it is not shown here, the other LMC hot core in \citet{Sew18}, N113 B3, shows molecular abundances similar to those of N113 A1. In the figure, the abundances of LMC hot cores are multiplied by three to correct for the metallicity difference. Thus, if the plotted abundances are comparable between LMC and Galactic sources, the molecular abundances roughly scale linearly with the elemental abundances. If the metallicity-corrected molecular abundances of LMC sources are significantly higher or lower compared with Galactic counterparts, this would suggest that their overabundance or underabundance cannot be simply explained by the metallicity difference. It is commonly seen in two LMC hot cores (ST11 and ST16) that H$_2$CO, CH$_3$OH, HNCO, CS, H$_2$CS, and SiO are significantly less abundant, while HCO$^{+}$, SO, SO$_2$, and NO are comparable with or more abundant than Galactic hot cores, after correcting for the metallicity (Fig. \ref{histo_ab2}). The molecular abundances of the N113 A1 hot core in the LMC seem to be partly different from those in the other two LMC hot cores. The deficiency of organic molecules such as CH$_3$OH, H$_2$CO, and HNCO was previously reported for the ST11 hot core in the LMC \citep{ST16b}.
A similar trend is seen in ST16, but the CH$_3$OH abundance in the N113 A1 hot core is almost comparable with Galactic values after correction for the metallicity difference, as pointed out in \citet{Sew18}. In N113 A1 and B3, complex organic molecules larger than CH$_3$OH (i.e. CH$_3$OCH$_3$ and HCOOCH$_3$) are also detected \citep{Sew18}. This would suggest that organic molecules show a large abundance variation in a low-metallicity environment; ST11 and ST16 are organic-poor hot cores that are unique in the LMC, and their low abundances of organic molecules cannot be explained by the decreased abundance of carbon and oxygen, while N113 A1 and B3 are organic-rich hot cores, whose COM abundances roughly scale with the LMC's metallicity. It should also be noted that an infrared dark cloud that shows a CH$_3$OH abundance comparable with Galactic counterparts is detected in the SMC with ALMA \citep{ST18}. Although the source is not a hot core, this would be another example indicating a large chemical diversity of organic molecules in low-metallicity environments. \citet{ST16, ST16b} propose the warm ice chemistry hypothesis to interpret the low abundance of organic molecules in the LMC. The hypothesis argues that warm dust temperatures in the LMC inhibit the hydrogenation of CO in ice-forming dense clouds, which leads to the low abundances of organic molecules that are mainly formed on grain surfaces (CH$_3$OH, HNCO, and partially H$_2$CO). Therefore, the different chemical history during the ice formation stage could contribute to the differentiation of organic-poor and organic-rich hot cores in low-metallicity environments. Alternatively, the difference in the hot core's evolutionary stage may contribute to the observed chemical diversity, since high-temperature gas-phase chemistry can also decrease the CH$_3$OH abundance at a late stage \citep[e.g.,][]{NM04,Gar06,Vas13,Bal15}. The compact spatial distribution of CH$_3$OH lines in ST16 suggests its grain surface origin, as the emission is concentrated at the hot core position (Fig. \ref{images2}). On the other hand, the H$_2$CO emission is relatively extended around the hot core. This would suggest that H$_2$CO can also be formed efficiently in the gas phase, as in Galactic star-forming regions \citep[e.g.,][]{vdT00}. SO$_2$ is suggested to be a useful molecular tracer for the study of hot core chemistry at low metallicity, according to the ALMA observations of the ST11 hot core \citep{ST16b}. This is because (i) SO$_2$ mainly originates from the hot core region, as suggested by its compact distribution and high rotation temperature, (ii) the abundance of SO$_2$ simply scales with metallicity (the metallicity-scaling law), and (iii) SO$_2$ and its isotopologues show a large number of emission lines even in a limited frequency coverage. The same characteristics are observed in the present ST16 hot core. Metallicity-scaled abundances of SO$_2$ in LMC hot cores (ST11, ST16, N113 A1) are similar to each other and almost comparable with those of Galactic hot cores (Fig. \ref{histo_ab2}). This is in remarkable contrast to the abundance of a classical hot core tracer, CH$_3$OH, which shows a large variation in the low-metallicity environment of the LMC. The above characteristic behavior of SO$_2$ suggests that high-excitation SO$_2$ lines will be a useful tracer of metal-poor hot core chemistry.
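For illustration, the metallicity-scaling comparison behind Figure \ref{histo_ab2} amounts to the following short Python calculation. This is a sketch only: the abundance values are placeholders rather than the measured values of Table \ref{tab_X}, and the factor-of-two tolerance mirrors the typical scatter among Galactic hot cores quoted above.
\begin{verbatim}
METALLICITY_FACTOR = 3.0  # LMC elemental abundances are roughly 1/3 of solar

# Placeholder fractional abundances (w.r.t. H2); not the measured values.
lmc = {"SO2": 1.0e-9, "CH3OH": 5.0e-10}
galactic = {"SO2": 3.0e-9, "CH3OH": 1.0e-7}

for species in lmc:
    scaled = METALLICITY_FACTOR * lmc[species]  # metallicity-corrected abundance
    ratio = scaled / galactic[species]
    if 0.5 < ratio < 2.0:  # within the typical factor-of-two scatter
        print(f"{species}: consistent with the metallicity-scaling law")
    else:
        print(f"{species}: deviates from simple metallicity scaling")
\end{verbatim}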
The metallicity-scaling law of SO$_2$ is not applicable to other sulfur-bearing molecules, as CS and H$_2$CS are significantly less abundant in organic-poor LMC hot cores. The metallicity-scaled abundances of SO in LMC hot cores are comparable with Galactic values, as in SO$_2$. However, the low rotation temperature and the extended spatial distribution of SO would suggest that it is widely distributed beyond the hot core region. NO may be an interesting molecule that is efficiently produced in low-metallicity environments. An overabundance of NO is reported for the ST11 hot core in the LMC \citep{ST16b}. Interestingly, a similar trend is observed in the present hot core. The metallicity-scaled abundance histogram shows that the NO abundances in LMC hot cores are higher than that in Orion, despite the low abundance of elemental nitrogen in the LMC (Fig. \ref{histo_ab2}). Note that the plotted NO abundance of Orion is 1.1 $\times$ 10$^{-8}$, while the average and the standard deviation of NO abundances in six Galactic high-mass star-forming regions is (8.2 $\pm$ 0.3) $\times$ 10$^{-9}$ \citep{Ziu91}. Currently, such a high abundance of a nitrogen-bearing species is seen only in NO, as all of the other nitrogen-bearing molecules detected in ST16 are less abundant than those of Galactic hot cores (Fig. \ref{histo_ab1}b). The expected strength ratio of the NO lines at 351.04352 and 351.05171 GHz in the LTE and optically thin case is 1.00 : 1.27, while the observed integrated intensity ratio is (1.00 $\pm$ 0.19) : (1.35 $\pm$ 0.10) based on the data in Table \ref{tab_lines_others}. Thus the observed NO lines are presumably optically thin. The distribution of NO is mainly concentrated at the hot core position, but slightly extended components are also seen on the east side of the hot core. The abundance of NO at the hot core position is (7.3 $\pm$ 2.2) $\times$ 10$^{-9}$ (see Table \ref{tab_X}). At the secondary peak on the east side, the abundance is estimated to be (7.5 $\pm$ 2.2) $\times$ 10$^{-9}$, where we assume $T_{\mathrm{rot}}$ = $T_{d}$ = 22 K based on the rotation analysis of SO, and with this temperature we have derived $N(\mathrm{NO})$ = (1.2 $\pm$ 0.2) $\times$ 10$^{15}$ cm$^{-2}$ and $N_{\mathrm{H_2}}$ = (1.6 $\pm$ 0.2) $\times$ 10$^{23}$ cm$^{-2}$. Similar abundances of NO between the hot core and the nearby peak may suggest a common chemical origin. \begin{figure*}[tbph] \begin{center} \includegraphics[width=14.0cm]{f9.eps} \caption{ Simulated peak molecular abundances of CH$_3$OH, HNCO, SO$_2$, NO, and OH during the hot core stage plotted against the initial dust extinction at the prestellar stage. The corresponding dust temperature is also plotted on the upper axis in each panel. Note that, for LMC simulations, $A_V$ values are divided by three (metallicity factor) to mimic the low dust-to-gas ratio. $A_V$/(metallicity factor) = 1 mag corresponds to a gas column density of $N_{\mathrm{H_2}}$ = 2.8 $\times$ 10$^{21}$ cm$^{-2}$ using the $N_{\mathrm{H_2}}$/$A_V$ conversion factor in Section \ref{sec_h2_sed}. Results of hot core simulations for the Galactic case (open squares) and the LMC case (filled squares) are plotted. The red filled squares represent the metallicity-scaled abundances for the LMC case, where the abundances are multiplied by three.
} \label{peak_vs_Av} \end{center} \end{figure*} \subsection{Astrochemical Simulations} \label{sec_disc_theory} In this section, we use astrochemical simulations to interpret the observed chemical characteristics of low-metallicity hot cores in the LMC. The simulations include a gas-grain chemical network coupled with a toy physical model, aiming at simulating the chemical evolution of a star-forming core up to the hot core stage. We here consider three different evolutionary stages (i.e., cold, warm-up, and post-warm-up), where the physical conditions (density, temperature, and extinction) vary at each stage. The first stage corresponds to the quiescent cold cloud, the second stage mimics the collapsing and warming-up core, and the third stage corresponds to the high-temperature core before the formation of an \ion{H}{2} region. The second and third stages correspond to the hot core. Two different setups for the initial elemental abundances and gas-to-dust ratio are considered to simulate hot core chemistry under the LMC and Galactic conditions. Details of our astrochemical simulations are described in Section \ref{app_model} in the Appendix. The chemical history during the ice-forming stage is suggested to play an important role in subsequent hot core chemistry (Section \ref{sec_disc_molab}). Thus, here we focus on the effect of the initial physical conditions on the chemical compositions of a hot core. In the astrochemical simulations, we have varied the dust extinction parameter ($A_V$) at the first cold stage, and examined how the subsequent hot core compositions are affected. Here $A_V$ values are coupled with the grain temperature using Eq. \ref{Eq_Hoc} in the Appendix. Figure \ref{peak_vs_Av} shows the peak molecular abundances that are achieved during the hot core stage. The plotted peak abundances correspond to the maximum achievable abundances of hot cores at each condition, since hot core chemistry is time-dependent. It is seen from the figure that the maximum achievable abundances of CH$_3$OH gas in a hot core significantly decrease as the visual extinction of the first cold stage (ice-forming stage) decreases. The decrease of the abundance in the low $A_V$ regime is particularly enhanced in the LMC condition. The main reason is that the abundance of solid CH$_3$OH is sensitive to the dust temperature. As discussed in the previous section, CH$_3$OH is mainly formed by grain surface reactions at the cold stage, and then released into the gas phase at the hot core stage. The hydrogenation reaction of CO mainly controls the formation of CH$_3$OH on surfaces, but this reaction is inhibited when the grain surface temperature increases, because of the high volatility of atomic hydrogen. The effect is enhanced in the LMC case, because of the lower $A_V$ at the ice-forming stage, which leads to a higher dust temperature. A similar behavior is also seen in HNCO, suggesting the importance of hydrogenation for its production. The present simulations are consistent with the picture provided by the warm ice chemistry hypothesis \citep{ST16, ST16b} and previous astrochemical simulations dedicated to the LMC condition \citep{Ach15,Ach18,Pau18}. In the high $A_V$ regime, the peak CH$_3$OH abundances of the LMC case gradually approach the metallicity-corrected Galactic CH$_3$OH abundances. This could be because, in both the LMC and Galactic cases, the grain surface is cold enough to trigger the CO hydrogenation, and the resultant CH$_3$OH abundances are regulated by the elemental abundances.
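The numerical experiment of Figure \ref{peak_vs_Av} can be summarized by the following schematic sweep. The toy function below is emphatically not the gas-grain network of the Appendix; under the stated (arbitrary) assumptions it only mimics the qualitative suppression of grain-surface CO hydrogenation at low effective extinction.
\begin{verbatim}
import math

def peak_ch3oh_abundance(av_cold, metallicity=1.0):
    # Toy stand-in for the gas-grain model: CO hydrogenation on grains is
    # suppressed when the ice-forming stage is poorly shielded (warm dust).
    av_eff = av_cold * metallicity  # low dust-to-gas ratio -> less shielding
    hydrogenation = 1.0 / (1.0 + math.exp(-(av_eff - 4.0)))  # arbitrary threshold
    return 1.0e-7 * metallicity * hydrogenation  # elemental C scales with Z

for av in (2, 5, 10, 30):  # initial A_V of the cold stage [mag]
    gal = peak_ch3oh_abundance(av, metallicity=1.0)
    lmc = peak_ch3oh_abundance(av, metallicity=1.0 / 3.0)
    print(f"A_V={av:>2} mag  Galactic={gal:.2e}  LMC(x3)={3.0 * lmc:.2e}")
\end{verbatim}
At large $A_V$ the metallicity-scaled LMC value converges to the Galactic one, while at small $A_V$ it is suppressed far below it, reproducing the trend discussed above.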
The present astrochemical simulations suggest that the large chemical diversity of organic molecules seen in LMC hot cores is related to the different physical conditions at the initial stage of star formation. The abundances of particular organic molecules such as CH$_3$OH and HNCO decrease when a prestellar ice-forming cloud is less shielded, because surface hydrogenation reactions are inhibited at increased dust temperatures. This effect is particularly enhanced in a low-metallicity condition due to the low dust content in a star-forming core. Molecular species that are mainly produced by high-temperature gas-phase chemistry show different behavior. SO$_2$ is one such case. It is suggested to be a key hot core tracer at low metallicity, since the metallicity-scaling law applies to its abundances (see Section \ref{sec_disc_molab}). Such a tendency is seen in the present simulation results. The peak abundances of SO$_2$ in a hot core, after correction for the metallicity, are nearly comparable with the Galactic cases (Fig. \ref{peak_vs_Av}). Also, SO$_2$ abundances are less affected by the initial $A_V$ at the ice-forming stage. A major molecular reservoir of sulfur in the cold stage is solid H$_2$S in our simulations. It is released into the gas phase at the hot core stage and experiences subsequent chemical synthesis into SO$_2$; i.e., H$_2$S $\xrightarrow{\mathrm{H}}$ SH $\xrightarrow{\mathrm{H}}$ S $\xrightarrow{\mathrm{OH/O_2}}$ SO $\xrightarrow{\mathrm{OH}}$ SO$_2$ \citep[e.g.,][]{Cha97,NM04}. This chemical sequence can reset the ice compositions that were accumulated at the cold stage and helps initialize the major sulfur reservoir into atomic sulfur. Atomic sulfur is further synthesized into SO$_2$, but since it is a major product, the SO$_2$ abundance might be directly regulated by the elemental abundance of sulfur, which is roughly proportional to the metallicity. We speculate that this \textit{reset} effect contributes to the metallicity-scaled hot core compositions of particular molecular species. NO in hot cores is also suggested to be mainly formed by high-temperature gas-phase chemistry, namely the neutral-neutral reaction between N and OH \citep[e.g.,][]{Her73,Pin90,NM04}. A major molecular reservoir of nitrogen in the cold stage is either solid N$_2$ or NH$_3$ in our simulations. As with SO$_2$, the parent species leading to the formation of NO would experience the chemical reset at the hot core stage. This could be the reason why NO abundances in a hot core are less affected by the physical conditions of the initial ice-forming stage (Fig. \ref{peak_vs_Av}). Note that the simulated peak NO abundances in the LMC case become comparable with those of the Galactic case in the low $A_V$ regime. The behavior is consistent with the present observations, as LMC hot cores show NO abundances that are nearly comparable with those of Galactic hot cores despite the low nitrogen abundance. We speculate that the increased production of gaseous OH at the hot core stage may contribute to this, since the peak OH abundances are higher in the LMC simulations, as shown in Figure \ref{peak_vs_Av}. The efficient production of OH could be related to lower O/H or O/H$_2$ ratios at low metallicity. Alternatively, the increased production of solid NO at the prestellar stage may also contribute to the overproduction of NO in LMC hot cores.
Gas-grain astrochemical simulations of a cold molecular cloud with a new set of atomic binding energies have indeed reported an increase of solid NO with increasing grain temperature \citep{STNN18}. More detailed and sophisticated astrochemical simulations of low-metallicity hot cores, involving the latest chemical data and various physicochemical mechanisms, are required for a more quantitative interpretation of the observed hot core compositions at different metallicities. Further simulations will be presented in a future paper. \subsection{Infrared spectral characteristics of ST16} \label{sec_disc_ir} The observed hot core region corresponds to the infrared center of ST16. No emission line components (i.e. hydrogen recombination lines or fine-structure lines from ionized metals) are seen in the near- to mid-infrared spectrum of ST16 \citep{Sea09,ST16}. Despite the high bolometric luminosity, the source is still in an early evolutionary stage before the formation of a prominent \ion{H}{2} region. This would indicate that the central massive protostar has a low effective temperature ($<$10,000{\rm\:K}) and a large radius ($>$100 $R_\odot$), which is theoretically predicted in the case of a high accretion rate of $>$$10^{-3}\:M_\odot{\rm\:yr^{-1}}$ \citep{Hos09,Tan18}. Abundances of solid molecules along the line of sight towards the infrared center of ST16 are summarized in Table \ref{tab_ice}. Elemental abundances of gas-phase oxygen and carbon in dense clouds in the LMC, after considering the depletion into dust grain material, are estimated to be 4.0 $\times$ 10$^{-5}$ for oxygen and 1.5 $\times$ 10$^{-5}$ for carbon (w.r.t. H$_2$), according to the LMC's low-metal abundance model presented in \citet{Ach15}. The total fractional abundance of elemental oxygen in solid H$_2$O and CO$_2$ (w.r.t. H$_2$) in ST16 is about 4.5 $\times$ 10$^{-6}$. This would suggest that a non-negligible fraction ($\sim$10 $\%$) of elemental oxygen still remains in ices in the direction of ST16. Because the temperature of the hot core region is high enough for ice sublimation, these ices would exist in ST16's cold outer envelope located on the foreground side towards the observer. Note that, for more comprehensive estimates of gas/ice ratios, future observations of gas-phase H$_2$O, CO$_2$, and CO will be important. \begin{deluxetable*}{ l c c c c} \tablecaption{Ice abundances towards ST16 \label{tab_ice}} \tablewidth{0pt} \tabletypesize{\footnotesize} \tablehead{ \colhead{} & \colhead{H$_2$O ice} & \colhead{CO$_2$ ice} & \colhead{CO ice} & \colhead{CH$_3$OH ice} } \startdata $N$(X) (10$^{17}$ cm$^{-2}$) & 19.6 $\pm$ 3.2 & 2.7 $\pm$ 0.2 & $<$2 & $<$1.2 \\ $N$(X)/$N_{\mathrm{H_2}}$ & (3.5 $\pm$ 1.0) $\times$ 10$^{-6}$ & (4.8 $\pm$ 1.7) $\times$ 10$^{-7}$ & $<$4 $\times$ 10$^{-7}$ & $<$2.4 $\times$ 10$^{-7}$ \\ \enddata \tablecomments{ Tabulated ice abundances are adapted from \citet{ST16}. We use $N_{\mathrm{H_2}}$ = (5.6 $\pm$ 0.6) $\times$ 10$^{23}$ cm$^{-2}$ based on this work. } \end{deluxetable*} \subsection{Isotope abundances of sulfur} \label{sec_disc_isotop} Isotope abundances of $^{32}$S, $^{34}$S, and $^{33}$S, based on the present observations of SO, SO$_2$, CS, and their isotopologues, are summarized in Table \ref{tab_iso}. The $^{32}$S/$^{34}$S ratios for SO and SO$_2$ (23 and 15) in ST16 are well consistent with those estimated for the ST11 hot core and the N113 star-forming region in previous studies \citep[$\sim$15,][]{Wan09,ST16b}.
This would suggest that both the $^{32}$SO(3$_{3}$--2$_{3}$) and $^{32}$SO$_2$ lines are optically thin in the direction of ST16. The $^{32}$S/$^{34}$S ratio in the LMC sources is about a half of the solar neighborhood value \citep[$\sim$30,][]{Chi96b}, suggesting an overabundance of $^{34}$S in the LMC. The $^{32}$S/$^{33}$S ratios for SO and SO$_2$ (52 and 51) in ST16 are also, within the uncertainty, consistent with the previously-reported $^{32}$S/$^{33}$S ratio of 40 $\pm$ 17 estimated based on observations of the ST11 hot core \citep{ST16b}. The $^{32}$S/$^{33}$S ratio of SO and SO$_2$ in the LMC is significantly lower than a typical solar neighborhood value by a factor of 3--4. As well as $^{34}$S, $^{33}$S is also overabundant in the LMC. Sulfur is an $\alpha$ element, and massive stars mainly contribute to its nucleosynthesis. A theoretical model of Galactic chemical enrichment predicts that the $^{32}$S/$^{33,34}$S ratios increase with decreasing metallicity, because minor isotopes are synthesized from the seeds of major isotopes as secondary elements in core-collapse supernovae, and thus more minor isotopes are produced at higher metallicity \citep{Kob11}. The trend is consistent with the observations of sulfur isotopes in our Galaxy, which report an increase of the $^{32}$S/$^{34}$S ratio from the inner Galaxy toward the solar neighborhood \citep[see Fig. 3 in][]{Chi96b}. The sulfur isotope ratios in the LMC, however, significantly deviate from this trend. The observed $^{32}$S/$^{34}$S and $^{32}$S/$^{33}$S ratios in ST16 are lower than those of the model prediction at half-solar metallicity by factors of two and four, respectively \citep[see Table 3 in][]{Kob11}. The LMC's sulfur isotope ratios also deviate from the above-mentioned increasing trend of the $^{32}$S/$^{34}$S ratio from the inner Galaxy to the solar neighborhood. Interestingly, low $^{32}$S/$^{34}$S ratios are observed in millimeter molecular absorption line systems at redshifts of 0.68 and 0.89 \citep[$^{32}$S/$^{34}$S $\sim$9 for CS/C$^{34}$S and H$_2$S/H$_2$$^{34}$S,][]{Wal16}. The reason for the characteristic sulfur isotope abundance ratios in the LMC still remains unexplained. \begin{deluxetable*}{ l c c c c c c c} \tablecaption{Isotope abundances of sulfur \label{tab_iso}} \tablewidth{0pt} \tabletypesize{\footnotesize} \tablehead{ \colhead{ } & \multicolumn{4}{c}{ST16\tablenotemark{a}} & \colhead{ST11\tablenotemark{b}} & \colhead{N113\tablenotemark{c}} & \colhead{Solar neighborhood\tablenotemark{d}} \\ \cline{2-5} \colhead{} & \colhead{SO} & \colhead{SO$_2$} & \colhead{CS} & \colhead{Weighted Mean\tablenotemark{e}} & \colhead{SO$_2$} & \colhead{CS} & \colhead{CS} } \startdata $^{32}$S/$^{34}$S & 23 $\pm$ 8 & 15 $\pm$ 3 & 19 $\pm$ 3 & 17 $\pm$ 2 & 14 $\pm$ 3 & $\sim$15 & $\sim$30 \\ $^{32}$S/$^{33}$S & 52 $\pm$ 5 & 51 $\pm$ 15 & 72 $\pm$ 18 & 53 $\pm$ 5 & 40 $\pm$ 17 & $<$100 & $\sim$180 \\ $^{34}$S/$^{33}$S & 2 $\pm$ 1 & 3 $\pm$ 1 & 4 $\pm$ 1 & 3 $\pm$ 1 & 3 $\pm$ 2 & $<$7 & $\sim$6 \\ \enddata \tablecomments{ $^a$This work; $^b$ALMA observations towards a LMC hot core, ST11 \citep{ST16b}; $^c$Single-dish observations towards a LMC star-forming region, N113 \citep{Wan09}; $^d$\citet{Chi96b}; $^e$The weight is the inverse of the squared error value.
} \end{deluxetable*} \subsection{Rotating protostellar envelope traced by $^{34}$SO and SO$_2$} \label{sec_disc_rotation} A signature of a rotating protostellar envelope is seen in the velocity maps of $^{34}$SO and SO$_2$ (Figure \ref{mom1b}). The maps are constructed based on the original Band 7 images without the beam restoration, where the beam size corresponds to 0.090 pc $\times$ 0.076 pc at the LMC. As shown in the figure, the east side of the core is red-shifted and the west side is blue-shifted, and the velocity separation is about 2--3 km s$^{-1}$. The direction of the velocity separation is nearly perpendicular to the outflow axis, which is directed from north-east to south-west (see Section \ref{sec_disc_CCH_CN}). This would support the idea that $^{34}$SO and SO$_2$ are tracing the envelope rotation, rather than the outflow motion. Similar rotation motions are observed in Galactic high-mass protostellar objects \citep[e.g.,][]{Beu07b,Fur10,Bel10,Bel11,Zha19}. The present result is the first detection of a rotating protostellar envelope outside our Galaxy. SO does not show this rotation motion in the present data. This may be due to the contamination of a foreground cold component, because the strong SO lines (6$_{6}$--5$_{5}$ and 8$_{7}$--7$_{6}$) are moderately optically thick (see Section \ref{sec_rd}), while the optically thin SO(3$_{3}$--2$_{3}$) is too weak to analyze the velocity structure. Figure \ref{mom1b} also shows the velocity map for high-excitation CH$_3$OH lines, but the rotating structure is not seen here, though these CH$_3$OH lines trace a warm and dense region close to the protostar. This would indicate a different spatial distribution of CH$_3$OH and $^{34}$SO/SO$_2$ within the 0.1 pc scale region. The different distributions of the sulfur-bearing species and CH$_3$OH are also supported by their different line widths. Figure \ref{fwhm_Eu} compares the measured line FWHMs of the selected molecular species with their upper state energies. Blended lines and low-S/N lines are excluded here. The figure shows that SO$_2$, SO, and $^{34}$SO have relatively large line widths ($\gtrsim$6 km s$^{-1}$). A well-known shock tracer, SiO, also shows a broad line width. On the other hand, CH$_3$OH lines show relatively narrow line widths ($\lesssim$5 km s$^{-1}$ for $E_{u}$ $<$150 K and $\lesssim$3 km s$^{-1}$ for $E_{u}$ $>$150 K). Other molecules, H$_2$CO, HNCO, and OCS, which possibly trace a compact hot core region, show intermediate line widths between those of SO$_2$ and CH$_3$OH. The relatively large line widths of SO and SO$_2$ would indicate that they arise from a more turbulent region compared with other molecular species. Given the similar line widths of SiO and SO/SO$_2$, such a turbulent region may be related to shocks. We finally note that an infall motion is not seen in the present data, because we do not cover a fully optically thick molecular tracer in this work. Future higher-spatial-resolution multiline observations are required to further understand the dynamics of molecular gas associated with a low-metallicity massive protostellar envelope. \begin{figure*}[btp] \begin{center} \includegraphics[width=18.0cm]{f10.eps} \caption{ Velocity maps for $^{34}$SO, SO$_2$, and CH$_3$OH. The color scale indicates the offset velocity relative to the systemic velocity of 264.5 km s$^{-1}$.
The contour represents the integrated intensity, where the levels are 6 $\%$, 20 $\%$, 50 $\%$, and 80 $\%$ of the peak value for $^{34}$SO and SO$_2$, and 10 $\%$, 20 $\%$, 50 $\%$, and 80 $\%$ for CH$_3$OH. The dotted line represents a possible outflow axis inferred from the CCH and CN distributions (see Section \ref{sec_disc_CCH_CN}). The maps are constructed by averaging the following Band 7 lines: $^{34}$SO($N_J$ = 8$_{8}$--7$_{7}$ and 8$_{9}$--7$_{8}$), SO$_2$(18$_{4,14}$--18$_{3,15}$ and 14$_{4,10}$--14$_{3,11}$), CH$_3$OH(7$_{0}$ E--6$_{0}$ E, 7$_{-1}$ E--6$_{-1}$ E, 7$_{2}$ A$^-$--6$_{2}$ A$^-$, 7$_{3}$ A$^+$--6$_{3}$ A$^+$, and 7$_{-2}$ E--6$_{-2}$ E). The blue open star represents the position of a high-mass YSO. The synthesized beam size is shown by the gray filled ellipse. See Section \ref{sec_disc_rotation} for details. } \label{mom1b} \end{center} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=9.5cm]{f11.eps} \caption{ Line FWHMs vs. upper state energies. Blended lines and low-S/N lines are excluded. Molecular names are indicated by colors (SO$_2$: red, SO: dark green, $^{34}$SO: light green, CH$_3$OH: blue, H$_2$CO: cyan, HNCO: purple, OCS: orange, SiO: black, others: gray). } \label{fwhm_Eu} \end{center} \end{figure} \subsection{Outflow cavity structures traced by CCH and CN} \label{sec_disc_CCH_CN} The spatial distributions of the molecular radicals CCH and CN are similar to each other. In the LTE and optically thin case, the expected intensity ratio of the CCH lines at 349.33771 and 349.39928 GHz is 1.28 : 1.00, which is consistent with the observed integrated intensity ratio of (1.25 $\pm$ 0.04) : (1.00 $\pm$ 0.05) based on the data in Table \ref{tab_lines_others}, suggesting that the lines are optically thin. Similarly, under the same assumption, the expected ratio of the CN lines at 340.03155, 340.03541, and 340.24777 GHz is 1.00 : 1.00 : 3.05, while the observed integrated intensity ratio is (1.00 $\pm$ 0.10) : (1.11 $\pm$ 0.11) : (2.75 $\pm$ 0.08), suggesting that they are nearly optically thin. Obviously, the CCH and CN distributions are not centered at the hot core position. They show emission peaks in the north-east and south-west directions from the hot core. Their distribution seems to trace bipolar outflow structures originating from the protostar associated with the hot core. A well-collimated structure is seen particularly in the north direction. The width of the collimated structure is $\sim$0.1 pc, almost the same as the beam size. Given the early evolutionary stage of ST16, it is likely that molecular outflows are associated with the source. CCH and CN emission is known to be bright in photodissociation regions (PDRs), and these radicals are suggested to be sensitive tracers of UV-irradiated regions \citep[e.g.,][]{Fue93, Jan95, Rod98, Pet17}. The present characteristic distributions of CCH and CN presumably trace the PDR-like outflow cavity structure, which is irradiated by UV light. According to the present dust continuum data, the visual extinction from the outer edge of the dust clump to the CCH/CN emission region is at least 20 mag. Thus, a possible UV source for the photochemistry is the high-mass protostar located at the hot core position, rather than the external radiation field. Such a UV-irradiated outflow cavity is actually observed in Galactic star-forming regions via CCH emission \citep{Zha18b}.
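For illustration, the optical-depth check applied above to the CCH and CN lines reduces to a short error-propagation calculation; the Python sketch below uses the CCH numbers quoted in this subsection.
\begin{verbatim}
import math

def ratio_with_error(i1, e1, i2, e2):
    # Intensity ratio i1/i2 with first-order error propagation.
    r = i1 / i2
    dr = r * math.sqrt((e1 / i1) ** 2 + (e2 / i2) ** 2)
    return r, dr

# CCH lines at 349.33771 / 349.39928 GHz: observed normalized integrated
# intensities vs. the LTE optically-thin expectation of 1.28.
r_obs, dr_obs = ratio_with_error(1.25, 0.04, 1.00, 0.05)
r_lte = 1.28
n_sigma = abs(r_obs - r_lte) / dr_obs
print(f"observed {r_obs:.2f} +/- {dr_obs:.2f} vs LTE {r_lte:.2f} "
      f"({n_sigma:.1f} sigma)")  # agreement within ~1 sigma -> optically thin
\end{verbatim}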
The presence of high-velocity outflow gas needs to be tested by future high-spatial-resolution observations of strong and optically thick outflow tracers such as CO. \section{Summary} \label{sec_sum} We present the results of 0$\farcs$40 (0.1 pc) scale submillimeter observations towards a high-mass YSO (ST16, L = 3 $\times$ 10$^5$ L$_{\sun}$) in the LMC with ALMA. As a result, a new hot molecular core is discovered in the LMC. The following conclusions are obtained in this work: \begin{enumerate} \item Molecular emission lines of CH$_3$OH, H$_2$CO, CCH, H$^{13}$CO$^{+}$, CS, C$^{34}$S, C$^{33}$S, SO, $^{34}$SO, $^{33}$SO, SO$_2$, $^{34}$SO$_2$, $^{33}$SO$_2$, OCS, H$_2$CS, CN, NO, HNCO, H$^{13}$CN, CH$_3$CN, and SiO are detected from the compact region ($\sim$0.1 pc) associated with a high-mass YSO. In total we have detected 90 transitions, out of which 30 lines are due to CH$_3$OH and 27 lines are due to SO$_2$ and its isotopologues. Complex organic molecules larger than CH$_3$OH are not detected. \item Rotation analyses show that ST16 is associated with hot molecular gas ($T_{\mathrm{rot}}$ $>$ 100 K) as traced by CH$_3$OH and SO$_2$. The line of sight also contains warm ($\sim$50--60 K) gas components traced by CH$_3$OH, SO$_2$, $^{34}$SO, OCS, and CH$_3$CN, in addition to extended and cold ($\sim$25 K) gas traced by SO. \item The total gas column density towards ST16 is estimated by using several different methods (continuum analysis, SED analysis, and dust absorption band analysis). The estimated H$_2$ column density is $N_{\mathrm{H_2}}$ = 5.6 $\times$ 10$^{23}$ cm$^{-2}$ (corresponding to $A_V$ = 200 mag). The average gas number density is estimated to be $n_{\mathrm{H_2}}$ = 3 $\times$ 10$^6$ cm$^{-3}$. \item The characteristics of ST16, namely the compact source size, the high gas temperature, the high density, the association with a high-mass YSO, and the presence of chemically-rich molecular gas, strongly suggest that the source is associated with a hot molecular core. \item Organic molecules show a large abundance variation in low-metallicity hot cores. There are currently two organic-poor hot cores \citep[ST16 in this work and ST11 in][]{ST16b} and two organic-rich hot cores \citep[N113 A1 and B3 in][]{Sew18} in the LMC. The different chemical history during the ice formation stage could contribute to the differentiation of organic-poor and organic-rich hot cores. \item High-excitation SO$_2$ lines will be a useful tracer of low-metallicity hot core chemistry. This is because (i) SO$_2$ mainly originates from a hot core region, (ii) it is commonly seen in LMC hot cores, and (iii) its abundances in LMC hot cores roughly scale with the LMC's metallicity. This is in remarkable contrast to the abundance of a classical hot core tracer, CH$_3$OH, which shows a large abundance variation in low-metallicity hot cores. CS and H$_2$CS are significantly less abundant in organic-poor hot cores. \item Nitrogen-bearing molecules in ST16 are generally less abundant than those in Galactic hot cores. An exception is NO, whose abundance is comparable with Galactic values, despite the low elemental abundance. An overabundance of NO is also reported in the other LMC hot core (ST11). \item Isotope abundance ratios of $^{32}$S, $^{33}$S, and $^{34}$S in the ST16 hot core are presented. Based on SO, SO$_2$, and their isotopologues, we obtain $^{32}$S/$^{34}$S $\sim$15 and $^{32}$S/$^{33}$S $\sim$40, which are lower than solar neighborhood values by factors of 2 and 4.5, respectively.
Both $^{34}$S and $^{33}$S are overabundant in the LMC. \item A rotating protostellar envelope is detected for the first time outside our Galaxy via SO$_2$ and $^{34}$SO lines. \item CCH and CN show clearly different spatial distributions compared to other molecular lines. They seem to trace PDR-like cavity regions created by protostellar outflows. \item Our astrochemical simulations for a low-metallicity hot core suggest that the large chemical diversity of organic molecules (e.g., CH$_3$OH) seen in LMC hot cores is related to the different physical conditions at the initial stage of star formation. Particular molecular species that are mainly produced by high-temperature gas-phase chemistry in a hot core (e.g., SO$_2$) are likely to show metallicity-scaled molecular abundances. \end{enumerate} \acknowledgments This paper makes use of the following ALMA data: ADS/JAO.ALMA$\#$2016.1.00394.S and $\#$2018.1.01366.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan) and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work has made extensive use of the Cologne Database for Molecular Spectroscopy and the molecular database of the Jet Propulsion Laboratory. We use data obtained by IRSF/SIRIUS, VLT/ISAAC, \textit{AKARI}, \textit{Spitzer}, and \textit{Herschel}. We are grateful to all the members who contributed to these projects. T.S. is supported by a Grant-in-Aid for Scientific Research on Innovative Areas (19H05067) and the Leading Initiative for Excellent Young Researchers, MEXT, Japan. A.D. acknowledges the ISRO RESPOND program (Grant No. ISRO/RES/2/402/16-17) and a Grant-in-Aid from the Higher Education Department of the Government of West Bengal. K.E.I.T. acknowledges support from NAOJ ALMA Scientific Research Grant Number 2017-05A and JSPS KAKENHI Grant Number JP19K14760. Y.N. is supported by NAOJ ALMA Scientific Research Grant Number 2017-06B and JSPS KAKENHI Grant Number JP18K13577. Finally, we would like to thank an anonymous referee for careful reading and useful comments.
\chapter*{Meta-Learning of NAS for Few-shot Learning in Medical Image Applications}\label{chap1} \begin{aug} \author[addressrefs={ad1}]% {\fnm{Viet-Khoa} \snm{Vo-Ho}}% \author[addressrefs={ad1}]% {\fnm{Kashu} \snm{Yamazaki}}% \author[addressrefs={ad2}]% {\fnm{Hieu} \snm{Hoang}}% \author[addressrefs={ad2}]% {\fnm{Minh-Triet} \snm{Tran}}% \author[addressrefs={ad1}]% {\fnm{Ngan} \snm{Le}}% \address[id=ad1]% {Department of Computer Science \& Computer Engineering, University of Arkansas, Fayetteville, USA 72703}% \address[id=ad2]% {Department of Computer Science, VNUHCM-University of Science, Vietnam}% \end{aug} \begin{abstract} Deep learning methods have been successful in solving tasks in machine learning and have made breakthroughs in many sectors owing to their ability to automatically extract features from unstructured data. However, their performance relies on manual trial-and-error processes for selecting an appropriate network architecture, hyperparameters for training, and pre-/post-procedures. Even though it has been shown that network architecture plays a critical role in learning feature representations from data and in the final performance, searching for the best network architecture is computationally intensive and heavily relies on researchers' experience. Automated machine learning (AutoML) and its advanced techniques, such as \textit{Neural Architecture Search (NAS)}, have been proposed to address those limitations. Not only in general computer vision tasks, NAS has also motivated various applications in multiple areas, including medical imaging. In medical imaging, NAS has made significant progress in improving the accuracy of image classification, segmentation, reconstruction, and more. However, NAS requires the availability of large annotated data, considerable computation resources, and pre-defined tasks. To address such limitations, meta-learning has been adopted in the scenarios of few-shot learning and multiple tasks. In this book chapter, we first present a brief review of NAS by discussing well-known approaches in search space, search strategy, and evaluation strategy. We then introduce various NAS approaches in medical imaging with different applications such as classification, segmentation, detection, reconstruction, etc. Meta-learning in NAS for few-shot learning and multiple tasks is then explained. Finally, we describe several open problems in NAS. \end{abstract} \begin{keywords} \kwd{Neural Architecture Search} \kwd{NAS} \kwd{Medical Imaging} \kwd{Applications} \kwd{AutoML} \kwd{Meta Learning} \end{keywords} \end{frontmatter} \section{Neural Architecture Search: Background}\label{sec1} From the earliest LeNet \cite{lenet5} to the recent deep learning networks, designing network architectures heavily relies on prior knowledge and the experience of researchers. Furthermore, searching for an optimal and effective network architecture is also time-consuming and computationally intensive due to the immense number of experiments required for every architecture.
Automated machine learning (AutoML) has recently been proposed to meet such demands, automatically designing the network architecture instead of relying on human experience and repeated manual tuning. Neural Architecture Search (NAS) is an instance of hyperparameter optimization that aims to automatically search for the optimal network architecture for a given task, instead of handcrafting the building blocks or layers of the model \cite{cheng2020hierarchical}. NAS methods have already identified more efficient network architectures in general computer vision tasks. MetaQNN \cite{baker2016designing} and NAS-RL \cite{zoph2016neural} are the two earliest works in the field of NAS. By automating the design of a neural network for the task at hand, NAS has tremendous potential to surpass the human design of deep networks for both visual recognition and natural language processing \cite{liu2018progressive, chen2018searching, perez2018efficient, tan2019mnasnet}. This motivated various NAS applications in medical image tasks, e.g. classification, segmentation, and reconstruction. In a standard problem setup, the outer optimization stage searches for architectures with good validation performance while the inner optimization stage trains networks with the specified architecture. However, a full evaluation of the inner loop is expensive since it requires a many-shot neural network to be trained. A typical NAS technique contains two stages: the searching stage, which aims to find a good architecture, and the evaluating stage, where the best architecture is trained from scratch and validated on the test data. Corresponding to these two stages, there are three primary components: search space $\mathcal{A}$, search strategy, and evaluation strategy. Denote by $A$ a single network in $\mathcal{A}$, i.e. $A \in \mathcal{A}$. Let $S^{train}$ and $S^{val}$ be two datasets corresponding to the training and validation sets. The performance of the network $A$, which is trained on $S^{train}$ and evaluated on $S^{val}$, is measured by a function $\mathcal{F}$. In general, NAS can be mathematically formulated as follows: \begin{equation} A^{*} = \underset{A \in \mathcal{A}}{\operatorname{argmax}} \ \mathcal{F}\big(S^{val}\big(S^{train}(A)\big)\big) \end{equation} Fig.\ref{fig:NAS} visualizes a general NAS architecture with three components: search space, search strategy, and evaluation strategy. \begin{figure}[!t] \begin{center} \includegraphics[width=0.8\linewidth]{img/NAS.pdf} \end{center} \caption{NAS algorithm with three components: search space, search strategy, and evaluation strategy.} \label{fig:NAS} \end{figure} \subsection{Search Space} In principle, the search space specifies a set of operations (e.g. convolution, normalization, activation, etc.) and how they are connected, and thereby defines which architectures can be represented. Thus, search space design has a key impact on the final performance of the NAS algorithm. In general, the search space defines a set of configurations that can be continuous or discrete hyperparameters \cite{liu2018darts} in a structured or unstructured fashion. In NAS, search spaces usually involve discrete hyperparameters with a structure that can be captured with a directed acyclic graph (DAG) \cite{pham2018efficient, liu2018darts}. The most naive approach to designing the search space for neural network architectures is to depict network topologies, either CNN or RNN, with a list of sequential layer-wise operations, as can be seen in \cite{baker2016designing, zoph2016neural}.
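As a concrete illustration of the formulation above, the following minimal Python sketch runs random search over such a naive layer-wise search space; \texttt{train\_and\_evaluate} is a hypothetical stand-in for training a child network on $S^{train}$ and scoring it on $S^{val}$.
\begin{verbatim}
import random

OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]   # candidate operations
WIDTHS = [16, 32, 64]                                 # candidate layer widths

def sample_architecture(n_layers=6):
    # A in the search space: a list of sequential layer-wise operations.
    return [(random.choice(OPS), random.choice(WIDTHS)) for _ in range(n_layers)]

def train_and_evaluate(arch):
    # Hypothetical stand-in for training A on S_train and scoring on S_val.
    return random.random()  # placeholder for the validation metric F

best_arch, best_score = None, -1.0
for _ in range(20):                    # random search as the search strategy
    arch = sample_architecture()
    score = train_and_evaluate(arch)
    if score > best_score:             # keep the argmax over sampled A
        best_arch, best_score = arch, score
print(best_score, best_arch)
\end{verbatim}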
Because each operation is associated with different layer-specific parameters that are implemented in a hard-coded manner, this network representation strongly depends on expert knowledge. To deal with the large number of nodes and edges in an entire architecture, as well as to reduce the complexity of NAS search tasks, search spaces are usually defined over smaller building blocks and a learned meta-architecture that are composed into a larger architecture \cite{elsken2019neural, wang2020m, elsken2020meta}. NASNet \cite{zoph2018learning} is one of the first works to make use of a cell-based neural architecture. There are two types of cells in NASNet: normal cells and reduction cells. The former aim to extract advanced features while keeping the spatial resolution unchanged, whereas the latter aim to reduce the spatial resolution. A complete network contains many blocks, and each block consists of multiple repeated normal cells followed by a reduction cell. An illustration of NASNet is shown in Fig.\ref{fig:nasnet}. Leveraging NASNet, \cite{liu2018darts, real2019regularized, piergiovanni2019evolving, zhong2018practical} proposed similar cell-based search spaces, but the reduction cells are eliminated, as in Block-QNN \cite{zhong2018practical} and DPPNet \cite{piergiovanni2019evolving}, and replaced by pooling layers. In order to deal with U-Net-like encoder-decoder architectures, AutoDispNet \cite{saikia2019autodispnet} contains three types of cells: normal, reduction, and upsampling. The encoder path comprises alternating connections of normal cells and reduction cells, while the decoder path consists of a stack of multiple upsampling cells. More details of the cells from popular cell-based search spaces are theoretically and experimentally studied in \cite{shu2019understanding}. Instead of stacking one or more identical cells, FPNAS \cite{cui2019fast} considers stacking a greater diversity of blocks, which aims to improve neural architecture performance. Instead of a graph of operations, \cite{brock2017smash} consider a neural network as a system with multiple memory blocks which can be read and written. Each layer operation is designed to: firstly, read from a subset of memory blocks; secondly, compute results; and finally, write the results into another subset of blocks. Since the size of the search space is exponentially large or even unbounded, incorporating prior knowledge about properties well-suited for the task can reduce the size of the search space and simplify the search task. This could introduce a human bias, which may prevent the discovery of optimal architectural building blocks that go beyond the current state-of-the-art. \begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{img/NAS-net.pdf} \end{center} \caption{Left: The overall structure of the \textit{search space} of NASNet with two cells: normal cell and reduction cell. The normal cell is repeated $\times n$ and then connected to a reduction cell. Right: Architecture of the best normal cell and reduction cell.} \label{fig:nasnet} \end{figure} \subsection{Search Strategy} Given a search space, there are various search methods to select a configuration of network architecture for evaluation. The search strategy defines the way to explore the search space for the sake of finding high-performance architecture candidates. The most basic search strategies are grid search (i.e.
systematically screening the search space) and random search (i.e. randomly selecting architectures from the search space to be tested) \cite{bergstra2012random}. They are quite effective in practice for a small search space, but they may fail with a large search space. Among hyperparameter optimization search methods, gradient-based approaches \cite{kandasamy2018neural} and Bayesian optimisation \cite{shahriari2015taking} based on Gaussian processes or Gaussian distributions have already proven their usefulness, especially in continuous spaces \cite{kandasamy2016gaussian, klein2017fast}. However, they may not work well with discrete and high-dimensional spaces. To apply the gradient, gradient-based approaches transform the discrete search problem into a continuous optimization problem. With their distributional assumptions, Bayesian approaches rely on the choice of kernels. By contrast, evolutionary strategies \cite{real2019regularized, song2021enas} are more flexible and can be applied to any search space. However, evolutionary methods require defining a set of possible mutations to apply to different architectures. As an alternative, reinforcement learning is used to train a recurrent neural network controller to generate good architectures \cite{zoph2016neural, pham2018efficient}. The conventional NAS algorithms, which leverage gradient-based optimization, Bayesian optimization, evolutionary search, or reinforcement learning, can be prohibitively expensive, as thousands of models are required to be trained in a single experiment. Compared to RL-based algorithms, gradient-based algorithms are more efficient, although even gradient-based algorithms require the construction of a supernet in advance, which also requires considerable expertise. Moreover, because the discrete architecture choices do not map naturally onto gradient-based optimization, both RL-based and gradient-based algorithms often yield ill-conditioned architectures. Among all search strategies, evolutionary strategy (ES) NAS is the most popular; it can use common approaches such as genetic algorithms \cite{sivanandam2008genetic}, genetic programming \cite{langdon2013foundations}, and particle swarm optimization \cite{kennedy1995particle}. Fig.\ref{fig:enas} shows an illustration of the flowchart of an ES algorithm, which takes place in the initial space and the search space sequentially. ES starts with a population being initialized within the initial space. In the population, each individual represents a solution (i.e., a DNN architecture) for NAS and is evaluated by a fitness process. After the fitness of the initial population is evaluated, the whole population starts the evolutionary process within the search space. In the evolutionary process, the population is updated by the selection and the evolutionary operators in each iteration, until the stopping criterion is met. Finally, a population that has finished the evolution is obtained. \begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{img/ENAS.pdf} \end{center} \caption{Top: Flowchart of evolutionary strategy in NAS. Bottom: Pseudocode of evolutionary strategy in NAS.} \label{fig:enas} \end{figure} \subsection{Evaluation Strategy} For each hyperparameter configuration (proposed by the search strategy), we need to evaluate its performance. The evaluation can be benchmarked by either fully or partially training a model with the given hyperparameters, and subsequently measuring its quality on a validation set.
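Both options can be realized by a single evaluation routine whose training budget and early-stopping patience control how partial the training is. The sketch below is schematic: the model-building and training helpers are passed in as assumed callables rather than taken from any specific library.
\begin{verbatim}
def evaluate_configuration(config, build_model, train_one_epoch, accuracy,
                           train_data, val_data, max_epochs=100, patience=5):
    # Train a model with the given hyperparameters and score it on the
    # validation set; the injected callables stand in for a real pipeline.
    model = build_model(config)
    best_acc, epochs_without_gain = 0.0, 0
    for _ in range(max_epochs):          # a small max_epochs -> partial training
        train_one_epoch(model, train_data)
        acc = accuracy(model, val_data)
        if acc > best_acc:
            best_acc, epochs_without_gain = acc, 0
        else:
            epochs_without_gain += 1
        if epochs_without_gain >= patience:  # early stopping speeds up evaluation
            break
    return best_acc
\end{verbatim}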
Full training evaluation is the default method and is known as the first generation of NAS evaluation strategies, which requires thousands of GPU days to achieve the desired result. To speed up the evaluation process, partial training methods make use of early stopping. Hypernetworks \cite{brock2017smash, zhang2018graph}, network morphisms \cite{cai2018path, jin2018efficient}, and weight-sharing \cite{bender2018understanding, liu2018darts} are common NAS evaluation methods. Among these three methods, weight-sharing is the least complex because it does not require training an auxiliary network, while network morphisms are the most expensive, requiring on the order of 100 GPU days. NAS approaches categorized by search space, search strategy, and evaluation strategy are summarized in Fig.\ref{fig:sum_NAS}. \begin{figure*} \dirtree{% .1 Neural Architecture Search (NAS). .2 Search Space. .3 Cell Block. .4 NASNet \cite{zoph2018learning}. .4 Block-QNN \cite{zhong2018practical}. .4 DPPNet \cite{piergiovanni2019evolving}. .4 FPNAS \cite{cui2019fast}. .3 Meta architecture. .4 MetaQNN \cite{baker2016designing}. .4 NAS-RL \cite{zoph2016neural}. .4 M-NAS \cite{wang2020m}. .4 Few-shot \cite{elsken2020meta}. .3 Memory-bank Representation. .4 SMASH \cite{brock2017smash}. .2 Search Strategy. .3 Random Search \cite{bergstra2012random}. .3 Gradient-based \cite{kandasamy2018neural}. .3 Bayesian optimisation \cite{shahriari2015taking}. .3 Evolutionary strategies \cite{real2019regularized, song2021enas}. .3 Reinforcement learning \cite{zoph2016neural, pham2018efficient}. .2 Evaluation Strategy. .3 Fully training. .4 NAS-RL \cite{zoph2016neural}. .4 MetaQNN \cite{baker2016designing}. .3 Partial training. .4 Hypernetworks \cite{brock2017smash, zhang2018graph}. .4 Network morphisms \cite{cai2018path, jin2018efficient}. .4 Weight-sharing \cite{bender2018understanding, liu2018darts, pham2018efficient}. } \caption{Summary of NAS architectures regarding search space, search strategy, and evaluation strategy.} \label{fig:sum_NAS} \end{figure*} \section{NAS for Medical Imaging} The recent breakthroughs in NAS have motivated various applications in medical images such as segmentation, classification, reconstruction, etc. Starting from NASNet \cite{zoph2018learning}, many novel search spaces, search strategies, and evaluation strategies have been proposed for biomedical images. The following sections detail recent efficient NAS approaches for medical imaging applications. \subsection{NAS for Medical Image Classification} In image classification, the search space can be divided into either the network topology level \cite{xie2019exploring, fang2020densely}, where the search is performed on the network topology, or the cell level \cite{liu2018progressive, liu2018darts, pham2018efficient, real2019regularized}, which focuses on searching for optimal cells and applies a predefined network topology. NASNet \cite{zoph2018learning} is considered one of the first successful NAS architectures for image classification. The overall architecture of NASNet together with its normal cells and reduction cells is shown in Fig.\ref{fig:nasnet}. In NASNet, the normal cell returns a feature map of the same dimension, whereas the reduction cell returns a feature map whose height and width are reduced by a factor of two. Both the normal cell and the reduction cell are searched by a controller fashioned from a Recurrent Neural Network (RNN).
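The macro-architecture just described (repeated normal cells punctuated by reduction cells, Fig.\ref{fig:nasnet}) can be sketched in a few lines of Python; the cell objects here are placeholders for the two searched cells.
\begin{verbatim}
def build_nasnet_like(stem, normal_cell, reduction_cell, n_blocks=3, n_repeat=4):
    # Schematic macro-architecture: each block repeats the searched normal
    # cell (resolution unchanged); blocks are separated by reduction cells.
    layers = [stem]
    for block in range(n_blocks):
        layers.extend([normal_cell] * n_repeat)
        if block < n_blocks - 1:
            layers.append(reduction_cell)  # halve the spatial resolution
    return layers

# With string placeholders for the two searched cells:
print(build_nasnet_like("stem", "N", "R", n_blocks=3, n_repeat=2))
# ['stem', 'N', 'N', 'R', 'N', 'N', 'R', 'N', 'N']
\end{verbatim}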
In this section, we take the pioneering NAS approach \cite{zoph2016neural} developed by Google Brain as an example to show how to employ NAS for image classification. The overall architecture of NAS \cite{zoph2016neural} is shown in Fig.\ref{fig:NAS_Le} (top), and the controller, an RNN based on Long Short-Term Memory (LSTM) \cite{hochreiter1997long}, is shown in Fig.\ref{fig:NAS_Le} (bottom). The RNN controller is responsible for generating new architectural hyper-parameters of CNNs and is trained using REINFORCE \cite{williams1992simple}. In NAS, the controller, parameterized by $\theta_C$, predicts the filter height, filter width, stride height, stride width, and number of filters for one layer, and repeats. Every prediction is carried out by a softmax classifier and then fed into the next time step as input. The parameters $\theta_C$ are optimized in order to maximize the expected validation accuracy of the proposed architectures. After the controller predicts a set of architectural hyper-parameters, a neural network with the specified configuration, i.e. a child model, is built and trained to convergence on a dataset (CIFAR-10 is used). In this architecture, the child network accuracy $R$ is utilised as a reward to train the controller under a reinforcement learning mechanism. In NAS, each gradient update to the controller parameters $\theta_C$ corresponds to training one child network to convergence. An upper threshold on the number of layers in the CNNs is used to stop the process of generating new architectures. Each network proposed by the RNN controller is trained on the CIFAR-10 dataset for fifty epochs with the use of vast computational resources (450 GPUs for 3--4 days for a single experiment). The search space contains 12,800 architectures. Based on NAS, many effective NAS approaches to image classification have been proposed, such as evolution-based NAS \cite{real2019regularized} (i.e. using evolution algorithms to simultaneously optimize topology alongside parameters), ENAS \cite{pham2018efficient} (i.e. sharing parameters among child models), DARTS \cite{liu2018darts} (i.e. formulating the task in a differentiable manner to shorten the search to four GPU days), GDAS \cite{dong2019searching} (i.e. completing the search in four GPU hours), and ProxylessNAS \cite{cai2018proxylessnas} (i.e. searching directly on large-scale target tasks and target hardware platforms). \begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{img/NAS_Le.pdf} \end{center} \caption{Top: An overview of Neural Architecture Search. Bottom: The RNN-based NAS controller.} \label{fig:NAS_Le} \end{figure} Inspired by the successes of NAS in computer vision, i.e. image classification, there are several attempts in medical image classification that employ NAS techniques. Building on \cite{elsken2017simple}, which uses the hill-climbing algorithm with network morphism transformations to search for architectures, Kwasigroch et al. \cite{kwasigroch2020neural} proposed a malignant melanoma detection method for skin lesion classification. In this method, the hill-climbing algorithm can be interpreted as a simple evolutionary algorithm with only a network morphism operation. Adopting the AdaNet framework \cite{cortes2017adanet} as the NAS engine, \cite{dai2020optimize} proposed AdaNet-NAS to optimize a CNN model for three-class fMRI signal classification.
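To make the REINFORCE update concrete, the following self-contained sketch replaces the LSTM controller with a single categorical decision (e.g. choosing a filter height); \texttt{child\_accuracy} is a hypothetical stand-in for training a child model and measuring its validation accuracy $R$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_choices = 4                     # e.g. candidate filter heights for one decision
logits = np.zeros(n_choices)      # controller parameters theta_C (one decision)
baseline, lr = 0.0, 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def child_accuracy(action):
    # Hypothetical stand-in: train the sampled child network to convergence
    # and return its validation accuracy R (noisy, choice-dependent here).
    return [0.70, 0.80, 0.75, 0.60][action] + 0.01 * rng.standard_normal()

for _ in range(200):
    p = softmax(logits)
    a = int(rng.choice(n_choices, p=p))   # controller samples one decision
    R = child_accuracy(a)                 # reward = child validation accuracy
    baseline = 0.9 * baseline + 0.1 * R   # moving-average baseline cuts variance
    grad = -p
    grad[a] += 1.0                        # d log p(a) / d logits
    logits += lr * (R - baseline) * grad  # REINFORCE ascent on expected reward
print(softmax(logits))                    # mass concentrates on the best choice
\end{verbatim}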
\subsection{NAS for Medical Image Segmentation} Medical image segmentation faces some unique challenges, such as a lack of annotated data, inhomogeneous intensity, and vast memory usage for processing 3D high-resolution images. 3D U-Net \cite{ronneberger2015u} and V-Net \cite{milletari2016v} are among the first 3D networks designed for medical image segmentation. Later, many other effective CNN-based 3D networks for medical image segmentation, such as cascade-Unet \cite{le2021multi}, UNet++ \cite{zhou2019unet}, DenseNet \cite{huang2017densely}, H-DenseUNet \cite{li2018h}, and nnU-Net \cite{isensee2019automated}, have been proposed. In computer vision, NAS has mainly addressed image classification, and a few recent works have applied NAS to image segmentation, such as FasterSeg \cite{chen2019fasterseg} and Auto-DeepLab \cite{liu2019auto}. In medical analysis, accurate segmentation of medical images is a crucial step in computer-aided diagnosis, surgical planning, and navigation, which are applied in a wide range of clinical settings. Great achievements have been made in medical segmentation thanks to the recent breakthroughs in deep learning, such as 3D U-Net and nnU-Net. However, it remains very difficult to address challenges such as objects that are extremely small with respect to the whole volume, weak boundaries, and variable locations, shapes, and appearances. Furthermore, volumetric image segmentation is extremely expensive to train; thus, it is difficult to attain an efficient 3D architecture search. NAS-Unet \cite{weng2019unet} and V-NAS \cite{zhu2019v} are two of the first NAS architectures for medical segmentation. NAS-Unet is based on a U-like/V-Net backbone (i.e. the DenseNet implementation \cite{jegou2017one} in NAS-Unet and V-Net \cite{milletari2016v} in V-NAS) with two types of cell architectures called DownSC and UpSC, as given in Fig.\ref{fig:NAS-Unet} (left). NAS-Unet uses a cell-based search space, and each search cell contains three types of primitive operations (POs): Down PO (average pooling, max pooling, down cweight, down dilation conv., down depth conv., and down conv.), Up PO (up cweight, up depth conv., up conv., and up dilation conv.), and Normal PO (identity, cweight, dilation conv., depth conv., and conv.), as shown in Fig.\ref{fig:NAS-Unet} (right). In NAS-Unet, both DownSC and UpSC are simultaneously updated by a differentiable architecture search strategy during the search stage. As given in Fig.\ref{fig:NAS-Unet} (left), NAS-Unet contains $L_1$ cells in the encoder path to learn the different levels of semantic context information and $L_1$ cells in the decoder path to restore the spatial information of the probability maps. Compared to DenseNet \cite{huang2017densely}, NAS-Unet replaces the convolution layers with these cells and moves the up-sampling and down-sampling operations into the cells. Leveraging DARTS \cite{liu2018darts}, NAS-Unet constructs an over-parameterized network by Eq. \ref{eq:C}: \begin{equation} C(e_1=MixO_1,\dots,e_E=MixO_E) \label{eq:C} \end{equation} where each edge is a mixed operation that has $N$ parallel paths, denoted as $MixO$, as shown in Fig.\ref{fig:over_params_cell} (top). The output of a mixed operation $MixO$ is defined based on the outputs of its $N$ paths as in Eq. \ref{eq:Mi}: \begin{equation} MixO(x)=\sum_{i=1}^{N}w_{i}o_{i}(x) \label{eq:Mi} \end{equation} To save memory during the update strategy, NAS-Unet makes use of ProxylessNAS \cite{cai2018proxylessnas}, as shown in Fig.\ref{fig:over_params_cell} (bottom).
By using ProxylessNAS, NAS-Unet updates only part of the architecture parameters by gradient descent at each step. To achieve this, NAS-Unet first freezes the architecture parameters and stochastically samples binary gates for each batch of input data. NAS-Unet then updates the parameters of the active paths via standard gradient descent on the training dataset, as shown in Fig.~\ref{fig:over_params_cell} (bottom). These two update steps are performed in an alternating manner.
\begin{figure}
\includegraphics[width=\textwidth]{img/NAS-Unet.pdf}
\caption{Left: The U-like backbone of the NAS-Unet architecture; the rectangles represent the cell architectures to be searched. The green arrow merely represents the flow of the feature map (input image). The gray arrow is a transform operation that belongs to UpSC and is also automatically searched. Right: The NAS-Unet cell architecture. The red arrow indicates a down operation, the blue arrow indicates a normal operation, and the green arrow represents a concatenate operation. Courtesy of \cite{weng2019unet}.}
\label{fig:NAS-Unet}
\end{figure}
\begin{figure}
\begin{minipage}[c]{\textwidth}
\includegraphics[width=\textwidth]{img/over-params_cell.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{\textwidth}
\includegraphics[width=\textwidth]{img/ProxylessNAS.png}
\end{minipage}
\caption{Top: The over-parameterized cell architecture of NAS-Unet. Each edge is associated with N candidate operations from different primitive operation sets. Bottom: The update strategy by ProxylessNAS \cite{cai2018proxylessnas}. Courtesy of \cite{weng2019unet}.}
\label{fig:over_params_cell}
\end{figure}
In addition to NAS-Unet, V-NAS \cite{zhu2019v} is another cell-block-based NAS architecture, built on the V-Net \cite{milletari2016v} network design. In V-NAS, a cell is defined as a fully convolutional module composed of several convolutional (Conv+BN+ReLU) layers, which is then repeated multiple times to construct the entire neural network. Corresponding to the encoder and decoder paths, V-NAS consists of encoder cells and decoder cells whose operations are chosen among 2D, 3D, or Pseudo-3D (P3D) convolutions, as shown in Fig.~\ref{fig:encoder_decoder_vnas}; the network itself selects between these operations at each layer. V-NAS is designed with ResNet-50 in the encoder path and pyramid volumetric pooling (PVP) in the decoder path.
\begin{figure}[!ht]
\includegraphics[width=1.0\textwidth]{img/encoder_decoder_vnas.pdf}
\caption{Encoder cell and decoder cell defined in V-NAS \cite{zhu2019v}.}
\label{fig:encoder_decoder_vnas}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{img/NAS-based-reconstruction.pdf}
\caption{Top: The reconstruction module used in NAS-based reconstruction \cite{yan2020neural}. Bottom: Candidate layer operations in the search space.}
\label{fig:reconstruction_cell}
\end{figure*}
Different from NAS-Unet \cite{weng2019unet} and V-NAS \cite{zhu2019v}, which search for cells and apply them to a U-Net/V-Net-like architecture, C2FNAS \cite{yu2020c2fnas} first searches the 3D network topology in a U-shaped space and then searches the operation for each cell. The search procedure contains two stages corresponding to the macro level (i.e. defining how every cell is connected to the others) and the micro level (i.e. assigning an operation to each node).
Thus, a network is constructed from scratch in a macro-to-micro manner; the two stages aim to relieve the memory pressure and to resolve the inconsistency between the search stage and the deployment stage. MS-NAS \cite{yan2020ms} applied PC-DARTS \cite{xu2019pc} and Auto-DeepLab's formulation to 2D medical images. MS-NAS is defined with three types of cells: expanding cells (i.e. expanding and up-sampling the scale of the feature map), contracting cells (i.e. contracting and down-sampling the scale of the feature map), and non-scaling cells (i.e. keeping the scale of the feature map constant), to automatically determine the network backbone, cell type, operation parameters, and fusion scales. Recently, BiX-NAS \cite{wang2021bix} searches for the optimal bi-directional architecture by recurrently skipping multi-scale features while discarding insignificant ones at the same time. One current limitation of NAS is the lack of a common baseline and a sharable experimental protocol, which makes it difficult to compare NAS search algorithms. In this section, we therefore compare methods based on their performance on a particular dataset, without knowing exactly where the performance gains come from.
\subsection{NAS for Other Medical Image Applications}
Inspired by the continuous relaxation of the architecture representation in DARTS \cite{liu2018darts} with differentiable search, \cite{yan2020neural} proposed a NAS-based reconstruction method that searches for the internal structure of the cells. The reconstruction module is a stack of cells, as shown in Fig.~\ref{fig:reconstruction_cell} (top), where the first and last layers are common $3 \times 3$ convolutional layers. Each cell maps the output tensors of the two previous cells to its own output, combining the two previous cells with a parameter representing the relaxation of the discrete inner cell architectures. The search space contains three operators, defined as in Fig.~\ref{fig:reconstruction_cell} (bottom). The inner structure of the cells is searched via DARTS \cite{liu2018darts}. A similar NAS-based MRI reconstruction network was introduced by EMR-NAS \cite{huang2020enhanced}, where the search space contains eight different cells with the same kernel size $3 \times 3$ but different dilation rates and inner connections. Besides classification, segmentation, and reconstruction, lesion detection is another important task in medical image analysis. In addition to a TruncatedRPN that balances positive and negative data for false-positive reduction, ElixirNet \cite{jiang2020elixirnet} proposed an Auto-lesion Block (ALB) that locates tiny lesions by dilated convolution with flexible receptive fields. The search space for ALB contains nine operators, i.e. $3\times 1$ and $1\times 3$ depthwise-separable conv., $3\times 3$ depthwise-separable conv., $5\times 5$ depthwise-separable conv., $3\times 3$ atrous conv. with dilation rate 3, $5\times 5$ atrous conv. with dilation rate 5, average pooling, skip connection, no connection, and non-local. All operations have stride 1, and the convolved feature maps are padded to preserve their spatial resolution. Among all operators, the non-local operator aims to encode semantic relations between region proposals that are relevant to lesion detection. The cell is searched by DARTS \cite{liu2018darts}. NAS is also employed to localize multiple uterine standard planes (SPs) in 3D Ultrasound (US) simultaneously via a Multi-Agent RL (MARL) framework \cite{yang2021searching}.
In MARL, the optimal agent for each plane is obtained by one-shot NAS \cite{guo2020single} to reduce the time consumption. The search strategy is based on GDAS \cite{dong2019searching}, which searches by gradient descent and only updates the sub-graph sampled from the supernet in each iteration. In MARL, the search space contains eight cells (five normal cells and three reduce cells), and each agent has its own four cells (three normal cells and one reduce cell). The cell search space consists of ten operations, including none, $3\times 3$ conv., $5\times 5$ conv., $3\times 3$ dilated conv., $5\times 5$ dilated conv., $3\times 3$ separable conv., $5\times 5$ separable conv., $3\times 3$ max pooling, $3\times 3$ avg pooling, and skip-connection.
\section{Meta-Learning in NAS}
NAS has made remarkable progress in many tasks in both medical imaging and computer vision; however, NAS is not only computationally expensive but also requires a large amount of annotated data, which is one of the biggest challenges in medical imaging. Furthermore, most of the existing NAS methods search for a well-performing architecture for a single task. Meanwhile, prior meta-learning approaches \cite{finn2017model, nichol2018first, antoniou2018train, finn2018probabilistic} learn meta-weights that allow new tasks to be learned from just a few examples, but their models are trained with a fixed neural architecture.
\begin{figure*}
\includegraphics[width=\textwidth]{img/compare-meta.pdf}
\caption{Comparison between meta-learning with a fixed architecture (left) and meta-learning with NAS (right). The main difference is the updating process (highlighted): in meta-learning, only the network weights are updated, whereas in meta-learning with NAS both the network architecture and its corresponding weights are updated.}
\label{fig:compare-meta}
\end{figure*}
Based on the observation that NAS can be seen as an instance of hyper-parameter meta-learning, there have been some works that combine NAS and meta-learning to address both new tasks and flexible network architectures. The comparison between meta-learning alone and meta-learning with NAS is illustrated in Fig.~\ref{fig:compare-meta}. Fig.~\ref{fig:compare-meta} (left) shows a meta-learning algorithm that takes a fixed network architecture as its input and updates the network weights, whereas Fig.~\ref{fig:compare-meta} (right) shows meta-learning with NAS, which takes an architecture search space as its input and simultaneously updates both the network architecture and its corresponding weights. To train a network on multiple tasks, Wong et al. \cite{wong2018transfer} train an AutoML system \cite{hutter2019automated} via RL with the help of transfer learning. In Wong et al. \cite{wong2018transfer}, hyper-parameters are searched for a new task while the architecture is chosen among pre-defined ones. Zoph et al. \cite{zoph2018learning} make use of NAS \cite{zoph2016neural} and propose a new search space, called the NASNet search space, to search for a good architecture on a small dataset and then transfer it across a range of data and computational scales. Kim et al. \cite{kim2018auto} wrap NAS around meta-learning by applying progressive NAS \cite{liu2018progressive} to few-shot learning. In Kim et al. \cite{kim2018auto}, the model is trained from scratch in every iteration of the NAS algorithm; thus, it is computationally expensive. In a concurrent work \cite{lian2019towards}, Lian et al.
propose a meta-learning-based transferable neural architecture search method to generate a meta-architecture, which can adapt to new tasks through a few gradient descent steps. In Lian et al. \cite{lian2019towards}, the model is retrained for every task-dependent architecture; thus, it is time-consuming. By incorporating gradient-based NAS and gradient-based meta-learning, Elsken et al. \cite{elsken2020meta} propose METANAS, which can adapt an architecture to new tasks using a small number of samples with only several gradient descent steps. METANAS not only meta-learns network weights for a given fixed architecture, but also meta-learns the architecture itself, as shown in Fig.~\ref{fig:compare-meta} (right). That is, both network weights and network architecture are meta-learned simultaneously. To optimize both the network architecture and its corresponding weights at once and adapt them to a new task, METANAS utilizes both a task-learner (e.g. DARTS~\cite{liu2018darts}) and a meta-learner (e.g. REPTILE \cite{nichol2018first}). Let $\alpha$ and $\omega$ denote the meta-learned architecture and its corresponding meta-learned weights. The METANAS objective is to quickly adapt $\alpha$ and $\omega$ to a new task $\mathcal{T}_i = \{D^{\mathcal{T}_i}_{train}, D^{\mathcal{T}_i}_{val}\}$ with a few samples, where the training data $D^{\mathcal{T}_i}_{train}$ are sampled from the training task distribution $p_{train}$. The optimal architecture and its corresponding optimal weights for a task $\mathcal{T}_i$ are denoted as $\alpha^*(\mathcal{T}_i)$ and $\omega^*(\mathcal{T}_i)$. To achieve this goal, the METANAS loss is defined as follows:
\begin{equation}
\begin{split}
\mathcal{L}(\alpha, \omega, p_{train}, \phi_k) & = \sum_{\mathcal{T}_i}{\mathcal{L}_{\mathcal{T}_i}(\phi_k(\alpha, \omega, D^{\mathcal{T}_i}_{train}), D^{\mathcal{T}_i}_{val})} \\
& = \sum_{\mathcal{T}_i}{\mathcal{L}_{\mathcal{T}_i}}{((\alpha^*(\mathcal{T}_i), \omega^*(\mathcal{T}_i)), D^{\mathcal{T}_i}_{val})}
\end{split}
\end{equation}
where $\phi_k$ is the task-learner at the $k$-th update iteration. NAS by DARTS \cite{liu2018darts} is used as the task-learner and applied to update both the architecture and its corresponding weights, i.e., $[\alpha; \omega]$. $\alpha$ and $\omega$ are updated with their own learning rates $\lambda^{task}_\alpha$ and $\lambda^{task}_\omega$ to obtain the optimal architecture $\alpha^*(\mathcal{T}_i)$ and its corresponding optimal weights $\omega^*(\mathcal{T}_i)$. The loss $\mathcal{L}$ is differentiable with respect to both $\alpha$ and $\omega$. The loss $\mathcal{L}_{\mathcal{T}_i}$ is the loss of the particular task $\mathcal{T}_i$ and is based on the task loss.
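Before stating the full procedure in Algorithm~\ref{algo:metanas}, the REPTILE-style outer update on $(\alpha, \omega)$ can be sketched in plain Python as follows. This is a minimal illustration under simplifying assumptions: \texttt{adapt\_to\_task} is an assumed helper performing one DARTS-like task-learner step, and the parameters are treated as NumPy arrays.
\begin{verbatim}
import numpy as np

def meta_step(alpha, omega, tasks, adapt_to_task,
              lr_meta_alpha, lr_meta_omega, k=5):
    """One meta-iteration: adapt (alpha, omega) to each task with k
    task-learner steps, then move the meta-parameters toward the
    task-adapted ones, as in REPTILE."""
    for task in tasks:
        a_t, w_t = alpha.copy(), omega.copy()
        for _ in range(k):              # inner loop: task-learner phi^k
            a_t, w_t = adapt_to_task(a_t, w_t, task)
        # Outer update: theta <- theta + lr * (theta_task - theta)
        alpha = alpha + lr_meta_alpha * (a_t - alpha)
        omega = omega + lr_meta_omega * (w_t - omega)
    return alpha, omega
\end{verbatim}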
\begin{algorithm}[t]
\caption{METANAS learning procedure with task-learner DARTS \cite{liu2018darts} and meta-learner REPTILE \cite{nichol2018first}.}
\label{algo:metanas}
\hrule
\begin{algorithmic}[1]
\algrenewcommand\algorithmicrequire{\textbf{Input: }}
\algrenewcommand\algorithmicensure{\textbf{Output: }}
\Require \\ Distribution over tasks $p(\mathcal{T})$ \\ task-learner $\phi^k$ at iteration $k$ \\ meta-learners $\theta_\alpha$, $\theta_\omega$
\Ensure meta-learned architecture $\alpha$, meta-learned weights $\omega$.
\State \textbf{Initialize}: meta-learned architecture $\alpha$, meta-learned weights $\omega$.
\While{not converged}
\State Sample tasks $\mathcal{T}_i$ from $p(\mathcal{T})$
\For{all $\mathcal{T}_i$}
\State $\alpha^{\mathcal{T}_i} \gets \alpha$
\State $\omega^{\mathcal{T}_i} \gets \omega$
\For{$j = 1, \dots, k$} \Comment{{updating iteration}}
\State $\alpha^{\mathcal{T}_i} \gets \phi^k(\alpha^{\mathcal{T}_i}, \omega^{\mathcal{T}_i}, D^{\mathcal{T}_i}_{train}, \lambda^{task}_\alpha)$ \Comment{{update architecture by the task-learner $\phi^k$ with learning rate $\lambda^{task}_\alpha$}}
\State $\omega^{\mathcal{T}_i} \gets \phi^k(\alpha^{\mathcal{T}_i}, \omega^{\mathcal{T}_i}, D^{\mathcal{T}_i}_{train}, \lambda^{task}_\omega)$ \Comment{{update weights by the task-learner $\phi^k$ with learning rate $\lambda^{task}_\omega$}}
\EndFor
\EndFor
\State $\alpha \gets \alpha + \theta_\alpha(\alpha^{*\mathcal{T}_i}, \omega^{*\mathcal{T}_i}, \lambda^{meta}_{\alpha})$ \Comment{{update architecture by the meta-learner $\theta_\alpha$ with learning rate $\lambda^{meta}_\alpha$}}
\State $\omega \gets \omega + \theta_\omega(\alpha^{*\mathcal{T}_i}, \omega^{*\mathcal{T}_i}, \lambda^{meta}_{\omega})$ \Comment{{update weights by the meta-learner $\theta_\omega$ with learning rate $\lambda^{meta}_\omega$}}
\EndWhile
\end{algorithmic}
\end{algorithm}
After finding the optimal architecture $\alpha^*(\mathcal{T}_i)$ and its optimal weights $\omega^*(\mathcal{T}_i)$ for a particular task $\mathcal{T}_i$, the meta-learned architecture $\alpha$ and the meta-learned weights $\omega$ are updated by the meta-learners $\theta_\alpha$ and $\theta_\omega$ with learning rates $\lambda^{meta}_\alpha$ and $\lambda^{meta}_\omega$. REPTILE \cite{nichol2018first} is used as the meta-learner in METANAS. The entire METANAS learning process is described in Algorithm \ref{algo:metanas}. While most of the existing meta-learning-based NAS methods focus on few-shot learning with a small number of labeled samples, in some cases no annotated data are available at all. Liu et al. \cite{liu2020labels} propose an unsupervised neural architecture search (UnNAS) to explore whether labels are necessary for NAS. The results show that architectures searched without labels are competitive with those searched with labels; therefore, labels are not strictly necessary for NAS.
\section{Future Perspectives}
Artificial neural networks (ANNs) have made breakthroughs in many medical fields, including recognition, segmentation, detection, reconstruction, etc. Compared with ANNs, NAS is still at an early research stage, even though it has become a popular subject in machine learning. Commercial services such as Google's AutoML and open-source libraries such as Auto-Keras make NAS accessible to the broader machine learning community. At the current stage of development, NAS-based approaches focus on improving image classification accuracy and reducing the time consumed while searching for a neural architecture. Some challenges and future perspectives are discussed as follows:
\textbf{Search space}: There are various effective search spaces; however, they are based on human knowledge and experience, which inevitably introduces human bias. Balancing the freedom of neural architecture design, the search cost, and the network performance in NAS-based approaches is an important future research direction. For example, to reduce the search space as much as possible while also improving network performance, NASNet \cite{zoph2018learning} proposes a modular search space that was later widely adopted. However, this comes at the expense of the freedom of neural architecture design.
Thus, general, flexible, and human-bias-free search spaces are a critical requirement. To minimize human bias, AutoML-Zero \cite{elsken2020meta} applies an evolution strategy (ES) and designs two-layer neural networks based on basic mathematical operations (cos, sin, mean, std).
\textbf{Robustness}: Even though NAS has proven effective on many datasets, it is still limited when dealing with datasets that contain noise or adversarial attacks, or with open-set data. Some efforts have been made to boost the robustness of NAS: Chen et al. \cite{chen2020robustness} proposed a loss function for noise tolerance, and Guo et al. \cite{guo2020meets} explored the intrinsic impact of network architectures on robustness against adversarial attacks.
\textbf{Learning new data}: Most of the existing NAS methods can search an appropriate architecture for a single task. To search for a new architecture on a new task, a suggested solution is to combine meta-learning with NAS \cite{pasunuru2019continual, elsken2020meta, lian2019towards}. For example, \cite{lian2019towards} proposed a transferable neural architecture search to generate a meta-architecture that can adapt to new tasks and new data easily, while \cite{elsken2020meta} applies NAS to few-shot learning. On learning from new data, the recent unsupervised neural architecture search (UnNAS) \cite{liu2020labels} shows that an architecture searched without labels is competitive with those searched with labels.
\textbf{Reproducibility}: Most of the existing NAS methods have many parameters that need to be set manually at the implementation level, and these are not always described in the original paper. In addition to the design of the neural architecture, configuring non-architecture hyper-parameters (e.g. initial learning rate, weight decay, dropout ratio, optimizer type, etc.) is also time-consuming and strongly affects network performance. Joint search of hyper-parameters and architectures has been taken into consideration; however, it has so far focused on small datasets and small search spaces. Recent research such as AutoHAS \cite{dong2020autohas} and FBNetV3 \cite{dai2020fbnetv3} shows that the joint search of hyper-parameters and architectures has great potential. Furthermore, reproducing NAS results requires vast resources. With the rise of NAS-based techniques, it is now possible to produce state-of-the-art ANNs for many applications with relatively low search time consumption (tens of GPU-days rather than thousands of GPU-days). Future NAS-based approaches should be able to address various problems across domains, i.e. tasks (segmentation, object detection, depth estimation, machine translation, speech recognition, etc.), computing platforms (server, mobile, IoT; CPU, GPU, TPU), sensors (camera, lidar, radar, microphone), and objectives (accuracy, parameter size, MACs, latency, energy). Instead of spending much time on model evaluation, benchmark datasets (e.g. NAS-Bench-101 \cite{ying2019bench}, NAS-Bench-201 \cite{dong2020bench}) allow NAS researchers to focus on the design of the optimization algorithm.
\textbf{Comparison}: At the current stage of development, there is no common baseline or sharable experimental protocol besides random sampling, which has been proven to be a strong baseline. This makes it difficult to compare NAS search algorithms. Instead of blindly stacking certain techniques to increase performance, it is critical to have more ablation experiments revealing which parts of a NAS design lead to the performance gains.
To support reproducibility and comparison, NAS benchmarks \cite{ying2019bench} provide pre-computed performance measures for a large number of NAS architectures.
\section*{Acknowledgment}
This material is based upon work supported by the National Science Foundation under Award No. OIA-1946391; partially funded by Gia Lam Urban Development and Investment Company Limited, Vingroup; and supported by the Vingroup Innovation Foundation (VINIF) under project code VINIF.2019.DA19.
\section*{Disclaimer}
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\section{Introduction}
Robotic grasping evaluation is a challenging task due to the incomplete geometric information from single-view visual sensor data~\cite{varley2015generating}. Many probabilistic grasp planning models have been proposed to address this problem, such as Monte Carlo sampling, Gaussian processes, and uncertainty analysis~\cite{lundell2019robust, tosun2020robotic, gualtierirobotic}. However, these analytic methods are often computationally expensive. With the development of deep learning techniques, data-driven grasp detection methods have shown great potential~\cite{breyer2021volumetric,wu2020grasp,ten2017grasp,liang2019pointnetgpd} to solve this problem. They generate many grasp candidates and estimate the corresponding grasp quality, resulting in better grasp performance and generalization. However, as most of these methods still rely on raw sensor input such as 2D images and 2.5D depth maps, physical grasping defects occur when the gripper interacts with real object surfaces or edges, because the pixel-wise and point-wise representations are incomplete.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.48\textwidth]{figures/shape_completion_overview.pdf}
\caption{Overview of our shape completion based grasp pipeline. The upper part is the shape completion module. In this module, a partial point cloud $\zeta_p$ with \emph{n} points is first input into a transformer-based encoder to extract point-wise and self-attention features, which outputs a latent vector with \emph{m} dimensions. Then, the latent vector is concatenated with another latent feature from a flat/spatial point seed generator to predict multiple spatial surfaces in the manifold-based decoder. Finally, these surfaces are merged into a complete point cloud $\zeta_c$. The lower part is the grasp evaluation module: the complete point cloud $\zeta_c$ is the input of our grasp detection pipeline PointNetGPD, which computes the grasp quality $\mathcal{Q}_i$. The grasp with the highest score $\mathcal{G}_{best}$ is sent to a collision-free trajectory planner and executed in a real robot experiment.}
\label{Overview}
\vspace{-0.5em}
\end{figure}
To cope with this limitation, the missing geometric and semantic information of the object needs to be restored or repaired to generate a better grasping interaction. Additional sensor input such as tactile sensing has been introduced to supplement the original vision sensing~\cite{watkins2019multi}. However, object uncertainty still exists, and the extra sensor's interference with the object directly affects the final grasping result. Another strategy is to use shape completion to infer the original object shape. However, traditional grasping-oriented shape completion methods use a high-resolution voxelized grid as the object representation \cite{varley2017shape,lundell2019robust,lundell2020beyond}, causing a high memory cost and, given the sparsity of the sensory input, information loss. To avoid extra sensor cost and obtain complete object information, a novel transformer-based shape completion module working directly on the original sparse point cloud is proposed in this work. Compared with traditional convolutional network layers, the transformer has achieved state-of-the-art results in visual recognition and segmentation~\cite{srinivas2021bottleneck,guo2020pct}, which enables our shape completion module to achieve a better performance.
As illustrated in Fig.~\ref{Overview}, we present a novel grasping pipeline that uses a sparse point cloud to execute the grasp directly, without converting it into voxel grids during the shape completion process or into a mesh during the grasp planning process. The pipeline consists of two sub-modules: the transformer-based shape completion module and the grasp evaluation module. In the first module, a non-synthetic partial point cloud dataset is constructed based on the YCB-Video dataset. Rather than cropping objects randomly or rendering them in a physical simulator, our dataset retains real camera and environmental noise, which facilitates grasping interaction in a real robot environment. Based on this dataset, we propose a novel encoder-decoder point cloud completion network architecture (TransSC), which outperforms several representative baselines on different evaluation metrics. The second module builds on our previous work~\cite{liang2019pointnetgpd}: we use PointNet to obtain a feature representation of the repaired point cloud and build a grasp detection network to generate and evaluate a set of grasp candidates. The grasp with the highest score is then executed in the real robot experiment. The proposed pipeline is validated in a simulation experiment and a robotic experiment, both of which demonstrate that our shape completion pipeline can improve grasping performance significantly. The main contributions of this paper can be summarized as:
\begin{itemize}
\item A large-scale non-synthetic partial point cloud dataset is constructed based on the YCB-Video dataset. As the dataset is based on 3D point cloud data captured by a real RGB-D camera, the noise it contains facilitates the generalization of our work.
\item A novel point cloud completion network, Transformer-based Shape Completion (TransSC), is proposed. A transformer-based encoder and a manifold-based decoder are introduced into the shape completion task for better completion performance.
\item Combining our previous work PointNetGPD for grasp evaluation and the MoveIt! Task Constructor for motion planning, we demonstrate a robust grasp planning pipeline in which using the shape completion result as input yields better grasp planning than working on a single-view point cloud without shape completion.
\end{itemize}
\section{RELATED WORK}
\textbf{Deep Visual Robotic Grasping} With the development of deep learning, many methods for deep visual grasping have been proposed. Borrowing ideas from 2D object recognition, monocular camera images were first used to predict the probability that input grasps would be successful \cite{levine2018learning}. In \cite{chu2018real} and \cite{tosun2020robotic}, a single RGB-D image of the target object was used to generate a 6D-pose grasp and effective end-effector trajectories. However, these works are not well suited to sparse 3D object information and spatial grasps. Compared with 2D feature representations from images, 3D voxel or point cloud data can provide robotic grasping with more semantic and spatial information. Given a synthetic grasp dataset, \cite{breyer2021volumetric} transformed scanned 3D object information into Truncated Signed Distance Function (TSDF) representations and passed them into a Volumetric Grasping Network (VGN) to directly output the grasp quality, gripper orientation, and gripper width at each voxel.
\cite{wu2020grasp} designed a special grasp proposal module that defines anchors of grasp centers and related 3D grid corners to predict a set of 6D grasps from a partial point cloud. \cite{ten2017grasp} used hand-crafted projection features on a normalized point cloud to construct a CNN-based grasp quality evaluation model. In our previous work \cite{liang2019pointnetgpd}, we used PointNet \cite{qi2017pointnet} to extract raw point cloud features and built a grasp evaluation network, which achieves a great performance in robotic grasping experiments.
\textbf{Grasp-based Shape Completion} For robotic grasping, the key challenge is to recognize objects in 3D space and cope with perception uncertainty. When an RGB-D camera captures an object from a particular viewpoint, the 3D information of the object is incomplete, which means a lot of semantic and spatial information is missing. This affects the quality of subsequently generated grasps and causes wrong grasping poses. Recently, some researchers have proposed to use shape completion to enable robotic grasping. In \cite{varley2017shape}, the object observed by a 2.5D range sensor was first converted into an occupancy voxel grid. The voxelized data were then input into a CNN to produce a high-resolution voxel output. Furthermore, the completion result was transformed into a mesh and then loaded into GraspIt!~\cite{miller2004graspit} to generate grasps. \cite{lundell2019robust} used dropout layers to modify the network, which enabled the prediction of shape samples at run-time; Monte Carlo sampling and probabilistic grasp planning were then used to generate grasp candidates. As traditional analytic grasping methods are computationally expensive, \cite{lundell2020beyond} combined the shape completion of a voxel grid and a data-driven grasp planning strategy (GQCNN) \cite{mahler2017dex} to propose a structure called FC-GQCNN, where synthetic object shapes were obtained from a top-down physics simulator and grasps were generated from depth images. In summary, traditional grasp-oriented shape completion methods mainly voxelize the 2.5D data into occupancy grids or distance fields to train a convolutional network. However, these high-resolution voxel grids entail a high memory cost. Moreover, detailed semantic information is often lost as an artifact of discretization, which prevents meaningful geometric features of objects from being learned by the neural network. To obtain more complete geometric features and retain the original object information, a transformer-based shape completion module is introduced in our proposed method. Without converting the observed partial point cloud into a voxel grid or mesh, our completion method outputs a repaired point cloud at arbitrary resolution and outperforms existing methods. Furthermore, PointNet \cite{qi2017pointnet} is introduced for the representation learning of the repaired point cloud, and a grasp evaluation network is constructed to generate grasp candidates. Therefore, our grasp evaluation framework also achieves a better grasping performance than the original framework without shape completion.
\begin{figure*}[!t]
\centering
\includegraphics[width=1.0\linewidth,height=5.5cm]{img/Encoder.pdf}
\caption{Illustration of various encoder structures for point cloud completion. (a) is a simple multi-layer perceptron (MLP) structure. (b) is a multi-scale fusion (MSF) module, which can fuse features from different layers directly.
(c) is Concatenated Multi-Layer Perceptron (CMLP), which concatenates multi-dimensional latent features, with max pooling used to further extract latent features. (d) shows our Transformer-based Multi-Layer Perceptron (TMLP) module, which integrates the Multi-head Self-attention (MHSA) module into the MLP structure. (e) depicts the architecture of the MHSA module.}
\label{Encoder}
\vspace{-0.5em}
\end{figure*}
\section{PROBLEM FORMULATION}
We consider a setup consisting of a robotic arm with a parallel-jaw gripper, an RGB-D camera, and an object to be grasped. We assume that the RGB-D camera can capture the depth map of the object and convert it to a 2.5D partial point cloud $\mathcal{P} \in \mathcal{R}^{N\times3}$. For simplicity, all spatial quantities are in camera coordinates. Given a gripper configuration $\mathcal{C}$ and a camera observation $\mathcal{P}$, our goal is first to learn an encoder-decoder point cloud completion network that repairs the observed 2.5D partial point cloud $\mathcal{P} \in \mathcal{R}^{N\times3}$, turning it into a complete 3D point cloud $\mathcal{P}_c \in \mathcal{R}^{N\times3}$. After that, a grasp evaluation network based on $\mathcal{P}_c$ is trained to predict a set of grasp candidates $\mathcal{G}_i$ and compute the corresponding grasp qualities $\mathcal{Q}_i$. The grasp with the highest score $\mathcal{G}_{best}$ is executed in the real robot experiment.
\section{Robotic Grasping Evaluation Via Shape Completion and Grasp Detection}
\subsection{Dataset Construction}
Traditional shape completion methods use synthetic CAD models from the ShapeNet~\cite{yi2016scalable} or ModelNet~\cite{wu20153d} datasets to generate partial and corresponding complete point cloud data, but these synthetic data contain little noise from the camera and robotic environment. In order to approximate real point cloud data distributions, we build a shape completion dataset from the YCB-Video Dataset~\cite{xiang2017posecnn}. Non-synthetic RGB-D video images ($\sim$133,827 frames) in the YCB-Video Dataset are first selected, but most consecutive frames vary insignificantly; thus, a pre-processed image dataset is obtained by keeping every fifth frame. Meanwhile, to cover distinguishable shapes with different levels of detail, 18 objects are chosen from the YCB-Video dataset. In this work, the ground-truth point cloud of each of the 18 objects is created by farthest point sampling (FPS) of 2048 points on each object model. Rather than randomly sampling or cropping complete point clouds on the unit sphere to obtain partial point clouds, we load the RGB-D images and the related object label images in the pre-processed dataset and compute the matching partial point clouds using the camera intrinsic parameters. To approximate the point cloud distribution of real objects and retain the semantic information, a large amount of camera and environmental noise is kept, although a radius filter is used to remove some outliers. For the convenience of network training, the partial point clouds are also unified to a size of 2048 points by FPS or by replicating points. To enable an accurate comparison with existing baselines, the canonical center of the partial point cloud of each object is transformed into the canonical center of the ground-truth point cloud using pose information. Finally, more than 70,000 partial point clouds are collected in our dataset.
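For reference, the farthest point sampling used above to unify point clouds to 2048 points can be sketched in NumPy as follows; this is a minimal greedy implementation, not necessarily the exact routine used to build the dataset.
\begin{verbatim}
import numpy as np

def farthest_point_sampling(points, n_samples=2048):
    """Greedily pick n_samples indices from an (N, 3) array so that each
    new point is farthest from the already chosen set.
    Assumes N >= n_samples (smaller clouds are replicated instead)."""
    n = points.shape[0]
    chosen = np.zeros(n_samples, dtype=np.int64)
    dist = np.full(n, np.inf)          # squared dist to nearest chosen point
    chosen[0] = np.random.randint(n)   # arbitrary seed point
    for i in range(1, n_samples):
        diff = points - points[chosen[i - 1]]
        dist = np.minimum(dist, np.sum(diff * diff, axis=1))
        chosen[i] = int(np.argmax(dist))   # farthest from the chosen set
    return chosen
\end{verbatim}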
Compared to other synthetic point cloud datasets, our dataset also does well at preserving the real point cloud distribution of occluded objects.
\subsection{Transformer-based Encoder Module}
As shown in Fig.~\ref{Encoder}, we compare our proposed encoder module with several common competitive methods. A multi-layer perceptron (MLP) is a simple baseline architecture for extracting point features. This method maps each point into different dimensions and extracts the maximum value from the final $K$ dimensions to form a latent vector. A simple generalization of the MLP is to combine semantic features from a low-level dimension with those of a high-level dimension. The MSF (Multi-scale Fusion)~\cite{kuang2020voxel} module inflates the dimension of the latent vector from 1024 to 1408 to obtain semantic features from different dimensions. To improve the performance of the feature extractor, L-GAN~\cite{achlioptas2018learning} proposed to apply max pooling appropriately. Concatenated Multi-Layer Perceptron (CMLP)~\cite{huang2020pf} max-pools the outputs of the last $k$ layers so that multi-scale feature vectors can be concatenated directly. An overview of our proposed Transformer-based Multi-Layer Perceptron (TMLP) module is shown in Fig.~\ref{Encoder}(d). Without an extra skip-connection structure or max pooling over different layers, the Multi-head Self-attention (MHSA)~\cite{vaswani2017attention} module is introduced to replace the traditional convolutional layer [$128\times256\times1$].
\begin{figure*}[t]
\includegraphics[width=1.0\textwidth]{figures/Decoder.pdf}
\caption{Illustration of the decoder structure for point cloud completion. The feature vector with \emph{m} dimensions from the encoder is first concatenated with a latent feature from a special point seed generator \emph{f} or \emph{g}. Then three convolutional layers serving as the backbone are used to extract features and form different manifold-based surfaces. Finally, these surfaces are gathered and merged into a complete point cloud.}
\label{Decoder}
\vspace{-0.5em}
\end{figure*}
MHSA aims to transform (encode) the input point features into a new feature space that contains point-wise and self-attention features. Fig.~\ref{Encoder}(e) shows the simple MHSA architecture used in TMLP, which includes two sub-layers. In our first sub-layer, the number of heads is set to 8 and the input feature dimension for each point is 128. Unlike in natural language processing (NLP) problems, the 128-dimensional feature matrix $\mathcal{A}_{in} \in \mathcal{R}^{2048\times128}$ enters the multi-head attention module directly without positional encoding, because each point in the point cloud already carries its unique $x$-$y$-$z$ coordinates. The output feature $\mathcal{Z}$ is formed by concatenating the attention of each attention head. A residual structure is also used to add and normalize the output feature $\mathcal{Z}$ with $\mathcal{A}_{in}$. This process can be formulated as follows:
\begin{equation}
\mathcal{A}_{i} = SA_i(\mathcal{A}_{in})\quad i=1,2,...,8
\end{equation}
\begin{equation}
\mathcal{Z} = concat(\mathcal{A}_{1},\mathcal{A}_{2},...,\mathcal{A}_{8})*W_0
\end{equation}
\begin{equation}
\mathcal{A}_{out} = Norm(\mathcal{A}_{in} + \mathcal{Z})
\end{equation}
where $SA_i$ represents the $i$-th self-attention layer, each with the same output dimension as the input feature vector $\mathcal{A}_{in}$, and $W_0$ is the weight of the linear layer.
$\mathcal{A}_{out}$ represents the output point-wise features of the first sub-layer. The second sub-layer is a feed-forward module, i.e. a fully connected network. The point-wise features $\mathcal{A}_{out}$ are processed through two linear transformations and one ReLU activation. Furthermore, a residual connection is also used to fuse and normalize the output features. Finally, we obtain the MHSA module output $\mathcal{FF}_{out} \in \mathcal{R}^{2048\times128}$ as:
\begin{equation}
\mathcal{FF} = ReLU(\mathcal{A}_{out} * W_1 + b_1) *W_2 + b_2
\end{equation}
\begin{equation}
\mathcal{FF}_{out} = Norm(\mathcal{A}_{out} + \mathcal{FF})
\end{equation}
where $W_1$, $W_2$ and $b_1$, $b_2$ represent the weights and bias values of the corresponding linear transformations, respectively.
\subsection{Manifold-based Decoder Module}
Inspired by AtlasNet~\cite{groueix1802atlasnet}, a manifold-based decoder module is designed to predict a complete point cloud from the partial point cloud features. As shown in Fig.~\ref{Decoder}, a complete point cloud can be assumed to consist of multiple sub-surfaces. Therefore, we concentrate on obtaining each sub-surface, then gather and stitch the sub-surfaces together to form the final complete point cloud. To obtain each sub-surface, a point seed generator is concatenated with the global feature vector $\mathcal{P}_{g} \in \mathcal{R}^{2048\times1024}$ output from the encoder, where the point initialization values are computed from a flat $(f)$ or spatial $(g)$ sampler. As the coordinate values of the ground-truth point cloud are limited to $[-1, 1]$, the point initialization values are also limited to this range. After that, the concatenated feature vector $\mathcal{P}_{concat} \in \mathcal{R}^{2048\times M} (M = 1026$\ $or$\ $1027)$ is input into $K$ convolutional layers, where all sampled 2D or 3D points are mapped to 3D points on each sub-surface. In our decoder, the number of sub-surfaces is set to 16. Unlike voxel-based shape completion methods, our decoder module achieves an arbitrary resolution for the final completion results.
\textbf{Evaluation Metrics} To evaluate our shape completion results, we use two permutation-invariant metrics, Chamfer Distance (CD) and Earth Mover's Distance (EMD), as our evaluation criteria~\cite{fan2017point}. Given two arbitrary point clouds $S_1$ and $S_2$, CD measures the average distance between each point in one point cloud and its nearest point in the other point cloud:
\begin{equation}
d_{CD}\left(S_{1}, S_{2}\right)=\frac{1}{|S_{1}|} \sum_{x \in S_{1}} \min _{y \in S_{2}}\|x-y\|_{2}^{2}+\frac{1}{|S_{2}|} \sum_{y \in S_{2}} \min _{x \in S_{1}}\|y-x\|_{2}^{2}
\end{equation}
Earth Mover's Distance considers two point sets $S_1$ and $S_2$ of equal size and is defined as:
\begin{equation}
d_{EMD}\left(S_{1}, S_{2}\right)=\min _{\phi: S_{1} \rightarrow S_{2}} \frac{1}{|S_{1}|} \sum_{x \in S_{1}}\|x-\phi(x)\|_{2}
\end{equation}
CD has been widely used in most shape completion tasks because it is efficient to compute. However, we choose EMD as our completion loss because CD is blind to some visual inferiority and easily ignores details~\cite{achlioptas2018learning}. With $\phi: S_{1} \rightarrow S_{2}$ being a bijection, EMD solves the assignment problem in which one point cloud is mapped onto the other.
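As a concrete reference, the Chamfer Distance defined above can be computed by brute force as sketched below; EMD, in contrast, requires solving an assignment problem and is typically approximated in practice.
\begin{verbatim}
import numpy as np

def chamfer_distance(s1, s2):
    """Brute-force CD between point sets s1 (N, 3) and s2 (M, 3):
    mean squared distance to the nearest neighbor, in both directions."""
    d = np.sum((s1[:, None, :] - s2[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
\end{verbatim}
\begin{table*}[htb!]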
\centering \caption{Comparison of Earth Mover's Distance in different point cloud completion models} \resizebox{1\textwidth}{!}{ \label{EMD_distance} \begin{tabular}{l|lllllllll} \hline \textbf{Model} & \textbf{\begin{tabular}[c]{@{}l@{}}cracker \\ box\end{tabular}} & \textbf{banana} & \textbf{\begin{tabular}[c]{@{}l@{}}pitcher\\ base\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}bleach\\ cleanser\end{tabular}} & \textbf{bowl} & \textbf{mug} & \textbf{\begin{tabular}[c]{@{}l@{}}power \\ drill\end{tabular}} & \textbf{scissors} & \textbf{average} \\ \hline \textbf{Oracle} & 3.4 & 1.7 & 4.6 & 2.9 & 1.9 & 2.0 &3.8 &1.5 &2.7 \\ \textbf{AtlasNet \cite{groueix1802atlasnet}} & 9.7 & 4.9 & 10.5 & 10.0 & 8.8 & 5.3 &15.0 &5.2 &8.7 \\ \textbf{MSN (fusion) \cite{liu2020morphing}} & 10.7 & 4.6 & 12.4 & 14.0 & 11.5 & 12.9 &23.4 &5.3 &11.8 \\ \textbf{MSN (vanilla) \cite{liu2020morphing}} & 11.0 & \textbf{3.8} & 9.3 & 8.3 & 10.2 & 3.9 &5.9 &\textbf{3.4} &7.0 \\ \textbf{Ours (flat)} & \textbf{8.5} & 3.9 & 9.4 & 6.7 & 6.0 & \textbf{3.7} &\textbf{5.2} &4.1 &\textbf{5.9} \\ \textbf{Ours (spatial)} & 10.1 & 4.4 & \textbf{8.4} & \textbf{5.8} & \textbf{5.6} & \textbf{3.7} &7.0 &3.9 &6.1 \\ \hline \end{tabular}} \end{table*} \begin{table*}[htb!] \centering \caption{Comparison of Chamfer Distance in different point cloud completion models} \label{chamfer_distance} \resizebox{1\textwidth}{!}{ \begin{tabular}{l|lllllllll} \hline \textbf{Model} & \textbf{\begin{tabular}[c]{@{}l@{}}cracker \\ box\end{tabular}} & \textbf{banana} & \textbf{\begin{tabular}[c]{@{}l@{}}pitcher\\ base\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}bleach\\ cleanser\end{tabular}} & \textbf{bowl} & \textbf{mug} & \textbf{\begin{tabular}[c]{@{}l@{}}power \\ drill\end{tabular}} & \textbf{scissors} & \textbf{average} \\\hline \textbf{Oracle} & 0.24 & 0.52 & 0.28 & 0.12 & 0.10 & 0.09 &0.13 &0.38 &0.23 \\ \textbf{AtlasNet \cite{groueix1802atlasnet}} & 4.51 & 0.87 & 4.97 & 5.61 & 4.21 & 1.37 &6.18 &0.92 &3.58 \\ \textbf{MSN (fusion) \cite{liu2020morphing}} & 5.59 & 1.25 & 5.71 & 2.77 & 10.81 & 1.77 &8.34 &1.58 &4.73 \\ \textbf{MSN (vanilla) \cite{liu2020morphing}} & 6.01 &\textbf{0.71} & 4.01 & 4.68 & 7.51 & 0.76 &1.28 &\textbf{0.38} &3.17 \\ \textbf{Ours (flat)} &\textbf{3.28} & 0.92 & 4.09 & 1.50 & \textbf{2.55} & \textbf{0.66} &\textbf{1.25} &0.82 &\textbf{1.88} \\ \textbf{Ours (spatial)} & 5.81 & 0.87 & \textbf{3.19} & \textbf{1.20} & 2.79 & 0.69 &2.54 &0.66 &2.22 \\ \hline \end{tabular}} \end{table*} \begin{table*}[htb!]
\centering \caption{Comparison of EMD and CD for different encoder structures} \resizebox{1.0\textwidth}{!}{ \label{Structure} \begin{tabular}{l|l|l|l|l} \hline \multirow{1}{*}{\textbf{Earth Mover's Distance (EMD)}} & \multirow{1}{*}{\textbf{MLP}} & \multirow{1}{*}{\textbf{CMLP}} & \multirow{1}{*}{\textbf{MSF}} & \multirow{1}{*}{\textbf{TMLP}} \\\hline Mug & 6.01 & 3.69 & 9.45 & {\textbf{3.69}} \\ Bleach cleanser & 10.51 & 8.10 & 11.70 & {\textbf{6.70}} \\ \hline \end{tabular} \begin{tabular}{l|l|l|l|l} \hline \multirow{1}{*}{\textbf{Chamfer Distance (CD)}} & \multirow{1}{*}{\textbf{MLP}} & \multirow{1}{*}{\textbf{CMLP}} & \multirow{1}{*}{\textbf{MSF}} & \multirow{1}{*}{\textbf{TMLP}} \\\hline Mug & 2.15 & {\textbf{0.65}} & 13.80 & 0.66 \\ Bleach cleanser & 6.88 & 2.63 & 13.89 & {\textbf{1.50}} \\ \hline \end{tabular}}
\vspace{-1.0em}
\end{table*}
\subsection{PointNetGPD: Grasp Detection Module}
Given the complete point cloud from the previous steps, we feed it into a geometry-based grasp pose generation algorithm (GPG)~\cite{gpg}, which outputs a set of grasp proposals $\mathcal{G}_i$. We then transform $\mathcal{G}_i$ into the gripper coordinate system and use the points inside the gripper as the input of PointNetGPD. The output grasp is then sent to the MoveIt! Task Constructor~\cite{mtc_grasp} to plan a feasible trajectory for the pick-and-place task.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{figures/shape_completion_grasp_example.pdf}
\caption{Comparison of grasp candidates generated using GPG~\cite{gpg}. (a) RGB image showing the example environment, (b) grasp generated with the partial point cloud, (c) grasp generated with the complete point cloud.}
\label{fig:grasp_candidate}
\vspace{-0.5em}
\end{figure}
PointNetGPD is trained on a grasp dataset generated using reconstructed YCB object meshes and evaluates the quality of the input grasps. The grasp candidates in the grasp dataset are all collision-free with respect to the target object. As a result, the grasp evaluation network assumes all input grasp candidates do not collide with the object. If the object is occluded due to the camera viewpoint, the current geometry-based grasp proposal algorithm will generate grasp candidates that collide with the object. Thus, using a complete point cloud ensures that the grasp candidate generation algorithm generates grasp sets that do not collide with the graspable objects. Fig.~\ref{fig:grasp_candidate} shows a comparison of grasp generation results using GPG~\cite{gpg} with and without point cloud completion: Fig.~\ref{fig:grasp_candidate}(b) shows a candidate generated using the partial point cloud and Fig.~\ref{fig:grasp_candidate}(c) shows a grasp candidate generated using the complete point cloud. We can see that the grasp in Fig.~\ref{fig:grasp_candidate}(b) collides with the real object, while in Fig.~\ref{fig:grasp_candidate}(c) such a grasp is avoided.
\section{Experiments}
\subsection{Quantitative Evaluation of the Proposed Shape Completion Network}
\textbf{Training and Implementation Details} To evaluate model performance and reduce training time, 8 categories of different objects in our dataset are chosen to train the shape completion model. The training set and validation set are split 0.8:0.2. We implement our network in PyTorch. All the building modules are trained using the Adam optimizer with an initial learning rate of 0.0001 and a batch size of 16. All the parameters of the network are initialized using a Gaussian sampler.
Batch Normalization (BN) and ReLU activation units are employed throughout the encoder and decoder modules except for the final tanh layer producing point coordinates, and dropout is used in the MHSA module to suppress overfitting.
\subsubsection{Comparison with Existing Methods}
In this subsection, we compare our method against several representative baselines that are also used for point cloud completion, including AtlasNet~\cite{groueix1802atlasnet} and MSN~\cite{liu2020morphing}. The Oracle method randomly resamples 2048 points from the original surface of the different YCB objects. The corresponding EMD and CD between the resampled point cloud and the ground-truth point cloud provide an upper bound on the performance. The comparison results are shown in Table~\ref{EMD_distance} and Table~\ref{chamfer_distance}. Our method is developed into two models based on the different point seed generators $(f/g)$ in the decoder module. It can be seen that our method outperforms the other methods on most objects on both the EMD and CD metrics. For the same completion loss, our (flat) model achieves an average improvement of about 15\% in EMD with respect to the latest MSN (vanilla) model. Since our dataset contains much noise from the camera and the environment, we found that fusing the output completion result with the original point cloud makes the performance significantly worse, as can be seen from the comparison of MSN (fusion) and MSN (vanilla). This also implies that our model is robust, which is conducive to rapid deployment in real robot experiments. Furthermore, the gap to the ideal results of the Oracle method demonstrates that point cloud completion remains a challenging task.
\subsubsection{Ablation Studies}
To comprehensively evaluate our proposed shape completion model, in this section we provide a series of ablation studies on our YCB-based dataset. The effectiveness of each special module in our model is analyzed as follows:
\begin{table}[t!]
\caption{Comparison of average EMD and CD from different point seed generators}
\label{Generator}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{c|ccc|ccc|l|l}
\hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Similarity \\ Metrics\end{tabular}}} & \multicolumn{3}{l|}{\textbf{Uniform Distribution:}} & \multicolumn{3}{l|}{\textbf{Gaussian Distribution:}} & \multicolumn{2}{l}{\multirow{2}{*}{\textbf{ZERO}}} \\ & 0:1 & -0.5:0.5 & -1:1 & 0.5,0.5/3 & 0,0.5 & 0,1 & \multicolumn{2}{l}{} \\ \hline Avg EMD & \textbf{5.94} & 7.09 & 6.50 & 6.34 & 6.15 & \textbf{6.14} & \multicolumn{2}{c}{9.88} \\ Avg CD & \textbf{1.89} & 3.25 & 2.42 & 2.39 & 2.38 & \textbf{2.12} & \multicolumn{2}{c}{6.17} \\ \hline
\end{tabular}%
}
\vspace{-1.5em}
\end{table}
\begin{table*}[htb!]
\centering \caption{Influence of different surface numbers in the decoder} \resizebox{1.0\textwidth}{!}{ \label{Surface} \begin{tabular}{l|l|l|l|l} \hline \multirow{1}{*}{\textbf{Earth Mover's Distance (EMD)}} & \multirow{1}{*}{\textbf{n=4}} & \multirow{1}{*}{\textbf{n=8}} & \multirow{1}{*}{\textbf{n=16}} & \multirow{1}{*}{\textbf{n=32}} \\\hline Mug & 4.71 & 3.94 & 3.70 & \textbf{3.61} \\ Bleach cleanser & 10.10 & 7.82 & 6.69 & {\textbf{5.94}} \\ \hline \end{tabular} \begin{tabular}{l|l|l|l|l} \hline \multirow{1}{*}{\textbf{Chamfer Distance (CD)}} & \multirow{1}{*}{\textbf{n=4}} & \multirow{1}{*}{\textbf{n=8}} & \multirow{1}{*}{\textbf{n=16}} & \multirow{1}{*}{\textbf{n=32}} \\\hline Mug & 9.01 & 6.70 &\textbf{6.61} & 6.69 \\ Bleach cleanser & 3.69 & 1.70 & \textbf{1.51} & 1.53 \\ \hline \end{tabular}}
\end{table*}
We first evaluate our transformer-based encoder module against other representative encoder modules under the same setting of convolutional/transformer layer numbers and object inputs. As shown in Tab.~\ref{Structure}, our encoder achieves the better result overall, although CMLP obtains a comparable result on the mug completion. For the flat point seed in the decoder, we further analyze the influence of different point seed distributions and surface numbers in Tab.~\ref{Generator} and Tab.~\ref{Surface}. We can see that both the uniform and the Gaussian sampling methods achieve their best results at $(0, 1)$. We choose $Uniform(0,1)$ in our model, as it achieves the best overall results. Like the weight parameters of a neural network, the point initialization values should not all be close to zero, which yields the worst result (the ZERO column). As illustrated in Tab.~\ref{Surface}, the overall model performance improves as the number of sub-surfaces increases. However, the improvement of the completion results is limited once the number exceeds 16.
\begin{table}[ht!]
\caption{Comparison of the average differences in grasp joints and grasp poses for different completion types}
\label{simulation}
\resizebox{0.485\textwidth}{!}{%
\begin{tabular}{c|c|c|c|c|c|c}
\hline
\textbf{Error} & \multicolumn{1}{l|}{\textbf{Partial}} & \multicolumn{1}{l|}{\textbf{Mirror}} & \multicolumn{1}{l|}{\textbf{Voxel-based}} & \multicolumn{1}{l|}{\textbf{RANSAC}} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}c@{}}Ours\\ (canonical)\end{tabular}}} & \textbf{\begin{tabular}[c]{@{}c@{}}Ours\\ (arbitrary)\end{tabular}} \\ \hline \begin{tabular}[c]{@{}l@{}}Grasp Joint \\ (degree)\end{tabular} & 6.27 & 4.05 & 1.80 & 1.69 &\textbf{1.15} & 2.02 \\ \hline \begin{tabular}[c]{@{}l@{}}Grasp Pose \\ (mm)\end{tabular}& 20.8 & 15.6 & 6.7 & 7.4 & 0.4 &\textbf{0.2} \\ \hline
\end{tabular}}
\vspace{-1.5em}
\end{table}
\subsubsection{Visualization Analysis}
Fig.~\ref{Visulization} shows the visualized shape completion results using our TransSC. To facilitate visual analysis, the input partial point cloud of each object is first preprocessed to remove noisy data from the camera and the environment. It can be seen that the geometric loss of the input point clouds in our dataset comes from changes of the camera viewpoint and occlusion by other objects, which poses a big challenge for our model. The output results in the canonical pose show that our model works well on both simple and complex objects. Moreover, our model can generate realistic structures and details such as the mug handle, bowl edge, and bottle mouth.
To enable robotic grasping, another shape completion model based on the arbitrary ground-truth pose is retrained by transforming the ground-truth pose to the original pose of the input partial point cloud; its completion results are also shown in Fig.~\ref{Visulization}. Obviously, the arbitrary-pose output is not as good as the canonical-pose output, although it still restores the overall shape of each object well. This also demonstrates that completing objects in arbitrary poses in the real environment is still a formidable task.
\begin{figure}[!t]
\includegraphics[width=0.48\textwidth]{figures/visulization.pdf}
\caption{Shape completion results using TransSC. The canonical pose result is trained under a fixed point cloud coordinate system, while the arbitrary pose result is trained under the camera perspective. In the robot experiment, the arbitrary pose training result is used to generate grasps.}
\label{Visulization}
\vspace{-0.5em}
\end{figure}
\subsection{Simulation Grasp Experiments with Complete Shapes}
We use GraspIt!~\cite{miller2004graspit} to evaluate the quality of the shape completion, similarly to~\cite{varley2017shape}. First, the Alpha shapes algorithm~\cite{alpha_shape} is used to reconstruct the surface of the completed object. The output 3D mesh is then imported into the GraspIt! simulator to calculate grasps. For a fair comparison, we also use the Barrett Hand to generate grasps. After finishing the grasp generation, we remove the completed object and place the ground-truth object at the same pose. Meanwhile, the Barrett Hand is moved back 20 cm along the approach direction and then approaches the object until the gripper detects a collision or reaches the calculated grasp pose. Furthermore, we adjust the gripper to the calculated grasp joint angles and perform the auto-grasp function in GraspIt! to ensure that the gripper contacts the object surface or reaches the joint limits. The joint angle difference and position difference are then recorded. We use four objects (bleach cleanser, cracker box, pitcher base, and power drill) from the YCB object set and calculate 100 grasps for each object in our experiment. We compare the average differences in joint angle and grasp pose of our shape completion model to those of Laplacian smoothing in MeshLab (Partial), mirroring completion~\cite{bohg2011mind} (Mirror), a RANSAC-based approach~\cite{papazov2010efficient} (RANSAC), and voxel-based completion~\cite{varley2017shape} (Voxel-based). Note that we use two different models, canonical and arbitrary. The canonical model means all the training data are transformed into the same object coordinate system, and the arbitrary model means all the training data are kept in the camera coordinate system. Although Fig.~\ref{Visulization} shows that the canonical model has a better shape completion result, it requires the 6D pose of the target object to map the complete point cloud into the real environment. To avoid the complication of adding a 6D pose estimation module and to enable real robot experiments, the arbitrary model is also trained. The simulation results are shown in Table~\ref{simulation}. It can be seen that Ours (canonical) achieves the best simulation grasping performance, outperforming the other completion types. Ours (arbitrary) also obtains a good simulation result, although its average joint error is slightly larger than those of the RANSAC-based and voxel-based methods. Moreover, the average grasp pose errors of both our models are significantly smaller than those of the other methods.
\subsection{Robotic Experiments}
To evaluate the performance improvement from using complete point clouds for robotic grasping, we choose six YCB objects to test the grasping success rate. The robot used for evaluation is a UR5 robot arm equipped with a Robotiq 3-finger gripper. The vision sensor is an industrial 3D camera from Mech-Mind\footnote{https://en.mech-mind.net/} to acquire a high-quality partial point cloud. The selected six objects are listed in Table~\ref{tab:real_robot_experiment_result}. We select these objects because they are typical objects that may fail to generate good grasp candidates without shape completion. Other objects such as the banana or the marker are quite simple and small, so the improvement of shape completion on the grasping result is minor.

For the selected six objects, we perform grasp evaluation on two different methods: PointNetGPD grasping with and without shape completion. We run the robot experiment by randomly placing the object on the table and grasping ten times, then calculating the success rate. The experiment results are shown in Table~\ref{tab:real_robot_experiment_result}. We can see that the grasp success rates of PointNetGPD with TransSC match or outperform the original method on all six objects. The low success rate on the power drill for both methods is because the contact area is too slippery when the robot tries to grasp the head of the power drill. The failures of PointNetGPD with observed point cloud input mainly stem from the limited camera viewpoint, which leads GPG to generate grasp candidates that sink into the object. An example of this situation is shown in Fig.~\ref{fig:grasp_candidate}. This is strong evidence that our shape completion model can improve the grasp success rate for particular objects.

\begin{table}[t!]
\caption{Real robot experiment result}
\label{tab:real_robot_experiment_result}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{c|ccccccc}
\hline
\textbf{Method} & \textbf{\begin{tabular}[c]{@{}c@{}}cracker\\ box\end{tabular}} & \textbf{mug} & \textbf{\begin{tabular}[c]{@{}c@{}}meat\\ can\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}pitcher\\ base\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}bleach\\ cleanser\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}power\\ drill\end{tabular}} & \textbf{average} \\ \hline
PointNetGPD\cite{liang2019pointnetgpd} & 70\% & 70\% & 80\% & 80\% & 90\% & 40\% & 71.67\% \\ \hline
\begin{tabular}[c]{@{}c@{}}PointNetGPD \\ with TransSC\end{tabular} & 80\% & 100\% & 100\% & 80\% & 90\% & 50\% & 83.33\% \\ \hline
\end{tabular}}
\vspace{-1.5em}
\end{table}

\section{Conclusion and Future Work}
We present a novel transformer-based shape completion network (TransSC), which is robust to sparse and noisy point cloud input. A transformer-based encoder and a manifold-based decoder are designed in our network, which enables our model to achieve strong completion results and outperform other representative methods. Besides, TransSC can easily be embedded into a grasp evaluation pipeline and improve grasping performance significantly.
The lack of geometric information about the object in our dataset is due not only to changes of the camera viewpoint but also to occlusion by other objects. Thus, TransSC can also achieve shape completion for occluded objects. In future work, our goal is to integrate semantic segmentation into our shape completion pipeline to make the robot grasp objects better in cluttered environments.

\section{Acknowledgement}
This research was funded by the German Research Foundation (DFG) and the National Science Foundation of China (NSFC) in the project Crossmodal Learning, DFG TRR-169/NSFC 61621136008, and partially supported by the European projects H2020 STEP2DYNA (691154) and Ultracept (778602). We also thank Mech-Mind Robotics for providing the 3D camera.

\addtolength{\textheight}{-4cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
\IEEEPARstart{A}{utonomous} vehicles are required to operate in challenging urban environments that consist of a wide variety of agents and objects, making comprehensive perception a critical task for robust and safe navigation. Typically, perception tasks are focused on independently reasoning about the semantics of the environment and recognition of object instances. Recently, panoptic segmentation~\cite{kirillov2019panoptic}, which unifies semantic and instance segmentation, has emerged as a popular scene understanding problem that aims to provide a holistic solution. Panoptic segmentation simultaneously segments the scene into 'stuff' classes that comprise background objects or amorphous regions such as road, vegetation, and buildings, as well as 'thing' classes that represent distinct foreground objects such as cars, cyclists, and pedestrians.

Panoptic segmentation has been extensively studied in the image domain~\cite{kirillov2019panoptic,mohan2020efficientps,porzi2019seamless,cheng2019panoptic}, facilitated by the ordered structure of images being supported by well-researched convolutional networks. However, only a handful of methods have been proposed for panoptic segmentation of LiDAR point clouds~\cite{gasperini2020panoster,milioto2020lidar}. LiDARs have become an indispensable sensor for autonomous vehicles due to their illumination independence and geometric description of the scene, making scene understanding using LiDAR point clouds an essential capability. However, the typical unordered, sparse, and irregular structure of point clouds poses several unique challenges. To this end, deep learning methods that rely on grid-based convolutions to address these challenges typically follow two different directions. They either project the point cloud into the 3D voxel space and employ 3D convolutions on it~\cite{maturana2015voxnet, graham20183d}, or they project the point cloud into the 2D space~\cite{cortinhal2020salsanext,milioto2020lidar,hurtado2020mopt} and employ the well-researched 2D Convolutional Neural Networks (CNNs). While voxel-based methods achieve high accuracy, they are computationally more expensive and require substantial memory to store the voxelized point clouds. Methods such as~\cite{graham20183d,choy20194d} leverage the sparse nature of occupied voxel grids to improve the runtime and memory consumption. The 2D projection-based methods, on the other hand, yield a denser representation and require comparatively fewer computational resources, but they suffer from information loss during projection, blurry CNN outputs, and incorrect label assignment to the occluded points during re-projection. Therefore, there is a need to bridge this gap with a method that has the advantages of fast and memory-efficient 2D convolutions while mitigating the problems due to the projection.

\begin{figure}
\footnotesize
\centering
\includegraphics[width=0.47\textwidth]{images/introtechnical/smaal_into_netpdf.pdf}
\caption{Overview of the top-down EfficientLPS architecture that consists of a shared backbone to learn spatially-aware features from the projected point cloud and individual heads to learn semantic and instance specific features which are fused in the panoptic fusion module.
The network explicitly utilizes the range information in the backbone, semantic head and fusion module to mitigate the problems due to the projection and distance-dependent sparsity of LiDAR point clouds.}
\label{fig:intropic}
\end{figure}

In this work, we present the novel Efficient LiDAR Panoptic Segmentation (EfficientLPS) architecture that effectively addresses the aforementioned challenges by employing a 2D CNN for the task while explicitly utilizing the unique 3D information provided by point clouds. EfficientLPS consists of a shared backbone comprising our novel Proximity Convolution Module (PCM), an encoder, the proposed Range-aware FPN (RFPN) and the Range Encoder Network (REN). We build the encoder and REN based on the EfficientNet~\cite{tan2019efficientnet} family, therefore we follow the convention of naming our model with the Efficient prefix. EfficientLPS also consists of a novel distance-dependent semantic segmentation head and an instance segmentation head, followed by a fusion module that provides the panoptic segmentation output.

Our network makes several new contributions to address the problems that persist in LiDAR cylindrical projections. We propose the Proximity Convolution Module (PCM) that alleviates the problems caused by the fixed geometric grid structure of a standard convolution, which is incapable of modeling object transformations such as scaling, rotation and deformation~\cite{dai2017deformable}. The problem of distance-dependent sparsity further exacerbates the limited transformation modeling capability of standard convolutions. The PCM models the transformations of objects in the scene by leveraging the contributions of nearby points, effectively reshaping the convolution kernel depending on range values.

When LiDAR points are projected into the 2D domain, objects tend to be closer to each other. Hence, the network in these cases often ignores smaller objects in favor of larger overlapping objects. Although this overlap is more distinguishable in the range channel of the projections, the features computed over all the projection channels begin to lose track of this distinction as they try to capture more and more complex representations in the deeper layers of the encoder. To alleviate this problem and enable the network to better distinguish adjacent objects, we propose the Range-aware Feature Pyramid Network (RFPN). We employ the REN parallel to the encoder to solely encode the range channel of the projection and selectively fuse it with the FPN outputs to compute range-aware multi-scale features.

Moreover, there is a large variation in the scale of objects due to the projection of the point cloud into the 2D domain. Objects that are closer tend to be larger in scale, and objects at a farther distance tend to be smaller. Hence, the 2D projection consists of objects that have distance-dependent scale variations. Typically, the instance head of top-down methods copes with this to a certain extent using many predefined anchors at different scales. However, the semantic head that predominantly aggregates multi-scale features tends to suffer~\cite{chen2018encoder,chen2018searching, mohan2020efficientps}. In order to mitigate this problem, we propose the distance-dependent semantic head that consists of modules that incorporate our range-guided depth-wise atrous separable convolutions in addition to fixed multi-dilation rate convolutions to generate features that cover a relatively large scale range in terms of the receptive field in a dense manner.
Furthermore, segmented objects in the projection domain often tend to have inaccurate boundaries. In the image domain, these inaccuracies only span a few pixels, so they have little to negligible effect on the overall performance. However, when the segmented output in the projection domain is re-projected back into point clouds, it causes leakage of object boundaries into the background and significantly affects the performance of the model. To address this problem, we introduce the novel panoptic periphery loss function that operates on the logits of the panoptic fusion module to effectively combine the outputs of both heads. Our proposed loss function refines 'thing' instance boundaries by maximizing the range separation between the foreground boundary pixels, i.e., the 'thing' instance boundary, and the neighboring background pixels.

Most supervised learning methods require large amounts of annotated training data, and manually labeling point clouds is an extremely arduous task. As an alternative solution to this problem, we explore the viability of generating pseudo labels from the abundantly available unlabeled point cloud datasets. We formulate a new framework for computing regularized pseudo labels from unlabeled data, given some labeled data with similar properties. The regularized pseudo labels aim to reduce the incorrect predictions on the unlabeled dataset to prevent confirmation bias. To the best of our knowledge, this is the first work to propose a pseudo labeling technique for any point cloud scene understanding task.

We perform extensive evaluations on the SemanticKITTI~\cite{behley2019iccv} and nuScenes~\cite{caesar2020nuScenes} datasets, which have point clouds with different densities, to demonstrate the generalization ability of our model. As the nuScenes dataset itself does not provide panoptic segmentation labels, we compute the annotations from the publicly available semantic segmentation and 3D bounding box annotations. We provide several baselines and make the nuScenes panoptic segmentation dataset publicly available to encourage future research using sparse point clouds. Our proposed EfficientLPS consistently outperforms existing methods, thereby setting the new state of the art on both datasets, and is ranked \#1 on the SemanticKITTI leaderboard. Finally, we present detailed ablation studies that demonstrate the efficacy of the various architectural contributions that we make in this work.

To summarize, the main contributions of this work are as follows:
\begin{enumerate}
\item A novel top-down architecture that consists of a shared backbone with task-specific heads that incorporate our proposed range-enforced components and a fusion module supervised by our panoptic periphery loss function.
\item The proximity convolution module which boosts the transformation modeling capacity of the shared backbone by leveraging range proximity between neighboring points.
\item The novel range-aware feature pyramid network that reinforces bidirectionally aggregated semantically rich multi-scale features with spatial awareness.
\item The new semantic head that captures scale-invariant rich characteristic and contextual features using our range-guided depth-wise atrous separable convolutions.
\item The novel panoptic periphery loss function that refines the segmentation of 'thing' instances by maximizing the range separation between foreground boundary pixels and neighboring background pixels.
\item A new framework for improving panoptic segmentation of LiDAR point clouds by exploiting large unlabeled datasets via regularized pseudo label generation.
\item Exhaustive quantitative and qualitative evaluations of our model along with comprehensive ablation studies of our proposed architectural components.
\item We make the code and models publicly available at \url{http://rl.uni-freiburg.de/research/lidar-panoptic}
\end{enumerate}

\section{Related Work}
\label{sec:realtedwork}
Panoptic segmentation has emerged as a popular scene understanding task since its introduction by Kirillov~\textit{et~al.}~\cite{kirillov2019panoptic}. By unifying semantic segmentation and instance segmentation, it aims at holistic scene understanding and reasoning. While panoptic segmentation has been extensively studied in the 2D domain using RGB images, only a handful of methods address this task in the 3D domain of LiDAR point clouds. In the following, we first discuss different convolution kernels and different techniques of representing point cloud data. We then discuss recent works that address the various scene segmentation tasks using point clouds in autonomous driving scenarios, namely: semantic, instance, and panoptic segmentation.

\subsection{Convolution kernels}
Standard 2D convolutions lack the ability to model geometric relations in the 3D domain due to the nature of the convolution operation that samples from fixed locations. Some works model these relations with the help of non-regular grids~\cite{dai2017deformable} where the learned offsets change the shape of the sampling grid in convolutions. Using RGB-D images, Wang~\textit{et~al.}~\cite{wang2018depth} directly use the depth value to weight the contribution of neighboring pixels to the convolution output. While the convolution shaping methods using RGB images learn the missing spatial information, we propose a shaping mechanism using the spatial information already available in point clouds. We devise the Proximity Convolution Module (PCM) that reshapes the convolution grid to capture local contextual information from neighboring pixels. This is especially helpful for capturing contextual information of very distant points in LiDAR point clouds that suffer from distance-induced sparsity.

\subsection{Data Representation}
The methods that are typically employed on point clouds can be broadly classified into three categories, namely: point-based, volumetric, and projection-based techniques. PointNet~\cite{qi2017pointnet} is one of the pioneering point-based methods; it learns features using MLPs followed by max-pooling to extract global context. More recent methods~\cite{boulch2020convpoint, xu2018spidercnn, thomas2019kpconv} develop convolution operations and kernels specially designed to work with point clouds. Kernel Point Convolution (KPConv)~\cite{thomas2019kpconv} is one such method with flexible kernel points in the 3D space, which are learned on point clouds in a similar manner as in 2D convolutions. On the other hand, volumetric methods transform the point clouds into regular voxel grids and apply 3D convolutions~\cite{maturana2015voxnet}. Most methods that rely on 3D convolutions are both memory- and compute-intensive, which limits the resolution of the voxels and hence the overall performance. However, to account for the sparse nature of the voxel grid, sparse 3D convolutions~\cite{graham20183d,choy20194d} have been proposed to decrease the runtime and memory footprint.
In projection-based methods, the point cloud is projected onto an intermediate regular 2D grid representation to facilitate the use of well-researched 2D convolution architectures. To obtain pseudo sensor data such as the grid representation, existing methods project points either using the spherical projection~\cite{milioto2019rangenet++,wu2018squeezeseg} or using scan unfolding~\cite{triess2020}. Conversely, point clouds are also projected into a bird's eye view (BEV)~\cite{zhang2020polarnet} to exploit the radial nature and obtain better spatial segregation. In this work, we employ scan unfolding due to its ability to recover dense representations, similar to the original format that a LiDAR sensor provides.

\subsection{Scene Understanding using LiDAR Point Clouds}
\subsubsection{Semantic Segmentation}
The challenges posed by the unordered and sparse nature of point clouds have hindered the progress in LiDAR semantic segmentation for autonomous driving. Considerably fewer techniques have been proposed to address this task using point clouds in comparison to methods in the visual domain. Dewan~\textit{et~al.}~\cite{dewan2017deep} propose an approach to classify points into movable, non-movable, and dynamic classes, by combining deep learning-based semantic cues and rigid-motion-based cues in a Bayesian framework. Wu~\textit{et~al.}~\cite{wu2018squeezeseg} propose a projection-based approach that builds upon SqueezeNet and introduces the fire module which is incorporated into the encoder and decoder. DeepTemporalSeg~\cite{dewan2020deeptemporalseg} employs Bayesian filtering to obtain temporally consistent semantic segmentation. The release of the SemanticKITTI~\cite{behley2019iccv} dataset motivated many works in semantic segmentation of LiDAR point clouds. Milioto~\textit{et~al.}~\cite{milioto2019rangenet++} propose a 2D CNN architecture that operates on spherically projected point clouds and employs a kNN-based post-processing step to account for the occlusions due to the projection. SalsaNext~\cite{cortinhal2020salsanext} uses spherical projection for semantic segmentation and also performs uncertainty estimation. PolarNet~\cite{zhang2020polarnet} projects the point cloud into the bird's-eye view and employs ring convolutions on the radially defined grids. Some methods follow a hybrid approach by combining 2D and 3D convolutions. KPRNet~\cite{kochanov2020kprnet} uses 2D convolutions for semantic segmentation followed by KPConv-based~\cite{thomas2019kpconv} post-processing using point-wise convolutions. SPVNAS~\cite{tang2020searching} automates architecture design by employing neural architecture search to find efficient 3D convolution-based models.

\subsubsection{Instance Segmentation}
Similar to the image domain, instance segmentation of point clouds can be classified into two categories: proposal-based and proposal-free methods. Proposal-based methods perform 3D bounding box detection followed by point-wise mask generation for the points in each bounding box. 3D-BoNet~\cite{yang2019learning} follows this approach using two separate 3D bounding box proposal generation and mask generation branches. GSPN~\cite{yi2019gspn} generates proposals using shape-aware proposal generation for different instances. On the other hand, proposal-free methods directly predict the instances by detecting keypoints such as the centroid of the instance, or the similarity between points, which is followed by clustering~\cite{zhang2020instance}.
SGPN~\cite{wang2018sgpn} learns a similarity matrix between the points, which is used to cluster points with high similarity scores. PointGroup~\cite{jiang2020pointgroup} extracts semantic information using 3D sparse convolutions to cluster points towards the instance centroid, followed by using the original points and the clustered points to obtain the final prediction. VoteNet~\cite{ding2019votenet} predicts an offset vector from every point to its instance centroid and then employs clustering.

\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{images/introtechnical/EFFICIENTLPS_1.pdf}
\caption{Illustration of our proposed EfficientLPS architecture for LiDAR panoptic segmentation. The point cloud is first projected into the 2D domain using scan unfolding and fed as an input to our Proximity Convolution Module (PCM). Subsequently, we employ the shared backbone consisting of the EfficientNet encoder with the 2-way FPN and the Range Encoder Network (REN) in parallel. The outputs of these two modules are fused and fed as input to the semantic and instance heads. The logits from both heads are then combined in the panoptic fusion module which is supervised by the panoptic periphery loss function. Finally, the output of the panoptic fusion module is projected back to the 3D domain using a kNN algorithm.}
\label{fig:network}
\end{figure*}

\subsubsection{Panoptic Segmentation}
Panoptic segmentation methods can also be classified into proposal-free (bottom-up) or proposal-based (top-down) techniques. Bottom-up methods group points belonging to the same instances either by a voting scheme or based on a pixel-pair affinity pyramid, while simultaneously learning the semantic labels~\cite{gao2019ssap}. On the other hand, top-down approaches~\cite{hurtado2020mopt} tackle the problem in a two-stage manner with a dedicated instance segmentation branch for detecting and segmenting~'thing' classes, and a semantic segmentation branch for segmenting the~'stuff' classes. Milioto~\textit{et~al.}~\cite{milioto2020lidar} and Gasperini~\textit{et~al.}~\cite{gasperini2020panoster} adopt the bottom-up approach where instances are detected without region proposals. Milioto~\textit{et~al.} use spherical projection of point clouds and predict offsets to the centroids for aiding clustering. They also use the 3D information available in the range images for trilinear upsampling in the decoder. Panoster~\cite{gasperini2020panoster} uses an instance head which directly provides the instance ids of the points from learnable clustering without any explicit grouping requirement. In addition to the spherical projection-based method RangeNet++~\cite{milioto2019rangenet++}, Panoster~\cite{gasperini2020panoster} also shows an implementation of its clustering mechanism using the point-based method KPConv~\cite{thomas2019kpconv} for semantic segmentation. PanopticTrackNet~\cite{hurtado2020mopt} further unifies panoptic segmentation with multi-object tracking and provides temporally consistent instance labels.

\subsubsection{Semi-Supervised Learning}
Semi-supervised learning (SSL) for LiDAR panoptic segmentation has not been explored. Thus, we discuss the two prominent SSL approaches for 3D object detection. SESS~\cite{zhao2020sess} trains an EMA-based teacher model and a student model simultaneously with a consistency loss between them, whereas 3DIoUMatch~\cite{wang20213dioumatch} employs a mean-teacher~\cite{tarvainen2017mean} based framework using an IoU prediction filtering mechanism.
Both approaches rely on augmentations of the point clouds that are already available. In contrast, our proposed pseudo labeling framework exploits external unlabeled point cloud datasets with a separate teacher and student model.

In this work, we present a novel LiDAR panoptic segmentation network that effectively exploits the advantages of projection-based top-down methods. Our proposed architecture comprises a shared backbone that incorporates the proposed proximity convolution module in the beginning to boost its geometric transformation modeling capacity and the novel range-aware FPN at the end to capture spatially aware and semantically rich multi-scale features. It further consists of a modified Mask~R-CNN~\cite{he2017mask} instance head and a new semantic head that fuses distance-dependent fine and long-range contextual features with distance-independent features for enhanced scale invariance. We also propose the novel panoptic periphery loss function that refines the 'thing' object class boundaries by maximizing the range difference between the foreground and background pixels of the instance boundaries. All of the aforementioned modules effectively leverage the intricacies of LiDAR data to address the issues prevalent in projection-based LiDAR segmentation. Additionally, we explore the viability of pseudo labeling for LiDAR panoptic segmentation and thus propose a novel regularized pseudo labeling framework for the same.

\section{Technical Approach}
\label{sec:technical}
In this section, we present a brief overview of our proposed EfficientLPS architecture and then detail each of its constituting components. \figref{fig:network} illustrates the topology of EfficientLPS that follows the top-down layout. First, we project the point cloud from the LiDAR scanner into the two-dimensional space using scan unfolding~\cite{triess2020}. The projected representation comprises five channels: range, intensity and the $(x,y,z)$ coordinates. We then employ our novel shared backbone which consists of our proposed Proximity Convolution Module (PCM) to aid in modeling geometric transformations, followed by a modified EfficientNet-B5 encoder with the 2-way~FPN~\cite{mohan2020efficientps}. We employ the proposed Range Encoder Network (REN) in parallel, which takes the range channel of the projected point cloud as input. We then fuse the multi-scale outputs of the encoder and REN to obtain the range-aware feature pyramid that enhances the ability to distinguish adjacent objects at different distances. The entire shared backbone is enclosed with green dashed lines in \figref{fig:network}.\looseness=-1

Following the backbone, we employ the parallel semantic segmentation (depicted in orange) and instance segmentation (depicted in purple) heads. The semantic head consists of Dense Prediction Cells (DPC)~\cite{chen2018searching} and Large Scale Feature Extractor (LSFE)~\cite{mohan2020efficientps} units, which we extend with our proposed range-guided depth-wise atrous separable convolutions to enable capturing scale-invariant long-range contextual and fine features. We use a variant of Mask~R-CNN~\cite{he2017mask} for the instance head. We then fuse the logits from both heads in the panoptic fusion module~\cite{mohan2020efficientps} which is supervised by our panoptic periphery loss to facilitate object boundary refinement and yield the panoptic segmentation in the projection domain.
Finally, we re-project the predictions into the 3D space to obtain the final panoptic segmentation output for the input point cloud. During training, we employ our proposed pseudo labeling technique to train our model with both labeled and unlabeled data. In the rest of this section, we describe each of the aforementioned components in detail.

\subsection{Projection using Scan Unfolding}
\label{sec:scanUnfold}
We employ scan unfolding~\cite{triess2020} to project the point cloud into the 2D range image format. Scan unfolding aims to mitigate the problems of the alternative spherical projection method, which suffers from point occlusions due to ego-motion correction. It also yields a much denser representation than spherical projection. LiDAR sensors typically provide raw data in a range image-like format, with each pixel describing the range value at a particular row and column. Each column of this range image consists of measurements taken by individual modules stacked vertically within the sensor at a particular time, and each row represents the consecutive measurements of one module taken as the sensor spins. However, most of the publicly available datasets provide LiDAR measurements as a list of 3D Cartesian coordinates, without any information about the column or row indices. Hence, we assign these indices to every point in the laser scan to recover the range image-like representation.

In order to project the LiDAR scan represented as a point cloud to the range image, we assign row and column indices to every point in the scan corresponding to an image of size $W \times H$. The list of points provided by datasets such as KITTI~\cite{Geiger2013IJRR} is typically constructed by simply concatenating the rows of each scan. Hence, it is still ordered by horizontal and vertical indices. The scan unfolding algorithm takes advantage of this information and sequentially computes the yaw difference between consecutive points to recover the vertical index. A jump is detected when the yaw difference is above a pre-defined threshold, which increments the vertical index of the following points in the list. We choose a threshold value of $\SI{310}{\degree}$, since the yaw angle typically drops from a value near $\SI{360}{\degree}$ to near $0$ between the rows. The horizontal index is computed as $\floor{(0.5 (1-\phi/\pi) W)}$, where $\phi$ is the yaw angle of each point in the range $[-\pi, \pi]$. The points are projected to their corresponding rows and columns with range, intensity and $(x,y,z)$ coordinates represented as separate channels. This results in a tensor of shape $(5 \times H \times W)$, which is fed as input to the network. Note that the value of $H$ is the number of vertically placed sensor modules in the sensor, which is 64 for the KITTI dataset. Other datasets that already contain the vertical index information, such as nuScenes, only require the computation of the horizontal index.
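The projection described above can be summarized by the following NumPy sketch; the image width and the handling of the yaw wrap-around in $[-\pi, \pi]$ are illustrative assumptions.

\begin{verbatim}
import numpy as np

def scan_unfolding_projection(points, W=2048, H=64, jump_deg=310.0):
    """Sketch of scan unfolding for a KITTI-style, row-ordered point list.

    points: (N, 4) array of (x, y, z, intensity). Returns a (5, H, W)
    tensor with range, intensity, x, y, z channels; W is an
    illustrative default.
    """
    x, y, z, intensity = points.T
    yaw = np.arctan2(y, x)  # in [-pi, pi]
    # horizontal index: u = floor(0.5 * (1 - yaw/pi) * W)
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * W).astype(int), 0, W - 1)
    # vertical index: a new row starts whenever the yaw wraps around,
    # i.e., the absolute yaw difference exceeds the jump threshold
    jumps = np.abs(np.diff(yaw)) > np.deg2rad(jump_deg)
    v = np.clip(np.concatenate(([0], np.cumsum(jumps))), 0, H - 1)
    rng = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    proj = np.zeros((5, H, W), dtype=np.float32)
    proj[:, v, u] = np.stack((rng, intensity, x, y, z))  # later points overwrite
    return proj
\end{verbatim}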
\subsection{EfficientLPS Architecture}
\subsubsection{Backbone}
The backbone consists of our proposed Proximity Convolution Module (PCM), followed by an encoder and the novel Range-aware Feature Pyramid Network (RFPN). We detail each of these components in the following section.

\noindent\textbf{Proximity Convolution Module:} The core of the PCM is the proximity convolution operation. The standard convolution operation performs sampling over a feature map followed by a weighted sum of the sampled values to yield an output feature map $y$. The convolution at pixel $p$ is computed as
\begin{equation}
y(p)=\sum_{p_o\in{R}}w(p_o)\cdot x(p+p_o),
\label{eq:standard_conv}
\end{equation}
where ${R}$ is a regular sampling grid containing the sampling offset locations around $p$ in the input feature map $x$, weighted by the kernel $w$. The standard convolution is limited in its geometric transformation modeling capacity due to its fixed grid structure. The distance-dependent sparsity present in LiDAR data further exacerbates the effects of this limitation. To tackle this constraint, we propose the proximity convolution which exploits range information to augment the spatial sampling locations, effectively improving the transformation modeling ability.

Formally, in the proximity convolution, for each pixel $p$ in the projected range image $R \in \mathbb{R}^{h \times w}$, we compute its nearest neighbors using the k-Nearest-Neighbors (kNN) algorithm. Here, we use the range difference of the corresponding points from the range image as the distance in 3D space to find the nearest neighbors. Subsequently, we sort the nearest neighbors in ascending order of their range difference to the query pixel. We now adapt \eqref{eq:standard_conv} as
\begin{equation}
y(p)=\sum_{p_n\in{N}}w(p_n)\cdot x(p+p_n),
\label{eq:range_conv}
\end{equation}
where ${N}$ is no longer a regular grid but consists of offsets for the \textit{n} nearest neighbors of pixel $p$. Please note that the grid ${N}$ includes the offset to the query pixel and its $n-1$ nearest neighbors. The weights $w$ are learned in the same manner as in standard convolutions. The search grid for the kNN algorithm is always larger than the learnable weight matrix, and the value of $k$ is the product of the kernel size dimensions.

\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{images/introtechnical/_small_pacnew.png}
\caption{Comparison of pixel sampling while applying the standard convolution and our proposed proximity convolution. The highlighted red box contains a car and the green box contains a bike. The convolution is applied by placing the kernel at the yellow dot, and the sampled neighboring pixels are represented by red dots. Observe that the neighboring pixels are sampled adaptively based on the range difference of the corresponding points in the range image.}
\label{fig:PAC}
\end{figure}

The sampling operation of the proximity convolution in comparison to the standard convolution is illustrated in \figref{fig:PAC}. In the example, the convolutions are performed at the border of the objects. As shown in \figref{fig:PAC}, the proximity convolution forms the kernel according to the shape of the object, while the standard convolution obtains information outside the objects. In particular, the standard convolution is not able to obtain information for the bike rider (green rectangle), since the large distance of the object from the sensor causes increased sparsity. Since the proximity convolution is less dependent on this distance-induced sparsity, it successfully represents the shape of the bike rider. Therefore, the proximity convolution models geometric transformations more effectively, especially for objects farther away that suffer from distance-induced sparsity. The proximity convolution module comprises the proposed proximity convolution, synchronized Inplace Activated Batch Normalization (iABNsync)~\cite{porzi2019seamless} and a Leaky ReLU activation.
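The neighbor selection underlying the proximity convolution can be sketched as follows; the per-pixel Python loop and the window size are for illustration only, as the actual implementation operates on whole feature maps.

\begin{verbatim}
import numpy as np

def proximity_offsets(rng, row, col, n=9, search=5):
    """Sketch of the neighbor selection for the proximity convolution.

    Within a (search x search) window of the projected range image rng,
    return the offsets of the n pixels whose ranges are closest to the
    query pixel's range, sorted in ascending order of range difference
    (the query pixel itself, at offset (0, 0), always comes first).
    """
    h, w = rng.shape
    half = search // 2
    offsets, diffs = [], []
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            r, c = row + dy, col + dx
            if 0 <= r < h and 0 <= c < w:
                offsets.append((dy, dx))
                diffs.append(abs(rng[r, c] - rng[row, col]))
    order = np.argsort(diffs)[:n]  # the sampling grid N of the convolution
    return [offsets[k] for k in order]
\end{verbatim}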
We use iABNsync in this layer and all the subsequent parts of the network, in contrast to the vanilla batch normalization layer, as it provides a better estimate of the gradients and reduces the GPU memory footprint. We study the performance of the proximity convolution module in the ablation study presented in \secref{sec:pil}.

\noindent\textbf{Encoder:} We adopt the EfficientNet~\cite{tan2019efficientnet} topology for the main encoder as well as the Range Encoder Network (REN). We remove the Squeeze and Excitation~(SE) connections to enable better localization of features and contextual elements. Similar to the proposed proximity convolution module, we replace the batch normalization layers with iABNsync and Leaky ReLU activation. The EfficientNet architecture comprises nine blocks, where blocks 2, 3, 5, and 9 yield multi-scale features that correspond to the downsampling factors of $\times4$, $\times8$, $\times16$ and $\times32$ respectively. EfficientNet employs compound scaling to scale the base network efficiently. Here, width, depth, and the resolution of the network are the coefficients available for scaling. We choose the scaling coefficients for the main encoder as 1.6, 2.2, and 456 respectively, and the coefficients for the REN as 0.1, 0.1, and 224 respectively, which we obtain via grid search optimization. The output of the PCM is fed as input to the main encoder, and the REN takes the projected range image as input. \figref{fig:network} depicts the main encoder in light purple and the REN in red.

\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{images/introtechnical/rdpc.png}
\caption{Topology of the different proposed architectural components in EfficientLPS. (a)~Range-Aware FPN (RFPN) and (b) the feature fusion module used for fusing range-encoded features with FPN features in the RFPN. (c) Range-guided Dense Prediction Cells (RDPC) and (d) Range-guided Large Scale Feature Extractor (RLSFE) modules are part of the proposed semantic head. Lastly, (e) the range-guided depth-wise atrous separable convolution is the mechanism for controlling dilation offsets in the RDPC and RLSFE modules.}
\label{fig:components}
\end{figure*}

\noindent\textbf{Range-Aware FPN:} Our proposed range-aware FPN (RFPN) reinforces the coherently aggregated fine and contextual features of Feature Pyramid Networks (FPNs) with distance awareness. This enables the network to better segregate adjacent objects with different range variations. We build upon the 2-way FPN~\cite{mohan2020efficientps} that enables bidirectional flow of information using two parallel branches that aggregate multi-scale features from the main encoder in a top-down and bottom-up manner respectively. The 2-way FPN is depicted with yellow blocks in \figref{fig:network}. The outputs from both parallel branches at each resolution are summed together and passed through a $3\times3$ convolution with 256 output channels to yield the outputs: P\textsubscript{4}, P\textsubscript{8}, P\textsubscript{16}, and P\textsubscript{32}. Note that we use standard convolutions in the 2-way FPN instead of the separable convolutions used in~\cite{mohan2020efficientps} to learn richer representations at the expense of additional parameters. Our proposed RFPN consists of the aforementioned 2-way FPN, the REN module, and the feature fusion module as shown in \figref{fig:components}~(a). The REN module is employed in parallel to the 2-way FPN.
We find that enabling the REN to learn to encode range information explicitly at different scales, rather than directly downsampling the range data, yields better performance, as shown in the ablation studies presented in \secref{sec:RFPN}. The outputs of the REN at four different resolutions (R\textsubscript{4}, R\textsubscript{8}, R\textsubscript{16} and R\textsubscript{32}) and the outputs of the 2-way FPN (P\textsubscript{4}, P\textsubscript{8}, P\textsubscript{16} and P\textsubscript{32}) are fed as input to the feature fusion module, which computes range-aware pyramid features for each of the corresponding resolutions (RP\textsubscript{4}, RP\textsubscript{8}, RP\textsubscript{16} and RP\textsubscript{32}). Here, 4, 8, 16 and 32 denote the different downsampling factors with respect to the input. The purpose of the fusion module is two-fold. First, to fuse the inputs R\textsubscript{s} and P\textsubscript{s}, where s denotes the downsampling factor. Second, to enable the network to emphasize the more informative features between the 2-way FPN features and the corresponding fused features, hence incorporating distance awareness selectively. As shown in \figref{fig:components}~(b), the feature fusion module consists of two branches. The first branch takes the concatenated tensors R\textsubscript{s} and P\textsubscript{s} as input, and feeds them through two $3\times3$ convolution layers sequentially to yield the fused features G\textsubscript{s}. Additionally, we use iABNsync and Leaky ReLU layers after each $3\times3$ convolution. We compute the weight factors for this branch ($w_{fs}$) by employing a $1\times1$ convolution followed by a sigmoid activation. The second branch propagates P\textsubscript{s} in parallel, and the weight factor of this branch is computed as $1-w_{fs}$. The output (RP\textsubscript{s}) of this module is then given by
\begin{equation}
{RP}_{s} = w_{fs} \cdot G_{s} + (1-w_{fs}) \cdot P_{s}.
\label{eq:range_fpn}
\end{equation}
For the sake of simplicity, the fusion module is depicted as one blue box in \figref{fig:network}, whereas in practice each resolution has its own exclusive feature fusion module depicted in \figref{fig:components}~(b). We present a detailed analysis of the different components of the range-aware~FPN in the ablation study in \secref{sec:RFPN}.

\subsubsection{Distance-Dependent Semantic Head}
The main component of our proposed distance-dependent semantic head is the novel range-guided depth-wise atrous separable convolution operation. We essentially encode the range using the REN module and compute the dilation factor to apply at each central pixel from the encoded features. We then employ the depth-wise atrous separable convolution operation with the computed dilation factor, thereby enabling the receptive field to adapt to the range data. As shown in \figref{fig:components}~(e), we employ a $3\times3$ convolution on the encoded range features from the REN module to obtain the dilation offsets for each pixel. Subsequently, we compute the bilinearly interpolated neighbors from the input at the corresponding offsets for each pixel. We then employ a $3\times3$ depth-wise separable convolution on the computed neighbors to generate the final output. Note that the input, the REN encoded features, and the dilation offsets have the same spatial resolution. We also use a parameter $D_{max}$ in this convolution to set the maximum value for the dilation offsets.
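A PyTorch sketch of this operation is given below. Predicting a single scalar dilation per pixel, capped at $D_{max}$, and assuming that the REN-encoded range features have the same number of channels as the input are simplifications we make for illustration; the exact offset parameterization may differ.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class RangeGuidedAtrousSepConv(nn.Module):
    """Sketch of a 3x3 range-guided depth-wise atrous separable convolution.

    A 3x3 convolution on the REN-encoded range features predicts a
    per-pixel dilation in [1, d_max]; the depth-wise 3x3 kernel is applied
    to bilinearly interpolated neighbors at that dilation, followed by a
    point-wise convolution.
    """
    def __init__(self, channels, d_max=24):
        super().__init__()
        self.d_max = float(d_max)
        self.dilation_pred = nn.Conv2d(channels, 1, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(channels, 3, 3) * 0.01)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x, range_feat):
        b, c, h, w = x.shape
        d = 1.0 + (self.d_max - 1.0) * torch.sigmoid(
            self.dilation_pred(range_feat))[:, 0]            # (B, H, W)
        ys, xs = torch.meshgrid(
            torch.arange(h, device=x.device, dtype=x.dtype),
            torch.arange(w, device=x.device, dtype=x.dtype), indexing="ij")
        out = torch.zeros_like(x)
        for i, dy in enumerate((-1.0, 0.0, 1.0)):
            for j, dx in enumerate((-1.0, 0.0, 1.0)):
                # bilinear sampling at the dilation-scaled offset
                gx = 2.0 * (xs + d * dx) / (w - 1) - 1.0     # grid in [-1, 1]
                gy = 2.0 * (ys + d * dy) / (h - 1) - 1.0
                grid = torch.stack((gx, gy), dim=-1)         # (B, H, W, 2)
                sampled = F.grid_sample(x, grid, align_corners=True,
                                        padding_mode="border")
                out = out + sampled * self.weight[:, i, j].view(1, c, 1, 1)
        return self.pointwise(out)
\end{verbatim}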
The range-guided depth-wise atrous separable convolution learns scale-invariant features, as the dilation rate of the convolution kernel changes with the distance, and so does the scale of objects. We take advantage of this scale invariance in the proposed semantic head of our EfficientLPS architecture. We extend the semantic head proposed in~\cite{mohan2020efficientps}, consisting of Dense Prediction Cells (DPC), Large Scale Feature Extractor (LSFE), and Mismatch Correction Module (MC) with bottom-up path augmentation connections. We effectively retain the MC module and the bottom-up path augmentation connections but redesign the DPC and LSFE modules by incorporating our range-guided depth-wise atrous separable convolutions. We refer to this new LSFE variant as the Range-guided Large Scale Feature Extractor (RLSFE), which comprises a $3\times3$ range-guided depth-wise atrous separable convolution with 256 output filters and $D_{max}=3$, followed by an iABNsync and a Leaky ReLU activation function. The $D_{max}$ parameter is set to a lower value in this module as it captures fine features that can get distorted with higher dilation rates. We employ two $3\times3$ separable convolutions with 128 output filters, each followed by iABNsync and a Leaky ReLU activation, to further perform channel reduction and learn deeper features. \figref{fig:components}~(d) shows the topology of our proposed RLSFE module.

The topology of our Range-guided Dense Prediction Cells (RDPC) module is depicted in \figref{fig:components}~(c). We refer to the $3\times3$ depth-wise separable convolution with 256 output channels and a dilation rate of (1,6) as branch A. Similarly, the parallel $3\times3$ range-guided depth-wise atrous separable convolution with 128 output filters and $D_{max}=24$ is referred to as branch B. Branch A further splits into four parallel branches. Three of the branches consist of a $3\times3$ depth-wise separable convolution with 256 output channels and dilation rates (1,1), (6,21), and (18,15) respectively. The fourth branch is an identity connection that goes to the end in parallel to all the branches. The fifth branch concatenates with branch B, consists of a $3\times3$ range-guided depth-wise atrous separable convolution with 128 output filters and $D_{max}=24$, and runs parallel to the other branches. The branch with (18,15) dilation rates further branches out into an identity connection and a branch with a $3\times3$ depth-wise separable convolution with 256 output channels and dilation rate (6,3). In the end, there are a total of six parallel branches that are concatenated together to yield a tensor with 1536 channels. Finally, a $1\times1$ convolution with 256 output channels generates the output of the RDPC module. Please note that each of the aforementioned convolutions in the RDPC module is followed by an iABNsync and Leaky ReLU activation. The RDPC module essentially integrates range-guided depth-wise atrous separable convolutions with fixed multi-dilation-rate ones to generate features that cover a relatively large scale range in terms of the receptive field in a dense manner.

In summary, our proposed distance-dependent semantic head consists of two RDPC modules employed at the $\times32$ and $\times16$ downsampling factors, whose inputs are $RP_{32}$, $R_{32}$ and $RP_{16}$, $R_{16}$ respectively. It also utilizes two RLSFE modules for the $\times8$ and $\times4$ downsampling factors with $RP_{8}$, $R_{8}$ and $RP_{4}$, $R_{4}$ as inputs.
These modules are subsequently followed by the MC module~\cite{mohan2020efficientps} with bottom-up path augmentations. In the last step, we apply a $1\times1$ convolution with $N_{stuff+thing}$ output filters and upsample to the resolution of the input image. We train our semantic head with the equally weighted per-pixel log-loss (${L}_{pp}$)~\cite{bulo2017loss} and \textit{Lov\'{a}sz-Softmax}~loss (${L}_{LS}$)~\cite{berman2018lovasz} as
\begin{equation}\label{eq:Lsemantic}
{L}_{semantic}(\Theta) = {L}_{pp} + {L}_{LS}.
\end{equation}
We evaluate the performance of our proposed semantic head in the ablation study presented in \secref{sec:semantichead}.

\subsubsection{Instance Head}
We adopt the Mask~R-CNN~\cite{he2017mask} topology for the instance head and make certain modifications. We replace the batch normalization and ReLU activations with iABNsync and Leaky ReLU layers respectively. \figref{fig:network} shows the instance head depicted in purple blocks, which consists of two stages. In the first stage, the Region Proposal Network (RPN) employs a fully convolutional network to generate object proposals and objectness scores for each output resolution of the RFPN module. The RPN is trained with the objectness score loss ${L}_{os}$~\cite{mohan2020efficientps} and the object proposal loss ${L}_{op}$~\cite{mohan2020efficientps}. In the subsequent stage, ROI align extracts features by directly pooling from the n$^{\text{th}}$ channel of the FPN encodings with a $14\times14$ spatial resolution, bounded within the object proposals obtained in the previous stage. These extracted features are then fed to specialized bounding box regression, object classification and mask segmentation networks. The second stage is trained with the classification loss ${L}_{cls}$~\cite{mohan2020efficientps}, bounding box loss ${L}_{bbx}$~\cite{mohan2020efficientps} and mask segmentation loss ${L}_{mask}$~\cite{mohan2020efficientps}. The overall loss of the instance segmentation head is the equally weighted summation of the aforementioned losses as
\begin{equation}\label{eq:Linstance}
{L}_{instance} = {L}_{os} + {L}_{op} + {L}_{cls} + {L}_{bbx} + {L}_{mask}.
\end{equation}
Note that the gradients from the losses ${L}_{cls}$, ${L}_{bbx}$ and ${L}_{mask}$ are allowed to flow only through the network backbone and not through the RPN.

\subsection{Panoptic Fusion}
We fuse the outputs of the semantic and instance heads using the heuristic proposed in~\cite{mohan2020efficientps} to yield the panoptic predictions in the projection domain. This heuristic enables us to adaptively fuse the predictions of both heads, which alleviates the inherent overlap problem. The instance head outputs a set of object instances comprising a class prediction, confidence score, bounding box, and mask logits for each instance, while the semantic head outputs semantic logits with $N_{stuff}+N_{thing}$ channels. We first compute the mask logit $ML_A$ for the object instances by applying a series of operations on the outputs of the instance head, consisting of thresholding, sorting, scaling, resizing, padding, and overlap filtering. Subsequently, we compute the mask logit $ML_B$ for the corresponding object instances from the outputs of the semantic head by channel selection based on the class of the object instances, and suppress the logits for that channel outside the instance bounding box.
Finally, we adaptively fuse the two logits $ML_A$ and $ML_B$ to yield the fused mask logits $FL$ of the instances as
\begin{equation} \label{eq}
FL = (\sigma(ML_A) + \sigma(ML_B)) \odot (ML_A + ML_B),
\end{equation}
where $\odot$ is the Hadamard product, and $\sigma(\cdot)$ is the sigmoid function. In the next step, we concatenate the 'stuff' logits from the output of the semantic head and the fused logits, followed by applying the softmax function. Subsequently, we apply the argmax function along the channel dimension to obtain the intermediate panoptic prediction. To compute the final output, we replace the non-'thing' class predictions in the intermediate prediction with the 'stuff' class predictions of the semantic head, while ignoring the classes that have an area smaller than a pre-defined area threshold $min_{sa}$.

\subsection{Panoptic Periphery Loss}
We propose the panoptic periphery loss function which exploits range information to refine the boundaries of the 'thing' class objects. By minimizing this loss, the boundary pixels of instances are adapted to maximize the range difference to the adjacent background pixels. This is motivated by the fact that there is typically a range gap at the borders of object instances. Consider that the network provides a set of instances \textit{I} of 'thing' class objects, and for each instance, we have its foreground and background pixels. Then, for a given range image \textit{R}, the panoptic periphery loss function is defined as
\begin{equation}
L_{refine} = - \frac{1}{|B|} \sum_{i \in I} \sum_{b \in B_i} \left[ \max_{n \in N} \left( k_n \, (r_b - r_n)^2 \right) \right],
\end{equation}
where $|B|$ is the total number of boundary points over all instances, $B_i$ is the set of boundary pixels for instance \textit{i}, $N$ is the set of the four immediate neighbors of pixel location $b$, and $r_b$ and $r_n$ are the range values at pixel locations $b$ and $n$, respectively. $k_n = 1$ for $n$ being a background pixel and $0$ otherwise. The negative sign ensures that the loss decreases when the range difference between boundary and background increases. The overall loss $L$ for training EfficientLPS is given by
\begin{equation}\label{eq:Loverall}
{L} = {L}_{semantic} + {L}_{instance} + {L}_{refine},
\end{equation}
where ${L}_{semantic}$ is the semantic head loss, ${L}_{instance}$ is the instance head loss and $L_{refine}$ is the panoptic periphery loss for boundary refinement.
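A non-differentiable NumPy sketch of the periphery term is shown below for clarity; during training the loss operates on boundaries derived from the fused logits, and the border wrap-around introduced by np.roll is ignored for brevity.

\begin{verbatim}
import numpy as np

def panoptic_periphery_loss(rng, boundary_mask, background_mask):
    """Sketch of the panoptic periphery loss on a projected range image.

    rng:             (H, W) range image
    boundary_mask:   (H, W) bool, 'thing' boundary pixels of all instances
    background_mask: (H, W) bool, background pixels
    For each boundary pixel, take the maximum squared range difference to
    its four immediate background neighbors (k_n zeroes out non-background
    neighbors); the loss is the negative mean over all boundary pixels.
    """
    best = np.zeros(rng.shape)
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        neighbor_r = np.roll(rng, (dy, dx), axis=(0, 1))
        neighbor_bg = np.roll(background_mask, (dy, dx), axis=(0, 1))
        best = np.maximum(best, (rng - neighbor_r) ** 2 * neighbor_bg)
    num_boundary = boundary_mask.sum()
    return -best[boundary_mask].sum() / max(num_boundary, 1)
\end{verbatim}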
\subsection{Back-projection}
\label{post_proc}
During the projection of the point cloud, different points may be assigned to the same pixel in the projected image, which leads to the assignment of the same label to all overlapping points. Moreover, due to the downsampling operations in the network, the convolutions produce blurry outputs in the decoder, which leads to leaking of the labels at the boundaries of the instances during back-projection to the 3D domain. We use a k-nearest neighbor (kNN) based back-projection scheme~\cite{milioto2019rangenet++} to mitigate these issues. For every point in the point cloud, the $k$ nearest neighbors of the point vote for its semantic and instance labels. We obtain the labels of the selected neighbors from the corresponding pixels in the projected output prediction. To compute the nearest neighbors, we search within a pre-defined window around the pixel in the projected range image, out of which we select the $k$ nearest points based on the absolute differences in their range values. The entire post-processing is GPU optimized and is only employed during inference.
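The following CPU sketch illustrates the voting scheme (the actual implementation is GPU optimized); the window size and $k$ are illustrative.

\begin{verbatim}
import numpy as np

def knn_backproject(point_ranges, point_vu, pred_labels, proj_range,
                    k=5, win=7):
    """Sketch of kNN-based back-projection of 2D predictions to 3D points.

    For each point, with projected pixel (v, u) and range r, gather the
    candidate pixels in a (win x win) window, keep the k pixels with the
    smallest absolute range difference, and take a majority vote over
    their predicted labels.
    """
    h, w = proj_range.shape
    half = win // 2
    out = np.empty(len(point_ranges), dtype=pred_labels.dtype)
    for idx, ((v, u), r) in enumerate(zip(point_vu, point_ranges)):
        rows = np.clip(np.arange(v - half, v + half + 1), 0, h - 1)
        cols = np.clip(np.arange(u - half, u + half + 1), 0, w - 1)
        rr, cc = np.meshgrid(rows, cols, indexing="ij")
        diff = np.abs(proj_range[rr, cc] - r).ravel()
        nearest = np.argsort(diff)[:k]
        votes = pred_labels[rr, cc].ravel()[nearest]
        values, counts = np.unique(votes, return_counts=True)
        out[idx] = values[np.argmax(counts)]
    return out
\end{verbatim}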
\subsection{Pseudo Labeling}
Due to the arduous task of annotating point-wise panoptic segmentation labels in point clouds and the effectiveness of pseudo labeling in the image domain, we explore its utility for LiDAR panoptic segmentation. We formulate a novel heuristic to improve the performance of EfficientLPS without requiring any additional manual human annotations or model augmentations. We make the following assumptions while formulating the proposed heuristic. First, the unlabeled dataset is drawn from the same data distribution as the labeled dataset. Second, the model which is used to generate the labels for the unlabeled dataset and the model learning from these generated pseudo labels are the same, i.e., both models have the same representation capacity. Finally, the precision and recall of the model generating the pseudo labels are tunable during inference time via adjustment of one or more hyperparameters.

\begin{figure*}
\centering
\footnotesize
{\renewcommand{\arraystretch}{1}
\begin{tabular}{P{7.5cm}P{7.5cm}}
SemanticKITTI & nuScenes \\ \\
\includegraphics[width=\linewidth]{images/experimental/kitti3d.png} & \includegraphics[width=\linewidth]{images/experimental/nuscenes.png} \\ \\
\includegraphics[width=\linewidth]{images/experimental/kittiside.png} & \includegraphics[width=\linewidth]{images/experimental/nuscene_side.png} \\
\end{tabular}}
\caption[datasets visualization]{Example groundtruth visualization from the SemanticKITTI and nuScenes datasets. The SemanticKITTI dataset was collected using a 64-beam LiDAR, hence it provides a fairly dense representation of the environment in comparison to the nuScenes dataset which was collected using a 32-beam LiDAR.}
\label{fig:datasets}
\end{figure*}

The first assumption is dataset-specific and ensures high-quality pseudo labels, since a model trained on the same distribution as the unseen dataset tends to generalize better than one trained on a different distribution. In our case, we choose the KITTI RAW dataset~\cite{Geiger2013IJRR} as the unlabeled dataset, since SemanticKITTI~\cite{behley2019iccv}, the labeled dataset, is a subset of the former. We use the same EfficientLPS model to generate the pseudo labels with the goal of improving its performance, hence satisfying the second assumption. This assumption ensures that the label generating model can provide meaningful pseudo labels and that the learning model has the representational capacity to capture them. To satisfy the third condition, the performance of EfficientLPS can be tuned using softmax confidence thresholding, by tuning the thresholds of the panoptic fusion module in EfficientLPS, or a combination of both. This condition is required to design an effective regularization strategy for pseudo label generation in order to minimize the confirmation bias while learning.

We first train EfficientLPS on the labeled dataset and use this model to generate pseudo labels for the unlabeled dataset. We refer to this model as the Pseudo Label Generator (PLG), and collectively refer to the parameters that can be used to control the performance of the model as control parameters. In the next step, we use grid search to find the optimal control parameter combination that maximizes the ratio $(TP-FP)/TP$ while the PQ score remains above the $PQ_{cutoff}$ parameter on the validation set of the labeled dataset. Here, $TP$ denotes the true positives and $FP$ the false positives, computed over the validation set. $PQ_{cutoff}$ is the minimum value of $PQ$ to which the performance of the PLG is allowed to drop. By maximizing the aforementioned ratio, we make the generated pseudo labels more accurate, with relatively fewer false positives and more true positives. Subsequently, we use the optimal control parameter setting to generate pseudo labels with the PLG. As a post-processing step, we then discard all instances in the pseudo labels with fewer points than a pre-defined limit $P_{limit}$. This improves the quality of the generated pseudo labels by discarding incorrect predictions that were made due to the lack of sufficient points. We then train EfficientLPS from scratch on this pseudo labeled dataset, followed by fine-tuning the model on the labeled dataset to improve the overall performance. We comprehensively evaluate the performance of our proposed heuristic in the ablation study presented in \secref{sec:psuedolabel}.
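The control-parameter selection can be sketched as the following grid search; evaluate is a hypothetical callback that runs the PLG on the labeled validation set and returns the resulting (TP, FP, PQ) for a given parameter setting.

\begin{verbatim}
import itertools

def select_control_parameters(evaluate, grid, pq_cutoff):
    """Sketch of the control-parameter grid search for the PLG.

    grid maps control parameter names to lists of candidate values. We
    maximize (TP - FP) / TP subject to the PQ score staying above
    pq_cutoff on the validation set.
    """
    best_params, best_ratio = None, float("-inf")
    names = sorted(grid)
    for values in itertools.product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        tp, fp, pq = evaluate(params)
        if pq < pq_cutoff or tp == 0:
            continue  # performance dropped too far, or degenerate case
        ratio = (tp - fp) / tp
        if ratio > best_ratio:
            best_ratio, best_params = ratio, params
    return best_params
\end{verbatim}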
While the vertical field of view of the LiDAR that was used in nuScenes is slightly larger than that of SemanticKITTI, the number of vertical lines is only 32, compared to 64 in SemanticKITTI. \figref{fig:datasets} shows example groundtruth panoptic segmentation of point clouds from both datasets. \subsection{Training Protocol} \label{sec:training} \begin{table*} \begin{center} \caption{Comparison of LiDAR panoptic segmentation performance on the SemanticKITTI test set. All scores are in [\%].} \label{tab:testKITTI} \footnotesize \begin{tabular}{l|cccc|ccc|ccc|c} \toprule Method & PQ & PQ$^\dagger$ & SQ & RQ & PQ\textsuperscript{Th} & SQ\textsuperscript{Th} & RQ\textsuperscript{Th} & PQ\textsuperscript{St} & SQ\textsuperscript{St} & RQ\textsuperscript{St} & mIoU \\ \midrule RangeNet++~\cite{milioto2019rangenet++} + PointPillars~\cite{lang2019pointpillars} & $37.1$ & $45.9$ & $75.9$ & $47.0$ & $20.2$ & $75.2$ & $25.2$ & $49.3$ & $76.5$ & $62.8$ & $52.4$ \\ KPConv~\cite{thomas2019kpconv} + PointPillars~\cite{lang2019pointpillars} & $44.5$ & $52.5$ & $80.0$ & $54.4$ & $32.7$ & $81.5$ & $38.7$ & $53.1$ & $79.0$ & $65.9$ & $58.8$ \\ LPSAD~\cite{milioto2020lidar} & $38.0$ & $47.0$ & $76.5$ & $48.2$ & $25.6$ & $76.8$ & $31.8$ & $47.1$ & $76.2$ & $60.1$ & $50.9$ \\ PanopticTrackNet~\cite{hurtado2020mopt} & $43.1$ & $50.7$ & $78.8$ & $53.9$ & $28.6$ & $80.4$ & $35.5$ & $53.6$ & $77.7$ & $67.3$ & $52.6$ \\ Panoster~\cite{gasperini2020panoster} & $52.7$ & $59.9$ & $80.7$ & $64.1$ & $49.4$ & $83.3$ & $58.5$ & $55.1$ & $78.8$ & $68.2$ & $59.9$ \\ \midrule EfficientLPS (ours) & $\mathbf{57.4}$ & $\mathbf{63.2}$ & $\mathbf{83.0}$ & $\mathbf{68.7}$ & $\mathbf{53.1}$ & $\mathbf{87.8}$ & $\mathbf{60.5}$ & $\mathbf{60.5}$ & $\mathbf{79.5}$ & $\mathbf{74.6}$ & $\mathbf{61.4}$\\ \bottomrule \end{tabular} \end{center} \end{table*} \begin{table*} \setlength\tabcolsep{3.7pt} \begin{center} \caption{Class-wise PQ scores on the SemanticKITTI test set. R.Net, P.P., and KPC refer to RangeNet++, PointPillars, and KPConv, respectively.
All scores are in [\%].} \label{tab:test_pq} \footnotesize \begin{tabular}{l|ccccccccccccccccccc|c} \toprule Method & \begin{sideways}car\end{sideways} & \begin{sideways}truck\end{sideways} & \begin{sideways}bicycle\end{sideways} & \begin{sideways}motorcycle\end{sideways} & \begin{sideways}other vehicle\end{sideways} & \begin{sideways}person\end{sideways} & \begin{sideways}bicyclist\end{sideways} & \begin{sideways}motorcyclist\end{sideways} & \begin{sideways}road\end{sideways} & \begin{sideways}sidewalk\end{sideways} & \begin{sideways}parking\end{sideways} & \begin{sideways}other ground\end{sideways} & \begin{sideways}building\end{sideways} & \begin{sideways}vegetation\end{sideways} & \begin{sideways}trunk\end{sideways} & \begin{sideways}terrain\end{sideways} & \begin{sideways}fence\end{sideways} & \begin{sideways}pole\end{sideways} & \begin{sideways}traffic sign\end{sideways} & PQ \\ \midrule R.Net~\cite{milioto2019rangenet++}+ P.P.~\cite{lang2019pointpillars} & 66.9 & 6.7 & 3.1 & 16.2 & 8.8 & 14.6 & 31.8 & 13.5 & 90.6 & 63.2 & 41.3 & 6.7 & 79.2 & 71.2 & 34.6 & 37.4 & 38.2 & 32.8 & 47.4 & 37.1\\ KPC~\cite{thomas2019kpconv} + P.P.~\cite{lang2019pointpillars} & 72.5 & 17.2 & 9.2 & 30.8 & 19.6 & 29.9 & 59.4 & 22.8 & 84.6 & 60.1 & 34.1 & 8.8 & 80.7 & 77.6 & 53.9 & 42.2 & 49.0 & 46.2 & 46.8 & 44.5\\ LPSAD~\cite{milioto2020lidar} & 76.5 & 7.1 & 6.1 & 23.9 & 14.8 & 29.4 & 29.7 & 17.2 & 90.4 & 60.1 & 34.6 & 5.8 & 76.0 & 69.5 & 30.3 & 36.8 & 37.3 & 31.3 & 45.8 & 38.0 \\ PanopticTrackNet~\cite{hurtado2020mopt} &70.8 & 14.4 & 17.8 & 20.9 & 27.4 &34.2 & 35.4& 7.9 & 91.2 & 66.1 & 50.3 & 10.5 & 81.8 & 75.9 & 42.0 & 44.3 & 42.9 & 33.4 & 51.1 & 43.1 \\ Panoster~\cite{gasperini2020panoster} & 84.0 & 18.5 & 36.4 & 44.7 & 30.1 & 61.1 & \textbf{69.2} & $\mathbf{51.1}$ & 90.2 & 62.5 & 34.5 & 6.1 & 82.0 & 77.7 & $\mathbf{55.7}$ & 41.2 & 48.0 & $\mathbf{48.9}$ & 59.8 & 52.7\\ \midrule EfficientLPS (ours) & $\mathbf{85.7}$ & $\mathbf{30.3}$ & $\mathbf{37.2}$ & $\mathbf{47.7}$ & $\mathbf{43.2}$ & $\mathbf{70.1}$ & 66.0 & 44.7 & $\mathbf{91.1}$ & $\mathbf{71.1}$ & $\mathbf{55.3}$ & $\mathbf{16.3}$ & $\mathbf{87.9}$ & $\mathbf{80.6}$ & 52.4 & $\mathbf{47.1}$ & $\mathbf{53.0}$ & 48.8 & $\mathbf{61.6}$ & $\mathbf{57.4}$ \\ \bottomrule \end{tabular} \end{center} \end{table*} We train our network on projected point clouds of $4096\times256$ resolution. We use bilinear interpolation on the projections obtained from scan unfolding and nearest neighbor interpolation on the ground truth point clouds. We initialize the main encoder of our EfficientLPS architecture with weights from the EfficientNet-B5 model pre-trained on the ImageNet~\cite{deng2009imagenet} dataset. Furthermore, we initialize the weights of the iABNsync layers to 1 and use Xavier initialization for the other layers. We also employ zero constant initialization for the biases and set the slope of Leaky ReLU to 0.01. We use the same hyperparameters for the instance head as Mask~R-CNN~\cite{he2017mask}. For the panoptic fusion module, we set $c_t = 0.5$, $o_t = 0.5$ and $min_{sa} = 128$. We use Stochastic Gradient Descent (SGD) with a momentum of $0.9$ for training our models. We employ a multi-step learning rate schedule, where we start with an initial base learning rate of $0.07$ and reduce it by a factor of $10$ after 16,000 and 22,000 iterations. We train our models for a total of 25,000 iterations with a batch size of 16 on 8 NVIDIA TITAN RTX GPUs. 
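In PyTorch, this optimization setup corresponds to the following sketch, where \texttt{model}, \texttt{training\_step}, and \texttt{data\_loader} are placeholders for our network and input pipeline:
\begin{verbatim}
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.07, momentum=0.9)
# Reduce the learning rate by a factor of 10 after 16k and 22k iterations.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[16000, 22000], gamma=0.1)

for iteration in range(25000):
    loss = training_step(model, next(data_loader))  # placeholder step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # stepped per iteration, not per epoch
\end{verbatim}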
We first train the model on the pseudo labeled dataset until the first reduction in learning rate by a factor of $10$, followed by continuing the training on the labeled dataset. \subsection{Comparison with the State-of-the-Art} \label{sec:comparisonSOTA} In this section, we evaluate the performance of EfficientLPS and compare with state-of-the-art methods for panoptic segmentation of LiDAR point clouds. \noindent{\textbf{SemanticKITTI:}} We compare with three state-of-the-art methods, LPSAD~\cite{milioto2020lidar}, PanopticTrackNet~\cite{hurtado2020mopt}, and Panoster~\cite{gasperini2020panoster}, as well as the two baselines, (RangeNet++~\cite{milioto2019rangenet++} + PointPillars~\cite{lang2019pointpillars}), and (KPConv~\cite{thomas2019kpconv} + PointPillars~\cite{lang2019pointpillars}). \tabref{tab:testKITTI} presents the results on the SemanticKITTI test set which was evaluated by the benchmark server. Our proposed EfficientLPS achieves a PQ score of $57.4\%$, which is an improvement of $4.7\%$ over the previous state of the art Panoster. EfficientLPS also outperforms all the existing methods in all the metrics and sets the new state-of-the-art on this benchmark. The higher overall SQ score of EfficientLPS can be primarily attributed to the proposed panoptic periphery loss function, which improves the segmentation quality of 'thing' class objects by refining their boundaries. This is evident from the increase of $4.5\%$ in SQ$^{th}$ score compared to Panoster. Moreover, the proposed distance-dependent semantic head enables the recognition of objects in a scale-invariant manner by incorporating range encoded information to achieve a higher recognition quality, especially for the 'stuff' classes. This yields an improvement of $6.4\%$ in the RQ$^{st}$ score, which enables it to achieve the best overall RQ score of $68.7\%$. Additionally, the backbone of EfficientLPS equipped with the proposed PCM module which models the geometric transformations of different objects and the RFPN which learns spatially consistent features, contributes to the improvement of $3.7\%$ in PQ$^{Th}$, $5.4\%$ in PQ$^{St}$ and $1.5\%$ in mIoU scores. \begin{table*} \begin{center} \caption{Comparison of LiDAR panoptic segmentation performance on SemanticKITTI validation set. 
All scores are in [\%].} \label{tab:validationKITTI} \footnotesize \begin{tabular}{l|cccc|ccc|ccc|c} \toprule Method & PQ & PQ$^\dagger$ & SQ & RQ & PQ\textsuperscript{Th} & SQ\textsuperscript{Th} & RQ\textsuperscript{Th} & PQ\textsuperscript{St} & SQ\textsuperscript{St} & RQ\textsuperscript{St} & mIoU \\ \midrule RangeNet++~\cite{milioto2019rangenet++} + PointPillars~\cite{lang2019pointpillars} & $36.5$ & - & $73.0$ & $44.9$ & $19.6$ & $69.2$ & $24.9$ & $47.1$ & $75.8$ & $59.4$ & $52.8$ \\ KPConv~\cite{thomas2019kpconv} + PointPillars~\cite{lang2019pointpillars} & $41.1$ & - & $74.3$ & $50.3$ & $28.9$ & $69.8$ & $33.1$ & $50.1$ & $77.6$ & $62.8$ & $56.6$ \\ LPSAD~\cite{milioto2020lidar} & $36.5$ & $46.1$ & - & - & - & - & $28.2$ & - & - & - & $50.7$ \\ PanopticTrackNet~\cite{hurtado2020mopt} & $40.0$ & - & $73.0$ & $48.3$ & $29.9$ & $76.8$ & $33.6$ & $47.4$ & $70.3$ & $59.1$ & $53.8$ \\ Panoster~\cite{gasperini2020panoster} & $55.6$ & - & $79.9$ & $66.8$ & $56.6$ & - & $65.8$ & - & - & - & $61.1$ \\ \midrule EfficientLPS (ours) & $\mathbf{59.2}$ & $\mathbf{65.1}$ & $\mathbf{75.0}$ & $\mathbf{69.8}$ & $\mathbf{58.0}$ & $\mathbf{78.0}$ & $\mathbf{68.2}$ & $\mathbf{60.9}$ & $\mathbf{72.8}$ & $\mathbf{71.0}$ & $\mathbf{64.9}$\\ \bottomrule \end{tabular} \end{center} \end{table*} In \tabref{tab:test_pq}, we present a comparison of the class-wise PQ scores on the SemanticKITTI test set. EfficientLPS achieves the highest PQ score for most of the 'stuff' classes, with the exception of the \textit{trunk} and \textit{pole} classes, which are outperformed by Panoster. This shows that incorporating range encoded features into the semantic head helps achieve a better overall semantic segmentation performance. The resulting spatial awareness is significantly beneficial for the \textit{other-ground} class, which every method struggles to classify due to the presence of the more dominating \textit{road} and \textit{sidewalk} classes. We also observe a similar effect for the \textit{parking} class. EfficientLPS achieves an improvement in the PQ score of $20.8\%$ for \textit{parking} and $10.2\%$ for \textit{other-ground} in comparison to Panoster. EfficientLPS also achieves the best performance for all 'thing' classes, with the exception of the \textit{motorcyclist} and \textit{bicyclist} classes. This is due to the fact that both of these classes share almost the same properties and are relatively small objects that are represented by only a few points. These objects also always coexist with other classes such as \textit{bicycle} and \textit{motorcycle}, and are adversely affected during the projection of the point cloud, which renders them very close to other objects. Hence, the point-based backbone KPConv and the clustering used by Panoster provide an advantage in this case. Nevertheless, EfficientLPS outperforms the other methods in the PQ score for all the other classes, with large margins of $13.1\%$ for \textit{other vehicle}, $11.8\%$ for \textit{truck}, and $9\%$ for \textit{person}. Therefore, the overall state-of-the-art performance obtained by our network is not just due to the improvement in scores for a particular object class, but rather the result of a collective improvement across different semantic object classes with a variety of structural properties. \tabref{tab:validationKITTI} presents the results on the SemanticKITTI validation set. `-' indicates that the corresponding methods do not report the specific metric.
We observe a similar trend here as on the test set, where EfficientLPS outperforms all the other methods in all of the metrics. It achieves a PQ score of $59.2\%$, and SQ, RQ, and mIoU scores of $75.0\%$, $69.8\%$, and $64.9\%$, respectively. \noindent{\textbf{nuScenes:}} As there are no existing panoptic segmentation methods that have been benchmarked on the nuScenes dataset, we train two baseline methods by combining individual semantic and instance segmentation models, namely (KPConv~\cite{thomas2019kpconv} + Mask~R-CNN~\cite{he2017mask}) and (RangeNet++~\cite{milioto2019rangenet++} + Mask~R-CNN~\cite{he2017mask}), as well as the established PanopticTrackNet~\cite{hurtado2020mopt}, which is a projection-based panoptic segmentation model for LiDAR point clouds. For all these models, we use the original code provided by the authors and optimize the hyperparameters to the best of our ability. We train Mask~R-CNN on the projected point cloud images using the approach described in \secref{sec:scanUnfold} and project the predictions back to the 3D domain using the post-processing described in \secref{post_proc}. We also make these trained baselines publicly available. \begin{table*} \begin{center} \caption{Comparison of LiDAR panoptic segmentation performance on the nuScenes validation set. All scores are in [\%].} \label{tab:Nuscenes} \footnotesize \begin{tabular}{l|cccc|ccc|ccc|c} \toprule Method & PQ & PQ$^\dagger$ & SQ & RQ & PQ\textsuperscript{Th} & SQ\textsuperscript{Th} & RQ\textsuperscript{Th} & PQ\textsuperscript{St} & SQ\textsuperscript{St} & RQ\textsuperscript{St} & mIoU \\ \midrule RangeNet++~\cite{milioto2019rangenet++} + Mask~R-CNN~\cite{he2017mask} & 46.6 & 52.6 & 79.5 & 58.4 & 39.9 & 80.5 & 52.1 & 57.8 & 77.9 & 68.8 & 56.6 \\ PanopticTrackNet~\cite{hurtado2020mopt} & 51.4 & 56.2 & 80.2 & 63.3 & 45.8 & 81.4 & 55.9 & 60.4 & 78.3 & 75.5 & 58.0 \\ KPConv~\cite{thomas2019kpconv} + Mask~R-CNN~\cite{he2017mask} & 51.5 & 56.8 & 80.3 & 63.5 & 44.6 & 81.3 & 53.9 & 62.9 & 78.8 & 79.4 & 58.9 \\ \midrule EfficientLPS (ours) & \textbf{62.0} & \textbf{65.6} & \textbf{83.4} & \textbf{73.9} & \textbf{56.8} & \textbf{83.2} & \textbf{68.0} & \textbf{70.6} & \textbf{83.8} & \textbf{83.6} & \textbf{65.6} \\ \bottomrule \end{tabular} \end{center} \end{table*} \begin{table*} \setlength\tabcolsep{3.7pt} \begin{center} \caption{Class-wise results on the nuScenes validation set.
All scores are in [\%].} \label{tab:nuScenesClass} \begin{tabular}{l|cccccccccccccccc|c} \toprule Method & \begin{sideways}barrier\end{sideways} & \begin{sideways}bicycle\end{sideways} & \begin{sideways}bus\end{sideways} & \begin{sideways}car\end{sideways} & \begin{sideways}cvehicle\end{sideways} & \begin{sideways}motorcycle\end{sideways} & \begin{sideways}pedestrian\end{sideways} & \begin{sideways}traffic cone\end{sideways} & \begin{sideways}trailer\end{sideways} & \begin{sideways}truck\end{sideways} & \begin{sideways}driveable\end{sideways} & \begin{sideways}other flat\end{sideways} & \begin{sideways}sidewalk\end{sideways} & \begin{sideways}terrain\end{sideways} & \begin{sideways}man-made\end{sideways} & \begin{sideways}vegetation\end{sideways} & PQ \\ \midrule RangeNet++~\cite{milioto2019rangenet++} + Mask R-CNN~\cite{he2017mask} & 40.3 & 25.7 & 51.7 & 62.5 & 14.6 & 48.3 & 38.8 & 41.8 & 32.7 & 43.0 & 77.1 & 41.5 & 59.2 & 42.1 & 58.9 & 67.9 & 46.6 \\ PanopticTrackNet~\cite{hurtado2020mopt} & 47.1 & 32.9 & 57.9 & 66.3 & 22.8 & 51.1 & 42.8 & 46.8 & 38.9 & 51.0 & 81.5 & 42.3 & 61.8 & 45.1 & 60.9 & 70.9 & 51.4 \\ KPConv~\cite{thomas2019kpconv} + Mask R-CNN~\cite{he2017mask} & 46.7 & 31.5 & 56.8 & 65.7 & 21.9 & 50.4 & 41.6 & 44.9 & 37.6 & 49.1 & 83.5 & 43.1 & 63.5 & 48.6 & 73.9 & 71.5 & 51.5 \\ \midrule EfficientLPS (ours) & \textbf{56.8} & \textbf{37.8} & \textbf{52.4} & \textbf{75.6} & \textbf{32.1} & \textbf{65.1} & \textbf{74.9} & \textbf{73.5} & \textbf{49.9} & \textbf{49.7} & \textbf{95.2} & \textbf{43.9} & \textbf{67.5} & \textbf{52.8} & \textbf{81.8} & \textbf{82.4} & \textbf{62.0} \\ \bottomrule \end{tabular} \end{center} \end{table*} \tabref{tab:Nuscenes} presents the results on the nuScenes validation set. Among the baselines, (KPConv + Mask~R-CNN) achieves the highest PQ score of $51.5\%$, closely followed by PanopticTrackNet, which achieves a PQ score of $51.4\%$. (KPConv + Mask~R-CNN) achieves a higher PQ$^{St}$ score than PanopticTrackNet, which demonstrates the ability of point-based methods to perform better at semantic segmentation. During the projection of points, objects that are distant from each other in the 3D domain can end up close to each other in the projected 2D domain. Hence, projection-based methods that solely operate in the 2D domain find it hard to distinguish between them. The proposed distance-dependent semantic head in EfficientLPS exploits range encoded features and achieves an improvement of $7.7\%$ in the PQ$^{St}$ score over (KPConv + Mask~R-CNN). On the other hand, the top-down architecture of PanopticTrackNet, which has a dedicated instance segmentation head, achieves a better performance in segmenting instances of 'thing' class objects, thereby achieving a higher PQ$^{Th}$ score than (KPConv + Mask~R-CNN). The RFPN module along with the proposed panoptic periphery loss that we use for training EfficientLPS enables it to achieve an improvement of $11.0\%$ in the PQ$^{Th}$ score over PanopticTrackNet. Overall, EfficientLPS achieves a PQ score of $62.0\%$, an SQ score of $83.4\%$, an RQ score of $73.9\%$, and an mIoU of $65.6\%$, outperforming all the methods in each of the metrics and setting the new state of the art on the nuScenes dataset. The consistent state-of-the-art performance, even on the sparse nuScenes dataset, demonstrates the effectiveness and the generalization ability of our proposed modules in tackling different challenges such as distance-dependent sparsity, severe occlusions, large scale-variations, and re-projection errors.
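For reference, the PQ metric that we report throughout is computed per class from the IoU-matched segment pairs and decomposes into the SQ and RQ terms as~\cite{kirillov2019panoptic}
\begin{equation*}
PQ = \underbrace{\frac{\sum_{(p,g) \in \mathit{TP}} \mathit{IoU}(p,g)}{|\mathit{TP}|}}_{\text{SQ}} \times \underbrace{\frac{|\mathit{TP}|}{|\mathit{TP}| + \frac{1}{2}|\mathit{FP}| + \frac{1}{2}|\mathit{FN}|}}_{\text{RQ}},
\end{equation*}
where a predicted segment $p$ and a ground truth segment $g$ are matched if their IoU exceeds $0.5$.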
\begin{table*} \centering \caption{Ablative analysis of the various proposed architectural components in EfficientLPS. The model variants consist of the marked (\cmark) modules in their respective columns, with Per. Loss denoting the Panoptic Periphery Loss and P. Labels denoting Pseudo Labels. The results are reported on the SemanticKITTI validation set.} \label{tab:ablation} \begin{tabular}{@{}c|ccccc|ccc|c|c|c|c@{}} \toprule Model & PCM & RFPN & RDPC & Panoptic & Pseudo & PQ & SQ & RQ & PQ\textsuperscript{St}& PQ\textsuperscript{Th} & mIoU & \myworries{Runtime} \\ Variant & & & & Periphery Loss & Labels & (\%) & (\%) & (\%) & (\%) & (\%) & (\%) & \myworries{$(\si{\milli\second})$}\\ \midrule M1 & \xmark & \xmark & \xmark & \xmark & \xmark & $53.0$ & $73.1$ & $63.9$ & $53.3$ & $52.5$ & $58.6$ & \myworries{$153.84$} \\ M2 & \cmark & \xmark & \xmark & \xmark & \xmark & $53.9$ & $73.9$ & $64.4$ & $54.3$ & $53.3$ & $59.8$ & \myworries{$175.43$}\\ M3 & \cmark & \cmark & \xmark & \xmark & \xmark & $55.7$ & $74.4$ & $65.8$ & $55.7$ & $55.8$ & $60.9$ & \myworries{$192.31$} \\ M4 & \cmark & \cmark & \cmark & \xmark & \xmark & $56.6$ & $75.0$ & $66.8$ & $56.4$ & $56.9$ & $62.4$ & \myworries{$212.76$} \\ M5 & \cmark & \cmark & \cmark & \cmark & \xmark & $57.4$ & $75.0$ & $67.6$ & $56.5$ & $58.7$ & $62.5$ & \myworries{$212.76$} \\ M6 & \cmark & \cmark & \cmark & \cmark & \cmark & \textbf{59.2} & \textbf{75.0} & \textbf{69.8} & \textbf{58.0} & \textbf{60.9} & \textbf{64.9} & \myworries{$212.76$} \\ \bottomrule \end{tabular} \end{table*} In \tabref{tab:nuScenesClass}, we present the class-wise PQ scores on the nuScenes validation set. The sparsity of the point clouds especially affects the segmentation of smaller objects such as \textit{pedestrian}, \textit{bicycle}, \textit{motorcycle}, \textit{barrier} and \textit{traffic cone}. The operation of the proposed proximity convolution module is independent of the sparsity of the point cloud and models the geometric transformations of objects based only on the proximity between the points. This is one of the key factors that enables EfficientLPS to achieve a higher performance for all 'thing' object classes, with an improvement of $12.9\%$ for \textit{pedestrian}, $8.4\%$ for \textit{bicycle}, and $13.2\%$ for \textit{motorcycle} compared to PanopticTrackNet. The sparse nature of point clouds also makes it harder to recognize and distinguish between different semantic object classes, especially for the object classes that often appear close to each other such as \textit{driveable surface}, \textit{sidewalk} and \textit{terrain}. The explicit incorporation of range encoded information in the RFPN and the distance-dependent semantic head introduces spatial consistency in the learned features. This especially helps the 'stuff' object classes, leading to a PQ improvement of $14.0\%$ for \textit{driveable surface}, $7.2\%$ for \textit{terrain}, and $7.0\%$ for \textit{sidewalk} compared to (KPConv + Mask~R-CNN). Overall, the superior performance obtained for all classes demonstrates the efficacy of EfficientLPS in accurately segmenting different semantic object classes, even for very sparse point clouds. \subsection{Ablation Studies} \label{sec:ablation} In this section, we present extensive ablation studies on the various proposed architectural components in EfficientLPS to validate our design choices and study the impact of each module on the overall performance of our architecture.
We begin with a comprehensive analysis of the EfficientLPS architecture that shows the effect of our proposed proximity convolution module, range-aware FPN, distance-dependent semantic head, panoptic periphery loss, and pseudo labeling framework on the overall performance of our network. We then study the influence of the search grid and kernel size parameters on the performance of the proximity convolution module. Subsequently, we study the effect of the REN and the different approaches to incorporate it with the 2-way FPN in our proposed range-aware FPN. We then present a detailed analysis of the distance-dependent semantic head to show the impact of different architectural design choices. Furthermore, we quantitatively show the boundary refinement achieved with the panoptic periphery loss function using the border IoU metric and compare the performance of the proposed pseudo labeling heuristic to a naive heuristic. Finally, to demonstrate the generalization ability of our proposed modules, we present results obtained by incorporating our architectural components into other well-known top-down panoptic segmentation networks in a straightforward manner. \subsubsection{Comprehensive Analysis of EfficientLPS} \label{sec:detailedAnalysis} In this section, we study the improvement due to the incorporation of the various architectural components proposed in EfficientLPS. The results of this experiment are shown in \tabref{tab:ablation}. \figref{fig:ablation} also shows the improvement in performance for each of the models described in this section. We begin with the base model M1 that consists of EfficientNet-B5 followed by the 2-way FPN as the shared backbone, with the semantic head from \cite{mohan2020efficientps} and a Mask~R-CNN based instance head. Subsequently, the logits from both heads are fused in the panoptic fusion module at inference time. We train this model with an input resolution of $4096\times256$ as it allows the anchor scales defined in~\cite{he2017mask} to be used directly. We use scan unfolding for the point cloud projection and the kNN-based post-processing for re-projecting the predicted labels back to the 3D domain. Further, we employ the \ls loss in addition to the weighted per-pixel log loss while training. This model M1 achieves a $PQ$ score of $53.0\%$ with $PQ^{st}$ and $PQ^{th}$ scores of $53.3\%$ and $52.5\%$, respectively. \myworries{This model has a runtime of $\SI{153.84}{\milli\second}$. To compute the runtime, we use a single NVIDIA Titan RTX GPU and an Intel CPU. We average over 1000 runs on the same LiDAR point cloud. In the case of parallel components in the architecture, the maximum runtime among all the components contributes to the total runtime.} In the subsequent model M2, we incorporate our proposed proximity convolution module, which achieves a $PQ$ score of $53.9\%$ and an $mIoU$ of $59.8\%$, constituting an improvement of $0.9\%$ and $1.2\%$, respectively, over the model M1. This improvement can be attributed to the enhanced transformation modeling capability imparted by the incorporation of the proximity convolution module, as shown qualitatively in \figref{fig:visual_ablation}~(a). The model M1 fails to recognize and segment far-away objects (the motorcyclist in the figure) as the points become more sparse with increasing distance. In contrast, the model M2 is able to accurately capture the distant objects. \myworries{Additionally, M2 has a runtime of $\SI{175.43}{\milli\second}$}.
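This measurement protocol can be reproduced with a simple harness of the following form (a sketch; the model and the input scan are assumed to already reside on the GPU):
\begin{verbatim}
import time
import torch

@torch.no_grad()
def average_runtime_ms(model, point_cloud, runs=1000):
    # Average single-scan inference time; synchronize so that pending
    # asynchronous GPU work does not distort the measurement.
    model.eval()
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(runs):
        model(point_cloud)
    torch.cuda.synchronize()  # wait for the last forward pass
    return (time.perf_counter() - start) / runs * 1000.0
\end{verbatim}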
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{images/ablation/chart_2.png} \caption{Ablation analysis of the percentage change in the metrics upon incorporating the various architectural components shown in \tabref{tab:ablation}, where M(n)-M(n+1) denotes the \% improvement in the metrics that model M(n+1) achieves over model M(n).} \label{fig:ablation} \end{figure} \begin{figure*} \centering \footnotesize {\renewcommand{\arraystretch}{1} \begin{tabular}{P{0.1cm}P{5.5cm}P{5.5cm}P{5.5cm}} & \raisebox{-0.4\height}{Ground Truth} & \raisebox{-0.4\height}{Model MX}& \raisebox{-0.4\height}{Model MY} \\ {\rotatebox[origin=c]{90}{(a) X=1, Y=2}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/gt_666.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/a_666.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/b_666.png}} \\ \\ {\rotatebox[origin=c]{90}{(b) X=2, Y=3}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/gt_854.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/a_854.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/b_854.png}} \\ \\ {\rotatebox[origin=c]{90}{(c) X=3, Y=4}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/gt_2436.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/a_2436.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/b_2436.png}} \\ \\ {\rotatebox[origin=c]{90}{(d) X=4, Y=5}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/gt_660.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/a_660.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/b_660.png}} \\ \end{tabular}} \caption{Qualitative comparison of the different models described in \tabref{tab:ablation}. The model numbers denoted by $X$ and $Y$ are shown in the first column. In Fig.~(a), the incorporation of the proximity convolution module in model $M2$ enables accurate segmentation of small and distant objects such as the motorcyclist in the image. Fig.~(b) compares model $M2$ with $M3$, which incorporates the range-aware FPN, enabling it to segment occluded objects such as the van in blue which is occluded by the surrounding vegetation. Fig.~(c) shows the performance improvement due to the new distance-dependent semantic head incorporated into model $M4$, which enables consistent segmentation of the sidewall shown in orange. Lastly, Fig.~(d) shows the improvement due to the panoptic periphery loss used in training model $M5$, which enables segmenting the entire instance of the human shown in red. \myworries{The enlarged images are taken from the corresponding projected range image for better visualization (PDF best viewed at $\times4$ zoom scale).}} \label{fig:visual_ablation} \end{figure*} The model M3 builds upon the model M2 by incorporating the range-aware FPN. This model achieves an improvement of $1.8\%$ in the $PQ$ score, which corresponds to an improvement of $1.4\%$ in the $PQ^{St}$ and $2.5\%$ in the $PQ^{Th}$ scores, \myworries{while having a runtime of $\SI{192.31}{\milli\second}$}. This performance improvement can be attributed to the distance-aware reinforcement of coherently aggregated fine and contextual features that enables accurate segmentation of small occluded objects. In \figref{fig:visual_ablation}~(b), the car is occluded by the vegetation.
The model M3 is able to successfully segment the occluded classes due to the incorporation of range-aware features, whereas the model M2 falsely predicts both the occluded and the occluder objects as either the foreground or the background class. The next model M4, which incorporates our distance-dependent semantic head into model M3, achieves an improvement of $1.5\%$ in $mIoU$ and $0.9\%$ in the $PQ$ score, \myworries{with a runtime of $\SI{212.76}{\milli\second}$}. In order to visualize the qualitative improvement of model M4 over model M3, we present an example in \figref{fig:visual_ablation}~(c), where the model M4 segments the \textit{wall} class much more accurately than the model M3. Our proposed semantic head effectively handles the scale variation depicted in the examples. It benefits from the adaptable receptive field which results from combining fixed multi-dilation rate convolutions with range-guided depth-wise atrous separable convolutions. Subsequently, the model M5 extends model M4 by using our proposed panoptic periphery loss during training. This model achieves a $PQ^{th}$ score of $58.7\%$, which is a substantial gain of $1.8\%$ over model M4. The model M5 achieves overall $PQ$ and $mIoU$ scores of $57.4\%$ and $62.5\%$, respectively. \figref{fig:visual_ablation}~(d) shows a qualitative comparison of the boundary refinement improvement. We further improve the performance of our network in model M6 by employing our pseudo labeling framework as described in \secref{sec:psuedolabel}. Training with an unlabeled dataset in combination with the labeled dataset leads to a large improvement of $2.4\%$ in $mIoU$ and $1.8\%$ in $PQ$ scores. The final model M6 achieves state-of-the-art performance on SemanticKITTI, with a $PQ$ score of $59.2\%$. \myworries{Moreover, we do not observe any changes in the runtime of models M5 and M6 since there is no additional architectural overhead in these models. Model M6 is essentially EfficientLPS, with a runtime of $\SI{212.76}{\milli\second}$}. In the following sections, we further analyze the individual architectural components of the M5 model in more detail. \subsubsection{Influence of Proximity Convolution Parameters} \label{sec:pil} \begin{table} \centering \caption{Effect of the search area and kernel size on the performance of the proximity convolution module. All scores are in [\%].} \label{tab:PAC ablation} \begin{tabular}{@{}c|cc|ccc|c@{}} \toprule Model & Search grid & Kernel size & PQ & PQ\textsuperscript{St} & PQ\textsuperscript{Th} & mIoU \\ \midrule M5$_1$ & - & $3\times3$ & $56.6$ & $55.7$ & $57.9$ & $61.8$ \\ M5$_2$ & $5\times5$ & $3\times3$ & \textbf{57.4} & $56.5$ & \textbf{58.7} & \textbf{62.5} \\ M5$_3$ & $7\times7$ & $3\times3$ & $57.1$ & \textbf{56.8} & $57.4$ & \textbf{62.5} \\ M5$_4$ & $7\times7$ & $5\times5$ & \textbf{57.4} & $56.5$ & $58.5$ & $62.2$ \\ \bottomrule \end{tabular} \end{table} The proposed proximity convolution (PC) is the core of the proximity convolution module. It employs the kNN algorithm to find the $k$ closest neighbors of each pixel in the projected image, where the value of $k$ is the product of the kernel size dimensions of the proximity convolution. The algorithm is also parameterized by the search grid size that defines a grid around a pixel within which the algorithm performs the search. \tabref{tab:PAC ablation} presents the results of the experiment in which we vary the search grid and kernel sizes in the M5 model from \secref{sec:detailedAnalysis}.
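As an illustration of the neighbor selection that is being ablated here, the following sketch gathers, for every pixel, the indices of the $k$ nearest neighbors within the search window; for simplicity, proximity is measured by the absolute range difference, which is a simplification of the actual criterion:
\begin{verbatim}
import torch
import torch.nn.functional as F

def proximity_neighbor_indices(range_img, search=5, kernel=3):
    # range_img: (B, 1, H, W). For every pixel, select the
    # k = kernel * kernel nearest neighbors inside a search x search
    # window; the proximity convolution then applies its weights to
    # these k neighbors instead of a fixed regular grid.
    k = kernel * kernel
    windows = F.unfold(range_img, search, padding=search // 2)
    center = range_img.flatten(2)       # (B, 1, H * W)
    dist = (windows - center).abs()     # (B, search * search, H * W)
    return dist.topk(k, dim=1, largest=False).indices
\end{verbatim}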
In the first model M5$_1$, we employ the standard convolution with a kernel size of $3\times3$ in the proximity convolution module of model M5. This model achieves a $PQ$ score of $56.6\%$ and an $mIoU$ of $61.8\%$. The second model M5$_2$ uses the PC with a search grid of $5\times5$ and a kernel size of $3\times3$. The model M5$_2$ with the proximity convolution achieves an improvement of $0.8\%$ in the $PQ$ score. Here, more than one-third of the search space can be captured by the convolution weights. In model M5$_3$, we increase the search grid of the PC from $5\times5$ to $7\times7$ while keeping the kernel size the same as in model M5$_2$. Here, almost one-fifth of the search space can be captured by the convolution weights. Hence, although the search area increases, there are not enough convolution weights to efficiently capture all the close neighbors. This leads to a decrease in performance of $0.3\%$ in the $PQ$ score. The model M5$_4$ increases the kernel size of the PC in model M5$_3$ to $5\times5$. This model performs similarly to the model M5$_2$, although the convolution weights can capture half of the search space. This result indicates that the top nine neighbors computed in model M5$_2$ were predominantly adequate to capture the underlying transformations. Therefore, in the proposed EfficientLPS architecture, we employ a search grid size of $5\times5$ and a proximity convolution kernel size of $3\times3$. \subsubsection{Evaluation of Range Encoder Network} \label{sec:RFPN} \begin{table} \centering \caption{Influence of the range encoder on the overall panoptic segmentation performance. All scores are in [\%].} \label{tab:FPN ablation} \begin{tabular}{@{}l|ccc|cc|c@{}}\toprule & PQ & PQ\textsuperscript{St} & PQ\textsuperscript{Th} & SQ & RQ & mIoU \\ \midrule Downsampled & $54.8$ & $54.6$ & $55.1$ & $74.1$ & $65.2$ & $59.5$ \\ Range-Encoded & \textbf{57.4} & \textbf{56.5} & \textbf{58.7} & \textbf{75.0} & \textbf{67.6} & \textbf{62.5} \\ \bottomrule \end{tabular} \end{table} The Range Encoder Network (REN) is a small CNN based on the EfficientNet~\cite{tan2019efficientnet} topology which encodes range information. We use compound scaling coefficients of $0.1$ for the width, $0.1$ for the depth, and $224$ for the resolution. Our proposed range-aware FPN and the distance-dependent semantic head both employ the multi-scale features encoded by the REN. The multi-scale features of the REN reinforce the coherently aggregated fine and contextual output features of the 2-way FPN with spatial awareness. The dilation offsets for the range-guided depth-wise atrous separable convolutions that are employed at different scales are computed in the distance-dependent semantic head. In this experiment, we show the importance of the REN features compared to the direct incorporation of the range channel in the EfficientLPS architecture. \tabref{tab:FPN ablation} presents the results of this experiment, in which we compare the performance of two models. The first model, referred to as the downsampled model, removes the REN from the model M5 and directly downsamples the range channel with downsampling factors of $\times4$, $\times8$, $\times16$, and $\times32$. These downsampled versions of the range channel are then fed to the relevant modules. This model achieves a $PQ$ score of $54.8\%$ and an $mIoU$ of $59.5\%$. The second model is essentially the model M5 that already has the REN as part of the network, and we refer to it as the range-encoded model.
This model achieves a $PQ$ score of $57.4\%$ and an $mIoU$ of $62.5\%$. The range-encoded model thus achieves an improvement in the $PQ$ and $mIoU$ scores of $2.6\%$ and $3.0\%$, respectively, demonstrating that direct downsampling is not sufficient to propagate the spatial information to the main network and that a dedicated encoder such as the REN is required. \subsubsection{Evaluation of Range-Aware FPN} \label{sec:RFPN_fusion} \begin{table} \centering \caption{Evaluation of different methods to incorporate the learned range features in the RFPN. All scores are in [\%].} \label{tab:fusion ablation} \begin{tabular}{@{}l|ccc|cc|c@{}} \toprule & PQ & PQ\textsuperscript{St} & PQ\textsuperscript{Th} & SQ & RQ & mIoU \\ \midrule Additive & $53.9$ & $53.7$ & $54.1$ & $74.0$ & $64.6$ & $60.8$ \\ Concatenative & $56.5$ & $55.6$ & $57.8$ & $74.8$ & $66.4$ & $61.6$ \\ Fusion & \textbf{57.4} & \textbf{56.5} & \textbf{58.7} & \textbf{75.0} & \textbf{67.6} & \textbf{62.5} \\ \bottomrule \end{tabular} \end{table} There are two main components of our proposed range-aware FPN: the REN and the fusion mechanism that incorporates the REN features into the FPN. In the previous section, we discussed the importance of the REN for the functioning of the range-aware FPN. In this section, we evaluate different fusion methods to incorporate the REN features into the FPN. \tabref{tab:fusion ablation} shows the results of this experiment on the M5 model from \secref{sec:detailedAnalysis}. We identify three major techniques to fuse features from multiple network streams: addition, concatenation, and feature fusion. For the additive model, the REN features at each scale are expanded to 256 channels and are summed with the corresponding output of the 2-way FPN to yield the output of the range-aware FPN. In the case of the concatenative model, the outputs of the REN and the 2-way FPN are concatenated at each scale, followed by two sequential $3\times3$ convolution layers to yield the final output of the range-aware FPN. Each of the convolution layers is followed by an iABNsync and a leaky ReLU activation. For the feature fusion model, we employ the mechanism from \eqref{eq:range_fpn}. The additive model yields the lowest performance with a $PQ$ score of $53.9\%$, as it treats the features from the REN and the main network equally, even though the representational capacities of the two networks are significantly different. Instead of supporting the main network in capturing a richer representation, the REN reduces the quality of the overall features. The concatenative model yields a better performance with a $PQ$ score of $56.5\%$. This model provides the network with the flexibility of determining which features are more significant for imparting the required spatial awareness and hence achieves a substantial improvement over the additive model. Nevertheless, the fusion model with a $PQ$ score of $57.4\%$ outperforms the other fusion methods. This model makes the fusion even more flexible by extending the concatenative model with an additional selectivity control, which improves the performance. We expect that more adaptive fusion techniques~\cite{valada2019self,valada2016convoluted,valada2016towards} can further improve the performance of our range-aware FPN.
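The two simpler baselines of this experiment can be sketched as follows; standard batch normalization stands in for iABNsync here, and the full fusion variant of \eqref{eq:range_fpn} is omitted:
\begin{verbatim}
import torch
import torch.nn as nn

class ConcatenativeFusion(nn.Module):
    # Concatenate REN and 2-way FPN features at one scale, then apply
    # two 3x3 convolutions (BN + leaky ReLU stand in for iABNsync).
    def __init__(self, ren_ch, fpn_ch=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ren_ch + fpn_ch, fpn_ch, 3, padding=1),
            nn.BatchNorm2d(fpn_ch), nn.LeakyReLU(0.01),
            nn.Conv2d(fpn_ch, fpn_ch, 3, padding=1),
            nn.BatchNorm2d(fpn_ch), nn.LeakyReLU(0.01))

    def forward(self, ren_feat, fpn_feat):
        return self.block(torch.cat([ren_feat, fpn_feat], dim=1))

class AdditiveFusion(nn.Module):
    # Expand the REN features to the FPN width and sum the two streams.
    def __init__(self, ren_ch, fpn_ch=256):
        super().__init__()
        self.expand = nn.Conv2d(ren_ch, fpn_ch, 1)

    def forward(self, ren_feat, fpn_feat):
        return self.expand(ren_feat) + fpn_feat
\end{verbatim}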
\subsubsection{Evaluation of Different Semantic Head Topologies} \label{sec:semantichead} \begin{table} \centering \caption{Evaluation of different variations of the proposed semantic head. All scores are in [\%].} \label{tab:RDPC ablation} \begin{tabular}{@{}cc|ccc|cc|c@{}} \toprule RLSFE & RDPC & PQ & PQ\textsuperscript{St} & PQ\textsuperscript{Th} & SQ & RQ & mIoU \\ \midrule \xmark & \xmark & $56.2$ & $55.5$ & $57.1$ & $74.7$ & $66.8$ & $61.1$ \\ \cmark & \xmark & $56.6$ & $56.0$ & $57.4$ & $74.7$ & $67.0$ & $61.4$ \\ \xmark & \cmark & $57.1$ & $56.2$ & $58.4$ & $74.9$ & $67.5$ & $62.1$ \\ \cmark & \cmark & \textbf{57.4} & \textbf{56.5} & \textbf{58.7} & \textbf{75.0} & \textbf{67.6} & \textbf{62.5} \\ \bottomrule \end{tabular} \end{table} The semantic head incorporates our proposed range-guided variants of the dense prediction cells (DPC)~\cite{chen2018searching} and the large scale feature extractor (LSFE)~\cite{mohan2020efficientps} modules with the bottom-up path connections. In this section, we compare the performance of the original versions of these two modules with their proposed range-guided counterparts. \tabref{tab:RDPC ablation} presents the results of this experiment. We first train the model M5 from \secref{sec:detailedAnalysis} with the semantic head consisting of the original DPC and LSFE modules. This model attains a $PQ$ score of $56.2\%$, a $PQ_{st}$ score of $55.5\%$, a $PQ_{th}$ score of $57.1\%$, and an $mIoU$ of $61.1\%$. For the second model, we replace the original LSFE module in the first model with our range-guided LSFE (RLSFE) module. This model achieves an improvement of $0.4\%$, $0.5\%$, $0.3\%$, and $0.3\%$ in the $PQ$, $PQ_{st}$, $PQ_{th}$, and $mIoU$ scores, respectively, compared to the first model. We observe a similar improvement in the performance of the third model that employs our proposed range-guided DPC (RDPC) module. This model achieves an improvement of $0.9\%$ in the $PQ$ score and $1.0\%$ in the $mIoU$ compared to the first model. Interestingly, the improvement in the PQ$^{th}$ score is higher than the improvement in the PQ$^{st}$ score, which are $1.3\%$ and $0.7\%$, respectively. This indicates a larger improvement in the semantic segmentation of 'thing' classes, which benefit from the denser and relatively larger receptive field of the RDPC, compared to 'stuff' classes. Finally, we train the first model with both the RLSFE and RDPC modules, which achieves a further improvement with a $PQ$ score of $57.4\%$ and an $mIoU$ of $62.5\%$. This experiment demonstrates that our semantic head effectively learns scale-invariant features due to its distance-dependent receptive fields. \subsubsection{Influence of Panoptic Periphery Loss} \begin{table} \centering \caption{Evaluation of the panoptic periphery loss using the border IoU metric for the person~(Pe), car~(Ca), bicyclist~(Bi) and other vehicle~(Ov) classes. All scores are in [\%].} \label{tab:Periphery loss} \begin{tabular}{c|c|cccc|c@{}} \toprule Model & Per. loss & bIoU\textsuperscript{Pe} & bIoU\textsuperscript{Ca} & bIoU\textsuperscript{Bi} & bIoU\textsuperscript{Ov} & PQ\textsuperscript{Th} \\ \midrule M4 & \xmark & 66.8 & 89.6 & 74.3 & 40.9 & 56.9 \\ M5 & \cmark & \textbf{70.0} & \textbf{90.3} & \textbf{75.1} & \textbf{47.5} & \textbf{58.7} \\ \bottomrule \end{tabular} \end{table} We evaluate the boundary refinement performance due to the panoptic periphery loss using the \textit{border-IoU} metric. The \textit{border-IoU} metric enables us to analyze the bleeding or shadowing effects that are observed while projecting the predicted labels back to point clouds. Our proposed loss exploits spatial information to refine the boundaries of panoptic 'thing' class objects.
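One way to compute this metric is sketched below, restricting the IoU to a band of $\pm$\texttt{width} pixels around the ground-truth boundary, where the bleeding concentrates; this is an illustrative formulation and details may differ from the exact metric definition:
\begin{verbatim}
import numpy as np
from scipy import ndimage

def border_iou(pred_mask, gt_mask, width=2):
    # IoU evaluated only inside a band around the ground-truth boundary.
    band = ndimage.binary_dilation(gt_mask, iterations=width) & \
        ~ndimage.binary_erosion(gt_mask, iterations=width)
    inter = (pred_mask & gt_mask & band).sum()
    union = ((pred_mask | gt_mask) & band).sum()
    return inter / union if union > 0 else 1.0
\end{verbatim}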
\tabref{tab:Periphery loss} presents the results using the \textit{border-IoU} metric for the top four 'thing' classes that achieve the highest improvement, namely person, car, bicyclist, and other-vehicle. We compute the \textit{border-IoU} for a border width of 2 pixels for the models M4 and M5 from \secref{sec:detailedAnalysis}, which are trained without and with the panoptic periphery loss, respectively. We observe the highest improvement of $6.6\%$ in $bIoU$ for the other-vehicle class, followed by a $3.2\%$ improvement in $bIoU$ for the person class. We also observe an improvement for the car and bicyclist classes. The other-vehicle class consists of different types of vehicles such as trailers, buses, and trains, which makes it challenging to accurately segment its boundaries. Similarly, the person class is often represented with only a few pixels for extended body parts, which again makes it challenging to accurately segment these boundaries. The inaccuracies in the border segmentation are further exacerbated while projecting back to the 3D domain. Hence, explicitly focusing on refining the boundaries using our proposed periphery loss yields substantial improvements. \subsubsection{Evaluation of Pseudo Labeling} \label{sec:psuedolabel} \begin{figure*} \centering \footnotesize {\renewcommand{\arraystretch}{1} \begin{tabular}{P{0.1cm}P{5.5cm}P{5.5cm}P{5.5cm}} & \raisebox{-0.4\height}{Input} & \raisebox{-0.4\height}{Naive}& \raisebox{-0.4\height}{Heuristic-Based (ours)} \\ \\ {\rotatebox[origin=c]{90}{Example 1}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/range_psuedo_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/naive_psuedo_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/ours_psuedo_1.png}} \\ \\ {\rotatebox[origin=c]{90}{Example 2}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/range_psuedo2.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/naive_psuedo2.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/ablation/ours_psuedo_large.png}} \\ \end{tabular}} \caption{Comparison of pseudo labels generated naively and using our proposed heuristic. Each example shows the input range image and the generated labels, with zoomed-in regions for better visibility. The labels generated using the naive heuristic misclassify the train as a truck (purple) in example~1 and a person as a bicyclist (light purple) in example~2. Our proposed heuristic assigns the misclassified pixels as unlabeled in both examples.} \label{fig:pseudo_ablation} \end{figure*} In this section, we evaluate the performance of our proposed heuristic for generating pseudo labels from unlabeled datasets. We first define two pseudo label generators, one generating naive pseudo labels (PLG\textsubscript{N}) and the other using our proposed heuristic (PLG\textsubscript{O}). PLG\textsubscript{N} is the M5 model from \secref{sec:detailedAnalysis}, which achieves the highest $PQ$ score on the SemanticKITTI validation dataset. In contrast, PLG\textsubscript{O} is the M5 model with its hyperparameters set such that it maximizes the ratio $(TP-FP)/TP$ on the validation dataset.
The hyperparameters that we tune via grid search are the overlap threshold, minimum stuff area, confidence threshold, and softmax threshold for the panoptic fusion module; the NMS IoU and score threshold for the R-CNN; and the number of proposals for the RPN. For naive labeling of the unlabeled dataset, we use the output of PLG\textsubscript{N} as the final pseudo labels. For our heuristic-based labeling, we employ the post-processing technique described in \secref{sec:psuedolabel} on the predictions of PLG\textsubscript{O} to obtain the final pseudo labels. \figref{fig:pseudo_ablation} shows example pseudo labels generated by both methods. In example~1, the train is misclassified as a truck in the output of the naive approach, whereas it is classified as unlabeled in the output of the heuristic-based method. In example~2, the naive approach misclassifies one of the two persons as a bicyclist, while our heuristic-based approach classifies both persons as unlabeled. These examples demonstrate that, wherever possible, our heuristic-based approach assigns objects as unlabeled rather than risking misclassification. \begin{table} \centering \caption{Evaluation of the proposed heuristic for pseudo labeling. All scores are in [\%].} \label{tab:heuristic ablation} \begin{tabular}{@{}c|ccc|cc|c@{}} \toprule Heuristic & PQ & PQ\textsuperscript{St} & PQ\textsuperscript{Th} & SQ & RQ & mIoU \\ \midrule \xmark & $57.4$ & $56.5$ & $58.7$ & \textbf{75.0} & $67.6$ & $62.5$\\ M\textsubscript{naive} & $58.0$ & $57.2$ & $59.1$ & \textbf{75.0} & $69.3$ & $64.0$\\ M\textsubscript{ours} & \textbf{59.2} & \textbf{58.0} & \textbf{60.9} & \textbf{75.0} & \textbf{69.8} & \textbf{64.9} \\ \bottomrule \end{tabular} \end{table} \tabref{tab:heuristic ablation} presents the quantitative results of this experiment. Both the M\textsubscript{naive} and M\textsubscript{ours} models have the same architecture as the M5 model from \secref{sec:detailedAnalysis} and are trained using the training scheme described in \secref{sec:psuedolabel}. The difference between the two models is that M\textsubscript{naive} is trained using the naive approach, whereas M\textsubscript{ours} is trained using our heuristic-based technique. We observe that both models achieve a higher $PQ$ score than the model trained without the pseudo labeled dataset. Moreover, M\textsubscript{ours} achieves the highest $PQ$ score of $59.2\%$, which is an improvement of $1.2\%$ over M\textsubscript{naive}. These results demonstrate that training our model on a pseudo labeled dataset improves the performance, and that we obtain a larger improvement if we further regularize the pseudo labels, for instance with our proposed heuristic. \subsubsection{Generalization Ability of Proposed Modules} \begin{table} \centering \caption{Evaluation of the generalization ability of our proposed architectural components.
All scores are in [\%].} \label{tab:robust ablation} \begin{tabular}{@{}l|ccc|cc|c@{}} \toprule Model & PQ & PQ\textsuperscript{St} & PQ\textsuperscript{Th} & SQ & RQ & mIoU \\ \midrule Panoptic FPN\textsubscript{vanilla} & $48.7$ & $49.6$ & $47.5$ & $71.7$ & $63.0$ & $55.2$ \\ Seamless\textsubscript{vanilla} & $50.6$ & $50.7$ & $50.5$ & $72.8$ & $63.6$ & $56.9$ \\ \midrule Panoptic FPN\textsubscript{ours} & $50.8$ & $50.9$ & $50.6$ & $72.5$ & $63.4$ & $56.3$ \\ Seamless\textsubscript{ours} & $53.4$ & $52.9$ & $54.1$ & $73.1$ & $64.2$ & $58.4$ \\ \bottomrule \end{tabular} \end{table} In this experiment, we study the effectiveness and generalization ability of our proposed modules by directly incorporating them into other well-known top-down image panoptic segmentation networks. We choose two state-of-the-art panoptic image segmentation networks: Panoptic~FPN~\cite{kirillov2019panoptice} and Seamless~\cite{porzi2019seamless}. In the vanilla version of both networks, we use the panoptic fusion module to compute the final panoptic segmentation predictions. We train the vanilla versions with an input resolution of $4096\times256$ using scan unfolding based projection and kNN-based post-processing. We employ the Lov\'{a}sz Softmax loss in addition to the weighted per-pixel log loss during training. \tabref{tab:robust ablation} shows the results of this experiment. The Panoptic~FPN\textsubscript{vanilla} model achieves a $PQ$ score of $48.7\%$ and an $mIoU$ of $55.2\%$, while the Seamless\textsubscript{vanilla} model achieves a $PQ$ score of $50.6\%$ and an $mIoU$ of $56.9\%$. \begin{figure*} \centering \footnotesize \setlength{\tabcolsep}{0.1cm} {\renewcommand{\arraystretch}{0.5} \begin{tabular}{P{0.4cm}P{5.5cm}P{5.5cm}P{5.5cm}} & \raisebox{-0.4\height}{Baseline Output} & \raisebox{-0.4\height}{EfficientLPS Output (Ours)} & \raisebox{-0.4\height}{Improvement/Error Map} \\ \\ {\rotatebox[origin=c]{90}{Semantic KITTI (a)}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/000423_base_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/000423_ours_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/000423_improvement.png}} \\ \\ {\rotatebox[origin=c]{90}{Semantic KITTI (b)}}&\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/001276_base_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/001276_ours_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/001276_improvement.png}} \\ \\ {\rotatebox[origin=c]{90}{nuScenes (c)}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/17_24_base_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/17_24_ours_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/17_24_improvement.png}} \\ \\ {\rotatebox[origin=c]{90}{nuScenes (d)}} &\raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/806_35_base_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/806_35_ours_1.png}} & \raisebox{-0.4\height}{\includegraphics[width=\linewidth]{images/qualitative/806_35_improvement.png}} \\ \end{tabular}} \caption{Qualitative comparison of LiDAR panoptic segmentation results. We compare against PanopticTrackNet on the SemanticKITTI and \mbox{(KPConv + Mask~R-CNN)} on the nuScenes validation sets.
We also show the improvement/error map, in which the points that are misclassified by EfficientLPS are shown in red and the points that are misclassified by the baseline but correctly predicted by EfficientLPS are shown in blue.} \label{fig:semantic_qualitative} \end{figure*} In our version of both networks, we introduce the proximity convolution module before the encoder, followed by employing the REN in parallel to the main encoder and switching the standard FPN to our range-aware FPN. Additionally, we use the panoptic periphery loss function during training. Since Panoptic~FPN only employs a $3\times3$ convolution in the semantic head for each of its FPN scales, we replace this convolution with our proposed $3\times3$ range-guided depth-wise atrous separable convolution with $D_{max}=3$. We keep the $D_{max}$ value low to compute denser features with a small receptive field, in contrast to a high value that would output sparser features with a large receptive field. In the case of the Seamless network, we extend the miniDL module in the semantic head with an additional parallel branch that consists of our proposed $3\times3$ range-guided depth-wise atrous separable convolution with $D_{max}=12$. As a result, Panoptic~FPN\textsubscript{ours} achieves an improvement of $2.1\%$ in the $PQ$ score and $1.1\%$ in $mIoU$ over the vanilla version. We observe a similar improvement of $2.8\%$ in the $PQ$ score and $1.5\%$ in $mIoU$ for the Seamless\textsubscript{ours} model. This demonstrates the generalization ability of our proposed architectural modules, as their direct incorporation without any parameter tuning still achieves a substantial improvement over the vanilla versions. These results also show that any future top-down panoptic segmentation network can easily be transformed into a LiDAR panoptic segmentation network by incorporating our proposed architectural components. \subsection{Qualitative Evaluations} \label{sec:qualitative} In this section, we qualitatively evaluate the panoptic segmentation performance of our proposed EfficientLPS model on the validation sets of the SemanticKITTI and nuScenes datasets. We compare the results of EfficientLPS with the best performing baseline from \secref{sec:comparisonSOTA} for the respective datasets. To this end, we compare with PanopticTrackNet and (KPConv + Mask~R-CNN) for the SemanticKITTI and nuScenes datasets, respectively. \figref{fig:semantic_qualitative} shows two comparisons for each of these datasets. We also present the improvement/error map for each of the comparisons and enlarge the segments of the outputs that show significant misclassification. The improvement/error map depicts the points that are misclassified by EfficientLPS with respect to the groundtruth in red, and the points that are correctly predicted by EfficientLPS but misclassified by the baseline model in blue.\looseness=-1 \figref{fig:semantic_qualitative}~(a) and \figref{fig:semantic_qualitative}~(b) show examples from the SemanticKITTI dataset, in which the improvement over the baseline output (PanopticTrackNet) can be seen in the more accurate segmentation of the other-vehicle class, as well as in the improved ability to distinguish inconspicuous 'thing' classes such as person and bicyclist. EfficientLPS also demonstrates a clearer separation of object instances while segmenting cluttered classes.
This can be primarily attributed to the proximity convolution module and the range-aware FPN, which provide the network with enhanced transformation modeling capacity along with distance-aware, semantically rich multi-scale features. This enables our model to learn highly discriminative features to accurately classify semantically related classes, while at the same time the spatial awareness allows it to accurately segment object instances that are very close to each other. In \figref{fig:semantic_qualitative}~(a), the baseline fails to classify the other-vehicle class, whereas our model accurately classifies this object. In \figref{fig:semantic_qualitative}~(b), the baseline model segments the bicyclist but classifies it as a person, depicted in red (the label color for the person class). In contrast, our model correctly classifies it as a bicyclist, depicted in magenta (the label color for the bicyclist class). Our model also successfully segments both the vegetation and fence classes, as opposed to the baseline, which misclassifies most of the fence as vegetation. Additionally, the effect of the boundary refinement is prominent in the segmentation of the car in \figref{fig:semantic_qualitative}~(a). In \figref{fig:semantic_qualitative}~(c) and \figref{fig:semantic_qualitative}~(d), we qualitatively compare the performance on the sparse nuScenes dataset. We observe that in \figref{fig:semantic_qualitative}~(c) the truck is not segmented in the output of the baseline model (KPConv + Mask~R-CNN), while our EfficientLPS model accurately segments the instance of the truck. In \figref{fig:semantic_qualitative}~(d), the baseline model segments the oncoming car instance but misclassifies it as a truck. It is particularly hard to accurately classify this instance, as only a few points lie on the object. Nevertheless, our model still classifies these points as a car, which can be attributed to the shared backbone and the adaptable dense receptive fields of the semantic head as well as the instance head. In \figref{fig:semantic_qualitative}~(d), the terrain class is accurately segmented by the EfficientLPS model, whereas the baseline misclassifies it as sidewalk. \section{Conclusion} \label{sec:conclusion} In this work, we presented a novel top-down approach for LiDAR panoptic segmentation using a 2D CNN that effectively leverages the unique spatial information provided by LiDAR point clouds. Our EfficientLPS architecture achieves state-of-the-art performance by incorporating novel architectural components that mitigate the problems caused by projecting the point cloud into the 2D domain. We proposed the proximity convolution module that effectively models geometric transformations of points in the projected image by exploiting the proximity between points. Our novel range-aware FPN demonstrates effective fusion of range encoded features with those of the main encoder to aggregate semantically rich multi-scale range-aware features. We proposed a new distance-dependent semantic head that incorporates our range-guided depth-wise atrous separable convolutions with adaptive dilation rates that cover large receptive fields densely to better capture the contextual semantic information. We further introduced the panoptic periphery loss function that refines object boundaries by utilizing the range information to maximize the gap between the object boundary and the background. Moreover, we presented a new heuristic for generating pseudo labels from unlabeled datasets to assist the network with additional training data.
We introduced panoptic ground truth annotations for the sparse LiDAR point clouds in the nuScenes dataset, which we made publicly available. Additionally, we provided several baselines for LiDAR panoptic segmentation on this new dataset. We presented comprehensive benchmarking results on the SemanticKITTI and nuScenes datasets which show that EfficientLPS sets the new state-of-the-art. EfficientLPS is ranked~\#1 on the SemanticKITTI panoptic segmentation leaderboard. We presented exhaustive ablation studies with quantitative and qualitative results that demonstrate the contribution of each of the proposed architectural components. Furthermore, we made the code and models publicly available.

\section*{Acknowledgements}
This work was partly funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 871449-OpenDR, a research grant from Eva Mayr-Stihl Stiftung, and partly financed by the Baden-Württemberg Stiftung gGmbH.\looseness=-1

\ifCLASSOPTIONcaptionsoff \newpage \fi

\bibliographystyle{IEEEtran}
\section*{Introduction}
Diffeological spaces and Fr\"olicher spaces are two frameworks, developed independently in the 20th century, in order to define smoothness on objects where basic differential properties cannot be stated in a standard way. Motivated by the technical difficulty of properly formulating applied problems, many such settings, developed independently as generalizations of classical finite dimensional differential geometry, exist in the literature. A non-exhaustive list of such settings can be found in e.g. \cite{St2011}. One global underlying problem is the following: \vskip 6pt \centerline{\textit{How to express variational problems in a setting where differentiation is not easy?}} \vskip 6pt One immediately has in mind spaces with singularities as elementary examples, such as the irrational torus or orbifolds, where a rich differential geometry can be described by the use of diffeologies \cite{KIg}; but initially the motivating (historical) examples for the development of diffeologies were groups of diffeomorphisms of non-compact manifolds and coadjoint orbits, in the works of the French mathematician Jean-Marie Souriau (written entirely in French), reviewed and deeply expanded in \cite{Igdiff}. In the same way, the motivations of the Swiss mathematician Alfred Fr\"olicher were certainly partly based on infinite dimensional considerations, up to his book with Andreas Kriegl \cite{FK}. Therefore, motivated by historical considerations and by our own concerns, and while most recent works in these two settings are developed against a background of algebraic or symplectic geometry (understood in a wide sense), we feel the need for an updated presentation of diffeological spaces, and of Fr\"olicher spaces as a subcategory, based on infinite dimensional examples arising in functional analysis, infinite dimensional geometry and numerical analysis. Indeed, for about ten years, under the impulsion of a new generation of motivated researchers, diffeologies have drastically increased their known technical capacities. Examples and counter-examples have successfully highlighted specific constructions that point out singular problems, drawing the attention of non-specialists to diffeologies as a potential framework for some of their technical difficulties. It is to this class of potential readers that our text is primarily addressed. We ourselves come from two very different backgrounds, both linked with functional analysis, and the use of diffeologies has imposed its necessity on us. Indeed, after breakthroughs in the theory of geometry and integrable systems \cite{Ma2013,MR2016}, diffeological spaces have also recently been considered in applied mathematics (see, e.g., \cite{KW21}). From the viewpoint of integrable systems, diffeologies make it possible to state smoothness, and hence well-posedness, in infinite dimensional (mostly algebraic) frameworks. From the viewpoint of shape optimization problems, diffeologies provide a more flexible framework than the one classically considered in the last decades, which is based on classical Riemannian infinite dimensional geometry. One aim of this paper is to give a clear and understandable presentation of basic results on diffeologies and Fr\"olicher spaces, based on the examples that we know best, in a simple language accessible to other researchers in the fields of functional analysis and applied mathematics understood in a very wide sense.
A further aim of this paper is to give a clear exposition of some basic concepts where diffeologies are interrelated with topics of functional analysis, infinite dimensional geometry and optimization, for researchers whose background lies e.g. in low dimensional topology or category theory, and who are unfamiliar with such infinite dimensional settings. The structure of the paper is as follows: \begin{itemize} \item A brief introduction to diffeological and Fr\"olicher spaces is given in section \ref{sec:DiffFroehl}. In particular, we present and summarize the various tangent space definitions for diffeological spaces which appear in the literature. Other geometric objects generalized to diffeologies are also considered, such as diffeological (Lie) groups, vector or principal (pseudo-)bundles, Riemannian metrics, groups of diffeomorphisms and automorphisms of various diffeological algebraic or set-theoretic structures. \item Section \ref{diffeq} is concerned with (partial) differential equations in a two-fold way. We take advantage of this exposition, in the two sections, to highlight or review some tangent and cotangent structures that are well-known on (finite dimensional) manifolds and are recovered on diffeological spaces with specific technical details. In section \ref{s:reg}, we consider the problem of the existence of an exponential map in a diffeological Lie group, solving a first order differential equation, and as a by-product holonomy-type equations, by stating an infinite dimensional Ambrose-Singer theorem. In section \ref{sec:Optimization} we present first order optimization techniques on diffeological spaces. Here, we concentrate on shape optimization, which has a wide range of applications. \item The paper ends with section \ref{sec:MappingSpaces}, where basic functional spaces, that is, mapping spaces, are reviewed from the viewpoint of infinite dimensional geometry. Depending on the regularity required for the mappings and on the source manifolds, one obtains either Hilbert, Banach or ILB manifolds along the lines of \cite{Om}, or topological spaces which can be understood as diffeological spaces \cite{Ma2019}. Implicit functions, jet spaces, and spaces of triangulations are related examples where diffeologies apply in settings related to mapping spaces. \end{itemize}

\section{A presentation of diffeological and Fr\"olicher spaces, with examples}
\label{sec:DiffFroehl}
\subsection{Diffeological and Fr\"olicher spaces}
In this section we follow and complete the expositions appearing in \cite{Ma2013,MR2019,MR2016}. The main reference for a comprehensive exposition on diffeologies is \cite{Igdiff}. This reference will be complemented all along the text by references on specific concerns. \begin{Definition} \label{d:diffeology} Let $X$ be a set. \noindent $\bullet$ A \textbf{$p$-parametrization} (a parametrization of dimension $p$) on $X$ is a map from an open subset $O$ of $\R^{p}$ to $X$. \noindent $\bullet$ A \textbf{diffeology} on $X$ is a set $\p$ of parametrizations on $X$ such that: \begin{itemize} \item For each $p\in\N$, any constant map $\R^{p}\rightarrow X$ is in $\p$; \item \label{d:local} For each arbitrary set of indexes $I$ and each family $\{f_{i}:O_{i}\rightarrow X\}_{i\in I}$ of compatible maps that extend to a map $f:\bigcup_{i\in I}O_{i}\rightarrow X$, if $\{f_{i}:O_{i}\rightarrow X\}_{i\in I}\subset\p$, then $f\in\p$;
\item \label{d:compose} For each $f\in\p$, $f : O\subset\R^{p} \rightarrow X$, and each map $g : O' \subset \R^{q} \rightarrow O$ which is smooth (in the usual sense) from an open set $O' \subset \R^{q}$ to $O$, we have $f\circ g\in\p$. \end{itemize} \vskip 6pt If $\p$ is a diffeology on $X$, then $(X,\p)$ is called a \textbf{diffeological space} and, if $(X,\p)$ and $(X',\p')$ are two diffeological spaces, a map $f:X\rightarrow X'$ is \textbf{smooth} if and only if $f\circ\p\subset\p'$. \end{Definition} The notion of a diffeological space is due to J.M. Souriau, see \cite{Sou}, who was inspired by the remarks and constructions of Chen in \cite{Chen}. A comprehensive exposition of the basic concepts can be found in \cite{Igdiff}. \begin{Definition} Let $(X,\p)$ be a diffeological space and let $\p'$ be another diffeology on $X.$ Then $\p'$ is a \textbf{subdiffeology} of $\p$ if and only if $\p' \subset \p$ or, in other words, if the identity mapping $(X,\p')\rightarrow (X,\p)$ is smooth. \end{Definition} \begin{ex} On any nonempty set $X$ there exists a diffeology, minimal for inclusion, made only of locally constant parametrizations. It is called the \textbf{discrete diffeology}. It is a subdiffeology of any diffeology on $X.$ \end{ex} \begin{ex}\label{ex:p(A)} Let $X$ be a set. Given a non-empty family $A$ of functions $$f : O_f \rightarrow X,$$ where the domain $O_f$ of the function $f$ is an open subset of a Euclidean space, there exists a diffeology $\p(A),$ minimal for inclusion, that contains $A$ (it can be obtained as the intersection of all the diffeologies containing $A$), in other words, for which $A$ is a family of smooth maps. This is the \textbf{diffeology generated by} $A.$ \end{ex} The category of diffeological spaces is very large and carries many different pathological examples, even if it provides a very easy-to-use framework for infinite dimensional objects. Therefore, a restricted category can be useful: the category of Fr\"olicher spaces. This is for example the framework chosen in \cite{Can2020}. \begin{Definition} A \textbf{Fr\"olicher} space is a triple $(X,\F,\C)$ such that

- $\C$ is a set of paths $\R\rightarrow X$;

- $\F$ is the set of functions from $X$ to $\R$ such that a function $f:X\rightarrow\R$ is in $\F$ if and only if for any $c\in\C$, $f\circ c\in C^{\infty}(\R,\R)$;

- a path $c:\R\rightarrow X$ is in $\C$ (i.e. is a \textbf{contour}) if and only if for any $f\in\F$, $f\circ c\in C^{\infty}(\R,\R)$.

\vskip 5pt If $(X,\F,\C)$ and $(X',\F',\C')$ are two Fr\"olicher spaces, a map $f:X\rightarrow X'$ is \textbf{smooth} if and only if $\F'\circ f\circ\C\subset C^{\infty}(\R,\R)$. \end{Definition} This definition first appeared in \cite{Fr1982}, see e.g. \cite{FK}, but the present terminology was fixed, to our knowledge, in Andreas Kriegl and Peter Michor's book \cite{KM}, and independently by Paul Cherenack in \cite{Ch1998,Ch1999}. \vskip 6pt A comparison of these two frameworks needs to be presented for a complete review. Its first steps were published in \cite{Ma2006-3}; the reader can also see \cite{Ma2013,Ma2018-2,MR2016,Wa} for extended expositions. In particular, it is explained in \cite{MR2016} that {\em diffeological, Fr\"olicher and Gateaux smoothness are the same notion if we restrict ourselves to a Fr\'echet context}, in a way that we explain here.
For this, we first need to analyze how we generate a Fr\"olicher or a diffeological space, that is, how we implement a Fr\"olicher or a diffeological structure on a given set $X.$ Any family of maps $\F_{g}$ from $X$ to $\R$ generates a Fr\"olicher structure $(X,\F,\C)$ by setting, after \cite{KM}:

- $\C=\{c:\R\rightarrow X\hbox{ such that }\F_{g}\circ c\subset C^{\infty}(\R,\R)\}$

- $\F=\{f:X\rightarrow\R\hbox{ such that }f\circ\C\subset C^{\infty}(\R,\R)\}.$

We call $\F_g$ a \textbf{generating set of functions} for the Fr\"olicher structure $(X,\F,\C)$. One easily sees that $\F_{g}\subset\F$. A Fr\"olicher space $(X,\F,\C)$ carries a natural topology, the pull-back topology of $\R$ via $\F$. In the same way, one can alternatively start from a generating set of contours $\C_g$ and set:

- $\F=\{f:X\rightarrow\R\hbox{ such that }f\circ\C_g\subset C^{\infty}(\R,\R)\}$

- $\C=\{c:\R\rightarrow X\hbox{ such that }\F\circ c\subset C^{\infty}(\R,\R)\}.$

In the case of a finite dimensional differentiable manifold $X$ we can take for $\F$ the set of all smooth maps from $X$ to $\R$, and for $\C$ the set of all smooth paths from $\R$ to $X.$ Then, the underlying topology of the Fr\"olicher structure is the same as the manifold topology \cite{KM}. We also remark that if $(X,\F,\C)$ is a Fr\"olicher space, we can define a natural diffeology on $X$ by using the following family of maps $f$ defined on open domains $D(f)$ of Euclidean spaces, see \cite{Ma2006-3}: $$ \p_\infty(\F)= \coprod_{p\in\N^*}\{\, f: D(f) \rightarrow X\, | \, D(f) \hbox{ is open in } \R^p \hbox{ and } \F \circ f \subset C^\infty(D(f),\R) \}.$$ If $X$ is a finite-dimensional differentiable manifold and $\F=C^\infty(X,\R),$ this diffeology is called the {\em nebulae diffeology}, see e.g. \cite{Igdiff}. Now, we can easily show the following: \begin{Proposition} \label{fd} \cite{Ma2006-3} Let $(X,\F,\C)$ and $(X',\F',\C')$ be two Fr\"olicher spaces. A map $f:X\rightarrow X'$ is smooth in the sense of Fr\"olicher if and only if it is smooth for the underlying diffeologies $\p_\infty(\F)$ and $\p_\infty(\F').$ \end{Proposition} Thus, Proposition \ref{fd} and the foregoing remarks imply that the following implications hold: \vskip 12pt \begin{tabular}{ccccc} smooth manifold & $\Rightarrow$ & Fr\"olicher space & $\Rightarrow$ & diffeological space \end{tabular} \vskip 12pt \noindent These implications can be analyzed in a refined way. The reader is referred to the Ph.D. thesis \cite{Wa} for a deeper analysis of them. \begin{rem} The set of contours $\C$ of the Fr\"olicher space $(X,\F,\C)$ does not give us a diffeology, because a diffeology needs to be stable under restriction of domains: the domain of a path in $\C$ is always $\R$, whereas the domain of a 1-plot can (and has to) be allowed to be any open subset of $\R.$ However, $\C$ defines a ``minimal diffeology'' $\p_1(\F)$ whose plots are the smooth parametrizations which are locally of the type $c \circ g,$ in which $g \in \p_\infty(\R)$ and $c \in \C.$ Within this setting, we can replace $\p_\infty$ by $\p_1$ in Proposition \ref{fd}. The main technical tool needed to discuss this issue is Boman's theorem \cite[p.26]{KM}. Related discussions are in \cite{Ma2006-3,Wa}. \end{rem} After this remark, one can push the ``restriction'' procedure further, for any diffeology $\p$ on $X.$ \begin{Theorem} \label{th:dim} Let $\p$ be a diffeology on $X.$ Let $\F(\p)$ be the set of smooth functions from $(X,\p)$ to $(\R,\p_\infty(\R)),$ where $\R$ is understood as a smooth manifold.
Then: \begin{itemize} \item there exists a nebulae diffeology $\p_\infty$ generated by $\F(\p),$ \item $\p\subset \p_\infty,$ \item $\forall k \in \N^*,$ there exists a \textbf{$k$-dimensional diffeology} $\p_k \subset \p$ made of the plots $p \in \p$ which factor through $$ D(p) \rightarrow \R^k \rightarrow X,$$ \item $\forall k \in \N^*,$ the set of smooth maps from $(X,\p_k)$ to $(\R,\p_\infty(\R))$ coincides with $\F(\p).$ \end{itemize} \end{Theorem} This construction for $\p=\p_\infty$ can be found in \cite{Ma2013}; it extends trivially to any diffeology $\p.$ By this theorem, one easily sees that the categories of Fr\"olicher spaces and of diffeological spaces do not coincide. We now need the following precision: \begin{Definition} \cite{Wa} If $\p=\p_\infty,$ the diffeology $\p$ is called \textbf{reflexive} (and it is the nebulae diffeology of a Fr\"olicher space). \end{Definition} \begin{rem} Given a diffeological space $(X,\p),$ there is a natural topology on $X$, called the $D-$topology in \cite{CSW}, which is the final topology for the set of maps $p \in \p.$ This topology can be rather complicated and is actually not so well studied. To our best knowledge, the problem of the topology associated to a diffeology is currently better understood in the context of Fr\"olicher spaces; see e.g. \cite{FK,KM} for a series of technical results. \end{rem} Let us now give a class of examples of diffeologies. \begin{ex} Let $k \in \N.$ Let $(X,\F,\C)$ be a Fr\"olicher space. The $C^k-$diffeology of $(X,\F,\C)$ is the diffeology $\p_{(k)}$ defined by: $$\forall d \in \N^*, \, \forall O \hbox{ open subset of } \R^d, \quad {\p_{(k)}}_O = \left\{ p : O \rightarrow X \, | \, \F \circ p \subset C^k(O,\R) \right\}$$ and $$\p_{(k)} = \bigcup_{d \in \N^*} \bigcup_{O \hbox{ open in }\R^d} {\p_{(k)}}_O.$$ This example is given in \cite{Igdiff} for a finite dimensional manifold. The extension to Fr\"olicher structures is straightforward. \end{ex}

\subsection{Diffeologies on spaces of mappings, products, quotients, subsets and algebraic structures}
\begin{Proposition} \label{prod1} \cite{Sou,Igdiff} Let $(X,\p)$ and $(X',\p')$ be two diffeological spaces. There exists a diffeology $\p\times\p'$ on $X\times X'$ made of plots $g:O\rightarrow X\times X'$ that decompose as $g=f\times f'$, where $f:O\rightarrow X\in\p$ and $f':O\rightarrow X'\in\p'$. We call it the \textbf{product diffeology}, and this construction extends to an infinite (possibly uncountable) product. \end{Proposition} We apply this result to the case of Fr\"olicher spaces and we derive (compare with \cite{KM}) the following: \begin{Proposition} \label{prod2} Let $(X,\F,\C)$ and $(X',\F',\C')$ be two Fr\"olicher spaces equipped with their natural diffeologies $\p$ and $\p'$. There is a natural structure of Fr\"olicher space on $X\times X'$ whose contours $\C\times\C'$ are the 1-plots of $\p\times\p'$. \end{Proposition} We can also state the above result for infinite products; we simply take Cartesian products of the plots, or of the contours. \smallskip Now we consider quotients, after \cite{Sou} and \cite[p. 27]{Igdiff}. Let $(X,\p)$ be a diffeological space and let $X'$ be a set. Let $f:X\rightarrow X'$ be a map. We define the \textbf{push-forward diffeology}, denoted by $f_*(\p),$ as the smallest (i.e. minimal for inclusion) among the diffeologies on $X'$ which contain $f \circ \p,$ that is, for which $f$ is smooth.
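To illustrate the push-forward construction on a concrete space, let us sketch the classical example of the irrational torus mentioned in the introduction; the facts below are standard, see e.g. \cite{KIg,Igdiff}, and we only state them for orientation. \begin{ex} Let $\alpha \in \R \setminus \mathbb{Q}$ and let $T_\alpha = \R / (\Z + \alpha\Z),$ with quotient projection $\pi : \R \rightarrow T_\alpha.$ The push-forward diffeology $\pi_*(\p_\infty(\R))$ is made of the parametrizations which locally read $\pi \circ p,$ where $p$ is a smooth parametrization of $\R.$ Since $\Z + \alpha\Z$ is dense in $\R,$ the quotient topology of $T_\alpha$ is trivial and every smooth map $T_\alpha \rightarrow \R$ is constant: such a map lifts to a smooth $(\Z+\alpha\Z)$-invariant function on $\R,$ which must be constant by density and continuity. Nevertheless, $\pi_*(\p_\infty(\R))$ is far from being the discrete diffeology, so that $T_\alpha$ carries a non-trivial smooth structure which is invisible both to its topology and to its algebra of smooth real-valued functions. \end{ex}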
The push-forward construction is a specification of example \ref{ex:p(A)}, where the generating family $A$ is set to $A=f \circ \p.$ A smooth map $f:(X,\p) \rightarrow (X',\p')$ between two diffeological spaces is called a \textbf{subduction} if it is surjective and if $f_*(\p) = \p'.$ Conversely, if $(X',\p')$ is a diffeological space, if $X$ is a set and if $f:X\rightarrow X',$ the \textbf{pull-back diffeology}, denoted by $f^*(\p'),$ is the diffeology on $X,$ maximal for inclusion, for which $f$ is smooth. \begin{Proposition} \label{quotient} \cite{Sou,Igdiff} Let $(X,\p)$ be a diffeological space and let $\rel$ be an equivalence relation on $X$. Then, there is a natural diffeology on $X/\rel$, denoted by $\p/\rel$, defined as the push-forward diffeology on $X/\rel$ by the quotient projection $X\rightarrow X/\rel$. \end{Proposition} Given a subset $X_{0}\subset X$, where $X$ is a Fr\"olicher space or a diffeological space, we equip $X_{0}$ with structures induced by $X$ as follows: \begin{enumerate} \item If $X$ is equipped with a diffeology $\p$, we define a diffeology $\p_{0}$ on $X_{0},$ called the \textbf{subset or trace diffeology}, see \cite{Sou,Igdiff}, by setting \[ \p_{0}= \lbrace p\in\p \hbox{ such that the image of }p\hbox{ is a subset of } X_{0}\rbrace\; . \] \item If $(X,\F,\C)$ is a Fr\"olicher space, we take as a generating set of maps $\F_{g}$ on $X_{0}$ the restrictions of the maps $f\in\F$. In this case, the contours (resp. the induced diffeology) on $X_{0}$ are the contours (resp. the plots) on $X$ whose images are subsets of $X_{0}$. \end{enumerate} \begin{ex}\label{ex:YpowerX} Let us consider the infinite product diffeology on $Y^X.$ This is the largest diffeology for which the evaluation maps $\operatorname{ev}_x$ are smooth for each $x \in X.$ Therefore, any other diffeology on any subset of $Y^X$ for which the evaluation maps are smooth is a subdiffeology of the subset diffeology inherited from this product diffeology. \end{ex} \begin{ex} Let $M$ be a smooth (Riemannian) manifold. Then, the set $M^\N$ of sequences on $M$ is a topological space, with open sets of the form $$O_0 \times O_1 \times ... \times O_k \times ...$$ where $\{O_k, k \in \N\}$ is a family of open subsets of $M$ such that $O_k=M$ except for a finite set of indexes $k.$ With this topology, it seems difficult to define an atlas on $M^\N,$ while an intuitive ``natural'' differentiation holds by considering differentiation on each component of the infinite product. By considering a diffeology $\p$ on $M$ such that $\F(\p)=C^\infty(M,\R)$ in the usual sense, the diffeology $\p$ also defines a diffeology on $M^\N,$ by the (countable) infinite product diffeology along the lines of Example \ref{ex:YpowerX}, that encodes this ``natural differentiation''. An example of such a diffeology is obtained by setting $\p=\p_\infty(M).$ Following Proposition \ref{prod2}, it is an easy exercise to check that the $n$-dimensional diffeology $\p_n$ related to $\p$ also generates an $n$-dimensional diffeology on the infinite product. Moreover, one can also consider subsets of $M^\N$ which inherit a subset diffeology, such as the so-called \emph{marked (infinite) configuration spaces} (see e.g. \cite{AKLU2000}) that we denote here by $$O\Gamma_M =\left\{ (x_n)\in M^\N \, | \, \forall (m,n)\in \N^2, m \neq n \Leftrightarrow x_n \neq x_m \right.$$ $$\left.
\hbox{ and for all bounded sets } B \subset M, \, |\{x_n \, | \, n \in \N\}\cap B| < +\infty \right\}.$$ It is an easy exercise to prove that the differentiable structure on $O\Gamma_M,$ defined by another generalized notion of differentiability called Sikorski differentiability (see e.g. \cite{St2011} for an overview), fits here with our diffeology, once we have remarked that each point $x_n \in M$ can be identified with its Dirac measure $\delta_{x_n}$ acting on $C^\infty(M,\R).$ Associated with marked infinite configuration spaces, the (unmarked) \emph{infinite configuration space} $\Gamma_M,$ studied by the same research group (see e.g. \cite{ADL2001}), can be understood as the space of marked configurations up to reindexing. More precisely, the infinite symmetric group $\mathfrak{G}_\infty,$ made of the bijections of $\N,$ acts on the indexes of the sequences of $O\Gamma_M,$ and $$\Gamma_M = O\Gamma_M / \mathfrak{G}_\infty.$$ Equipped with its quotient diffeology, this space inherits the same structure for generalized differentiability as the one described in the original works. Other similar examples can be found in \cite{Ism}. \end{ex} \begin{ex}\cite{Ma2020-3} Let $Y$ be a smooth complete manifold (not necessarily finite dimensional) and let $(X,\p)$ be a diffeological space such that $X \subset Y$ as a set and $\p \subset \p_\infty(Y)$ (in other words, the canonical inclusion map is smooth for the nebulae diffeology on $Y$). Then we define $\C(X,Y)$ as the set of sequences in $X$ that converge in $Y,$ in other words, the subset of $X^\N$ made of the sequences that are Cauchy for the topology of $Y.$ Then: \begin{itemize} \item the limit map $$ \lim: \C(X,Y) \rightarrow Y$$ is not smooth when $\C(X,Y)$ is equipped with the subset diffeology inherited from the infinite product diffeology of $X^\N$ and $Y$ is equipped with $\p_\infty(Y),$ as soon as there exists a non-constant smooth path in $X;$ \item the pull-back diffeology $\lim^*(\p_\infty(Y))$ may make the canonical projection (evaluation) maps $$ \C(X,Y) \rightarrow X$$ non-smooth; consider for example $Y = \R$ and $X = \mathbb{Q}$ equipped with its subset diffeology. \end{itemize} Therefore, there is a true necessity to define the \emph{Cauchy diffeology} on $\C(X,Y),$ for which both the limit map and the evaluation maps are smooth. \end{ex} Our last general construction is the so-called functional diffeology. Its existence implies the following crucial fact: the category of diffeological spaces is Cartesian closed. Our discussion follows \cite{Igdiff}. Let $(X,\p)$ and $(X',\p')$ be diffeological spaces and let $S \subset C^\infty(X,X')$ be a set of smooth maps. The \textbf{functional diffeology} on $S$ is the diffeology $\p_S$ made of the plots $$ \rho : D(\rho) \subset \R^k \rightarrow S$$ such that, for each $p \in \p$, the maps $\Phi_{\rho, p}: (x,y) \in D(p)\times D(\rho) \mapsto \rho(y)(p(x)) \in X'$ are plots of $\p'.$ We have, see \cite[Paragraph 1.60]{Igdiff}: \begin{Proposition} \label{cvar} Let $X,Y,Z$ be diffeological spaces. Then, $$ C^\infty(X\times Y,Z) = C^\infty(X,C^\infty(Y,Z)) = C^\infty(Y,C^\infty(X,Z)) $$ as diffeological spaces equipped with functional diffeologies. \end{Proposition} The next property will be useful for us: \begin{Proposition} The composition of maps is smooth for the functional diffeologies. \end{Proposition} Now, given an algebraic structure, we can define a corresponding compatible diffeological (resp. Fr\"olicher) structure, see for instance \cite{Les}. For example, see \cite[pp.
66-68]{Igdiff}, if $\R$ is equipped with its canonical diffeology (resp. Fr\"olicher structure), we say that an $\R-$vector space equipped with a diffeology (resp. Fr\"olicher structure) is a diffeological (resp. Fr\"olicher) vector space if addition and scalar multiplication are smooth. We state: \begin{Definition} Let $G$ be a group equipped with a diffeology (resp. Fr\"olicher structure). We call it a \textbf{diffeological (resp. Fr\"olicher) group} if both multiplication and inversion are smooth. \end{Definition} In the same way, one can define diffeological fields, rings, algebras, groupoids, actions and so on.

\subsection{The principle of pull-back on parametrizations}
We explain here, with non-categorical vocabulary, some of the constructions that are necessary for the definition of a \textbf{colimit}, which is often used to define geometric objects on a diffeology. We perform this construction in three examples: the internal or kinematic tangent space, the Riemannian metric and the Hausdorff volume. One has to remark that the plots of a diffeology behave like charts on a manifold through property \ref{d:local} in Definition \ref{d:diffeology}. The notion of parametrization departs from the notion of chart at the point where a chart defines an open neighborhood of each point of its image set, which is not the case for a plot of a given diffeology. Moreover, for a given diffeological space $(X,\p),$ the diffeology carries a notion of plot dimension in two ways: \begin{itemize} \item by the dimension of the domain of the plot $p,$ \item and by the filtration of diffeologies defined in Theorem \ref{th:dim}: $$\p_1 \subset \p_2 \subset \cdots \subset \p.$$ \end{itemize} Indeed, rephrasing the definition of $\p_n,$ a plot $p \in \p$ lies in $\p_n$ if there exist $g \in \p_\infty(\R^n)$ and a smooth map $h$ from $(\R^n,\p_\infty(\R^n))$ to $(X,\p)$ such that $p = h \circ g.$ Now, let us consider a smooth manifold $M$ equipped with its nebulae diffeology. Some geometric constructions (differential forms, Riemannian metrics, etc.) can be pulled back from $M$ to the domain of any plot $p.$ Therefore, there is a \emph{pull-back principle} that we can apply to define the same constructions on the domain of any plot of a diffeology $\p,$ provided that they are stable under pull-back via composition with smooth maps. Let us develop in a few first examples how basic constructions on finite dimensional manifolds extend to diffeological spaces through this principle.

\subsubsection{The kinematic (or internal) tangent cone}\label{s:internal tangent}
We describe here the construction first established in \cite{Les} for diffeological groups, in \cite{DN2007-1} for Fr\"olicher spaces, and extended in \cite{CW} to diffeologies, in a categorical vocabulary. Our aim is to express the same construction without the assumption that the reader is familiar with the categorical framework. Let $(X,\p)$ be a diffeological space.
The domain $D(p)$ of each plot $p$ of $\p$ can be considered as a smooth manifold, and the first objects of interest are the tangent vectors in the tangent space $TD(p).$ They are understood as germs of smooth paths $\frac{d\gamma}{dt}|_{t=0},$ where $\gamma \in C^\infty(\R,D(p)).$ Let $x \in X$ and let us consider $$\mathcal{C}_x = \left\{ \gamma \in C^\infty(\R,X) \, | \, \gamma(0)=x\right\}.$$ For each $p \in \p,$ we also define $$\mathcal{C}_{x,p} = \left\{\gamma \in C^\infty(\R,D(p)) \, | \, p \circ \gamma(0) = x\right\}.$$ This set of smooth paths passing through $x$ enables us to define the kinematic set $$\mathcal{K}_x = \coprod_{p \in \p} \left\{ \frac{d \gamma}{dt}|_{t=0} \, | \, \gamma \in \C_{x,p} \right\} = \coprod_{p \in \p} \coprod_{x_0 \in p^{-1}(x)} T_{x_0}D(p).$$ Then, we identify $(X_1,X_2) \in \mathcal{K}_x^2,$ where $X_1 = \frac{d \gamma_1}{dt}|_{t=0} \in T_{x_1}D(p_1)$ and $X_2 = \frac{d \gamma_2}{dt}|_{t=0} \in T_{x_2}D(p_2),$ if there exist $p_{3} \in \p$ and $(\gamma_{3,1},\gamma_{3,2}) \in \mathcal{C}_{x,p_3}^2$ such that \begin{equation} \label{id-germs} \left\{\begin{array}{l} \forall i \in \{1,2\}, \, p_i \circ \gamma_i = p_{3} \circ \gamma_{3,i}, \\ \frac{d\gamma_{3,1}}{dt}|_{t=0} = \frac{d\gamma_{3,2}}{dt}|_{t=0}. \end{array}\right.\end{equation} This identification is reflexive and symmetric, but not transitive, as is shown in the following counter-example: \begin{ex} \label{spagh} Let $$ X= \{(x,y,z) \in \R^3 \, | \, yz = 0\}.$$ We equip $X$ with its subset diffeology, inherited from the nebulae diffeology of $\R^3.$ Let us consider the paths $$\gamma_1(t) = (t,t^2,0)$$ and $$\gamma_2(t)=(t,0,t^2).$$ The natural intuition (which will be shown to have a defect later in the exposition, for another diffeology on $X$) says that $$\frac{d\gamma_{1}}{dt}|_{t=0} = \frac{d\gamma_{2}}{dt}|_{t=0} = (1,0,0),$$ \emph{but there is no parametrization $p_3$ at $x=(0,0,0)$ that fulfills (\ref{id-germs}).} Indeed, the diffeology of $X$ is generated by the push-forwards of the plots of $\R^2$ to $X$ by the maps $(x,y) \mapsto (x,y,0)$ and $(x,y) \mapsto (x,0,y).$ In order to identify the two germs, one has to consider an intermediate path $\gamma_{1.5}(t) = (t,0,0).$ \end{ex} Therefore, we define: \begin{Definition} We define the equivalence relation $\sim$ on $\mathcal{K}_x$ as follows: for $(u_1,u_2) \in \mathcal{K}_x^2,$ $u_1 \sim u_2$ if and only if one of the two following conditions is fulfilled: \begin{enumerate} \item $$\exists (\gamma_1,\gamma_2) \in \left(\coprod_{p \in \p}\C_{x,p}\right)^2, \frac{d\gamma_{1}}{dt}|_{t=0} = u_1 \in TD(p_1), \frac{d\gamma_{2}}{dt}|_{t=0} = u_2 \in TD(p_2),$$ and there exist also $p_{3} \in \p$ and $(\gamma_{3,1},\gamma_{3,2}) \in \mathcal{C}_{x,p_3}^2$ such that condition (\ref{id-germs}) applies; \item there exists a finite sequence $(v_1,...,v_k)\in \mathcal{K}_x^k$ such that $v_1 = u_1,$ $v_k = u_2,$ and such that condition (1) applies to $v_i$ and $v_{i+1}$ for each index $i \in \{1,\ldots,k-1\}.$ \end{enumerate} \end{Definition} \begin{Definition} The \emph{internal} or \emph{kinematic tangent cone} of $X$ at $x \in X$ is defined by $${}^iT_xX = \mathcal{K}_x/\sim.$$ The space ${}^iT_xX$ is endowed with the push-forward of the functional diffeology on $\mathcal{C}_x.$ \end{Definition} The elements of ${}^iT_xX$ are called \emph{germs of paths} on $X$ at $x.$ In fact, we will produce later in our exposition other tangent spaces, so that we avoid the terminology ``tangent vector'', which may be misleading.
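Before turning to differential forms, let us illustrate this construction in the familiar situation of a smooth manifold; the computation below is only a sketch, stated for orientation. \begin{ex} Let $X = \R^n,$ equipped with its nebulae diffeology. Taking $p = \operatorname{Id}_{\R^n}$ as a plot, every germ of path at $x$ is represented by some $\gamma \in C^\infty(\R,\R^n)$ with $\gamma(0)=x,$ and condition (\ref{id-germs}), applied with $p_3 = \operatorname{Id}_{\R^n},$ identifies two such paths exactly when their derivatives at $0$ coincide. Hence $$ {}^iT_x\R^n = \left\{ \gamma'(0) \, | \, \gamma \in C^\infty(\R,\R^n), \, \gamma(0)=x \right\} \cong \R^n,$$ and the same computation, carried out in charts, shows that the internal tangent cone of a finite dimensional manifold $M$ equipped with its nebulae diffeology recovers the usual tangent space $T_xM.$ \end{ex}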
\subsubsection{Differential forms and de Rham complex}
\begin{Definition} \cite{Sou} Let $(X,\p)$ be a diffeological space and let $V$ be a vector space equipped with a differentiable structure. A $V-$valued $n-$differential form $\alpha$ on $X$ (denoted $\alpha \in \Omega^n(X,V)$) is a map $$ \alpha : \{p:O_p\rightarrow X\} \in \p \mapsto \alpha_p \in \Omega^n(O_p;V)$$ such that

$\bullet$ Let $x\in X.$ $\forall p,p'\in \p$ such that $x\in Im(p)\cap Im(p')$, the forms $\alpha_p$ and $\alpha_{p'}$ are of the same order $n.$

$\bullet$ Moreover, let $y\in O_p$ and $y'\in O_{p'}.$ If $(X_1,...,X_n)$ are $n$ germs of paths in $Im(p)\cap Im(p'),$ and if there exist two systems of $n-$vectors $(Y_1,...,Y_n)\in (T_yO_p)^n$ and $(Y'_1,...,Y'_n)\in (T_{y'}O_{p'})^n$ with $p_*(Y_1,...,Y_n)=p'_*(Y'_1,...,Y'_n)=(X_1,...,X_n),$ then $$ \alpha_p(Y_1,...,Y_n) = \alpha_{p'}(Y'_1,...,Y'_n).$$

We denote by $$\Omega(X;V)=\oplus_{n\in \mathbb{N}} \Omega^n(X,V)$$ the set of $V-$valued differential forms. \end{Definition} With such a definition, we feel the need to make two remarks for the reader:

$\bullet$ If there do not exist $n$ linearly independent vectors $(Y_1,...,Y_n)$ defined as in the last point of the definition, then $\alpha_p = 0$ at $y.$

$\bullet$ Let $(\alpha, p, p') \in \Omega(X,V)\times \p^2.$ If there exists $g \in C^\infty(D(p); D(p'))$ (in the usual sense) such that $p' \circ g = p,$ then $\alpha_p = g^*\alpha_{p'}.$

\vskip 12pt \begin{Proposition} The set $\p(\Omega^n(X,V))$ made of the maps $q: x \mapsto \alpha(x)$ from an open subset $O_q$ of a finite dimensional vector space to $\Omega^n(X,V)$ such that, for each $p \in \p,$ $$\{ x \mapsto \alpha(x)_p \} \in C^\infty(O_q, \Omega^n(O_p,V)),$$ is a diffeology on $\Omega^n(X,V).$ \end{Proposition} Working on the plots of the diffeology, one can define the product and the differential of differential forms, which have the same properties as the product and the differential of differential forms on a smooth manifold. \begin{Definition} Let $(X,\p)$ be a diffeological space.

\noindent $\bullet$ $(X,\p)$ is \textbf{finite-dimensional} if and only if $$\exists n_0\in\mathbb{N},\quad \forall n\in \mathbb{N}, \quad n\geq n_0 \Rightarrow \operatorname{dim}(\Omega^n(X,\mathbb{R}))=0.$$ Then, we set $$\operatorname{dim}(X,\p)=\max\{n\in \mathbb{N}\, | \, \operatorname{dim}(\Omega^n(X,\mathbb{R}))>0\}.$$

\noindent $\bullet$ If not, $(X,\p)$ is called \textbf{infinite dimensional}. \end{Definition} Let us make a few remarks on this definition. If $X$ is a manifold with $\operatorname{dim}(X)=n$ (in the classical sense), the nebulae diffeology $\p_\infty(X)$ is such that $$\operatorname{dim}(X,\p_\infty(X))=n.$$ Now, if $(X,\F,\C)$ is the natural Fr\"olicher structure on $X,$ take the diffeology $\p_1$ generated by the maps of the type $c\circ g$, where $c\in \C$ and $g$ is a smooth map from an open subset of a finite dimensional space to $\mathbb{R}.$ It is an easy exercise (sketched below) to show that $$\operatorname{dim}(X,\p_1)=1.$$ This first point shows that the dimension depends on the diffeology considered.
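Let us make the announced exercise explicit; this is a one-line computation based on the second compatibility remark stated after the definition of differential forms. A plot of $\p_1$ is locally of the form $c \circ g,$ with $c \in \C$ and $g$ smooth into $\R,$ so for any $\alpha \in \Omega^n(X,\mathbb{R})$ with $n \geq 2$ we get $$ \alpha_{c \circ g} = g^{*}(\alpha_{c}), \qquad \alpha_{c} \in \Omega^{n}(\R,\mathbb{R}) = \{0\},$$ since $c$ itself is a 1-plot; hence $\alpha_{c\circ g} = 0,$ every $n-$form with $n \geq 2$ vanishes, and $\operatorname{dim}(X,\p_1)\leq 1.$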
Now, we remark that $\F$ is the set of smooth maps $(X,\p_1)\rightarrow \mathbb{R}.$ This leads to the following definition, since $\p_\infty(\F)$ is clearly the diffeology with the biggest dimension associated to $(X,\F,\C)$: \begin{Definition} The \textbf{dimension} of a Fr\"olicher space $(X,\F,\C)$ is the dimension of the diffeological space $(X,\p_\infty(\F)).$ \end{Definition} This definition totally fits with the following: \begin{Definition} Let $(X,\p)$ be a diffeological space which is not totally disconnected, that is, for which $\p$ has non-constant plots. We define the dimension of $(X,\p)$ as $$\operatorname{dim}(X,\p) = \min \left\{ n \in \N^* \, | \, \p_n = \p \right\}.$$ \end{Definition}

\subsubsection{(Pseudo-)Riemannian metrics on diffeological spaces} \label{s:rmdiff}
We now turn to the extension of the basic structures of Riemannian manifolds to diffeological spaces. \begin{Definition} \label{d:RM} Let $(X,\p)$ be a diffeological space. An \textbf{internal Riemannian metric} $g$ on $X$ (denoted $g \in Met(X)$) is a map $$ g: \{p:O_p\rightarrow X\} \in \p \mapsto g_p$$ such that

$\bullet$ $x \in O_p \mapsto g_p(x)$ is a (smooth) metric on $TO_p;$

$\bullet$ moreover, let $y\in O_p$ and $y'\in O_{p'}$ be such that $p(y)=p'(y').$ If $(X_1,X_2)$ is a couple of germs of paths in $Im(p)\cap Im(p'),$ and if there exist two couples of vectors $(Y_1,Y_2)\in (T_yO_p)^2$ and $(Y'_1,Y'_2)\in (T_{y'}O_{p'})^2$ with $p_*(Y_1,Y_2)=p'_*(Y'_1,Y'_2)=(X_1,X_2),$ then $$ g_p(Y_1,Y_2) = g_{p'}(Y'_1,Y'_2).$$

\noindent $(X,\p,g)$ is an \textbf{internal Riemannian diffeological space} if $g$ is an internal Riemannian metric on $(X,\p).$ \end{Definition} For any germ of paths $X$ we set $||X|| = \sqrt{g(X,X)}.$ This quantity is independent of the chosen plot of the diffeology. \begin{Definition} We call \textbf{arc length} the map $L : C^\infty([0;1], X) \rightarrow \mathbb{R}_+$ defined by $$L(\gamma) = \int_0^1 ||\dot\gamma(t)|| dt.$$ Let $(x, y) \in X^2.$ We define $$ d(x,y) = \inf \left\{ L(\gamma) \, | \, \gamma(0) = x \wedge \gamma(1)=y\right\}$$ and we call \textbf{Riemannian pseudo-distance} the map $d : X \times X \rightarrow \R_+$ that we have just described. \end{Definition} The following proposition justifies the term ``pseudo-distance'': \begin{Proposition} \cite{Ma2019} \begin{enumerate} \item \label{d1} $\forall x \in X, \, d(x,x) = 0;$ \item \label{d2} $\forall (x,y) \in X^2, \, d(x,y) = d(y,x);$ \item \label{d3} $\forall (x,y,z) \in X^3, \, d(x,z) \leq d(x,y) + d(y,z).$ \end{enumerate} \end{Proposition} One could wonder whether $d$ is a distance or not, i.e. whether we have the stronger property $$ \forall (x,y) \in X^2, \, d(x,y)=0 \Leftrightarrow x=y.$$ Unfortunately, examples arising from infinite dimensional geometry show that the pseudo-distance can vanish for $x \neq y:$ this is what is described for a weak Riemannian metric on a space of proper immersions in the work of Michor and Mumford \cite{MichMum}. Moreover, the D-topology is not the topology defined by the pseudo-distance $d.$ All these facts, which show that the situation on Riemannian Fr\"olicher spaces is very different from the one known on manifolds, are checked in the following (toy) example. \begin{rem} Let $Y = \coprod_{i \in \N^*} \R_i,$ where $\R_i$ is the $i-$th copy of $\R,$ equipped with its natural scalar product.
Let $\mathcal{R}$ be the equivalence relation $$x_i \mathcal{R} x_j \Leftrightarrow \left\{ \begin{array}{l}(x_i \notin ]0;\frac{1}{i}[ \wedge x_j \notin ]0;\frac{1}{j}[) \Rightarrow \left\{ \begin{array}{lr} x_i = x_j & \hbox{if } x_i\leq 0 \\ x_i + 1- \frac{1}{i} = x_j + 1 - \frac{1}{j} & \hbox{ if } x_i \geq \frac{1}{i} \end{array} \right. \\ (x_i \in ]0;\frac{1}{i}[ \vee x_j \in ]0;\frac{1}{j}[) \Rightarrow i = j \wedge x_i = x_j \end{array} \right. $$ Let $X = Y / \mathcal{R}.$ This is obviously a 1-dimensional Riemannian diffeological space. Let $\bar{0}$ be the class of $0 \in\R_1$ and let $\bar{1}$ be the class of $1 \in \R_1.$ Then $d(\bar{0}, \bar{1}) = 0.$ This shows that $d$ is not a distance on $X.$ In the $D-$topology, $\bar{0}$ and $\bar{1}$ respectively have the disjoint neighborhoods $$U_{\bar{0}} = \left\{ \bar{x_i} \, | \, x_i < \frac{1}{2i} \right\}$$ and $$U_{\bar{1}} =\left\{\bar{x_i} \, | \, x_i > \frac{1}{2i} \right\}.$$ This shows that $d$ does not define the $D-$topology. \end{rem}

\subsection{Group of diffeomorphisms and linear group: two examples of diffeological groups}
Let us start with the generalization of the notion of diffeomorphism \cite{Sou}. \begin{Definition} Let $(X,\p)$ be a diffeological space. A (set-theoretic) bijection $f$ of $X$ is a \emph{diffeomorphism} if both $f$ and $f^{-1}$ are smooth. We denote by $\operatorname{Diff}(X)$ the group of diffeomorphisms of $X.$ \end{Definition} \begin{Proposition} Let us equip $C^\infty(X,X)$ with its functional diffeology $\p.$ We define the maps $$ i : f \in \operatorname{Diff}(X) \mapsto f\in C^\infty(X,X)$$ and $$ (.)^{-1} : f \in \operatorname{Diff}(X) \mapsto f^{-1}\in C^\infty(X,X).$$ Then $(\operatorname{Diff}(X), i^*(\p)\cap ((.)^{-1})^*(\p))$ is a diffeological group. \end{Proposition} \begin{rem} When $X$ is a compact boundaryless manifold, the group $\operatorname{Diff}(X)$ is an open submanifold of $C^\infty(X,X)$ in the sense of e.g. \cite{Om}, which we will make precise later. We anticipate this construction in order to point out here that, in general, $\operatorname{Diff}(X)$ does \emph{not} carry the subset diffeology inherited from $C^\infty(X,X),$ whereas it does when $X$ is a compact boundaryless manifold, which is the most widely studied setting in the current literature. \end{rem} Let us now assume that $X$ is a diffeological vector space whose field of scalars is a diffeological field $\mathbb{K}.$ \begin{Proposition} \begin{enumerate} \item The space $\mathcal{L}(X)$ of smooth linear maps from $X$ to $X$ is an algebra with smooth addition, smooth multiplication and smooth scalar multiplication. \item The group of invertible elements of $\mathcal{L}(X),$ which we denote by $GL(X),$ is a diffeological subgroup of $\operatorname{Diff}(X).$ \end{enumerate} \end{Proposition} \begin{rem} \label{hypo} Following the recent review \cite{Glo2022}, to which we refer for a complete exposition, on topological vector spaces that are not normed spaces, multilinear mappings are merely hypocontinuous, and not continuous. This is the case, when $E$ and $F$ are locally convex complete topological vector spaces, for the evaluation map $$\mathcal{L}(E,F) \times E \rightarrow F.$$ Switching to the (subset) functional diffeology of $\mathcal{L}(E,F),$ where $E$ and $F$ are understood as diffeological vector spaces, the same evaluation map remains smooth. \end{rem}
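To support this last claim with an elementary computation (stated here as a mere verification), note that for any diffeological spaces $E$ and $F$ the evaluation map $\operatorname{ev}: C^\infty(E,F) \times E \rightarrow F$ is smooth for the functional diffeology: given a plot $(\rho, q)$ of the product diffeology, $$\operatorname{ev} \circ (\rho,q) : y \mapsto \rho(y)(q(y))$$ is the composition of the (smooth) diagonal map $y \mapsto (y,y)$ with the map $(x,y) \mapsto \rho(y)(q(x)),$ which is a plot of $F$ by the very definition of the functional diffeology.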
Following Iglesias-Zemmour, see \cite{Igdiff}, we do not assert that arbitrary diffeological groups have associated Lie algebras; however, the following holds, see \cite[Proposition 1.6.]{Les} and \cite[Proposition 2.20]{MR2016}. \begin{Proposition} \label{Les} Let $G$ be a diffeological group. Then the internal tangent cone at the neutral element, ${}^iT_eG,$ is a diffeological vector space. \end{Proposition} \noindent The proof of Proposition \ref{Les} appearing in \cite{MR2016} uses explicitly the diffeologies $\p_1$ and $\p_\infty$ discussed above. \begin{Definition} The diffeological group $G$ is a \textbf{diffeological Lie group} if and only if the adjoint action of $G$ on the diffeological vector space ${}^iT_eG$ defines a smooth Lie bracket; that is, denoting by $c$ a path in $C^\infty(\R,G)$ such that $c(0)=e$ and $\partial_t c(0) = X,$ and by $Y$ an element of the internal tangent space ${}^iT_eG,$ \begin{itemize} \item $\partial_t\left(\operatorname{Ad}_{c(t)}Y\right)|_{t=0} = [X,Y]\in {}^iT_eG,$ \item $(X,Y)\mapsto [X,Y]$ is smooth. \end{itemize} In this case, we call ${}^iT_eG$ the Lie algebra of $G$ and we denote it by $\mathfrak{g}.$ \end{Definition} The question of criteria for a diffeological group to be a diffeological Lie group has been addressed in \cite{Les} for diffeological groups, and in \cite{Lau2011} for Fr\"olicher groups. \begin{rem} Actually, there is no known example of a diffeological group which is not a diffeological Lie group. \end{rem} The basic properties of the adjoint and coadjoint actions and of the Lie bracket remain, for diffeological Lie groups, globally the same as in the case of finite-dimensional Lie groups, and the proofs are similar: see \cite{Les} and \cite{DN2007-1} for details.

\subsection{On various tangent spaces}\label{s:varTX}
After our exposition of the construction of the internal tangent cone, we describe other existing generalizations of the notion of tangent space for a given diffeological space $X.$ Indeed, for a finite dimensional manifold, tangent spaces carry so many technical aspects of interest that the notion of tangent space can be generalized in many ways. Some of these generalizations turn out to be non-equivalent, while there is actually a shared notion of differential form and of cotangent space of a diffeological space. Let us briefly describe various notions of tangent space for a diffeological space, and let us compare them.

\subsubsection{The external tangent space}
Let us first adapt a classical algebraic construction to the diffeological category. The set of (diffeological) derivations on a diffeological space $X,$ denoted by $\mathfrak{der}(X),$ is the space of smooth maps from $C^\infty(X,\R)$ to itself which satisfy the Leibniz rule. In the same way, the (diffeological) pointwise derivations on $X$ are the smooth maps $d$ from $C^\infty(X,\R)$ to $\R$ such that $$ \exists x \in X, \forall (f,g)\in (C^\infty(X,\R))^2, \, d(fg) = f(x) d(g) + g(x) d(f).$$ \begin{Definition} The external tangent space ${}^eTX$ is defined as the set of diffeological pointwise derivations on $X.$ \end{Definition} As a set of smooth maps, it can be endowed with the functional diffeology, and, fixing the point $x \in X,$ one defines ${}^eT_xX \subset {}^eTX.$

\subsubsection{At the intersection of the internal and the external tangent spaces}\label{s:ieTX}
This approach is the one chosen in \cite{Ma2013,Ma2018-2,MR2016}, and refined in \cite{GW2020}.
This leads to two objects of different types: a cone \cite{Ma2013,Ma2018-2,MR2016}, which can be completed into a vector space \cite{GW2020} following exactly the same procedure as the one applied in \cite{CW}. This last construction is not described here: we do not aim at exhaustivity in our presentation. The main idea behind the definition of this tangent space in \cite{Ma2013,Ma2018-2,MR2016,GW2020} is based on the definition of a tangent space on smooth manifolds through paths. More precisely, we consider the set of all paths that map $ 0 $ to $ x $ and identify two of these paths with each other if their derivatives coincide at $ 0 $. Therefore, an element in $ T_xX $ is an equivalence class $ [c] $ for $ c\colon \mathbb{R} \rightarrow X $ such that $ c(0) = x $. The directional derivative of a function $ f\colon X \rightarrow \mathbb{R} $ in direction $ v $ is given by the derivative $ \tfrac{\partial (f \circ c)(t) }{\partial t}_{\mid_{t = 0}} $ for $ c\colon \mathbb{R} \rightarrow X $ such that $ c(0) = x $ and $ c '(0) = v $. This approach is then applied to an arbitrary diffeological space to produce a tangent cone in the following way: \begin{Definition} For each $x\in X,$ we consider $$ C_{x}=\{c \in C^\infty(\R,X)\, | \, c(0) = x\} $$ and we take the equivalence relation $\mathcal{R}$ given by $$ c\mathcal{R}c' \Leftrightarrow \forall f \in C^\infty(X,\R), \partial_t(f \circ c)|_{t = 0} = \partial_t(f \circ c')|_{t = 0}. $$ \end{Definition} Equivalence classes of $\mathcal{R}$ are called {\bf germs} in \cite{Ma2013,Ma2018-2,MR2016} and are denoted by $\partial_t c(0)$ or $\partial_tc(t)|_{t=0}$. In the same references, the {\bf internal tangent cone} at $x$ is the quotient $C_x / \mathcal{R}.$ If $u = \partial_tc(t)|_{t=0} \in C_x / \mathcal{R},$ we define the pointwise derivation $f \mapsto Df(u) = \partial_t(f \circ c)|_{t = 0}\, .$ The reader may compare this definition to the one appearing in \cite{KM} for manifolds in the ``convenient'' $c^\infty-$setting. The internal tangent cone defined in section \ref{s:internal tangent} trivially differs from this one, which is also a subset of ${}^eT_xX.$ We denote this ``internal-external'' tangent cone by ${C}^{ie}T_xX.$ \begin{ex}\label{ex:cross} Let $$X= \left\{(x,y) \in \R^2 \, | \, xy=0\right\},$$ equipped with its subset diffeology. Then, following \cite{Ma2020-3}, $$C^{ie}T_0X = X,$$ which shows that the internal-external tangent cone is not in general a vector space. \end{ex} \begin{ex} \label{spagh2} Let us consider the example, given in \cite{CW2022}, of $\R^2$ equipped with the 1-dimensional diffeology $\p_1(\R^2),$ called the spaghetti diffeology in the above reference. For reasons similar to the ones given in example \ref{spagh}, the internal tangent space of $\R^2$ equipped with the diffeology $\p_1(\R^2)$ is infinite dimensional, with uncountable dimension, while, as a consequence of Boman's theorem, ${C}^{ie}T_x\R^2$ is a 2-dimensional vector space. \end{ex} Following \cite{GW2020}, we introduce the following terminology: \begin{Definition} \label{def:PathDerivative} Let $X$ be a diffeological space and $c \in C_x$. The path derivative in direction $c$ is defined by \begin{align*} \mathrm{d}_{c}\colon C^{\infty}(X,\mathbb{R}) \rightarrow \mathbb{R},\, f \mapsto \mathrm{d}_{c}(f) := \dfrac{\partial}{\partial t}\left(f(c(t))\right)_{\mid_{t=0}}\in \mathbb{R}. \end{align*} One calls $\mathrm{d}_{c}(f) $ the \emph{path derivative} of the function $f\in C^{\infty}(X,\mathbb{R})$ in direction $c$. \end{Definition}
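As a sanity check, and only as an illustration, consider $X = \R^n$ with its nebulae diffeology. For $c \in C_x$ and $f \in C^{\infty}(\R^n,\mathbb{R}),$ the chain rule gives $$ \mathrm{d}_{c}(f) = \frac{\partial}{\partial t}\left(f(c(t))\right)_{\mid_{t=0}} = \langle \nabla f(x), c'(0)\rangle,$$ so that $\mathrm{d}_{c}$ only depends on $c'(0),$ and the path derivatives recover the usual directional derivatives.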
As explained in Example \ref{ex:cross}, see e.g. \cite{GW2020}, ${C}^{ie}T_xX$ does not carry a natural diffeological vector space structure. This motivates the next tangent space definition. \begin{Definition} \label{def:TxX} Let $X$ be a diffeological space and $x \in X$. The internal-external tangent space ${}^{ie}T_xX$ is given by \begin{align*} {}^{ie}T_xX := \operatorname{span}\left({C}^{ie}T_xX\right). \end{align*} \end{Definition} \begin{rem} The set ${C}^{ie}T_xX$ is denoted by $C_xX$ and is called the tangent cone in \cite{GW2020}. Due to the multiple definitions of tangent objects in this work, we feel it necessary to use the notation ${C}^{ie}T_xX,$ because simplified notations would surely be misleading. \end{rem} Regarding the diffeologies on ${C}^{ie}T_xX$ and ${}^{ie}T_xX,$ the following point of view is considered in \cite{GW2020} in order to obtain a \emph{diffeological tangent space} from the set ${C}^{ie}T_xX$: \begin{itemize} \item An element in ${}^{ie}T_xX$ is a finite sum of elements in ${C}^{ie}T_xX$ that are multiplied by a scalar. \item The diffeology of ${C}^{ie}T_xX$ is the push-forward diffeology through the map $$c \in C_x \mapsto d_c \in {C}^{ie}T_xX,$$ and a plot in ${}^{ie}T_xX$ is a mapping $$ U\to {}^{ie}T_xX, \, u \mapsto \sum_{i=1}^{+\infty}\lambda_i(u)p_i(u), $$ where $U\subset\mathbb{R}^n$ denotes an open subset with $n \in \mathbb{N}$, each $\lambda_i\colon U \rightarrow\mathbb{R}$ is smooth, the sequence $\lambda = (\lambda_1, \cdots , \lambda_i , \cdots )$ is a plot of the subset diffeology of $$ \R^\infty = \bigcup_{k \in \N^*} \R^k \subset \R^\N,$$ and each $p_i$ is a plot in ${C}^{ie}T_xX$ with domain $U$. \end{itemize} The diffeology introduced on ${}^{ie}T_xX$ is the so-called \emph{weak diffeology} defined in \cite{vincent}. Equipped with the weak diffeology, ${}^{ie}T_xX$ becomes the finest possible diffeological vector space such that, with a slight abuse of notation, $$ \p_{{C}^{ie}T_xX} \subset \p_{{}^{ie}T_xX}.$$

\subsubsection{The diff-tangent space} \label{s:dTX}
We now extend the construction of \cite{Ma2020-3} on Fr\"olicher spaces to (general) diffeological spaces. \begin{Definition} \label{def:dtangent} We use here the notations introduced for the definition of the internal tangent cone in section \ref{s:internal tangent}. Let ${}^{d}T_xX$ be the subset of ${}^iT_xX$ defined by $$ {}^dT_xX = {}^dC_x /\mathcal{R}$$ with $${}^dC_x =\left\{ c \in C_x \, | \, \exists \gamma \in C^\infty(\R,\operatorname{Diff}(X)), \, c(.)=\gamma(.)(x) \hbox{ and } \gamma(0) = Id_X \right\}.$$ \end{Definition} Through this definition, ${}^dT_xX$ is intrinsically linked with the tangent space at the identity ${}^iT_{Id_X}\operatorname{Diff}(X)$ described in \cite{Les} for any diffeological group (i.e. a group equipped with a diffeology which makes composition and inversion smooth). \begin{rem} \label{rq24} Let $\gamma \in C^\infty(\R,\operatorname{Diff}(X))$ be such that $\gamma(0)(x)=x.$ Then $\lambda(t) = \left((\gamma(0))^{-1} \circ \gamma(t)\right)(x)$ defines a smooth path $\lambda \in {}^dC_x.$ Consequently, $${}^dC_x =\left\{ c \in C_x \, | \, \exists \gamma \in C^\infty(\R,\operatorname{Diff}(X)), \, c(.)=\gamma(.)(x) \hbox{ and } \gamma(0)(x) = x \right\}.$$ \end{rem}

\subsection{Fiber bundles, principal bundles and vector bundles in diffeologies}
Let us first have a precise look at the notion of fiber bundle in the classical (finite dimensional) setting.
Fiber bundles, in the context of smooth finite dimensional manifolds, are defined by: \begin{itemize} \item a smooth manifold $E$ called the total space; \item a smooth manifold $X$ called the base space; \item a smooth submersion $\pi: E \rightarrow X$ called the fiber bundle projection; \item a smooth manifold $F$ called the typical fiber, because $\forall x \in X, \, \pi^{-1}(x)$ is a smooth submanifold of $E$ diffeomorphic to $F;$ \item a smooth atlas on $X,$ with domains $U \subset X$ such that $\pi^{-1}(U)$ is an open submanifold of $E$ diffeomorphic to $U \times F.$ We then get a system of local trivializations of the fiber bundle. \end{itemize} Hence, in order to be complete, a smooth fiber bundle should be the quadruple data $(E,X,F,\pi)$ (since the definition of $\pi$ and of $X$ makes it possible to recover systems of local trivializations). For short, this quadruple setting is often denoted by the projection map $\pi: E\rightarrow X.$ There exist some diffeological spaces which carry no atlas, so that the condition of having a system of smooth trivializations is not a priori necessary in a generalization of the notion of fiber bundle, even if this additional condition enables interesting technical aspects \cite[pages 194-195]{MW2017}. Hence, in a general setting, we do not need to assume the existence of local trivializations. Now, following \cite{pervova}, in which the ideas from \cite[last section]{Sou} have been developed for vector spaces, the notion of quantum structure had been introduced in \cite{Sou} as a generalization of principal bundles, and the notion of vector pseudo-bundle in \cite{pervova}. The common idea consists in the description of fibered objects made of a total (diffeological) space $E,$ over a diffeological space $X,$ with a canonical smooth bundle projection $\pi: E \rightarrow X$ such that, $\forall x \in X,$ $\pi^{-1}(x)$ is endowed with a (smooth) algebraic structure, but for which we do not assume the existence of a system of local trivializations. \begin{enumerate} \item For a diffeological vector pseudo-bundle, the fibers $\pi^{-1}(x)$ are assumed to be diffeological vector spaces, i.e. vector spaces where addition and multiplication over a diffeological field of scalars (e.g. $\R$ or $\mathbb{C}$) are smooth. We notice that \cite{pervova} only deals with finite dimensional vector spaces. \item For a so-called ``structure quantique'' (i.e. ``quantum structure''), following the terminology of \cite{Sou}, a diffeological group $G$ acts on the right, smoothly and freely, on a diffeological space $E$. The space of orbits $X=E/G$ defines the base of the quantum structure $\pi: E \rightarrow X,$ which generalizes the notion of principal bundle by not assuming the existence of local trivializations. In this picture, each fiber $\pi^{-1}(x)$ is isomorphic to $G.$ \end{enumerate} From these two examples, we can generalize the picture. \begin{Definition}\label{pseu-fib} Let $E$ and $X$ be two diffeological spaces and let $\pi:E\rightarrow X$ be a smooth surjective map. Then $(E,\pi,X)$ is a \textbf{diffeological fiber pseudo-bundle} if and only if $\pi$ is a subduction. \end{Definition} Let us stress that we do not assume that there exists a typical fiber, in coherence with Pervova's diffeological vector pseudo-bundles. We can give the following definitions: \begin{Definition} Let $\pi:E\rightarrow X$ be a diffeological fiber pseudo-bundle. Then: \begin{enumerate} \item Let $\mathbb{K}$ be a diffeological field.
$\pi:E\rightarrow X$ is a \textbf{diffeological $\mathbb{K}-$vector pseudo-bundle} if there exist: \begin{itemize} \item a smooth fiberwise map $.\, :\mathbb{K} \times E \rightarrow E,$ \item a smooth fiberwise map $+:E^{(2)} \rightarrow E,$ where $$E^{(2)} = \coprod_{x \in X} \{(u,v) \in E^2\, | \, (u,v)\in (\pi^{-1}(x))^2\}$$ is equipped with the pull-back diffeology of the canonical map $E^{(2)} \rightarrow E^2,$ \end{itemize} such that $\forall x \in X, $ $(\pi^{-1}(x),+,.)$ is a diffeological $\mathbb{K}-$vector space. \item $\pi:E\rightarrow X$ is a \textbf{diffeological gauge pseudo-bundle} if there exist: \begin{itemize} \item a smooth fiberwise involutive map ${(.)}^{-1}: E \rightarrow E,$ \item a smooth fiberwise map $.\,:E^{(2)} \rightarrow E,$ \end{itemize} such that $\forall x \in X, $ $(\pi^{-1}(x),\, .\, )$ is a diffeological group with inverse map $(.)^{-1}.$ \item $\pi:E\rightarrow X$ is a \textbf{diffeological principal pseudo-bundle} if there exists a diffeological gauge pseudo-bundle $\pi': E' \rightarrow X$ such that, considering $$E\times_X E' = \coprod_{x \in X} \{(u,v)\, | \, (u,v)\in \pi^{-1}(x)\times \pi'^{-1}(x)\},$$ equipped with the pull-back diffeology of the canonical map $E\times_X E' \rightarrow E\times E',$ there exists a smooth map $E\times_X E' \rightarrow E$ which restricts fiberwise to a smooth free and transitive right-action $$\pi^{-1}(x) \times \pi'^{-1}(x) \rightarrow \pi^{-1}(x).$$ \item $\pi:E\rightarrow X$ is a \textbf{Souriau quantum structure} if it is a diffeological principal pseudo-bundle with diffeological gauge (pseudo-)bundle $X\times G \rightarrow X.$ \end{enumerate} \end{Definition} We now specialize to elementary classes of fiber bundles.

\subsubsection{Tangent bundles}
The first motivating examples for these constructions were, of course, the tangent cones and the tangent spaces, which are all fiber pseudo-bundles and sometimes vector pseudo-bundles. Indeed, analyzing the constructions in sections \ref{s:internal tangent}, \ref{s:ieTX} and \ref{s:dTX}, everything starts with a set of paths, generically denoted here by $C_x(X),$ which generates the tangent cone at $x,$ generically denoted here by $CT_xX,$ together with its diffeology, through the quotient by an equivalence relation $\mathcal{R}_x.$ Let us now consider the \textbf{total} tangent space (in fact, tangent cone in general) $$ CTX = \coprod_{x \in X} CT_xX,$$ for which basic intuition asserts that there should exist a much better diffeology than the coproduct diffeology, where the fibers are arcwise disconnected from each other. For the construction of this desired diffeology in the context of the internal tangent cone (section \ref{s:internal tangent}) and of the internal-external tangent cone (section \ref{s:ieTX}), one has only to remark that the evaluation map at $t=0$ defines a fiber pseudo-bundle projection $$ \operatorname{ev}_0 : C^\infty(\R,X) \rightarrow X$$ which is a subduction, and such that $$\forall x \in X, \, \operatorname{ev}_0^{-1}(x)=C_x(X).$$ The fiberwise equivalence relations $\mathcal{R}_x$ define a global equivalence relation $\mathcal{R}$ on $C^\infty(\R,X)$ for which $$CTX = C^\infty(\R,X) / \mathcal{R},$$ which enables us to define a global quotient diffeology.
We now specialize to elementary classes of fiber pseudo-bundles. \subsubsection{Tangent bundles} The first motivating examples for these constructions were, of course, the tangent cones and the tangent spaces, which are all fiber pseudo-bundles and sometimes vector pseudo-bundles. Indeed, analyzing the constructions in sections \ref{s:internal tangent}, \ref{s:ieTX} and \ref{s:dTX}, everything starts with a set of paths, which we denote here generically by $C_x(X),$ which generates the tangent cone at $x,$ denoted here generically by $CT_xX,$ together with its diffeology obtained as the quotient by an equivalence relation $\mathcal{R}_x.$ Let us now consider the \textbf{total} tangent space (in fact, tangent cone in general) $$ CTX = \coprod_{x \in X} CT_xX,$$ for which basic intuition asserts that there exists a much better diffeology than the coproduct diffeology, in which the fibers are arcwise disconnected from each other. For the construction of this desired diffeology in the context of the internal tangent cone (section \ref{s:internal tangent}) and of the internal-external tangent cone (section \ref{s:ieTX}), one has only to remark that the evaluation map at $t=0$ defines a fiber pseudo-bundle projection $$ \operatorname{ev}_0 : C^\infty(\R,X) \rightarrow X$$ which is a subduction, and such that $$\forall x \in X, \operatorname{ev}_0^{-1}(x)=C_x(X).$$ The fiberwise equivalence relations $\mathcal{R}_x$ define a global equivalence relation $\mathcal{R}$ on $C^\infty(\R,X)$ for which $$CTX = C^\infty(\R,X) / \mathcal{R},$$ which enables us to define a global quotient diffeology. Therefore, one can define, from the kinetic tangent cone at $x \in X,$ the global internal tangent cone of $X$ and its diffeology in the following way: \begin{Definition} The internal or kinetic tangent cone of $X$ is defined as $${}^iTX = \coprod_{x \in X} {}^iT_xX .$$ The space ${}^iTX$ is endowed with the push-forward of the functional diffeology on $C^\infty(\R,X).$ \end{Definition} The same definition holds for the total internal-external tangent cone $C^{ie}TX,$ by replacing the equivalence relations. Considering ${}^dTX,$ one has also to replace $C^\infty(\R,X)$ in the definitions by its subset $$\coprod_{x \in X} {}^d C_x,$$ which is endowed with its subset diffeology. \begin{Definition} Let $X$ be a Fr\"olicher space. We define $$ {}^dTX = \coprod_{x \in X} {}^dT_xX,$$ the diff-tangent bundle of $X.$ \end{Definition} We easily obtain the following observations: \begin{Proposition} \label{prop:pdiff} Let $(X,\p)$ be a reflexive diffeological space, and let $\p_{\operatorname{Diff}}$ be the functional diffeology on $\operatorname{Diff}(X).$ \begin{enumerate} \item \label{dt1} There exists a diffeology $\p(\operatorname{Diff}) \subset \p$ which is generated by the family of push-forward diffeologies $$ \left\{ (\operatorname{ev}_x)_{*}(\p_{\operatorname{Diff}}) \, | \, x \in X \right\}.$$ \item \label{dt2} $\forall x \in X, {}^dT_xX$ is the internal tangent cone of $(X,\p(\operatorname{Diff}))$ at $x.$ \item \label{dt3} $\forall x \in X, {}^dT_xX$ is a diffeological vector space. \item \label{dt4} The total diff-tangent space $${}^{d}TX = \coprod_{x \in X} {}^{d}T_xX \subset {}^{i}TX $$ is a vector pseudo-bundle for the subset diffeology inherited from ${}^{i}TX,$ and also for the diffeology of the internal tangent space of $(X,\p(\operatorname{Diff})).$ \end{enumerate} \end{Proposition} \begin{proof} (\ref{dt1}) is a consequence of the definition of push-forward diffeologies, in the following way: the family $$\{ \p' \hbox{ diffeology on } X \, | \, \forall x \in X, \, (\operatorname{ev}_x)_{*}(\p_{\operatorname{Diff}}) \subset \p'\}$$ has a minimal element by Zorn's lemma. (\ref{dt2}) follows from Remark \ref{rq24}. (\ref{dt3}): The diffeology $\p(\operatorname{Diff})$ coincides with the diffeology made of plots which are locally of the form $\operatorname{ev}_x \circ p,$ where $x \in X$ and $p$ is a plot of the diffeology of $\operatorname{Diff}(X).$ We have that $^{i}T_{Id}\operatorname{Diff}(X)$ is a diffeological vector space, following \cite{Les}. This follows from the differentiation of the multiplication of the group: given two paths $\gamma_1, \gamma_2$ in $C^\infty(\R,\operatorname{Diff}(X))$ with $\gamma_1(0)=\gamma_2(0)=Id,$ if $X_i = \partial_t\gamma_i(0)$ for $i \in \{1,2\},$ then $$ X_1 + X_2 = \partial_t (\gamma_1 . \gamma_2) (0).$$ Reading plots locally in $\p(\operatorname{Diff}),$ we can consider only plots of the form $\operatorname{ev}_x \circ p,$ where $p$ is a plot in $\operatorname{Diff}(X)$ such that $p(0) = Id_X.$ In this way the vector space structure on $^{d}T_xX$ is inherited from $^{i}T_{Id}\operatorname{Diff}(X)$ via evaluation maps. In order to finish checking (\ref{dt3}), we prove $(\ref{dt4})$ directly by describing its diffeology.
For this, we consider $$C^\infty_0(\R, \operatorname{Diff}(X)) = \left\{ \gamma \in C^\infty(\R, \operatorname{Diff}(X)) \, | \, \gamma(0)=Id_X \right\}.$$ Let $^{d}C = \coprod_{x \in X} {}^{d}C_x.$ The total evaluation map \begin{eqnarray*} \operatorname{ev} : & X \times C^\infty_0(\R, \operatorname{Diff}(X)) \rightarrow & ^{d}C \\ & (x,\gamma) \mapsto & \operatorname{ev}_x \circ \gamma \end{eqnarray*} is fiberwise (over $X$) and onto. In this way we get a diffeology on $ ^{d}C$ which is the push-forward diffeology of $X \times C^\infty_0(\R, \operatorname{Diff}(X))$ by $\operatorname{ev}.$ Passing to the quotient, we get a diffeology on $^{d}TX$ which trivially makes each fiber $^{d}T_xX$ a diffeological vector space. \end{proof}

Therefore, the diff-tangent space is a vector pseudo-bundle, while one supplementary effort is necessary to generate a diffeology on ${}^{ie}TX$. For this, let us consider the map $$\begin{array}{ccccc}l & : & C^\infty(X ,\R^\infty) \times C^{ie}TX^\N & \rightarrow & {}^{ie}TX \\ && (\lambda_1,\cdots, \lambda_k , \cdots ) \times (d_{c_1}, \cdots , d_{c_k}, \cdots ) & \mapsto & \sum_{k = 1}^{+ \infty} \lambda_k d_{c_k} \end{array} $$ which produces the desired diffeology on ${}^{ie}TX$ by push-forward. \subsubsection{Riemannian metrics on vector pseudo-bundles} \begin{Definition} \label{d:RMbundle} Let $\pi:E \rightarrow X$ be a real diffeological vector pseudo-bundle. A \textbf{Riemannian metric} on $E$ is a smooth map $$ g : E^{(2)} \rightarrow \R$$ that is fiberwise, on each $E_x,$ a symmetric positive definite bilinear form. \end{Definition} We have to point out here a first group of difficulties which are not present in the context of (classical) finite dimensional manifolds: \begin{itemize} \item The metric $g$ may not define an isomorphism between each fiber $E_x$ and its diffeological dual $\mathcal{L}(E_x,\R).$ \item The fibers of $E$ may not be isomorphic. \end{itemize} As a first class of examples, one can mention the vector pseudo-bundle ${}^dTX$ when $X$ is equipped with an internal Riemannian metric (see section \ref{s:rmdiff}), but one can wonder whether tangent cones can also carry Riemannian structures without embedding into a larger Riemannian vector pseudo-bundle. To our present state of knowledge, the only known such structure is described in section \ref{s:rmdiff}.
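To illustrate the second difficulty, we can return to the toy pseudo-bundle introduced earlier; again, this is only an elementary sketch of ours.

\begin{ex} Let $E = \{(x,y) \in \R^2 \, | \, xy=0\}$ with $\pi(x,y)=x$ as before. The map $g: E^{(2)} \rightarrow \R,$ $((x,u),(x,v)) \mapsto uv,$ is smooth, and on each fiber it is symmetric and positive definite: $\pi^{-1}(0) \cong \R$ carries the standard scalar product, while the positivity condition is vacuous on the zero fibers $\pi^{-1}(x),$ $x \neq 0.$ Hence $g$ is a Riemannian metric in the sense of Definition \ref{d:RMbundle} on a pseudo-bundle with non-isomorphic fibers. \end{ex}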
\subsubsection{Diffeological principal bundles and connections} In \cite[Article 8.32]{Igdiff}, Iglesias-Zemmour gives a definition of a connection on a principal $G$-bundle in terms of paths on the total space $P,$ generalising the classical notion of path lifting for principal bundles with finite-dimensional Lie groups as structure groups. \begin{Definition}\label{d:iz-connection} Let $G$ be a diffeological group, and let $\pi\colon P\to X$ be a principal $G$-bundle. Denote by $\pathloc(P)$ the diffeological space of \textbf{local paths} (see \cite[Article 1.63]{Igdiff}), and by $\operatorname{tpath}(P)$ the \textbf{tautological bundle of local paths} $$\operatorname{tpath}(P):=\{(\gamma,t)\in\pathloc(P)\times\R\mid t\in D(\gamma)\}.$$ A \textbf{diffeological connection} is a smooth map $H \colon \operatorname{tpath}(P)\to\pathloc(P)$ satisfying the following properties for any $(\gamma,t_0)\in \operatorname{tpath}(P)$: \begin{enumerate} \item the domain of $\gamma$ equals the domain of $H(\gamma,t_0)$, \item $\pi\circ\gamma=\pi\circ H(\gamma,t_0)$, \item $H(\gamma,t_0)(t_0)=\gamma(t_0)$, \item $H(\gamma\cdot g,t_0)=H(\gamma,t_0)\cdot g$ for all $g\in G$, \item $H(\gamma\circ f,s)=H(\gamma,f(s))\circ f$ for any smooth map $f$ from an open subset of $\R$ into $D(\gamma)$, \item $H(H(\gamma,t_0),t_0)=H(\gamma,t_0)$. \end{enumerate} \end{Definition} Another formulation of this definition can be found in \cite{Ma2013} under the terminology of \textbf{path-lifting}. \begin{rem}\label{r:iz-connection} Diffeological connections satisfy many of the usual properties that classical connections on a principal $G$-bundle (where $G$ is a finite-dimensional Lie group) enjoy; in particular, they admit unique horizontal lifts of paths in $\pathloc(M)$ \cite[Article 8.32]{Igdiff}, and they pull back by smooth maps \cite[Article 8.33]{Igdiff}. \end{rem} \begin{Proposition} Let $V$ be a vector space. Then $G$ acts smoothly from the right on the space $\Omega(P,V)$ of $V$-valued differential forms on $P$ by setting $$ \forall (\alpha,g) \in \Omega^n(P,V) \times G, \forall p\in \p(P), \quad (g_*\alpha)_{g.p} = \alpha_p \circ (dg^{-1})^n \; . $$ \end{Proposition} \begin{proof} $G$ acts smoothly on $P$ so that, if $p \in \p(P),$ then $g.p \in \p(P)$. The right action is thus well-defined, and smoothness is straightforward. \end{proof} \begin{Definition} Let $\alpha \in \Omega(P;\mathfrak{g}).$ The differential form $\alpha$ is \textbf{right-invariant} if and only if, for each $p \in \p(P)$ and for each $g \in G,$ $$\alpha_{g.p} = Ad_{g^{-1}} \circ g_*\alpha_p \; .$$ \end{Definition} Now, let us turn to connections and holonomy. Let $p \in P$ and let $\gamma$ be a smooth path in $P$ starting at $p.$ \begin{Definition} A \textbf{connection} on $P$ is a $\mathfrak{g}$-valued right-invariant $1$-form $\theta$ such that, for each $ v \in \mathfrak{g},$ for any path $c : \R \rightarrow G$ such that $$ \left\{\begin{array}{ccr} c(0)& = & e_G\\ \partial_tc(t)|_{t=0}&= & v \; \; ,\end{array} \right. $$ and for each $p \in P,$ we have $$\theta(\partial_t(p.c(t))_{t = 0})=v \; .$$ \end{Definition} Now we assume that $\dim(M)\geq 2$ and we fix a connection $\theta$ on $P.$ \begin{Definition} Let $\alpha \in \Omega(P;\mathfrak{g})$ be a $G$-invariant $1$-form. Let $\nabla \alpha = d\alpha - {\frac{1}{ 2}}[\theta,\alpha]$ be the horizontal derivative of $\alpha.$ The curvature $2$-form induced by $\theta$ is $$ \Omega = \nabla \theta \; .$$ \end{Definition}
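As a quick consistency check of these formulas (a classical observation, stated here under our reading of the above conventions), note that if $G$ is abelian then all brackets vanish, so that $$\Omega = \nabla\theta = d\theta - \frac{1}{2}[\theta,\theta] = d\theta,$$ which is the familiar expression for the curvature of a connection $1$-form on a principal bundle with structure group $U(1).$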
\section{Diffeologies in functional equations}\label{diffeq} \subsection{Principal bundles, fully regular Lie groups and holonomy} \label{s:reg} \subsubsection{Motions on groups and (partial) differential equations} A wide class of first order differential equations is usually solved globally by the integration of smooth paths $v$ in a Lie algebra into smooth paths $g$ in a Lie group, satisfying a differential equation of the type \begin{equation}\label{eq:logder}g(t)^{-1}\partial_tg(t) = v(t). \end{equation} When $v$ is constant, $g(t)=\operatorname{exp}(tv)$ produces, when it exists, the only solution $g$ such that $g(0)=e.$ This approach was the initial motivation for the development of the notion of Lie groups by Sophus Lie. For finite dimensional examples, this approach gives a geometric way to solve differential equations with values in Lie groups, but these techniques can also be of interest for infinite dimensional objects. More precisely, the first examples of interest, where the exponential map is not easy to define, are linear differential equations of the form \begin{equation} \label{eq:linPDE} \frac{df}{dt} = A(f) \end{equation} where $A$ is a differential operator of positive order acting on a space of smooth functions $F.$ \begin{ex} When $F = C^\infty(S^1,\R)$, consider $A = - \frac{d^2}{dx^2}.$ Then there is an operator $\operatorname{exp}(tA),$ which is unbounded, that solves (\ref{eq:linPDE}) with eigenvalues $e^{tn^2}.$ Modifying $A$ into $\frac{d^2}{dx^2}$, the solution $\operatorname{exp}(tA)$ is a family of Fredholm operators of index $0$ with eigenvalues $e^{-tn^2} \leq 1.$ \end{ex} From this elementary example, which can be expanded and complexified as far as wanted, one can easily understand that the control of the regularity of the solutions of these equations, and the methods for approximating them numerically, highly depend on the nature of the operator $A.$ In the same way, equations of the form \begin{equation} \label{eq:Liebracket} \frac{dS}{dt} = \left[A(t), S(t)\right], \end{equation} where $S$ is an operator-valued solution in a Lie algebra $\mathfrak{g}$ and $A$ is a smooth path in $\mathfrak{g},$ have a formal solution $$S(t) = \operatorname{Ad}_{G(t)} S(0)$$ where $G$ is the solution of equation (\ref{eq:logder}). These elementary facts can be generalized in two ways that we wish to mention here: \begin{itemize} \item Symmetries of a (partial) differential equation generate transformation groups which transform one solution into another. In practice, one often deals with \textit{infinitesimal symmetries}, which are formally tangent vectors to the group of symmetries \cite{O}. Integrating a Lie algebra of infinitesimal symmetries is less easy and requires technical abilities in the field of infinite dimensional Lie groups. One can find an example of such concerns for symmetries of the 3d-Euler equation in \cite{Rob}. These indications help to solve the PDE by reduction, in general case by case and when the PDE admits ``enough'' symmetries; but in any case the symmetries of PDEs are in general of interest for the global understanding of the space of solutions. \item Integrable systems are differential equations that are equivalent to a zero curvature condition, that is, they can be expressed as $$dS \pm [S,S] =0$$ where $S$ is a well-chosen $1$-form built from the initial PDE. The main clue for solving the problem is that this precise equation is a so-called \textit{zero curvature equation}, which integrates globally when one works on a trivial finite dimensional principal bundle in which $S$ is understood as a connection $1$-form. \end{itemize} In the present section, we expose the actual state of the art of the application of diffeologies to the necessary technical properties of diffeological Lie groups, in view of potential applications to PDEs that can be expressed in an infinite dimensional Lie group.
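As an elementary illustration of equation (\ref{eq:logder}) in the finite dimensional setting, the following sketch (ours, not taken from the references; it assumes \texttt{numpy} and \texttt{scipy}, and the scheme and step sizes are arbitrary choices) integrates the left logarithmic equation for a matrix group by a Lie-Euler scheme, which multiplies at each step by the exponential of a small Lie algebra element, so that the iterates stay in the group:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def integrate_left_log(v, T=1.0, steps=200, dim=2):
    # Approximates the solution of g(t)^{-1} g'(t) = v(t), g(0) = Id,
    # by the Lie-Euler scheme g_{k+1} = g_k expm(h v(t_k)):
    # each factor lies in the group, hence so does every iterate.
    h = T / steps
    g = np.eye(dim)
    for k in range(steps):
        g = g @ expm(h * v(k * h))
    return g

# Constant v in so(2): the products of exponentials telescope,
# and the scheme returns exp(T v), the rotation by the angle T.
v = lambda t: np.array([[0.0, -1.0], [1.0, 0.0]])
g = integrate_left_log(v, T=np.pi / 2)
\end{verbatim}
For constant $v$ the products of exponentials telescope and the scheme reproduces $\operatorname{exp}(Tv)$ exactly, in accordance with the discussion above; for non-constant $v$ it only converges as the step size tends to $0.$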
\subsubsection{On regular diffeological Lie groups} Since we are interested in infinite-dimensional analogues of Lie groups, we need to consider tangent spaces of diffeological spaces, and we have to deal with Lie algebras and exponential maps. We state, after \cite{Les,DN2007-1,CW}, the following definition: \begin{Definition} \label{reg1} \cite{Les} A diffeological Lie group $G$ with Lie algebra $\mathfrak{g}$ is called \textbf{regular} if and only if there is a smooth map \[ \operatorname{Exp}:C^{\infty}([0;1],\mathfrak{g})\rightarrow C^{\infty}([0,1],G) \] such that $g(t)=\operatorname{Exp}(v(t))$ is the unique solution of the differential equation \begin{equation} \label{loga} \left\{ \begin{array}{l} g(0)=e\\ \frac{dg(t)}{dt}g(t)^{-1}=v(t)\end{array}\right.\end{equation} We define the exponential function as follows: \begin{eqnarray*} \operatorname{exp}:\mathfrak{g} & \rightarrow & G\\ v & \mapsto & \operatorname{exp}(v)=g(1) \; , \end{eqnarray*} where $g$ is the image by $\operatorname{Exp}$ of the constant path $v.$ \end{Definition} \begin{rem} Equation (\ref{loga}) is called the \textbf{right logarithmic} equation, while equation (\ref{eq:logder}) is called \textbf{left logarithmic}. The correspondence between the two equations is well-exposed in \cite{KM}, and the existence of a solution of the first one is equivalent to the existence of a solution of the second one. \end{rem} When the Lie group $G$ is a vector space $V$, the notion of regular Lie group specializes to what is called a {\em regular vector space} in \cite{Ma2013} and an {\em integral vector space} in \cite{Les}; we follow the first terminology. \begin{Definition} \label{reg2} \cite{Les} Let $(V,\p)$ be a diffeological vector space. The space $(V,\p)$ is \textbf{integral} or \textbf{regular} if there is a smooth map $$ \int_0^{(.)} : C^\infty([0;1];V) \rightarrow C^\infty([0;1],V) $$ such that $\int_0^{(.)}v = u$ if and only if $u$ is the unique solution of the differential equation \[ \left\{ \begin{array}{l} u(0)=0\\ u'(t)=v(t)\end{array}\right. .\] \end{Definition} This definition applies, for instance, if $V$ is a complete locally convex topological vector space equipped with its natural Fr\"olicher structure given by the Fr\"olicher completion of its n\'ebuleuse diffeology, see \cite{Igdiff,Ma2006-3,Ma2013}. We now give the corresponding notion for derivatives, after remarking that $\forall v \in V, V \subset {}^iT_vV$ through the identification of the linear path $t \mapsto tv$ with $v:$ \begin{Definition} Let $V$ be a diffeological vector space. Then $V$ is {\bf co-regular} if $$ \forall v \in V, {}^iT_vV = V.$$\end{Definition} This property, which seems natural, highly depends on the diffeology considered, see Example \ref{spagh2}. This may explain why all the authors since \cite{Les} and until now, including us, have considered this problem as minor or even negligible. \begin{Definition} Let $G$ be a diffeological Lie group with Lie algebra $\mathfrak{g}$. Then $G$ is \textbf{fully regular}, i.e. regular with integral Lie algebra, if $\mathfrak{g}$ is integral and $G$ is regular in the sense of Definitions $\ref{reg1}$ and $\ref{reg2}$. \end{Definition} We finish this section with two structural results, essentially proven in \cite{Ma2013}, in a restricted setting for the second one.
\begin{Theorem}\label{exactsequence} Let $$ 1 \longrightarrow K \stackrel{i}{\longrightarrow} G \stackrel{p}{\longrightarrow} H \longrightarrow 1 $$ be an exact sequence of Fr\"olicher Lie groups, such that there is a smooth section $s : H \rightarrow G,$ and such that the trace diffeology on $i(K) \subseteq G$ coincides with the push-forward diffeology from $K$ to $i(K).$ We consider also the corresponding sequence of Lie algebras $$ 0 \longrightarrow \mathfrak{k} \stackrel{i'}{\longrightarrow} \mathfrak{g} \stackrel{p'}{\longrightarrow} \mathfrak{h} \longrightarrow 0 \; . $$ Then: \begin{itemize} \item The Lie algebras $\mathfrak{k}$ and $\mathfrak{h}$ are integral if and only if the Lie algebra $\mathfrak{g}$ is integral; \item The Fr\"olicher Lie groups $K$ and $H$ are regular if and only if the Fr\"olicher Lie group $G$ is regular. \end{itemize} \end{Theorem} \begin{Theorem} \label{regulardeformation} Let $(A_n)_{n \in \mathbb{N}^*} $ be a sequence of co-regular and integral Fr\"olicher vector spaces equipped with a graded multiplication operation on $\bigoplus_{n \in \mathbb{N}^*} A_n\, ,$ i.e. a multiplication such that for each $n,m \in \mathbb{N}^*$, $A_n .A_m \subset A_{n+m},$ and which is smooth with respect to the corresponding Fr\"olicher structures. Let us define the (non-unital) algebra of formal series $$ \mathcal{A}= \left\{ \sum_{n \in \mathbb{N}^*} a_n \, | \, \forall n \in \mathbb{N}^* , a_n \in A_n \right\}\; , $$ equipped with the Fr\"olicher structure of the infinite product. Then the space $$1 + \mathcal{A} = \left\{ 1 + \sum_{n \in \mathbb{N}^*} a_n \, | \, \forall n \in \mathbb{N}^* , a_n \in A_n \right\} $$ is a regular Fr\"olicher Lie group with integral Fr\"olicher Lie algebra $\mathcal{A}.$ Moreover, the exponential map defines a smooth bijection $\mathcal{A} \rightarrow 1+\mathcal{A}.$ \end{Theorem} A result similar to Theorem \ref{exactsequence} is also valid for Fr\'echet Lie groups, see \cite{KM}.
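Let us illustrate Theorem \ref{regulardeformation} on the simplest possible example; the verification is elementary, and the example is stated here only as an illustration.

\begin{ex} Take $A_n = \R\, t^n \subset \R[[t]]$ for $n \in \mathbb{N}^*,$ with the usual multiplication $t^n \cdot t^m = t^{n+m}.$ Then $\mathcal{A} = t\,\R[[t]]$ is the space of formal power series without constant term, and Theorem \ref{regulardeformation} asserts that $$1 + t\,\R[[t]] = \left\{1 + \sum_{n \geq 1} a_n t^n\right\},$$ under multiplication of formal series, is a regular Fr\"olicher Lie group whose exponential map, the usual exponential series, is a smooth bijection from $t\,\R[[t]]$ onto $1+t\,\R[[t]].$ \end{ex}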
\subsubsection{On the holonomy of a connection in a diffeological principal bundle} Now, let $P$ be a principal bundle. Let $p \in P$ and let $\gamma$ be a smooth path in $P$ starting at $p,$ defined on $[0,1].$ Let $H_\theta \gamma (t) = \gamma(t)g(t),$ where $g \in C^\infty([0,1];G)$ is a path satisfying the differential equation $$ \left\{ \begin{array}{c} \theta \left( \partial_t H_\theta\gamma(t) \right) = 0 \\ H_\theta\gamma(0)=\gamma(0) \end{array} \right. $$ The first line of this equation is equivalent to the differential equation $$g^{-1}(t)\partial_tg(t) = -\theta(\partial_t\gamma (t)),$$ which is integrable, and the second line is equivalent to the initial condition $g(0)=e_G.$ This shows that horizontal lifts are well-defined, as in the standard case of finite-dimensional manifolds. Moreover, the map $H_\theta(.)$ trivially defines a diffeological connection. This enables us to consider the holonomy group of the connection. Notice that a straightforward adaptation of the arguments of \cite{Ma2013} shows that the holonomy group does not depend (up to conjugation and up to the choice of the connected component of $M$) on the choice of the base point $p.$ This definition allows us to consider reductions of the structure group. Following \cite{Ma2013}, we have: \begin{Theorem} \label{Courbure} We assume that $G_1$ and $G$ are regular Fr\"olicher groups with regular Lie algebras $\mathfrak{g}_1$ and $\mathfrak{g}.$ Let $\rho: G_1 \rightarrow G$ be an injective morphism of Lie groups. If there exists a connection $\theta$ on $P$, with curvature $\Omega$, such that for any smooth $1$-parameter family $H_\theta c_t$ of horizontal paths starting at $p$, and for any smooth vector fields $X,Y$ on $M$, the map \begin{eqnarray} (s, t) \in [0,1]^2 & \mapsto & \Omega_{H_\theta c_t(s)}(X,Y) \label{g1} \end{eqnarray} is a smooth $\mathfrak g_1$-valued map (for the $\mathfrak g _1$-diffeology), \noindent and if $M$ is simply connected, then the structure group $G$ of $P$ reduces to $G_1,$ and the connection $\theta$ also reduces. \end{Theorem} We can now state the announced Ambrose-Singer theorem, using the terminology of \cite{Rob} for the classification of groups via properties of the exponential map \cite{Ma2013}: \begin{Theorem} \label{Ambrose-Singer} Let $P$ be a principal bundle whose structure group is a fully regular Fr\"olicher Lie group $G$. Let $\theta$ be a connection on $P$ and $H_\theta$ the associated diffeological connection. \begin{enumerate} \item For each $p \in P,$ the holonomy group $\Hol_p^L$ is a diffeological subgroup of $G$, which does not depend on the choice of $p$ up to conjugation. \item There exists a second holonomy group $H^{red},$ with $\Hol \subset H^{red},$ which is the smallest structure group for which there is a subbundle $P'$ to which $\theta$ reduces. Its Lie algebra is spanned by the curvature elements, i.e. it is the smallest integrable Lie algebra which contains the curvature elements. \item If $G$ is a Lie group (in the classical sense) of type I or II, there is a (minimal) closed Lie subgroup $\bar{H}^{red}$ (in the classical sense) such that $H^{red}\subset \bar{H}^{red},$ whose Lie algebra is the closure in $\mathfrak{g}$ of the Lie algebra of $H^{red}.$ $\bar{H}^{red}$ is the smallest closed Lie subgroup of $G$ among the structure groups of closed sub-bundles $\bar{P}'$ of $P$ to which $\theta$ reduces. \end{enumerate} \end{Theorem} From \cite{Ma2013} again, we have the following result: \begin{Proposition} \label{0-courbure} If the connection $\theta$ is flat and $M$ is connected and simply connected, then for any path $\gamma$ starting at $p \in P,$ the map $$\gamma \mapsto H_\theta\gamma(1)$$ depends only on $\pi(\gamma(1))\in M$, and it defines a global smooth section $M \rightarrow P.$ Therefore, $P = M \times G.$ \end{Proposition} Let us make this result a little more precise (see \cite[section 40.2]{KM} for an analogous statement in the $c^\infty$-setting): \begin{Theorem} \label{Hslice} Let $(G,\mathfrak{g})$ be a regular Lie group with regular Lie algebra and let $X$ be a simply connected Fr\"olicher space. Let $\alpha \in \Omega^1(X,\mathfrak{g})$ be such that \begin{equation} \label{beta1} d\alpha + [\alpha,\alpha]=0\; . \end{equation} Then there exists a smooth map $$f : X \rightarrow G $$ such that $$df.f^{-1} = \alpha.$$ Moreover, we pass from one solution $f$ to another by applying the Adjoint action of $G$, pointwise in $x \in X$. \end{Theorem} We remark that the theorem also holds if we consider the equation $$d\alpha - [\alpha,\alpha]=0$$ instead of (\ref{beta1}); we only need to change left logarithmic derivatives into right logarithmic derivatives, and the Adjoint action into the Coadjoint action. The correspondence between solutions is given by the inverse map $f \mapsto f^{-1}$ on the group $C^\infty(X,G).$ \subsection{Optimization on diffeological spaces} \label{sec:Optimization} In general, optimization methods aim at the minimization or maximization of an objective functional.
Often, this functional depends on the solution of a partial differential equation (PDE). Examples of such functionals are the compliance of an elastic structure or the dissipated energy in a viscous flow. Since a maximization problem can be expressed as a minimization problem by considering the negative of the objective functional, often only minimization problems are considered in the literature. In general, optimization methods are iterative methods that generate updates such that the objective functional is reduced (in the case of a minimization problem). We concentrate first on shape optimization to motivate the consideration of diffeological spaces in optimization techniques. Afterwards, we summarize the recent findings regarding optimization techniques on diffeological spaces. \subsubsection{A motivation for considering diffeological spaces in (shape) optimization} Shape optimization has many applications and a large variety of methods. Application examples are acoustic shape optimization \cite{Schmidt2016}, optimization of interfaces in transmission problems \cite{Gangl2015,Paganini15}, electrochemical machining \cite{Hintermueller2011}, image restoration and segmentation \cite{Hintermueller2004} and inverse modelling of skin structures \cite{Naegel2015}. Shape optimization problems are usually solved by iterative methods: one starts with an initial shape and then gradually morphs it into the optimal shape. In order to formulate optimization methods and enable their theoretical investigation, one first needs to define what is meant by a shape. There are multiple options, e.g. the usage of landmark vectors \cite{Cootes1995,Hafner2000,Kendall1984,Perperidis2005,Soehn2005}, plane curves \cite{MichMum,Michor2007,Michor2007a,Mio2006} or surfaces in higher dimensions \cite{Bauer2011a,Bauer2012,Kilian2007,Kurtek2010,Michor2005}, boundary contours of objects \cite{Fuchs2009,Ling2007,Rumpf2009}, multiphase objects \cite{Wirth2010}, characteristic functions of measurable sets \cite{Zolesio2007} and morphologies of images \cite{Droske2007}. In general, a shape space does not have a vector space structure. Instead, the next-best option is to aim for a manifold structure with an associated Riemannian metric. In case a manifold structure cannot be established for the shape space in question, an alternative option is a diffeological space structure. In contrast to shape spaces as Riemannian manifolds, research on diffeological spaces as shape spaces has just begun, see e.g.~\cite{KW,KW21}. In the following, we summarize the main findings of \cite{KW21}, which suggests the use of diffeological spaces instead of smooth manifolds in the context of PDE constrained shape optimization. In shape optimization, one investigates shape functionals. For a suitable shape space $\mathcal{U},$ a shape functional is a function $J\colon \mathcal{U}\to\R$. An unconstrained shape optimization problem is given by \begin{equation} \label{OptProb} \min_{\Omega\in \mathcal{U}} J(\Omega). \end{equation} In applications, shape optimization problems are often constrained by equations like PDEs. Regarding suitable shape spaces in optimization problems, \cite{KW21} starts by concentrating on one-dimensional smooth shapes. A one-dimensional smooth shape is a $C^{\infty}$-boundary of a simply connected and compact set $\Omega$. These shapes can be interpreted as smooth single-closed curves represented by embeddings from $S^1$ into $\mathbb{R}^2$.
Since we are only interested in the shape itself, i.e., the image of the curve, we are not interested in re-parametrisations. Therefore one can consider the shape space given in \cite{MichMum}: $$ B_e(S^1,\mathbb{R}^2) := \mathrm{Emb}(S^1,\mathbb{R}^2) / \mathrm{\operatorname{Diff}}(S^1), $$ where $\mathrm{Emb}(S^1,\mathbb{R}^2)$ is the set of all embeddings from $S^1$ to $\mathbb{R}^2$ and $\mathrm{\operatorname{Diff}}(S^1)$ is the space of all diffeomorphisms from $S^1$ into itself. The space $B_e(S^1,\mathbb{R}^2)$ is a smooth manifold, see \cite{KM}. The tangent space is given by $$ T_cB_e(S^1,\mathbb{R}^2) \simeq \{ h \mid h= \alpha n, \alpha \in C^{\infty}(S^1) \}, $$ where $n$ denotes the exterior unit normal field to the shape boundary $c: S^1 \to \mathbb{R}^2$. The space of one-dimensional smooth shapes is a Riemannian manifold for various Riemannian metrics like the almost local metrics and Sobolev metrics; see e.g. \cite{MichMum,Bauer2011a,Bauer2012,Michor2005,Michor2007}. All these metrics arise from the standard $L^2$-metric by putting weights (almost local metrics), derivatives (Sobolev metrics) or both (weighted Sobolev metrics) into it. In \cite{KW21}, the shape space $\mathcal{U} = B_e(S^1,\R^2),$ combined with the first Sobolev metric and the Steklov-Poincaré metric (cf. \cite{Schulz2016}), is considered. \begin{rem} One can generalize the shape space $B_e(S^1,\mathbb{R}^2)$ and its results to a compact manifold $M$ and a Riemannian manifold $N$ with $\operatorname{dim}(M) < \dim(N)$ (cf., e.g., \cite{Michor2005}). \end{rem} In general, to formulate optimization methods on a Riemannian shape space $(\mathcal{U},g)$, a Riemannian shape gradient with respect to $g$ is needed. A connection of the shape space $B_e(S^1,\R^2),$ together with the first Sobolev metric and the Steklov-Poincaré metric, to shape calculus is given in \cite{KW21}. Moreover, both approaches are compared to each other (cf. \cite{VSMS15,KW21}). In particular, it should be mentioned that working with the Steklov-Poincaré metric has several computational advantages regarding the finite element mesh, which needs to be considered to solve PDE constraints. We refer the reader to \cite{VSMS15,Siebenborn2016,Schulz2016}. \begin{rem} \label{RemOD} It is well known that shape optimization algorithms may invoke substantial transformations of the initial shape in the course of the optimization process, which often leads to a deterioration of the cell aspect ratios of the underlying computational meshes. Usually, optimization problems in function spaces can be solved using one of two different approaches: discretize--then--optimize (DO) and optimize--then--discretize (OD). Shape optimization problems are no exception. We refer the interested reader to \cite{BLWH} for a description of the differences between the OD and DO approaches on a prototypical example of a PDE-constrained shape optimization problem. \end{rem} Regardless of the use and benefits of these OD-approaches (for which we refer to the literature), the shape space $B_e(S^1,\R^2)$ itself limits the application of the methods since it only contains smooth shapes. From a numerical point of view it is desirable to weaken the smoothness assumption on the shapes. Numerical experiments show that shapes with kinks can also be handled with the Steklov-Poincaré metric; see \cite{VSMS15, Schulz2016, Siebenborn2016}. Thus, another shape space definition is necessary.
In \cite{KW21}, the space of so-called $H^{1/2}$-shapes is defined: \begin{Definition} Let $\Gamma_0 \subset \mathbb{R}^d$ be a $d$-dimensional Lipschitz shape (the boundary of a non-trivial Lipschitz domain). The space of all $d$-dimensional $H^{1 / 2}$-shapes is given by $$ \mathcal{B}^{1 / 2}\left(\Gamma_0, \mathbb{R}^d\right):=\mathcal{H}^{1 / 2}\left(\Gamma_0, \mathbb{R}^d\right) / \sim, $$ where $$ \begin{aligned} & \mathcal{H}^{1 / 2}\left(\Gamma_0, \mathbb{R}^d\right) \\ & :=\left\{w: w \in H^{1 / 2}\left(\Gamma_0, \mathbb{R}^d\right) \text { injective, continuous; } w\left(\Gamma_0\right) \text { Lipschitz shape }\right\} \end{aligned} $$ and the equivalence relation $\sim$ is given by $$ w_1 \sim w_2 \Leftrightarrow w_1\left(\Gamma_0\right)=w_2\left(\Gamma_0\right) \text {, where } w_1, w_2 \in \mathcal{H}^{1 / 2}\left(\Gamma_0, \mathbb{R}^d\right) \text {. } $$ \end{Definition} Since the space $\mathcal B^{1/2}$ is a challenging one, it is so far unclear whether it is a Riemannian manifold or even a manifold. Therefore, \cite{KW21} came up with the idea of using a different point of view, dropping the restrictiveness of Riemannian manifolds and starting to consider the opportunities within diffeological spaces. Due to the wide variety of natural diffeological spaces, the $\mathcal B^{1/2}$ shape space is a diffeological space (cf. \cite{KW21}). In addition, $\mathcal B^{1/2}$ is far less restrictive than $B_e$. However, diffeological spaces are yet to be established in the area of optimization. The first results regarding optimization techniques on diffeological spaces are summarized in the next subsection. \subsubsection{General concepts for optimization on diffeological spaces} In the following, we give a brief summary of \cite{GW2020}, in which optimization approaches on diffeological spaces are formulated. In order to formulate optimization methods on diffeological spaces, the steepest descent method on manifolds is considered and extended to diffeological spaces. For convenience, we formulate the steepest descent method on a complete Riemannian manifold~$(M,g)$ in algorithm \ref{Algo}. \begin{algorithm} \caption{Steepest descent method on a complete Riemannian manifold~$(M,g)$} \label{Algo} \begin{algorithmic} \vspace{.3cm} \STATE{ \textbf{Require:} Objective function $f$ on $M$; Levi-Civita connection $\nabla$ on $(M,g)$; \\\phantom{\textbf{Require:} }step size strategy.} \vspace{.1cm} \STATE{ \textbf{Goal:} Find the solution of $\min\limits_{x\in M}f(x)$.} \vspace{.1cm} \STATE{ \textbf{Input:} Initial data $x_0\in M$. } \vspace{.3cm} \STATE{ \textbf{for} $k=0,1,\dots$ \textbf{do}} \vspace{.1cm} \STATE{ [1] Compute $\text{grad}f(x_k)$, the Riemannian gradient of $f$ at $x_k$.} \vspace{.1cm} \STATE{ [2] Compute step size $t_k$.} \vspace{.1cm} \STATE{ [3] Set $ x_{k+1}:= \operatorname{exp}_{x_k}\left(-t_k \text{grad}f(x_k)\right)$, where $\mathrm{exp}\colon TM\to M$ denotes the exponential map.} \vspace{.1cm} \STATE{ \textbf{end for}} \vspace{.3cm} \end{algorithmic} \end{algorithm} In \cite{GW2020}, diffeological counterparts of the objects needed for the steepest descent method on smooth manifolds are considered, i.e., diffeological variants of a Riemannian manifold, a Riemannian gradient, a Levi-Civita connection, as well as a retraction. In the following, we introduce these objects from \cite{GW2020}.
\begin{rem} The article \cite{GW2020} does not consider a diffeological version of the exponential map, since for optimization purposes the exponential map is usually replaced by a retraction that approximates it. Furthermore, the existence of an exponential map in a diffeological sense is totally unclear. \end{rem} \subsubsection{Diffeological gradient} For optimization purposes the existence of a gradient is more or less indispensable, since it provides a direction of descent under special assumptions. In order to define a diffeological version of the gradient, we first define a diffeological Riemannian space, which leads to the definition of a diffeological gradient. We copy here the definition given in \cite{GW2020}, which we compare with the previous definitions. \begin{Definition}[Diffeological Riemannian space] Let $X$ be a diffeological space. We say $X$ is a {diffeological Riemannian space} if there exists a smooth map \begin{align*} g \colon X \to \mathrm{Sym}({{}^{ie}TX},\mathbb{R}),\, x \mapsto g_x \end{align*} such that $ g_x\colon { {}^{ie}T_xX \times {}^{ie}T_xX} \to \mathbb{R} $ is smooth, symmetric and positive definite. Then we call the map $g$ a diffeological Riemannian metric. \end{Definition} \begin{rem} This definition fits with the definition of a Riemannian metric along the lines of Definition \ref{d:RMbundle} on the diffeological vector pseudo-bundle ${}^{ie}TX,$ but differs from Definition \ref{d:RM} of an internal Riemannian metric. \end{rem} \begin{Definition}[Diffeological gradient] Let $X$ be a diffeological Riemannian space. The diffeological gradient $ \operatorname{grad}f $ of a function $f \in C^{\infty}(X)$ in $x \in X$ is defined as the solution of \begin{align*} g_x(\operatorname{grad} f, \mathrm{d}_{c}) = \mathrm{d}_{c}(f) \end{align*} for all $\mathrm{d}_{c} \in {}^{ie}T_xX.$ \end{Definition} \begin{rem} The existence of a diffeological gradient for a diffeological Riemannian space is not guaranteed. Moreover, so far we do not have any conditions guaranteeing a diffeological gradient which do not result in a trivial diffeological Riemannian space. This should not be surprising, since even in the more restrictive setting of infinite dimensional manifolds the existence of a gradient is not guaranteed. \end{rem} \subsubsection{Towards updates of iterates: A diffeological retraction} In order to update the iterates, one generally uses the exponential map. However, the exponential map is an expensive operation in optimization techniques. Thus, one uses so-called retractions to update the iterates. \begin{Definition}[Diffeological retraction] \label{def:Retraction} Let $ X $ be a diffeological space. A diffeological retraction of $ X $ is a map $ \mathcal{R} \colon TX \rightarrow X $ such that the following conditions hold: \begin{itemize} \item[(i)] $ \mathcal{R}_{\mid_{T_xX}} (0) = x$, \item[(ii)] Let $\xi \in T_xX $ and $ \gamma_{\xi} \colon T_0\mathbb{R} \to T_xX,\ t \mapsto \mathcal{R}_{\mid_{T_xX}}(t \xi) $. Then $T_0\gamma_{\xi}(0) = \xi$. \end{itemize} \end{Definition}
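For instance (this elementary observation is ours, not taken from \cite{GW2020}): if $X$ is a diffeological vector space $V$ for which each tangent space $T_xV$ identifies with $V,$ as happens under the co-regularity property discussed above, then $$\mathcal{R}_{\mid_{T_xV}}(\xi) = x + \xi$$ is a diffeological retraction: condition (i) is immediate, and $\gamma_\xi(t) = x + t\xi$ is the linear path with derivative $\xi$ at $t=0,$ which gives (ii).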
Since finding a diffeological retraction can be quite challenging, we also define the so-called weak diffeological retraction. \begin{Definition} \label{def:WeakRetraction} Let $X$ be a diffeological space and $CX$ the tangent cone bundle of $X$. A \emph{weak diffeological retraction} of $ X $ is a map $ \mathcal{R} \colon CX \rightarrow X $ such that the following conditions hold: \begin{itemize} \item[(i)] $ \mathcal{R}_{\mid_{C_xX}} (0) = x$, \item[(ii)] Let $\xi \in C_xX $ and $ \gamma_{\xi} \colon C_0\mathbb{R} \to C_xX,\ t \mapsto \mathcal{R}_{\mid_{C_xX}}(t \xi) $. Then $C_0\gamma_{\xi} = \xi$. \end{itemize} In (ii), the map $C_0\gamma_{\xi}$ is a tangent cone map, given by $ C_0\gamma_{\xi} \colon C_0\mathbb{R} \to C_0X, \mathrm{d}_{\alpha} \mapsto \mathrm{d}_{\gamma_{\xi} \circ \alpha}. $ \end{Definition} \begin{rem} In \cite{GW2020}, not only retractions are considered. The definition of a diffeological Levi-Civita connection, as well as a proof of uniqueness of such a connection, is also given. But, similarly to the Riemannian gradient, a condition for existence is not given. \end{rem} \subsubsection{First order optimization techniques on diffeological spaces} Considering algorithm~\ref{Algo}, \cite{GW2020} came up with a diffeological version of the steepest descent algorithm using a retraction instead of an exponential map. For convenience, we repeat this algorithm below (cf. algorithm \ref{AlgoDiff}). \begin{algorithm}[H] \caption{Steepest descent method on the diffeological space $X$ with Armijo backtracking line search} \label{AlgoDiff} \begin{algorithmic} \vspace{.3cm} \STATE{ \textbf{Require:} Objective function $f$ on a diffeological Riemannian space $X$; \\\phantom{\textbf{Require:} }diffeological retraction $\mathcal{R}$ on $X$.} \vspace{.1cm} \STATE{\textbf{Goal:} Find the solution of $\min\limits_{x\in X}f(x)$.} \vspace{.1cm} \STATE{\textbf{Input:} Initial data $x_0 \in X$; constants $\hat{\alpha}>0$ and $ {\displaystyle \sigma, \rho \in (0,1)}$ for the Armijo \\\phantom{\textbf{Input: }}backtracking strategy.} \vspace{.3cm} \STATE{\textbf{for} $k=0,1,\dots$ \textbf{do}} \vspace{.1cm} \STATE{ [1] Compute $\text{grad}f(x_k)$, the diffeological gradient of $f$ at $x_k$.} \vspace{.1cm} \STATE{ [2] Compute the Armijo backtracking step size: } \vspace{.1cm} \STATE{\hspace*{1cm} Set $\alpha := \hat{\alpha}$.} \STATE{ \hspace*{1cm} \textbf{while} $ f\big(\mathcal{R}_{\mid_{T_{x_k}X}}\left(-\alpha\, \text{grad}f(x_k)\right)\big) > f(x_k)-\sigma\alpha \left\|\text{grad}f(x_k)\right\|^2_{T_{x_k}X}$ } \STATE{ \hspace*{1cm} Set $ \alpha :=\rho \alpha $.} \STATE{ \hspace*{1cm} \textbf{end while}} \STATE{ \hspace*{1cm} Set $t_k:=\alpha$.} \vspace{.1cm} \vspace{.1cm} \STATE{ [3] Set $ x_{k+1}:= \mathcal{R}_{\mid_{T_{x_k}X}}\left(-t_k \text{grad}f(x_k)\right).$} \vspace{.1cm} \STATE{ \textbf{end for}} \vspace{.3cm} \end{algorithmic} \end{algorithm} \begin{rem} Algorithm \ref{AlgoDiff} is applied to an example in \cite{GW2020}. For more details about the algorithm we refer the interested reader to the corresponding article. \end{rem} So far, there is no convergence proof for the algorithm. A convergence proof would be of great interest for the diffeological as well as for the optimization community. Research work on diffeological optimization theory needs to be done in the future to generalize shape optimization to a diffeological setting.
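To make the structure of algorithm \ref{AlgoDiff} concrete, the following schematic sketch (our own illustration, with $X = \R^2,$ the standard metric and the retraction $\mathcal{R}_{\mid_{T_xX}}(\xi)=x+\xi$ of the previous example; all names, parameter values and the toy objective are our choices, assuming only \texttt{numpy}) implements the Armijo backtracking loop:
\begin{verbatim}
import numpy as np

def steepest_descent(f, grad, retract, x0, alpha_hat=1.0,
                     sigma=1e-4, rho=0.5, maxiter=200, tol=1e-8):
    # Schematic counterpart of the algorithm above: the user supplies
    # the objective f, the (diffeological) gradient and a retraction.
    x = x0
    for _ in range(maxiter):
        g = grad(x)
        ng2 = float(np.dot(g, g))      # squared norm of the gradient
        if ng2 < tol ** 2:
            break
        alpha = alpha_hat
        # Armijo backtracking: shrink alpha until sufficient decrease.
        while f(retract(x, -alpha * g)) > f(x) - sigma * alpha * ng2:
            alpha *= rho
        x = retract(x, -alpha * g)     # update along the retraction
    return x

# Toy run on X = R^2 with the retraction R_x(xi) = x + xi.
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * x[1] ** 2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * x[1]])
x_star = steepest_descent(f, grad, lambda x, xi: x + xi,
                          np.array([5.0, 3.0]))
# x_star is close to the minimizer (1, 0).
\end{verbatim}
On a genuine diffeological shape space, only the three user-supplied ingredients $f,$ $\operatorname{grad}f$ and $\mathcal{R}$ change; the loop itself is unchanged.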
\section{On mapping spaces} \label{sec:MappingSpaces} \subsection{On the ILB setting} We consider in this subsection a very particular class of Fréchet Lie groups: the projective limits of Banach Lie groups, understood as infinite dimensional Lie groups whose charts and atlases also enjoy the projective limit property. We rely heavily on \cite{Om} for terminology. We must first note that infinite-differentiability in the Fr\'echet sense is equivalent to smoothness in the diffeological sense when we equip a Fr\'echet space/manifold with the diffeology comprising Fr\'echet infinitely-differentiable parametrisations. The framework that we develop here is more restrictive, but simpler and hence more accessible, than the one fully defined in \cite{Om}. In the sequel, the abbreviation ILB means ``Inverse Limit of Banach''. \begin{Definition}[ILB-Manifolds \& Bundles]\label{d:ilb} \noindent \begin{enumerate} \item A Fr\'echet manifold $M$ modeled on a Fr\'echet space $F$ is an \textbf{ILB-manifold} if: \begin{itemize} \item there exists a sequence of Banach spaces $(B_n)$ such that \begin{itemize} \item $\forall n \in \N, $ $B_{n+1}$ is a subset of $B_n,$ dense for the topology of $B_n,$ and such that the closed unit ball of $B_{n+1}$ is a compact subset of the closed unit ball of $B_n,$ \item $F$ is the projective limit of the sequence $(B_n)$ as a topological vector space. \end{itemize} With such assumptions, $(F,(B_n)_{n \in \N})$ is called an \textbf{ILB vector space}; \item there exists a family of Banach manifolds $\{M_n\}_{n \in \N}$ in which there is a smooth and dense inclusion $M_{n+1}\hookrightarrow M_n$ for each $n$, and such that there exists a Banach atlas on $M_0$ that restricts to an atlas on $M_n$ for each $n$. \end{itemize} \item An \textbf{ILB-map} $f$ between two ILB-manifolds $M$ and $N$ is a smooth map $f\colon M\to N$ along with a family of smooth maps $\{f_n\colon M_n\to N_n\}$ such that $f$ and all $f_n$ commute with all inclusion maps $M\hookrightarrow M_{n+1}\hookrightarrow M_n$ and $N\hookrightarrow N_{n+1}\hookrightarrow N_n$. \item An \textbf{ILB-principal bundle} is an ILB-map between two ILB-manifolds $(\pi\colon P\to M,~\pi_n\colon P_n\to M_n)$ such that for each $n$, the map $\pi_n\colon P_n\to M_n$ is a principal $G_n$-bundle, where $G_n$ is a Banach Lie group. \item An ILB-map $F$ between two ILB-principal bundles $\pi\colon P\to M$ and $\pi'\colon P'\to M'$ is an \textbf{ILB-bundle map} if $F_n\colon P_n\to P'_n$ is a ($G_n$-equivariant) bundle map for each $n$. Note that $F$ induces an ILB-map $M\to M'$. An ILB-bundle map is an \textbf{ILB-bundle isomorphism} if it has an inverse ILB-bundle map. \end{enumerate} \end{Definition} When one replaces the sequence of Banach spaces by a sequence of Hilbert spaces, we replace ILB by ILH (Inverse Limit of Hilbert) in the definitions. Let us illustrate this setting with its two motivating examples. For this, we consider a smooth compact boundaryless manifold $M$ equipped with a fixed finite atlas $$\{\phi_k: O_k \subset \R^m \rightarrow M\}_{k \in \N_N}.$$ \begin{ex} The space $C^r(M,\R^n)$ of $r$-times continuously differentiable maps, equipped with the norm $$|| f ||_{r,\{\phi_k\}_{k \in \N_N}} = \sup_{k \in \N_N} \sup_{i \in \N_r} \sup_{x \in M} ||D^i f \circ \phi_k||_{L^i(\R^m,\R^n)},$$ is a Banach space, and the topology does not depend on the choice of the atlas $\{\phi_k\}_{k \in \N_N}.$ Then the sequence $(C^r(M,\R^n))_{r \in \N}$ has $C^\infty(M,\R^n)$ as a projective limit. Let us now consider an $n$-dimensional (locally compact and paracompact) Riemannian manifold $N$.
Following \cite{Ee}, the exponential map on $N$ defines pointwise an atlas on $C^0(M,N),$ with charts taking values in $C^0(M,\R^n),$ which, by restriction of the domain, defines a structure of smooth Banach manifold on $C^r(M,N),$ modeled on $C^r(M,\R^n),$ for $r \in \N \cup \{\infty\}.$ This defines an ILB manifold. If moreover we replace $N$ by a finite dimensional Lie group $G,$ we get an ILB Lie group. \end{ex} Let us keep the notations of the last example. There always exists a smooth embedding of $N$ into $\R^{2n+1},$ following e.g. \cite{Hir}. We now assume that this is a Riemannian embedding, that is, the Riemannian metric on $N$ is the pull-back of the Riemannian (Euclidean) metric on $\R^{2n+1}.$ We also assume that $M$ is Riemannian. Therefore, for the Riemannian volume $dx$ on $M,$ the space $L^2(M,\R^{2n+1}),$ defined as the completion of $C^\infty(M,\R^{2n+1})$ for the norm given by $$ ||f||_{L^2}^2 = \int_M f^2 dx,$$ is a Hilbert space. Moreover, if $\Delta$ is the (positive) Laplacian on $M,$ one can define the Sobolev spaces $H^s(M,\R^{2n+1})$ for $s \in \R$ as the completion of $C^\infty(M,\R^{2n+1})$ for the norm given by $$ ||f||_{H^s}^2 = \int_M\left( (1 + \Delta)^{s/2} f \right)^2 dx,$$ where the real power $(1 + \Delta)^s$ is defined as the operator (unbounded for $s>0$) with eigenvalue $(1 + \lambda_u)^s$ on each eigenvector $u$ of $\Delta$ with eigenvalue $\lambda_u \geq 0.$ For completeness, we recall that $\Delta$ extends to an unbounded, self-adjoint and positive operator on $L^2(M,\R^{2n+1})$ with smooth eigenvectors, which explains why real powers of $(1 + \Delta)$ can be taken so easily.
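On the circle this construction is completely explicit: for $M=S^1$ and $\Delta = -d^2/dx^2,$ the eigenvectors are the Fourier modes $e^{inx}$ with eigenvalues $n^2,$ so that $\|f\|_{H^s}^2 = \sum_{n \in \mathbb{Z}}(1+n^2)^s|c_n|^2$ in terms of the Fourier coefficients $c_n$ of $f.$ The following numerical sketch (ours, with arbitrary discretization parameters, assuming only \texttt{numpy}) computes this norm from samples:
\begin{verbatim}
import numpy as np

def hs_norm_circle(samples, s):
    # H^s norm on S^1 from Fourier coefficients:
    # ||f||_{H^s}^2 = sum_n (1 + n^2)^s |c_n|^2, since the Laplacian
    # -d^2/dx^2 has eigenvalue n^2 on the mode exp(i n x).
    N = len(samples)
    c = np.fft.fft(samples) / N
    n = np.fft.fftfreq(N, d=1.0 / N)  # integer mode numbers
    return np.sqrt(np.sum((1.0 + n ** 2) ** s * np.abs(c) ** 2))

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
# For f = sin(3x): ||f||_{H^s}^2 = (1 + 9)^s / 2.
print(hs_norm_circle(np.sin(3.0 * x), s=1.0))  # approx sqrt(5)
\end{verbatim}
For $f(x)=\sin(3x)$ and $s=1,$ the exact value is $\sqrt{10/2}=\sqrt{5},$ which the sketch reproduces up to round-off.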
With these definitions, a special case of the so-called Sobolev embedding theorems is: \begin{Theorem} The inclusion map $ H^{s+k}(M,\R^{2n+1}) \subset C^k(M,\R^{2n+1}),$ defined for $ s > m/2,$ is a bounded map, and $H^{s+k}(M,\R^{2n+1})$ is a dense subset of $C^k(M,\R^{2n+1})$ for the $C^k$-topology. \end{Theorem} We can now turn to our second example, following again \cite{Ee} as a main reference, completed by \cite{Om}. \begin{ex} \label{ex:Sobolev} Let $s > m/2.$ The set $$H^s(M,N) = H^s(M,\R^{2n+1}) \cap C^0(M,N)$$ is a (smooth) Hilbert manifold modeled on $H^s(M,\R^n),$ and this family defines ILH structures on $C^\infty(M,N).$ The atlas on $H^s(M,N)$ is the restriction of the atlas on $C^0(M,N),$ whose charts are defined pointwise by the exponential map on $N.$ As a consequence, $H^s(M,N)$ is also the closure of $C^\infty(M,N)$ in $H^s(M,\R^{2n+1}).$ \end{ex} After these two examples, we have to insist on the fact that there exist examples where refined properties of Fr\'echet/Banach manifolds require more attention. One can see for instance the manifold of paths defined in \cite{St2017}. We now turn to a first point where diffeologies invite themselves into the ILB setting. For this, we consider an ILB space $(F, (B_n)_{n \in \N}).$ The linear groups $GL(B_n)$ are Banach Lie groups, as an immediate consequence of the implicit function theorem on the Banach algebra $L(B_n)$ of bounded linear maps on $B_n,$ while Example \ref{hypo} explains that the use of a diffeology for $GL(F)$ seems to be more natural, even if it differs from the subset diffeology inherited from the functional diffeology of $L(F).$ This problem was highlighted in the first works concerning ILB spaces. Indeed, in \cite{Om}, H. Omori considers the spaces $$ \mathcal{L}_n=\cap_{k = 0}^n L(B_k)$$ and invokes ``natural differentiation'' on $$\mathcal{L}_\infty = \cap_{n \in \N} \mathcal{L}_n$$ as well as on $$G\mathcal{L}_\infty = \cap_{n \in \N} GL(B_n).$$ Obviously, $G\mathcal{L}_\infty$ cannot be equipped with any atlas modeled on $\mathcal{L}_\infty$ in the present state of knowledge, that is, we cannot construct explicitly any such atlas with the presently known techniques. Therefore, the extension of the definition of e.g. connections to an ILB setting requires many refinements, which can be found in \cite{DGV}. In the context of diffeologies, we have another way to consider smoothness of $G\mathcal{L}_\infty,$ through the following result: \begin{Theorem} \cite{Ma2013} Let $(G_n)_{n \in \N}$ be a sequence of Banach Lie groups, decreasing for $\subset,$ and for which the inclusion maps are smooth. Then $G = \bigcap_{n \in \N} G_n$ is a regular Fr\"olicher Lie group. \end{Theorem} This theorem obviously applies to $G\mathcal{L}_\infty.$ \subsection{Mappings with non compact source} We first assume that $E$ is a vector bundle with typical fiber $V.$ Let us first consider a fixed local section $s$ of $E,$ with (open) domain $D.$ For any open subset $O$ of $M$ such that $ \bar{O} \subset D, $ the evaluation map is smooth. Since $M$ is paracompact, one can define a smooth map $\phi: M \rightarrow \R_+$ such that $$ \phi(x) = \left\{ \begin{array}{cl} 1 & \hbox{ if } x \in O \\ 0 & \hbox{ if } x \notin D \end{array} \right.$$ Thus, smoothness with respect to local sections amounts to smoothness with respect to global sections. For the same reasons, one can assume $M$ to be Riemannian. Combining these remarks with the existence of local trivialisations of $E,$ we wish to characterize all the topologies on the space of smooth sections of $E,$ which we denote by $C^\infty(M,E),$ such that the evaluation map $$ \operatorname{ev} : ( x,s) \in M \times C^\infty(M,E) \mapsto \operatorname{ev}_x(s)= s(x) \in E$$ is continuous and, moreover, for a given local trivialization $E|_U \sim U \times V$ of $E$ over an open subset $U$ of $M$ and $\forall \alpha \in \N^*,$ the map $$ \operatorname{ev}^{(\alpha)}_{U}: ( x,s) \in M \times C^\infty(M,E) \mapsto \operatorname{ev}^{(\alpha)}_{U,x}(s)= D^\alpha_x s \in L^\alpha( {\R^m},E)$$ is continuous. For this purpose, the weakest topology is the pull-back topology on $C^\infty(M,E)$ through the family of maps $\left\{ \operatorname{ev}, \operatorname{ev}_U^{(\alpha)}\right\}.$ It is a standard exercise to show that this topology coincides with the topology defined, for a locally finite family $\{ U_i \, | \, i \in I\}$ of open subsets of $M$ over which $E$ is trivial, by the semi-norms $||.||_{\alpha,K},$ indexed by $\alpha \in \N$ and by the compact subsets $K$ of $M,$ defined, for $s \in C^\infty(M,E),$ by $$ ||s||_{0,K} = \sup_{x \in K} ||s(x)||$$ and, for $\alpha >0,$ $$ ||s||_{\alpha,K} = \sup\left\{\sup_{x \in K \cap U_i} || D^\alpha s(x)|| \, | \, U_i \cap K \neq \emptyset \right\}.$$ In other words, this is the topology of uniform convergence of all derivatives of sections $s$ on any compact subset $K$ of $M.$ With this topology, no global estimate on $s$ is assumed.
Let us now assume that $E$ is a fiber bundle with typical fiber $F,$ a smooth (finite dimensional) manifold of dimension $k.$ Let $\{V_j \, | \, j \in J\}$ be a locally finite open cover of $F$ such that each open domain $V_j$ has compact closure in $F.$ Each $V_j$ is identified (by the chosen atlas on $F$) with an open subset $\tilde V_j$ of $\R^k.$ Let $U_i \times F$ be a local trivialization of $E.$ Then on $\operatorname{ev}^{-1}(V_j) \subset U_i \times C^\infty(U_i, F)$ one can consider the distances $d_{i,j,K,\alpha}$ defined by the semi-norms $||.||_{\alpha,K}$ on (local) smooth sections with domain in $U_i$ and values in $V_j \sim \tilde V_j \subset \R^k.$ In this way, $C^\infty(M,E)$ \textbf{is a metrizable topological space} for the weakest topology which makes $\left\{ \operatorname{ev}, \operatorname{ev}_U^{(\alpha)}\right\}$ a family of continuous maps. \textbf{However, if $M$ is not compact, there is actually no atlas for which $C^\infty(M,N)$ is a $C^0$-manifold for this topology}, in contrast with the case when $M$ is compact described before. In order to circumvent this problem, there are two ways: \begin{itemize} \item consider an exhaustive sequence of compact subsets of $M$, which we denote by $(K_n)_{n \in \N},$ and remark that the topology of $C^\infty(M,E)$ is the inductive limit of the Fr\'echet manifold topologies of the increasing family $$\left\{C^\infty(K_n, E|_{K_n})\, | \, n \in \N \right\};$$ \item consider the Fr\"olicher structure on $C^\infty(M,E)$ that makes the jet maps $$j^k : f \in C^\infty(M,E) \rightarrow J^k(M,E), \hbox{ for } k \geq 1,$$ and $$j^\infty : f \in C^\infty(M,E) \rightarrow J^\infty(M,E)$$ smooth (see \cite{GG1973} for basic definitions on jet spaces, or the next section). \end{itemize} Other, stronger topologies exist on $C^\infty(M,N).$ In order not to be too incomplete, we have to mention here the smooth Whitney topology \cite{GG1973}, which is also based on jet spaces, and for which $C^\infty(M,N)$ is a complete locally convex smooth manifold \cite{KM}. \subsection{Jets with non-compact source and a digression on the geometric theory of PDEs} Following \cite[Appendix 2]{MRR-jets}, the last structures on $C^\infty(M,E)$ extend to $J^k(M,E)$ and to $J^\infty(M,E).$ For this, we consider a finite dimensional fiber bundle $E$ with typical fiber $F$ (with $\operatorname{dim}(F)=n$) over a smooth manifold $M$ of dimension $m,$ which can be non compact. In the exposition, we also consider a generic trivialization $\varphi:U \times F \rightarrow E$ of $E$ over an open subset $U$ of $M.$ The manifold $M$ is the space of independent variables $x_{i}$, $1 \leq i \leq m$, and the typical fiber is the space of dependent variables $u^{\alpha}$, $1 \leq \alpha \leq n$. The reader may assume that the bundle $\pi$ is simply ${\mathbb R}^m \times {\mathbb R}^{n} \rightarrow {\mathbb R}^m$: even if it looks like we are ``trivializing'' everything because we are working locally, the fact that we are interested in properties of differential equations and their solutions makes the geometry highly non-trivial, as we show below. Let $s_{1}(x_{i})=(x_{i}, s_{1}^{\alpha}(x_{i}))$ and $s_{2}(x_{i})=(x_{i}, s_{2}^{\alpha}(x_{i}))$ be two local sections of the bundle $E \rightarrow M$ defined about a point $p = (x_{i})$ in $M$. We say that $s_{1}$ and $s_{2}$ {\em agree to order $k$ at $p \in M$} if $s_{1}$, $s_{2}$ and {\em all} the partial derivatives of the sections $s_{1}$ and $s_{2}$, up to order $k$, agree at $p$.
This notion determines a coordinate-independent equivalence relation on local sections of $E$. We let $j^{k}(s)(p)$ represent the equivalence class of the section $s$ at $p$, and we call this equivalence class {\em the $k$-jet of $s$ at $p$}. \begin{Definition} The {\em $k$-th order jet bundle of $E$} is the space \[ J^{k}E = \bigcup_{p \in M} J^{k}(p) \; , \] in which $J^{k}(p)$ denotes the set of all the $k$-jets $j^{k}(s)(p)$ of local sections $s$ at $p$. \end{Definition} The jet bundle $J^{k}E$ possesses a natural manifold structure; it fibers over $J^lE$ ($l < k$) and also over $M$: local coordinates on $J^{k}E$ are $(x_{i}, u^{\alpha}_{0}, u_{i_{1}}^{\alpha}, u_{i_{1}i_{2}}^{\alpha}, \dots ,u_{i_{1}\dots i_{k}}^{\alpha})$, in which \begin{equation*} \label{coo} u^{\alpha}_{0}(j^{k}(s)(p)) = s^{\alpha}(p) \; , \; \; \; \; \; u_{i}^{\alpha}(j^{k}(s)(p)) = \frac{\partial s^{\alpha}}{\partial x_{i}}(p) \; , \; \; \; \; \; \; \; u_{i_{1}i_{2}}^{\alpha}(j^{k}(s)(p)) = \frac{\partial^{2} s^{\alpha}}{\partial x_{i_{1}} \partial x_{i_{2}}}(p) \; , \end{equation*} and so forth, where $j^{k}(s)(p) \in J^kE$ and $(x^i) \mapsto (x^i , s^\alpha(x^i))$ is any local section in the equivalence class $j^k(s)(p)$. In these coordinates, the projection map $\alpha^{k} : J^{k}E \rightarrow M$ (source map) factors through a projection map $\pi^k : J^k(E) \rightarrow E$ for which, on a local generic trivialization $\varphi$ as defined before, where $E \sim U \times F$ locally, $\pi^k = \alpha^k \times \beta^k,$ where $\beta^k$ is the local target map. Moreover, the projections $\pi^{k}_{l} : J^{k}E \rightarrow J^{l}E$, $l < k$, are defined in obvious ways. For instance, $\pi_{M}^{k}$ is simply $(x_{i}, u^{\alpha}_{0}, u_{i_{1}}^{\alpha}, \dots , u_{i_{1}\dots i_{k}}^{\alpha}) \mapsto (x_{i})$. Let us be a little more precise: for $p \in E,$ $(\pi^k)^{-1}(p)$ is a vector space modelled on $$\bigoplus_{i = 1}^k L^i(\R^m,\R^n)$$ where $L^i(\R^m,\R^n)$ is the space of $i$-linear symmetric mappings on $\R^m$ with values in $\R^n.$ Therefore, $J^k(M,E)$ can be understood as: \begin{itemize} \item a finite dimensional vector bundle over $E$ with typical fiber $\bigoplus_{i = 1}^k L^i(\R^m,\R^n),$ \item a finite dimensional fiber bundle over $M$ with typical fiber $$F \times \left(\bigoplus_{i = 1}^k L^i(\R^m,\R^n)\right).$$ \end{itemize} An extended exposition of the properties of $k$-jets can be found in \cite{GG1973}. \smallskip Any local section $s : (x_{i}) \mapsto (x_{i},s^{\alpha}(x_{i}))$ of $E$ lifts to a unique local section $j^{k}(s)$ of $J^{k}E$ called the {\em $k$-th prolongation} of $s$. In coordinates, $j^{k}(s)$ is the section \[ j^{k}(s) = \left( x_{i}, s^{\alpha}(x_{i}), \frac{\partial s^{\alpha}}{\partial x_{i_{1}}} \, (x_{i}) \, , ..., \frac{\partial^{k} s^{\alpha}}{\partial x_{i_{1}} \dots \partial x_{i_{k}}} \, (x_{i}) \, , ... \right) \; . \]
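As an elementary illustration (standard, and independent of the references above): for the trivial bundle $E = \R \times \R \rightarrow \R,$ the jet bundle $J^kE$ has coordinates $(x,u_0,u_1,\dots,u_k) \in \R^{k+2},$ and the $k$-th prolongation of the section $s(x) = (x, x^2)$ is $$j^k(s)(x) = (x,\ x^2,\ 2x,\ 2,\ 0,\ \dots,\ 0),$$ the list of $x^2$ and its successive derivatives.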
Comparing with $J^k(M,E),$ which is a finite dimensional vector bundle over $E,$ and taking the inverse limit, we also get $J^\infty(M,E)$ as a vector bundle over $E$ with typical fiber the \textbf{formal series} $$ \sum_{i = 1}^{+\infty} L^i(\R^m,\R^n).$$ We describe the space $J^{\infty}E$ locally, by sequences \begin{equation} \label{co} (x_{i}, u^{\alpha}, u_{i_{1}}^{\alpha}, ..., u_{i_{1}i_{2}\dots i_{k}}^{\alpha}, ...) \; , \; \; \; 1 \leq i_1 \leq i_2 \leq \dots \leq i_k \leq m \; , \end{equation} obtained from the standard coordinates on the finite--order jet bundles $J^{k}E$. The limit $J^\infty E$ is a topological space: a basis for the topology on $J^\infty E$ is the collection of all sets of the form $(\pi^{\infty}_{k})^{-1}(W)$, in which $W$ is an open subset of $J^{k}E$. We would like to consider $J^\infty E$ as a manifold, which requires it to be at least an infinite dimensional manifold; we need to re-define all of the standard differential geometry notions in this new context, see \cite{A,AK}. Let $\pi^{\infty}_{k} : J^{\infty}E \rightarrow J^{k}E$ denote the canonical projection from $J^{\infty}E$ onto $J^{k}E$. A function $f : J^{\infty}E \rightarrow {\mathbb R}$ is {\em smooth} if it factors through a finite-order jet bundle, that is, if $f = f_{k} \circ \pi^{\infty}_{k}$ for some smooth function $f_{k} : J^{k}E \rightarrow {\mathbb R}$. Real-valued smooth functions on $J^\infty E$ are also called {\em differential functions}, see \cite{O}. \textbf{This notion of smoothness coincides with the notion of Fr\"olicher space, where a generating set of functions $\F_0$ serves to define which mappings are smooth.} Moreover, considering the typical fiber $\sum_{i = 1}^{+\infty} L^i(\R^m,\R^n)$ of $J^\infty(M,E)$ as a vector bundle over $E,$ since each $L^i(\R^m,\R^n)$ is finite dimensional, it is easy to prove that $\sum_{i = 1}^{+\infty} L^i(\R^m,\R^n)$ is a Fr\'echet space, equipped with the semi-norms $$||.||_i =|| . ||_{L^i(\R^m,\R^n)} \circ \pi_i,$$ where $\pi_i:\sum_{i = 1}^{+\infty} L^i(\R^m,\R^n) \rightarrow L^i(\R^m,\R^n)$ is the canonical projection. When one tries to define the tangent space $TJ^\infty(M,E),$ it is natural to consider all derivations $X$ on $\F_0.$ In this very particular setting, by restriction of the derivation $X$ to $C^\infty(J^k(M,E),\R )\subset \F_0,$ since $J^k(M,E)$ is a finite dimensional manifold, the restriction of $X$ to $J^k(M,E)$ can be expressed as a (classical) vector field over $J^k(M,E)$ and, by the inverse limit procedure, $TJ^\infty(M,E)$ is also the inverse limit of the sequence $$TJ^1(M,E) \leftarrow \dots \leftarrow TJ^k(M,E) \leftarrow \dots,$$ which shows that Olver's definition of $TJ^\infty(M,E)$ coincides with the standard definition of the tangent space to the Fr\'echet manifold $J^\infty(M,E).$ These structures are useful to talk about partial differential equations understood as \textbf{partial differential relations}, along the lines of e.g. \cite{Gro1986}. Indeed, a basic partial differential equation of the form $A(u)=0,$ where $A$ is a linear differential operator of order $k,$ involves a solution $u$ and its partial derivatives up to order $k,$ and hence can be encoded as $\Xi (j^k(u))=0$ where $$\Xi: J^k(M,\R) \rightarrow \R$$ is a smooth map.
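For instance, for $M = \R^2$ with coordinates $(x,y)$ and the trivial bundle $E = \R^2 \times \R,$ the Laplace equation $u_{xx} + u_{yy} = 0$ is encoded by the smooth map $$\Xi : J^2(M,\R) \rightarrow \R, \qquad \Xi\left(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}\right) = u_{xx} + u_{yy},$$ which factors through the finite dimensional manifold $J^2(M,\R)$ and is therefore a differential function in the sense above; a (local) section $u$ is a solution if and only if $\Xi \circ j^2(u) = 0.$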
The same holds for most non-linear partial differential equations of interest, but it can occur that the space $\Xi^{-1}(0)$ exhibits many ``singularities'', as the following very non-linear example shows: \begin{ex} \label{piecewiseaffine} When $\Xi(x,y,y')= x\cos(y)\sin(4y'),$ the subset $\Xi^{-1}(0)$ has infinitely many singular points. Let us assume that we restrict $\Xi^{-1}(0)$ to a neighbourhood $U$ of a regular point. The solutions $x \mapsto (x,y(x))$ we would find are simply constant solutions or affine solutions, and we may lose information depending on our choice of $U$. Now, solutions extend to piecewise affine (and hence only piecewise smooth) solutions on $\Xi^{-1}(0)$. \end{ex} The geometry of such singular sets calls for a generalized way to define differential geometric objects. Let us now consider a partial differential equation $\Xi = 0$ in $J^k(M,E) \subset J^\infty(M,E).$ Following Example \ref{piecewiseaffine}, it is a bit too optimistic to hope that what we call here \emph{the vanishing set}, along the lines of \cite[Appendix 2]{MRR-jets}, that is, the subset $\Xi^{-1}(0),$ has a structure of smooth manifold. Therefore, in the classical literature, one invokes the use of a \emph{locus}, that is, a smooth submanifold of $J^k(M,E)$ with range in $\Xi^{-1}(0).$ Typically, this locus can be defined by a unique embedding of an open subset $O$ of a Euclidean space into $J^k(M,E).$ This enables one to define tangent spaces, differential forms and other objects on $\{\Xi=0\},$ actually with slightly non-rigorous arguments concerning their global definition on the full space $\Xi^{-1}(0),$ at least until \cite{MRR-jets} in the context of diffeologies. Indeed, the subset diffeology defines in a natural way a diffeological structure on the vanishing set, which completes the basic setting for the geometry on the main basic example of \textit{diffieties} given by $\Xi^{-1}(0),$ along the lines of \cite{MRR-Vin}. \begin{rem} The spaces of jets are also useful for alternate definitions of connections and their generalizations, see \cite{KMS}, and an application of diffeologies to higher connections may be feasible. We do not have enough space here to develop this aspect of the theory. \end{rem} \subsection{Mappings with low regularity} In the sequel we consider an algebra $\mathcal{M}$ of matrices with complex coefficients, equipped with the hermitian product of matrices $(A,B) \mapsto \operatorname{tr}(AB^*).$ If there is no possible confusion, we denote the associated matrix norm by $||.||,$ or by $||.||_{\mathcal{M}}$ if necessary. We now extend the construction of Example \ref{ex:Sobolev} in the following way, along the lines of \cite{Ma2019}. \begin{Definition} Let $s \in \R,$ let $M$ be a compact boundaryless Riemannian manifold and let $N$ be a Riemannian manifold embedded in $\mathcal{M}.$ We define $H^s(M,N)$ as the completion of $C^\infty(M,N)$ in $H^s(M,\mathcal{M}).$ \end{Definition} We remark that the condition $s > \operatorname{dim}(M)/2$ is not assumed here. This implies a weaker differential structure on $H^s(M,\mathcal{M})$ when $s \leq \operatorname{dim}(M)/2,$ which can be described as follows, generalizing \cite{Ma2019} in a straightforward way: \begin{Proposition} $H^s(M,N)$ is a diffeological space, equipped with the subset diffeology inherited from $H^s(M,\mathcal{M}).$ \end{Proposition} Let us now consider a compact connected Lie group $G$ of matrices. For $s>\operatorname{dim}(M)/2,$ it is well known that $H^s(M,G)$ is a Hilbert Lie group.
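For $M = S^1,$ this Sobolev scale admits the standard Fourier description: writing $f = \sum_{n \in \Z} \hat{f}(n) e^{in\theta}$ componentwise in $\mathcal{M},$ one may take $$||f||_{H^s}^2 = \sum_{n \in \Z} (1 + n^2)^{s} \, ||\hat{f}(n)||^2_{\mathcal{M}},$$ so that $H^s(S^1,G)$ is the completion of $C^\infty(S^1,G)$ for this norm, along the lines of the definition above.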
The case where $M=S^1,$ that is, the loop space or the loop group, is the most widely studied in the literature, see e.g. \cite{PS}. The largest Sobolev order for which $H^s(S^1,G)$ fails to be a Hilbert manifold, and also a group, is $s=1/2.$ \begin{Proposition} Let $s \leq 1/2.$ Then $H^{s}(S^1,G)$ and $H^{s}_0(S^1,G)$ are Fr\"olicher spaces. \end{Proposition} This proposition completes the classical setting, in which the study of the $H^{1/2}-$metric is a key point, both from a geometric viewpoint, see e.g. \cite{F2,PS}, and from the viewpoint of shape analysis, see e.g. \cite{We}. \subsection{Groups of diffeomorphisms are regular... or not} Let $M$ be a locally compact, non-compact manifold, which we may assume to be Riemannian without restriction, equipped with its n\'ebuleuse diffeology. We equip the group of diffeomorphisms $\operatorname{Diff}(M)$ with the topology of convergence of the derivatives at any order, uniformly on each compact subset of $M,$ usually called the $C^\infty-$compact-open topology, or weak topology in \cite{Hir}. Traditionally, $\operatorname{Vect}(M)$ is given as the Lie algebra of $\operatorname{Diff}(M),$ but \cite[section 43.1]{KM} shows that this strongly depends on the topology of $\operatorname{Diff}(M).$ Indeed, the Lie algebra of vector fields described in \cite[section 43.1]{KM} is the Lie algebra of compactly supported vector fields, which is not the (full) Lie algebra $\operatorname{Vect}(M).$ In another context, when $M$ is compact, $\operatorname{Vect}(M)$ is the Lie algebra of $\operatorname{Diff}(M),$ which can be obtained by Omori's regularity theorems \cite{Om1973,Om} and recovered in \cite{CW}. What is well known is that infinitesimal actions of $\operatorname{Diff}(M)$ on $C^\infty(M,\R)$ generate vector fields, viewed as order 1 differential operators. The bracket on vector fields is given by $$(X,Y)\in \operatorname{Vect}(M) \mapsto [X,Y]= \nabla_XY - \nabla_YX,$$ where $\nabla$ is the Levi-Civita connection on $TM.$ This is a Lie bracket, stable under the adjoint action of $\operatorname{Diff}(M).$ Moreover, the $C^\infty-$compact-open topology on $\operatorname{Diff}(M)$ generates a corresponding $C^\infty-$compact-open topology on $\operatorname{Vect}(M).$ This topology is itself the $D-$topology for the functional diffeology on $\operatorname{Diff}(M).$ Following \cite[Definition 1.13 and Theorem 1.14]{Les}, $\operatorname{Vect}(M)$ equipped with the $C^\infty-$compact-open topology is a Fr\'echet vector space, and the Lie bracket is smooth. Moreover, we remark that the evaluation maps $$ T^*M \times \operatorname{Vect}(M) \rightarrow \R$$ separate $\operatorname{Vect}(M).$ Thus $\operatorname{Diff}(M)$ is a diffeological Lie group satisfying the criteria of \cite[Definition 1.13 and Theorem 1.14]{Les}. Let $F$ be the vector space of smooth maps $f \in C^\infty(]0;1[,\R).$ We equip $F$ with the following semi-norms: for each $(n,k) \in \N^* \times \N,$ $$||f||_{n,k} = \sup_{\frac{1}{n+1}\leq x \leq \frac{n}{n+1}} |D^k_xf|.$$ This is a Fr\'echet space, and its topology is the smooth compact-open topology, which is the $D-$topology of the compact-open diffeology. Let $$\A = \{ f \in C^\infty(]0;1[,]0;1[) \, | \, \lim_{x \rightarrow 0}f(x)=0 \hbox{ and } \lim_{x \rightarrow 1}f(x)=1 \}.$$ Finally, we set $$\D = \{ f \in \A \, | \, \inf_{x \in ]0;1[}f'(x) >0 \}.$$ $\D$ is a contractible set of diffeomorphisms of the open interval $]0;1[$ which is an (algebraic) group for the composition of functions.
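Concrete examples may clarify these definitions: $f(x) = \frac{x(1+x)}{2}$ belongs to $\D,$ since $f$ maps $]0;1[$ onto itself with the required limits at the boundary and $f'(x) = \frac{1}{2} + x \geq \frac{1}{2} > 0,$ whereas $g(x) = x^2$ belongs to $\A$ but not to $\D,$ since $\inf_{x \in ]0;1[} g'(x) = 0.$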
Composition of maps and inversion are smooth for the functional diffeology. Unfortunately, $\D$ is not open in $\A.$ As a consequence, we are unable to prove that it is a Fr\'echet Lie group. However, considering the smooth diffeology induced on $\D$ by $\A,$ the inversion is smooth. As a consequence, $\D$ is (only) a diffeological Lie group. Following \cite{Ma2018-2}, we can state: \begin{Theorem} $\D$ is a non-regular Fr\"olicher Lie group. \end{Theorem} We now give a consequence of this first theorem for the non-regularity of the group of diffeomorphisms of a non-compact boundaryless manifold, following \cite{Ma2018-2}. \begin{Theorem} $\operatorname{Diff}(M)$ is a non-regular Fr\"olicher Lie group. \end{Theorem} \begin{rem} In this statement, and as we stated before, $\operatorname{Diff}(M)$ is equipped with a topology which is called the weak topology in parts of the literature. It appears to be one of the weakest topologies, inherited from a Fr\"olicher structure, that makes the evaluation maps continuous and smooth. This is in great contrast with the large class of examples of manifold structures on $\operatorname{Diff}(M)$ for which $\operatorname{Diff}(M)$ is regular \cite{KM,KMR}. Therefore, as in the context of loop spaces described before, the geometry and the topology depend strongly on the chosen geometric structures. \end{rem} \subsection{Geometry of finite and infinite triangulations} The notion of triangulation is itself a bit ambiguous in the literature. One can define: \begin{itemize} \item either parametrized smooth triangulations in a manifold $M$: \begin{Definition} \label{smtriang} A \textbf{smooth triangulation} of $M$ is a family $\tau = (\tau_i)_{i \in I},$ where $I \subset \N$ is a set of indices, finite or infinite, and each $\tau_i$ is a smooth map $\Delta_n \rightarrow M,$ such that: \begin{enumerate} \item $\forall i \in I, \tau_i$ is a (smooth) embedding, i.e. a smooth injective map such that the push-forward diffeology $(\tau_i)_*\left(\p({\Delta_n})\right)$ coincides with the subset diffeology of $\tau_i(\Delta_n)$ as a subset of $M.$ \item $\bigcup_{i \in I }\tau_i(\Delta_n) = M.$ (covering) \item $\forall (i,j) \in I^2,$ $\tau_i(\Delta_n) \cap \tau_j(\Delta_n) \subset \tau_i(\partial \Delta_n) \cap \tau_j(\partial \Delta_n).$ (intersection along the borders) \item $\forall (i,j) \in I^2$ such that $ D_{i,j}=\tau_i(\Delta_n) \cap \tau_j(\Delta_n) \neq \emptyset,$ for each $(n-1)$-face $F$ of $D_{i,j},$ the ``transition map'' $\tau_j^{-1} \circ \tau_i : \tau_i^{-1}(F) \rightarrow \tau_j^{-1}(F)$ is an affine map. \end{enumerate} \end{Definition} \item or non-parametrized smooth triangulations, which consider only the images in $M$ of the standard simplex through the smooth simplexes of the triangulation. \end{itemize} In open domains $\Omega \subset \R^n,$ one often restricts to {\bf affine} triangulations, for which, for each $k \in \N_n,$ the $(n-k)-$dimensional faces lie in $(n-k)-$dimensional affine subspaces of $\R^n$ and, in the parametrized case, the embeddings are (restrictions of) affine maps. \subsubsection{Diffeologies associated to a fixed triangulation} There exist in fact many diffeologies on a fixed triangulated manifold and, more generally, on a CW-complex; the first ones were described in \cite{Nt2002,Nt2005}, many years before \cite{CW2014}, see e.g. \cite{Kih2019,Kat2020,Kat2021}, while other approaches are still in progress. Each of them has its own technical interest. We give here a selected one, adapted to our needs.
We begin with a lemma from \cite{Ma2016-2}, which is adapted from the so-called gluing results present in \cite{Nt2002,pervova2017,pervova2018} to the context of interest to us. \begin{Lemma} \label{cov} Let us assume that $X$ is a topological space, and that there is a collection $\{(X_i,\F_i,\mcc_i)\}_{i \in I}$ of Fr\"olicher spaces, together with continuous maps $\phi_i: X_i \rightarrow X.$ Then we can define a Fr\"olicher structure on $X$ by setting $$\F_{I,0} = \{f \in C^0(X,\R) \, | \, \forall i \in I, \quad f \circ \phi_i \circ \mcc_i \subset C^\infty(\R,\R)\};$$ we define $\mcc_I$ as the contours generated by the family $\F_{I,0},$ and $\F_I = \F(\mcc_I).$ \end{Lemma} Let $M$ be a smooth manifold of dimension $n.$ Let \begin{equation} \label{embtriangulation}\Delta_n= \{(x_0,...,x_n)\in \R_+^{n+1} \, | \, x_0 +... +x_n = 1\}\end{equation} be the standard $n-$simplex, equipped with its subset diffeology. It is easy to show that this diffeology is reflexive, through Boman's theorem already mentioned; hence we obtain a Fr\"olicher space $(\Delta_n, \F_{\Delta_n}, \C_{\Delta_n}),$ and we denote its associated reflexive diffeology by $\p(\Delta_n).$ Under these conditions, we equip the triangulated manifold $(M,\tau)$ with the Fr\"olicher structure $(\F_I,\mcc_I)$ generated by the smooth maps $\tau_i,$ applying Lemma \ref{cov}. The following result from \cite{Ma2020-3} is obtained from the construction of $\F$ and $\mcc:$ \begin{Theorem} The inclusion $ (M,\F,\mcc) \rightarrow M$ is smooth. \end{Theorem} \subsubsection{Geometry of the space of triangulations} Let us now fully develop an approach based on the remarks given in \cite{Ma2016-2}. For this, the space of triangulations of $\Omega$ is considered itself as a Fr\"olicher space, and the mesh of triangulations which makes the finite element method converge will take its place, as does the function $f,$ among the set of parameters $Q.$ We describe here, step by step, the Fr\"olicher structure on the space of triangulations. \begin{rem} Maps in $\F_I$ can be intuitively identified as some piecewise smooth maps $M \rightarrow \R$ which are of class $C^0$ along the 1-skeleton of the triangulation. We have also proved that $\mcc_I \subset \p_\infty(M).$ Some characteristic elements of $\mcc_I$ can be understood as paths which are smooth (in the classical sense) on the interiors of the domains of the simplexes of the triangulation, and which fulfill some more restrictive conditions while crossing the 1-skeleton of the triangulation.
For example, paths that are (locally) stationary at the 1-skeleton are in $\mcc_I.$ \end{rem} \begin{rem} While trying to define a Fr\"olicher structure from a triangulation, one could also consider $$\C_{I,0} = \left\{ \gamma \in C^{0}(\R,M)\, | \, \forall i \in I, \forall f \in C^\infty_c(\phi_i(\Delta_n),\R), f \circ \gamma \in C^\infty(\R,\R) \right\},$$ where $C^\infty_c(\phi_i(\Delta_n),\R)$ stands for the compactly supported smooth functions $M \rightarrow \R$ with support in $\phi_i(\Delta_n),$ and then define $$\F_I' = \left\{f : M \rightarrow \R \, | \, f \circ \C_{I,0} \subset C^\infty(\R,\R)\right\}$$ and $$\C_I' = \left\{c : \R \rightarrow M \, | \, \F_I' \circ c \subset C^\infty(\R,\R)\right\}.$$ We get here another construction, but one which does not recognize as smooth maps $M \rightarrow \R$ the maps $\delta_k$ already mentioned. \end{rem} Now, let us fix the set of indices $I$ and fix a so-called \textbf{model triangulation} $\tau.$ This terminology is justified by two ideas: \begin{itemize} \item Anticipating the constructions to come, this model triangulation $\tau$ will serve to define a sequence of refined triangulations. This is our ``starting triangulation'' for the refinement procedure in the finite elements method. \item Changing $\tau$ into $g \circ \tau,$ where $g$ is a diffeomorphism, we get another model triangulation, which has merely the same properties as $\tau.$ But not every ``starting'' triangulation can be obtained by transforming a fixed triangulation through a diffeomorphism. For example, on the 2-sphere, a tetrahedral triangulation $\tau_1$ and an octahedral triangulation $\tau_2$ separately generate two sequences of refined triangulations, and there is a topological obstruction to changing $\tau_1$ into $\tau_2$ by the action of a diffeomorphism of the sphere. \end{itemize} We denote by $\mathcal{T}_\tau$ the set of triangulations $\tau'$ of $M$ such that the corresponding 1-skeletons are diffeomorphic to the 1-skeleton of $\tau$ (in the Fr\"olicher category). The set $\mathcal{T}_\tau$ contains, but is not reduced to, the orbit of $\tau$ under the action of the group of diffeomorphisms. Indeed, one can reparametrize each simplex with adequate compatibility on the border. Intuitively speaking, reparametrizations need not be smooth in the usual sense while ``crossing the border of a simplex''. This choice is motivated by the Fr\"olicher structure that we identify as useful for the finite elements method, defined hereafter. \begin{Definition} Since $\mathcal{T}_\tau \subset C^\infty(\Delta_n, M)^I,$ we can equip $\mathcal{T}_\tau$ with the subset Fr\"olicher structure, in other words, the Fr\"olicher structure on $\mathcal{T}_\tau$ whose generating family of contours $\mcc$ consists of the contours in $C^\infty(\Delta_n, M)^I$ which lie in $\mathcal{T}_\tau.$ \end{Definition} We define the full space of triangulations $\mathcal{T}$ as the disjoint union of the spaces of the type $\mathcal{T}_\tau,$ with the disjoint union Fr\"olicher structure. With this notation, in the sequel and when it carries no ambiguity, a triangulation in $\mathcal{T}_\tau$ is equipped with a fixed set of indices $I$ (which is impossible to fix for $\mathcal{T}$). We now need to describe the procedure which refines the triangulation and defines a sequence of triangulations $(\tau_n)_{n \in \N}.$ We can now consider the refinement operator, which is the operator that divides the simplex $\Delta_n$ into a triangulation.
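A guiding example, before the formal definition below: the edge-midpoint subdivision of the standard 2-simplex divides $\Delta_2$ into four affine sub-simplexes by joining the midpoints of its edges, and its iterates produce the familiar sequences of finer and finer triangulations used in the finite elements method.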
\begin{Definition} Let $m \in \N,$ with $m \geq 3.$ Let $$\mu = \left\{\mu_j: \Delta_n \rightarrow \Delta_n \, | \, j \in \N_m \right\}$$ be a smooth triangulation of $\Delta_n.$ Let $\tau \in \mathcal{T}.$ Then we define $$\mu(\tau) = \{f_i \circ \mu_j \, | \, (i,j) \in I \times \N_m \hbox{ and } \tau = (f_i)_{i \in I}\}.$$ We say that $\mu$ defines a \textbf{refinement map} if $\forall p \in \N^*, \mu^p(\tau)$ is a triangulation. \end{Definition} With this definition, $\mu(\tau)$ is trivially a triangulation of $M$ whenever $\tau$ is a triangulation of $M.$ The condition imposed in the definition ensures that the refinement map sends a triangulation to another triangulation; the delicate point is that the new 0-vertices added to $\tau$ in $\mu(\tau)$ have to match. \begin{Definition} Let $\tau \in \mathcal{T}.$ We define the $\mu-$refined sequence of triangulations $\mu^\N(\tau) = (\tau_n)_{n \in \N}$ by $$ \left\{ \begin{array}{ccl} \tau_0 & = & \tau \\ \tau_{n+1} & = & \mu(\tau_n) \end{array} \right.$$ \end{Definition} \begin{Proposition} \label{seqref} The map $$\mu^\N : \mathcal{T} \rightarrow \mathcal{T}^\N$$ is smooth (with $\mathcal{T}^\N$ equipped with the infinite product Fr\"olicher structure). \end{Proposition} Let $\Omega$ be a bounded connected open subset of $\R^n,$ and assume that the border $\partial \Omega = \bar{\Omega} - \Omega$ is a polygonal curve. Since $\R^n$ is a vector space, we can consider the space of affine triangulations: $$\operatorname{Aff}\mathcal{T}_\tau = \left\{ \tau' \in \mathcal{T}_\tau \, | \, \forall i , \, \tau_i' \hbox{ is (the restriction to } \Delta_n \hbox{ of) an affine map} \right\}.$$ We define $\operatorname{Aff}\mathcal{T}$ from $\operatorname{Aff}\mathcal{T}_\tau$ the same way we defined $\mathcal{T}$ from $\mathcal{T}_\tau,$ via a disjoint union. We equip $\operatorname{Aff}(\mathcal{T}_\tau)$ and $\operatorname{Aff}(\mathcal{T})$ with their subset diffeology. We use here the notations of the last lemma. \begin{Theorem} Let $$c : \R \rightarrow \operatorname{Aff}(\mathcal{T}_\tau)$$ be a path on $\operatorname{Aff}(\mathcal{T}_\tau).$ Then $$ c \hbox{ is smooth } \Leftrightarrow \forall (i,j) \in I \times \N_{n+1}, \, t \mapsto x_j(c(t)_i) \hbox{ is smooth. }$$ \end{Theorem} \begin{Proposition} Let $\mu$ be a fixed affine triangulation of $\Delta_n.$ The map $\mu^\N$ restricts to a smooth map from the set of affine triangulations of $\Omega$ to the set of sequences of affine triangulations of $\Omega.$ \end{Proposition} \subsection{Diffeologies and implicit functions in the ILB setting} We set the following notations, from the standard reference \cite{Om} and along the lines of \cite{Ma2020-1}: let $\hbox{\bf E} = (E_\infty,(E_i)_{i \in \N})$ and $\hbox{\bf F} = (F_\infty,(F_i)_{i \in \N})$ be two ILB vector spaces. Let $O_0$ be an open neighborhood of $(0,0)$ in $E_0 \times F_0,$ and let $\hbox{\bf O} = (O_i)_{i \in \N \cup \{\infty\}}$ with $O_i = O_0 \cap ({E_i \times F_i})$ for $i \in \N \cup \{\infty\}.$ Let us now propose, along the lines of \cite{Ma2020-3}, a diffeological approach to the main result of \cite{Ma2020-1}, which we recall here. For this, we consider a function $f_0$ of class $C^\infty$ such that \begin{enumerate} \item $f_0(0; 0) = 0$ \item $D_2f_0(0; 0) = Id_{F_0}$.
\end{enumerate} Moreover, let us assume that $f_0$ restricts to $C^\infty-$maps $ f_i : U_i \times V_i \rightarrow F_i$ and defines an ILB map $$\mathbf{f}:\mathbf{E} \rightarrow \mathbf{F},$$ and let $$U_\infty = E_\infty \cap U_0 \quad \hbox{ and } \quad V_\infty = V_0 \cap F_\infty.$$ We make no further assumption here, in contrast with e.g. the classical Nash-Moser theorem \cite{Ham1982}, where additional norm estimates are necessary. Under our weakened conditions, one can state \cite[Theorem 2.2]{Ma2020-1}: \begin{Theorem} \label{1.6} There exists a non-empty domain $D_\infty \subset U_\infty,$ possibly non-open in $U_\infty,$ and a function $$u_\infty : D_\infty \rightarrow V_\infty$$ such that $$\forall x \in D_\infty, \quad f_\infty(x; u_\infty(x)) = 0.$$ Moreover, there exist a sequence $(c_i)_{i \in \N} \in (\R_+^*)^\N$ and a Banach space $B_{f_\infty}$ such that \begin{itemize} \item $B_{f_\infty} \subset E_\infty$ (as a subset), \item the canonical inclusion map $B_{f_\infty} \hookrightarrow E_\infty$ is continuous, \end{itemize} where $B_{f_\infty}$ is the domain of the following norm (and is endowed with it): $$||x||_{f_\infty} = \sup \left\{ \left. \frac{||x||_{E_i}}{c_i} \, \right| \, i \in\N \right\}. $$ Then $D_\infty$ contains $\mathcal{B},$ the unit ball (of radius $1$ centered at $0$) of $B_{f_\infty}.$ \end{Theorem} In \cite{Ma2020-1}, the question of the regularity of the implicit function is left open, because the domain $D_\infty$ is not a priori open in $O_\infty.$ This lack of regularity induces a critical breakdown when generalizing the classical proof of the Frobenius theorem to this setting. This gap is filled in \cite{Ma2020-3}, mostly by the following result: \begin{Theorem} \label{IFTh} Let $$f_i: O_i \rightarrow F_i, \quad i \in \N \cup \{\infty \},$$ be a family of maps and let $u_\infty$ be the implicit function defined on the domain $D_\infty,$ as in Theorem \ref{1.6}. Then there exists a domain $D$ with $\mathcal{B} \subset D \subset D_\infty$ such that the function $u_\infty$ is smooth for the subset diffeology of $D.$ \end{Theorem} In the same way, we can state the corresponding Frobenius theorem from the same reference: \begin{Theorem}\label{lFrob} Let $$ f_i : O_i \rightarrow L(E_i,F_i), \quad i \in \N,$$ be a collection of smooth maps satisfying the following condition: $$ i > j \Rightarrow f_j|_{O_i} = f_i,$$ and such that, $\forall (x,y) \in O_i$ and $\forall a,b \in E_i,$ $$(D_1f_i(x,y)(a))(b) + (D_2f_i(x,y))(f_i(x,y)(a))(b) = (D_1f_i(x,y)(b))(a) + (D_2f_i(x,y))(f_i(x,y)(b))(a).$$ Then, $\forall (x_0, y_0) \in O_{\infty},$ there exists a diffeological subspace $ D $ of $O_\infty$ that contains $(x_0, y_0),$ and a smooth map $J : D \rightarrow F_\infty$ such that $$ \forall (x,y) \in D, \quad D_1J(x,y) = f_i(x, J(x,y)) $$ and, if $D_{x_0}$ is the connected component of $(x_0,y_0)$ in $\{(x,y) \in D \, | \, x = x_0 \},$ $$J(x_0,\cdot) = Id_{D_{x_0}}. $$ Moreover, there exist a sequence $(c_i)_{i \in \N} \in (\R_+^*)^\N$ and a Banach space $B_{f_\infty}$ such that \begin{itemize} \item $B_{f_\infty} \subset E_\infty\times F_\infty$ (as a subset), \item the canonical inclusion map $B_{f_\infty} \hookrightarrow E_\infty\times F_\infty$ is continuous, \end{itemize} where $B_{f_\infty}$ is the domain of the following norm (and is endowed with it): $$||x||_{f_\infty} = \sup \left\{ \left. \frac{||x||_{E_i \times F_i}}{c_i} \, \right| \, i \in\N \right\}. $$ Then $D$ contains $\mathcal{B},$ the unit ball (of radius $1$ centered at $0$) of $B_{f_\infty}.$ \end{Theorem}
\section{Introduction} \label{sec:intro5} Intervening massive structures can, via weak gravitational lensing, alter the density of high-redshift objects detected behind them. Myers et al. (2003) demonstrated the anti-correlation between faint QSOs and groups of galaxies, confirming the earlier detection by Boyle, Fong \& Shanks (1988). Using models of lensing by simple haloes (Croom \& Shanks 1999) they concluded that, if due to gravitational lensing, the observed anti-correlation favoured more mass in groups of galaxies than accounted for in a universe with density parameter $\Omega_m = 0.3$. Though statistical lensing is most apparent for larger concentrations of mass, such as groups of galaxies, most recent attempts to measure and model associations between QSOs and foreground mass have focussed on the cross-correlation between QSOs and individual galaxies. Seldner \& Peebles (1979) were among the first authors to record a statistically significant clustering of individual galaxies with bright, high-redshift QSOs, although Tyson (1986) appears to be the first to have mentioned lensing as a possible explanation. Webster et al. (1988) developed a statistical lensing explanation for the association of QSOs with foreground galaxies, suggesting that more lensing mass was being traced than expected \cite{Kov89,Nar89,Sch89}. Since then, more authors have found positive correlations between optically-selected QSOs and galaxies \cite{Tho95,Wil98}. Many more have measured positive correlations between galaxies and radio-selected QSOs \cite{Fug90,Bar97,Ben97,Nor00}, for which a larger lensing effect is expected. Very few {\it anti}-correlations between QSOs and galaxies have been detected \cite{Ben97,Fer97}, and these were mainly attributed to QSO-selection effects. However, as detected by Myers et al. (2003), an anti-correlation between {\it faint} QSOs and foreground matter is predicted by statistical lensing. The galaxy distribution, relative to underlying mass, is a function of bias, whereas the QSO distribution directly traces the mass that lensed it. Thus, a comparison of the QSO-galaxy cross-correlation with the galaxy auto-correlation can be used to constrain galaxy bias. Recent observations of positive correlations between bright QSOs and galaxies \cite{Wil98,Gaz03} suggest a very low value for the bias parameter ($b$~$\sim$~0.1). The work of Gazta\~naga (2003) suggests $b$~$\sim$~0.1 on sub-Megaparsec scales. The work of Williams \& Irwin (1998) suggests $b$~$\sim$~0.1 on Megaparsec scales and is consistent with this value on sub-Megaparsec scales. However, measurements of the strength of bias from clustering in galaxy surveys out to (redshift) $z \sim 0.2$ \cite{Ver02}, comparisons of local galaxy clustering with the Cosmic Microwave Background \cite{Lah02} and weak lensing shear \cite{Hoe02} at $z \sim 0.35$, seem to converge on a linear model of bias with $b$ of the order of unity on scales of $5-100 \Mpc$. Taken together, these results suggest that either there is a strong scale-dependence to bias, or that there exists an unexpected, strong systematic effect inducing positive correlations between QSOs and foreground galaxies. The faint flux limit of the 2dF QSO Redshift Survey \cite{Cro04}, henceforth referred to as the `2QZ', may test these two possibilities.
As the 2QZ number-magnitude count slope at the survey flux limit is relatively flat, statistical lensing predicts an anti-correlation between 2QZ QSOs and foreground galaxies, as opposed to the positive correlations predicted for brighter QSO samples and detected by Williams \& Irwin (1998) and Gazta\~naga (2003). As there is no other explanation for such opposite signals in the different samples, such a detection would provide a strong confirmation of the lensing hypothesis, and hence of the constraints on $b$. There are additional reasons why the 2QZ is appropriate for studying statistical lensing. The 2QZ contains around half as many confirmed stars as QSOs, which may be used as a control to determine whether any cross-correlation arises from target selection. The 2QZ can also limit the most likely systematic, intervening dust: by considering QSO colours, the extent to which dust in galaxies could obscure background QSOs may be determined. Finally, the known redshifts of 2QZ QSOs can ensure that they are physically removed from a galaxy sample. This paper concerns the cross-correlation between faint QSOs and foreground galaxies, and its implications for cosmological parameters, particularly for galaxy bias. In Section~\ref{sec:data5} we outline the samples of QSOs and galaxies we shall cross-correlate. In Section~\ref{sec:cross5} we outline our cross-correlation methodology, and investigate possible explanations for the resulting signal. Section~\ref{sec:model5} introduces models we use to investigate the cross-correlation of QSOs and galaxies in terms of statistical lensing. Section~\ref{sec:discuss5} applies these lensing models and discusses implications for cosmological parameters, especially how galaxies are biased relative to underlying matter. Finally, in Section~\ref{sec:summary5}, we summarise the main results of this paper. \section{QSO and Galaxy Samples} \label{sec:data5} \begin{figure} \centering \includegraphics[width=8cm]{nmagint.eps} \caption{\small{ The QSO integrated number counts, for the 2QZ, in 0.2~mag bins, with Poisson errors. The line is a smoothed power law fit to the differential number counts. Brighter data points are from the 6QZ. Also displayed are the faint data from Boyle, Jones \& Shanks (1991) and Koo \& Kron (1988), which have been offset slightly to prevent the points from merging.}} \label{fig:Nmag.eps} \end{figure} The QSO and galaxy samples we will cross-correlate are essentially the same as described by Myers et al. (2003), so we will only outline them briefly. QSOs are taken from the final 2QZ catalogue \cite{Cro04}. The 2QZ comprises two $5\deg \times 75\deg$ declination strips, one in an equatorial region in the North Galactic Cap (centred at $\delta= 0^{\circ}$ with 09$^{\rm h}$50$^{\rm m}$ $\la \alpha \la $ 14$^{\rm h}$50$^{\rm m}$), and one at the South Galactic Pole (centred at $\delta = -30^{\circ}$, with 21$^{\rm h}$40$^{\rm m}$ $\la \alpha \la $ 03$^{\rm h}$15$^{\rm m}$). We will refer to these regions as the `NGC' and `SGC' respectively. QSOs are selected by ultra-violet excess (UVX) in the $u-b_{\rm J}:b_{\rm J}-r$ plane, in the magnitude range $18.25 \leq b_{\rm J} \leq 20.85$. The 2QZ colour selection is $\ga90$ per cent complete for UVX QSOs over the redshift range $0.3<z<2.2$. At higher redshifts the UVX technique fails as the Lyman-alpha forest enters the $u$ band, and the completeness of the survey rapidly drops.
Unless otherwise specified, we will consider only QSOs with a redshift $z>0.4$, to prevent the overlap in real space of the QSO and galaxy samples. We shall work with only the most definitively identified QSO sample, the so-called quality `11' sample. Quality `11' denotes a sample with the best level of reliability for both the QSO identification and redshift estimate (see Croom et al. 2004 for further explanation). These restrictions in redshift and spectroscopic quality leave 12042 QSOs in the SGC and 9565 in the NGC. We shall also consider the supplemental 6dF QSO survey (henceforth 6QZ), which contains 376 QSOs after the application of our $z>0.4$ and `11' only spectroscopic identification criteria. For further details of the 2QZ and 6QZ, see Croom et al. (2004). The expected strength of lensing-induced correlations between galaxies and a magnitude-limited sample of QSOs depends on the slope of the integrated number-magnitude counts, $\alpha$, fainter than the QSO sample's limit \cite{Nar89}. An enhancement of QSOs is expected behind foreground lenses when the slope of the QSO number-magnitude counts is greater than 0.4, and a deficit when the slope is less than 0.4. We thus need to estimate this slope for any lensing analysis. In Fig.~\ref{fig:Nmag.eps} we reproduce the QSO $N(<m)$ for the 2QZ from Myers et al. (2003). The points are QSO number counts in 0.2~mag bins with Poisson errors, and have been corrected for incompleteness and absorption by dust in our Galaxy. The line is a smoothed power law fit to the differential counts of the form \begin{eqnarray} \frac{\rm d \it N}{\rm d \it m} = \frac{N_0}{10^{-\alpha_{\rm{d}}(m-m_0)} + 10^{-\beta_{\rm{d}}(m-m_0)}} \label{eq:slopefit} \end{eqnarray} with a steep bright-end slope ($\beta_{\rm{d}} = 0.98$), a knee at $m_0 = 19.1$, and a flatter faint-end slope ($\alpha_{\rm{d}} = 0.15$). Myers et al. (2003) marginalised across all of the parameters in this fit and determined a faint-end slope of $\alpha =0.29\pm0.015$, noting that incompleteness corrections might raise the $1\sigma$ error to as much as 0.05. In this paper, we will assume a slope of $\alpha =0.29\pm0.03$ at the survey flux limit. Recent observations, attributed to statistical lensing, of positive correlations between QSOs and galaxies \cite{Wil98,Gaz03} have used relatively bright QSO samples, which only probe the steep QSO number-magnitude counts below the knee. The large size and faint flux limit of the 2QZ allow us to probe significantly beyond the knee and test the lensing prediction of an anti-correlation between {\it faint} QSOs and galaxies. The southern galaxy sample we consider in this paper is taken from the APM Survey \cite{Mad90b}, which is considered photometrically complete to a magnitude of $b_{\rm J}<20.5$ \cite{Mad90a}. The northern galaxy sample is taken from the Sloan Digital Sky Survey (henceforth SDSS) Early Data Release (henceforth EDR) of June 2001 \cite{Sto02}. The SDSS EDR sample is transformed into the $b_{\rm J}$ band from the SDSS $g'$ and $r'$ bands using the colour equations of Yasuda et al. (2001) and cut to $b_{\rm J}<20.5$ to match the APM limit. Both galaxy samples are restricted to areas in which they overlap the 2QZ strips. This leaves nearly 200,000 galaxies in the SGC 2QZ strip and 100,000 in the NGC 2QZ strip.
Note that the SDSS EDR only partially fills the 2QZ NGC strip. \section{QSO and Galaxy Cross-Correlation Functions} \label{sec:cross5} Correlation functions \cite{Pee80} are the main statistic of choice in studies of how QSO and galaxy distributions are related, although how the statistic is estimated varies considerably. In this section, we measure the two-point cross-correlation between SDSS or APM galaxies and 2QZ (or 6QZ) QSOs. We study possible selection effects and different explanations for the signal. Notably, the expected variation of the QSO-galaxy cross-correlation with redshift and (especially) magnitude of the QSO sample is a strong prediction of the statistical lensing hypothesis, and is something we may be able to test using 2QZ data. \subsection{Correlation Estimator and Errors} To measure the two-point correlation function, $\omega(\theta)$, we use the estimator \cite{Pee80} \begin{equation} \omega(\theta) = \frac{DD_{12}(\theta)\bar{n}}{DR_{12}(\theta)} - 1, \label{equation:corrfunc5} \end{equation} \noindent where $DD_{12}$ denotes the number of data-point {\it pairs} drawn from populations 1 and 2 respectively with separation $\theta$. For $DR_{12}$ the population 2 data are replaced with a catalogue of random points with the same angular selection function as the data. The factor $\bar{n}$ is the ratio of the size of the random catalogue to that of the data. Throughout this paper, we produce random catalogues with $\bar{n}=10$, keeping statistical noise low while allowing efficient calculation. For further details on the 2QZ angular selection function see Outram et al. (2003) and Croom et al. (2004). Note that the cross-correlation function is equivalent in either `direction' (i.e. under the exchange of labels $1$ and $2$ in Equation~\ref{equation:corrfunc5}) {\it provided that the angular selection of sources is accounted for by an appropriate random catalogue}. In general, the angular selection functions of, say, a galaxy population and a QSO population are not the same. Further, the angular completeness of a given survey is generally a function of redshift and magnitude, so considerable care should be taken in constructing random catalogues for population subsamples. Numerous estimates of the error on the cross-correlation have been proposed. We will consider three of these. One of the simpler forms is the Poisson error based on the number of data-data pairs (across the entire survey) in the angular bin probed: \begin{equation} \sigma_{\omega}^2(\theta) = \frac{\left[1+\omega(\theta)\right]^2}{DD(\theta)} \label{equation:poisDD} \end{equation} \noindent where we will use $\sigma_{\omega}$ to denote the standard error on the correlation function. Given that we wish to measure the significance of the correlation function compared to the null hypothesis represented by the random catalogue, we might instead use the Poisson error based on the number of data-random pairs, which could be achieved by substituting $DR/\bar{n}$ for $DD$ in Equation~\ref{equation:poisDD}. The hypothesis that the error on the correlation function is Poisson is not strictly fair: the pair counts can be highly correlated, as the same points appear in different pairs that are included in many different bins, especially on large scales. Some authors \cite{Sha94,Cro96} have suggested corrections to the Poisson form of error. Instead of such corrections, we will consider errors from field-to-field variations in the correlation function (see, e.g., Stevenson et al. 1985).
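For concreteness, the following minimal sketch illustrates Equations~\ref{equation:corrfunc5} and \ref{equation:poisDD} on small mock catalogues. The helper names and the brute-force pair counting are ours, for illustration only; a real analysis would use the survey completeness masks and a tree-based pair counter.

\begin{verbatim}
# Minimal sketch of Equations (corrfunc5) and (poisDD); assumes
# small (ra, dec) arrays in degrees -- illustrative code only.
import numpy as np

def angsep_deg(ra1, dec1, ra2, dec2):
    # All-pairs angular separations via the spherical law of cosines.
    r1, d1 = np.radians(ra1), np.radians(dec1)
    r2, d2 = np.radians(ra2), np.radians(dec2)
    c = (np.sin(d1[:, None]) * np.sin(d2[None, :]) +
         np.cos(d1[:, None]) * np.cos(d2[None, :]) *
         np.cos(r1[:, None] - r2[None, :]))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def cross_w(bins, qso, gal, ran):
    # w(theta) = DD_12 * nbar / DR_12 - 1, with Poisson errors;
    # ran mimics the angular selection of the galaxy sample.
    nbar = len(ran[0]) / float(len(gal[0]))
    dd = np.histogram(angsep_deg(*qso, *gal).ravel(), bins)[0]
    dr = np.histogram(angsep_deg(*qso, *ran).ravel(), bins)[0]
    w = dd * nbar / dr - 1.0
    return w, (1.0 + w) / np.sqrt(dd)        # Equation (poisDD)

rng = np.random.default_rng(1)
qso = (rng.uniform(0, 5, 500), rng.uniform(-2.5, 2.5, 500))
gal = (rng.uniform(0, 5, 2000), rng.uniform(-2.5, 2.5, 2000))
ran = (rng.uniform(0, 5, 20000), rng.uniform(-2.5, 2.5, 20000))
w, sig = cross_w(np.logspace(-2, 0, 10), qso, gal, ran)  # w ~ 0
\end{verbatim}

For uniform mocks such as these, $w$ simply fluctuates about zero, as in the Monte Carlo tests described below.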
Our data samples are split into 30 subsamples. This is arbitrary in the case of the SDSS EDR but deliberately reflects plate boundaries in the case of the APM data (and, by extension, the 2QZ, which is derived from APM photometry). The cross-correlation function is measured for each subsample and the variance between the subsamples is determined. The standard error on $\omega(\theta)$ is then the standard error between the subsamples, weighted by the number of data-random pairs in each subsample to account for the differing numbers of objects \begin{equation} \sigma_{\omega}^2(\theta) = \frac{1}{N-1}\sum_{L=1}^{N}\frac{DR_L(\theta)}{DR(\theta)}[\omega_L(\theta) - \omega(\theta)]^2. \label{equation:FTFerr} \end{equation} \noindent The weighting by number of objects is essential, mainly because plates in the southern APM immediately East of 00h RA cover less area than other plates at the same declination, but also because of varying completeness in the QSO catalogue. We will refer to this error as {\it field-to-field error}. We will also consider a similar estimate of error to Equation~\ref{equation:FTFerr}, essentially a weighted form of the estimate proposed by Scranton et al. (2002) when calculating the auto-correlation of galaxies in the SDSS EDR \begin{equation} \sigma_{\omega}^2(\theta) = \sum_{L'=1}^{N}\frac{DR_{L'}(\theta)}{DR(\theta)}[\omega_{L'}(\theta) - \omega(\theta)]^2. \label{equation:JACKerr} \end{equation} \noindent Note that $L$ (in Equation~\ref{equation:FTFerr}) refers to a subsample on one of our 30 plates, whereas $L'$ refers to the subsample remaining {\it on the other 29 plates}. In other words, the procedure outlined in Equation~\ref{equation:JACKerr} is to remove each of the plates in turn and to calculate the variance between the samples on the 29 remaining plates. We shall refer to this as {\it jackknife error}. The unweighted version of this estimate agrees well with simulations (see the appendix of Zehavi et al. 2002). Finally, we will measure the statistical correlation (not to be confused with the correlation {\it function}) between adjacent bins of $\omega(\theta)$. The statistical correlation is related to the covariance, and is essentially an estimate of how independent the bins are: whether bin $i$ has a tendency to take the same value as bin $j$. We might expect the covariance of the correlation function to be high, as the same data points can appear in different pairs that are counted in many different bins. The statistical correlation takes the following form \begin{equation} Corr(i,j) = \frac{Cov(i,j)}{\sigma(\theta_i)\sigma(\theta_j)} \label{equation:covar} \end{equation} where the covariance, $Cov(i,j)$, is defined as \begin{equation} Cov(i,j) = \frac{1}{N}\sum_{M=1}^{N}(\omega_M(\theta_i) - \bar{\omega}(\theta_i))(\omega_M(\theta_j) - \bar{\omega}(\theta_j)) \end{equation} \noindent and $\theta_i$ and $\theta_j$ are two bins at different scales, while $\bar{\omega}$ and $\sigma$ represent the mean and standard deviation over the $N$ realisations, indexed by $M$. The correlation is $0$ if the bins are independent, approaches $1$ if an increase in bin $i$ leads to an increase in bin $j$, and approaches $-1$ if an increase in bin $i$ leads to a decrease in bin $j$. \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{sigma.eps} \caption[\small{Error Estimates and Estimator Check.}]{In the upper panel, we compare the Poisson, field-to-field and jackknife error estimates with a Monte Carlo error estimate determined from 100 Monte Carlo simulations of the NGC and SGC strips.
In the lower panel we plot the mean of the 100 simulations for the NGC and SGC individually and for the NGC and SGC combined.} \label{fig:sigma5} \end{figure} \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{sigmacomp.eps} \caption[\small{Error Estimates Compared to Monte Carlo Estimates.}]{In the upper panel we display the error on the correlation function taken in ratio to the Monte Carlo error estimate (1 standard deviation across the Monte Carlo simulations), for each estimate of error mentioned in the text. In all cases the errors are determined for the combined NGC and SGC sample. A dashed line is drawn for comparison at $\sigma$/$\sigma_{MC} = 1$ (where the Monte Carlo estimate itself would lie). The lower panel depicts the covariance between adjacent bins determined from 100 Monte Carlo realisations.} \label{fig:sigmacomp5} \end{figure} To test the accuracy of our correlation function estimator and the associated error, we have created 100 Monte Carlo simulations of the 2QZ. Each simulation contains the same number of ($z>0.4$, identification quality of `11') QSOs as the 2QZ and has the same angular selection function, neglecting any intrinsic QSO clustering. We cross-correlate the simulated samples against APM galaxies (for the SGC strip) and the SDSS EDR (for the NGC strip). For each sample, the estimates of error outlined in this section are calculated and averaged. The mean value of the cross-correlation across the 100 samples is taken and the standard deviation ($1\sigma$) is recorded as the Monte Carlo error. To avoid confusion between our `Monte Carlo error' and the statistical `standard error' we shall refer to the Monte Carlo error as the `Monte Carlo deviation'. In the lower panel of Fig.~\ref{fig:sigma5}, we display the mean cross-correlation signal across the 100 Monte Carlo simulations. The agreement between the NGC and SGC results is excellent: better than 12~per~cent of the Monte Carlo deviation on the NGC mean over all scales. The deviation of the combined result from zero, the expected result, is similarly no more than 12~per~cent of the combined Monte Carlo deviation over all scales. We note that the shot noise (i.e. the standard error) would comprise 10~per~cent of the Monte Carlo deviation (as we have 100 samples). Although the correlation function diverges on scales smaller than 0.3~arcmin, the error is sufficiently large on these scales that we may consider the correlation estimator probably valid on all of the scales plotted and certainly valid on scales larger than 0.3~arcmin. Note that the consistency of the correlation estimator across all scales indicates that the software we use to calculate the estimator is robust. Also, note that the average Monte Carlo sampling of $\omega$ contains 1180 random points in the smallest bin and has a jackknife error of 0.098; this means that the typical fluctuation due to the finite size of our random catalogue is roughly three-and-a-half times smaller than, and hence dwarfed by, the quoted jackknife error, justifying our choice of $\bar{n}=10$ in Equation~\ref{equation:corrfunc5}. In the upper panel of Fig.~\ref{fig:sigma5} we compare the various error estimates. The general trend of the errors is in good agreement, although the Poisson error estimate begins to under-predict the error (as compared to the Monte Carlo estimate) on larger scales.
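To make Equations~\ref{equation:FTFerr} and \ref{equation:JACKerr} concrete, here is a minimal sketch (our own illustrative code; it assumes that per-plate estimates $\omega_L$ and data-random pair counts $DR_L$ in a single angular bin are already available, and takes the pair-weighted mean as the global estimate):

\begin{verbatim}
# Weighted field-to-field (FTFerr) and jackknife (JACKerr) errors
# in one angular bin; illustrative code only.
import numpy as np

def ftf_and_jackknife(w_L, dr_L):
    n = len(w_L)                         # e.g. 30 plates
    frac = dr_L / dr_L.sum()             # DR_L / DR
    w_all = np.sum(frac * w_L)           # pair-weighted global w
    # field-to-field: weighted variance between single plates
    var_ftf = np.sum(frac * (w_L - w_all) ** 2) / (n - 1)
    # jackknife: drop each plate, recombine the remaining n-1
    dr_jk = dr_L.sum() - dr_L                          # DR_{L'}
    w_jk = (np.sum(dr_L * w_L) - dr_L * w_L) / dr_jk   # w_{L'}
    var_jk = np.sum((dr_jk / dr_L.sum()) * (w_jk - w_all) ** 2)
    return np.sqrt(var_ftf), np.sqrt(var_jk)

rng = np.random.default_rng(0)
w_L = rng.normal(-0.007, 0.01, 30)           # mock per-plate w
dr_L = rng.integers(500, 1500, 30).astype(float)
sig_ftf, sig_jk = ftf_and_jackknife(w_L, dr_L)
\end{verbatim}

Looping this over angular bins yields error estimates of the kind compared in Fig.~\ref{fig:sigmacomp5}.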
We assume that the Monte Carlo deviation represents a fair estimate of the true error on the correlation function and, in Fig.~\ref{fig:sigmacomp5}, plot the various errors taken in ratio to the Monte Carlo deviation. It is obvious that the Poisson error is an underestimate on $\ga5$~arcmin scales. We have also estimated the Poisson error by substituting $DR$ for $DD$ in Equation~\ref{equation:poisDD}. Such a change has a negligible effect, as the simulated QSOs have a largely random distribution. The jackknife and field-to-field error estimates are much better than the Poisson estimate and constitute reasonable estimates on scales from 0.2~arcmin to nearly a degree. The field-to-field error, however, is perhaps a 20~per~cent overestimate on larger scales, where the jackknife error remains in line with the Monte Carlo estimate. The jackknife error estimate has an additional advantage over the field-to-field estimate. When the number of data points on any plate approaches zero, the field-to-field estimate is ill-defined, but the jackknife estimate remains accurate until the number of data points across the entire survey approaches zero. In Fig.~\ref{fig:sigmacomp5}, we also plot the correlation due to the covariance between {\it adjacent} bins. This covariance is low: almost within the 10~per~cent expected standard error. However, this covariance measure demonstrates only that no correlation between adjacent bins is artificially introduced by our random catalogues; it does not guarantee that the data will show no covariance. We will quote the significance of results by estimating the correlation function and its associated error in one `large' bin (usually out to 10~arcmin), to minimise any covariance in the data on these scales. Throughout this paper, we adopt the estimate of the correlation function defined by Equation~\ref{equation:corrfunc5}, together with the jackknife error estimate of Equation~\ref{equation:JACKerr}, as both seem fair over the scales we will probe. \subsection{The Cross-Correlation of 2QZ QSOs and Galaxies} In Fig.~\ref{fig:qsoall5} we plot the cross-correlation of all 2QZ QSOs that meet our selection criteria ($z > 0.4$ and 2QZ identification of `11') against SDSS EDR galaxies (in the 2QZ NGC strip) and APM galaxies (in the 2QZ SGC strip). The upper panel of Fig.~\ref{fig:qsoall5} shows the cross-correlation individually for the strips: galaxies are anti-correlated with QSOs in both. The anti-correlation is slightly stronger in the NGC strip, but not significantly so. In the lower panel of Fig.~\ref{fig:qsoall5}, we plot the cross-correlation for both `directions'. A significant anti-correlation is detected irrespective of whether we centre on galaxies and count QSOs or centre on QSOs and count galaxies, indicating that our random catalogues consistently account for the angular selection of QSOs and galaxies. If we bin the data displayed in Fig.~\ref{fig:qsoall5} in a single bin of extent 10~arcmin and estimate the correlation function and ($1\sigma$) jackknife error, we find there is an anti-correlation of strength $\omega(<10\,{\rm arcmin}) = -0.007\pm0.0025$ (10~arcmin is about 1~$\Mpch$ at the median galaxy redshift of 0.15). The significance of this result is 2.8$\sigma$ for $\omega_{qg}$ and 2.2$\sigma$ for $\omega_{gq}$. Although the anti-correlation is slightly less significant for $\omega_{gq}$, it is also slightly stronger, i.e. $\omega(<10\,{\rm arcmin}) = -0.008\pm0.0035$.
Further, particularly on large scales ($>8$~arcmin), the error in $\omega_{qg}$ is 60-80~per~cent of the error in $\omega_{gq}$. It is unclear exactly why the errors are slightly larger when the analysis is carried out centring on galaxies, but it is likely due to small discrepancies, perhaps second-order gradients, between the random catalogues used to mimic the different angular selection functions of the QSO and galaxy samples. The anti-correlation displayed in Fig.~\ref{fig:qsoall5} is very strong on small scales and the agreement between the two `directions' of the correlation function is excellent. For instance, both $\omega_{gq}$ and $\omega_{qg}$ show an anti-correlation of strength -0.02 out to 3~arcmin at 3$\sigma$ significance. Note that the innermost bin barely contributes to this particular signal, as it contains less than one hundredth of the pairs in the bin at 2~arcmin. When modelling, we will consider $\omega_{qg}$, the slightly weaker, slightly more significant result. This is mainly because, ultimately, when discussing the anti-correlation in terms of lensing, we will compare the galaxy-galaxy auto-correlation, $\omega_{gg}$, to $\omega_{qg}$, which shares the same random catalogue. It is useful, though, to have shown a significant, consistent anti-correlation between QSOs and galaxies irrespective of the `direction' of the cross-correlation. \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{allqso.eps} \caption[\small{The Cross-Correlation of 2QZ QSOs and Galaxies.}]{The cross-correlation of 2QZ QSOs against SDSS EDR galaxies (in the NGC 2QZ strip) and APM galaxies (in the SGC 2QZ strip). The upper panel displays the cross-correlation signal for the 2 strips individually. The lower panel displays the signal combined for both strips. The lower panel shows estimates for both `directions', centring on QSOs and counting galaxies ($\omega_{qg}$) and centring on galaxies and counting QSOs ($\omega_{gq}$). Error bars represent $1\sigma$ jackknife errors. Labels note the number of objects of each population present within the confines of the 2QZ boundaries. Points within the same bin have been offset slightly for ease of display.} \label{fig:qsoall5} \end{figure} \subsection{Is the Anti-Correlation Between Galaxies and QSOs a Selection Effect?} \label{sec:stars5} Certainly, there is a significant anti-correlation between galaxies and 2QZ QSOs. It is natural to ask whether the signal arises when constructing the QSO or galaxy catalogues. There are several selection effects that might produce an anti-correlation between QSOs and galaxies. The initial construction of the 2QZ UVX target catalogue removed extended images. Although all `high' redshift ($z \ga 0.5$) QSOs should appear stellar, they may merge with foreground objects to look extended on images. A bright ($b_{\rm J} < 19.5$) subsample of these extended images should end up in the 2dF Galaxy Redshift Survey (henceforth 2dFGRS) and thus appear in deficit in the 2QZ. It turns out that this affects separations between galaxies and QSOs on scales of about 8~arcsec \cite{Mad02}, smaller than the scales we are probing. Restrictions on the placement of 2dF fibres mean that the minimum angle between objects in a 2QZ field is about 30~arcsec, which might mean a paucity of objects at small separations.
However, this restriction should not affect QSO-galaxy separations: although the 2QZ was carried out simultaneously with the 2dFGRS, QSO observations were given a higher observational priority, so we would expect few QSOs to be rejected because of their proximity to galaxies. Additionally, the vast majority of fields in the 2dF survey were repeatedly observed to circumvent fibre-allocation problems. Myers et al. (2003) suggest that a 30~arcsec restriction on the minimum angle between 2dF objects in any field leaves no significant signature on these scales. The easiest way to judge measurement systematics in the 2QZ is to consider a control sample of objects that underwent data reduction identical to that of the QSOs but should display no cosmological signatures. There are 10587 stars (with `11' identification quality) in the 2QZ. In Fig.~\ref{fig:starall5} we plot the cross-correlation of these stars against our galaxy samples. The upper panel of Fig.~\ref{fig:starall5} compares the cross-correlation estimate for the NGC and SGC 2QZ strips. The agreement is reasonable, although the NGC sample is slightly more positively correlated with galaxies. Note that we might not expect the stellar correlation functions to be zero on all scales: gradients exist in local structure which are not recreated in the random catalogues and which will make the correlation signal higher or lower on average. However, we would expect the stellar signature to be {\it flat} across the scales of interest. This is highlighted in the lower panel of Fig.~\ref{fig:starall5}, where we display the star-galaxy and galaxy-star cross-correlations. Unlike in the QSO-galaxy case, there is a discrepancy in the large-scale zero-point of the correlation function that depends upon the `direction' of the cross-correlation. As we might expect, when the random catalogue is constructed to match the stellar distribution, the zero-point of the correlation function drops significantly (to about -0.03). The large-scale value of the correlation function {\it is} zero when the random catalogue is constructed to match the galaxy distribution, which is free from (at least genuine physical) gradients. The key point is that the correlation functions for stars and galaxies are flat (deviating at most 0.5$\sigma$ from their respective zero-points on scales larger than $\theta\sim0.4$~arcmin), indicating that systematics in the construction of the galaxy and QSO samples are low and hence induce no false correlations in our QSO-galaxy cross-correlations. The innermost points ($\theta<0.4$~arcmin) plotted in Fig.~\ref{fig:starall5} seem to genuinely deviate from the zero-point and may be representative of merged images, fibre placement signatures or some other systematic. We will continue to plot these points in figures in this section but will not consider them in any modelling analysis or quotes of the significance of a signal. \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{allstar.eps} \caption[\small{The Cross-Correlation of 2QZ Stars and Galaxies.}]{The cross-correlation of 2QZ stars against SDSS EDR galaxies (in the NGC 2QZ strip) and against APM galaxies (in the SGC 2QZ strip). The upper panel displays the cross-correlation signal for the 2 strips individually. The lower panel displays the signal combined for both strips. The lower panel shows estimates for both `directions', centring on stars and counting galaxies ($\omega_{sg}$) and centring on galaxies and counting stars ($\omega_{gs}$).
Error bars represent $1\sigma$ jackknife errors. Labels note the number of objects of each population present within the confines of the 2QZ boundaries. Points within the same bin have been offset slightly for ease of display.} \label{fig:starall5} \end{figure} \subsection{Cosmological Explanations For the Anti-Correlation Between QSOs and Galaxies} An obvious physical effect, other than lensing, that could cause an anti-correlation between QSOs and galaxies is dust in galaxies obscuring background QSOs. This would lead to a dearth of QSOs around galaxies. Alternatively, if the anti-correlation {\it is} due to statistical lensing, we might be able to see the variation of the cross-correlation between galaxies and QSOs with the magnitude of the QSO sample. \subsubsection{Is Intervening Dust the Cause of the Anti-Correlation Between QSOs and Galaxies?} \label{dust} \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{colour.eps} \caption[\small{The Distribution of QSO Colours Around Galaxies.}]{Limits on the amount of `typical' dust that could account for the QSO-galaxy anti-correlation measured in Fig.~\ref{fig:qsoall5}. The upper panels show the average colour of QSOs in bins centred on SDSS EDR galaxies (in the NGC 2QZ strip) and APM galaxies (in the SGC 2QZ strip). Error bars in these panels represent the standard deviation (1$\sigma$) in 1000 bootstrapped simulations with the same QSO positions as the 2QZ but scrambled colours. The lower panels translate these 1$\sigma$ error bars into limits on absorption in the $b_{\rm J}$ band ($A_{b_{\rm J}}$). The absorption limits are translated into limits on the QSO-galaxy cross-correlation using a simple model outlined in the text and displayed against the points and errors on $\omega_{qg}$ from Fig.~\ref{fig:qsoall5}, represented by triangles.} \label{fig:dust5} \end{figure} We can use colour information to determine if intervening dust preferentially distributed around galaxies could remove QSOs from the 2QZ catalogue out to 10~arcmin ($\sim 1~\Mpch$). Our method is similar to the correlation estimator of Equation~\ref{equation:corrfunc5} but instead of counting the average {\it number} of QSOs in differential annuli around galaxies, we work out the average {\it colour} of QSOs. Errors are calculated using 1000 bootstrapped simulations with the same QSO positions as the 2QZ catalogue but scrambled colours. Knowing the expected QSO colour on degree scales, we can calculate the allowed colour excess, $E(B-V)$, in a given bin for both sets of measured 2QZ colours, $u-b_{\rm J}$ and $b_{\rm J}-r$. Schlegel, Finkbeiner \& Davis (1998) provide tables to convert from the colour excess to the amount of absorption by dust. Thus we can constrain the amount of dust around galaxies along QSO lines of sight. Our 1$\sigma$ limits on absorption can be converted into a limit on the correlation function using a simple model outlined in Boyle, Fong \& Shanks (1988). Dust around galaxies will cause an absorption (which we calculate from our measured colour excesses) that will alter the magnitude limit of the QSO number counts close to galaxies \begin{equation} \omega + 1 = \frac{N(<m)_{Gal}}{N(<m)_{Field}} = \frac{N(<m -A_{b_{\rm J}})_{Field}}{N(<m)_{Field}} \label{equation:absorption5} \end{equation} \noindent where $N(<m)$ is the integrated number counts, $A_{b_{\rm J}}$ is the absorption in the $b_{\rm J}$ band and the subscripts `Gal' and `Field' represent values near to galaxies and in the field, respectively. 
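Before applying the full number counts, it is instructive to record the simple analytic limit of Equation~\ref{equation:absorption5} (an illustrative reduction only, not used in our fits): if the integrated counts were a pure power law, $N(<m)\propto 10^{\alpha m}$, then
\begin{equation}
\omega = 10^{-\alpha A_{b_{\rm J}}} - 1 \approx -\alpha A_{b_{\rm J}}\ln 10,
\end{equation}
\noindent so that, for a faint-end slope of $\alpha \simeq 0.29$, even $A_{b_{\rm J}} = 0.01$~mag of absorption induces an anti-correlation of only $\omega \simeq -0.007$.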
Rather than adopting this power-law simplification, however, we use, to be exact, the full fitted form of the number counts of 2QZ QSOs (see Fig.~\ref{fig:Nmag.eps}). We now have a simple model that converts the error on our measurement of the average colour of QSOs near galaxies into a limit on the observed anti-correlation due to dust. In the upper two panels of Fig.~\ref{fig:dust5}, we show the average colour of QSOs around our galaxy samples with bootstrapped error bars, for the two measured 2QZ colours. A solid line marks the expected value from our scrambled bootstrap simulations. Except, perhaps, for the innermost bin, there is no significant deviation of the colour of QSOs from the expected value. The reddening displayed in the innermost bin of the $u-b_{\rm J}$ colours corresponds to a shift to the blue in the $b_{\rm J}-r$ colours, suggesting it is a small-scale measurement artefact or a statistical fluctuation, rather than an effect of dust. In the lower panel of Fig.~\ref{fig:dust5}, we translate the bootstrapped error bars into limits on absorption by dust around galaxies. The solid $u-b_{\rm J}$ and dashed $b_{\rm J}-r$ lines in the lower panel represent the 1$\sigma$ limit on the anti-correlation due to dust allowed by our colour limits. The anti-correlation between QSOs and galaxies measured in Fig.~\ref{fig:qsoall5} is plotted for comparison. The 1$\sigma$ limits are insufficient to account for the anti-correlation between galaxies and QSOs. In fact, the $b_{\rm J}-r$ limits marginally reject dust, at the 2$\sigma$ level, and (at this level of rejection) dust could only account for about 30~per~cent of the anti-correlation between QSOs and galaxies (out to 10~arcmin). Schlegel, Finkbeiner \& Davis (1998) base their absorption laws, from which our absorption estimates are derived, on the difference in reddening between local, lightly-reddened standard stars and more distant reddened stars in the Milky Way \cite{Car89,Odo94}. The reddened stars are chosen to sample a wide range of interstellar environments \cite{Fit90}, and Schlegel et al. demonstrate that their dust laws reproduce the reddening (as compared to the MgII index) of a sample of $\sim500$ elliptical galaxies with broad sky coverage. However, by design, the absorption law used by Schlegel et al. applies only to our Galaxy. There is no reason to believe it is universal. In fact, though the Magellanic Clouds have been shown to have absorption laws similar to that of the Milky Way \cite{Koo82,Bou85}, other local ($z~\lower.73ex\hbox{$\sim$}\llap{\raise.4ex\hbox{$<$}}$\,$~0.03$) galaxies have been shown to have `greyer' absorption laws \cite{Cal94,Kin94}, meaning that more dust absorption (approximately 25~per~cent more in the $b_{\rm J}$ band) is expected for the same reddening. At the $2\sigma$ level, none of these alternative dust laws could provide enough absorption to explain the observed QSO-galaxy anti-correlation. It is possible to construct dust models that would reconcile the lack of reddening of the QSO sample near galaxies with a paucity of QSOs around galaxies. For instance, the UVX QSO selection of the 2QZ means that excessively reddened QSOs may be lost entirely from the 2QZ, especially if the measured colours of 2QZ QSOs exhibited a large scatter around their intrinsic colours (which they do not, at least at $z < 2$; see Croom et al. 2004).
Also, if there were a lot of dust close to some galaxies, completely obscuring QSOs without reddening them, then the clustering of galaxies alone might lead to the observed galaxy-QSO anti-correlation. However, though various dust models can be constructed to explain a galaxy-QSO anti-correlation, it is very difficult to reconcile any dust model with the positive QSO-galaxy correlations seen in other samples \cite{Wil98,Gaz03}. Acknowledging the provisos outlined above, we tentatively proceed assuming that dust around galaxies cannot account for the anti-correlation between QSOs and galaxies---but can we find definitive evidence that gravitational lensing is responsible? \subsubsection{Dependence of the Cross-Correlation Signal on Magnitude and Redshift} \label{sec:magevolve} Perhaps the key prediction of magnification bias is that an enhancement of QSOs is expected near foreground lenses when the slope of the QSO number-magnitude counts is greater than 0.4, and a deficit of QSOs around the same lenses when the slope is less than 0.4. The QSO number counts are shown in Fig.~\ref{fig:Nmag.eps}. The `knee' of the magnitude distribution, where the slope is 0.4, lies around $b_{\rm J}$ = 19.1 to $b_{\rm J}$ = 19.6. A very simple model would predict (under the assumption that lensing samples QSOs up to about a magnitude fainter than the sample limit) a positive correlation between QSOs and galaxies up to a (QSO) $b_{\rm J}$ magnitude of around 18.1-18.6, no correlation between QSOs and galaxies from 18.1-18.6 to the knee of the magnitude counts, and an anti-correlation between QSOs and galaxies fainter than about $b_{\rm J}$ = 19.6. At the time of writing, no author has traced the cross-correlation signal with magnitude from a positive correlation at bright QSO magnitudes through to an anti-correlation at faint magnitudes, although some authors have shown the transition from a positive correlation to zero correlation \cite{Wil98,Gaz03}. This is mainly a problem of QSO sampling: as yet, no single survey spans the QSO magnitude distribution in a manner that produces significant numbers of QSOs at both bright and faint magnitudes. The combined area and depth of the 2QZ and 6QZ allow us to trace the dependence of the cross-correlation between QSOs and galaxies on magnitude across the knee of the QSO number-magnitude counts for the first time. In the upper panel of Fig.~\ref{fig:evolve5} we show the QSO-galaxy cross-correlation signal (measured out to 10~arcmin) in (differential) magnitude bins spanning the range $16.25 < b_{\rm J} < 20.85$. QSOs are taken from the 2QZ for $b_{\rm J} > 18.25$ and from the 6QZ for $b_{\rm J} < 18.25$. There is a loose trend in the data: the brightest QSOs are positively correlated with our galaxy samples (at the 2.2$\sigma$ level for $b_{\rm J} < 16.65$), the faintest QSOs are anti-correlated with our galaxy samples (at the 2.3$\sigma$ level for $b_{\rm J} > 19.85$), and there is no significant result over the remainder of the magnitude range. To test whether this trend allows us to favour lensing over dust in foreground galaxies as the cause of the QSO-galaxy cross-correlation signal, we have constructed two toy models. The best-fitting models are displayed in Fig.~\ref{fig:evolve5}. The `dust' model is essentially that of Equation~\ref{equation:absorption5}, with $A_{b_J}$, the absorption (on 10~arcmin scales to match the data in Fig.~\ref{fig:evolve5}), as the free parameter.
The relative numbers in a given magnitude bin are calculated from the full integrated counts displayed in Fig.~\ref{fig:Nmag.eps}. The `lens' model assumes that $\omega_{qg} = K\left(2.5\alpha(m)-1\right)$, where $K$ is considered a constant (see Equation~\ref{equation:WilandIrwin} below) out to the 10~arcmin scale of interest. There are actually two free parameters in this model, $K$ and $m$, where $m$ is the number of magnitudes fainter than the bin of interest at which the slope, $\alpha$, of the integrated number counts is sampled. The best dust model has $A_{b_J} = 0.005$ with a reduced $\chi^2$ value of 1.20 $\left(P(\chi^2)=0.29\right)$. The best lens model has $K = 0.015$, $m = 2.1$ and a reduced $\chi^2$ of 1.04 $\left(P(\chi^2)=0.41\right)$. Neither of these simple models can be ruled out by the data. We cannot say with any confidence, then, that we have detected the full dependence of the cross-correlation signal on magnitude predicted by statistical lensing. This is not necessarily surprising, as there are very few objects in the 6QZ and only 256 QSOs brighter than $b_{\rm J}=18.1$ that meet our usual redshift and identification quality selection criteria. Nevertheless, we do see a trend away from anti-correlations for brighter samples, in line with a lensing hypothesis. \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{evolvewmod.eps} \caption[\small{The Variation of $\omega_{gq}$ With Magnitude and Redshift.}]{The cross-correlation of 2QZ QSOs against SDSS EDR galaxies (in the NGC 2QZ strip) and against APM galaxies (in the SGC 2QZ strip) and its dependence on $b_{\rm J}$ magnitude and redshift. The upper and lower panels display the cross-correlation signal measured out to 10~arcmin for subsamples of 2QZ and 6QZ QSOs in magnitude and redshift bins, respectively. Error bars represent $1\sigma$ jackknife errors. The `lensing' and `dust' models are described in the text.} \label{fig:evolve5} \end{figure} In the lower panel of Fig.~\ref{fig:evolve5}, we show the variation with redshift of the cross-correlation of 2QZ QSOs against our combined galaxy samples, in differential redshift bins of width 0.4. A simple lensing model predicts no strong trend in the cross-correlation signal with QSO redshift, although we might expect a slightly stronger signal at larger redshifts, as we will, on average, sample fainter QSOs in these bins and thus see the magnitude dependence of the signal imprinted on the redshift distribution. Indeed, we see a reasonably consistent anti-correlation for all redshift bins. The signal is slightly stronger at high redshifts, but not significantly so. The lowest-redshift QSOs are significantly correlated with our galaxy samples, no doubt due to genuine clustering at these redshifts, justifying our exclusion of QSOs at $z < 0.4$ in other analyses throughout this paper. In this section, we have shown a significant anti-correlation between faint QSOs and galaxies, and shown that it is not a systematic effect of the QSO selection. We rule out, at the 2$\sigma$ level, the possibility that the majority of the signal is due to dust distributed around galaxies, by comparing QSO colours in the field and close to galaxies, noting the caveat that models can be constructed in which QSOs are obscured by dust without being reddened. We have shown that the anti-correlation is, however, consistent with lensing predictions. We will now model the anti-correlation assuming it is due to lensing and consider the cosmological implications.
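To make the two toy models concrete, the short numerical sketch below evaluates both predictions across magnitude bins. It is illustrative only: the broken power-law counts are a hypothetical stand-in for the fitted 2QZ number counts, while $K$, $m$ and $A_{b_J}$ take the best-fitting values quoted above.
\begin{verbatim}
import numpy as np

# Illustrative broken power-law integrated counts, log10 N(<m), standing
# in for the fitted 2QZ counts; the 'knee' is placed at b_J = 19.4.
def log_counts(m, knee=19.4, s_bright=0.9, s_faint=0.29):
    return np.where(m < knee, s_bright * (m - knee),
                    s_faint * (m - knee))

def alpha(m, dm=2.1):
    """Slope of log10 N(<m), sampled dm magnitudes fainter than the bin."""
    eps = 0.01
    return (log_counts(m + dm + eps) - log_counts(m + dm - eps)) / (2 * eps)

def omega_lens(m, K=0.015, dm=2.1):
    return K * (2.5 * alpha(m, dm) - 1.0)                    # 'lens' model

def omega_dust(m, A=0.005):
    return 10 ** (log_counts(m - A) - log_counts(m)) - 1.0   # 'dust' model

for m in np.arange(16.45, 20.9, 0.4):
    print(f"b_J = {m:5.2f}: lens {float(omega_lens(m)):+.4f}, "
          f"dust {float(omega_dust(m)):+.4f}")
\end{verbatim}
With these parameters, the lens model changes sign near the knee of the counts, whereas the dust model is negative at all magnitudes; this is precisely the qualitative distinction that the data in Fig.~\ref{fig:evolve5} test.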
\section{Statistical Lensing Models} \label{sec:model5} In this section, we outline the two lensing models we shall use to describe the anti-correlation between QSOs and galaxies. Both models compare the cross-correlation between QSOs and galaxies to the auto-correlation of galaxies to determine the bias of galaxies with respect to the mass, as traced by the lensing of QSO light. The first, due to Williams \& Irwin (1998; see also Williams \& Frey 2003), uses a linear biasing prescription to relate fluctuations in the foreground mass distribution to QSO magnification. Williams \& Irwin showed that this simple model agrees well with more complicated simulations \cite{Dol97,San97}. The second model, due to Gazta\~naga (2003), allows for scale-dependent bias and more fully considers the redshift distributions of the source and lens populations. \subsection{Williams \& Irwin Model} \label{sec:LBM} The convergence of lensing matter, $\kappa$, is defined as \begin{equation} \kappa(\theta) = \frac{\Sigma(D_l,\theta)}{\Sigma_{cr}(D_l,D_s)} \label{equation:convergence} \end{equation} \noindent where $\Sigma(D_l,\theta)$ is the surface mass density of the lensing material. The critical surface mass density is a function of the redshifts of the background source QSOs ($z_s$) and of the foreground lensing matter ($z$), and is defined as \begin{equation} \Sigma_{cr}(D_l,D_s) = \frac{c^2}{4\pi G}\frac{D_s}{D_lD_{ls}} \label{equation:sigmacrit} \end{equation} \noindent for (angular diameter) lens distance $D_l$, source distance $D_s$ and lens-source separation $D_{ls}$. We take $z_s = 1.5$, the median redshift of the 2QZ, for the background source redshift. Although this seems a rather extreme approximation to the actual distribution of QSOs, Bartelmann \& Schneider (2001) suggest it is fair. Williams \& Irwin model the lensing material as a smooth slab that extends over the redshift range $z=0$ to $z=z_{\rm{max}} = 0.3$, where the extent of the redshift distribution is estimated using the magnitude-redshift selection function of Baugh \& Efstathiou (1993), which provides a good fit \cite{Mad96} to the Stromlo-APM Survey redshift distribution \cite{Lov92a,Lov92b}. We will also use the selection function of Baugh \& Efstathiou to model the galaxy redshift distribution. Note that just over 95~per~cent of galaxies are included out to $z_{\rm{max}} = 0.3$. Lensing arises from angular fluctuations of the projected matter density around its mean. The mass fluctuations can be characterised by the density contrast $\delta(\theta)$, which measures the mass density at a given scale relative to the mean across the sky. Since the density contrast is measured relative to the mean, its distribution has unit mean, $\int P(\delta | \theta)\delta \,\rm{d}\delta = 1$. Only fluctuations above or below this mean produce a net lensing effect, yielding an effective convergence $\kappa_{\rm{eff}}(\theta) = \bar{\kappa}(\delta-1)$. In this model, bias is scale-independent, meaning that galaxy fluctuations trace mass fluctuations as $\delta_{G}(\theta)-1 = b[\delta(\theta)-1]$. The surface density of a slab of matter of thickness $c\,dt$ at redshift $z$ is $\Sigma = \rho_{crit}\Omega_0(1+z)^3c\,dt$, where $\rho_{crit}$ is the critical density of the Universe.
Thus \begin{equation} \kappa_{\rm{eff}}(\theta) = \frac{\bar{\kappa}}{b}(\delta_G-1) = \frac{3H_0^2c}{8\pi G}\frac{\Omega_0}{b}(\delta_{G}-1)\int_0^{z_{\rm{max}}}\frac{(1+z)^3\frac{dt}{dz}dz}{\Sigma_{cr}(z,z_s)} \label{equation:sigmaslab} \end{equation} The enhancement of the QSO number density over the mean around any galaxy is given, for QSO number counts of slope $\alpha$, by \begin{eqnarray} \delta_{Q} = \mu^{2.5\alpha - 1} \approx 1 + 2\bar{\kappa}\frac{(\delta_G-1)}{b}\left(2.5\alpha -1\right) \label{equation:deltaq} \end{eqnarray} \noindent where $\mu \approx 1+2\kappa_{\rm eff}$ is the lensing magnification in the weak regime, and we have performed a Taylor expansion in $\kappa_{\rm eff}$. Now, the QSO-galaxy cross-correlation can be estimated by averaging the product of the galaxy and QSO overdensities across the probability distribution of galaxy density contrasts \begin{eqnarray} \omega_{qg}(\theta) + 1 = \int P(\delta_{G} | \theta)\delta_{G}\delta_{Q}\,\rm{d}\delta_{G} \end{eqnarray} \noindent Substituting Equation~\ref{equation:deltaq} into the previous equation and remembering that $\int P(\delta | \theta)\delta \,\rm{d}\delta = 1$, one can rearrange to show \begin{eqnarray} \omega_{qg}(\theta) = (2.5\alpha - 1)\frac{2\bar{\kappa}}{b}\left[\int P(\delta_{G} | \theta)\delta_{G}^{2}\,\rm{d}\delta_{G} - 1\right] \end{eqnarray} \noindent so, finally, \begin{eqnarray} \omega_{qg}(\theta) = (2.5\alpha - 1)\frac{2\bar{\kappa}}{b}\omega_{gg}(\theta) \label{equation:WilandIrwin} \end{eqnarray} \noindent which relates $\omega_{qg}(\theta)$ to $\omega_{gg}(\theta)$ and $b$, the (scale-independent) bias parameter. Other than $b$, the cosmology is contained in the (dimensionless) mean convergence $\bar{\kappa}$ (Equation~\ref{equation:sigmaslab}). The QSO number counts enter through the power-law slope, $\alpha$. \subsection{Gazta\~naga Model} This model is reviewed in the Appendix and described in detail by Gazta\~naga (2003; see also Myers 2003). Gazta\~naga (2003) has shown that a good power-law expression for the scale-dependent biasing of galaxies is \begin{eqnarray} b(R) = b_{0.1}\left(\frac{0.1\Mpch}{R}\right)^{\gamma_b} \label{biasmod} \end{eqnarray} \noindent Whilst Gazta\~naga assumed that this bias corresponded to the ratio of the galaxy and matter correlation functions (see also Guim\~{a}raes 2001), $b(r)=[\xi_{gg}(r)/\xi_{mm}(r)]^{0.5}$, we are in fact comparing the cross-correlation of QSOs and galaxies with the galaxy auto-correlation function, and so are defining a bias function via $b(r)=\xi_{gg}(r)/\xi_{gm}(r)$. Whilst on large scales these two definitions should be equal, one expects the halo of the central galaxy to affect the shape of $\xi_{gm}(r)$ on small scales. Therefore we consider a revised version of the Gazta\~naga model to take this into account. By measuring $\omega_{gg}$ and $\omega_{gq}$ we can constrain the amplitude and slope of the bias correlation function.
The two-dimensional projection of the galaxy correlation function is (see Peebles 1980) \begin{eqnarray} \omega_{gg}(\theta) = \sigma_{0.1}^2b_{0.1}b^*\theta^{1-\gamma_{gg}}A_{gg,0.1} \label{equation:Agg} \end{eqnarray} \noindent whilst the galaxy-QSO cross-correlation is given by \begin{eqnarray} \omega_{gq}(\theta) = \sigma_{0.1}^2b^*\theta^{1-\gamma_{gq}}A_{gq,0.1} \label{equation:Aqg} \end{eqnarray} where $\sigma_{0.1}$, the amplitude of fluctuations in the matter correlation function (averaged over a sphere of radius 0.1$h^{-1}$Mpc), is defined in Equation~\ref{equation:sigsig}, $A_{gg,0.1}$ and $A_{gq,0.1}$ depend on the radial selection functions of the galaxy and QSO samples and on the lensing efficiency, and $b$ and $\gamma$ denote the amplitude and slope of a scale-dependent bias model. In the particular case where the two bias definitions ($\left[\xi_{gg}/\xi_{gm}\right]$; $\left[\xi_{gg}/\xi_{mm}\right]^{0.5}$) are equivalent, $b^*$ is equivalent to $b_{0.1}$ (see the Appendix for further details on these parameters, particularly $b^*$). The slope of the bias correlation function, $\gamma_b$, is easily determined from $\gamma_{gg}$ and $\gamma_{gq}$: for $b_{0.1}$ to be independent of scale, Equations~\ref{equation:Agg} and \ref{equation:Aqg} require $\gamma_b = \gamma_{gg} - \gamma_{gq}$. The bias parameter, $b_{0.1}$, can then be determined from \begin{eqnarray} \label{equation:biascalc2} b_{0.1} &=& \frac{\omega_{gg}(\theta)}{\omega_{gq}(\theta)}\frac{A_{gq,0.1}}{A_{gg,0.1}}\theta^{\gamma_b} \end{eqnarray} In the case where the two bias definitions above agree exactly (as implicitly assumed by Gazta\~naga), $b^*=b_{0.1}$ and $\gamma^*=\gamma_b$. In this case, one can further derive the slope of the mass correlation function, $\gamma$, and $\sigma_{0.1}$ can be determined via \begin{eqnarray} \sigma_{0.1}^2 &=& \frac{\omega_{gq}(\theta)^2}{\omega_{gg}(\theta)}\frac{A_{gg,0.1}}{A_{gq,0.1}^2}\theta^{\gamma-1}. \label{equation:biascalc} \end{eqnarray} \section{Results and Discussion} \label{sec:discuss5} \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{gg.eps} \caption[\small{The Auto-correlation of Galaxies Compared to the Cross-Correlation of QSOs with Galaxies.}]{The galaxy-galaxy auto-correlation combined from SDSS EDR galaxies (in the NGC strip) and APM galaxies (in the SGC strip) is compared to the faint-end QSO-galaxy cross-correlation function. Both correlation functions are fitted by power laws, displayed by lines drawn through the respective data. Error bars represent $1\sigma$ jackknife errors.} \label{fig:gg5} \end{figure} In Fig.~\ref{fig:gg5} we plot the auto-correlation of galaxies, combined for galaxies from the SDSS EDR in the NGC 2QZ strip and the APM Survey in the SGC 2QZ strip. Fig.~\ref{fig:gg5} also shows the anti-correlation between QSOs and galaxies discussed throughout this paper. We have plotted the result for the QSO sample fainter than $b_{\rm J}$ = 19.6, to ensure we are sampling QSOs from the region of the number-magnitude counts fainter than the `knee' of the distribution (see Section~\ref{sec:magevolve}). We fit simple power laws to the two correlation functions, finding that \begin{eqnarray} \omega_{qg}(\theta) &=& -0.024\pm^{0.008}_{0.007}\theta^{-1.0\pm0.3} \\ \omega_{gg}(\theta) &=& 0.330\pm^{0.015}_{0.014}\theta^{-0.76\pm0.04} \label{equation:results5} \end{eqnarray} \noindent where $\theta$ is expressed in arcminutes, 1~arcminute corresponding to about 0.1$\Mpch$ at the median redshift of our galaxy data.
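Fits of this form can be reproduced with a weighted non-linear least-squares fit performed directly in the $(\theta,\omega)$ plane, since the negative $\omega_{qg}$ values preclude fitting in log space. A minimal sketch follows; the data arrays are hypothetical placeholders standing in for our binned measurements and jackknife errors, not the published values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def power_law(theta, A, delta):
    """omega(theta) = A * theta**(-delta), with theta in arcmin."""
    return A * theta ** (-delta)

# Placeholder angular bins, measurements and 1-sigma jackknife errors
theta = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
w_qg = np.array([-0.045, -0.024, -0.013, -0.006, -0.003])
err = np.array([0.015, 0.008, 0.005, 0.003, 0.002])

popt, pcov = curve_fit(power_law, theta, w_qg, sigma=err,
                       p0=[-0.02, 1.0], absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))
chi2 = np.sum(((w_qg - power_law(theta, *popt)) / err) ** 2)
print(f"A = {popt[0]:.3f} +/- {perr[0]:.3f}, "
      f"slope = {popt[1]:.2f} +/- {perr[1]:.2f}, "
      f"reduced chi2 = {chi2 / (len(theta) - 2):.2f}")
\end{verbatim}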
In both cases, a simple $\chi^2$ fit suggests that power laws are good approximations to the data, with reduced $\chi^2$ values of 1.6 ($P(\chi^2)=0.11$) for the galaxy-galaxy correlation fit and 1.4 $\left(P(\chi^2)=0.20\right)$ for the QSO-galaxy anti-correlation fit. The result for the slope of the galaxy-galaxy auto-correlation is in excellent agreement with recent small-scale measurements for galaxies with a limiting flux similar to that of the sample used here \cite{Con02}. \subsection{Williams \& Irwin Model} \begin{table} \centering \begin{tabular}{cccc} \hline Model & & {$\Omega_m = 0.3,\Omega_{\Lambda} = 0.7$} & {$\Omega_m = 1$} \\ \hline W\&I, 98 & $b_{0.1}$ & 0.13$\pm^{0.08}_{0.07}$ & 0.32$\pm^{0.20}_{0.18}$ \\ \hline Gaz, 03 & $b_{0.1}$ & $0.052 \pm^{0.064}_{0.027}$ & $0.142 \pm^{0.167}_{0.072}$ \\ & $b_{0.2}$ & $0.061 \pm^{0.055}_{0.025}$ & $0.166 \pm^{0.144}_{0.069}$ \\ \hline \end{tabular} \caption[\small{The Galaxy Bias Parameter $b$ In Different Cosmologies.}]{\small{In the row labelled `W\&I, 98' we list $b$, evaluated at $\theta=1$~arcmin ($\sim0.1\Mpch$), as measured from Equation~\ref{equation:biasWilandIrwin5}, with $\Sigma_{cr}(z,z_s)$ calculated using Equation~\ref{equation:sigmacrit} with $z_s = 1.5$, the median QSO redshift. In the rows labelled `Gaz, 03' we list $b$ evaluated at $\theta=1$~arcmin and at $\theta=2$~arcmin ($\sim0.2\Mpch$), as measured from Equation~\ref{equation:biascalc2}. Both estimates are shown for \rm{$\Lambda$CDM}\, and EdS cosmological models, assuming $H_0 = 70\, {{\rm km}}\,{{\rm s}}^{-1} \Mpc^{-1}$.}} \label{table:beta} \end{table} Taking our measured values of the correlation functions, evaluated at $\theta=1$~arcmin ($\sim0.1\Mpch$), and a faint-end slope of $\alpha =0.29\pm0.03$, Equation~\ref{equation:WilandIrwin} \cite{Wil98} reduces to \begin{eqnarray} b_{0.1} = \left(7.56 \pm^{4.83}_{4.24}\right) \bar{\kappa}, \label{equation:biasWilandIrwin5} \end{eqnarray} \noindent a function of the mean convergence, $\bar{\kappa}$. We calculate $\bar{\kappa}$ assuming either an Einstein-de Sitter (henceforth EdS) or a \rm{$\Lambda$CDM}\, cosmology, and the resulting values of $b$ are displayed in Table~\ref{table:beta}. The model is somewhat dependent on the parameter $z_{\rm{max}}$ of Equation~\ref{equation:sigmaslab}. For instance, increasing $z_{\rm{max}}$ from $0.3$ to $0.4$ increases the estimates of $b_{0.1}$ by 50~per~cent, with the errors scaling accordingly. In our samples, 99.5~per~cent of galaxies lie at $z < 0.4$. Note that, as the measured slopes of $\omega_{gq}(\theta)$ and $\omega_{gg}(\theta)$ are not the same, a scale-independent model of bias is only marginally acceptable. It is relatively easy to extend Equation~\ref{equation:WilandIrwin} to a simple model of scale-dependent bias, obtaining \begin{eqnarray} b = b_{0.1}\left(\frac{0.1\Mpch}{r}\right)^{-0.24\pm0.30}, \label{equation:biasWilandIrwin6} \end{eqnarray} \noindent where $b_{0.1}$ is shown in Table~\ref{table:beta} and $r$ is $\theta$ expressed in $\Mpch$ at the mean redshift of our galaxy data. \subsection{Gazta\~naga Model} Using the measured slopes $\gamma_{gg}$ and $\gamma_{gq}$, we find a slope for the bias correlation function of $\gamma_b = -0.24 \pm 0.30$. We now use Equation~\ref{equation:biascalc2} to determine the galaxy bias on 0.1$\Mpch$ scales in both \rm{$\Lambda$CDM}\, and EdS cosmologies, with $H_0 = 70\, {{\rm km}}\,{{\rm s}}^{-1} \Mpc^{-1}$. The results are displayed in Table~\ref{table:beta}.
The analysis is repeated measuring $b$ on $0.2\Mpch$ scales to allow a direct comparison with the results of Gazta\~naga (2003). The results only have a slight dependence on $H_0$. Increasing $H_0$ to $100\, {{\rm km}}\,{{\rm s}}^{-1} \Mpc^{-1}$ increases the estimate of $b$ by 9~per~cent (for both EdS and \rm{$\Lambda$CDM}\, cosmologies, and for both $b_{0.1}$ and $b_{0.2}$). Decreasing $H_0$ to $50\, {{\rm km}}\,{{\rm s}}^{-1} \Mpc^{-1}$ decreases $b$ by either 8~per~cent (\rm{$\Lambda$CDM}\,) or 7~per~cent (EdS). Using the fit to $\omega_{gg}(\theta)$ from Equation~\ref{equation:results5}, together with Equation~\ref{equation:biascalc2}, we derive a model for $\omega_{gq}(\theta)$ as a function of $b_{0.1}$ and ${\gamma_b}$. We fit this model to the cross-correlation data by performing a maximum likelihood analysis, determining the likelihood of each model by calculating the $\chi^2$ value of each fit. Fig.~\ref{fig:b01_gamma} shows likelihood contours in the $b_{0.1}$ -- ${\gamma_b}$ plane. There is a clear degeneracy, with stronger scale-dependence of bias (i.e. more negative $\gamma_b$) implying a lower value of $b_{0.1}$. The uncertainty in our determination of $b$ is somewhat larger than the quoted error in Gazta\~naga (2003), and, as demonstrated by Fig.~\ref{fig:b01_gamma}, is considerably skewed in $b$-space. From Fig.~\ref{fig:b01_gamma} it is possible to reconcile the value of $b_{0.1}$ obtained with the method of Gazta\~naga with the somewhat higher value of $b_{0.1}$ obtained using the Williams \& Irwin model: the latter assumes scale-invariant bias, or ${\gamma_b}=0$, which corresponds to a higher value of $b_{0.1}\sim 0.12$ from the $b_{0.1}$ -- ${\gamma_b}$ degeneracy in the Gazta\~naga model fit. \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{b01_gamma_lambda.eps} \caption[\small{$b_{0.1}$ -- ${\gamma_b}$ likelihood contours}]{Likelihood contours in the $b_{0.1}$ -- ${\gamma_b}$ plane for a fit to the QSO-galaxy cross-correlation function, $\omega_{gq}(\theta)$, assuming $\Lambda$CDM. Contours are plotted for $\chi^2$ values corresponding to a one-parameter confidence of 68 per cent (dotted contour), and two-parameter confidence of 68 and 95 per cent (dashed contours). The best fit model obtained (marked with a $+$) has $b_{0.1}=0.052$ and ${\gamma_b}=-0.23$. The stars mark the models plotted with dashed lines in Figure~\ref{fig:biascale}. } \label{fig:b01_gamma} \end{figure} \subsection{Discussion} It is interesting that our bias predictions, based on a detection of a galaxy-QSO {\it anti}-correlation, agree well with the predictions of an independent author \cite{Gaz03} who reported a {\it positive} galaxy-QSO correlation, working with bright QSOs and (mostly) independent data. Both of the methods we have used in this paper, which use quite different lensing models to determine the amplitude of the galaxy bias parameter, $b$, consistently agree that on scales of $0.1~\Mpch$, galaxies are strongly anti-biased (i.e. $b < 1$) with a bias parameter of $b \sim 0.1$. Using a simple model \cite{Wil98} and assuming that lensing matter can be represented by a uniform slab extending out to the redshift where 95~per~cent of the surveyed galaxies are included, and the lensed QSOs may be represented by a single population at $z = 1.5$, we find $b_{0.1} = 0.13 \pm^{0.08}_{0.07}$ for a \rm{$\Lambda$CDM}\, cosmology. 
A more realistic model, which traces spherical fluctuations in the underlying lensing matter on $0.1\Mpch$ scales across the redshift distributions of our QSO and galaxy samples \cite{Gaz03}, yields $b_{0.1} = 0.04 \pm^{0.18}_{0.01}$. Note that, whilst galaxy bias is traditionally measured from the ratio between the galaxy auto-correlation and the mass auto-correlation, we are comparing the cross-correlation of QSOs and galaxies with the galaxy auto-correlation function, and are instead defining a bias function via $b(r)=\xi_{gg}(r)/\xi_{gm}(r)$. \begin{figure} \centering \includegraphics[width=8cm,totalheight=8cm]{biascale.eps} \caption[\small{Recent Measurements of the Bias Parameter}]{A comparison of some recent measurements of the bias parameter with those made in this paper. The filled circles are measurements of the bias parameter deduced from the ratio of the correlation functions in Fig.~\ref{fig:gg5} using Equation~\ref{equation:biascalc2} (i.e. the `Gazta\~naga Model'). The angular bins of the correlation functions have been projected to the median redshift of the galaxy distribution ($z \sim 0.12$). The solid line represents the best-fitting bias model, with $b_{0.1}=0.052$ and ${\gamma_b}=-0.23$. The various dashed lines depict the error ranges taken from the 68\% likelihood contour; the models chosen are shown by stars in Figure~\ref{fig:b01_gamma}. Each measurement displayed from this paper is calculated using a \rm{$\Lambda$CDM}\, cosmology. Other points represent recent measurements of the bias parameter from the literature.} \label{fig:biascale} \end{figure} A value of $b\sim0.1$ might seem at odds with measurements of much higher values, $b \sim 1$, on Megaparsec scales \cite{Ver02,Lah02,Hoe02}; assuming a simple linearly-biased \rm{$\Lambda$CDM}\, mass distribution, it suggests more mass around galaxies than expected. However, the slope of the scale-dependent bias parameter is found to be $\gamma_b = -0.24 \pm 0.3$. Although we cannot rule out a linear bias parameter on scales of $0.1\Mpch$ from the slopes of our fitted correlation functions, the fact that many other authors determine much higher values of $b \sim 1$ on Megaparsec scales \cite{Ver02,Lah02,Hoe02} suggests that bias might be a strong function of scale on $100\kpch$ scales. In Fig.~\ref{fig:biascale} we plot the bias implied across a range of scales by the measurements made in this paper and compare them to measurements made by other authors \cite{Wil98,Ver02,Lah02,Hoe02,Gaz03}. The models discussed in this paper allow a scale-dependent bias (with $b \sim r^{0.5}$) that is consistent both with our observations on $0.1\Mpch$ scales and with other measures of the galaxy bias parameter, $b\sim1$, on larger scales \cite{Lah02,Ver02}. Whilst the models shown in Fig.~\ref{fig:biascale} appear inconsistent with the measurements of Hoekstra et al. (2002), it may be that the simple model of scale-dependent bias assumed (Equation~\ref{biasmod}) is not a good description of bias on larger scales. Indeed, on very large scales, bias is expected to be linear and hence independent of scale. On the other hand, Williams \& Irwin (1998) suggest galaxies are highly anti-biased ($b \sim0.1$) even on 5-10$\Mpch$ scales, which would only be consistent with our observations if bias had no scale-dependence, and is certainly inconsistent with the results of other authors \cite{Lah02,Ver02,Hoe02}.
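As a simple illustration of this tension, the sketch below extrapolates the scale-dependent bias model of Equation~\ref{biasmod}, with the best-fitting $\Lambda$CDM values quoted above ($b_{0.1}=0.052$, $\gamma_b=-0.23$), out to Megaparsec scales:
\begin{verbatim}
def bias(r_mpch, b01=0.052, gamma_b=-0.23):
    """b(r) = b_0.1 * (0.1 / r)**gamma_b, with r in Mpc/h."""
    return b01 * (0.1 / r_mpch) ** gamma_b

for r in (0.1, 0.2, 1.0, 5.0, 10.0):
    print(f"r = {r:5.1f} Mpc/h : b = {bias(r):4.2f}")
\end{verbatim}
With the best-fitting slope, the extrapolated bias remains well below unity even at $10\Mpch$; only the steeper models along the $b_{0.1}$--${\gamma_b}$ degeneracy (approaching $b \sim r^{0.5}$) recover $b\sim1$ on the scales probed by the studies cited above.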
To attempt to explain a strong scale-dependence of bias, and therefore a higher than expected galaxy-mass cross-correlation signal on $100\kpch$ scales, we can consider halo occupation models of the galaxy distribution \cite{Ber02,Jai03}. The form of the cross-correlation in a \rm{$\Lambda$CDM}\, universe can be determined from simulations, using halo occupation models to populate dark matter haloes with galaxies \cite{Ber02,Jai03}; when this is taken in ratio with the real-space galaxy auto-correlation function measured from the APM galaxy catalogue \cite{Bau96}, such models typically yield a value of $b\sim0.6$ on $0.1\Mpch$ scales. Thus some level of anti-bias arises naturally in halo occupation models (albeit not as much as observed), going some way towards explaining the discrepancy. However, these models are complicated and depend strongly on galaxy type, and are thus not well constrained on the scales probed. Whilst a high QSO-galaxy cross-correlation signal places a constraint on such models, it may be possible to reconcile them with a better understanding of how galaxies populate haloes. By comparing the two definitions ($\left[\xi_{gg}/\xi_{gm}\right]$ and $\left[\xi_{gg}/\xi_{mm}\right]^{0.5}$) of $b(r)$ in a range of simulations, Berlind and Weinberg (2002) demonstrated that, whilst they are not in perfect agreement in the scale-dependent bias regime ($0.1<r<4\,h^{-1}$\,Mpc), the two definitions of $b(r)$ have a very similar shape over this range. However, given the large anti-bias found, and the strong scale-dependence of bias implied, it is quite likely that they {\it will in fact be considerably different}. Therefore, care needs to be taken in interpreting the results of this analysis. In particular, we have chosen not to follow the analysis of Gazta\~naga (2003) in deriving the slope of the mass correlation function, $\gamma$, and $\sigma_{0.1}$ under the assumption that the two bias definitions are equal, as it is likely that this would lead to a large systematic error and hence misleading results. Finally, we should point out that in this work we have assumed the weak lensing approximation, $\mu \approx 1+2\kappa$. Takada and Hamana (2003) have shown that a non-linear magnification correction will enhance the amplitude of the magnification correlation by around 10-25 per cent on arcminute scales for the QSO-galaxy cross-correlation signal; such a correction would therefore boost our bias estimates by this fraction. Whilst in the right direction, such a correction is too small to explain the observed discrepancy in a \rm{$\Lambda$CDM}\, cosmology. Note also that Table~\ref{table:beta} indicates that an EdS cosmology with a value of $b_{0.1} \sim 1$ agrees with the data at the $2\sigma$ level. Though an EdS framework is consistent with the previous QSO lensing results of Croom \& Shanks (1999) and Myers et al. (2003), it is inconsistent with the majority of recent cosmological data. \section{Conclusions} \label{sec:summary5} In this paper, we have studied the cross-correlation of galaxies and background QSOs. Galaxies were drawn from the APM Survey in the region of the 2dF QSO Redshift Survey Southern Galactic Cap strip and from the Sloan Digital Sky Survey Early Data Release in the Northern Galactic Cap strip. The QSO-galaxy cross-correlation function $\omega_{gq}$ suggests that 2QZ QSOs and galaxies are anti-correlated, with significance $2.8\sigma$ and strength $-0.007$, over the angular range out to 10~arcmin (1$\Mpch$ at the median redshift of our galaxy samples).
This result is notable in that it is possibly the first significant anti-correlation detected between QSOs and galaxies that is not subject to small sample sizes or to problems in selecting the populations. Our detection of a galaxy-QSO anti-correlation is consistent with the predictions of statistical lensing theory. When combined with the work of Gazta\~naga (2003), a consistent picture emerges that spans faint and bright QSO samples, which, owing to the changing slope of the QSO number-magnitude counts, have very different clustering properties relative to the foreground galaxy population. We have also considered a number of other possible explanations for the anti-correlation between QSOs and galaxies. Firstly, the errors and the correlation estimator were shown to be robust using Monte Carlo simulations. Care was taken to demonstrate that there is no significant correlation (a 0.5$\sigma$ positive correlation) between stars (which were selected within the 2dF Survey in the same way as the QSO sample) and our galaxy samples. We are thus confident that the anti-correlation between QSOs and galaxies is not a systematic error. The colours of QSOs around galaxies were used to place limits on the role dust (modelled with an absorption law appropriate to the Milky Way) could play in producing an anti-correlation of QSOs with galaxies. Whilst our simple dust models could account for at most a third of the observed anti-correlation signal (at 2$\sigma$ significance), we do not rule out the possibility that scatter in QSO colours might preferentially discard reddened QSOs from the 2QZ, nor do we specifically rule out dust models that could obscure QSOs without reddening them, such as grey dust or heavy concentrations of dust in some galaxies. No dust model, however, could easily explain the positive galaxy-QSO correlations found by Gazta\~naga (2003). We find that, for a \rm{$\Lambda$CDM}\, cosmology, galaxies are highly anti-biased on small scales. We consider two models that use quite different descriptions of the lensing matter and find that they yield consistent predictions for the strength of galaxy bias on $0.1\Mpch$ scales of $b \sim 0.1$. The inferred strength of this result is in agreement with the work of Gazta\~naga (2003). The slope of the scale-dependent bias parameter is found to be $\gamma_b = -0.24 \pm 0.3$. The fact that many other authors determine much higher values of $b \sim 1$ on Megaparsec scales \cite{Ver02,Lah02,Hoe02} suggests that bias might be a strong function of scale. To explain such a strong scale-dependence of bias, we can consider halo occupation models of the galaxy distribution \cite{Ber02,Jai03}. Such models predict some level of anti-bias on $0.1\Mpch$ scales, typically yielding a value of $b\sim0.6$ \cite{Ber02}. However, these models are complicated and depend strongly on galaxy type, and are thus not well constrained on the scales probed. Therefore, it may be possible to reconcile them with the observations reported here through a better understanding of how galaxies populate haloes. An alternative interpretation of these results is that they indicate that there is significantly more mass present, at least on the $100\kpch$ scales probed, than predicted by \rm{$\Lambda$CDM}\,. Myers et al. (2003) recently detected a strong anti-correlation between the same faint 2QZ QSOs and groups of galaxies.
By applying models of gravitational lensing by simple haloes, they used this anti-correlation signal to determine the mass of these lensing galaxy groups, concluding that the observed anti-correlation favours considerably more mass in groups of galaxies than is accounted for in a universe with density parameter $\Omega_m = 0.3$. It is hard to account for the statistical lensing properties either of the galaxies presented here, or of the groups of galaxies from Myers et al., in a low-mass ($\Omega_m \sim 0.3$) universe with scale-independent bias. \section*{Acknowledgements} The 2dF QSO Redshift Survey was based on observations made with the Anglo-Australian Telescope and the UK Schmidt Telescope, and we would like to thank our colleagues on the 2dF Galaxy Redshift Survey team and all the staff at the AAT who have helped to make this survey possible. ADM acknowledges the support of a PPARC studentship. PJO acknowledges the support of a PPARC Fellowship. This work was partially supported by the `SISCO' European Community Research and Training Network. Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Aeronautics and Space Administration, the National Science Foundation, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is {\tt http://www.sdss.org/}. The SDSS is managed by the Astrophysical Research Consortium (ARC) for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, Princeton University, the United States Naval Observatory, and the University of Washington.
\section{Introduction} Over the past ten years, many diffraction suppression systems have been developed for the direct detection of extrasolar planets. At the same time, promising ground-based projects have been proposed and are currently under development, such as SPHERE at the VLT \citep{2006Msngr.125...29B} and GPI \citep{2006SPIE.6272E..18M}. Larger telescopes are desirable to push exoplanet searches towards lower masses and smaller angular separations, ideally down to Earth-like planets. Several concepts of Extremely Large Telescopes (ELTs) are currently being studied all over the world (European Extremely Large Telescope (E-ELT, \citet{2004SPIE.5489..391D}), Thirty Meter Telescope (TMT, \citet{2006SPIE.6267E..71N}), Giant Magellan Telescope (GMT, \citet{2004SPIE.5489..441J})). The characteristics of these telescope designs may have an impact on their high-contrast imaging capabilities. Parameters like central obscuration, primary mirror segmentation and large spider arms can impose strong limitations on many coronagraphs. It is therefore essential to identify and evaluate a coronagraph concept which is well suited to ELTs. The Apodized Pupil Lyot Coronagraph (APLC) is one of the most promising concepts for ELTs. Its sensitivity to central obscuration is less critical than that of, e.g., phase masks \citep{2000PASP..112.1479R, 2005ApJ...633.1191M}, but the APLC still allows for a small inner working angle (IWA) and high throughput if properly optimized. Other amplitude concepts \citep[e.g.][]{2002ApJ...570..900K} are also usable with a centrally obscured aperture but suffer from low throughput, especially if the IWA is small. The potential of the APLC has already been demonstrated for arbitrary apertures \citep{2002A&A...389..334A, 2003A&A...397.1161S}, and specific solutions for obscured apertures have been proposed \citep{2005ApJ...618L.161S}. In this paper, we analyze the optimization of the APLC and evaluate its sensitivity with respect to the main parameters mentioned above. In section 2 we briefly review the APLC formalism and define a criterion for optimizing the coronagraph parameters. The impact of several telescope parameters on the optimal configuration is evaluated in section 3. Section 4 then shows an application of the APLC optimization to two potential ELT designs. Finally, we draw conclusions. \section{Apodization for centrally obscured pupil} \subsection{Formalism} In this section, we briefly review the formalism of the APLC. The APLC is a combination of a classical Lyot coronagraph (hard-edged occulting focal plane mask, hereafter FPM) with an apodization of the entrance aperture. \begin{figure*}[!ht] \includegraphics[width=9cm]{profilebell20.eps} \includegraphics[width=9cm]{profilebagel47.eps} \caption{Typical apodizer shape for the bell regime (left) and the bagel regime (right). Central obscuration is 30$\%$.} \label{apodizershape1} \end{figure*} In the following, for the sake of clarity, we omit the spatial coordinates $r$ and $\rho$ (for the pupil plane and the focal plane, respectively). The function that describes the mask is denoted $M$ (equal to 1 inside the coronagraphic mask and to 0 outside). With the mask absorption $\varepsilon$ ($\varepsilon = 1$ for an opaque mask), the FPM is then equal to: \begin{equation} 1 - \varepsilon M \end{equation} $P$ is the telescope aperture, and $\phi$ the profile of the apodizer. $\Pi$ describes the pupil stop function, which is considered, to first approximation, to be equal to the telescope aperture ($\Pi = P$).
The coronagraphic process, corresponding to the propagation from the telescope entrance aperture to the detector plane, is expressed in Eqs. 2 to 6. Planes A, B, C and D correspond respectively to the telescope aperture, the coronagraphic focal plane, the pupil stop plane and the detector plane, as defined in Fig. \ref{coronoconcept}. The Fourier transform of a function $f$ is denoted $\hat{f}$. The symbol $\otimes$ denotes the convolution product. \noindent The entrance pupil is apodized in the pupil plane: \begin{equation} \psi_A = P\phi \end{equation} \noindent The complex amplitude of the star is spatially filtered (low frequencies) by the FPM: \begin{equation} \psi_B = \hat{\psi}_A \times [1 - \varepsilon M] \end{equation} \noindent The exit pupil image is spatially filtered (high frequencies) by the stop: \begin{equation} \psi_C = \hat{\psi}_B \times \Pi \end{equation} \begin{equation} \psi_C = [\psi_A - \varepsilon \psi_A \otimes \hat{M}] \times \Pi \end{equation} \noindent The coronagraphic amplitude in the detector plane becomes: \begin{equation} \psi_D = \hat{\psi}_C = [\hat{\psi}_A - \varepsilon \hat{\psi}_A M] \otimes \hat{\Pi} \label{amplitudecorono} \end{equation} The coronagraphic process can be understood as a destructive interference between two waves (Eq. 5): the entrance pupil wave $P\phi$, denoted $\psi_A$, and the wave diffracted by the mask (corresponding to $\varepsilon \psi_A \otimes \hat{M}$). In the non-apodized case ($\phi=1$), the two wavefronts do not match each other, and the subtraction does not lead to an optimal starlight cancellation in the Lyot stop pupil plane. A perfect solution is obtained if the two wavefronts are identical (i.e., the wave diffracted by the mask $M$ is equal to the pupil wave in amplitude). \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{Position.eps} \caption{Scheme of a coronagraph showing the pupil plane containing the apodizer ($\psi_A$), the focal plane with the FPM ($\psi_B$), the pupil image spatially filtered by the stop ($\psi_C$) and the detector plane ($\psi_D$).} \label{coronoconcept} \end{center} \end{figure} This latter case is obtained with the Apodized Pupil Phase Mask Coronagraph \citep{1997PASP..109..815R, 2002A&A...389..334A, 2003A&A...397.1161S}. For the APLC, the coronagraphic amplitude is minimized and proportional to the apodizer function. For a given pupil geometry, the apodization function is related to the size of the FPM. More precisely, the shape of the apodizer depends on the ratio between the extent of $\hat{M}$ and the central obscuration size \citep{2005ApJ...618L.161S}. If the extent of $\hat{M}$ is larger than the central obscuration, the apodizer takes a "bell" shape (typically, it maximizes the transmission near the central obscuration of the pupil; see Fig.\ref{apodizershape1} (left) as an illustration). On the contrary, if the extent of $\hat{M}$ is smaller than the central obscuration, the apodizer takes a "bagel" shape, reducing the transmission in the inner and outer parts of the pupil (see Fig.\ref{apodizershape1} (right) as an illustration). Thus, the apodizer shape depends on both the FPM size and the central obscuration size. The throughput (apodizer transmission/pupil transmission) as a function of the FPM size is given in Fig. \ref{transmission} for different obscuration sizes (15 to 35 \%). These curves show a second maximum, corresponding to the transition between the two apodizer regimes, whose position depends on the central obscuration size.
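Equations 2 to 6 translate directly into a short Fourier-optics simulation. The sketch below (in Python) is purely illustrative: the grid parameters are arbitrary and the smooth super-Gaussian apodizer is a convenient stand-in for the numerically optimized profiles used in this paper. It also evaluates the quantities that enter the criterion $C_{\mathscr{C}}$ defined in the next subsection.
\begin{verbatim}
import numpy as np

n, D = 1024, 256           # grid size and pupil diameter (pixels)
obs = 0.30                 # central obscuration ratio
fpm_diam = 4.7             # FPM diameter in lambda/D

x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

# Aperture with central obscuration; pupil stop taken identical (Pi = P)
P = ((r <= D / 2) & (r >= obs * D / 2)).astype(float)
phi = np.exp(-(r / (0.45 * D)) ** 6)            # illustrative apodizer

lod = n / D                                     # pixels per lambda/D
M = (r <= 0.5 * fpm_diam * lod).astype(float)   # hard-edged FPM

ft = lambda a: np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(a)))
ift = lambda a: np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(a)))

psiA = P * phi             # plane A: apodized entrance pupil (Eq. 2)
psiB = ft(psiA) * (1 - M)  # plane B: opaque FPM, epsilon = 1 (Eq. 3)
psiC = ift(psiB) * P       # plane C: pupil stop (Eq. 4)
psiD = ft(psiC)            # plane D: coronagraphic amplitude (Eq. 6)

# Criterion ingredients: off-axis (no-FPM) peak, mean residual intensity
# in a 3-100 lambda/D annulus, and the off-axis attenuation term
off_axis = np.abs(ft(psiA)) ** 2
annulus = (r > 3 * lod) & (r < 100 * lod)
contrast = off_axis.max() / (np.abs(psiD) ** 2)[annulus].mean()
attenuation = off_axis.max() / (np.abs(ft(P)) ** 2).max()
print("C_C =", contrast * attenuation)
\end{verbatim}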
Since the apodizer throughput does not evolve linearly with the FPM diameter, it is not trivial to determine the optimal FPM/apodizer combination. Moreover, throughput might not be the only relevant parameter when optimizing a coronagraph. A thorough signal-to-noise ratio analysis is definitely the right way to define the optimal FPM/apodizer system, but this would be too instrument-specific for the scope of this study. Here, we investigate a general case for any telescope geometry and derive the corresponding optimal FPM size. \subsection{APLC optimization criteria} Usually, in Lyot coronagraphy, the larger the FPM diameter, the larger the contrast. However, in the particular case of the apodized Lyot coronagraph, the transmission of an off-axis point-like object is not linear (Fig. \ref{transmission}), and a trade-off therefore has to be made between contrast and throughput. This problem has been studied by \citet{2004EAS....12..165B}, who evaluated optimal Lyot stops for any telescope pupil geometry and for any type of coronagraph. Based on this study, we propose a criterion adapted to the APLC to optimize the apodizer/FPM combination. This criterion maximizes the coronagraphic performance while minimizing the loss of flux from the off-axis object. While not replacing a thorough signal-to-noise ratio evaluation, our criterion takes into account the modification of the off-axis PSF (in intensity and in shape) when changing the coronagraph parameters. Several metrics can be used to quantify the capability of a coronagraph \citep[e.g.][]{2004EAS....12..165B}. Here, we use the contrast ($\mathscr{C}$) averaged over a range of angular radii: \\ \begin{equation} \label{contrast} \mathscr{C} = \frac{max\left(\mid \psi_D (\rho, \alpha)_{\varepsilon = 0} \mid^{2}\right)} {\left( \int^{2\pi}_{0} \int^{\rho_f}_{\rho_i} \mid \psi_D (\rho, \alpha) \mid^{2}\rho\, d\rho\, d\alpha \right) \slash \pi ({\rho_f}^{2} - {\rho_i}^{2})} \end{equation} \noindent where $\mathscr{C}$ is expressed in polar coordinates $\rho$ and $\alpha$. We denote by $\rho_i$ and $\rho_f$ the inner and outer radii defining the area of calculation for $\mathscr{C}$. The attenuation of the off-axis object is given by the ratio of the maximum image intensity with the apodizer only to that without the coronagraph, i.e., without the apodizer and the FPM. This quantity differs from the throughput, since it also takes into account the modification of the PSF structure when changing the apodizer profile: \begin{equation} max\left(\frac{\mid \psi_D (\rho, \alpha)_{\varepsilon = 0} \mid^{2} }{\mid \hat{P}(\rho, \alpha) \mid^{2}}\right) \label{psfattenuation} \end{equation} \noindent Now, let us define the criterion $C_{\mathscr{C}}$ as the product of $\mathscr{C}$ and Eq. \ref{psfattenuation}: \\ \begin{equation} C_{\mathscr{C}} = \mathscr{C} \times max\left(\frac{\mid \psi_D (\rho, \alpha)_{\varepsilon = 0} \mid^{2} }{\mid \hat{P}(\rho, \alpha) \mid^{2}}\right) \label{CC} \end{equation} \noindent The first term of $C_{\mathscr{C}}$ (Eq. \ref{contrast}), which characterizes the performance of the coronagraphic system, is adapted to the region of interest in the coronagraphic image and can be well matched to the instrument parameters.
\noindent The second term (Eq. \ref{psfattenuation}) takes into account the modification of the PSF structure when changing the apodizer profile and guarantees a reasonably moderate attenuation of the off-axis PSF maximum intensity (i.e., it guarantees that when the coronagraph rejects the star it does not reject the planet as well). \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{transmission.eps} \caption{Apodizer throughput (relative to full transmission of the telescope pupil) as a function of FPM diameter for different obscuration sizes.} \label{transmission} \end{center} \end{figure} Although our criterion cannot replace a thorough signal-to-noise ratio analysis (no instrumental model, no noise terms), it presents a reasonable approach by treating the residual light leaking through the coronagraph as noise. Our criterion allows us to investigate the trade-off between performance and throughput while keeping the study general and independent of a specific instrument setup. Moreover, the validity of this criterion is supported by the pupil stop optimization study of \citet{2004EAS....12..165B}, who faced a problem similar to ours, and also by the results presented and discussed later in this paper. \section{Sensitivity analysis} \subsection{Assumptions} Based on the previously defined criterion, we now analyze the behavior of several telescope parameters as a function of the size of the FPM (and hence of the APLC characteristics), with the main objective of exploring how the APLC configuration can be optimized for a given ELT design. One advantage of $C_{\mathscr{C}}$ is that the area of optimization in the focal plane can be well matched to the instrumental parameters. For that reason, we have limited the search area and investigated $C_{\mathscr{C}}$ only between $\rho_i = 3 \lambda/D$ at small radii and $\rho_f =100 \lambda/D$ at large radii. These limits correspond to the Inner Working Angle (the distance at which an off-axis object reaches a significant transmission) and to the high-order Adaptive Optics (AO) cut-off frequency, respectively. At radii larger than the AO cut-off frequency, the coronagraph will only have a minor effect, since atmospheric turbulence is not corrected and atmospheric speckles dominate. For the simulations presented in the next sections, we assume a circular pupil with a 30\% central obscuration. The central obscuration ratio is left as a free parameter only in section \ref{sec:obscuration}, where we evaluate its impact. The pupil stop is assumed to be identical to the entrance pupil, including spider arms \citep{2005ApJ...633..528S}. Section \ref{sec:spider}, where the impact of the spider arms' size is analyzed, assumes a 42-m telescope. Elsewhere, the simulation results do not depend on the telescope diameter. Apodizer profiles were calculated numerically with a Gerchberg-Saxton iterative algorithm \citep{GSalgo}. The pixel sampling in the focal plane is 0.1 $\lambda/D$, and the pupil is sampled with 410 pixels in diameter. When phase aberrations are considered, we adopt a wavelength of 1.6$\mu$m, corresponding to the H band in the near-infrared. \begin{center} \begin{table*} \centering \caption{Optimum FPM diameter (and hence APLC characteristics) for several obscuration sizes and criteria.} \label{obscuration} \begin{tabular}{ccc|cc} \hline \hline &\multicolumn{2}{c|}{$C_{\mathscr{C}}$ } & \multicolumn{2}{c}{Max.
throughput} \\ \cline{2-5} \\ Obscuration size (\%) & FPM ($\lambda/D$) & Throughput ($\%$) & FPM ($\lambda/D$) & Throughput ($\%$) \\ \hline 10 & 4.3 & 59.4 & 4.1 & 62.2 \\ 15 & 4.3 & 58.3 & 4.0 & 62.4 \\ 20 & 4.4 & 55.8 & 3.8 & 65.5 \\ 25 & 4.6 & 52.7 & 3.6 & 67.9 \\ 30 & 4.7 & 51.2 & 3.5 & 68.7 \\ 35 & 4.9 & 49.4 & 3.3 & 70.4 \\ \hline \end{tabular} \end{table*} \end{center} \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{critereLyotstop80.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of the FPM diameter and obscuration sizes, in the case of the APLC and classical Lyot coronagraph.} \label{centralobscuration1} \end{center} \end{figure} \subsection{Critical parameter impacts} In the following sub-sections, we study the impact of two major categories of diffraction effects. The first category deals with amplitude variations: central obscuration, spider arms, primary mirror segmentation, segment-to-segment reflectivity variation, and pupil shear (misalignment of the coronagraph stop with respect to the instrument pupil). Inter-segment gaps and other mechanical secondary supports are not considered, since they would require a finer pixel sampling of the pupil image, resulting in prohibitively large computation times on a non-parallel computer. In addition, some mechanical secondary supports can be much smaller than the main spider arms. To first approximation, their effects can be considered to be similar to the ones produced by spider arms. The second category is related to phase aberrations, which we assume to be located in the pupil plane (no instrumental scintillation). We only modeled low-order segment aberrations (piston, tip-tilt, defocus, astigmatism). Higher orders are less relevant for the optimization of the FPM size, but can have a significant impact on the coronagraphic performance. The amplitude diffraction effect of gaps is partially accounted for (at least for infinitely small gaps) by the phase transitions we generate between primary mirror segments. \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{APLCideal.eps} \caption{Radial profiles of PSFs and coronagraphic images obtained with the optimal APLC (using $C_{\mathscr{C}}$) for several obscuration sizes.} \label{centralobscuration2} \end{center} \end{figure} \subsubsection{Central obscuration} \label{sec:obscuration} \noindent The first parameter we evaluate is the central obscuration. High-contrast instruments have to deal with central obscuration ratios which typically range from 10\% to 35\% (CFHT: 35\%, HST: 33\%, VLT: 14\%). ELTs will likely have larger obscurations than current 8-m class telescopes to preserve a reasonable size of the telescope structure. In Fig. \ref{centralobscuration1}, the criterion $C_{\mathscr{C}}$ is shown for different obscuration sizes ranging from 10 to 35\%. The curves show two maxima. The first one is located near 2 $\lambda$/D and experiences a large contrast variation, while the second one (near 4 $\lambda$/D) shows a smaller dispersion. \begin{figure}[!ht] \begin{center} \includegraphics[width=8cm]{spider.eps} \caption{Pupil configurations considered here.} \label{spider1} \end{center} \end{figure} Table 1 summarizes these results and gives the position of the second maximum versus the obscuration size, both for the criterion previously mentioned and for a criterion based solely on the maximum throughput (as in Fig. \ref{transmission}).
\begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{Spider62cmApod30Critere.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of the FPM diameter and number of spider arms. Spider thickness is set to 62 cm.} \label{spider2} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{ReflectivityApod30Critere.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of the FPM diameter and reflectivity variations.} \label{reflectivity} \end{center} \end{figure} If we only consider the second maximum, which is more promising in terms of contrast and appears less sensitive, the optimal FPM diameter ranges from 4.3 to 4.9 $\lambda/D$ for obscuration ratios between 10 and 35\%. Here, our criterion $C_{\mathscr{C}}$ is more relevant than just throughput, since it is better adapted to the region of interest in the coronagraphic image and to the modification of the PSF structure. We see a non-linear increase of the optimum FPM size with the obscuration ratio because more starlight is redistributed in the Airy rings of the PSF. A solely throughput-based consideration shows the opposite behavior with a larger dispersion of the FPM size, which is not consistent with the effect on the PSF structure. However, at small obscuration sizes (10\%--15\%), maximum throughput yields a similar optimal FPM diameter to $C_{\mathscr{C}}$. \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{SpiderSizeApod30CritereOWL.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of the FPM diameter and spider arms thickness. Number of spider arms is set to 6.} \label{spider3} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{ShearPApod30Critere.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of the FPM diameter and pupil shear.} \label{shearpupil} \end{center} \end{figure} We consider this result as evidence for the relevance of our criterion $C_{\mathscr{C}}$ to optimize the FPM size (and hence the APLC characteristics) with respect to the size of the central obscuration. Moreover, the validity of our criterion is also supported by the comparison of coronagraphic PSFs using an optimized APLC in Fig.~\ref{centralobscuration2}. The optimized APLC allows for a contrast performance which is rather insensitive to the central obscuration size. \subsubsection{Spider arms} \label{sec:spider} On an ELT, the secondary mirror has to be supported by a complex system of spider arms ($\sim$ 50 cm) and cables ($\sim$ 30-60 mm) to improve stiffness. Evaluating the influence of these supports is important in the context of coronagraphy. The pixel sampling of our simulations, limited by available computer power, does not allow us to model the thinnest mechanical supports. However, the impact of these supports on the PSF structure will be similar to that of the spider arms, but at a reduced intensity level. Several configurations were considered, as shown in Fig.~\ref{spider1}. As the number of spider arms increases from 3 to 7, the contrast gets worse (but by no more than a factor of 2). The curves in Fig.~\ref{spider2} are almost parallel, indicating that the number of spider arms has no significant influence on the optimal FPM size. The second maximum of $C_{\mathscr{C}}$ peaks at 4.7 $\lambda/D$ with a small dispersion of 0.2 $\lambda/D$.
\begin{table*} \caption{APLC optimization for an obscuration of 30\%.} \centering \begin{tabular}{lcc} \hline \hline Parameters & Value range & Optimal APLC configuration (FPM range in $\lambda/D$) \\ \hline Obscuration & 30\% & 4.7 \\ Spider (arm) & 3 - 7 & 4.6 - 4.8 \\ Spider (size) & 15 - 90 cm & 4.6 - 4.8 \\ Pupil shear & 0.5 - 2 \% & 4.7 - 4.9 \\ Segment reflectivity & 0.25 - 5 \% & 4.5 - 4.7 \\ Low-order aberrations & 1 - 100 nm rms & 3.5 - 6.0 \\ Chromatism ($\Delta \lambda \slash \lambda$) & 1.4 - 5 \% & 4.7 - 4.8 \\ Chromatism ($\Delta \lambda \slash \lambda $) & 5 - 20 \% & 4.8 - 5.3 \\ \hline \end{tabular} \end{table*} \begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{AstigmApod30Critere.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of the FPM diameter and low-order aberrations.} \label{aberrations} \end{center} \end{figure} Assuming a 6-spider-arm configuration (OWL-like), we also analyzed the sensitivity to spider arm thickness from 15 cm to 93 cm (Fig.~\ref{spider3}). The increasing width of the spider arms tends to flatten the profile of $C_{\mathscr{C}}$, making the selection of an optimal FPM more difficult (or less relevant) for very large spider arms. However, for the actual spider arm size, likely of the order of 50 cm, the optimal FPM size (and hence APLC) is still 4.7 $\lambda/D$. \subsubsection{Segment reflectivity variation} \noindent The primary mirror of an ELT will be segmented because of its size, and a potential resulting amplitude effect is segment-to-segment reflectivity variation. We show the APLC optimization sensitivity for segment reflectivity variation from 0 to 5\% peak-to-valley in Fig.~\ref{reflectivity}. For this simulation, the primary mirror was assumed to consist of $\sim$750 hexagonal segments. The criterion $C_{\mathscr{C}}$ is robust for FPMs smaller than 4 $\lambda/D$. A loss of performance with reflectivity variation is observed for larger FPMs. However, the optimal FPM size remains located at 4.7 $\lambda/D$ with a small dispersion of 0.2 $\lambda/D$. \subsubsection{Pupil shear} As already mentioned above, an APLC includes several optical components: apodizer, FPM, and pupil stop. The performance of the APLC also depends on the alignment of these components. In particular, the pupil stop has to accurately match the telescope pupil image. This condition is not always satisfied, and the telescope pupil may undergo a significant mismatch, which could amount to more than 1\% of its diameter. The pupil shear is the misalignment of the pupil stop with respect to the telescope pupil image. It is an issue especially for ELTs, for which mechanical constraints are important in the design. For example, the James Webb Space Telescope is expected to deliver a pupil image for which the position is known only to about 3--4\%. Therefore, the performance of the mid-IR coronagraph \citep{2004EAS....12..195B} will be strongly affected.
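For reference, pupil shear can be modeled numerically as a simple lateral shift of the stop with respect to the pupil image. The short Python sketch below uses a toy unobscured circular pupil of our own (only the 410-pixel sampling matches the value quoted in our assumptions) and estimates the fraction of the pupil left uncovered by a stop sheared by 1\% of the diameter.
\begin{verbatim}
import numpy as np

N = 410                                  # pupil sampling used in this paper
y, x = np.indices((N, N)) - N / 2
# Toy unobscured circular pupil; the margin keeps np.roll from wrapping.
pupil = (np.hypot(x, y) <= 0.45 * N).astype(float)

def sheared(stop, frac):
    """Shift the pupil stop laterally by a fraction of the pupil diameter."""
    return np.roll(stop, int(round(frac * N)), axis=1)

leak = pupil * (1 - sheared(pupil, 0.01))  # area missed by a 1% sheared stop
print(leak.sum() / pupil.sum())
\end{verbatim}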
\begin{figure}[!ht] \begin{center} \includegraphics[width=9cm]{FilterBandpassELTcritere.eps} \caption{$C_{\mathscr{C}}$ average between 3 and 100 $\lambda/D$ as a function of FPM diameter and the filter bandpass.} \label{chromatism} \end{center} \end{figure} \begin{table}[!ht] \begin{center} \caption{Chromatism effects synthesis} \label{spectralR} \begin{tabular}{ccccc} \hline \hline $\Delta \lambda \slash \lambda$ (\%) & FPM ($\lambda/D$) & FPM$_{\lambda_{max}}$ ($\lambda/D$) & $F_1$ & $F_2$ \\ \hline 0.3 & 4.70 & 4.70 & 1.0 & 1.0 \\ 1.4 & 4.70 & 4.73 & 1.1 & 1.1 \\ 2 & 4.70 & 4.75 & 1.1 & 1.1 \\ 5 & 4.80 & 4.82 & 1.6 & 1.6 \\ 10 & 5.00 & 4.94 & 2.6 & 3.7 \\ 20 & 5.30 & 5.20 & 3.7 & 14.6 \\ 50 & 5.90 & 5.87 & 26.3 & 180.9 \\ \hline \end{tabular} \end{center} \end{table} On SPHERE, the planet-finder instrument for the VLT (2010), the pupil shear was identified as a major issue and a dedicated Tip-Tilt mirror was included in the design to preserve the alignment at a level of 0.2\% \citep{2006tafp.conf..353B}. The behavior of $C_{\mathscr{C}}$ in Fig.~\ref{shearpupil} is somewhat different from the behavior of the previous parameters. The loss of performance is significant even for small FPMs. However, the criterion still peaks at 4.7 $\lambda/D$ with a variation of about 0.2 $\lambda/D$, although above 4.5 $\lambda/D$ the curves are rather flat, indicating that a larger FPM would not improve performance. \subsubsection{Static aberrations} \begin{figure*}[!ht] \begin{center} \includegraphics[width=5cm]{APLCdesign1.eps} \includegraphics[width=5cm]{APLCdesign2.eps} \caption{Optimized apodized E-ELT apertures: telescope design 1 (left), telescope design 2 (right).} \label{design} \end{center} \end{figure*} \noindent Here, static aberrations refer to low-order aberrations on the segments of the large primary mirror. We separately investigated the effect of piston, tip-tilt, defocus and astigmatism, and found the behavior to be similar for all these aberrations. In contrast to the other defects, both the performance and the optimal FPM diameter (optimal APLC) are very sensitive to low-order aberrations. As the amplitude of aberrations increases, the dependency of $C_\mathscr{C}$ on FPM diameter becomes flatter and the optimal FPM size gets smaller (Fig.~\ref{aberrations}). A larger FPM would not decrease the performance much. For values larger than 15 nm, there is no longer clear evidence of an optimal size beyond $\sim 3.5 \lambda/D$. The performance is rather insensitive to the actual FPM size. Even though low-order aberrations strongly affect APLC performance, their presence has virtually no impact on the optimized configuration. The fairly constant performance in the presence of larger low-order aberrations indicates that they are not a relevant parameter for the optimization of the APLC. \subsubsection{Chromatism} \noindent All previous analysis was performed for monochromatic light at the wavelength $\lambda_0$. However, as with the classical Lyot coronagraph, the APLC performance should depend on the ratio between FPM size and PSF size and therefore on wavelength. Hence, the impact of chromatism on the APLC optimization must be evaluated. We note that the chromatism of the APLC can also be mitigated by a slight modification of the standard design \citep{2005PASP..117.1012A}. Figure \ref{chromatism} and Tab.
\ref{spectralR} present the results of the simulations for several filter bandpass widths ($\Delta \lambda \slash \lambda$) when using the standard monochromatic APLC. As long as the filter bandpass is smaller than 5\%, the optimal FPM size and performance are nearly the same as in the monochromatic case. The values displayed in columns 4 and 5 of Tab. \ref{spectralR} quantify the loss of contrast due to chromaticity with respect to the monochromatic case for an APLC optimized for the filter bandpass ($F_{1}$) and for the central wavelength of the band ($F_{2}$). These two factors begin to differ significantly from each other at a filter bandpass larger than 5\%. Hence, optimization of the APLC for chromatism is needed for a filter bandpass exceeding this value. An efficient way of optimizing an APLC for broad-band applications is to optimize it for the longest wavelength of the band, which leads to results that are within 0.1 $\lambda/D$ of the true optimal FPM size. This behavior can be explained by the non-symmetrical evolution of the residual energy in the coronagraphic image around the optimal FPM size at $\lambda_0$ \citep{2003A&A...397.1161S}. Another way to minimize chromaticity would be to calculate the apodizer profile for the central wavelength and only optimize the FPM diameter considering the whole bandpass. We compared the behavior of both methods for $\Delta \lambda \slash \lambda = 20\%$: they are actually very comparable in terms of performance. \section{Application to the E-ELT} \label{sec:Application} In this section, we apply the tools and results from the APLC optimization study discussed in the previous section to the two telescope designs proposed for the European-ELT. The objective is to confirm our optimization method and to produce idealized contrast profiles, which admittedly must not be confused with the final achievable contrast in the presence of a realistic set of instrumental error terms. \subsection{Starting with telescope designs} We assume a circular monolithic primary mirror of 42 meters in diameter. Segmentation errors are not taken into account, although we note that the E-ELT primary mirror consists of hexagonal segments with diameters ranging from 1.2 to 1.6 meters in its current design. There are two competing telescope designs considered: a 5-mirror arrangement (design 1) and a 2-mirror Gregorian (design 2). For our purpose, the two designs differ by their central obscuration ratios and the number of spider arms. Design 1 (Fig.~\ref{design}, left) is a 30\% obscured aperture with 6 spider arms of 50 cm, and design 2 (Fig.~\ref{design}, right) is an 11\% obscured aperture with 3 spider arms of 50 cm. These numbers are likely to change as the telescope design study progresses. Mechanical supports (non-radial cables of the secondary mirror support) and intersegment gaps are not considered for the reasons mentioned in section \ref{sec:spider}. Under these conditions, and taking into account the previous sensitivity analysis on central obscuration, spider arms, and chromatism ($\Delta \lambda \slash \lambda = 20\%$), we found optimal APLC configurations with apodizers designed for 4.8 and 4.3 $\lambda/D$ and FPM sizes of 5 and 4.3 $\lambda/D$ for designs 1 and 2, respectively.
\begin{figure*}[!ht] \begin{center} \includegraphics[width=9cm]{R5NOoptimized.eps} \includegraphics[width=9cm]{R5optimized.eps} \caption{Radial profiles of PSFs and coronagraphic images ($\lambda/\Delta \lambda = 5$) for the 2 designs considering throughput optimization (top) or $C_\mathscr{C}$ optimization (bottom).} \label{PSFcomp} \end{center} \end{figure*} \citet{2005ApJ...633..528S} has demonstrated that optimization or under-sizing of the pupil stop is not necessary with the APLC. We independently verified and confirm this result using our criterion applied to the stop rather than to the mask. \subsection{Radial contrast} \label{sec:perfELT} As already shown in section \ref{sec:obscuration}, the optimal APLC configuration with our criterion is different from the optimal configuration obtained when considering throughput as a metric. We can now demonstrate this difference using contrast profiles. Figure \ref{PSFcomp} compares the coronagraphic profiles based on throughput optimization (apodizer and FPM size are 3.5 and 4.1 $\lambda/D$ for designs 1 and 2, see Fig.~\ref{transmission}) with the ones obtained from optimization with our criterion. For design 2, the optimization with both methods leads to similar APLC configurations (4.3 and 4.1 $\lambda/D$). Accordingly, the contrast performance differs by only a factor of 3. For design 1, instead, the gain from using our criterion for the optimization is a factor of 10 in contrast. In addition, the plot shows that APLC contrast performance only weakly depends on the telescope geometry with this optimization method. This is an important result, which means that the APLC can efficiently cope with a large variety of telescope designs. \section{Conclusion} The Apodized Pupil Lyot Coronagraph is believed to be a well-suited coronagraph for ELTs and for the search for extrasolar planets with direct imaging. The high angular resolution of such large telescopes relaxes the constraints on the Inner Working Angle (IWA) of a coronagraph, which is an important issue for high contrast imaging instruments on 8-m class telescopes. Hence, coronagraphs with a relatively large IWA like the APLC present an interesting alternative to small-IWA coronagraphs such as the phase mask coronagraphs. The objective of this paper was to analyze the optimization of the APLC in the context of ELTs. We defined a criterion ($C_\mathscr{C}$) similar to the one used by \citet{2004EAS....12..165B} for the general problem of Lyot stop optimization in coronagraphy. We then analyzed the behavior of this criterion as a function of the FPM diameter in the presence of different telescope parameters. The optimal FPM is determined by the maximum value of the criterion. A sensitivity analysis was carried out for several telescope parameters: central obscuration, spiders, segment reflectivity, pupil shear, low-order static aberrations, and chromatism. Some of these parameters, such as low-order aberrations, are not relevant for APLC optimization: at reasonably large amplitudes they produce a fairly flat response of the criterion to the FPM diameter. However, ELTs are not yet defined well enough to predict the level of static aberrations coronagraphs will have to deal with. The parameter which had the largest impact on the optimum FPM diameter is the central obscuration. An obscuration ratio of 30\% leads to an optimal APLC of 4.7 $\lambda/D$.
In most cases, the optimal sizes derived for other telescope parameters are quite consistent with the one imposed by the central obscuration. The dispersion of the FPM size is no larger than 0.2 $\lambda/D$ given the range of parameters we have considered. We also demonstrated that APLC optimization based on throughput alone is not appropriate and leads to optimal FPM sizes that decrease with increasing obscuration ratio. This behavior is opposite to that derived using our criterion. The superior quality of our criterion is supported by the comparison of contrast profiles obtained with both optimization methods in sections \ref{sec:perfELT} and \ref{sec:obscuration}. Although the idealized simulations presented in this paper do not consider atmospheric turbulence and instrumental defects, they allow us to find the optimal APLC configuration and PSF contrast for each case. \citet{2006A&A...447..397C} show that the ultimate contrast achievable by differential imaging (a speckle noise suppression technique to enhance the contrast; \citealt{1999PASP..111..587R, 2000PASP..112...91M, 2003PASP..115.1363B, 2004ApJ...615..562G}) with a perfect coronagraph is not sensitive to atmospheric seeing but critically depends on static phase and amplitude aberrations. Our results therefore offer the possibility of extending this study to the more realistic case of a real coronagraph, taking into account relevant effects related to telescope properties. In addition, we have also started the development of APLC prototypes, whose characteristics were defined with the present numerical analysis. Experiments with these prototypes will be carried out during the next year in the near IR on the High Order Test-bench \citep{2006SPIE.6272E..81V} developed at the European Southern Observatory. The practical study of the APLC will also benefit from prototyping activities led by the Department of Astrophysics at the University of Nice (LUAN) and carried out for the development of SPHERE for the VLT. \begin{acknowledgements} P.M. would like to thank Pierre Riaud for helpful discussions. This activity is supported by the European Community under its Framework Programme 6, ELT Design Study, Contract No. 011863. \end{acknowledgements} \nocite{*}
\section{Introduction} \subsection{The many-worlds interpretation} Any critique of the many-worlds interpretation of quantum mechanics ought to begin by praising it. In the simplest form of the interpretation, such as that presented by Everett in 1957 \cite{everett1957}, the universe is regarded as a closed quantum system. Its state vector (Everett's ``universal wave function'') evolves unitarily according to an internal Hamiltonian. Measurements and the emergence of classical phenomena are described entirely by this evolution. ``Observables'' are simply dynamical variables described by operators. No separate ``measurement process'' or ``wave function collapse'' ideas are invoked. Thus, consider a laboratory measurement of $S_{z}$ on a spin-1/2 particle. This is nothing more than an interaction among the particle, the lab apparatus, and the conscious observer, all of which are subsystems of the overall quantum universe. Initially, the particle is in the state $\ket{\psi_{0}} = \alpha \ket{\uparrow} + \beta \ket{\downarrow}$. The apparatus and the observer are in initial states $\ket{0}$ and $\ket{\mbox{``ready''}}$ respectively. Now the particle and the apparatus interact and become correlated: \begin{equation} \ket{\psi_{0}} \otimes \ket{0} \otimes \ket{\mbox{``ready''}} \longrightarrow \Big ( \alpha \ket{\uparrow} \otimes \ket{+\mbox{$\frac{\hbar}{2}$}} + \beta \ket{\downarrow} \otimes \ket{-\mbox{$\frac{\hbar}{2}$}} \Big ) \otimes \ket{\mbox{``ready''}}, \end{equation} where $\ket{+\mbox{$\frac{\hbar}{2}$}}$ and $\ket{- \mbox{$\frac{\hbar}{2}$}}$ are apparatus states representing the two possible measurement results. The observer next interacts with the apparatus by reading its output, leading to a final state \begin{equation} \label{eq-twobranches} \alpha \ket{\uparrow} \otimes \ket{+\mbox{$\frac{\hbar}{2}$}} \otimes \ket{\mbox{``up''}} + \beta \ket{\downarrow} \otimes \ket{-\mbox{$\frac{\hbar}{2}$}} \otimes \ket{\mbox{``down''}} . \end{equation} The memory record of the observer (``up'' or ``down'') has become correlated to both the original spin and the reading on the apparatus. The two components of the superposition in Equation~\ref{eq-twobranches} are called ``branches'' or ``worlds''. Since all subsequent evolution of the system is linear, the branches effectively evolve independently. The observer can condition predictions of the future behavior of the particle on his own memory record---for example, if his memory reads ``spin up'' then he may regard the state of the spin as $\ket{\uparrow}$. No collapse has occurred; both measurement outcomes are still present in the overall state. But conditioning on a particular memory record yields a {\em relative state} of the particle that corresponds to that record. In the same way, if other observers read the apparatus or perform independent measurements of the same observable, all observers will find that their memory records are consistent. Here is another way to look at this process. Consider the dynamical variable $\oper{C}$ on the spin-observer subsystem given by: \begin{equation} \oper{C} = \proj{\uparrow} \otimes \proj{\mbox{``up''}} + \proj{\downarrow} \otimes \proj{\mbox{``down''}} . \end{equation} This variable is a projection onto the subspace of system states in which the spin state and the observer memory state agree. At the start of the measurement process, the ``expectation'' $\ave{C} = \bra{\Psi} \oper{C} \ket{\Psi} = 0$, but at the end $\ave{C} = 1$. 
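This evolution of $\ave{C}$ is easy to check numerically. The following Python sketch uses a toy model of our own devising: it drops the apparatus, keeps only the spin and a three-state observer memory, and models the measurement interaction as a permutation of basis states (hence a unitary).
\begin{verbatim}
import numpy as np

# Basis index = 3*s + m: s in {0: up, 1: down},
# m in {0: "ready", 1: "up", 2: "down"}.
alpha, beta = 0.6, 0.8                  # any amplitudes with a^2 + b^2 = 1
psi0 = np.zeros(6)
psi0[0], psi0[3] = alpha, beta          # (a|up> + b|down>) (x) |"ready">

# Measurement interaction: |up,"ready"> -> |up,"up">,
# |down,"ready"> -> |down,"down"> (a permutation, hence unitary).
U = np.eye(6)
U[[0, 1]] = U[[1, 0]]
U[[3, 5]] = U[[5, 3]]

# C projects onto states where spin and memory record agree.
C = np.zeros((6, 6))
C[1, 1] = C[5, 5] = 1

psi1 = U @ psi0
print(psi0 @ C @ psi0)                  # 0.0 before the interaction
print(psi1 @ C @ psi1)                  # 1.0 after: record and spin agree
\end{verbatim}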
The evolution of $\ave{C}$ tells us that a correlation has emerged between the spin and the memory record. Note that this does not depend on a probabilistic interpretation of the expectation $\ave{C}$. The expectation $\ave{C}$ simply indicates the relationship between the system state and eigenstates of $\oper{C}$ that are either uncorrelated ($\ave{C} = 0$) or correlated ($\ave{C} = 1$). There are many things to like about the many-worlds account. It entails no processes other than the usual dynamical evolution according to the Schr\"{o}dinger equation. It explains at least some characteristics of a measurement, such as the repeatability and consistency of the observers' records. It focuses attention on the actual physical interactions involved in the measurement process. Some details may be tricky, such as the identification of $|\alpha|^{2}$ and $|\beta|^{2}$ as observed outcome probabilities in repeated measurements \cite{bornrule}. Nevertheless, the many-worlds idea has proven to be very fruitful, for example, in motivating the analysis of decoherence processes \cite{zurek} and their role in the emergence of quasi-classical behavior in quantum systems \cite{quasi}. The essential idea of the many-worlds program was formulated by Bryce DeWitt \cite{dewitt} in the following maxim: \begin{quotation} The mathematical formalism of the quantum theory is capable of yielding its own interpretation. \end{quotation} DeWitt called this the ``EWG metatheorem'', after Everett and two other early exponents of the interpretation, John Wheeler \cite{wheeler} and Neill Graham \cite{graham}. DeWitt's claim is that the only necessary foundations for sensible interpretational statements about quantum theory are already present in the mathematics of the Hilbert space of states and the time evolution of the global system. Nothing outside of the system and its unitary evolution is required. \subsection{Two universes, two pictures} \label{subsec:twouniverses} Consider a closed quantum ``universe'', which we will call Q. System Q is composite with many subsystems. Its time evolution is unitary, so that the state at any given time is \begin{equation} \ket{\Psi(t)} = \oper{U}(t) \ket{\Psi_{0}} \end{equation} for evolution operator $\oper{U}(t)$ and initial state $\ket{\Psi_{0}}$. For convenience, we will refer to this as the ``actual'' time evolution of the system. To make our mathematical discussion straightforward, we imagine that Q is bounded in space, so that its Hilbert space $\mathcal{H}\sys{Q}$ has a discrete countable basis set. (The Hamiltonian eigenbasis would be an example of such a basis.) If we further impose an upper limit $E_{\max}$ to the allowed energy of the system, the resulting $\mathcal{H}\sys{Q}$ is finite-dimensional. Note that this scarcely limits the possible complexity of Q. The system may still contain a multitude of subsystems with complicated behavior. The subsystems may exchange information and energy. Some of the subsystems may function as ``observers'', interacting with their surroundings and recording data in their internal memory states. According to the DeWitt maxim, the initial state $\ket{\Psi_{0}}$ and time evolution operator $\oper{U}(t)$ suffice to specify a many-worlds interpretation of what happens in Q. One way to describe this is to consider a large collection of dynamical variables $\oper{A}_{1}$, $\oper{A}_{2}$, etc. These may represent particle positions, observer memory states, correlation functions, and so on.
From the time-dependent expectations $\ave{A_{k}}_{t}$ we identify processes such as measurements, decoherence, and communication.\footnote{Indeed, if the set $\{ \oper{A}_{k} \}$ is large enough, we can completely reconstruct the time evolution $\ket{\Psi(t)}$ from the expectations $\ave{A_{k}}_{t}$.} We can in principle tell what the system ``looks like'' to various observer subsystems inside Q. We next introduce a different, much simpler closed system Q$'$ consisting of three coupled harmonic oscillators. Again the Hilbert space $\mathcal{H}\sys{Q$'$}$ has a discrete countable basis, and if we further impose an upper energy limit we can arrange for $\dim \mathcal{H}\sys{Q} = \dim \mathcal{H}\sys{Q$'$}$. The two Hilbert spaces are therefore isomorphic, and there exists an isomorphism map for which the initial Q$'$ state corresponds to the initial Q state. This means we can effectively regard Q and Q$'$ as the {\em same} system with the same initial state $\ket{\Psi_{0}}$ evolving under different time evolutions $\oper{U}(t)$ and $\oper{V}(t)$. Variables $\oper{B}_{k}$ for Q$'$ (the oscillator positions and momenta, etc.) correspond under this isomorphism to operators on $\mathcal{H}\sys{Q}$. With respect to the alternate $\oper{V}(t)$ evolution, the expectations of these Q$'$ variables would be \begin{equation} \ave{B_{k}}'_{t} = \bra{\Psi_{0}} \oper{V}^{\dagger}(t) \oper{B}_{k} \oper{V}(t) \ket{\Psi_{0}}. \end{equation} These expectations would tell us ``what happens'' in Q$'$. (The actual evolution of $\ave{B_{k}}_{t}$ under the actual time evolution $\oper{U}(t)$ would, of course, be quite different.) Now consider a new set of variables in Q: \begin{equation} \tilde{\oper{B}}_{k} = \left ( \oper{U}(t) \oper{V}^{\dagger}(t) \right ) \oper{B}_{k} \left (\oper{V}(t) \oper{U}^{\dagger} (t) \right ) . \end{equation} The $\tilde{\oper{B}}_{k}$ operators are time dependent. But consider how their expectations evolve in time under the actual time evolution of Q (suppressing the time arguments for brevity). \begin{eqnarray*} \ave{\tilde{B}_{k}}_{t} & = & \bra{\Psi_{0}} \oper{U}^{\dagger} \tilde{\oper{B}}_{k} \oper{U} \ket{\Psi_{0}} \nonumber \\ & = & \bra{\Psi_{0}} \oper{U}^{\dagger} \oper{U} \oper{V}^{\dagger} \oper{B}_{k} \oper{V} \oper{U}^{\dagger} \oper{U} \ket{\Psi_{0}} \nonumber \\ & = & \bra{\Psi_{0}} \oper{V}^{\dagger} \oper{B}_{k} \oper{V} \ket{\Psi_{0}}, \end{eqnarray*} exactly the time dependence of $\ave{B_{k}}'_{t}$ under the alternate Q$'$ time evolution $\oper{V}$. In other words, with respect to these time-dependent variables, the complex system Q behaves exactly like the much simpler system Q$'$. There is nothing particularly strange about considering time-dependent observables. We have described Q and its evolution using the {\em Schr\"{o}dinger picture} \cite{asher}, in which observables are typically time-independent and system states evolve in time. But we can also use the equivalent (and only slightly less familiar) {\em Heisenberg picture}, in which time dependence is shifted to the observables.\footnote{The time-dependence of observables in the Heisenberg picture has conceptual appeal. After all, to measure a particle's spin on Monday or on Tuesday would require slightly different experimental set-ups, and so the two observables may plausibly be represented by different operators.} The system state is thus $\ket{\Psi_{0}}$ at all times but the observables are redefined as \begin{equation} \hat{\oper{A}}_{k} (t) = \oper{U}^{\dagger}(t) \oper{A}_{k} \oper{U}(t) .
\end{equation} Then $\ave{A_{k}}_{t} = \bra{\Psi (t)} \oper{A}_{k} \ket{\Psi (t)} = \bra{\Psi_{0}} \hat{\oper{A}}_{k}(t) \ket{\Psi_{0}}$. In perturbation theory, we also frequently use an {\em interaction picture}, in which the time evolution due to an unperturbed Hamiltonian $\oper{H}_{0}$ is shifted to the observables, while the interaction Hamiltonian $\oper{H}\subtext{int}$ produces changes in the system state. What we have done, therefore, is simply changed pictures. With respect to the time-dependent variables $\tilde{\oper{B}}_{k}(t)$ in the {\em Q$'$ picture}, the actual time evolution of Q exactly matches the hypothetical time evolution of Q$'$. And of course, we can generalize this idea. For {\em any} closed Q$'$ with a Hilbert space of the same dimension as $\mathcal{H}\sys{Q}$, and for any hypothetical Q$'$ time evolution $\oper{V}(t)$, we can find a set of time-dependent variables with respect to which the actual Q time-evolution looks like the alternate Q$'$ evolution. Complex universes can be made to look simple and {\em vice versa}. \subsection{Grue and bleen} Our argument calls to mind an idea from philosophy, devised in 1955 by Nelson Goodman \cite{goodman}. We begin with familiar terms {\em blue} and {\em green} describing the colors of objects in our surroundings. Now we fix a time $T$ and define new terms {\em grue} and {\em bleen} as follows. \begin{itemize} \item An object is {\em grue} if it is {\em green} before $T$ and {\em blue} after. \item An object is {\em bleen} if it is {\em blue} before $T$ and {\em green} after. \end{itemize} Goodman presented this idea to illustrate his ``new riddle of induction''. If we fix $T$ to lie in the future, then all present evidence that an object is {\em green} is also evidence that it is {\em grue}. Here, however, we are not principally concerned about inductive reasoning. It does not matter to us whether $T$ lies in the future or the past. In the quantum situation, the ordinary Q-observables $\oper{A}_{k}$ correspond to the ordinary colors {\em green} and {\em blue}. The time-dependent Q$'$-picture observables $\tilde{\oper{B}}_{k}$ correspond to the new terms {\em grue} and {\em bleen}. \begin{figure} \begin{center} \includegraphics[width=5.0in]{twoworlds.png} \end{center} \caption{Two universes. Q is complex and contains many subsystems, including those that may be regarded as observers (such as the bee). Q$'$ is extremely simple. Nevertheless, the two Hilbert spaces $\mathcal{H}\sys{Q}$ and $\mathcal{H}\sys{Q$'$}$ are isomorphic, so that Q and Q$'$ may be regarded as two pictures of the {\em same} universe.} \end{figure} We have an intuition that the terms {\em grue} and {\em bleen} are less basic than {\em green} and {\em blue}. After all, the definitions of {\em grue} and {\em bleen} are explicitly time-dependent. On the other hand, suppose we start with {\em grue} and {\em bleen} and pose these time-dependent definitions: \begin{itemize} \item An object is {\em green} if it is {\em grue} before $T$ and {\em bleen} after. \item An object is {\em blue} if it is {\em bleen} before $T$ and {\em grue} after. \end{itemize} Thinking only about the language, the best we can do is say that the {\em green-blue} system and the {\em grue-bleen} system are time-dependent {\em relative to each other}. In the same way, we could begin with the $\tilde{\oper{B}}_{k}$ description and define the Q-picture $\oper{A}_{k}$ operators as time-dependent combinations of them. Each set of observables is time-dependent with respect to the other.
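The relative character of this time dependence can be checked directly. The Python sketch below (random Hamiltonians on a toy four-dimensional Hilbert space, a construction of our own) verifies that the expectation of $\tilde{\oper{B}}_{k}$ under the actual evolution $\oper{U}(t)$ reproduces the expectation of $\oper{B}_{k}$ under the hypothetical evolution $\oper{V}(t)$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 4                                    # toy Hilbert-space dimension

def rand_herm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

H_Q, H_Qp = rand_herm(d), rand_herm(d)   # actual and alternate Hamiltonians
B = rand_herm(d)                         # a Q'-variable
psi0 = rng.normal(size=d) + 1j * rng.normal(size=d)
psi0 /= np.linalg.norm(psi0)

t = 1.7
U, V = expm(-1j * H_Q * t), expm(-1j * H_Qp * t)

# <B>'_t under the hypothetical Q' evolution V(t)
lhs = psi0.conj() @ V.conj().T @ B @ V @ psi0

# <B~>_t under the actual Q evolution U(t), with B~ = (U V+) B (V U+)
B_tilde = U @ V.conj().T @ B @ V @ U.conj().T
psi_t = U @ psi0
rhs = psi_t.conj() @ B_tilde @ psi_t

print(np.allclose(lhs, rhs))             # True: the two pictures agree
\end{verbatim}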
We can distinguish the two color systems by going outside mere language and considering the operational meaning of the terms. We can define green and blue by a measurement of, say, light wavelength. To determine whether an object is green, we can use the same operational procedure both before and after time $T$. But the procedure to determine whether the object is grue will work differently before and after $T$. It is this appeal to external facts that makes the green-blue distinction more basic and elementary than the grue-bleen distinction. What can we say about our Q and Q$'$ pictures? We might appeal to the physical measurement procedures required to measure $\oper{A}_{k}$ and $\tilde{\oper{B}}_{k}$. The procedure for measuring $\oper{A}_{k}$ is simple and time-independent, while that for measuring $\tilde{\oper{B}}_{k}$ is complicated and changes with time. But as long as we only consider measurement devices and processes within our closed quantum system, this does not suffice. $\tilde{\oper{B}}_{k}$ devices and processes would be simple and time-independent in the Q$'$ picture, while $\oper{A}_{k}$ devices and processes would be wildly time-varying in the same picture. This is a reference frame problem. In both Galilean and Einsteinian relativity, there is no natural, universal way to identify points in space at different times. Space is too smooth and uniform; it does not have intrinsic ``landmarks''. Hence, there is no natural and universal way to determine whether an object is ``at rest''. In the same way, the Hilbert space $\mathcal{H}\sys{Q}$ is also too smooth and uniform to identify state vectors and operators at different times. From within the system, we cannot determine whether a given collection of observables is time-dependent. If we cannot distinguish the Q and Q$'$ pictures from within the system, the natural thing is to appeal to hypothetical measurement devices external to Q, unaffected by our change of picture. Then $\oper{A}_{k}$ devices are objectively simpler than $\tilde{\oper{B}}_{k}$ devices. But this appeal to something {\em outside} of the closed system Q is explicitly excluded by DeWitt's maxim. We appear to be left with an inescapable dilemma. If we can only consider how the state of the system evolves, then that same history $\ket{\Psi(t)}$ can appear, with respect to different pictures, as either the complex system Q or the simple system Q$'$ {\em or any other quantum system with the same Hilbert space, undergoing any unitary time evolution whatsoever}. We cannot identify one of these pictures as the ``correct'' one without appealing to external measurement devices---that is, to measurement apparatus not treated as part of the isolated quantum system. \subsection{What is a system?} \label{subsec-whatisasystem} Since the Hilbert spaces of quite different quantum systems are isomorphic, some additional information is required to apply quantum theory in an unambiguous way. This is not a novel point. For example, David Wallace \cite{wallace} says, ``[A]bsent additional structure, a Hilbert-space ray is just a featureless, unstructured object, whereas the quantum state of a complex system is very richly structured.'' Wallace regards this additional structure as part of the specification of the quantum system in the first place. He considers two possible ways to provide this structure: a specified decomposition of the quantum system into subsystems (and thus its Hilbert space into tensor factors), or a specified set of operators of fixed meaning.
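A small numerical illustration (a toy $\mathbb{C}^{4}$ example of our own, not taken from Wallace) shows why the decomposition into subsystems is genuinely additional structure: the same vector can be entangled with respect to one tensor factorization and a product state with respect to another.
\begin{verbatim}
import numpy as np

# Bell-like state in factorization A (basis index 2a + b).
psi = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)

# Relabel the basis: a different pairing of axes into "subsystems".
P = np.eye(4)[[0, 3, 1, 2]]

def schmidt_rank(v):
    # rank 1 = product state, rank 2 = entangled (for C^2 x C^2)
    return np.linalg.matrix_rank(v.reshape(2, 2))

print(schmidt_rank(psi))      # 2: entangled in factorization A
print(schmidt_rank(P @ psi))  # 1: a product state after relabeling
\end{verbatim}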
In this view, the two universes Q and Q$'$, with sets of operators $\{ \oper{A}_{k} \}$ and $\{ \tilde{\oper{B}}_{k} \}$, are entirely different systems rather than different pictures of the same system. The rest of this paper has two aims. First, we want to pin down the nature of the additional structure that Wallace posits. We will do this by considering the problem in more generality. Section~\ref{sec-framework} presents a general framework for describing theories that include states, time evolution, and interpretational statements. Such a framework naturally entails groups of automorphisms, which we examine in Section~\ref{sec-similarities}. Some theories, including both quantum and classical mechanics, require ``frame information'' to resolve ambiguities that arise from these automorphisms. Section~\ref{sec-examples} presents several examples of our framework in action. In Section~\ref{sec-taming} we turn to our second aim, which is to use our general framework to evaluate the additional structure required for a meaningful interpretation (of the many-worlds variety or not). What is the physical nature of this frame information? In what ways might the strict many-worlds program---as embodied by DeWitt's maxim---prove inadequate? Section~\ref{sec-remarks} includes remarks and observations occasioned by our line of reasoning. \section{A general framework} \label{sec-framework} \subsection{States and time evolution} A {\em schema} for a theory has several parts. We begin with a set of {\bf states} $\mathcal{S} = \{ x, y, z, \ldots \}$. Informally, these might be definite states or, in the case of a non-deterministic theory, probability distributions over collections of definite states. To model time evolution, we introduce a sequence $(t_0, t_1, \ldots, t_{N})$ of times, where $N \geq 1$. Each time $t_{k}$ is associated with a state $x_k = x(t_{k}) \in \mathcal{S}$. The whole sequence $\vec{x} = (x_0, x_1, \ldots, x_N)$ may be termed a {\em trajectory}. Our schema includes a set of {\bf kinematically possible maps} $\mathcal{K} = \{ D, E, \ldots \}$, which are functions on the set of states: $D: \mathcal{S} \rightarrow \mathcal{S}$ for $D \in \mathcal{K}$. (To avoid a proliferation of parentheses, we will denote the action of $D$ on state $x$ as $Dx$ rather than $D(x)$.) The maps in $\mathcal{K}$ describe the evolution of the state over each interval in our time sequence. Thus, for the interval from $t_{k}$ to $t_{k+1}$, \begin{equation} x_{k+1} = D_{k+1,k} \, x_{k} \end{equation} for some $D_{k+1,k} \in \mathcal{K}$. The sequence $\vec{D} = (D_{1,0}, D_{2,1}, \ldots, D_{N,N-1})$ thus describes the time evolution over the entire sequence of time intervals. A pair $(x_0, \vec{D})$ includes an initial state $x_0 \in \mathcal{S}$ and a sequence $\vec{D} \in \mathcal{K}^{N}$ of time evolution maps; such a pair is called a specific {\em instance} of the theory. We can of course compose successive maps. In the general case we do not assume that $\mathcal{K}$ is closed under composition, so it may be that $D_{k+2,k} = D_{k+2,k+1} D_{k+1,k}$ is not in $\mathcal{K}$. But in many specific cases, $\mathcal{K}$ actually forms a group, being closed under composition and containing both the identity map 1 and inverses for every element. In such cases, we say that our theory is {\em reversible}.
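For concreteness, here is a minimal Python encoding of the schema as described so far; the names are ours, and nothing beyond the definitions above is assumed.
\begin{verbatim}
from dataclasses import dataclass
from typing import Any, Callable, Sequence

State = Any
Map = Callable[[State], State]

@dataclass
class Instance:
    """An instance (x0, D) of a theory: an initial state
    plus the maps D[k] evolving from t_k to t_{k+1}."""
    x0: State
    D: Sequence[Map]

    def trajectory(self):
        xs = [self.x0]
        for step in self.D:
            xs.append(step(xs[-1]))
        return xs                 # (x_0, x_1, ..., x_N)
\end{verbatim}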
In a reversible theory, $\mathcal{K}$ includes maps between any pair of times $t_j$ and $t_k$, where $j,k \in \{0,\ldots,N\}$: \begin{equation} \label{eq-dkjdef} D_{k,j} = \left \{ \begin{array}{ll} D_{k,k-1} \cdots D_{j+1,j} & k > j \\ 1 & k = j \\ D_{j,k}^{-1} & k < j . \end{array} \right . \end{equation} The algebraic structure of $\mathcal{K}$ is reflected in the way that maps combine. If $\mathcal{K}$ is a group, then for any $j,k,l \in \{0,\ldots,N\}$, we have \begin{equation} \label{eq-composition} D_{k,j} = D_{k,l} D_{l,j} . \end{equation} (Note that, in a reversible theory, this relation holds for any time order of $t_{j}$, $t_{k}$ and $t_{l}$.) If $\mathcal{K}$ is a group, it is not hard to generalize our schema to a continuous time variable $t$. A trajectory is a function $x(t)$ that yields a state in $\mathcal{S}$ at any time. For any two times $t_{1}$ and $t_{2}$, we have a map $D(t_{2},t_{1})$ such that $x(t_{2}) = D(t_{2},t_{1}) x(t_{1})$. These maps are related to one another by a composition relation analogous to Equation~\ref{eq-composition}: \begin{equation} D(t_{2},t_{1}) = D(t_{2},t_{3}) D(t_{3},t_{1}) . \end{equation} Everything in the schema works pretty much the same. For ease of exposition we will base our discussion on a finite sequence of discrete times $(t_{0}, \ldots , t_{N})$, leaving the straightforward generalization to continuous-time schemata for the reader. At the other end of the ``time complexity spectrum'', our later examples of the framework will involve only a single time interval from $t_0$ to $t_1$. The set $\mathcal{K}$ may still be closed in these schemata, or even have a group structure, but the composition of maps will not correspond to time evolution over successive intervals. \subsection{Interpretational statements} What is an interpretation? To give a general answer to this question is beyond the scope of this paper. We will merely assume that every theory comes equipped with a collection $\mathcal{I}$ of {\bf interpretational statements}, which are propositions about the state and/or the map of a particular instance of the theory. For example, immediately after giving the quantum state in Equation~\ref{eq-twobranches}, we stated, {\em The memory record of the observer (``up'' or ``down'') has become correlated to both the original spin and the reading on the apparatus.} This is an interpretational statement, and its truth is determined by the properties of the state in Equation~\ref{eq-twobranches}. In our abstract framework, we will not be much concerned with the {\em content} of an interpretational statement, but rather with the fact that it is a statement about elements of the mathematical formalism of our theory. Thus, a state proposition is a statement $P(x)$ about a state $x \in \mathcal{S}$, and a more general type of proposition would be $P(x_{0},\vec{D})$, referring to an initial state $x_{0} \in \mathcal{S}$ and a sequence of time evolution maps $\vec{D}$. (Notice that the more general form also encompasses propositions about states at any time $t_{k}$, since we can construct the entire state trajectory $\vec{x}$ from $x_0$ and $\vec{D}$.) Statements of both kinds may appear in $\mathcal{I}$. Whatever else an interpretation may include, it must surely entail such a set of interpretational statements; and if this set is empty or trivial, the interpretation is nugatory. An interpretational statement is either true or not true. We say ``not true'' here rather than ``false'', because it may be that a statement has an indeterminate value.
Consider a naive example. For a spin-1/2 particle, our statement $P$ is ``$S_{z} = +\mbox{$\frac{\hbar}{2}$}$.'' If the spin state is $\ket{\uparrow}$, the statement $P$ is true, inasmuch as a measurement will surely confirm it. If the spin state is $\ket{\downarrow}$, it is reasonable to call $P$ false, since its negation (``$S_{z} \neq +\mbox{$\frac{\hbar}{2}$}$'') is true in the same sense. But if the spin state is $\ket{\rightarrow}$, neither $P$ nor its negation is true. Thus, we simply say that $P$ is true for the state $\ket{\uparrow}$ and not true for other states like $\ket{\downarrow}$ and $\ket{\rightarrow}$. Without a more explicit ``theory of interpretation'', we cannot say more about the structure of $\mathcal{I}$. For example, we do not assume that the collection $\mathcal{I}$ has any particular algebraic closure properties. If $P,Q \in \mathcal{I}$, we have no warrant to declare that $\neg P$, $P \vee Q$, or $P \wedge Q$ are part of $\mathcal{I}$. \section{Similarities} \label{sec-similarities} \subsection{Simple similarities} There is one more essential element to our schema. It may be that some states in $\mathcal{S}$ are equivalent to others. That is, some states will yield exactly the same true (or not true) interpretational statements. Thus, we suppose that our schema comes equipped with a set $\mathcal{U}$ of {\em $\mathcal{K}$-similarities} (or just {\em similarities}). Each similarity is a map $V: \mathcal{S} \rightarrow \mathcal{S}$ that satisfies the following property. \begin{quote} {\bf Property S.} Both of these are true of $V$: \begin{itemize} \item $V$ is a bijection. \item $V D V^{-1} \in \mathcal{K}$ if and only if $D \in \mathcal{K}$. \end{itemize} \end{quote} We do {\em not} assume that every $V$ with this property is necessarily a similarity in $\mathcal{U}$. However, we note that if $V$ and $W$ satisfy Property S, so do $VW$ and $V^{-1}$. Thus, it is natural to suppose that the collection $\mathcal{U}$ forms a group, and we will make that assumption. Think of the $\mathcal{K}$-similarity map $V \in \mathcal{U}$ as a set of ``spectacles'' with which we examine the states in $\mathcal{S}$. Through the spectacles, the state $x$ appears to be the state $\tilde{x} = Vx$. The dynamical law that applies the kinematically possible map $D$ to $x$ appears to be a different map $\tilde{D} = VDV^{-1}$, which is also in $\mathcal{K}$: \begin{equation} \label{basiccommute} \begin{CD} x_{0} @>{V}>> \tilde{x}_{0} \\ @V{D_{1,0}}VV @VV{\tilde{D}_{1,0}}V \\ x_{1} @>{V}>> \tilde{x}_{1} \\ @V{D_{2,1}}VV @VV{\tilde{D}_{2,1}}V \\ \vdots & & \vdots \\ @V{D_{N,N-1}}VV @VV{\tilde{D}_{N,N-1}}V \\ x_{N} @>{V}>> \tilde{x}_{N} \\ \end{CD} \end{equation} The point is that $( \tilde{x}_0, \tilde{\vec{D}} )$ is an instance of our theory if and only if $( x_{0}, \vec{D} )$ is. The situation viewed through the spectacles fits the schema just as well as the situation without. The spectacles simply provide a new ``frame of reference'' for describing the state and the time evolution. If the theory is reversible, so that every $E \in \mathcal{K}$ has an inverse map $E^{-1}$, we note that every element $E \in \mathcal{K}$ automatically satisfies Property S: $E$ is a bijection, and $EDE^{-1} \in \mathcal{K}$ if and only if $D \in \mathcal{K}$. This opens the possibility that the $\mathcal{K}$-similarity group $\mathcal{U}$ might contain (among other things) every map in $\mathcal{K}$.
If $\mathcal{K} \subseteq \mathcal{U}$, we say that the $\mathcal{K}$-similarity group $\mathcal{U}$ is {\em $\mathcal{K}$-inclusive}. A $\mathcal{K}$-similarity is not at all the same thing as a dynamical symmetry of a particular instance of the theory. If $D$ is a particular dynamical map, a dynamical symmetry $V$ would satisfy $VD = DV$, which in turn implies that $VDV^{-1} = D$. Property S instead has a weaker condition, that $\tilde{D} = VDV^{-1}$ is some map in $\mathcal{K}$; but this condition must hold for every map $D \in \mathcal{K}$. From a slightly different point of view, the similarity map $V$ acts as a symmetry of the {\em sets} $\mathcal{S}$ and $\mathcal{K}$, in that $V \mathcal{S} = \mathcal{S}$ and $V \mathcal{K} V^{-1} = \mathcal{K}$. Interpretational statements must respect similarities within the schema. For instance, suppose $P(x)$ is a state proposition in $\mathcal{I}$. Then for any $V \in \mathcal{U}$, we must have $P(x) \Leftrightarrow P(Vx)$ (by which we mean that $P(x)$ and $P(Vx)$ are true for exactly the same states $x \in \mathcal{S}$). For a more general type of proposition, \begin{equation} \label{eq-similarinterp} P(x_0,\vec{D}) \Leftrightarrow P(\tilde{x}_0,\tilde{\vec{D}}) = P(Vx_0, (VD_{1,0}V^{-1}, \ldots, VD_{N,N-1}V^{-1})\,) \end{equation} for all $V \in \mathcal{U}$. Each similarity $V \in \mathcal{U}$ imposes a restriction on the possible interpretational statements in $\mathcal{I}$. Therefore, we can regard $\mathcal{I}$ and $\mathcal{U}$ as ``dual'' to one another. The larger the set of $\mathcal{K}$-similarities, the more restricted is the allowed set of interpretational statements. \subsection{Extended similarities} The similarities $V \in \mathcal{U}$ are spectacles with which we may view an instance of our theory. But it is also possible to imagine time-dependent spectacles which apply different maps at different times. This is analogous to translating from {\em blue-green} color language to {\em grue-bleen} language. What kind of time-dependent spectacles might we have? An {\em extended similarity map} is a sequence $\vec{V} = (V_0, V_1, \ldots , V_{N})$ of maps on $\mathcal{S}$. We require that this sequence satisfies the following property: \begin{quote} {\bf Property S$\sys{ext}$.} Both of these are true of all maps in $\vec{V}$: \begin{itemize} \item $V_{k} \in \mathcal{U}$. \item $V_{k+1} D V_{k}^{-1} \in \mathcal{K}$ if and only if $D \in \mathcal{K}$. \end{itemize} \end{quote} The meaning of this property can be explained by a diagram. \begin{equation} \label{extendedcommute} \begin{CD} x_{0} @>{V_0}>> \tilde{x}_{0} \\ @V{D_{1,0}}VV @VV{\tilde{D}_{1,0}}V \\ x_{1} @>{V_1}>> \tilde{x}_{1} \\ @V{D_{2,1}}VV @VV{\tilde{D}_{2,1}}V \\ \vdots & & \vdots \\ @V{D_{N,N-1}}VV @VV{\tilde{D}_{N,N-1}}V \\ x_{N} @>{V_N}>> \tilde{x}_{N} \\ \end{CD} \end{equation} Property S$\sys{ext}$ therefore requires that, for an extended similarity $\vec{V}$, $(\tilde{x}_0, \tilde{\vec{D}})$ is an instance of the theory if and only if $(x_0,\vec{D})$ is. We may regard $\vec{V}$ as a symmetry of the sets $\mathcal{S}$ and $\mathcal{K}^{N}$, in the sense that $V_{k} \mathcal{S} = \mathcal{S}$ and $V_{k+1} \mathcal{K} V_{k}^{-1} = \mathcal{K}$ for all $k$. We denote the set of extended similarities by $\mathcal{U}\sys{ext}$. We do not assume that every extended map $\vec{V}$ satisfying Property S$\sys{ext}$ must be in $\mathcal{U}\sys{ext}$.
It is interesting to note that in some schemata there are examples in which $V_{k} \in \mathcal{U}$ for all $k$, but $\vec{V}$ fails to satisfy Property S$\sys{ext}$. However, if $V$ satisfies Property S, then $(V, V, \ldots , V)$ must also satisfy Property S$\sys{ext}$. Therefore we will assume $(V, V, \ldots , V) \in \mathcal{U}\sys{ext}$ for every $V \in \mathcal{U}$. That is, time-independent spectacles are always allowed in $\mathcal{U}\sys{ext}$, and in this sense we may say that $\mathcal{U} \subseteq \mathcal{U}\sys{ext}$. We further assume that the set $\mathcal{U}\sys{ext}$ of extended similarities is itself a group. An element $\vec{V}$ in the extended similarity group $\mathcal{U}\sys{ext}$ turns one instance $(x_0,\vec{D})$ of a theory into another instance $(\tilde{x}_0,\tilde{\vec{D}})$ of the theory. But in a more fundamental sense, we should regard $(x_0,\vec{D})$ and $(\tilde{x}_0,\tilde{\vec{D}})$ merely as different {\em pictures} of the same actual situation, the one picture transformed into the other by the use of (possibly time-dependent) spectacles. Of course, the truth of an interpretational statement should not depend on the picture used to describe the instance of the theory. Thus, we require that \begin{equation} \label{eq-extsimilarinterp} P(x_0,\vec{D}) \Leftrightarrow P(\tilde{x}_0,\tilde{\vec{D}}) = P(V_{0} x_0, (V_{1}D_{1,0}V_{0}^{-1}, \ldots, V_{N} D_{N,N-1}V_{N-1}^{-1})\,) \end{equation} for each $P \in \mathcal{I}$ and $\vec{V} \in \mathcal{U}\sys{ext}$. We recognize this as just the extended version of Equation~\ref{eq-similarinterp}, and we note that it includes it as a special case. We note that any extended similarity $\vec{V}$ preserves the composition relations among the time evolution maps. Suppose for simplicity that our theory is reversible and we specify a particular sequence of evolution maps $\vec{D} = (D_{1,0}, D_{2,1}, \ldots, D_{N,N-1})$. We define the maps $D_{k,j}$ according to Equation~\ref{eq-dkjdef} and say that $\tilde{D}_{k,j} = V_{k} D_{k,j} V^{-1}_{j}$. Then the transformed set of maps satisfies a transformed version of Equation~\ref{eq-composition}, namely that \begin{equation} \tilde{D}_{k,j} = \tilde{D}_{k,l} \tilde{D}_{l,j} \end{equation} for any $j,k,l \in \{0, \ldots, N\}$. In other words, $\vec{V}$ preserves the algebraic structure of $\mathcal{K}$ that arises from time evolution over successive time intervals. \subsection{The DeWitt Principle} Our framework tells us that an interpretational system involves, not simply the set $\mathcal{I}$ of interpretational statements, but also the group $\mathcal{U}\sys{ext}$. The former includes everything that might be truthfully asserted about a physical situation. The latter tells us which different instances $(x_0, \vec{D})$ and $(\tilde{x}_{0},\tilde{\vec{D}})$ of a theory should be regarded as different pictures of the same situation. These are related, since the same interpretational statements must be true in both equivalent pictures. DeWitt's maxim says that the interpretation of quantum theory can be derived from the mathematical structure of the theory. For this to hold, we must be able to derive $\mathcal{I}$ and $\mathcal{U}\sys{ext}$ from the mathematical structure of $\mathcal{S}$ and $\mathcal{K}$. No outside elements or special assumptions need be, or should be, introduced.
Therefore, {\em every} map $V$ that satisfies Property S is a symmetry of $\mathcal{S}$ and $\mathcal{K}$, and so should be included in $\mathcal{U}$; and the same is true of every sequence $\vec{V}$ of such maps satisfying Property S$\sys{ext}$. Thus, we pose the following {\bf principle of maximal similarity}, which we may for convenience call the ``DeWitt Principle''. \begin{quote} {\bf DeWitt Principle.} For a given $\mathcal{S}$ and $\mathcal{K}$, we must choose the similarity group $\mathcal{U}$ and the extended group $\mathcal{U}\sys{ext}$ to be maximal. \end{quote} That is, \begin{itemize} \item The similarity group $\mathcal{U}$ contains every map $V$ satisfying Property S. \item The extended similarity group $\mathcal{U}\sys{ext}$ contains every sequence $\vec{V}$ of elements of $\mathcal{U}$ satisfying Property S$\sys{ext}$. \end{itemize} It is not hard to show that the maximal $\mathcal{U}$ and $\mathcal{U}\sys{ext}$, as defined, exist and are groups. When we assume that $\mathcal{U}$ and $\mathcal{U}\sys{ext}$ are maximal, we maximally constrain the set $\mathcal{I}$ of interpretational statements. This is the other side of the DeWitt Principle. If the mathematical formalism of a theory is capable of yielding its own interpretation, it follows that the only allowable interpretational statements are those that can be derived from the mathematical formalism alone. These interpretational statements must ``look the same'' through both time-independent and time-dependent similarity spectacles. Of course, as we will see, it may be that the appropriate choice of $\mathcal{U}\sys{ext}$ is not maximal. There may be additional constraints on similarities, allowing for a wider range of interpretational statements. But a non-maximal choice of $\mathcal{U}\sys{ext}$ cannot be derived from the structure of the sets $\mathcal{S}$ and $\mathcal{K}$. \subsection{Reversibility, transitivity and interpretation} Suppose we have a reversible theory, so that $\mathcal{K}$ is a group. Then the DeWitt Principle implies that every element of $\mathcal{K}$ is also a $\mathcal{K}$-similarity in $\mathcal{U}$. Thus, $\mathcal{U}$ is $\mathcal{K}$-inclusive (i.e., $\mathcal{K} \subseteq \mathcal{U}$). And in fact, we can say more. In a reversible theory, any sequence $\vec{E} = (E_0, E_1, \ldots , E_{N}) \in \mathcal{K}^{N+1}$ must be in $\mathcal{U}\sys{ext}$. Thus, $\mathcal{K}^{N+1} \subseteq \mathcal{U}\sys{ext}$. We say that the set $\mathcal{K}$ of kinematically possible maps acts {\em transitively} on the state set $\mathcal{S}$ if, for any $x,y \in \mathcal{S}$ there exists $D \in \mathcal{K}$ so that $y = Dx$. That is, any given state $x$ can be turned into any other given state $y$ by some kinematically possible dynamical evolution. Consider a reversible theory schema in which $\mathcal{K}$ acts transitively on $\mathcal{S}$. As we have seen, the DeWitt Principle implies that $\mathcal{K} \subseteq \mathcal{U}$. Any such $\mathcal{K}$-inclusive similarity group $\mathcal{U}$ must also act transitively on $\mathcal{S}$. But this has an important and baleful implication for the collection $\mathcal{I}$ of interpretational statements. Suppose $P$ is a state proposition, and consider two arbitrary states $x,y \in \mathcal{S}$. By transitivity there exists $V \in \mathcal{U}$ such that $y = Vx$. Thus $P(x) \Leftrightarrow P(Vx) = P(y)$. In other words, the only possible state propositions in $\mathcal{I}$ are those that are true for every state or for none.
There are no non-trivial state propositions in $\mathcal{I}$. The implications for the extended similarity group $\mathcal{U}\sys{ext}$ are even stronger. The DeWitt Principle applied to $\mathcal{U}\sys{ext}$ implies that $\mathcal{K}^{N} \subseteq \mathcal{U}\sys{ext}$. This means we can freely choose $\vec{V} \in \mathcal{K}^{N}$ and guarantee that $\vec{V} \in \mathcal{U}\sys{ext}$. Now choose any two states $x_{0},y_{0} \in \mathcal{S}$ and any two sequences $\vec{D},\vec{E} \in \mathcal{K}^{N}$. Since $\mathcal{K}$ acts transitively on $\mathcal{S}$, we can find $V_{0} \in \mathcal{K}$ such that $y_{0} = V_{0}x_{0}$. Furthermore, for $k \geq 1$ the map $V_{k} = E_{k,k-1} V_{k-1}D_{k-1,k} \in \mathcal{K}$, and so the sequence $\vec{V}$ forms ``time-dependent spectacles'' in $\mathcal{U}\sys{ext}$. The following diagram commutes: \begin{equation} \label{eq-anyintoany} \begin{CD} x_{0} @>{V_0}>> y_{0} \\ @V{D_{1,0}}VV @VVE_{1,0}V \\ x_{1} @>{V_1}>> y_{1} \\ @V{D_{2,1}}VV @VVE_{2,1}V \\ \vdots & & \vdots \\ @V{D_{N,N-1}}VV @VVE_{N,N-1}V \\ x_{N} @>{V_N}>> y_{N} \\ \end{CD} \end{equation} {\em Any specific instance $(x_0,\vec{D})$ of our theory can be transformed into any other specific instance $(y_0,\vec{E})$.} Therefore, the general interpretational statements $P(x_0,\vec{D})$ and $P(y_0,\vec{E})$ must be equivalent. This may be stated as our main general result: \begin{quote} {\bf Theorem.} Consider a reversible theory schema in which $\mathcal{K}$ acts transitively on $\mathcal{S}$. If the DeWitt Principle holds, then $\mathcal{I}$ contains no non-trivial statements. \end{quote} We might restate this conclusion in another way: A reversible theory in which any state could in principle evolve to any other state {\em cannot} yield its own non-trivial interpretation without additional constraints on $\mathcal{U}\sys{ext}$. \section{Examples}\label{sec-examples} In this section we will set up a few examples of theory schemata and discuss some of the properties of each. For simplicity, each example considers time evolution over a single interval of time from $t_{0}$ to $t_{1}$. \subsection{Deck shuffling} Consider a standard deck of 52 cards. The state set $\mathcal{S}$ consists of every arrangement of the cards in the deck, and a kinematically possible map is simply a permutation of the deck. All such permutations are in $\mathcal{K}$. Suppose now we divide the deck into two half-decks of 26 cards each. Every rearrangement of the whole deck is in $\mathcal{S}$. However, our kinematically possible maps include only separate rearrangements of the half-decks. Thus, if the queen of hearts starts out in half-deck \#1, it will stay there no matter what ``time evolution'' $D \in \mathcal{K}$ occurs. This, like the full-deck theory, is a reversible theory. The DeWitt Principle implies that $\mathcal{K} \subseteq \mathcal{U}$ for both theories. For the undivided deck, the permutation group acts transitively on the state set. This theory, therefore, has no non-trivial statements in $\mathcal{I}$. What about $\mathcal{U}$ and $\mathcal{U}\sys{ext}$ for the half-deck theory? In this schema there are maps in the maximal $\mathcal{U}$ that are not in $\mathcal{K}$. For instance, consider a map $X$ on states that exchanges the two half-decks. This is not in $\mathcal{K}$, but it does satisfy Property S since both $XDX^{-1}$ and $X^{-1}DX$ are half-deck shuffles. (The two half-decks are exchanged twice.) From the DeWitt Principle, both $X$ and the identity map $1$ are in $\mathcal{U}$.
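As a concrete illustration of this check, the following Python sketch (our own; the encoding of states as tuples of card labels and all helper names are assumptions made purely for illustration) verifies that conjugating a random half-deck shuffle by the exchange map $X$ again yields a half-deck shuffle, which is the content of Property S here.
\begin{verbatim}
# Sketch (our own encoding): a state is a tuple of 52 card labels; a
# kinematically possible map permutes positions within each half-deck.
import random

HALF = 26

def random_half_deck_shuffle():
    """A map D in K: independent permutations of the two half-decks."""
    perm = (random.sample(range(HALF), HALF)
            + random.sample(range(HALF, 2 * HALF), HALF))
    return lambda s: tuple(s[perm[i]] for i in range(2 * HALF))

def X(s):
    """The candidate similarity: exchange the two half-decks."""
    return s[HALF:] + s[:HALF]

def in_K(f):
    """True if f keeps every card in its original half-deck."""
    image = f(tuple(range(2 * HALF)))
    return set(image[:HALF]) == set(range(HALF))

D = random_half_deck_shuffle()
XDX = lambda s: X(D(X(s)))      # X D X^{-1}, since X is its own inverse
assert in_K(D) and in_K(XDX)    # Property S holds for this D
\end{verbatim}
Repeating the check for many random $D$ gives the same result, in line with the observation that the two half-decks are simply exchanged twice.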
However, the sequence $\vec{V} = (1,X)$ does not satisfy Property S$\sys{ext}$ and therefore is not in $\mathcal{U}\sys{ext}$. In the half-deck theory, $\mathcal{K}$ is a group but it does not act transitively on $\mathcal{S}$. The divided deck with separate half-deck permutations does potentially have non-trivial statements in $\mathcal{I}$. For example, the statement ``All of the jacks are in the same half-deck'' will not change its truth value if the half-decks are reshuffled or exchanged. Such a statement expresses a property that may be the basis for an interpretational statement. \subsection{Symbolic dynamics} A very interesting example arises from {\em symbolic dynamics}. In symbolic dynamics, the states are bi-infinite sequences of symbols from a finite alphabet. The set of allowed sequences may be constrained by some rule; for instance, we may be restricted to binary sequences that never include more than two 1's in succession. The particular example we will consider includes all binary sequences in $\mathcal{S}$. This is known in the literature as the ``full shift'' and is the symbolic dynamics associated with the ``baker's map'' on the unit square. The dynamical maps are finite left or right shifts of the sequences in $\mathcal{S}$. There are thus two reasonable choices for $\mathcal{K}$. First, $\mathcal{K}$ might contain only the elementary map $\sigma$ that shifts the sequence by one place: given a sequence $\vec{x}$, $(\sigma \vec{x})_{i} = x_{i+1}$. Second, we might posit that $\mathcal{K}$ includes all finite shifts, so that $\mathcal{K} = \{ \ldots, \sigma^{-1}, 1, \sigma, \sigma^{2}, \ldots \}$. This amounts to assuming that the underlying time evolution can occur at any finite speed, so that an arbitrary number of elementary shifts in either direction may occur within our given time interval. We will make the second choice, which makes $\mathcal{K}$ a group and the theory reversible. Thus, under the DeWitt Principle, all the shifts in $\mathcal{K}$ are also similarities in the maximal group $\mathcal{U}$. This maximal $\mathcal{U}$ includes many other maps as well. For example, $\mathcal{U}$ contains the map $\beta$ that complements the sequence: $(\beta \vec{x})_{i} = \bar{x}_{i}$, where $\bar{0} = 1$ and $\bar{1} = 0$. It also contains the reflection map $\rho$: $(\rho \vec{x})_{i} = x_{-i}$. However, $\mathcal{U}$ cannot contain any map $V$ that takes a constant sequence to a non-constant sequence. Let us prove this assertion. Our definition of the similarity group $\mathcal{U}$ for symbolic dynamics implies the following: \begin{quotation} \noindent If $V \in \mathcal{U}$, then for all $n \in \mathbb{Z}$ there exists $m \in \mathbb{Z}$ such that $V^{-1} \sigma^{n} V = \sigma^{m}$, or equivalently $\sigma^{n} V = V \sigma^{m}$. \end{quotation} We will use the contrapositive of this fact. \begin{quotation} \noindent If there exists $n \in \mathbb{Z}$ such that for all $m \in \mathbb{Z}$ we have $\sigma^{n} V \neq V \sigma^{m}$, then $V \notin \mathcal{U}$. \end{quotation} Now consider the constant sequence $\vec{b} = \ldots bbbb \ldots$, and suppose $V \vec{b}$ is not constant. Then there exists $n \in \mathbb{Z}$ such that $\sigma^{n} V \vec{b} \neq V \vec{b}$. But for any $m \in \mathbb{Z}$, $\vec{b} = \sigma^{m} \vec{b}$, and so $\sigma^{n} V \vec{b} \neq V \sigma^{m} \vec{b}$. Thus $\sigma^{n} V \neq V \sigma^{m}$, and hence $V \notin \mathcal{U}$. The similarity group $\mathcal{U}$ does not act transitively on $\mathcal{S}$.
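The membership claims for $\beta$ and $\rho$ are easy to spot-check numerically against the condition displayed above. In the sketch below (our own illustrative encoding: a bi-infinite sequence is represented as a function $\mathbb{Z}\rightarrow\{0,1\}$, and equality is tested on a finite window, which can of course only falsify an identity, not prove it), $\beta$ satisfies the condition with $m = n$ and $\rho$ satisfies it with $m = -n$.
\begin{verbatim}
# Sketch (our own encoding): a bi-infinite binary sequence is a
# function from Z to {0, 1}; the maps act by re-indexing.
def shift(x, n=1):                 # (sigma^n x)_i = x_{i+n}
    return lambda i: x(i + n)

def complement(x):                 # (beta x)_i = 1 - x_i
    return lambda i: 1 - x(i)

def reflect(x):                    # (rho x)_i = x_{-i}
    return lambda i: x(-i)

x = lambda i: int(i % 3 == 0)      # an arbitrary test sequence
window = range(-20, 21)
same = lambda a, b: all(a(i) == b(i) for i in window)

# sigma beta = beta sigma, i.e. beta satisfies the condition with m = n:
assert same(shift(complement(x)), complement(shift(x)))
# sigma rho = rho sigma^{-1}, i.e. rho satisfies it with m = -n:
assert same(shift(reflect(x)), reflect(shift(x, -1)))
\end{verbatim}
The exact relations $\sigma\beta = \beta\sigma$ and $\sigma\rho = \rho\sigma^{-1}$ also follow directly from the index arithmetic shown in the comments.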
Since $\mathcal{U}$ does not act transitively, even if we impose the DeWitt Principle the statements in $\mathcal{I}$ may still include nontrivial statements like ``The sequence is constant,'' which retain their truth value under shifts, reflection, complementation, etc. \subsection{Classical Hamiltonian dynamics} Suppose we have a classical system described by a phase space with $n$ real coordinates $q_{k}$ and $n$ associated momenta $p_{k}$. To make things a bit simpler, we can shift our time coordinate so that $t_{0} = 0$ and $t_{1} = \tau$. The allowed time evolutions in $\mathcal{K}$ are the ``Hamiltonian maps'' that result from a (possibly time-dependent) Hamiltonian function $H(q_{k}, p_{k}, t)$ acting over the time interval ($t=0$ to $t=\tau$), so that \begin{equation} \dot{p}_{k} = \frac{dp_{k}}{dt} = - \frac{\partial H}{\partial q_{k}} \qquad \mbox{and} \qquad \dot{q}_{k} = \frac{dq_{k}}{dt} = \frac{\partial H}{\partial p_{k}} . \end{equation} Two maps can be composed as follows. Suppose we have maps $D_{1}$ and $D_{2}$, which are produced by Hamiltonian functions $H_{1}(q_{k},p_{k},t)$ and $H_{2}(q_{k},p_{k},t)$ controlling the dynamics over the time interval $0$ to $\tau$. Then we can construct a new map $D_{21}$ via the following Hamiltonian: \begin{equation} H_{21}(q_{k},p_{k},t) = \left \{ \begin{array}{ll} 2H_{1}(q_{k},p_{k},2t) & 0 \leq t \leq \tau/2 \\ 2H_{2}(q_{k},p_{k},2t-\tau) & \tau/2 < t \leq \tau . \end{array} \right . \end{equation} This will cause the system to evolve according to a ``two times faster'' version of $H_{1}$ for the first half of the time interval, and a ``two times faster'' version of $H_{2}$ for the second half of the interval. The resulting change in state will simply be the map $D_{21} = D_{2} D_{1}$. This theory is reversible, since the evolution by $H(q_{k}, p_{k}, t)$ can be exactly reversed by the Hamiltonian $-H(q_{k}, p_{k}, \tau-t)$. Thus the maximal $\mathcal{U}$ includes all of $\mathcal{K}$, and potentially many other maps. The set of Hamiltonian maps also acts transitively on the classical phase space. Given any two points $(q_{k},p_{k})$ and $(q_{k}',p_{k}')$, it is not hard to write down a Hamiltonian function that evolves one into the other in the time interval from $0$ to $\tau$. Thus, if the DeWitt Principle holds, $\mathcal{I}$ contains no non-trivial interpretational statements. \subsection{Unitary quantum mechanics} In quantum theory, the states in $\mathcal{S}$ are vectors $\ket{\psi}$ of unit norm in a Hilbert space $\mathcal{H}$. As before, we take $\dim \mathcal{H}$ to be finite, though perhaps extremely large. The kinematically possible maps $\mathcal{K}$ include all unitary operators on $\mathcal{H}$. All such operators can be realized by evolving the state vector via the Schr\"{o}dinger equation using the Hamiltonian operator $\oper{H}(t)$: \begin{equation} i \hbar \frac{d}{dt} \ket{\psi(t)} = \oper{H}(t) \ket{\psi(t)} \qquad \Longrightarrow \qquad \ket{\psi(t_{1})} = \oper{U} \ket{\psi(t_{0})} . \end{equation} Since this theory is reversible, the maximal similarity group $\mathcal{U}$ includes all of the unitary operators in $\mathcal{K}$. The unitary operators also act transitively on the unit vectors in a Hilbert space $\mathcal{H}$. Thus, the DeWitt Principle excludes all non-trivial interpretational statements from $\mathcal{I}$. From these examples we may draw a general lesson. Some theories have non-trivial statements whose truth value is unchanged by any similarity, even when $\mathcal{U}$ and $\mathcal{U}\sys{ext}$ are maximal.
In this way, it is possible that ``the mathematical formalism'' of a theory could yield ``its own interpretation''. But this is impossible for many interesting theories, including both classical Hamiltonian dynamics and unitary quantum mechanics. \section{Taming quantum similarities?}\label{sec-taming} Suppose we have a reversible theory schema in which $\mathcal{K}$ acts transitively on $\mathcal{S}$. Under the DeWitt Principle, the unlimited similarity groups $\mathcal{U}$ and $\mathcal{U}\sys{ext}$ are too big to admit non-trivial interpretational statements in $\mathcal{I}$. Therefore, any meaningful interpretation for the theory will require us to limit the similarity groups in some way. We must either have $\mathcal{K} \not\subseteq \mathcal{U}$ or $\mathcal{K}^{N} \not\subseteq \mathcal{U}\sys{ext}$, or both. This is precisely the ``additional structure'' posited by Wallace \cite{wallace}, discussed in Subsection~\ref{subsec-whatisasystem} above. The basis for a limitation of this kind cannot be found in the mathematical formalism of $\mathcal{S}$ and $\mathcal{K}$. Any such external limitation will therefore contravene our version of the DeWitt Principle. It will be useful here briefly to describe a couple of plausible ``non-DeWitt'' limitations on $\mathcal{U}$ and $\mathcal{U}\sys{ext}$ for the example of unitary quantum mechanics over a single time interval. \subsection{Subsystem decomposition} First, suppose $\mathcal{H}$ can be decomposed as a tensor product of smaller spaces: $\mathcal{H} = \mathcal{H}\sys{1} \otimes \mathcal{H}\sys{2} \otimes \cdots \otimes \mathcal{H}\sys{n}$. (This is one of the possibilities mentioned by Wallace.) Each $\mathcal{H}\sys{k}$ represents the state space of a subsystem of the whole quantum system. This does not by itself limit the kinematically possible time evolutions in $\mathcal{K}$, since the subsystems might interact with one another in an arbitrary way. But if we take the subsystem decomposition as given, we may plausibly restrict our similarities to operators of the form: \begin{equation} \label{subsystemsimilarity} V = V\sys{1} \otimes V\sys{2} \otimes \cdots \otimes V\sys{n} . \end{equation} Our similarity spectacles can modify the states of the individual subsystems, but they cannot mix the subsystems together. In this case, even though $\mathcal{K}$ acts transitively on $\mathcal{S}$, the similarity group $\mathcal{U}$ does not. This restriction on $\mathcal{U}$ (and hence $\mathcal{U}\sys{ext}$) allows for many non-trivial interpretational statements in $\mathcal{I}$. For example, consider the state proposition $P(x)$ = ``In state $x$, subsystems 1 and 2 are entangled.'' Since these restricted similarities do not mix subsystems, this statement has the same truth value, regardless of what similarity spectacles are applied to the state. We must remember, however, that there are infinitely many tensor product decompositions of $\mathcal{H}$ \cite{meronomic}. That is, we can decompose a composite system into subsystems in an unlimited number of ways. States that are entangled with respect to one decomposition may not be entangled with respect to another. For instance, consider a system with $\dim \mathcal{H} = 4$ that can be regarded as a pair of qubits, labeled 1 and 2.
This pair could be in one of the four entangled ``Bell states'': \begin{equation} \begin{array}{l} \ket{\Phi_{\pm}\sys{12}} = \frac{1}{\sqrt{2}} \left ( \ket{0\sys{1}} \otimes \ket{0\sys{2}} \pm \ket{1\sys{1}} \otimes \ket{1\sys{2}} \right ) \\[1ex] \ket{\Psi_{\pm}\sys{12}} = \frac{1}{\sqrt{2}} \left ( \ket{0\sys{1}} \otimes \ket{1\sys{2}} \pm \ket{1\sys{1}} \otimes \ket{0\sys{2}} \right ) . \end{array} \end{equation} On the other hand, there exists an entirely different decomposition of the system into qubits designated A and B, with respect to which these are product states: \begin{equation} \begin{array}{lcl} \ket{\Phi_{+}\sys{12}} = \ket{\Phi\sys{A}} \otimes \ket{+\sys{B}} & \quad & \ket{\Psi_{+}\sys{12}} = \ket{\Psi\sys{A}} \otimes \ket{+\sys{B}} \\ \ket{\Phi_{-}\sys{12}} = \ket{\Phi\sys{A}} \otimes \ket{-\sys{B}} & \quad & \ket{\Psi_{-}\sys{12}} = \ket{\Psi\sys{A}} \otimes \ket{-\sys{B}} . \end{array} \end{equation} Subsystem decompositions are necessary to describe many important processes. For example, decoherence processes depend on the decomposition of the whole system into a subsystem of interest and an external environment. We must therefore ask, where does a special subsystem decomposition come from? Neither the set of possible states $\mathcal{S}$ nor the set $\mathcal{K}$ of kinematically possible maps picks out a particular decomposition. It must come from somewhere else. Non-trivial interpretational statements about entanglement are only possible once a preferred decomposition is specified, by whatever means. From the point of view espoused by Wallace \cite{wallace}, the subsystem decomposition is simply a {\em given} for a particular physical situation. The mathematical formalism of quantum theory specifies $\mathcal{S}$ and $\mathcal{K}$ {\em and} a similarity group $\mathcal{U}$ that respects the preferred subsystem decomposition. The question of the physical basis for this decomposition---its origin and representation in the state and dynamics of the system of interest---simply cannot arise. As Wallace himself points out, however, this decomposition is itself the real source of the complexity of the quantum world. If we allow ourselves to invoke a hypothetical outside observer, it is easy to see how a preferred decomposition could emerge. The subsystems in the special decomposition correspond to different ways that the observer can access the system of interest. {\em This} sort of control or measurement interaction affects {\em this} subsystem, {\em that} sort affects {\em that} subsystem. The decomposition emerges from the nature of the devices that implement these operations. But these devices do {\em not} reside in the system of interest, and their intervention means that the system is no longer isolated. Subsystem decomposition is a special type of quantum reference frame information, called {\em meronomic} information \cite{meronomic}. We will briefly discuss the role of quantum reference frames in Subsection~\ref{ssec-qrfs} below. \subsection{Time-independent spectacles} Here is another potential limitation, this one on the extended similarity group $\mathcal{U}\sys{ext}$. We allow any unitary map $V \in \mathcal{U}$, but we declare that the only elements of $\mathcal{U}\sys{ext}$ are those of the form $(V, V)$. Only ``time-independent spectacles'' are allowed; no ``grue-bleen'' pictures are permitted. In this case, $\mathcal{U}$ acts transitively on $\mathcal{S}$, and only trivial state propositions $P(x)$ are possible in $\mathcal{I}$. 
However, there are non-trivial general propositions in $\mathcal{I}$. For example, consider the statement $Q(x,D) = $ ``State $x$ is a fixed point of dynamics $D$; that is, $Dx = x$.'' If we apply the (time-independent) similarity map $V$ to turn instance $(x,D)$ into $(\tilde{x}, \tilde{D})$, we find that $\tilde{D} \tilde{x} = VDV^{-1} V x = V D x = V x = \tilde{x}$. The statement $Q(x,D)$ might be true or not---it is not trivial---but in any case $Q(x,D) \Leftrightarrow Q(\tilde{x}, \tilde{D})$. Even for a schema with a single time interval, we are effectively dealing with {\em two} sets of states: $\mathcal{S}_{0}$ at $t_{0}$ and $\mathcal{S}_{1}$ at $t_{1}$. These are of course both isomorphic to $\mathcal{S}$. One connection between the sets is the dynamical evolution $D \in \mathcal{K}$, which indicates which $x_{0} \in \mathcal{S}_{0}$ evolves to $x_{1} \in \mathcal{S}_{1}$. To claim that our spectacles are ``time-independent'' means that we have another canonical isomorphism between the two, which lets us identify which states in $\mathcal{S}_{0}$ are taken to be {\em identical} to other states in $\mathcal{S}_{1}$. We might denote this canonical isomorphism by the symbol 1, but this hides the fact that there are {\em infinitely many} possible isomorphisms between the two sets. To say unambiguously that a state at $t_{0}$ is the same state as another at $t_{1}$, or to define some spectacles as ``time-independent'', we must invoke this second way (besides the time evolution map $D \in \mathcal{K}$) to link together $\mathcal{S}_{0}$ and $\mathcal{S}_{1}$. We might, of course, simply argue that this link between $\mathcal{S}_{0}$ and $\mathcal{S}_{1}$ is part of the {\em definition} of the system of interest. But if we do not regard this answer-by-definition as satisfactory, the question remains: What is the physical origin of such a link, which is required to make the needed restrictions on $\mathcal{U}\sys{ext}$? If the quantum system is truly isolated, no satisfactory answer is possible, since $D$ itself describes how all parts of the state evolve, and thus expresses everything about the dynamical connection between times $t_{0}$ and $t_{1}$. But once again, a hypothetical outside observer can provide a plausible answer. The external apparatus of the observer can allow us to define what it means for a state to remain the same over time. In effect it provides a fixed reference frame for the Hilbert space of states. Such an explanation seems natural, but of course it invokes an observer that is {\em not} treated as part of the isolated quantum-mechanical system. It runs counter to the letter and spirit of DeWitt's maxim. \subsection{Quantum reference frames} \label{ssec-qrfs} Ours is essentially a reference frame problem, so it is natural to ask whether the existing theory of quantum reference frames \cite{qref} can help resolve it. Unfortunately, it cannot. In quantum reference frames, we begin with an abstract symmetry group $\mathcal{G}$. Any system is made up of elementary subsystems, each of which has its own unitary representation of $\mathcal{G}$. The symmetry element $g \in \mathcal{G}$ is represented by the unitary operator \begin{equation} \oper{V}_{g} = \oper{V}\sys{1}_{g} \otimes \oper{V}\sys{2}_{g} \otimes \cdots \otimes \oper{V}\sys{N}_{g} \end{equation} for subsystems 1, \ldots, N. These operators are dynamical symmetries for the system, so that the only available operations are symmetric ones, those that commute with $\oper{V}_{g}$.
Nevertheless, if part of the system is in an asymmetric state, we can use that state as a resource to perform asymmetric operations on other parts of the system. This asymmetric resource state constitutes a quantum reference frame. To take an example, suppose our subsystems are spin-1/2 particles and our symmetry group $\mathcal{G}$ is the set of rotations in 3-D space. Each spin has its own $SU(2)$ representation of this group. We can only perform rotationally invariant operations on the spins. A measurement of $\oper{S}\sys{1}_{z}$ on spin \#1 thus seems out of the question, since we cannot {\em a priori} specify the $z$-axis. However, suppose the remaining N-1 spins are provided in the state $\ket{\uparrow\sys{k}}$, aligned with the (unknown) $z$-axis. Then we can use these extra spins to perform a global rotationally invariant operation that approximates an $\oper{S}\sys{1}_{z}$ measurement on the first spin. We have used the asymmetric $\ket{\uparrow\sys{k}}$ states as a quantum reference frame resource. The decomposition of a quantum system into subsystems can also be described as a quantum reference frame problem \cite{meronomic}. For example, suppose we consider some quantum systems with $\dim \mathcal{H} = 4$ (called ``tictacs'' in \cite{meronomic}), and we wish to specify a particular subsystem decomposition for these into qubit pairs. We can do this by supplying additional tictacs in a special ``asymmetric'' state that encodes the subsystem division. For example, suppose we are considering a series of tictacs in state $\ket{\Phi}$, and we wish to estimate the Schmidt parameter of the entangled state for a particular qubit decomposition. We can accomplish this with the assistance of a supply of tictac pairs in the resource state $\ket{\Psi_{-}\sys{13}} \otimes \ket{\Psi_{-}\sys{24}}$ (where the first tictac is made up of qubits \#1 and \#2 and the second is made up of \#3 and \#4). If we specify how to decompose a particular system into subsystems, we say that we have provided {\em meronomic} frame information. We therefore see that meronomic information for dividing tictacs into qubits can be regarded as a kind of quantum information, information that can in principle be represented by the state of quantum systems. The symmetry group $\mathcal{G}$ (or more precisely its unitary representation $\{ \oper{V}_{g} \}$) is somewhat analogous to our similarity group $\mathcal{U}$. While the symmetry element $g$ remains unknown, we can only make $\mathcal{G}$-invariant statements about our system. Notice that if we add new subsystems to our system, we do not actually enlarge the symmetry group. The symmetry group for N spins is still just a representation of $SU(2)$. Informally, we may say that the ``symmetry frame problem'' stays essentially the same when we enlarge the system, but the additional pieces may provide asymmetric states as resources to help resolve the problem. However, under the DeWitt Principle, the similarity group $\mathcal{U}$ for N spins contains all of $U(2^{\mbox{\tiny N}})$, the full set of unitary operators on the Hilbert space for the spins. The ``similarity frame problem'' gets {\em worse} as we add spins, not better. Even if we are somehow granted the subsystem decomposition between the spins, so that the similarity group contains $U(2) \otimes U(2) \otimes \cdots \otimes U(2)$, the state of the remaining N-1 spins can provide {\em no information} about the similarity frame of spin \#1. This problem is already present for meronomic frame information.
We can provide quantum resources for specifying how tictacs can be divided into qubits, but this protocol presumes that the decomposition of the world into tictacs is already given. That decomposition can be encoded into states of even larger systems, but at every stage we must presume the decomposition of a bigger universe into larger chunks. The meronomic frame problem gets worse as we introduce more quantum resources to resolve it. \section{Remarks}\label{sec-remarks} We have avoided giving a formal definition of the ``interpretation'' of a theory. But informally, we might say that an interpretation is a set of rules for extracting meaning from the mathematical formalism of a theory. In quantum mechanics, the formalism includes a global quantum state that evolves unitarily. The many-worlds interpretation claims to extract from this formalism various meaningful statements about processes and correlations, including observations made by observer subsystems. The problem is that any mathematical framework of states and time evolution maps ($\mathcal{S}$ and $\mathcal{K}$) entails a group of automorphisms, which we have called ``similarities''. These similarities may be time-independent, or they may be time-dependent (like the shift from {\em green/blue} color language to {\em grue-bleen} color language). When viewed through the spectacles of a similarity transformation, one particular instance of a theory is transformed into another. In some cases---including unitary quantum mechanics---{\em any} instance can be transformed into {\em any} other. The complex universe Q of Section~\ref{subsec:twouniverses} seems very different from the simple universe Q', and any interpretational approach that cannot distinguish them is plainly inadequate. Yet the two universes are related by a similarity transformation of the underlying theory---they are, in effect, two pictures of the {\em same} universe. How is our interpretation to distinguish them? The only way to fix this problem is to impose a restriction on the set of similarities. If we regard quantum theory as a pragmatic set of rules that an observer applies to analyze a limited, external system, then such a restriction is reasonable. It may arise, not from anything ``inside'' the system itself, but from the relationship between the observer and the system. The observer may well insist on this additional structure before applying the theory. But the many-worlds program requires that we regard quantum theory as a description of an entire universe that includes the observer. Recall that Everett titled his detailed account ``The Theory of the {\em Universal} Wave Function'' (\cite{everett1957}, emphasis ours). We are left with a quandary. We must appeal to additional ``frame'' information beyond $\mathcal{S}$ and $\mathcal{K}$ in order to apply quantum theory in a meaningful way. This information is not quantum information---that is, information residing in the state of the system of interest. The interpretational frame is not a quantum reference frame. But if we simply require this frame information on pragmatic grounds, as a mere prerequisite for applying the theory, we have forfeited one of the central motivations of the many-worlds interpretation. Inasmuch as the many-worlds program aims to implement DeWitt's maxim---that the mathematical formalism of quantum mechanics can yield its own interpretation---that program fails. The reader may wonder whether this is simply a new type of many-worlds situation.
Perhaps every different possible ``picture'' of an evolving quantum system is equally meaningful, and a full interpretation embraces them all. But this will not do. The ``worlds'' represented in a quantum state correspond to distinct branches or superposition components of the global quantum wave function. The different branches evolve independently according to a given time evolution $\oper{U}(t)$. This allows us to make conditional predictions, e.g., ``Given that the observer's record of the previous spin measurement is that $S_{z} = + \mbox{$\frac{\hbar}{2}$}$, the next measurement will yield the same result.'' But the many-pictures idea supports no sort of predictability at all. All possible time-evolutions, including those with wildly varying Hamiltonians $\oper{H}(t)$, are equally admissible pictures of the same universe. We cannot use the past behavior of the universe, or our present records of that behavior, to make any reliable prediction of future events. A many-pictures approach can yield no meaningful interpretation. We have seen some simple theories (e.g., symbolic dynamics) in which non-trivial interpretational statements are possible even with maximal similarity groups $\mathcal{U}$ and $\mathcal{U}\sys{ext}$. On the other hand, the same difficulties do arise in classical Hamiltonian mechanics. This has not usually been recognized as a problem because the ordinary classical dynamical variables---for instance, the relative positions of particles in space---are generally assumed to have immediate physical meanings. Only with the introduction of quantum mechanics are interpretational issues recognized. Obviously, we are able to use both classical and quantum mechanics to analyze the behavior of systems, extracting meaningful interpretational statements. We resolve the similarity problem, just as we resolve the {\em grue-bleen} color language problem, by appealing to objects and procedures that are not contained within the system of interest. In this view, we always interpret quantum mechanics by appealing, implicitly or explicitly, to sectors of the universe that are not treated as parts of the quantum system. In so doing, we presume that these external entities do not themselves have interpretational ambiguities. Their dynamical variables have immediate physical meaning; their reference frames for subsystem decomposition and time evolution are given. They provide our frame for interpreting the quantum physics of the system of interest. And this is true even if we formally adopt a many-worlds view of the system and its behavior. Or to put the same point another way, {\em a truly isolated quantum system has no interpretation.} In this paper we have not proposed or endorsed any particular interpretation of quantum mechanics. Many interpretations seem to offer valuable insights; none of them seem entirely satisfactory. Our point is simply that any successful interpretation---any interpretation that generates non-trivial interpretational statements about a theory---must somehow limit the similarity groups $\mathcal{U}$ and $\mathcal{U}\sys{ext}$ for that theory. However, the mere mathematical structure of Hilbert space and unitary operators does not appear to offer a way to do this. We are fully in agreement with Wallace's cautionary remark about ``additional structure''. Without a resolution of the quantum ``grue-bleen'' problem, no meaningful interpretation is possible.
The traditional ``Copenhagen'' interpretation of quantum mechanics relies on a conceptually independent macroscopic ``classical'' domain \cite{copenhagen}. The interaction of subsystems becomes a measurement when the measurement record is irreversibly amplified into this domain. The quantum evolution of an isolated system has no meaning except that given by the possible results of such measurement processes. As John Wheeler said, ``No elementary phenomenon is a phenomenon until it is an observed phenomenon.''\cite{nophenomenon} Thus, although we do not defend any particular interpretation, our considerations here lead us toward a Copenhagen-style point of view. In some theories, including quantum mechanics, we simply cannot construct a viable interpretation of a system based only on the states and dynamical evolution of the system itself. The physical basis for any interpretation must lie outside the system---not necessarily as a separate ``classical'' domain, but as a domain that is somehow excluded from the similarity transformations implicit in the mathematical formalism of the theory. An analogy to our situation may perhaps be found in axiomatic set theory. Given any set $X$, a larger one can be found (e.g., by forming the power set $\mathcal{P}(X)$). Thus, there is no upper limit to the size of the objects describable in the theory. However, the collection of all sets is not a self-consistent set. The ``universe'' of set theory is not an object within the theory \cite{settheory}. Perhaps something similar holds for physical theories like quantum mechanics. There is no fundamental limit to the size of the system that can have a non-trivial interpretation. Even a large system could be embedded in a still larger system that provides the necessary interpretational frame. If we in turn wish to treat the larger system within the theory, we can (in principle) embed it in a simply enormous ``super-system'' to fix its frame. However, it is not possible to have a non-trivial interpretation for a quantum system that includes the entire universe. The authors gratefully acknowledge many helpful comments from Chris Fuchs, Rob Spekkens, Bill Wootters, Austin Hulse, and Fred Strauch. They are of course not responsible for the remaining shortcomings of this paper.
\section{Introduction} In this paper we consider linear inverse problems of the form $Ax = b$ with $A\in \mbbR^{m\times n}$, $x\in\mbbR^n$ and $b\in\mbbR^m$. Here, the right hand side $b$ is the perturbed version of the unknown exact measurements or observations $b_{ex}$, that is, $b = b_{ex} + e$ with $e\sim\mc{N}(0, \sigma^2I_m)$. It is well known that for ill-posed problems some form of regularization has to be used in order to deal with the noise $e$ in the data $b$ and to find a good approximation for the true solution of $Ax = b_{ex}$. One of the most widely used methods to do so is Tikhonov regularization. In its standard form, the Tikhonov solution to the inverse problem is given by \begin{equation}\label{eq:tikhonov} x_\alpha = \argmin_{x\in\mbbR^n}\left\|Ax - b\right\|^2 + \alpha\left\|x\right\|^2, \end{equation} where $\alpha > 0$ is a regularization parameter and $\left\|\cdot\right\|$ denotes the standard Euclidean norm. The choice of the regularization parameter is very important since its value has a significant impact on the reconstruction. If, on the one hand, $\alpha$ is chosen too large, focus lies on minimizing the regularization term $\left\|x\right\|^2$. The corresponding reconstruction $x_\alpha$ will therefore no longer be a good solution for the linear system $Ax = b$, will typically have lost many details and be what is referred to as ``oversmoothed''. If, on the other hand, $\alpha$ is chosen too small, focus lies on minimizing the residual $\left\|Ax - b\right\|^2$. This, however, means that the errors $e$ are not suppressed and that the reconstruction $x_\alpha$ will be ``overfitted'' to the measurements. \begin{figure} \centering \input{./tikz/curves} \caption{Sketch of the L-curve (left) and the D-curve (right). The value for $\alpha$ proposed by the L-curve method is typically slightly larger than the one proposed by the discrepancy principle \cite{hansen1992}.} \label{fig:curves} \end{figure} One way of choosing the regularization parameter is the L-curve method. If $x_\alpha$ is the solution of the Tikhonov problem \eqref{eq:tikhonov}, then the curve $(\left\|Ax_\alpha - b\right\|, \left\|x_\alpha\right\|)$ typically has a rough ``L'' shape, see \hypref{figure}{fig:curves}. Heuristically, the value for the regularization parameter corresponding to the corner of this ``L'' has been proposed as a good regularization parameter because it balances model fidelity (minimizing the residual) and regularizing the solution (minimizing the regularization term) \cite{calvetti1999, hansen1992, hansen1993, hansen2010}. The problem with this method is that in order to find this value, the Tikhonov problem has to be solved for many different values of $\alpha$, which can be computationally expensive and inefficient for large scale problems. Another way of choosing the regularization parameter is Morozov's discrepancy principle \cite{morozov1984}. Here, the regularization parameter is chosen such that \begin{equation}\label{eq:morozov} \left\|Ax_\alpha - b\right\| = \eta\varepsilon \end{equation} with $\varepsilon = \left\|e\right\|$ the size of the error and $\eta\geq 1$ a tolerance value. The idea behind this choice is that finding a solution $x_\alpha$ with a lower residual can only lead to overfitting. Similarly to the L-curve, we can look at the curve $(\alpha, \left\|Ax_\alpha - b\right\|)$, which we will refer to as the discrepancy curve or D-curve, see \hypref{figure}{fig:curves}.
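To make the discrepancy curve concrete, the following numpy sketch (our own illustration; the function name and the use of the SVD are our choices, not a prescription from the literature) computes Tikhonov solutions for a range of values of $\alpha$ and traces the curve $(\alpha, \left\|Ax_\alpha - b\right\|)$. Since the residual norm is non-decreasing in $\alpha$, the discrepancy equation \eqref{eq:morozov} can then be solved by bisection on this curve.
\begin{verbatim}
# Sketch: trace the D-curve via the SVD of A (our own illustration).
import numpy as np

def d_curve(A, b, alphas):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = U.T @ b
    residuals = []
    for alpha in alphas:
        f = s / (s**2 + alpha)        # Tikhonov filter factors
        x = Vt.T @ (f * c)            # x_alpha = V diag(f) U^T b
        residuals.append(np.linalg.norm(A @ x - b))
    return np.array(residuals)
\end{verbatim}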
If $e\sim\mc{N}(0, \sigma^2I_m)$, then it is easy to verify that $\varepsilon\approx\sigma\sqrt{m}$, but in general the size of the error may be unknown. In this paper we describe a Newton type method that simultaneously updates the solution $x$ and the regularization parameter $\alpha$ such that the Tikhonov problem \eqref{eq:tikhonov} and Morozov's discrepancy principle \eqref{eq:morozov} are both satisfied. This is done by combining both equations into one big non-linear system in $x$ and $\alpha$ and solving it using Newton's method. However, starting from an arbitrary initial estimate, convergence of the classical Newton's method cannot be guaranteed. In \hypref{section}{sec:ntm} we prove that by starting from a specific initial estimate and placing a bound on the step size of the Newton updates the method will always converge. We also derive an estimate for this step size. For large scale problems computing the Newton search directions and this step size can, however, be computationally expensive. In \hypref{section}{sec:pntm} we therefore combine our method with a projection onto a low dimensional Krylov subspace. In \hypref{sections}{sec:numexp1} and \ref{sec:numexp2} we perform extensive numerical experiments in order to illustrate the workings of these methods and compare them with other regularization methods found in the literature, see \hypref{section}{sec:refmethods}. Finally, in \hypref{section}{sec:concl}, we end the paper with a short discussion on some open questions that remain. \section{Tikhonov-Morozov system}\label{sec:ntm} In order to find $(x, \alpha)\in\mbbR^n\times\mbbR_0^+$ that solves the Tikhonov problem and satisfies the discrepancy principle, we consider the non-linear system \begin{equation}\label{eq:tikmor} \left\{\begin{aligned} F_1(x, \alpha) &= (A^T A + \alpha I)x - A^Tb\\ F_2(x, \alpha) &= \frac{1}{2}(Ax - b)^T(Ax - b) - \frac{1}{2}\varepsilon^2 \end{aligned}\right. \end{equation} for $F:\mbbR^n\times\mbbR_0^+\longrightarrow\mbbR^{n}\times\mbbR$. Here, $F_1(x, \alpha) = 0$ are the normal equations corresponding to the Tikhonov problem \eqref{eq:tikhonov} with regularization parameter $\alpha$ and $F_2(x, \alpha) = 0$ is equivalent to Morozov's discrepancy principle \eqref{eq:morozov} (for simplicity we assume that $\eta = 1$). If we apply Newton's method to solve this non-linear system of equations, convergence of the method starting from an arbitrary initial estimate cannot be guaranteed. We will prove that by starting from a point $(x_0, \alpha_0)$ satisfying the Tikhonov normal equations $F_1(x_0, \alpha_0) = 0$, we can guarantee convergence of Newton's method by limiting the step size. The idea behind this approach is the observation that for points which ``almost'' satisfy these equations, the Jacobian will be invertible. By placing a bound on the Newton step size, we can force the iterations to remain within this region of interest and prove convergence. \subsection{Newton iterations} If the current Newton iteration for the solution of \eqref{eq:tikmor} is given by $\left(x_{k - 1}, \alpha_{k - 1}\right)$, then we write the next iteration as \[ x_k = x_{k - 1} + \Delta x_k\qquad\text{and}\qquad \alpha_k = \alpha_{k - 1} + \Delta \alpha_k.
\] The Jacobian system for the Newton search directions is now given by \[ \begin{pmatrix}A^TA+ \alpha_{k - 1} I & x_{k - 1}\\ (Ax_{k - 1} - b)^T A & 0\end{pmatrix}\begin{pmatrix}\Delta x_k\\ \Delta\alpha_k\end{pmatrix} = -\begin{pmatrix}(A^TA + \alpha_{k - 1} I)x_{k - 1} - A^T b\\ \frac{1} {2}(Ax_{k - 1} - b)^T(Ax_{k - 1} - b)-\frac{1}{2}\varepsilon^2 \end{pmatrix}, \] or in short \begin{equation}\label{eq:newtoneq} J(x_{k - 1}, \alpha_{k - 1})\begin{pmatrix}\Delta x_k\\ \Delta\alpha_k\end{pmatrix} = -F(x_{k - 1}, \alpha_{k - 1}). \end{equation} \begin{lemma} For all Newton iterations with $k\in\mbbN_0$, the following relationship holds: \[ F(x_k, \alpha_k) = \begin{pmatrix}\Delta\alpha_k\Delta x_k\\ \frac{1}{2}\Delta x_k^TA^TA\Delta x_k\end{pmatrix}. \] \end{lemma} \begin{proof} Using the definition of $F$, it is a straightforward calculation to find that \begin{align*} F(x_k, \alpha_k) =& F(x_{k - 1} + \Delta x_k, \alpha_{k - 1} + \Delta\alpha_k)\\ =& J(x_{k - 1}, \alpha_{k - 1})\begin{pmatrix}\Delta x_k\\ \Delta\alpha_k \end{pmatrix} + F(x_{k - 1}, \alpha_{k - 1}) + \begin{pmatrix}\Delta \alpha_k\Delta x_k\\ \frac{1}{2}\Delta x_k^TA^TA\Delta x_k\end{pmatrix}. \end{align*} Because the search directions $\Delta x_k$ and $\Delta\alpha_k$ are found by solving \eqref{eq:newtoneq}, the sum of the first two terms equals zero, proving the lemma. \end{proof} This lemma implies that \begin{equation}\label{eq:temp} J(x_k, \alpha_k)\begin{pmatrix}\Delta x_{k + 1}\\ \Delta\alpha_{k + 1} \end{pmatrix} = -\begin{pmatrix}\Delta\alpha_k I & 0\\ \frac{1}{2} \Delta x_k^TA^TA & 0\end{pmatrix}\begin{pmatrix}\Delta x_k\\ \Delta\alpha_k\end{pmatrix}, \end{equation} resulting in a recurrence relation between two sequential Newton search directions. Another consequence of the lemma is that \begin{equation}\label{eq:temp2} \begin{aligned} &&\left(A^TA + \alpha_kI\right)x_k - A^Tb &= \Delta\alpha_k\Delta x_k\\ \Leftrightarrow&& A^T\left(Ax_k - b\right) &= -\alpha_kx_k + \Delta\alpha_k \Delta x_k. \end{aligned} \end{equation} This means that if we rescale the last row of \eqref{eq:newtoneq} with $\alpha_{k - 1} > 0$ and instead solve \[ \begin{aligned} &\begin{pmatrix} A^TA + \alpha_{k - 1} I & x_{k - 1}\\ \frac{1}{\alpha_{k - 1}} (Ax_{k - 1} - b)^TA & 0 \end{pmatrix}\begin{pmatrix} \Delta x_k\\ \Delta\alpha_k \end{pmatrix}\\ &\hspace{7em}= -\begin{pmatrix} (A^TA + \alpha_{k - 1} I)x_{k - 1} - A^Tb\\ \frac{1} {2\alpha_{k - 1}}(Ax_{k - 1} - b)^T(Ax_{k - 1} - b) - \frac{1}{2 \alpha_{k - 1}}\varepsilon^2 \end{pmatrix}, \end{aligned} \] then the same search directions are found and \eqref{eq:temp} and \eqref{eq:temp2} remain valid. \subsection{At the discrepancy curve} Assume we have $\alpha > 0$ and $x$ such that $F_1(x, \alpha) = 0$. This means that $x$ is the solution of the Tikhonov normal equations \[ (A^TA + \alpha I)x = A^Tb\ \Leftrightarrow\ \frac{1}{\alpha}(Ax - b)^TA = -x^T \] and $\left(\alpha, \left\|Ax - b\right\|\right)$ is a point on the discrepancy curve, but not necessarily corresponding to the optimal value of the regularization parameter. In this case, the rescaled Jacobian matrix for the Newton system has the following simplified form: \[ D(x, \alpha) := \begin{pmatrix} A^T A + \alpha I &x \\ -x^T & 0 \end{pmatrix}. \] We now look at the numerical range \cite{givens1952}, which for a matrix $A\in \mathbb{C}^{n\times n}$ is defined as \[ W(A) = \left\{\left.\frac{x^*Ax}{x^*x}\ \right|\ x\in\mbbC^n, x\neq 0 \right\}, \] where $x^*$ denotes the conjugate transpose of $x$.
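Before giving the formal argument, we note that the key claim is easy to test numerically. The sketch below (our own; the dimensions and seed are arbitrary assumptions) samples the quotient defining $W(D)$ for random complex vectors and confirms that its real part is positive, anticipating the computation that follows.
\begin{verbatim}
# Spot check (our own illustration): sample W(D) for a random instance.
import numpy as np
rng = np.random.default_rng(1)

m, n, alpha = 30, 20, 0.5
A = rng.normal(size=(m, n))
x = rng.normal(size=n)
D = np.block([[A.T @ A + alpha * np.eye(n), x[:, None]],
              [-x[None, :], np.zeros((1, 1))]])

for _ in range(1000):
    w = rng.normal(size=n + 1) + 1j * rng.normal(size=n + 1)
    z = (w.conj() @ D @ w) / (w.conj() @ w)
    assert z.real > 0   # holds whenever the first n entries of w are
                        # not all zero, as shown below
\end{verbatim}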
The numerical range is a useful tool since it contains the spectrum of the matrix, and for $D(x, \alpha)\in\mbbR^{(n + 1)\times (n + 1)}$ we find that \[ \begin{pmatrix} u^* & v^* \end{pmatrix}\begin{pmatrix} A^TA + \alpha I & x\\ -x^T & 0 \end{pmatrix}\begin{pmatrix} u\\ v \end{pmatrix} = u^*(A^TA + \alpha I)u + u^*x v - v^* x^T u \] with $u\in\mbbC^n$, $v\in\mbbC$ and $\left(u^T, v^T\right)^T \neq 0$. Since $\alpha > 0$ and $x\in\mbbR^n$, the first term is real and strictly positive whenever $u \neq 0$, and the last two terms add up to a purely imaginary number. Hence $\mathrm{Re}(z) > 0$ for every $z\in W(D)$ generated by a vector with $u \neq 0$, while for $u = 0$ and $v \neq 0$ we have $D\left(u^T, v^T\right)^T = \left((xv)^T, 0\right)^T \neq 0$ whenever $x\neq 0$. In either case no nonzero vector is mapped to zero, so $0$ is not an eigenvalue and $D$ is invertible. \begin{lemma}\label{lem:normDinv} For any matrix $A\in\mbbR^{m\times n}$, nonzero vector $x\in\mbbR^n$ and $\alpha > 0$ the Schur complement of $D(x, \alpha)$ exists and is given by $s = x^T(A^TA + \alpha I)^{-1}x\in\mbbR$. If we set $t := (A^TA + \alpha I)^{-1}x\in\mbbR^n$, then it follows that the inverse of $D$ is given by \[ D^{-1}(x, \alpha) = \begin{pmatrix} (A^TA + \alpha I)^{-1} - \frac{tt^T}{s} & -\frac{t}{s}\\ \frac{t^T}{s} & \frac{1}{s}\end{pmatrix} \] and that the norm of this matrix is bounded: \begin{equation}\label{eq:normDinv} \left\|D^{-1}\right\|\leq\left(1 + \frac{\left\|x\right\|}{\alpha}\right)^2 \max\left\{\frac{1}{\alpha}, \frac{\alpha + \lambda_1}{\left\|x\right\|^2} \right\}. \end{equation} Here, $\lambda_1$ is the largest eigenvalue of $A^TA$. \end{lemma} \begin{proof} First note that since $A^TA$ is positive semi-definite, the eigenvalues are given by $\lambda_1\geq\lambda_2\geq\ldots\geq\lambda_n\geq 0$. This means that $\left(A^TA + \alpha I\right)$ is invertible because it has eigenvalues $\lambda_1 + \alpha\geq\lambda_2 + \alpha\geq\ldots\geq\lambda_n + \alpha > 0$. As a result, the Schur complement of $D$ exists and the formula for $D^{-1}$ can easily be verified, see for example \cite{zhang2006}. It now also follows that the eigenvalues of $\left(A^TA + \alpha I\right)^{-1}$ are given by \[ \frac{1}{\alpha + \lambda_n}\geq\frac{1}{\alpha + \lambda_{n - 1}}\geq\ldots \geq\frac{1}{\alpha + \lambda_1} > 0 \] and thus that \[ \left\|\left(A^TA + \alpha I\right)^{-1}\right\|\leq\frac{1}{\alpha + \lambda_n}\leq\frac{1}{\alpha} \] and \[ \left\|t\right\|\leq\frac{\left\|x\right\|}{\alpha}\quad\text{and}\quad s\leq\frac{\left\|x\right\|^2}{\alpha}. \] We now write \[ D^{-1} = \begin{pmatrix}I & -t\\0 & I\end{pmatrix}\begin{pmatrix}\left( A^TA + \alpha I\right)^{-1} & 0\\0 & \frac{1}{s}\end{pmatrix} \begin{pmatrix}I & 0\\t^T & I\end{pmatrix} \] and will estimate a bound on the norm of all three matrices. For the first matrix we find that for any unit vector $\left(u^T, v^T\right)^T\in\mbbR^n\times\mbbR$: \begin{align*} \left\|\begin{pmatrix}I & -t\\0 & I\end{pmatrix}\begin{pmatrix}u\\v\end{pmatrix} \right\| &= \left\|\begin{pmatrix}u - tv\\v\end{pmatrix}\right\|\\ &=\sqrt{u^Tu - 2u^Ttv + v^2t^Tt + v^2}\\ &\leq\sqrt{1 + 2\left\|t\right\| + \left\|t\right\|^2}\\ &= 1 + \left\|t\right\|\\ &\leq 1 + \frac{\left\|x\right\|}{\alpha} \end{align*} Analogously, the same bound can be found for the third matrix. For the second matrix we have that \[ \left\|\begin{pmatrix}\left(A^TA + \alpha I\right)^{-1} & 0\\ 0 & \frac{1}{s} \end{pmatrix}\right\| = \max\left\{\left\|\left(A^TA + \alpha I\right)^{-1} \right\|, \frac{1}{s}\right\}. \] It now follows from the min-max theorem \cite{wilkinson1965} that \[ s = x^T\left(A^TA + \alpha I\right)^{-1}x\geq\frac{\left\|x\right\|^2} {\alpha + \lambda_1}.
\] Combining all these results proves the lemma. \end{proof} \subsection{Step size} We already showed that for points on the discrepancy curve, the inverse Jacobian exists and has a bounded norm. However, even when we start from a point on the discrepancy curve, there is no guarantee that the Newton iterations will remain on this curve. Hence, we are not certain that the linear systems for the Newton update will not become singular. In order to avoid this, we will consider two conditions which are sufficient for the Newton iterations to converge: \begin{enumerate}[\indent(C1)] \item The inverse Jacobian exists in the next iteration $(x_k, \alpha_k)$.\label{c1} \item The size of the Newton search direction $\left\|\left(\Delta x_k^T, \Delta\alpha_k \right)^T\right\|$ decreases.\label{c2} \end{enumerate} We now show that by placing a bound on the step size of the Newton iterations both conditions can be fulfilled. In order to derive this bound, we write the Jacobian in any point as a perturbed version of the matrix $D$ using \eqref{eq:temp2}: \begin{equation}\label{eq:JDplusE} \begin{aligned} J(x_k, \alpha_k) =& \begin{pmatrix} A^TA + \alpha_kI & x_k \\ -x_k^T + \frac{\Delta\alpha_k}{\alpha_k}\Delta x_k^T & 0 \end{pmatrix}\\ =& \begin{pmatrix} A^T A + \alpha_{k - 1}I & x_{k - 1} \\ -x_{k - 1}^T & 0 \end{pmatrix} + \begin{pmatrix} \Delta\alpha_k I & \Delta x_k \\ -\frac{\alpha_{k - 1}}{\alpha_{k - 1} + \Delta\alpha_k}\Delta x_k^T & 0\end{pmatrix}. \end{aligned} \end{equation} We also replace the Newton updates with a scaled version \[ x_k = x_{k - 1} + \gamma_k\Delta x_k\qquad\text{and}\qquad \alpha_k = \alpha_{k - 1} + \gamma_k\Delta\alpha_k \] with \[ \gamma_k\in I_k := \left\{\begin{aligned} &\left]0, 1\right] && \text{if } \Delta\alpha_k > 0\\ &\left]0, 1\right] && \text{if } \Delta\alpha_k < 0\text{ and }\alpha_{k - 1} + \Delta\alpha_k > 0\\ &\left]0, -\omega\alpha_{k - 1}/\Delta\alpha_k\right] && \text{if } \Delta\alpha_k < 0\text{ and }\alpha_{k - 1} + \Delta\alpha_k < 0 \end{aligned}\right. \] and a tolerance value $\omega\in]0, 1[$. This is to ensure that the iterates for $\alpha_k$ remain positive and the reason why we consider three different cases will become clear in \hypref{lemma}{lem:theta}. This means that \eqref{eq:JDplusE} becomes \[ J(x_k, \alpha_k) = \underbrace{\begin{pmatrix} A^T A + \alpha_{k - 1}I & x_{k - 1} \\ -x_{k - 1}^T & 0 \end{pmatrix}}_{=:\,D_{k - 1}} + \underbrace{\gamma_k\begin{pmatrix} \Delta\alpha_k I & \Delta x_k \\ -\zeta_k\Delta x_k^T & 0\end{pmatrix}}_{=:\,E_k} \] with $\zeta_k = \alpha_{k - 1}/(\alpha_{k - 1} + \gamma_k\Delta\alpha_k)$. We also define the matrix \[ M_k := \gamma_k\begin{pmatrix} \Delta\alpha_kI & 0 \\ \frac{1}{2}\Delta x_k^TA^T A & 0 \end{pmatrix}. \] Note that we have already shown that $D_{k - 1} = D(x_{k - 1}, \alpha_{k - 1})$ has a bounded inverse, so we can use the following theorem: \begin{theorem}[Trefethen and Embree]\label{thm:tref} Suppose $D$ has a bounded inverse $D^{-1}$, then for any $E$ with $\left\|E\right\| < 1/\left\|D^{-1}\right\|$, $D + E$ has a bounded inverse $(D + E)^{-1}$ satisfying \[ \left\|(D + E)^{-1}\right\|\leq\frac{\left\|D^{-1}\right\|}{1 - \left\|E\right\| \left\|D^{-1}\right\|}. \] Conversely, for any $\mu > 1/\left\|D^{-1}\right\|$, there exists an $E$ with $\left\|E\right\| < \mu$ such that $(D + E)u = 0$ for some nonzero $u$. \end{theorem} \begin{proof} For a proof of this theorem we refer to \cite[p. 28]{trefethen2005}.
\end{proof} \begin{lemma}\label{lem:normEandM} For the matrices $E_k, M_k\in\mbbR^{(n + 1)\times(n + 1)}$ defined above, the following holds: \begin{align*} \left\|E_k\right\| &\leq \gamma_k\left(\left|\Delta\alpha_k\right| + \sqrt{1 + \zeta_k^2}\left\|\Delta x_k\right\|\right)\\ \left\|M_k\right\| &= \gamma_k\sqrt{\Delta\alpha_k^2 + \frac{1}{4}\left\| A^TA\Delta x_k\right\|^2}. \end{align*} As a consequence we have that \[ \lim_{\gamma_k\rightarrow 0}\left\|E_k\right\| = 0\qquad\text{and} \qquad\lim_{\gamma_k\rightarrow 0}\left\|M_k\right\| = 0. \] \end{lemma} \begin{proof} Using the triangle inequality we find that \[ \left\|E_k\right\|\leq\gamma_k\left(\left\|\begin{pmatrix} \Delta\alpha_kI & 0 \\ 0 & 0 \end{pmatrix}\right\| + \left\|\begin{pmatrix} 0 & \Delta x_k \\ -\zeta_k\Delta x_k^T & 0 \end{pmatrix}\right\|\right). \] The first matrix is a diagonal matrix with entries $\Delta\alpha_k$ and $0$, hence its norm is equal to $\left|\Delta\alpha_k\right|$. For the second matrix we take $u\in\mbbR^n$ and $v\in\mbbR$ and find that \begin{align*} \left\|\begin{pmatrix} 0 & \Delta x_k \\ -\zeta_k\Delta x_k^T & 0 \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix}\right\| &= \left\|\begin{pmatrix} \Delta x_kv \\ -\zeta_k\Delta x_k^Tu\end{pmatrix}\right\| = \sqrt{\left\|\Delta x_kv\right\|^2 + \left\|\zeta_k\Delta x_k^Tu\right\|^2}\\ &\leq \sqrt{v^2 + \zeta_k^2\left\|u\right\|^2}\left\|\Delta x_k\right\|\\ \Rightarrow\left\|\begin{pmatrix} 0 & \Delta x_k \\ -\zeta_k\Delta x_k^T & 0 \end{pmatrix}\right\| &\leq \sqrt{1 + \zeta_k^2}\left\|\Delta x_k\right\|. \end{align*} The statement about $\left\|E_k\right\|$ now follows. Similarly, we find for $M_k$ that \begin{align*} \left\|M_k\begin{pmatrix}u \\ v\end{pmatrix}\right\| &= \left\| \gamma_k\begin{pmatrix}\Delta\alpha_ku \\ \frac{1}{2}\Delta x_k^TA^TAu\end{pmatrix}\right\| = \gamma_k\sqrt{\Delta\alpha_k^2\left\| u\right\|^2 + \frac{1}{4}\left\|\Delta x_k^TA^TAu\right\|^2}\\ &\leq \gamma_k\sqrt{\Delta\alpha_k^2 + \frac{1}{4}\left\|A^TA\Delta x_k \right\|^2}\left\|u\right\| \end{align*} By taking $(u, v) = \frac{1}{\left\|A^TA\Delta x_k\right\|}(A^TA\Delta x_k, 0)$ this is an equality, proving the statement about $\left\|M_k\right\|$. Finally, it should be noted that $\lim_{\gamma_k\rightarrow 0}\zeta_k = 1$, so for $\gamma_k\rightarrow 0$, the norms of both matrices also go to $0$. \end{proof} \begin{theorem}\label{thm:gamma} Starting from an initial point $(x_0, \alpha_0)$ satisfying the Tikhonov normal equations $F_1(x_0, \alpha_0) = 0$, there exist $\gamma_k\in I_k$ such that \begin{align} &\left\|E_k\right\|\left\|D_{k - 1}^{-1}\right\| < 1\label{eq:constr1}\\ &\frac{\left\|D_{k - 1}^{-1}\right\|}{1 - \left\|E_k\right\|\left\|D_{k - 1}^{-1}\right\|} \left\|M_k\right\| < 1\label{eq:constr2}. \end{align} Scaling the Newton search direction with such a step size $\gamma_k$ is sufficient for the Newton iterations to converge. \end{theorem} \begin{proof} If \eqref{eq:constr1} holds, then it follows from \hypref{theorem}{thm:tref} that the inverse Jacobian $J^{-1}(x_k, \alpha_k) = \left(D_{k - 1} + E_k\right)^{-1}$ exists, fulfilling condition \hyprefp{C}{c1}. Furthermore, from the recursion between the Newton updates \eqref{eq:temp} it also follows that \[ \left\|J^{-1}(x_k, \alpha_k)\begin{pmatrix}\Delta\alpha_kI & 0\\ \frac{1}{2}\Delta x_k^TA^T A & 0\end{pmatrix}\right\| < 1 \] is a sufficient condition for \hyprefp{C}{c2} to hold.
Condition \eqref{eq:constr2} is simply a stronger version of this condition using the bound on $\left\|J^{-1}(x_k, \alpha_k)\right\|$ given by \hypref{theorem}{thm:tref}. It now remains to be shown that such a $\gamma_k$ always exists. Since \eqref{eq:constr2} is equivalent to \[ \left\|M_k\right\|\left\|D_{k - 1}^{-1}\right\| < 1 - \left\|E_k\right\| \left\|D_{k - 1}^{-1}\right\| \] and the left hand side is positive, \eqref{eq:constr1} is implied by \eqref{eq:constr2}. Also, since the left hand side goes to $0$ when $\gamma_k\rightarrow 0$ and the right hand side goes to 1, there will always exist $\gamma_k\in I_k$ fulfilling both criteria. Finally, by starting from a point $(x_0, \alpha_0)$ satisfying the Tikhonov normal equations, we know that the inverse Jacobian exists in the first iteration. \end{proof} From this theorem it follows that as long as $\gamma_k$ is chosen small enough, the Newton iterations will converge. Small values will however lead to slow convergence, so we will derive an upper bound for $\gamma_k$. In order to do this we will simplify the dependency of the upper bound for $\left\|E_k\right\|$ found in \hypref{lemma}{lem:normEandM} on $\sqrt{1 + \zeta_k^2}$. \begin{lemma}\label{lem:theta} For all $\omega\in]0, 1[$ the following holds: \[ \sqrt{1 + \zeta_k^2}\leq \left\{\begin{aligned} &\sqrt{2} && \text{if }\Delta\alpha_k > 0\\ &\sqrt{1 + \left(\frac{\alpha_{k - 1}}{\alpha_{k - 1} + \Delta\alpha_k}\right)^2} && \text{if }\Delta\alpha_k < 0\text{ and }\alpha_{k - 1} + \Delta\alpha_k > 0\\ &\sqrt{1 + \frac{1}{\left(1 - \omega\right)^2}} && \text{if } \Delta\alpha_k < 0\text{ and } \alpha_{k - 1} + \Delta\alpha_k < 0 \end{aligned}\right. \] \end{lemma} \begin{proof} Finding an upper bound for \[ \sqrt{1 + \zeta_k^2} = \sqrt{1 + \left(\frac{\alpha_{k - 1}}{\alpha_{k - 1} + \gamma_k\Delta\alpha_k}\right)^2} \] is equivalent to finding a lower bound on $\left|\alpha_{k - 1} + \gamma_k\Delta\alpha_k\right|$. \begin{itemize} \item If $\Delta\alpha_k > 0$, then $I_k = ]0, 1]$ and this lower bound is found for $\gamma_k = 0$. \item If $\Delta\alpha_k < 0$ and $\alpha_{k - 1} + \Delta\alpha_k > 0$ (meaning that using the unscaled Newton iteration would give a positive regularization parameter), then $I_k = ]0, 1]$ and this lower bound is found for $\gamma_k = 1$. \item If $\Delta\alpha_k < 0$ and $\alpha_{k - 1} + \Delta\alpha_k < 0$ (meaning that using the unscaled Newton iteration would give a negative regularization parameter), then $I_k = \left]0, -\omega\alpha_{k - 1}/\Delta\alpha_k\right]$. If $\omega\rightarrow 1$ then $\alpha_{k - 1} + \gamma_k\Delta\alpha_k \rightarrow 0$ and $\sqrt{1 + \zeta_k^2}\rightarrow +\infty$. In order to avoid this we take $\omega\in]0, 1[$ to stay away from this singularity and find the lower bound for $\gamma_k = -\omega\alpha_{k - 1}/\Delta\alpha_k$. \end{itemize} Substituting these values for $\gamma_k$ proves the lemma.
\end{proof} \begin{corollary}\label{cor:gamma1} If $\theta_k$ is the bound on $\sqrt{1 + \zeta_k^2}$ from \hypref{lemma}{lem:theta}, then the following step size fulfils the conditions \eqref{eq:constr1} and \eqref{eq:constr2} of \hypref{theorem}{thm:gamma}: \[ \gamma_k = \min\left\{\max I_k, \frac{1}{\left(\sqrt{\Delta\alpha_k^2 + \frac{1}{4} \left\|A^TA\Delta x_k\right\|^2} + \left|\Delta\alpha_k\right| + \theta_k\left\|\Delta x_k\right\|\right)\left\|D_{k - 1}^{-1}\right\|} \right\}. \] \end{corollary} \begin{proof} This result is found by replacing $\left\|E_k\right\|$, $\left\|M_k\right\|$ and $\sqrt{1 + \zeta_k^2}$ in \eqref{eq:constr2} by their upper bounds found in \hypref{lemmas}{lem:normEandM} and \ref{lem:theta}. \end{proof} \begin{corollary}\label{cor:gamma2} If $\theta_k$ is the bound on $\sqrt{1 + \zeta_k^2}$ from \hypref{lemma}{lem:theta}, then the following step size only fulfils condition \eqref{eq:constr1} of \hypref{theorem}{thm:gamma}: \[ \gamma_k = \min\left\{\max I_k, \frac{1}{\left(\left|\Delta\alpha_k\right| + \theta_k\left\|\Delta x_k\right\|\right)\left\|D_{k - 1}^{-1}\right\|} \right\}. \] \end{corollary} \begin{proof} This result is found by replacing $\left\|E_k\right\|$ and $\sqrt{1 + \zeta_k^2}$ in \eqref{eq:constr1} by their upper bounds found in \hypref{lemmas}{lem:normEandM} and \ref{lem:theta}. \end{proof} \noindent Combining the results from this section leads to \hypref{algorithm}{alg:ntm}. \begin{algorithm} \caption{Newton on the Tikhonov-Morozov system (NTM)}\label{alg:ntm} \begin{algorithmic}[1] \State Choose initial $\alpha_0 > 0$ and solve $F_1(x_0, \alpha_0) = 0$ for $x_0$. \For{$k = 1, \ldots,$ maxiter} \State Solve the Jacobian system \eqref{eq:newtoneq} for $\Delta x_k$ and $\Delta\alpha_k$.\label{alg:ntm:jac} \State Calculate $\left\|D_{k - 1}^{-1}\right\|$. \State Calculate $\theta_k$ using \hypref{lemma}{lem:theta}. \State Calculate the step size $\gamma_k$ using \hypref{corollary}{cor:gamma1} or \ref{cor:gamma2}. \State $x_k = x_{k - 1} + \gamma_k\Delta x_k$ and $\alpha_k = \alpha_{k - 1} + \gamma_k\Delta\alpha_k$. \If{$\left\|F(x_k, \alpha_k)\right\| <$ tol} \Break \EndIf \EndFor \end{algorithmic} \end{algorithm} \subsection{Remarks} The reason we consider two possible choices for the step size is that we observed in our numerical experiments that both \hypref{corollary}{cor:gamma1} and \ref{cor:gamma2} seem to result in a small value for the step size. This is explained by the fact that the constraints placed on $\gamma_k$ in \hypref{theorem}{thm:gamma} are stronger than \hyprefp{C}{c1} and \hyprefp{C}{c2} and because we used various overestimations in order to derive an upper bound for $\gamma_k$. Another thing to note is that it might not be necessary to start from a point $(x_0, \alpha_0)$ on the discrepancy curve. We use this assumption because it guarantees the existence of the inverse Jacobian in the first iteration. However, as \hypref{theorem}{thm:tref} suggests, it would be sufficient to start from a point for which the perturbation $E$ in the Jacobian $J$ with respect to $D$ is sufficiently small. Instead of choosing an $\alpha_0$ and solving $F_1(x_0, \alpha_0) = 0$ for $x_0$ exactly, it could suffice to only solve for $x_0$ up to a limited precision.
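To make the interplay between \hypref{lemma}{lem:theta}, \hypref{corollary}{cor:gamma2} and \hypref{algorithm}{alg:ntm} concrete, the following Python sketch implements one possible reading of the NTM iteration. It is an illustration only, under stated assumptions: the Jacobian is assembled in the same way as the projected system \eqref{eq:pjacsys} further on, but in the full space, and $\left\|D_{k-1}^{-1}\right\|$ is replaced by the norm of the inverse of the current Jacobian as a stand-in for the matrix $D_{k-1}$ from the splitting $J = D + E$ used above.

\begin{verbatim}
import numpy as np

def ntm(A, b, eps, alpha0, omega=0.9, tol=1e-3, maxiter=500):
    # Sketch of Algorithm NTM with the step size of Corollary cor:gamma2.
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    alpha = alpha0
    # Starting point on the discrepancy curve: solve F1(x0, alpha0) = 0.
    x = np.linalg.solve(AtA + alpha * np.eye(n), Atb)
    for k in range(maxiter):
        r = A @ x - b
        F1 = (AtA + alpha * np.eye(n)) @ x - Atb
        F2 = 0.5 * (r @ r - eps**2)
        # Jacobian system; the second row carries the 1/alpha scaling,
        # analogous to the projected system (eq:pjacsys).
        J = np.block([[AtA + alpha * np.eye(n), x[:, None]],
                      [(r @ A)[None, :] / alpha, np.zeros((1, 1))]])
        step = np.linalg.solve(J, -np.concatenate([F1, [F2 / alpha]]))
        dx, dalpha = step[:n], step[n]
        # I_k = ]0, 1], shortened when alpha would become negative.
        gmax = 1.0 if dalpha > 0 or alpha + dalpha > 0 else -omega * alpha / dalpha
        # theta_k: the bound on sqrt(1 + zeta_k^2) from Lemma lem:theta.
        if dalpha > 0:
            theta = np.sqrt(2.0)
        elif alpha + dalpha > 0:
            theta = np.sqrt(1.0 + (alpha / (alpha + dalpha))**2)
        else:
            theta = np.sqrt(1.0 + 1.0 / (1.0 - omega)**2)
        # Stand-in for ||D_{k-1}^{-1}|| (an assumption, see the text above).
        nDinv = np.linalg.norm(np.linalg.inv(J), 2)
        gamma = min(gmax, 1.0 / ((abs(dalpha) + theta * np.linalg.norm(dx)) * nDinv))
        x, alpha = x + gamma * dx, alpha + gamma * dalpha
        r = A @ x - b
        F = np.concatenate([(AtA + alpha * np.eye(n)) @ x - Atb,
                            [0.5 * (r @ r - eps**2)]])
        if np.linalg.norm(F) < tol:
            break
    return x, alpha
\end{verbatim}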
Finally, for large scale problems, solving the Jacobian system \eqref{eq:newtoneq} and calculating $\left\|D_{k - 1}^{-1}\right\|$ becomes computationally very expensive. We could use the upper bound from \hypref{lemma}{lem:normDinv} to partially solve this problem, but once again, this will only lead to a smaller step size and slower convergence. These issues will be discussed further on in this paper. \section{Numerical experiments I}\label{sec:numexp1} To illustrate the method, we look at a problem with a small random matrix $A$ and solution $x$. More precisely, we take $A\in\mbbR^{700\times 500}$ and $x\in\mbbR^{500}$ with i.i.d. entries drawn from the uniform distribution $\mc{U}(-1, 1)$. Measurements are generated by adding $10\%$ Gaussian noise to the exact right hand side $b_{ex} = Ax$ using $e\sim\mathcal{N}\left(0, \sigma^2 I_m\right)$ with $\sigma = 0.10\left\| b_{ex}\right\|/\sqrt{m}$ and setting $b = b_{ex} + e$. For the discrepancy principle, we will approximate the error norm by $\varepsilon = \sigma\sqrt{m} = 0.10\left\|b_{ex}\right\|$. We repeat this experiment $1000$ times and for each run we start with $\alpha_0 = 1$ and solve the Tikhonov normal equations $F_1(x_0, \alpha_0) = 0$ for $x_0$. After that, we start the Newton iterations with $\omega = 0.9$ and stop when $\left\|F(x_k, \alpha_k)\right\| < 1\snot{-3}$. The results are shown in \hypref{figure}{fig:rms} and \hypref{table}{tab:rms}, where case 1 means that \hypref{corollary}{cor:gamma1} was used to calculate the step size and case 2 means that \hypref{corollary}{cor:gamma2} was used. These results indicate that the overestimations used in our analysis of the method lead to a small step size. By using \hypref{corollary}{cor:gamma2} and weakening the constraints placed on $\gamma$, the method takes substantially larger steps and converges much faster. How much larger the step sizes can become by weakening the constraints is of course problem dependent and hard to predict. Nevertheless, \hyprefp{C}{c2} seems to be a strong constraint placed on the iterations. Also, because both cases converge to the same solution, the same regularization parameter is found; the small standard deviation over all the runs indicates that this parameter is quite similar in all the runs. \begin{figure} \centering \includegraphics[width = 0.49\linewidth]{./images/rms_case1}\hspace{2.5pt} \includegraphics[width = 0.49\linewidth]{./images/rms_case2}\\[2.5pt] \includegraphics[width = 0.49\linewidth]{./images/rms_gamma}\hspace{2.5pt} \includegraphics[width = 0.49\linewidth]{./images/rms_alpha} \caption{Results for one of the runs. For each Newton iteration, we plot the point $(\alpha_k, \left\|Ax_k - b\right\|)$. The top left figure corresponds to case 1 and the top right figure to case 2. Bottom left: the value of the step size $\gamma$ used in each iteration. Bottom right: the value of the regularization parameter $\alpha$ in each iteration.} \label{fig:rms} \end{figure}
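For reproducibility, the set-up of this experiment can be summarized in a few lines of Python; the function \texttt{ntm} refers to the sketch in the previous section and is, like the fixed seed, an assumption of this illustration.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n = 700, 500
A = rng.uniform(-1.0, 1.0, size=(m, n))    # i.i.d. entries from U(-1, 1)
x_ex = rng.uniform(-1.0, 1.0, size=n)
b_ex = A @ x_ex
sigma = 0.10 * np.linalg.norm(b_ex) / np.sqrt(m)
b = b_ex + rng.normal(0.0, sigma, size=m)  # 10% Gaussian noise
eps = sigma * np.sqrt(m)                   # eps = 0.10 * ||b_ex||
x, alpha = ntm(A, b, eps, alpha0=1.0, omega=0.9, tol=1e-3)
\end{verbatim}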
\begin{table} \centering \begin{tabular}{c||c|c} & \# Iterations & $\alpha$\\ \hline\hline Case 1 & $85$ ($13$) & $15.6581$ ($1.0947$) \\ \hline Case 2 & $16$ ($2$) & $15.6581$ ($1.0947$) \end{tabular} \caption{Average number of iterations for the 1000 runs of the experiment and the standard deviation (rounded). Because both cases converge to the same solution, the same value for $\alpha$ is found by both within each run; across the different random matrices its value turns out to be quite similar, hence the low standard deviation.} \label{tab:rms} \end{table} \section{Projected Tikhonov-Morozov system}\label{sec:pntm} The NTM algorithm can become computationally very expensive because in each iteration $\left\|D^{-1}\right\|$ needs to be computed and the Jacobian system \eqref{eq:newtoneq} needs to be solved for $\Delta x$ and $\Delta\alpha$. Even for moderately sized matrices $A\in\mbbR^{m\times n}$ this can quickly become a problem. However, it is possible to project the problem onto a Krylov subspace \cite{saad2003, vandervorst2003} using a bidiagonal decomposition of $A$ \cite{golub1965, paige1982, paige1982_2}. In each outer Krylov iteration, the projected version of the Tikhonov-Morozov system \eqref{eq:tikmor} can then be solved using the NTM algorithm. In this section we describe how this algorithm works using a number of heuristic choices and apply it to different test problems. Roughly speaking, each iteration of the method will consist of the following steps: \begin{itemize} \item Expand the bidiagonal decomposition of $A$. \item Choose an initial point for the NTM method on the projected equations. \item Calculate a number of NTM iterations on the projected equations. \item Check the convergence. \end{itemize} \subsection{Bidiagonal decomposition} \begin{theorem}[Bidiagonal decomposition]\label{thm:bidiag} If $A\in\mbbR^{m\times n}$ with $m\geq n$, then there exist orthogonal matrices \[ U = (u_1, u_2, \ldots, u_m)\in\mbbR^{m\times m}\quad\text{and}\quad V = (v_1, v_2, \ldots, v_n)\in\mbbR^{n\times n} \] and a lower bidiagonal matrix \[ B=\begin{pmatrix}\mu_1\\ \nu_2 & \mu_2\\ & \nu_3 & \ddots\\ & & \ddots & \mu_{n}\\ & & & \nu_{n+1}\end{pmatrix}\in\mathbb{R}^{(n+1)\times n}, \] such that \[ A = U\begin{pmatrix}B\\0\end{pmatrix}V^T. \] \end{theorem} \begin{proof} This was proven by Golub and Kahan in \cite{golub1965}. \end{proof} Starting from a given unit vector $u_1\in\mathbb{R}^m$ it is possible to generate the columns of $U$ and $V$ and the entries of $B$ recursively using the Bidiag1 procedure proposed by Paige and Saunders \cite{paige1982_2, paige1982}, see \hypref{algorithm}{alg:bidiag1}. Here, the reorthogonalization is added for numerical stability. Note that this bidiagonal decomposition is the basis for the LSQR algorithm and that after $k$ steps of Bidiag1 starting with the initial vector $u_1 = b/\left\|b\right\|$ we have matrices $V_k\in\mbbR^{n\times k}$ and $U_{k + 1}\in\mbbR^{m\times(k + 1)}$ with orthonormal columns and a lower bidiagonal matrix $B_{k + 1, k}\in\mbbR^{(k + 1)\times k}$ that satisfy \begin{equation}\label{eq:bdrel} AV_k = U_{k + 1}B_{k + 1, k}. \end{equation} \begin{algorithm} \caption{bidiag1}\label{alg:bidiag1} \begin{algorithmic}[1] \State Choose initial unit vector $u_1$ (typically $b/\left\|b\right\|$). \State Set $\nu_1v_0 = \mu_{n + 1}v_{n + 1} = 0$. \For{$k = 1, \ldots,$ n} \State $r_k = A^Tu_k - \nu_kv_{k-1}$ \State Reorthogonalize $r_k$ with respect to the previous columns of $V$. \State $\mu_k = \left\|r_k\right\|$ and $v_k = r_k/\mu_k$. \State $p_k = Av_k - \mu_ku_k$ \State Reorthogonalize $p_k$ with respect to the previous columns of $U$. \State $\nu_{k+1} = \left\|p_k\right\|$ and $u_{k+1} = p_k/\nu_{k+1}$.
\EndFor \end{algorithmic} \end{algorithm} In order to solve the Tikhonov-Morozov system \eqref{eq:tikmor}, we will calculate a series of iterations in the Krylov subspace spanned by the columns of $V$: \[ x_k\in\spn{V_k} = \mc{K}_k(A^TA, A^Tb). \] This means that $x_k = V_ky_k$ for some $y_k\in\mbbR^k$ and using \eqref{eq:bdrel}, the orthonormality of the columns of $U$ and $V$ and the fact that $u_1 = b/\left\| b\right\|$ it is possible to show that \begin{equation}\label{eq:ptik} \min_{x_k\in\spn{V_k}}\left\|Ax_k - b\right\|^2 + \alpha\left\|x_k\right\|^2 = \min_{y_k\in\mbbR^k}\left\|B_{k + 1, k}y_k - c_k\right\|^2 + \alpha\left\|y_k\right\|^2 \end{equation} and \begin{equation}\label{eq:pmor} \left\|Ax_k - b\right\| = \left\|B_{k + 1, k}y_k - c_k\right\|, \end{equation} for $c_k = (\left\|b\right\|, 0, \ldots, 0)^T\in\mbbR^{k + 1}$. We therefore set $x_k = V_ky_k$ and solve the following projected version of \eqref{eq:tikmor}: \begin{equation}\label{eq:ptikmor} \left\{\begin{aligned} \wt{F}_1(y_k, \alpha_k) &= (B_{k + 1, k}^TB_{k + 1, k} + \alpha_k I_k)y_k - B_{k + 1, k}^Tc_k\\ \wt{F}_2(y_k, \alpha_k) &= \frac{1}{2}\left(B_{k + 1, k}y_k - c_k\right)^T \left(B_{k + 1, k}y_k - c_k\right) - \frac{1}{2}\varepsilon^2 \end{aligned}\right. \end{equation} Similarly to the original non-linear system, $\wt{F}_1$ corresponds to the normal equations of the projected Tikhonov problem \eqref{eq:ptik} and $\wt{F}_2$ to the projected discrepancy principle \eqref{eq:pmor}. \subsection{Inner NTM iterations} In each outer Krylov iteration (numbered with $k$) the projected system \eqref{eq:ptikmor} needs to be solved, which we will do using the NTM method. This means that in the inner Newton iterations (numbered with $l$), the following Jacobian system needs to be solved: \begin{equation}\label{eq:pjacsys} \begin{aligned} &\begin{pmatrix} B_{k + 1, k}^TB_{k + 1, k} + \alpha_{k, l - 1}I_k & y_{k, l - 1}\\ \frac{1}{\alpha_{k, l - 1}}\left(B_{k + 1, k}y_{k, l - 1} - c_k\right)^TB_{k + 1, k} & 0\end{pmatrix}\begin{pmatrix}\Delta y_{k, l} \\ \Delta\alpha_{k, l}\end{pmatrix}\\ &\qquad\quad=-\begin{pmatrix} \left(B_{k + 1, k}^TB_{k + 1, k} + \alpha_{k, l - 1}I_k\right)y_{k, l - 1} - B_{k + 1, k}^Tc_k \\ \frac{1}{2\alpha_{k, l - 1}}\left(B_{k + 1, k}y_{k, l - 1} - c_k\right)^T \left(B_{k + 1, k}y_{k, l - 1} - c_k\right) - \frac{1}{2\alpha_{k, l - 1}}\varepsilon^2\end{pmatrix}. \end{aligned} \end{equation} Note that the matrix $B_{k + 1, k}^TB_{k + 1, k}$ has size $k\times k$. This means that as long as the number of outer iterations -- which corresponds to the size of the constructed Krylov basis -- remains small, calculating $\left\| D_{l - 1}^{-1}\right\|$ and solving the projected Jacobian system \eqref{eq:pjacsys} of size $(k + 1)\times(k + 1)$ can be done efficiently. A full overview of the method can be found in \hypref{algorithm}{alg:pntm} and below we discuss some of the steps. As a starting point for the original NTM method, we used the solution to the Tikhonov normal equations $F_1(x_0, \alpha_0) = 0$ for a chosen $\alpha_0$. Now, in each outer Krylov iteration, we will use the current best estimate for the regularization parameter, i.e. $\alpha_{k, 0} = \alpha_{k - 1}$, and solve the projected Tikhonov normal equations $\wt{F}_1(y_{k, 0}, \alpha_{k, 0}) = 0$. This $k \times k$ linear system can be solved quickly as long as the number of Krylov iterations is small and its solution can be used to initialize the inner Newton iterations.
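A minimal Python sketch of the Bidiag1 procedure of \hypref{algorithm}{alg:bidiag1}, together with the small projected Tikhonov solve used for the initialization, could look as follows. It is an illustration only: classical Gram-Schmidt is used for the reorthogonalization, and the decomposition is built in one call rather than expanded incrementally.

\begin{verbatim}
import numpy as np

def bidiag1(A, b, k):
    # k steps of Bidiag1 (Golub-Kahan) with full reorthogonalization.
    # Returns U (m x (k+1)), B ((k+1) x k) and V (n x k) with A V = U B,
    # starting from u1 = b / ||b||.
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ U[:, j]
        if j > 0:
            r -= B[j, j - 1] * V[:, j - 1]           # nu_k * v_{k-1}
        r -= V[:, :j] @ (V[:, :j].T @ r)             # reorthogonalize vs V
        B[j, j] = np.linalg.norm(r)                  # mu_k
        V[:, j] = r / B[j, j]
        p = A @ V[:, j] - B[j, j] * U[:, j]
        p -= U[:, :j + 1] @ (U[:, :j + 1].T @ p)     # reorthogonalize vs U
        B[j + 1, j] = np.linalg.norm(p)              # nu_{k+1}
        U[:, j + 1] = p / B[j + 1, j]
    return U, B, V

def projected_tikhonov(B, normb, alpha):
    # Solve the small k x k system F1~(y, alpha) = 0 of the projected
    # Tikhonov problem (eq:ptik), with c_k = (||b||, 0, ..., 0)^T.
    k = B.shape[1]
    c = np.zeros(B.shape[0]); c[0] = normb
    return np.linalg.solve(B.T @ B + alpha * np.eye(k), B.T @ c)
\end{verbatim}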
Another important question is how many inner Newton iterations should be performed before the Krylov subspace is expanded. If, on the one hand, the Krylov subspace is too small to contain the solution $x$ of the inverse problem (or a good approximation of it), then the Newton iterations cannot converge, so we would like the number of inner iterations to be small. If, on the other hand, the Krylov subspace is large enough to contain the solution, we don't want to keep expanding it, and the maximum number of inner Newton iterations should be large enough for them to converge. This is why we initially limit the number of inner Newton iterations; however, the moment that the residual of the solution becomes smaller than the discrepancy level $\varepsilon$, we take a much larger number. This corresponds to \hypref{lines}{alg:pntm:init1}--\ref{alg:pntm:init2} of \hypref{algorithm}{alg:pntm}. Finally, we don't change the stopping criterion for the inner Newton iterations, \hypref{algorithm}{alg:pntm} \hypref{line}{alg:pntm:stop1}. However, because we are now working with the projected system, $\wt{F}$ may be solved accurately before the original system $F$ is. We therefore don't stop the outer Krylov iterations until the value for the regularization parameter $\alpha_{k}$ stagnates as well, \hypref{algorithm}{alg:pntm} \hypref{line}{alg:pntm:stop2}. The necessity for this will become clear in the numerical experiments, where we will see that this corresponds to finding a solution $x_k$ that satisfies the discrepancy principle, but not the Tikhonov normal equations. \begin{algorithm} \caption{Projected Newton on the Tikhonov-Morozov system (PNTM)}\label{alg:pntm} \begin{algorithmic}[1] \State Choose initial $\alpha_0 > 0$. \State Set FLAG $= 0$. \For{$k = 1, \ldots,$ outeriter} \State Expand $U_{k + 1}$, $B_{k + 1, k}$ and $V_k$ using Bidiag1 (\hypref{algorithm}{alg:bidiag1}). \State Set $\alpha_{k, 0} = \alpha_{k - 1}$ and solve $\wt{F}_1(y_{k, 0}, \alpha_{k, 0}) = 0$ for $y_{k, 0}$. \If{$\left\|B_{k + 1, k}y_{k, 0} - c_k\right\| > \varepsilon$}\label{alg:pntm:init1} \State inneriter $= \min\left\{k, 10\right\}$ \Else \State inneriter $= 10000$\label{alg:pntm:inneriter} \EndIf\label{alg:pntm:init2} \For{$l = 1, \ldots,$ inneriter} \State Solve the projected Jacobian system \eqref{eq:pjacsys} for $\Delta y_{k, l}$ and $\Delta\alpha_{k, l}$. \State Calculate $\left\|D^{-1}_{l - 1}\right\|$. \State Calculate the step size $\gamma$ using \hypref{corollary}{cor:gamma1} or \ref{cor:gamma2}. \State $y_{k, l} = y_{k, l - 1} + \gamma\Delta y_{k, l}$ and $\alpha_{k, l} = \alpha_{k, l - 1} + \gamma\Delta\alpha_{k, l}$. \If{$\left\|\wt{F}(y_{k, l}, \alpha_{k, l})\right\| <$ tol}\label{alg:pntm:stop1} \State FLAG $= 1$ \Break \EndIf \EndFor \State $x_k = V_ky_{k, l}$ and $\alpha_k = \alpha_{k, l}$. \If{FLAG $= 1$ \textbf{and} $\left|\alpha_k - \alpha_{k - 1}\right|/\alpha_{k - 1} <$ tol}\label{alg:pntm:stop2} \Break \EndIf \EndFor \end{algorithmic} \end{algorithm}
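In outline, the structure of \hypref{algorithm}{alg:pntm} can be summarized by the following Python skeleton. The helpers \texttt{bidiag1} and \texttt{projected\_tikhonov} refer to the sketch above; \texttt{ntm\_inner} is a hypothetical routine (not part of the paper) standing for the inner NTM iterations on the projected system \eqref{eq:ptikmor}, returning the updated $(y, \alpha)$ and a convergence flag. The decomposition is recomputed from scratch here for brevity; in practice it is expanded incrementally.

\begin{verbatim}
import numpy as np

def pntm(A, b, eps, alpha0, tol=1e-3, outeriter=100):
    normb = np.linalg.norm(b)
    alpha = alpha0
    for k in range(1, outeriter + 1):
        U, B, V = bidiag1(A, b, k)                  # expand the decomposition
        y0 = projected_tikhonov(B, normb, alpha)    # initialize from F1~ = 0
        c = np.zeros(k + 1); c[0] = normb
        # Few inner iterations while the projected residual exceeds eps,
        # many once the discrepancy level is reachable (lines init1-init2).
        inner = min(k, 10) if np.linalg.norm(B @ y0 - c) > eps else 10000
        y, alpha_new, ok = ntm_inner(B, c, eps, y0, alpha,
                                     maxiter=inner, tol=tol)  # hypothetical
        # Stop only when the projected system is solved AND alpha stagnates.
        if ok and abs(alpha_new - alpha) / alpha < tol:
            return V @ y, alpha_new
        alpha = alpha_new
    return V @ y, alpha
\end{verbatim}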
\section{Reference methods}\label{sec:refmethods} In this section we briefly discuss two methods which we compare the PNTM method to. The first method iteratively solves the Tikhonov problem and also uses an iterative update scheme for the regularization parameter based on the discrepancy principle. The second method does not solve the Tikhonov problem, but combines an early stopping criterion with a right preconditioner in order to include prior knowledge and regularization. \subsection{Generalized bidiagonal-Tikhonov} In \cite{gazzola2014_1, gazzola2014_2, gazzola2014_3} a generalized Arnoldi-Tikhonov method (GAT) was introduced that iteratively solves the Tikhonov problem \eqref{eq:tikhonov} using a Krylov subspace method based on the Arnoldi decomposition of the matrix $A$. Simultaneously, after each Krylov iteration, the regularization parameter is updated in order to approximate the value for which the discrepancy is equal to $\varepsilon$. This is done using one step of the secant method to find the intersection of the discrepancy curve with the tolerance for the discrepancy principle (see \hypref{figure}{fig:curves}), but in the current Krylov subspace. Because the method is based on the Arnoldi decomposition, it is connected to the GMRES algorithm and only works for square matrices. However, by replacing the Arnoldi decomposition with the bidiagonal decomposition we used in the previous section, the method can be adapted to non-square matrices. The update for the regularization parameter is based on the regularized and the non-regularized residual. Let, in the $k$th iteration, $z_k$ be the solution without regularization -- i.e. $\alpha = 0$ -- and $y_k$ the solution with the current best regularization parameter -- i.e. $\alpha = \alpha_{k - 1}$. If $r(z_k)$ and $r(y_k)$ are the corresponding residuals, then the regularization parameter is updated using \begin{equation}\label{eq:aup} \alpha_k = \left|\frac{\varepsilon - r(z_k)}{r(y_k) - r(z_k)}\right|\alpha_{k - 1}. \end{equation} A brief sketch of this method is given in \hypref{algorithm}{alg:gbit}, where we use the same stopping criterion as for PNTM; for more information we refer to \cite{gazzola2014_1, gazzola2014_2, gazzola2014_3}. Note that in the original GAT method, the non-regularized iterates $z_k$ are equivalent to the GMRES iterations for the solution of $Ax = b$. Now, because the Arnoldi decomposition is replaced with the bidiagonal decomposition, they are equivalent to the LSQR iterations for the solution of $Ax = b$. \begin{algorithm} \caption{Generalized bidiagonal Tikhonov (GBiT)}\label{alg:gbit} \begin{algorithmic}[1] \State Choose initial $\alpha_0 > 0$. \For{$k = 1, \ldots,$ maxiter} \State Expand $U_{k + 1}$, $B_{k + 1, k}$ and $V_k$ using Bidiag1 (\hypref{algorithm}{alg:bidiag1}). \State Solve $\wt{F}_1(z_k, 0) = 0$ for $z_k$. \State Solve $\wt{F}_1(y_k, \alpha_{k - 1}) = 0$ for $y_k$. \State Calculate $\alpha_k$ using \eqref{eq:aup}. \If{$\left\|\wt{F}(y_k, \alpha_k)\right\| <$ tol \textbf{and} $\left|\alpha_k - \alpha_{k - 1}\right|/\alpha_{k - 1} <$ tol} \Break \EndIf \EndFor \end{algorithmic} \end{algorithm} \subsection{General form Tikhonov and priorconditioning} In its general form, the Tikhonov problem \eqref{eq:tikhonov} is written as \begin{equation}\label{eq:gentikhonov} x_\alpha = \argmin_{x\in\mbbR^n}\left\|Ax - b\right\|^2 + \alpha\left\|L(x - x_0)\right\|^2, \end{equation} with $x_0\in\mbbR^n$ an initial estimate and $L\in\mbbR^{p\times n}$ a regularization matrix, both chosen to incorporate prior knowledge or to place specific constraints on the solution \cite{gazzola2014_1, hansen2010}. If $L$ is a square invertible matrix, then the problem can be written in the standard form \begin{equation}\label{eq:tranftikhonov} z_\alpha = \argmin_{z\in\mbbR^n}\left\|\ol{A}z - r_0\right\|^2 + \alpha\left\|z\right\|^2, \end{equation} by using the transformation \begin{equation}\label{eq:stdtransf} z = L(x - x_0), \quad\ol{A} = AL^{-1}, \quad r_0 = b - Ax_0. \end{equation}
When $L$ is not square invertible, some form of pseudoinverse has to be used, but the reformulation of the problem remains the same \cite{hansen2010}. After solving \eqref{eq:tranftikhonov}, the solution can be found as \[ x = x_0 + L^{-1}z. \] Instead of solving $Ax = b$, an alternative regularization method, called priorconditioning, is to solve \[ \left\{\begin{aligned} &AL^{-1}z = b - Ax_0\\ &x = x_0 + L^{-1}z. \end{aligned}\right. \] Here, the matrix $L$ can be seen as a right preconditioner. Its function is, however, not to improve the convergence of the iterative method, but to incorporate regularization and prior knowledge into the solution \cite{calvetti2015}. This priorconditioned linear system can now be solved with CGLS combined with an early stopping criterion based on the discrepancy principle. Note that this method will find a solution in the same Krylov subspace as PNTM, but that PNTM selects another element of this space due to the presence of the regularization term. \section{Numerical Experiments II}\label{sec:numexp2} \subsection{Large random matrix problem} As a first numerical experiment, we repeat the random matrix experiment from \hypref{section}{sec:numexp1}. The only thing we change is the size of the matrices: $21000\times 15000$. The results are shown in \hypref{figure}{fig:rml} and \hypref{table}{tab:rml}, where we used $tol = 1\snot{-3}$ for the stopping criterion. Similarly to the smaller experiment, there is little difference between the runs when it comes to the number of iterations (outer and inner) or the optimal regularization parameter. As a comparison, we also solved the problem with GBiT and see that while a similar value for the regularization parameter is found, PNTM requires fewer Krylov iterations in order to converge. When we compare \hypref{figure}{fig:rms} and \hypref{figure}{fig:rml}, we see that the behaviour of the method is now quite different. In the original NTM method we started from a point on the discrepancy curve and stayed close to it by limiting the step size. Now, with the PNTM method, we solve the problem in Krylov subspaces of increasing size. This means that in the first few iterations, we end up far away from the true discrepancy curve. At some point we have constructed a Krylov subspace in which we can solve the projected system up to the discrepancy principle, but as we observe, not necessarily the true Tikhonov normal equations. At this point we increase the maximum number of inner iterations and we keep performing outer Krylov iterations until the regularization parameter stagnates. Whichever of the two corollaries we use to determine the step size produces similar results. The main difference is the number of inner iterations required to solve the projected system. Using \hypref{corollary}{cor:gamma2}, the method once again requires a significantly lower number of Newton iterations to converge inside each of the Krylov subspaces. \begin{figure} \centering \includegraphics[width = 0.49\linewidth]{./images/rml_case1}\hspace{2.5pt} \includegraphics[width = 0.49\linewidth]{./images/rml_case2}\\[2.5pt] \includegraphics[width = 0.49\linewidth]{./images/rml_gamma}\hspace{2.5pt} \includegraphics[width = 0.49\linewidth]{./images/rml_alpha}\\[2.5pt] \includegraphics[width = 0.49\linewidth]{./images/rml_iters} \caption{For each Newton iteration, we plot the point $(\alpha_k, \left\|Ax_k - b\right\|)$ to see where it lies with respect to the discrepancy curve.
The top left figure corresponds to case 1, the top right figure to case 2. Middle left: the value of the step size used in each iteration. Middle right: the value of the regularization parameter in each iteration. Bottom: the number of inner Newton iterations per outer Krylov iteration.} \label{fig:rml} \end{figure} \begin{table} \centering \begin{tabular}{c||c|c|c} & \# Krylov iterations & \# Newton iterations & $\alpha$ \\ \hline\hline PNTM -- case 1 & $16$ ($< 1$) & $16772$ ($432$) & $469.0143$ ($5.98$) \\ \hline PNTM -- case 2 & $16$ ($< 1$) & $576$ ($14$) & $469.0144$ ($5.98$) \\ \hline GBiT & $32$ ($< 1$) & $\cdot$ & $469.3934$ ($5.97$) \end{tabular} \caption{Average number of iterations for the 1000 runs of the experiment and the standard deviation (rounded). The number of outer iterations corresponds to the dimension of the constructed Krylov subspace, whereas the number of inner iterations is the total number of Newton iterations during all the outer iterations. Because both cases converge to the same solution, the same value for $\alpha$ is found within each run; across the different random matrices its value turns out to be quite similar, hence the low standard deviation.} \label{tab:rml} \end{table} \subsection{Computed tomography} As a second numerical experiment, we consider X-ray computed tomography. Here, the goal is to reconstruct the attenuation factor of an object based on the loss of intensity in the X-rays after they pass through the object. Classically, the reconstruction is done using analytical methods based on the Fourier and Radon transforms \cite{mallat2009}. In the last decades interest has grown in algebraic reconstruction methods due to their flexibility when it comes to incorporating prior knowledge and handling limited data. Here, the problem is written as a linear system $Ax = b$, where $x$ represents the attenuation of the object in each pixel, the right-hand side $b$ is related to the intensity measurements of the X-rays and $A$ is a projection matrix. The precise structure of $A$ depends on the experimental set-up, but it is typically very sparse. For more information we refer to \cite{joseph1982, hansen2010, siltanen2012}. We also do not construct the matrix $A$ explicitly, but use the ASTRA toolbox \cite{aarle2015, aarle2016} in order to calculate the matrix vector products on-the-fly using their GPU implementation \cite{palenstijn2011}. As a test image we take the modified Shepp--Logan phantom of size $512\times 512$ and take $720$ projection angles in $[0, \pi[$, which corresponds to a matrix $A$ of size $(720\cdot 512)\times(512\cdot 512)$. Similar to the previous experiments we add $10\%$ noise to the exact right-hand side (resulting here in $\varepsilon = 4.3513\snot{3}$), but we will only calculate the PNTM reconstruction using the larger step size from \hypref{corollary}{cor:gamma2}. We also calculate the reconstruction using GBiT and the simultaneous iterative reconstruction technique (SIRT) \cite{gregor2008}. The latter is a widely used fixed point iteration method for tomographic reconstructions based on the following recursion: \[ x_{k + 1} = x_k + CA^TR\left(b - Ax_k\right). \] Here, $R$ and $C$ are diagonal matrices whose elements are the inverse row and column sums of $A$, i.e. $r_{ii} = 1/\sum_j a_{ij}$ and $c_{jj} = 1/\sum_i a_{ij}$; a minimal sketch of this iteration is given below.
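The following Python sketch illustrates the SIRT recursion with the discrepancy principle as stopping rule. It is a dense toy version only; in practice $A$ is sparse or is available exclusively through matrix-vector products.

\begin{verbatim}
import numpy as np

def sirt(A, b, eps, maxiter=500):
    # R and C hold the inverse row and column sums of A; for a tomographic
    # projection matrix these sums are positive (zero rows or columns would
    # need special treatment).
    R = 1.0 / A.sum(axis=1)
    C = 1.0 / A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(maxiter):
        x = x + C * (A.T @ (R * (b - A @ x)))
        if np.linalg.norm(A @ x - b) < eps:   # discrepancy principle
            break
    return x
\end{verbatim}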
It can also be shown that this algorithm converges to the solution of the following weighted least squares problem: \[ x^* = \argmin_{x\in\mbbR^n}\left\|Ax- b\right\|_R^2. \] Note that, on the one hand, just like PNTM or GBiT, each SIRT iteration requires one multiplication with $A$ and one with $A^T$. On the other hand, it does not need to construct and store a basis for the Krylov subspace, so it is computationally less expensive and requires much less memory -- two main advantages of the method. The reconstructions are shown in \hypref{figure}{fig:ct1}, with further details in \hypref{figure}{fig:ct2} and \hypref{table}{tab:ct}. Here, we used $tol = 1\snot{-3}$ for the PNTM and GBiT stopping criterion and stopped the SIRT iterations once the residual was smaller than the discrepancy tolerance $\varepsilon$. Furthermore, because the 2-norm is not always a good measure for how closely two images visually resemble each other, we also consider the structural similarity index (SSIM) \cite{wang2004}. For two images $x$ and $y$ and default values $C_1 = 0.01^2$ and $C_2 = 0.03^2$, this index is given by: \[ SSIM(x, y) = \frac{\left(2\mu_x\mu_y + C_1\right)\left(2\sigma_{xy} + C_2 \right)}{\left(\mu_x^2 + \mu_y^2 + C_1\right)\left(\sigma_x^2 + \sigma_y^2 + C_2\right)}. \] Here, $\mu_x$ and $\mu_y$ are the mean intensities of the images, $\sigma_x$ and $\sigma_y$ their standard deviations and $\sigma_{xy}$ the covariance. This index lies between $0$ and $1$ and the higher its value, the better the image $x$ resembles the reference image $y$. When we look at the results, we see that there is little difference between the errors of the reconstructions, but that SIRT has a much larger SSIM. When looking at the reconstructed images, we see that this image is indeed smoother than the others. Because SIRT is a stationary method, it also needs more iterations than PNTM and GBiT, which are both Krylov methods. As with the previous experiment, however, we see that GBiT needs almost twice as many Krylov iterations as PNTM. When we look at \hypref{figure}{fig:ct2} we see that while the value for the regularization parameter stagnates at a similar pace, PNTM minimizes the value of $\wt{F}$ more quickly. \begin{figure} \hspace*{0.32\linewidth}\hspace*{1pt} \includegraphics[width = 0.32\linewidth]{./images/phantom}\hspace{1pt} \includegraphics[height = 0.32\linewidth]{./images/colorbar}\\[2.5pt] \includegraphics[width = 0.32\linewidth]{./images/ct_pntm}\hspace{1pt} \includegraphics[width = 0.32\linewidth]{./images/ct_gbit}\hspace{1pt} \includegraphics[width = 0.32\linewidth]{./images/ct_sirt} \caption{Top: original Shepp-Logan phantom with values in $[0, 1]$. Bottom: from left to right the PNTM, GBiT and SIRT reconstructions with values in $[-0.2074, 1.0889]$, $[-0.2071, 1.0899]$ and $[-0.1477, 1.1078]$ respectively. Here, all images are shown on a colorscale $[-0.3, 1.3]$.} \label{fig:ct1} \end{figure} \begin{figure} \centering \includegraphics[width = 0.49\linewidth]{./images/ct_err}\\[2.5pt] \includegraphics[width = 0.49\linewidth]{./images/ct_alpha}\hspace{2.5pt} \includegraphics[width = 0.49\linewidth]{./images/ct_iters}\\[2.5pt] \includegraphics[width = 0.49\linewidth]{./images/ct_stop1}\hspace{2.5pt} \includegraphics[width = 0.49\linewidth]{./images/ct_stop2} \caption{Top: relative error in each iteration. Middle left: value of the regularization parameter in each iteration. Middle right: number of inner Newton iterations in each outer Krylov iteration for PNTM.
Bottom: the two parts of the stopping criterion for PNTM and GBiT.} \label{fig:ct2} \end{figure} \begin{table} \centering \begin{tabular}{c||c|c|c|c|c} & \# Iterations & Relative error & Residual & SSIM & $\alpha$ \\ \hline\hline PNTM & $19$ ($2714$) & $0.3159$ & $4.3513\snot{3}$ & $0.2507$ & $2.0399\snot{3}$ \\ GBiT & $38$ & $0.3164$ & $4.3513\snot{3}$ & $0.2499$ & $2.0413\snot{3}$ \\ SIRT & $78$ & $0.2832$ & $4.3443\snot{3}$ & $0.4117$ & $\cdot$ \end{tabular} \caption{Details from the CT reconstructions. The Krylov methods PNTM and GBiT require fewer iterations than SIRT, and again PNTM needs fewer iterations than GBiT. While the relative errors are very similar, the SIRT reconstruction has a much larger SSIM. The total number of inner Newton iterations for PNTM is mentioned in parentheses.} \label{tab:ct} \end{table} \subsection{Suite sparse matrix collection} As a final experiment we take the 26 matrices $A\in\mbbR^{m\times n}$ from the ``SuiteSparse Matrix Collection'' corresponding to a least squares problem \cite{davis2011}. For each matrix we generate a solution vector $x_{ex}\in\mbbR^n$ with entries $x_{ex, i} = \sin(ih)$ for $h = 2\pi/(n + 1)$, calculate the right hand side $b_{ex} = Ax_{ex}\in\mbbR^m$ and add $10\%$ noise. We then solve the resulting inverse problem with PNTM, GBiT and priorconditioned CGLS (CGLS-PC). Again, we use $tol = 1\snot{-3}$ for PNTM and GBiT and only consider the step size from \hypref{corollary}{cor:gamma2}. The CGLS iterations are stopped once the residual is smaller than $\varepsilon$. We also limit the maximum number of (outer) Krylov iterations to $100$ and the number of inner Newton iterations for PNTM to $1000$ (\hypref{algorithm}{alg:pntm} \hypref{line}{alg:pntm:inneriter}). Furthermore, because $x_{ex}$ is a sine wave, the Tikhonov problem in its standard form will result in poor reconstructions. We therefore consider the regularization matrix \begin{equation}\label{eq:regmatrix} L = \begin{pmatrix} -1 & 1 \\ & -1 & 1 \\ && \ddots & \ddots \\ &&& -1 & 1 \\ &&&& -1 \end{pmatrix}\in\mbbR^{n\times n}, \end{equation} which can be seen as placing a smoothness condition on the solution by penalizing its discrete derivative. We then solve the problem using the transformation \eqref{eq:stdtransf}. Finally, we always start the iterations from $\alpha_0 = 1$, and from $x_0 = 0$ for CGLS. A small sketch of this set-up is given below.
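For illustration, the regularization matrix \eqref{eq:regmatrix} and the standard-form transformation \eqref{eq:stdtransf} can be set up as follows in Python. Explicitly forming $L^{-1}$ is done here only for readability; in practice one would solve the bidiagonal systems instead.

\begin{verbatim}
import numpy as np

def regularization_matrix(n):
    # Bidiagonal L from eq. (regmatrix): -1 on the diagonal, 1 above it.
    return -np.eye(n) + np.diag(np.ones(n - 1), 1)

def to_standard_form(A, b, L, x0):
    # z = L(x - x0), Abar = A L^{-1}, r0 = b - A x0  (eq. (stdtransf)).
    Linv = np.linalg.inv(L)
    return A @ Linv, b - A @ x0

def from_standard_form(z, L, x0):
    # Recover x = x0 + L^{-1} z after solving the transformed problem.
    return x0 + np.linalg.solve(L, z)
\end{verbatim}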
The results are listed in \hypref{table}{tab:ssm}, where the relative discrepancy, the relative error and the relative residual are given by \[ \frac{\varepsilon}{\left\|b\right\|},\qquad \frac{\left\|x - x_{ex}\right\|}{\left\|x_{ex}\right\|}\qquad\text{and}\qquad \frac{\left\|Ax - b\right\|}{\left\|b\right\|}, \] respectively, with $x$ the reconstruction found by the algorithm. Here, we see that while all methods find a reconstruction with a similar relative error, there are a number of important differences. First of all, note that it is logical that the priorconditioned CGLS approach requires the fewest Krylov iterations, because its iterations are stopped as soon as the residual is smaller than $\varepsilon$. It is, however, only at this point that the other two methods start to produce good values for the regularization parameter. On the other hand, due to the presence of the regularization parameter, PNTM and GBiT can be seen as more flexible. Also note that the regularization parameter $\alpha$ is chosen by PNTM and GBiT such that the residual matches the discrepancy $\varepsilon$. In the results we can see, however, that the PNTM method has only converged in a few cases. It turns out that the $1000$ inner Newton iterations are insufficient for the method to converge in the constructed Krylov subspace. This is why the total number of Newton iterations is close to $100000$ and the relative residual does not equal the relative discrepancy. Increasing the maximum number of inner Newton iterations could in theory solve this issue. However, this also means that the computational cost of the method increases. \begin{sidewaystable} \scalebox{0.7}{ \begin{tabular}{l||rrrr|c|rrrrr|rrrr||rrr} &&&&&& \multicolumn{5}{c|}{PNTM} & \multicolumn{4}{c|}{GBiT} & \multicolumn{3}{c}{CGLS-PC} \\ & $m$ & $n$ & \#nnz & cond. & rel. discrp. & rel. err. & rel. res. & $\alpha$ & \#K & \#N & rel. err. & rel. res. & $\alpha$ & \#K & rel. err. & rel. res. & \#K \\ \hline\hline abb313 & $313$ & $176$ & $1,557$ & $1.8\snot{+18}$ & $0.1001$ & $0.2369$ & $0.0989$ & $5.15\snot{+1}$ & $100$ & $92036$ & $0.2652$ & $0.1001$ & $6.96\snot{+1}$ & $23$ & $0.1499$ & $0.0992$ & $10$ \\ ash85 & $85$ & $85$ & $523$ & $4.6\snot{+2}$ & $0.1005$ & $0.0665$ & $0.0887$ & $5.66\snot{+1}$ & $100$ & $97006$ & $0.0843$ & $0.1005$ & $1.64\snot{+2}$ & $15$ & $0.0658$ & $0.0957$ & $4$ \\ ash219 & $219$ & $85$ & $438$ & $3.0$ & $0.1002$ & $0.0521$ & $0.0943$ & $4.33\snot{+1}$ & $100$ & $97006$ & $0.0648$ & $0.1002$ & $6.23\snot{+1}$ & $12$ & $0.0523$ & $0.0973$ & $4$ \\ ash292 & $292$ & $292$ & $2,208$ & $1.2\snot{+18}$ & $0.0996$ & $0.0685$ & $0.0861$ & $8.96\snot{+1}$ & $100$ & $96010$ & $0.0518$ & $0.0996$ & $3.28\snot{+3}$ & $16$ & $0.0472$ & $0.0959$ & $5$ \\ ash331 & $331$ & $104$ & $662$ & $3.1$ & $0.0989$ & $0.0340$ & $0.0989$ & $9.98\snot{-1}$ & $7$ & $22$ & $0.0284$ & $0.0989$ & $3.15\snot{+1}$ & $28$ & $0.0426$ & $0.0971$ & $8$ \\ ash608 & $608$ & $188$ & $1,216$ & $3.4$ & $0.0994$ & $0.0279$ & $0.0931$ & $4.27\snot{+1}$ & $100$ & $96010$ & $0.0428$ & $0.0994$ & $2.25\snot{+2}$ & $13$ & $0.0340$ & $0.0993$ & $5$ \\ ash958 & $958$ & $292$ & $1,916$ & $3.2$ & $0.0994$ & $0.0215$ & $0.0942$ & $5.01\snot{+1}$ & $100$ & $95015$ & $0.0280$ & $0.0994$ & $3.92\snot{+2}$ & $18$ & $0.0230$ & $0.0988$ & $6$ \\ Delor64K & $64,719$ & $1,785,345$ & $652,140$ & $\cdot$ & $0.0996$ & $0.3342$ & $0.1008$ & $1.00\snot{+0}$ & $16$ & $106$ & $0.3396$ & $0.0996$ & $6.70\snot{+3}$ & $52$ & $0.3312$ & $0.0995$ & $20$ \\ Delor295K & $295,734$ & $1,823,928$ & $2,401,323$ & $\cdot$ & $0.0996$ & $0.0209$ & $0.0997$ & $1.00\snot{+0}$ & $16$ & $106$ & $0.0246$ & $0.0996$ & $3.38\snot{+4}$ & $66$ & $0.0162$ & $0.0996$ & $18$ \\ Delor338K & $343,236$ & $887,058$ & $4,211,599$ & $\cdot$ & $0.0995$ & $0.0111$ & $0.0978$ & $1.08\snot{+0}$ & $100$ & $92036$ & $0.0043$ & $0.0995$ & $5.65\snot{+6}$ & $27$ & $0.0031$ & $0.0995$ & $10$ \\ ESOC & $327,062$ & $37,830$ & $6,019,939$ & $\infty$ & $0.0995$ & $0.0586$ & $0.0985$ & $2.01\snot{-8}$ & $100$ & $74215$ & $0.0591$ & $0.0995$ & $1.34\snot{+14}$ & $100$ & $0.0225$ & $0.0995$ & $53$ \\ illc1033 & $1,033$ & $320$ & $4,719$ & $1.9\snot{+4}$ & $0.0993$ & $0.0404$ & $0.0976$ & $1.50\snot{+1}$ & $65$ & $58486$ & $0.0508$ & $0.0993$ & $2.31\snot{+1}$ & $14$ & $0.0406$ & $0.0991$ & $6$ \\ illc1850 & $1,850$ & $712$ & $8,636$ & $1.4\snot{+3}$ & $0.0993$ & $0.0135$ & $0.0972$ & $1.85\snot{+1}$ & $100$ & $93028$ & $0.0232$ & $0.0993$ & $4.65\snot{+1}$ & $19$ & $0.0172$ & $0.0989$ & $9$ \\ landmark & $71,952$ & $2,704$ & $1,146,848$ & $\infty$ & $0.0995$ & $0.0115$ & $0.0993$ & $4.74\snot{+1}$ & $100$ & $77185$ & $0.0120$ & $0.0995$ & $4.44\snot{+2}$ & $55$ & $0.0134$ & $0.0995$ & $37$ \\
Maragal\textunderscore 1 & $32$ & $14$ & $234$ & $4.6\snot{+16}$ & $0.0989$ & $0.2048$ & $0.0989$ & $2.80\snot{+0}$ & $8$ & $60$ & $0.2048$ & $0.0989$ & $2.80\snot{+0}$ & $9$ & $0.1784$ & $0.0936$ & $4$ \\ Maragal\textunderscore 2 & $555$ & $350$ & $4,357$ & $2.9\snot{+47}$ & $0.0982$ & $0.0234$ & $0.0942$ & $2.79\snot{+1}$ & $100$ & $94021$ & $0.0213$ & $0.0982$ & $9.78\snot{+1}$ & $16$ & $0.0191$ & $0.0977$ & $7$ \\ Maragal\textunderscore 3 & $1,690$ & $860$ & $18,391$ & $1.5\snot{+47}$ & $0.0993$ & $0.0194$ & $0.0945$ & $3.72\snot{+1}$ & $100$ & $94021$ & $0.0225$ & $0.0993$ & $4.62\snot{+2}$ & $20$ & $0.0136$ & $0.0991$ & $7$ \\ Maragal\textunderscore 4 & $1,964$ & $1,034$ & $26,719$ & $6.1\snot{+33}$ & $0.0996$ & $0.0218$ & $0.0936$ & $4.47\snot{+1}$ & $100$ & $96010$ & $0.0321$ & $0.0996$ & $9.87\snot{+2}$ & $17$ & $0.0123$ & $0.0996$ & $5$ \\ Maragal\textunderscore 5 & $4,654$ & $3,320$ & $93,091$ & $7.4\snot{+31}$ & $0.0994$ & $0.0192$ & $0.0926$ & $5.62\snot{+1}$ & $100$ & $95015$ & $0.0328$ & $0.0994$ & $8.95\snot{+3}$ & $17$ & $0.0147$ & $0.0985$ & $6$ \\ Maragal\textunderscore 6 & $21,255$ & $10,152$ & $537,694$ & $3.3\snot{+33}$ & $0.0995$ & $0.0153$ & $0.0953$ & $7.69\snot{+1}$ & $100$ & $95015$ & $0.0174$ & $0.0995$ & $6.09\snot{+4}$ & $20$ & $0.0072$ & $0.0994$ & $6$ \\ Maragal\textunderscore 7 & $46,845$ & $26,564$ & $1,200,537$ & $\infty$ & $0.0996$ & $0.0160$ & $0.0960$ & $8.26\snot{+1}$ & $100$ & $74215$ & $0.0066$ & $0.0996$ & $2.70\snot{+3}$ & $77$ & $0.0119$ & $0.0995$ & $41$ \\ Maragal\textunderscore 8 & $33,212$ & $75,077$ & $1,308,415$ & $\infty$ & $0.0994$ & $0.0211$ & $0.0930$ & $5.05\snot{+1}$ & $100$ & $85105$ & $0.0036$ & $0.0994$ & $3.62\snot{+4}$ & $100$ & $0.0064$ & $0.0994$ & $20$ \\ Rucci1 & $1,977,885$ & $109,900$ & $7,791,168$ & $\cdot$ & $0.0995$ & $0.0203$ & $0.1006$ & $1.00\snot{+0}$ & $6$ & $16$ & $0.0248$ & $0.0995$ & $1.38\snot{+2}$ & $28$ & $0.0019$ & $0.0994$ & $9$ \\ sls & $1,748,122$ & $62,729$ & $6,804,304$ & $\cdot$ & $0.0995$ & $0.0068$ & $0.0992$ & $1.64\snot{+0}$ & $100$ & $89065$ & $0.0028$ & $0.0995$ & $2.59\snot{+6}$ & $33$ & $0.0023$ & $0.0995$ & $14$ \\ well1033 & $1,033$ & $320$ & $4,732$ & $1.7\snot{+2}$ & $0.0999$ & $0.0188$ & $0.0988$ & $1.26\snot{+1}$ & $36$ & $27098$ & $0.0229$ & $0.0999$ & $1.83\snot{+1}$ & $19$ & $0.0154$ & $0.0998$ & $8$ \\ well1850 & $1,850$ & $712$ & $8,755$ & $1.1\snot{+2}$ & $0.0998$ & $0.0170$ & $0.0955$ & $2.37\snot{+1}$ & $100$ & $95015$ & $0.0565$ & $0.0998$ & $1.80\snot{+2}$ & $14$ & $0.0339$ & $0.0982$ & $6$ \end{tabular} } \caption{Details of the 26 matrices and the PNTM, GBiT and CGLS-PC reconstructions. \#K indicates the number of Krylov iterations and \#N the total number of inner Newton iterations for PNTM. Because we limited this number, PNTM has trouble satisfying the stopping criterion, even though the reconstruction has a quality similar to that of the other methods.} \label{tab:ssm} \end{sidewaystable} \section{Conclusions \& remarks}\label{sec:concl} In this paper we introduced two different numerical methods: Newton on the Tikhonov-Morozov system (NTM) and projected Newton on the Tikhonov-Morozov system (PNTM). We derived the NTM method based on theoretical results and illustrated two difficulties: the estimated step size and the computational cost. In order to reduce the computational cost we projected the problem onto a low dimensional Krylov subspace. The small estimate for the step size, however, remains an issue.
In the numerical experiments it is important to note the difference between GBiT (and by extension GAT) and PNTM. While both methods solve the inverse problem in increasingly larger Krylov subspaces, the value that is minimized in each Krylov subspace and the way the regularization parameter is updated are different. GBiT solves the projected Tikhonov normal equations in each Krylov subspace using a fixed regularization parameter and only afterwards updates the regularization parameter for the next Krylov iteration. This can be seen as alternating between minimizing $\wt{F}_1$ using a Krylov method and minimizing $\wt{F}_2$ using the secant method. The PNTM method minimizes both values simultaneously in the Krylov subspace using Newton's method and only expands the Krylov subspace if the value for the regularization parameter has not stagnated yet. Our numerical experiments seem to indicate that the alternating approach of GBiT is less efficient than the simultaneous update approach of PNTM. This however assumes that the number of inner Newton iterations for PNTM is high enough for them to converge. As a result of the small estimate for the step size we currently use, this may take too many iterations to be a viable alternative. Improving the choice of the step size -- possibly using a backtracking approach -- is therefore necessary in order to improve this method. \section*{Acknowledgments} The authors wish to thank the Department of Mathematics and Computer Science, University of Antwerp, for financial support. \bibliographystyle{siamplain}
\section{Introduction} Let $G$ be a complex reductive Lie group and $B$ be its Borel subgroup. The Bott-Samelson varieties are defined in \cite{BS67} and \cite{De74} to desingularize the Schubert varieties $X$ in the flag manifold $G/B$, and then used to study the Chow ring of $G/B$. In representation theory, through the characters of $H^{0}(X,L)$, the Bott-Samelson varieties provide Demazure's character formula, which can be understood as a generalization of Weyl's character formula. \medskip In this paper, we study the Bott-Samelson varieties for the case when $G=\rm{GL}_{n}(\mathbb{C})$ is the general linear group over the complex number field $\mathbb{C}$ and $B$ is its Borel subgroup consisting of upper triangular matrices. For any word $\mathbf{i}=(i_{1},\cdots ,i_{l})$ with $1\leq i_{j}\leq n-1$, the \textit{Bott-Samelson variety} can be defined as the quotient space \begin{equation*} Z_{\mathbf{i}}=P_{i_{1}}\times P_{i_{2}}\times \cdots \times P_{i_{l}}/B^{l} \end{equation*} where $P_{i_{j}}$ is the minimal parabolic subgroup of $G$ associated to the simple reflection $s_{i_{j}}=(i_{j},i_{j}+1)$, and $B^{l}$ acts on the product of the $P_{i_{j}}$'s by \begin{equation*} (p_{1},\cdots ,p_{l}). (b_{1},\cdots ,b_{l})=(p_{1}b_{1},b_{1}^{-1}p_{2}b_{2},\cdots ,b_{l-1}^{-1}p_{l}b_{l}). \end{equation*} We may realize the Bott-Samelson variety $Z_{\mathbf{i}}$ as a configuration variety \cite{Ma98}: \begin{equation*} Z_{\mathbf{i}}\subset {\rm Gr}(\mathbf{i})={\rm Gr}(i_{1},n)\times \cdots \times {\rm Gr}(i_{l},n) \end{equation*} where ${\rm Gr}(i,n)$ is the Grassmann variety of $i$ dimensional subspaces in $\mathbb{C}^{n}$. Then we have a natural line bundle induced from the Pl\"{u}cker bundles on the factors of ${\rm Gr}(\mathbf{i})$. That is, for $\mathbf{m}=(m_{1},\cdots ,m_{l})\in \mathbb{Z}_{\geq 0}^{l}$, \begin{equation*} L_{\mathbf{m}}=P_{i_{1}}\times \cdots \times P_{i_{l}}\times \mathbb{C}/B^{l} \end{equation*} where $(b_{1},\cdots ,b_{l})$ is acting on the right by \begin{equation*} (p_{1}b_{1},b_{1}^{-1}p_{2}b_{2},\cdots ,b_{l-1}^{-1}p_{l}b_{l},\varpi _{i_{1}}(b_{1}^{-1})^{m_{1}}\cdots \varpi _{i_{l}}(b_{l}^{-1})^{m_{l}}v) \end{equation*} with $\varpi _{i}$ being the $i$-th fundamental weight given by $\varpi _{i}(diag(x_{1},\cdots ,x_{n}))=x_{1}\cdots x_{i}$. From such a realization, Lakshmibai and Magyar \cite{LM98,Ma98} described their standard monomial bases in terms of root operators. \medskip In \cite{GK94}, Grossberg and Karshon studied a family of complex structures on a Bott-Samelson manifold, such that the underlying real manifold remains the same, but the limit complex manifold becomes, what they call, a Bott tower, which admits a complete torus action. An algebraic version of their construction appeared in \cite{Pa08}. Our deformation is algebraic in nature, yet is seemingly different, as can be seen in examples and also from the fact that in the limit, the relationship between ${Z_{\mathbf i}}$ and $G/B$ naturally extends to the whole flat family. \medskip As is the case for the Grassmann varieties and the flag varieties, we can investigate the Pl\"{u}cker coordinates in terms of Young tableaux or minors over a matrix and straightening relations among them. Using the language of row-convex tableaux introduced by Taylor \cite{Ta01}, we study the homogeneous coordinate rings of the Bott-Samelson varieties and their explicit standard monomial type bases.
Then from SAGBI-Gr\"{o}bner degeneration techniques (e.g., \cite{St95}), we obtain toric degenerations of the Bott-Samelson varieties. In a separate section, we provide a detailed study of an example for the case of $\rm{GL}_{3}(\mathbb{C})$, including toric degenerations, the corresponding moment polytopes, and computations of the Hilbert polynomial. \medskip This paper is arranged as follows: in Section 2, we study the homogeneous coordinate rings of the Bott-Samelson varieties and their toric degenerations. In Section 3, we give an example for the case of a three-dimensional Bott-Samelson variety. In Section 4, we study the standard monomial theory for the Bott-Samelson varieties in terms of row-convex tableaux and give a description via integral points in polyhedral cones. \section{Homogeneous coordinate ring and Toric degeneration} \subsection{Homogeneous coordinate ring} We shall consider the section ring of $Z_{\mathbf{i}}$ with respect to $L_{\mathbf{m}}$: \begin{equation*} \mathcal{R}_{\mathbf{i,m}}=\bigoplus_{d\geq 0}H^{0}(Z_{\mathbf{i}},L_{\mathbf{m}}^{d}). \end{equation*} To obtain an explicit expression, first we fix a word $\mathbf{i}$ associated to a reduced expression of the longest element of the symmetric group $\mathfrak{S}_{n}$, then we describe $\mathcal{R}_{\mathbf{i,m}}$ as a ring generated by tableaux of shape defined by $\mathbf{i}$ and $\mathbf{m}$. \medskip Let us consider the following reduced decomposition of the longest element in $\mathfrak{S}_{n}$: \begin{equation*} w^{(n)}=(s_{1})(s_{2}s_{1})\cdots (s_{n-1}s_{n-2}\cdots s_{1}). \end{equation*} Note that the length of $w^{(n)}$ is $l=n(n-1)/2$. Once and for all, we fix \begin{equation} \mathbf{i}=(i_{1},\cdots ,i_{l}) \label{reduced word} \end{equation} associated to the reduced expression $w^{(n)}=s_{i_{1}}s_{i_{2}}\cdots s_{i_{l}}$ of the longest element given above. Also, we fix the following sets, which we shall call the \textit{column sets}, \begin{equation} C^{(k)}=s_{i_{1}}s_{i_{2}}\cdots s_{i_{k}}\{1,\cdots ,i_{k}\} \label{column sets} \end{equation} for $1\leq k\leq l$. Then the following is easy to check. \begin{lemma} \label{row-convex}i) For each $k$, if $a<c<b$ and both $a$ and $b$ are in $C^{(k)}$, then $c\in C^{(k)}$. ii) For each $j$ with $2\leq j\leq n$, set $p_{j}=j(j-1)/2$. Then, the column sets are \begin{equation*} C^{(p_{j-1}+t)}=\{t+1,t+2,\cdots ,j\} \end{equation*} for $1\leq t\leq j-1$; in particular $C^{(1)}=\{2\}$. \end{lemma} This shows that if we stack $C^{(k+1)}$ on top of $C^{(k)}$, the column sets we defined may form a \textit{row-convex shape}, which is defined in \cite{Ta01} as a generalized skew Young diagram. \begin{example} \label{exii}i) For $n=3,$ $\mathbf{i}=(121)$ and $C^{(1)}=\{2\},C^{(2)}=\{2,3\},C^{(3)}=\{3\}$. For $n=4,$ $\mathbf{i}=(121321)$ and we have additional column sets: $C^{(4)}=\{2,3,4\},C^{(5)}=\{3,4\},C^{(6)}=\{4\}$. ii) Then the corresponding row-convex shapes for $n=3$ and $n=4$ indicated by $X$ are \begin{equation*} \young(\ \ X,\ XX,\ X\ ),\young(\ \ \ X,\ \ XX,\ XXX,\ \ X\ ,\ XX\ ,\ X\ \ ) \end{equation*} \end{example} \medskip Let $M_{n}=M_{n}(\mathbb{C})$ be the space of complex $n\times n$ matrices and $B_{n}=\overline{B}$ be the subspace consisting of upper triangular matrices: \begin{equation*} B_{n}=\{(x_{ij})\in M_{n}:x_{ij}=0\text{ for }i>j\}. \end{equation*} For $k\leq n$, consider subsets $R=\{r_{1}<\cdots <r_{k}\}$ and $C=\{c_{1}<\cdots <c_{k}\}$ of $\{1,\cdots ,n\}$.
Then, we let $[R:C]$ or $(r_{1},\cdots ,r_{k}|c_{1},\cdots ,c_{k})$ denote the map from $B_{n}$ to $\mathbb{C}$ assigning to a matrix $b\in B_{n}$ the determinant of the $k\times k$ minor of $b$ formed by taking rows $R$ and columns $C$: \begin{eqnarray*} \lbrack R:C] &=&(r_{1},\cdots ,r_{k}|c_{1},\cdots ,c_{k}) \\ &=&\det \left[ \begin{array}{ccc} x_{r_{1}c_{1}} & \cdots & x_{r_{1}c_{k}} \\ \vdots & \ddots & \vdots \\ x_{r_{k}c_{1}} & \cdots & x_{r_{k}c_{k}} \end{array} \right] \end{eqnarray*} where $x_{rc}=0$ if $r>c$. For subsets $S$ and $S^{\prime}$ of $\{1,\cdots ,n\}$ with the same size, we can impose a partial ordering: $S\preceq S^{\prime}$ if for each $k$, the $k$th smallest element of $S$ is less than or equal to the $k$th smallest element of $S^{\prime}$. Then note that $[R:C]$ is non-zero only if $R\preceq C$. This property is called \textit{flagged}. Since we are only considering flagged cases, from now on we continue to assume this property. \medskip By using a Young diagram with a single row consisting of $n$ boxes, we can record $[R:C]$ by filling in the $c_{i}$th box counting from left to right with $r_{i}$ for each $i$. For example, for $n=6$ if $R_{1}=\{1,3,4\}$ and $C_{1}=\{2,3,4\}$ then $[R_{1}:C_{1}]$ can be drawn as \begin{equation*} \young(\ 134\ \ ) \end{equation*} The product of $k$ of these row tableaux $[R_{i}:C_{i}]$ is denoted by a $k\times n$ tableau whose $(k+1-i)$th row counting from top to bottom is $[R_{i}:C_{i}]$ for $1\leq i\leq k$. For example, if $R_{2}=\{2,3,5\},C_{2}=\{3,4,5\},R_{3}=\{4,5\}$ and $C_{3}=\{5,6\}$, then $\prod_{1\leq i\leq 3}[R_{i}:C_{i}]$ is \begin{equation*} \young(\ \ \ \ 45,\ \ 235\ ,\ 134\ \ ) \end{equation*} \begin{remark} We note that our notation for tableaux agrees with that of \cite{Ta01} after erasing empty cells. To make it compatible with the notation given in \cite{Ma98} we need to take the transpose, i.e., write $[R_{i}:C_{i}]$ in a column tableau, and then the product can be recorded in an $n\times k$ tableau whose $i$th column represents $[R_{i}:C_{i}]$ for $1\leq i\leq k$. \end{remark} \medskip Let us fix $l=n(n-1)/2$. For $\mathbf{m}=(m_{1},\cdots ,m_{l})\in \mathbb{Z}_{\geq 0}^{l}$, which we shall call the \textit{multiplicity}, consider a collection \begin{equation*} \{[R_{j_{i}}^{(i)}:C^{(i)}]\ |\ \text{for each }i\text{, }|R_{j_{i}}^{(i)}|=|C^{(i)}|\text{ for }1\leq j_{i}\leq m_{i}\} \end{equation*} where the $C^{(i)}$'s are the column sets defined in (\ref{column sets}) with respect to $\mathbf{i}$. We shall use the notation $|\mathbf{m}|$ for $\sum_{k}m_{k}$. Then, by repeating the $C^{(i)}$'s $m_{i}$ times for each $i$, the product $\mathsf{t}$ of the $[R_{j}^{(i)}:C^{(i)}]$'s can be drawn as a $|\mathbf{m}|\times n$ tableau having $[R_{j}^{(i)}:C^{(i)}]$ as its $(|\mathbf{m}|+1-\left( m_{1}+\cdots +m_{i-1}+j\right))$th row: \begin{equation*} \mathsf{t}=\left( \prod_{1\leq j\leq m_{1}}[R_{j}^{(1)}:C^{(1)}]\right) \cdot \left( \prod_{1\leq j\leq m_{2}}[R_{j}^{(2)}:C^{(2)}]\right) \cdot ...\cdot \left( \prod_{1\leq j\leq m_{l}}[R_{j}^{(l)}:C^{(l)}]\right) \end{equation*} and we call $\mathsf{t}$ a \textit{tableau} of shape $(\mathbf{m},\mathbf{i})$. Note that up to sign, we can always assume that the entries in each row of $\mathsf{t}$ are increasing from left to right. If such is the case, then $\mathsf{t}$ is called a \textit{row-standard tableau}.
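For concreteness, the column sets of (\ref{column sets}) and the flagged minors $[R:C]$ defined above can be spelled out in a few lines of Python; the printed output reproduces the column sets of Example \ref{exii}. The function names are, of course, chosen only for this illustration.

\begin{verbatim}
import numpy as np

def column_sets(word):
    # C^(k) = s_{i_1} s_{i_2} ... s_{i_k} {1, ..., i_k}; the rightmost
    # transposition s_{i_k} acts first.
    sets = []
    for k in range(1, len(word) + 1):
        S = set(range(1, word[k - 1] + 1))
        for i in reversed(word[:k]):
            S = {i + 1 if a == i else i if a == i + 1 else a for a in S}
        sets.append(sorted(S))
    return sets

print(column_sets([1, 2, 1]))           # [[2], [2, 3], [3]]
print(column_sets([1, 2, 1, 3, 2, 1]))  # ..., [2, 3, 4], [3, 4], [4]

def minor(b, R, C):
    # The flagged minor [R:C] of an upper triangular matrix b (1-based sets).
    idx = np.ix_([r - 1 for r in R], [c - 1 for c in C])
    return np.linalg.det(b[idx])
\end{verbatim}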
\medskip From the realization of $Z_{\mathbf{i}}$ as a configuration space, \cite{Ma98} obtains explicit descriptions of the space of sections $H^{0}(Z_{\mathbf{i}},L_{\mathbf{m}})$ of the line bundle $L_{\mathbf{m}}$. We also note that bases for the spaces $H^{0}(Z_{\mathbf{i}},L_{\mathbf{m}})$ can be described in more general settings. See \cite{LLM02,LM98} for this direction. \begin{proposition}[\S 3 \protect\cite{Ma98}] \label{coordinate ring}For $\mathbf{m}=(m_{1},\cdots ,m_{l})\in \mathbb{Z}_{\geq 0}^{l}$, let $\mathsf{M}(\mathbf{m})$ be the space spanned by tableaux of shape $(\mathbf{m},\mathbf{i})$. Then, \begin{equation*} \mathsf{M}(\mathbf{m})\cong H^{0}(Z_{\mathbf{i}},L_{\mathbf{m}})\text{.} \end{equation*} \end{proposition} Then, as explained in \S 7 \cite{Ta01}, we can consider the section ring $\mathcal{R}_{\mathbf{i,m}}$ with respect to $L_{\mathbf{m}}$ as the $\mathbb{Z}_{\geq 0}$ graded algebra generated by tableaux of shape $(\mathbf{m},\mathbf{i})$: \begin{equation*} \mathcal{R}_{\mathbf{i,m}}=\bigoplus_{d\geq 0}\mathsf{M}(d\mathbf{m}) \end{equation*} where $d\mathbf{m}=(dm_{1},\cdots ,dm_{l})$. The multiplicative structure of this ring can be described by the \textit{straightening laws}, which are in our case essentially the Grosshans-Rota-Stein syzygies given in \cite{DRS76}. See \S 5 \cite{Ta01} for more detail. \medskip \subsection{Toric degeneration} \begin{definition} A row-standard \textit{tableau} $\mathsf{t}$ of shape $(\mathbf{m},\mathbf{i})$ is called a straight tableau if $\mathsf{t}$, as a $|\mathbf{m}|\times n$ tableau \begin{equation*} \mathsf{t}=[R_{1}^{(1)}:C^{(1)}]\cdot ...\cdot \lbrack R_{m_{l}}^{(l)}:C^{(l)}], \end{equation*} satisfies the following condition: for two cells $(i,k)$ and $(j,k)$ with $i<j$ in the same column, the entry in the cell $(i,k)$ may be strictly larger than the entry in $(j,k)$ only if the cell $(i,k-1)$ exists and contains an entry weakly larger than the one in the cell $(j,k)$. \end{definition} For example, the first three tableaux below can be parts of straight tableaux while the last one cannot, because $3$ in the second column is less than $4$ in the same column and the $1$ to the left of the $4$ is less than $3$: \begin{equation*} \young(\ \ 12,3456,\ \ 5\ ,\ 57\ ),\ \young(\ \ 12,3456,\ \ 3\ ,\ 67\ ),\ \young(\ \ 25,3457,\ \ 6\ ,\ 38\ ),\ \young(\ \ 25,1457,\ \ 7\ ,\ 38\ ) \end{equation*} \medskip A monomial order on the polynomial ring $\mathbb{C}[M_{n}]$ is called a \textit{diagonal term order} if the leading monomial of the determinant of any minor over $M_{n}$ is equal to the product of the diagonal elements. For a subring $\mathcal{R}$ of the polynomial ring we let $in(\mathcal{R})$ denote the algebra generated by the leading monomials $in(f)$ of all $f\in \mathcal{R}$ with respect to a given monomial order. Note that the collection of leading monomials forms a semigroup, therefore $in(\mathcal{R})$ is a semigroup algebra and $Spec(in(\mathcal{R}))$ forms an affine toric variety \cite{St95}. \begin{proposition}[\protect\cite{Ta01}] \label{STM basis}Let $(\mathbf{m},\mathbf{i})$ be a row-convex shape. i) Straight tableaux of shape $(\mathbf{m},\mathbf{i})$ form a $\mathbb{C}$-basis for the space $\mathsf{M}(\mathbf{m})$ spanned by \textit{tableaux} of shape $(\mathbf{m},\mathbf{i})$.
ii) Straight tableaux of shape $(\mathbf{m},\mathbf{i})$ form a SAGBI basis of the graded algebra $\mathcal{R}\subset \mathbb{C}[M_{n}]$ generated by \textit{tableaux} of shape $(\mathbf{m},\mathbf{i})$ with respect to any diagonal term order. \end{proposition} This directly shows that straight tableaux form a $\mathbb{C}$-basis of $\mathcal{R}_{\mathbf{i,m}}$, and the straight tableaux of shape $(\mathbf{m},\mathbf{i})$ form a SAGBI basis for $\mathcal{R}_{\mathbf{i,m}}$. Now we study a toric degeneration of $Z_{\mathbf{i}}$. The technique is basically the same as the one for the Grassmannians and the flag varieties given in \cite{St95,MS05}. \begin{theorem} \label{deformation}The Bott-Samelson variety $Z_{\mathbf{i}}$ can be flatly deformed into a toric variety. \end{theorem} \begin{proof} We show that there is a flat $\mathbb{C}[t]$-module $\mathcal{R}_{\mathbf{i,m}}^{t}$ whose general fiber is isomorphic to $\mathcal{R}_{\mathbf{i,m}}$ and whose special fiber is isomorphic to the semigroup ring $in(\mathcal{R}_{\mathbf{i,m}})$. The first statement of Lemma \ref{row-convex} shows that any tableau of shape $(\mathbf{m},\mathbf{i})$ with the column sets $\{C^{(1)},\cdots ,C^{(l)}\}$ given in (\ref{column sets}) is a row-convex tableau. Therefore, we can apply the above Proposition to $\mathcal{R}_{\mathbf{i,m}}$ to conclude that the set of straight tableaux of shape $(\mathbf{m},\mathbf{i})$ forms a SAGBI basis for the ring $\mathcal{R}_{\mathbf{i,m}}$ with respect to a diagonal term order. Then, from the existence of a finite SAGBI basis, by \cite{CHV96}, there exists a $\mathbb{Z}_{\geq 0}$ filtration $\{F_{\alpha }\}$ on $\mathcal{R}_{\mathbf{i,m}}$ such that the associated graded ring of the Rees algebra $\mathcal{R}_{\mathbf{i,m}}^{t}$ with respect to $\{F_{\alpha }\}$: \begin{equation*} \mathcal{R}_{\mathbf{i,m}}^{t}=\bigoplus_{\alpha \geq 0}F_{\alpha }(\mathcal{R}_{\mathbf{i,m}})t^{\alpha } \end{equation*} is isomorphic to $in(\mathcal{R}_{\mathbf{i,m}})$. Then, by the general property of the Rees algebra, $\mathcal{R}_{\mathbf{i,m}}^{t}$ is flat over $\mathbb{C}[t]$ with general fiber isomorphic to $\mathcal{R}_{\mathbf{i,m}}$ and special fiber isomorphic to the associated graded ring, which is $in(\mathcal{R}_{\mathbf{i,m}})$. \end{proof} \medskip \section{Three-dimensional example} In this section we will consider explicit examples of toric degenerations of a three-dimensional Bott-Samelson variety. As in (\ref{reduced word}), we will choose the word ${\mathbf{i}}=(121)$. Accordingly, let $P_1$ and $P_2$ be the following parabolic subgroups of $\rm{GL}_{3}(\mathbb{C})$: $$ P_1=\left(\begin{array}{ccc} * & * & * \\ * & * & * \\ 0 & 0 & * \end{array} \right) \ , \ \ \ P_2=\left(\begin{array}{ccc} * & * & * \\ 0 & * & * \\ 0 & * & * \end{array} \right) \ . $$ We also denote by ${\bar P}_1$ and ${\bar P}_2$ their closures in the space $M_3$ of $3\times 3$ matrices. Let $Z$ be the Bott-Samelson variety defined by $$ Z = P_1\times P_2 \times P_1 / B^3 , $$ where $B^3$ acts as before: $$ (p_1, p_2, p_3) . (b_1, b_2, b_3) = (p_1 b_1, b_1^{-1}p_2 b_2, b_2^{-1}p_3 b_3) \ . $$ The variety $Z$ can also be viewed as an invariant theory quotient of the product of the closures ${\bar P}_1\times {\bar P}_2 \times {\bar P}_1$ by the action of $B^3$ in the obvious way.
Let us denote the elements of the first copy of $P_1$ by
$$
p_1=\left(\begin{array}{ccc} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{array} \right) \ ,
$$
the elements of $P_2$ by
$$
p_2=\left(\begin{array}{ccc} b_{11} & b_{12} & b_{13} \\ 0 & b_{22} & b_{23} \\ 0 & b_{32} & b_{33} \end{array} \right) \ ,
$$
and the elements of the second copy of $P_1$ by
$$
p_3=\left(\begin{array}{ccc} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ 0 & 0 & c_{33} \end{array} \right) \ .
$$
The same notation will be used for the elements of their closures in $M_3$. Next, we describe a Pl\"ucker-type embedding of $Z$ into the product of three projective spaces:
$$
{\mathcal H}:={\rm Proj}(s_1, s_2)\times {\rm Proj}(r_{23}, r_{13}, r_{12})\times {\rm Proj}(q_1, q_2, q_3)\simeq {\mathbb C}{\mathbb P}^1\times {\mathbb C}{\mathbb P}^2\times {\mathbb C}{\mathbb P}^2\ .
$$
Let a point in $Z$ be represented by three matrices $(p_1, p_2, p_3)$ of the above form. Then we take $s_1=a_{11}$ and $s_2=a_{21}$; that is, $s_i$ is the $1\times 1$ minor of the matrix $p_1$ with column 1 and row $i$ (note that $s_3$ would be identically equal to zero, so we do not use it). Next, $r_{ij}$ is, up to sign, the $2\times 2$ minor of the matrix $p_1p_2$ with columns $1,2$ and rows $i,j$; we take $r_{13}$ with a minus sign, so that the Pl\"ucker relations below hold with all plus signs. Explicitly,
$$ r_{23}=a_{21}a_{33}b_{11}b_{32}, \ \ r_{13}=-a_{11}a_{33}b_{11}b_{32}, \ \ {\rm and } \ $$
$$ r_{12}=a_{11}b_{11}(a_{22}b_{22}+a_{23}b_{32})- a_{21}b_{11}(a_{12}b_{22}+a_{13}b_{32})\ . $$
Finally, $q_{i}$ is the $1\times 1$ minor of the matrix $p_1p_2p_3$ with column $1$ and row $i$:
$$ q_1 = a_{11}b_{11}c_{11}+(a_{11}b_{12}+a_{12}b_{22}+a_{13}b_{32})c_{21}, $$
$$ q_2 = a_{21}b_{11}c_{11}+(a_{21}b_{12}+a_{22}b_{22}+a_{23}b_{32})c_{21}, $$
$$ {\rm and} \ \ \ q_3 = a_{33}b_{32}c_{21}. $$
Therefore, $Z$ can be viewed as a subvariety of ${\mathcal H}$, the product of three projective spaces, defined by the following two homogeneous equations (or Pl\"ucker relations):
$$ s_1r_{23}+s_2r_{13}=0 \ \ \ {\rm and} \ \ \ q_1r_{23}+q_2r_{13}+q_3r_{12}=0\ . $$
\begin{proposition} The Hilbert polynomial of $Z$ is given by
$$ {\rm HP}_Z(s) = \frac{5s^3+11s^2+8s+2}{2}. $$
\label{propHP}
\end{proposition}
\begin{proof}
Let, as before, ${\mathcal H}={\mathbb C}{\mathbb P}^1\times {\mathbb C}{\mathbb P}^2 \times {\mathbb C}{\mathbb P}^2$ and let $\pi_1$, $\pi_2$ and $\pi_3$ stand for the projections onto the corresponding factors. Denote $L=\pi_1^*({\mathcal O}(1))$, $M_1=\pi_2^*({\mathcal O}(1))$, and $M_2=\pi_3^*({\mathcal O}(1))$. We will also denote by the same letters $L$, $M_1$, and $M_2$ the corresponding classes of divisors in the Chow ring of ${\mathcal H}$. Let $X$ be the element of the Chow ring of ${\mathcal H}$ corresponding to $Z$, and let $D = L+M_1+M_2$. For large enough integral values of $n$, the Hilbert polynomial ${\rm HP}_Z(n)$ coincides with ${\rm dim}(H^0(nD_{\vert Z}))$, which, due to vanishing, is the same as the Euler characteristic of $nD_{\vert Z}$. The Riemann-Roch theorem for smooth Fano threefolds asserts that \cite{IP99}
$$
\chi(nD_{\vert Z})=\frac{D^3_{\vert Z}}{6}n^3-\frac{D^2_{\vert Z}K_Z}{4}n^2+ \frac{D_{\vert Z}(K_Z^2+c_2(Z))}{12}n+1.
$$
Now, $X=(L+M_1)(M_1+M_2)$; therefore, by the adjunction formula we get $-K_Z=(L+M_1+2M_2)_{\vert {Z}}$ and hence $(L+M_1+2M_2)_{\vert Z}c_2(Z)=24$. To find $(M_2)_{\vert Z}c_2(Z)$, we use the same Riemann-Roch formula, but for $M_2$, noting that ${\rm dim}(H^0(M_2))=3$.
Finally, the intersection products satisfy $L^2=0$, $M_1^3=M_2^3=0$ and $LM_1^2M_2^2=1$, which leads to a straightforward computation of the required polynomial.
\end{proof}

Forgetting the first component, one can consider the projection:
$$ {\mathcal H}\to {\mathbb C}{\mathbb P}^2\times {\mathbb C}{\mathbb P}^2\ . $$
The image of $Z$ under this projection is naturally the 3-dimensional flag variety $F:={\rm Fl}_3$, sitting inside ${\mathbb C}{\mathbb P}^2\times {\mathbb C}{\mathbb P}^2$ as the zero set of the second Pl\"ucker relation. There are two naive ways to construct toric degenerations of $Z$. The first is to consider the family of varieties, parameterized by $\tau\in{\mathbb C}$, in which the second equation is modified to
$$ q_1r_{23}+q_2r_{13}+\tau q_3r_{12}=0\ . $$
One can easily see that the special toric fiber of this family, corresponding to $\tau=0$, is reducible, with two irreducible components: one, denoted by ${\mathcal G}$, is isomorphic to ${\mathbb C}{\mathbb P}^1\times {\mathbb C}{\mathbb P}^2$ and corresponds to $r_{23}=r_{13}=0$; the second, denoted by $D_3$, is a three-dimensional toric variety, which is actually non-singular. Combinatorially, the moment polytope for $D_3$ is a cube, drawn schematically in the figure below. (To simplify computations, we assumed that the members of the family are polarized by the invertible sheaf induced from ${\mathcal O}(1)\times {\mathcal O}(1)\times {\mathcal O}(1)$ on ${\mathcal H}$.)
\begin{center} \begin{picture}(140,43)(0,0) \put(50,3){ \unitlength=7mm \drawpolygon(0,0)(0,3)(2,3)(2,1) \drawline[AHnb=0](2,1)(5,4)(5,5)(3,5)(0,3) \drawline[AHnb=0](2,3)(5,5) \drawline[AHnb=0,dash={0.2}0](0,0)(3,3)(3,5) \drawline[AHnb=0,dash={0.2}0](3,3)(5,4) } \end{picture} FIGURE 1. \end{center}
\medskip
The fact that the special fiber of this flat family of varieties over ${\mathbb C}$ is reducible and is given by the union of two non-singular components is quite amusing. The intersection of these two components is a smooth two-dimensional toric variety, denoted by $K_2$, known as the Hirzebruch surface of degree one. One can compute the Hilbert polynomials (denoted ${\rm HP}$) of the irreducible components for the chosen polarization; these are known \cite{MS05} to coincide with the Ehrhart polynomials (denoted ${\rm EP}$) of their moment polytopes. We also give their Ehrhart series, ${\rm ES}$. Using a computer program \cite{Latte}, we have obtained:
$$ {\rm ES}(D_3) = \frac{3t^2+8t+1}{(1-t)^4}, \ \ \ {\rm EP}(D_3) = {\rm HP}(D_3) = 2s^3+5s^2+4s+1 = (s+1)^2(2s+1), $$
$$ {\rm ES}({\mathcal G}) = \frac{1+2t}{(1-t)^4}, \ \ \ {\rm EP}({\mathcal G}) = {\rm HP}({\mathcal G}) = \frac{s^3+4s^2+5s+2}{2} = \frac{(s+1)^2(s+2)}{2} $$
$$ {\rm ES}(K_2) = \frac{1+2t}{(1-t)^3}, \ \ \ {\rm EP}(K_2) = {\rm HP}(K_2) = \frac{3s^2+5s+2}{2} = \frac{(s+1)(3s+2)}{2}. $$
This allows us to check that
$$
{\rm HP}(Z) = {\rm HP}(D_3) + {\rm HP}({\mathcal G}) - {\rm HP}(K_2) = \frac{5s^3+11s^2+8s+2}{2} = \frac{(s+1)(5s^2+6s+2)}{2}.
$$
This fact was also verified, independently, using a software package \cite{Sing}, by representing $Z$ as a subvariety in ${\mathbb C}{\mathbb P}^{17}$ via the Segre embedding, defined by the following 95 equations, where $[a_1:\cdots :a_9:b_1:\cdots :b_9]$ are the homogeneous coordinates on ${\mathbb C}{\mathbb P}^{17}$:
$$ a_ib_j=a_jb_i,\ \ {\rm for} \ \ 1\le i < j \le 9, $$
$$ a_ka_l=a_ma_n, \ a_kb_l=a_mb_n, \ a_kb_l=b_ma_n, \ b_ka_l=a_mb_n, \ b_ka_l=b_ma_n, \ b_kb_l=b_mb_n $$
for the following nine choices of quadruples of indices $(k,l,m,n)$:
$$ (1,5,2,4), \ (1,6,3,4), \ (2,6,3,5), \ (1,8,2,7), \ (1,9,3,7), $$
$$ (2,9,3,8), \ (4,8,5,7), \ (4,9,6,7), \ {\rm and} \ (5,9,6,8), $$
and the last five:
$$ a_1+b_4=0, \ \ a_2 + b_5 = 0, \ \ a_3 + b_6 = 0, $$
$$ a_1+a_5+a_9 = 0, \ \ \ \ b_1+b_5+b_9 =0. $$
The second way to obtain a flat toric degeneration of $Z$ is to consider a different family of varieties inside ${\mathcal H}$, also parameterized by $\tau\in{\mathbb C}$ and given by the following two equations:
$$ s_1r_{23}+s_2r_{13}=0, \ \ \ {\rm and} \ \ \ q_1r_{23}+\tau q_2r_{13}+q_3r_{12}=0\ . $$
One can see that the special fiber of this flat family, corresponding to $\tau=0$, is a singular toric variety, denoted by $Y_3$, whose moment polytope is represented combinatorially by the picture drawn below:
\begin{center} \begin{picture}(140,43)(0,0) \put(50,3){ \unitlength=10mm \drawpolygon(0,0)(0,2)(2,2)(2,1)(1,0) \drawline[AHnb=0](1,0)(4,2)(5,3)(2,1) \drawline[AHnb=0](0,2)(3,3)(5,3)(2,2) \drawline[AHnb=0,dash={0.2}0](0,0)(3,2)(3,3) \drawline[AHnb=0,dash={0.2}0](3,2)(4,2) } \end{picture} FIGURE 2. \end{center}
\medskip
The Ehrhart series and the Ehrhart polynomial of the moment polytope of the special fiber, for the same previously chosen polarization, are given by
$$ {\rm ES}(Y_3) = \frac{5t^2+9t+1}{(1-t)^4}, \ \ \ {\rm EP}(Y_3) = \frac{5s^3+11s^2+8s+2}{2}. $$
Not surprisingly, again we see that ${\rm EP}(Y_3) = {\rm HP}(Z)$.
\medskip

\section{Standard Monomials}

In this section, we study in more detail the $\mathbb{C}$-basis of the space $\mathsf{M}(\mathbf{m})\cong H^{0}(Z_{\mathbf{i}},L_{\mathbf{m}})$ given by straight tableaux in Proposition \ref{STM basis}, and then its connection to the natural map from the Bott-Samelson variety to the flag variety. We also describe the leading monomials of tableaux in terms of integral points in $\mathbb{R}^N$. To simplify our notation, we keep using $l=n(n-1)/2$ and $p_{j}=j(j-1)/2$ for $2\leq j\leq n-1$. Also, fix an arbitrary multiplicity $\mathbf{m}=(m_{1},\cdots ,m_{l})\in \mathbb{Z}_{\geq 0}^{l}$ and recall that the word $\mathbf{i}$ of length $l$ is as defined in (\ref{reduced word}).

\subsection{Contra-tableaux}

\begin{definition}
A contra-tableau is a filling of a skew Young diagram $$(k,k,\cdots ,k)\backslash (\lambda _{1},\lambda _{2},\cdots )$$ with $k\geq \lambda _{1}\geq \lambda _{2}\geq \cdots \geq 0$ such that the entries in each column are weakly increasing from top to bottom and the entries in each row are strictly increasing from left to right.
\end{definition}

For example, the following is a contra-tableau of shape $(4,4,4,4,4)\backslash (3,3,3,2,1)$:
\begin{equation*}
\young(\ \ \ 1,\ \ \ 1,\ \ \ 2,\ \ 12,\ 134)
\end{equation*}
Recall that the usual semistandard tableaux can encode weight basis elements for irreducible polynomial representations of the general linear group.
Similarly, one can use contra-tableaux to encode weight vectors of the contragredient of an irreducible polynomial representation of the general linear group. Here, our goal is to decompose a straight tableau into contra-tableaux. First, we can decompose the shape $(\mathbf{m},\mathbf{i})$ into $n-1$ skew Young diagrams as follows. For each $j$ with $1\leq j\leq n-1$, let us set $\mathbf{m}(j)=(m_{1}^{\prime },\cdots ,m_{l}^{\prime })$ where $m_{i}^{\prime }=m_{i}$ for $p_{j}<i\leq p_{j+1}$ and $m_{i}^{\prime }=0$ otherwise. Then $\mathbf{m}=\mathbf{m}(1)+\cdots +\mathbf{m}(n-1)$.

\begin{example}
If $n=4$ and $\mathbf{m}=(1,1,\cdots ,1)\in \mathbb{Z}_{\geq 0}^{6}$, then $\left( \mathbf{m}(1),\mathbf{i}\right) $, $\left( \mathbf{m}(2),\mathbf{i}\right) $, $\left( \mathbf{m}(3),\mathbf{i}\right) $ respectively correspond to the shapes:
\begin{equation*}
\young(\ X\ \ ),\ \young(\ \ X\ ,\ XX\ ),\ \young(\ \ \ X,\ \ XX,\ XXX)
\end{equation*}
Note that this is equivalent to the decomposition of the shape $(\mathbf{m},\mathbf{i})$ given in Example \ref{exii} into maximal possible Young diagrams.
\end{example}

If $\mathbf{m}=(1,1,\cdots ,1)$, then from the second statement of Lemma \ref{row-convex}, the shape $\left( \mathbf{m}(j),\mathbf{i}\right) $ is a skew Young diagram $(j+1,j+1,\cdots ,j+1)\backslash (j,j-1,\cdots ,1)$ of length $j$. By repeating the $k$-th row $m_{p_{j}+k}$ times, we obtain a skew Young diagram of length $|\mathbf{m}(j)|$. Then, from the definition of straight tableaux, it is straightforward to check that every straight tableau in a skew diagram is a contra-tableau. See also Proposition 4.3 of \cite{Ta01}.

\begin{lemma}
\label{straight equal contra}For each $j$, every straight tableau of shape $\left( \mathbf{m}(j),\mathbf{i}\right) $ is a contra-tableau.
\end{lemma}

Note that this lemma shows that the basis of the space $\mathsf{M}(\mathbf{m}(j))\cong H^{0}(Z_{\mathbf{i}},L_{\mathbf{m}(j)})$ is simply given by contra-tableaux, and as a consequence we can obtain a description of elements in the section ring $\mathcal{R}_{\mathbf{i,m}}$ as products of contra-tableaux. That is, we have a natural projection
\begin{equation}
\mathsf{M}(\mathbf{m}(1))\otimes \cdots \otimes \mathsf{M}(\mathbf{m}(n-1))\rightarrow \mathsf{M}(\mathbf{m})  \label{projection}
\end{equation}
sending $\mathsf{t}_{1}\otimes \cdots \otimes \mathsf{t}_{n-1}$ to the product $\mathsf{t}_{1}\cdot ...\cdot \mathsf{t}_{n-1}\in \mathsf{M}(\mathbf{m})$, where $\mathsf{t}_{j}$ is a contra-tableau in $\mathsf{M}(\mathbf{m}(j))$ for each $j$. For example, if $n=4$ and $\mathbf{m}=(1,2,1,1,1,3)$, then the product map $\mathsf{t}_{1}\otimes \mathsf{t}_{2}\otimes \mathsf{t}_{3}\mapsto \mathsf{t}$ gives
\begin{equation}
\young(\ 1\ \ )\otimes \young(\ \ 2\ ,\ 13\ ,\ 23\ )\otimes \young(\ \ \ 1,\ \ \ 1,\ \ \ 2,\ \ 12,\ 134)\mapsto \young(\ \ \ 1,\ \ \ 1,\ \ \ 2,\ \ 12,\ 134,\ \ 2\ ,\ 13\ ,\ 23\ ,\ 1\ \ )  \label{product}
\end{equation}
Note that the product is not a straight tableau, but it can be expressed as a linear combination of straight tableaux in $\mathsf{M}(\mathbf{m})\subset \mathcal{R}_{\mathbf{i,m}}$ by successive application of the straightening laws mentioned after Proposition \ref{coordinate ring}.
\medskip

\subsection{Integral points in a cone}

From Proposition \ref{STM basis}, straight tableaux of shape $(\mathbf{m},\mathbf{i})$ form a SAGBI basis of the graded algebra $\mathcal{R}_{\mathbf{i,m}}$ with respect to a diagonal term order.
Since every tableau $\mathsf{t}$ in $\mathsf{M}(d\mathbf{m})\subset \mathcal{R}_{\mathbf{i,m}}$ is a product of determinants, it is easy to read off its leading monomial $in(\mathsf{t})$ with respect to a diagonal term order, i.e.,
\begin{equation*}
in(\mathsf{t})=\prod_{1\leq i,j\leq n}x_{ij}^{\alpha _{ij}}
\end{equation*}
where $\alpha _{ij}$ is the number of $i$'s appearing in the $j$-th column of the tableau $\mathsf{t}$. Since the $j$-th column contains entries less than or equal to $j$, we have $\alpha _{ij}=0$ for $i>j$, and, by looking at its exponent $(\alpha _{ij})$, we can identify $in(\mathsf{t})$ with an integral point in the cone $\mathbb{R}_{\geq 0}^{n(n+1)/2}$. Moreover, if the shape of $\mathsf{t}$ is fixed, then the sum $\alpha _{1j}+\alpha _{2j}+\cdots +\alpha _{jj}$ is fixed for each $j$. Therefore we can consider the collection of the leading monomials $in(\mathsf{t})$ of tableaux $\mathsf{t}$ of a fixed shape inside a convex polytope in $\mathbb{R}_{\geq 0}^{n(n-1)/2}$. For the leading monomial $in(\mathsf{t})=\prod_{1\leq i,j\leq n}x_{ij}^{\alpha _{ij}}$ of a tableau $\mathsf{t}$ of shape $\left( \mathbf{m},\mathbf{i}\right) $, we define its corresponding integral point $\mathsf{p}_{\mathsf{t}}$ in $\mathbb{R}_{\geq 0}^{n(n-1)/2}$ as follows:
\begin{equation*}
\mathsf{p}_{\mathsf{t}}=(\mathsf{p}^{(1)},\mathsf{p}^{(2)},\cdots ,\mathsf{p}^{(n)})
\end{equation*}
where $\mathsf{p}^{(k)}=(\mathsf{p}_{n}^{(k)},\mathsf{p}_{n-1}^{(k)},\cdots ,\mathsf{p}_{1}^{(k)})\in \mathbb{Z}_{\geq 0}^{n}$ for $1\leq k\leq n$ and
\begin{eqnarray*}
\mathsf{p}^{(n)} &=&(\alpha _{n,n},0,\cdots ,0); \\
\mathsf{p}^{(r-1)} &=&\mathsf{p}^{(r)}+(\alpha _{r-1,n},\alpha _{r-1,n-1},\cdots ,\alpha _{r-1,r-1},0,\cdots ,0)
\end{eqnarray*}
for $r\geq 2$. Note that for each $j$, $\mathsf{p}_{j}^{(1)}$ is equal to the number of non-empty cells in the $j$-th column of the shape $\left( \mathbf{m},\mathbf{i}\right)$. Hence, $\mathsf{p}^{(1)}$ is determined by the shape $\left( \mathbf{m},\mathbf{i}\right) $, and we can think of $(\mathsf{p}^{(2)},\cdots ,\mathsf{p}^{(n)})$ as an integral point in $\mathbb{R}_{\geq 0}^{n(n-1)/2}$.
\medskip
Note that if we transpose a contra-tableau and then reverse the order of its entries, we obtain a usual semistandard tableau. Then, via the well-known conversion procedure between semistandard tableaux and Gelfand-Tsetlin patterns, contra-tableaux can be related to Gelfand-Tsetlin patterns. In fact, our definition of the integral point $\mathsf{p}_{\mathsf{t}}$ corresponding to a tableau $\mathsf{t}$ is compatible with the tableau-pattern conversion procedure. Therefore, for $1\leq j\leq n-1$, if we take a straight tableau $\mathsf{t}$ of shape $(\mathbf{m}(j),\mathbf{i})$, then $\mathsf{t}$ is a contra-tableau, as we saw in Lemma \ref{straight equal contra}, and we can express the corresponding integral point $\mathsf{p}_{\mathsf{t}}$ in terms of a Gelfand-Tsetlin pattern. Instead of the set of straight tableaux, we can consider a somewhat larger generating set for $\mathcal{R}_{\mathbf{i,m}}$. Let us consider the image $\Pi $ of the projection $\pi $ given in (\ref{projection}):
\begin{equation*}
\Pi =\{\mathsf{t}_{1}\cdot ...\cdot \mathsf{t}_{n-1}:\mathsf{t}_{j}\text{ is a contra-tableau in }\mathsf{M}(\mathbf{m}(j))\text{ for }1\leq j\leq n-1\}
\end{equation*}
Since $\Pi $ contains all the straight tableaux of shape $\left( \mathbf{m},\mathbf{i}\right) $, $\Pi $ forms a SAGBI basis for $\mathcal{R}_{\mathbf{i,m}}$.
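For concreteness, the passage from a filling to the integral point $\mathsf{p}_{\mathsf{t}}$ can be sketched in a few lines of code (a minimal sketch in Python; the encoding of a tableau as (entry, column) pairs and the function name are ours). Applied to the contra-tableau $\mathsf{t}_{3}$ of the example below, it reproduces, up to trailing zeros, the third Gelfand-Tsetlin pattern displayed there.
\begin{verbatim}
from collections import defaultdict

def integral_point(cells, n):
    # cells: filled cells of a tableau t, given as (entry, column)
    # pairs with entries and columns in 1..n.  Returns the rows
    # p^(1), ..., p^(n), each written as (p_n, ..., p_1) as in the text.
    alpha = defaultdict(int)    # alpha[i, j] = number of i's in column j
    for entry, col in cells:
        alpha[entry, col] += 1
    rows = {n: [alpha[n, n]] + [0] * (n - 1)}
    for r in range(n, 1, -1):
        # p^(r-1) = p^(r) + (alpha_{r-1,n}, ..., alpha_{r-1,r-1}, 0, ...)
        incr = [alpha[r - 1, n - k] for k in range(n - r + 2)] + [0] * (r - 2)
        rows[r - 1] = [a + b for a, b in zip(rows[r], incr)]
    return [rows[k] for k in range(1, n + 1)]

# The contra-tableau t_3 from the example below:
t3 = [(1, 4), (1, 4), (2, 4), (1, 3), (2, 4), (1, 2), (3, 3), (4, 4)]
print(integral_point(t3, 4))
# [[5, 2, 1, 0], [3, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0]]
\end{verbatim}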
If we let $\mathsf{t}=\mathsf{t}_{1}\cdot ...\cdot \mathsf{t}_{n-1}$ and let $\mathsf{p}_{\mathsf{t}_{k}}$ be the integral point in $\mathbb{R}^{(k+1)(k+2)/2}$ corresponding to $\mathsf{t}_{k}$ for each $k$, then $\mathsf{p}_{\mathsf{t}}$ can be realized as the sum of the $\mathsf{p}_{\mathsf{t}_{k}}$, after padding the corresponding patterns with zeros to a common size. For example, if $\mathsf{t}_{1}$, $\mathsf{t}_{2}$, and $\mathsf{t}_{3}$ are given as
\begin{equation*}
\young(\ 1\ \ ),\young(\ \ 2\ ,\ 13\ ,\ 23\ ),\young(\ \ \ 1,\ \ \ 1,\ \ \ 2,\ \ 12,\ 134)
\end{equation*}
then the corresponding integral points $\mathsf{p}_{\mathsf{t}_{1}},\mathsf{p}_{\mathsf{t}_{2}}$, and $\mathsf{p}_{\mathsf{t}_{3}}$ can be realized as the following Gelfand-Tsetlin patterns
\begin{equation*}
\begin{array}{ccc}
1 &  & 0 \\
& 0 &
\end{array}
,
\begin{array}{ccccc}
3 &  & 2 &  & 0 \\
& 3 &  & 1 &  \\
&  & 2 &  &
\end{array}
,
\begin{array}{ccccccc}
5 &  & 2 &  & 1 &  & 0 \\
& 3 &  & 1 &  & 0 &  \\
&  & 1 &  & 1 &  &  \\
&  &  & 1 &  &  &
\end{array}
\end{equation*}
and $\mathsf{p}_{\mathsf{t}}$ corresponding to $in(\mathsf{t})=in(\mathsf{t}_{1})\cdot in(\mathsf{t}_{2})\cdot in(\mathsf{t}_{3})$ can be understood as the sum of $\mathsf{p}_{\mathsf{t}_{1}}$, $\mathsf{p}_{\mathsf{t}_{2}}$, and $\mathsf{p}_{\mathsf{t}_{3}}$:
\begin{equation*}
\begin{array}{ccccccc}
5 &  & 5 &  & 4 &  & 0 \\
& 3 &  & 4 &  & 1 &  \\
&  & 1 &  & 3 &  &  \\
&  &  & 1 &  &  &
\end{array}
\end{equation*}
where $\mathsf{t}=\mathsf{t}_{1}\cdot \mathsf{t}_{2}\cdot \mathsf{t}_{3}$ is as in (\ref{product}).

\subsection{Projection to $G/B$}

Now we briefly discuss the well-known natural map from the Bott-Samelson variety $Z_{\mathbf{i}}$ to the flag variety $G/B$ in terms of our basis description. The projection map (\ref{projection}) is compatible with the decomposition of a straight tableau of shape $(\mathbf{m},\mathbf{i})$ into contra-tableaux. More precisely, a straight tableau $\mathsf{t}$ of shape $\left( \mathbf{m},\mathbf{i}\right) $ can be factored into a product $\mathsf{t}_{1}\cdot ...\cdot \mathsf{t}_{n-1}$ of straight tableaux $\mathsf{t}_{j}$ of shape $(\mathbf{m}(j),\mathbf{i})$ for $1\leq j\leq n-1$. Then, by Lemma \ref{straight equal contra}, the $\mathsf{t}_{j}$ are contra-tableaux for all $j$. In particular, for each $1\leq j\leq n-2$, let us consider a straight tableau $\mathsf{t}_{j}^{0}$ of shape $\left( \mathbf{m}(j),\mathbf{i}\right) $ such that, for all $a$ and $b$ with $p_{j}+1\leq a\leq p_{j+1}$ and $1\leq b\leq m_{a}$, the row indices and the column indices are equal: $R_{b}^{(a)}=C^{(a)}$, i.e.,
\begin{eqnarray*}
\mathsf{t}_{1}^{0}&=&[C^{(2)}:C^{(2)}]^{m_{1}}; \\
\mathsf{t}_{j}^{0}&=&[C^{(p_{j}+1)}:C^{(p_{j}+1)}]^{m_{p_{j}+1}}\cdot \lbrack C^{(p_{j}+2)}:C^{(p_{j}+2)}]^{m_{p_{j}+2}}\cdot ...\cdot \lbrack C^{(p_{j+1})}:C^{(p_{j+1})}]^{m_{p_{j+1}}}
\end{eqnarray*}
for $2 \leq j \leq n-2$. This is equivalent to saying that $\mathsf{t}_{j}^{0}$ is obtained by filling in all the cells corresponding to the subshape $\left( \mathbf{m}(j),\mathbf{i}\right) $ of the shape $\left( \mathbf{m},\mathbf{i}\right) $ with the maximum possible entries.
Then for any contra-tableau $\mathsf{t}$ of shape $\left( \mathbf{m}(n-1),\mathbf{i}\right) $, we can find a straight tableau $\widehat{\mathsf{t}}$ of shape $\left( \mathbf{m},\mathbf{i}\right) $ such that
\begin{equation*}
\widehat{\mathsf{t}}=\left( \mathsf{t}_{1}^{0}\cdot ...\cdot \mathsf{t}_{n-2}^{0}\right) \cdot \mathsf{t}
\end{equation*}
and this provides the following injection:
\begin{eqnarray*}
H^{0}(G/B,L_{\mathbf{\lambda }}) &\rightarrow &\mathsf{M}(\mathbf{m}) \\
\mathsf{t} &\mapsto &\left( \mathsf{t}_{1}^{0}\cdot ...\cdot \mathsf{t}_{n-2}^{0}\right) \cdot \mathsf{t}
\end{eqnarray*}
where $H^{0}(G/B,L_{\mathbf{\lambda }})$ is the space of sections of the line bundle $L_{\mathbf{\lambda }}$ on $G/B$ and $\lambda $ is the dominant weight determined by the shape $\mathbf{m}(n-1)$ as a Young diagram. For example,
\begin{equation*}
\young(\ \ \ 1,\ \ \ 1,\ \ \ 2,\ \ 12,\ 134)\mapsto \young(\ \ \ 1,\ \ \ 1,\ \ \ 2,\ \ 12,\ 134,\ \ 3\ ,\ 23\ ,\ 23\ ,\ 2\ \ )
\end{equation*}
\medskip

\section*{Acknowledgements}
We thank Mikhail Kogan for his contribution at an early stage of the project. We also thank Ivan Cheltsov for help with Proposition \ref{propHP}.
\bigskip
\section{Introduction} The section of the Perseus arm visible from the Northern hemisphere is a Galactic region rich in young stars, with many OB associations and young open clusters \citep{hum1978}. Given its proximity to the Sun \citep[with typical distances ranging from 3~kpc at $l\sim100\degr$ to 2~kpc at $l\sim140\degr$;][]{choi2014}, it offers important advantages for the study of stellar populations over other Galactic regions. Located towards the outskirts of the Milky Way, it presents a moderately low reddening, which makes young blue stars easily accessible. In consequence, the high-mass population in Perseus has been widely studied for decades \citep[e.g.][]{hum1978}. Among the young stars in Perseus, there are also many red supergiant (RSG) stars \citep[$>70$;][]{hum1978,lev2005}. These stars possess moderately high masses ($\sim10$ to $\sim40\:$M$_{\odot}$), high luminosities \citep[$\log(L/L_{\sun})\sim4.5$\,--\,5.8;][]{hum1979}, low temperatures\footnote{The temperature scale of RSGs is still an open question. Over the last decade, different works (\citealt{lev2007}, \citealt{dav2013}, and Tabernero et al. submitted) have reported quite different temperature ranges. In all cases, though, the effective temperatures of these stars are well below 4\,500\:K.}, and late (K or M) spectral types (SpT). Although they have evolved off the main sequence, RSGs are still young stars \citep[with ages between $\sim8$ and $\sim25\:$Ma;][]{eks2013}. In consequence, they are associated with regions of recent stellar formation. The correct characterisation of the RSG phase plays a major role in the understanding of the evolution and final fate of high-mass stars \citep[e.g.][]{eks2013}. Despite this pivotal position, there are still many critical questions about these stars that remain without definitive answers; among them, the definition of a temperature scale and its relation to luminosity, as discussed in \citet[from now on Paper~II]{dor2016a}. To bring some light to these questions, we started an ambitious observational programme on RSGs, aimed at characterising their properties through statistically significant samples. In \citet[from now on, Paper~I]{gon2015} we presented the largest spectroscopic sample to date of cool supergiants\footnote{"Cool supergiants" is a denomination that includes all red and some yellow supergiants. In \citetalias{gon2015}, we showed that G-type SGs in the SMC (and presumably other low-metallicity environments) are part of the same population as RSGs. This is not the case in the Milky Way, but a few luminous G-type supergiants are part of our calibration sample. Thus, we use the term CSG to refer to the present sample. Despite this, the term RSG is used in many cases, in reference to the samples of K and M supergiants studied in previous works \citep[e.g.\ ][]{hum1978,lev2005}.} (CSGs) from the Magellanic Clouds (MCs). By combining this large sample with a substantial number of well-characterised Milky Way RSGs, in \citetalias{dor2016a} we could present firm statistical confirmation of a correlation between SpT and temperature, as well as of the relation between SpT, luminosity, and mass loss. Taking advantage of this sample, in \citet[Paper~III]{dor2016b} we developed an automated method for the identification of CSGs using the atomic and molecular features in the spectral range around the infrared Calcium Triplet (CaT). Finally, Tabernero et al.
(submitted) have calculated the effective temperatures for the sample in \citetalias{gon2015} and studied the temperature scales of the RSGs from the MCs. The present work is the next step in our study of CSGs. After analysing the CSG population from the MCs, we extend our study to the Milky Way population of CSGs. As many of the properties of a given CSG population (e.g.\ its typical SpT and temperatures) depend on its metallicity \citep[][]{eli1985}, we selected a specific region of the Galaxy where we can expect rather uniform (typically solar) metallicities: the section of the Perseus arm between $l=97\degr$ and~$150\degr$, with Galactocentric distances in the $\sim8$ to $10\:$kpc range. This region was chosen because of the many RSGs that were previously known and well characterised, but also because its CSGs have very low apparent magnitudes and can be observed efficiently with long-slit spectrographs. A systematic search for CSGs in an area that is considered well studied allows a good estimation of the incompleteness of previous samples. Moreover, as the extinction towards the Perseus arm is relatively low, its blue population is well known. In consequence, the relation between OB stars and CSGs can be studied. This analysis would be especially interesting, because many clusters and OB associations in the Perseus arm have total masses and ages consistent with the presence of CSGs. In this paper, we apply the methods developed in \citetalias{dor2016b} to a sample of candidate RSGs from the Perseus arm, to test their reliability and obtain a statistically significant sample of CSGs in the area. In addition, we develop a method to compute the likelihood that a given star is indeed a supergiant and estimate the reliability of our identification. We also study some basic properties of the CSG population at solar metallicities, such as its SpT distribution and its relation to the luminosity class (LC). In a future work, we will carry out a deeper study of the astrophysical properties of the CSG sample found here, analysing its spatial and kinematic distributions, as well as its connection to the known population of high-mass stars close to the main sequence. \section{Observations and measurements} \label{Per_arm} \subsection{Target selection} \label{targ_sel} To identify RSG candidates in the Perseus arm, we performed a comprehensive photometric search in the Galactic Plane ($b=+6\degr$ to $-6\degr$, and $l=97\degr$ to~$150\degr$). We used as a guide the works of \cite{hum1970,hum1978}. The selection is the result of the following steps: \begin{itemize} \item From \cite{hum1978} we selected those regions with detected RSGs and distance moduli (DM) consistent with membership in the Perseus arm. \item Using these DM, along with the measured $A_{\mathrm{V}}$, we selected from 2MASS those sources with $K$-band magnitudes bright enough to be RSGs, assuming a lower limit for their intrinsic brightness at $M_{K}=-5$. This may seem a very low limit, as for example in \citetalias{gon2015} there are no CSGs fainter than $M_\mathrm{K}\sim-7$, but it allows for large errors in DM and/or extinction while keeping the CSG candidate sample as complete as possible.
This step gets rid of most of the undesired foreground and background populations, as the expected density profile of the Galaxy along this line of sight allows us to adopt a low luminosity threshold without risking too much contamination (more distant RSGs will likely also be included, but they are expected to be rare in the outer reaches of the Galaxy and will be of interest for future studies). This leaves only nearby dwarfs and giants with types later than M3 as the main interlopers. \item The filtered sample was then cross-correlated with well-known catalogues of optical photometry, such as USNO-B1 \citep{mon2003} and UCAC3 \citep{zac2010}, obtaining $I$~band magnitudes and proper motions. Candidates are required to have $(I-K_{\textrm{S}})_{0} > 2$ (roughly, the colour of a K0 star) and proper motions similar to those of the blue and red supergiants already known in the field. This step cleans the sample of most of the foreground stars, as they have larger proper motions. \item The remaining catalogue was then submitted to SIMBAD and all the stars with confirmed SpTs were removed, although we kept $51$ previously studied RSGs for several reasons: to check for spectral variations, to test the efficiency of our methods, and to provide a comparison sample. In fact, 43 of these objects, with reliable SpTs or marked as MK standards, were used in the calibration sample of \citetalias{dor2016b}. In consequence, we are not considering these 43 SGs as part of the test sample, but we include them to calculate the efficiency of the photometric selection in Sect.~\ref{phot_eff}. \end{itemize} \subsection{Observations} \label{obs} The targets were observed during two different campaigns. The first one took place in 2011, on the nights of October 16th, 17th, and 18th. The second campaign was carried out in 2012, from September 3rd to 7th. We used the Intermediate Dispersion Spectrograph (IDS), mounted on the 2.5~m Isaac Newton Telescope (INT) in La Palma (Spain). We used the \textit{Red}+2~CCD with its 4096-pixel axis along the wavelength direction. The grating employed was R1200R, which covers an unvignetted spectral range $572\:$\AA{} wide, centred on $8500\:$\AA{} (i.e.\ the spectral region around the infrared Calcium Triplet, CaT). This configuration, together with a slit width of $1^{\prime\prime}$, provides a resolving power of $R\sim10\,500$ in the spectral region observed. This $R$ is very similar to the resolution of the data used in \citetalias{gon2015} ($R\sim11\,000$). The reduction was carried out in the standard manner, using the {\sc IRAF} facility\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation}. In total, we observed 637 unique targets, 102 in 2011 and 535 in 2012, without any overlap between epochs. As discussed above, 43 of them are CSGs with well-determined SpTs (all but one observed in the 2012 run) that were included in the calibration sample of \citetalias{dor2016b} (see appendix~B in that work). These objects are not considered part of the Perseus sample studied here. This leaves 594 targets in our sample, which are detailed in Table~\ref{cat_perseo}. \subsection{Manual classification and spectral measurements} \label{clas_meas} We performed a visual classification of all the stars in the sample, using the classical criteria for the CaT spectral region explained in \cite{neg2012}.
All the carbon stars found (46) were marked and removed from later calculations. Thus, we do not use them in the present work, but they are included in our complete catalogue (see Table~\ref{cat_perseo}). Without the carbon stars, our sample has 548 targets. For the analysis of our sample, we used the principal component analysis (PCA) method described in \citetalias{dor2016b}. This method begins with the automated measurement of the main spectral features in the CaT spectral region. We measured all the features needed to calculate the principal components (PCs) of our stars (i.e.\ those marked as the shortened input list in table~C.1 of \citetalias{dor2016b}). The method used to measure these features is the same as for the calibration sample in \citetalias{dor2016b}. Although the resolution of our sample is not exactly the same as that of the calibration sample, it is close enough not to introduce any significant difference in the result, as explained in \citetalias{dor2016b}. Finally, we linearly combined the PCA coefficients (tables~D.1 and D.2 in \citetalias{dor2016b}) with the spectral measurements of each star in our sample, obtaining their corresponding PCs. We also calculated their uncertainties, propagating the uncertainties of the EWs and PC coefficients through the linear combination. \section{Analysis} \label{analysis} \subsection{Estimating the probability of being a CSG} \label{individual_prob} In \citetalias{dor2016b} we revisited the main criteria classically used to identify RSGs, discussing the advantages and limitations of each one. We also proposed an original method, based on the PCs calculated through a large calibration sample and the use of Support Vector Machines (SVM). All the classical criteria, as well as the PCA method, use boundaries between the SGs and non-SGs as separators (our method uses many boundaries in a multidimensional space, but it is qualitatively the same in concept). Thus, they provide a binary classification for the targets (each of them is classified as either SG or non-SG), but without any direct estimation of the reliability of their classifications. In \citetalias{dor2016b} we also defined two useful concepts for our analysis: efficiency and contamination. Efficiency is the fraction of all SGs that are identified as such by a given criterion, while contamination is the fraction of the stars selected as SGs by a given criterion that are not really SGs. The efficiencies and contaminations obtained for the calibration sample are based on the statistics of the whole sample, and give a good idea of the reliability of each method when it is applied to a large number of candidates. However, they are not a good measurement of the reliability of the individual classification of each target: the result is the same for a star that lies close to the boundary as for one that is far away from it. In consequence, we wanted to measure the reliability of each individual identification. For this, we used a Monte Carlo process that delivers the individual probability of each target being a SG ($P(\mathrm{SG})$). We detail the process and the results for the calibration sample in Section~\ref{cal_prob} below. Later, after testing the method on the calibration sample, we calculate the probabilities for the test sample of this work in Section~\ref{prob_per}.
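In essence, the procedure detailed in the next subsection reduces to the following resampling loop (a minimal sketch in Python; \texttt{classify} stands for any of the binary criteria discussed below, and all names and numbers are our own illustrative choices):
\begin{verbatim}
import numpy as np

def p_sg(values, errors, classify, n_draws=1000, seed=0):
    # Monte Carlo estimate of P(SG) for one target: values/errors are
    # the measured variables of a criterion and their uncertainties;
    # classify is a boolean function implementing the criterion.
    rng = np.random.default_rng(seed)
    hits = sum(bool(classify(rng.normal(values, errors)))
               for _ in range(n_draws))
    return hits / n_draws

# Example with the CaT criterion (sum of the EWs of the three Ca lines
# equal to or higher than 9 Angstrom); the input values are made up:
cat = lambda ews: ews.sum() >= 9.0
print(p_sg(np.array([3.0, 3.2, 2.9]), np.array([0.2, 0.2, 0.2]), cat))
\end{verbatim}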
\subsubsection{Calculation} \label{cal_prob} For each of the three classification methods described in the following paragraphs, we estimated individual probabilities through a Monte Carlo process applied to each target in the calibration sample from \citetalias{dor2016b}. We took the variables needed for each method and their errors, and we drew a new value for each variable from a random normal distribution, with the original measurement as centre and the error as its standard deviation. For each target, we sampled 1\,000 draws, and so we obtained 1\,000 different sets of derived variables. To these we applied the corresponding classification methods, and checked in how many draws the target was classified as a SG. The $P(\mathrm{SG})_{\mathrm{method}}$ of a target is the fraction of realizations which resulted in a positive identification. For what we call the PCA method ($P(\mathrm{SG})_{\mathrm{PCA}}$), we used the first 15 PCs (which contain 98\% of the accumulated variance), and the SVM calculation defined in \citetalias{dor2016b} (using a putative boundary at M0), obtaining the $P(\mathrm{SG})_{\mathrm{PCA}}$ for each target. The results of this procedure are shown in Fig.~\ref{PC1_PC3_prob}. We also calculated $P(\mathrm{SG})_{\mathrm{CaT}}$ for the criterion based on the strength of the CaT (a target is identified as a SG if the sum of the EWs of its three Ca lines is equal to or higher than $9\:$\AA{}), and $P(\mathrm{SG})_{\mathrm{Ti/Fe}}$ for the Ti/Fe method (which uses as boundary the line $\mathrm{EW}(8514.1)=0.37\cdot\mathrm{EW}(8518.1)+0.388$ in the Fe\,{\sc{i}}~$8514\:$\AA{} vs.\ Ti\,{\sc{i}}~$8518\:$\AA{} diagram). The results are shown in Figs.~\ref{CaT_prob} and~\ref{Ti_Fe_prob}. The other classical criteria considered in \citetalias{dor2016b}, based on the strength of the blend at $8468$\:\AA{} and on the EWs of only the two strongest lines of the CaT, have not been used here because of their low efficiency or high contamination. \begin{figure*} \centering \includegraphics[trim=1cm 0.5cm 2.4cm 1.2cm,clip,width=\columnwidth]{PC1_PC3_reduc_b.pdf} \includegraphics[trim=1cm 0.5cm 2.4cm 1.2cm,clip,width=\columnwidth]{PC1_PC3_prob.pdf} \caption{PC1 versus PC3 diagram for the calibration sample. The shapes indicate their origin: circles are from the SMC survey, squares are from the LMC survey, diamonds are Galactic standard stars, and inverted triangles are the stars from the Perseus arm survey used as part of the calibration sample (see Section~\ref{clas_meas}). The cross indicates the median uncertainties, which have been calculated by propagating the uncertainties through the linear combination of the input data (EWs and bandheads) with the coefficients calculated in \citetalias{dor2016b}. {\bf Left (\ref{PC1_PC3_prob}a):} The colour indicates LC (identical to figure~7b in \citetalias{dor2016b}). {\bf Right (\ref{PC1_PC3_prob}b):} The colour indicates the probability of being a SG (see~\ref{cal_prob}).} \label{PC1_PC3_prob} \end{figure*} \begin{figure*} \centering \includegraphics[trim=1cm 0.5cm 2.5cm 1.2cm,clip,width=\columnwidth]{TiO_CaT.pdf} \includegraphics[trim=1cm 0.4cm 2.4cm 1.2cm,clip,width=\columnwidth]{CaT_TiO_prob.pdf} \caption{Depth of the TiO bandhead at $8859$\:\AA{} versus the total equivalent width of the Calcium Triplet ($8498$\:\AA{}, $8542$\:\AA{}, and $8662\:$\AA{}), for the calibration sample.
The strength of the TiO~$8859\:$~\AA{} bandhead is simply an indicator of the spectral sequence for early- to mid-M stars (see Section~4.3.4 in \citetalias{dor2016b}) and is included here only to display the measurements in a 2D graph, so that the CaT criterion is easily visualised. Symbol shapes are the same as in Fig.~\ref{PC1_PC3_prob}. The black cross indicates the median uncertainties. In these panels the probability of being a SG (see~\ref{individual_prob}) can be compared to the actual LC classification. {\bf Left (\ref{CaT_prob}a):} The colour indicates LC. {\bf Right (\ref{CaT_prob}b):} The colour indicates the probability of being a SG (see~\ref{cal_prob}). } \label{CaT_prob} \end{figure*} \begin{figure*} \centering \includegraphics[trim=0.80cm 0.4cm 2.4cm 1.2cm,clip,width=\columnwidth]{Ti_Fe.pdf} \includegraphics[trim=0.80cm 0.4cm 2.4cm 1.2cm,clip,width=\columnwidth]{Ti_Fe_prob.pdf} \caption{EWs of the lines Fe\,{\sc{i}}~$8514\:$\AA{} and Ti\,{\sc{i}}~$8518\:$\AA{} for the calibration sample. Symbol shapes are the same as in Fig.~\ref{PC1_PC3_prob}. The cross indicates the median uncertainties. In these panels the probability of being a SG (see~\ref{individual_prob}) can be compared with the actual LC classification. {\bf Left (\ref{Ti_Fe_prob}a):} The colour indicates LC (equivalent to fig.~12b in \citetalias{dor2016b}). {\bf Right (\ref{Ti_Fe_prob}b):} The colour indicates the probability of being a SG (see~\ref{cal_prob}). } \label{Ti_Fe_prob} \end{figure*} \subsubsection{Identification based on individual probabilities} \label{ident_indiv_prob} With the classical criteria studied (those based on the CaT and on the Ti/Fe ratio), a large fraction of the SGs in the sample ($>0.85$ and $>0.70$, respectively) have $P(\mathrm{SG})=1$, and most non-SGs have $P(\mathrm{SG})=0$. Only those stars close to the boundary used by these methods present intermediate values of $P(\mathrm{SG})$. Since the boundaries between SGs and non-SGs in these diagrams are straight lines, a given star can be identified as a SG if it has $P(\mathrm{SG})\geq0.5$ -- this is equivalent to the simple assignment to one of the two categories. On the other hand, in the PCA method there are not many targets with $P(\mathrm{SG})$ equal to $1$ or to $0$. This is because the PCA uses many boundaries in the multidimensional space of the PCs, not a single boundary in a two-dimensional diagram, as is the case for the classical criteria. Thus, it is more difficult for a target to stay far away from every boundary, and the probabilities tend to take intermediate values. To illustrate this, and also to evaluate the application of this method to the identification of SGs, we calculated how many targets have their individual probability $P_{i}$ equal to or higher than a given $P(\mathrm{SG})$ value. As the SGs from each galaxy in the calibration sample have different typical SpTs (\citealt{lev2013}; \citetalias{dor2016a}), we performed this calculation for six different subsamples taken from the calibration sample: SGs from the SMC, from the LMC, from the MW, all SGs, all non-SGs, and the whole sample. We present the results for each of these subsamples as fractions ($F(P_{i}\geq P(\mathrm{SG}))$) with respect to their corresponding total size, in Figs.~\ref{all_pca}, \ref{all_cat}, and~\ref{all_ti_fe}. For all three classification criteria, the SGs from both MCs present very similar behaviours, but the SGs from the MW present slightly lower probabilities.
This small difference is likely due to the lower efficiency of all criteria towards later subtypes, as it is well known that SGs in the MW tend to have later subtypes than those in the MCs \citep{lev2013}. \begin{figure} \centering \includegraphics[trim=0.8cm 0.3cm 1.2cm 1.2cm,clip,width=\columnwidth]{all_pca.pdf} \caption{Fraction of the calibration sample that has a probability of being a SG (calculated through the PCA method) equal to or higher than the corresponding $x$-axis value. The colours indicate the subsample: black for the whole sample, red for non-SGs, blue for all SGs, magenta for SMC SGs, cyan for LMC SGs, and green for MW SGs. Each fraction is calculated with respect to the size of its own subsample.} \label{all_pca} \end{figure} \begin{figure} \centering \includegraphics[trim=0.8cm 0.3cm 1.2cm 1.2cm,clip,width=\columnwidth]{all_cat.pdf} \caption{Fraction of the calibration sample that has a probability of being a SG (calculated through the CaT method) equal to or higher than the corresponding $x$-axis value. The colours indicate the subsample, as explained in Fig.~\ref{all_pca}. Each fraction is calculated with respect to the size of its own subsample.} \label{all_cat} \end{figure} \begin{figure} \centering \includegraphics[trim=0.8cm 0.3cm 1.2cm 1.2cm,clip,width=\columnwidth]{all_ti_fe.pdf} \caption{Fraction of the calibration sample that has a probability of being a SG (calculated through the ratio of the Fe\,{\sc{i}}~$8514\:$\AA{} to Ti\,{\sc{i}}~$8518\:$\AA{} lines) equal to or higher than the corresponding $x$-axis value. The colours indicate the subsample, as explained in Fig.~\ref{all_pca}. Each fraction is calculated with respect to the size of its own subsample.} \label{all_ti_fe} \end{figure} The CaT and the Ti/Fe criteria result in a large fraction of SGs with high values of $P(\mathrm{SG})$, but there are non-SGs with probabilities as high as $P(\mathrm{SG})=1$. Thus, these methods provide a quick way to identify most SGs in the sample, but at the price of a significant contamination. Of these two methods, the CaT one is less strict, finding more SGs, but also including more non-SGs with high $P(\mathrm{SG})$ values. The PCA method finds a very small fraction of SGs with $P(\mathrm{SG})>0.9$ (and this fraction is significantly higher for SMC SGs than for MW ones, as can be seen in Fig.~\ref{all_pca}). However, non-SGs present significantly lower values of $P(\mathrm{SG})$, with none of them having $P(\mathrm{SG})>0.75$. For this value, the fraction of SGs identified is about $0.90\pm0.04$ ($\sim0.80\pm0.13$ for the SGs from the MW). Therefore, using this value as a threshold, the vast majority of SGs can be identified without any contamination. In addition, it is also possible to identify a group of likely SGs with a relatively low contamination, by taking the targets whose $P(\mathrm{SG})$ lies within the interval between $P(\mathrm{SG})=0.75$ and a lower limit set at convenience (depending on the level of contamination that may be considered acceptable). For a new sample, such as the Perseus arm sample in this paper, it is possible to estimate the value of this lower limit of $P(\mathrm{SG})$ that results in an optimal selection of potential SGs. In such a sample, the only information available will be the shape of the $P(\mathrm{SG})$ fraction curve (the black line in our figures).
This curve, however, will always have an inflexion point at the $P(\mathrm{SG})$ value where most SGs have already been selected, while most non-SGs have lower values of $P(\mathrm{SG})$. Thus, from this point towards lower probabilities, the addition of extra targets to the selection becomes dominated by non-SGs. Therefore, this inflexion point can be used as a lower boundary for the group of potential SGs, and can be easily estimated for any sample under study, as we do for the Perseus sample in the next Section. In the calibration sample, the inflexion point is at $P(\mathrm{SG})\sim0.60$. Taking this value as a lower boundary, the efficiency of the resultant selection is higher than $0.95\pm0.04$ ($\sim0.90\pm0.13$ for SGs from the MW), while the contamination is only $0.03\pm0.04$ ($0.08\pm0.13$ in the case of the MW sample). Note that the contaminations were calculated with respect to the total number of stars tagged as SGs, i.e.\ all those having $P(\mathrm{SG})\geq0.60$. For similar efficiencies in the CaT and Ti/Fe ratio criteria, the contaminations are slightly higher ($\sim0.08\pm0.04$ in both cases). These values become slightly worse in the case of MW SGs, with contaminations of $0.17\pm0.13$ for the Ti/Fe ratio criterion and $0.20\pm0.13$ for the CaT one. In \citetalias{dor2016b} we found that the PCA method provides a higher-quality way to identify SGs than the other two, because it has a significantly lower contamination. In this work, we found another advantage: the possibility of identifying a large fraction of SGs without any contamination. \subsection{Probabilities for the Perseus sample} \label{prob_per} Before the analysis of our Perseus sample, we must stress that the SGs from the MW typically have M subtypes. We may thus expect our sample to be dominated by these subtypes. Moreover, most of the interlopers found in the manual classification are red giants with M types. In consequence, the diagrams obtained for the Perseus sample have their datapoints concentrated in the regions typical of M-type stars, and look quite different from the distributions seen in the calibration sample (see Figs.~\ref{PC1_PC3_prob}, \ref{CaT_prob}, and \ref{Ti_Fe_prob}), whose SpT range spans from G0 to late-M subtypes. For further details about the calibration sample and its SpT distribution, see \citetalias{dor2016b} and figs.~7a, 9, and 12a therein. We calculated the individual probabilities of being a SG for each target in the Perseus sample, following the same method described for the calibration sample (Section~\ref{individual_prob}). Using the PCs previously obtained for our targets, $P(\mathrm{SG})_{\mathrm{PCA}}$ was calculated through a Monte Carlo process (generating 1\,000 new sets of PCs per target). The results are given in Table~\ref{cat_perseo} and represented in a PC1 versus PC3 diagram in Fig.~\ref{PC1_PC3_prob2}. \begin{figure} \centering \includegraphics[trim=1cm 0.3cm 2.3cm 1.2cm,clip,width=\columnwidth]{PC1_PC3_prob_per.pdf} \caption{PC1 versus PC3 diagram for the Perseus sample. The shapes indicate the epoch: circles for 2011, squares for 2012. The black cross indicates the median uncertainties, which have been calculated by propagating the uncertainties through the linear combination of the input data (EWs and bandheads) with the coefficients calculated in \citetalias{dor2016b}. The colour indicates $P(\mathrm{SG})_{\mathrm{PCA}}$. The plot is shown at the same scale as Fig.~\ref{PC1_PC3_prob}, to ease comparison.
The differences in the target distribution with respect to the calibration sample are due to the different ranges of spectral types.} \label{PC1_PC3_prob2} \end{figure} \begin{figure} \centering \includegraphics[trim=0.8cm 0.2cm 1.2cm 1.2cm,clip,width=\columnwidth]{prob_pca_perseo.pdf} \caption{Fraction of the Perseus sample that has a probability of being a SG (calculated through the PCA method) equal to or higher than the corresponding $x$-axis value.} \label{prob_pca} \end{figure} Although the PCA method provides significantly better results than the classical criteria, we also calculated the probabilities for them (CaT and Ti/Fe). We include these criteria because they are useful for a quick estimate despite their limitations. In addition, this is the first time that these criteria are systematically applied to a very large sample at solar metallicity: more than 500 targets, instead of the $\sim100$ MW stars from the calibration sample. The results are given in Table~\ref{cat_perseo}, and presented in Figs.~\ref{TiO_CaT_prob} and \ref{Ti_Fe_prob2}. \begin{figure} \centering \includegraphics[trim=1cm 0.4cm 2.3cm 1.2cm,clip,width=\columnwidth]{TiO_CaT_prob_per.pdf} \caption{Depth of the TiO bandhead at $8859$\:\AA{} with respect to the sum of the EWs of the CaT lines. The shapes indicate the epoch: circles for 2011, squares for 2012. The black cross indicates the median uncertainties. The colour indicates $P(\mathrm{SG})_{\mathrm{CaT}}$. Note again the difference in SpT distribution with respect to the calibration sample (Fig.~\ref{CaT_prob}).} \label{TiO_CaT_prob} \end{figure} \begin{figure} \centering \includegraphics[trim=0.8cm 0.4cm 2.3cm 1.2cm,clip,width=\columnwidth]{Ti_Fe_prob_per.pdf} \caption{EWs of the Fe\,{\sc{i}}~$8514\:$\AA{} and Ti\,{\sc{i}}~$8518\:$\AA{} lines. The shapes indicate the epoch: circles for 2011, squares for 2012. The black cross indicates the median uncertainties. The colour indicates $P(\mathrm{SG})_{\mathrm{Ti/Fe}}$. Comparison with Fig.~\ref{Ti_Fe_prob} highlights the lack of stars with G and K spectral types.} \label{Ti_Fe_prob2} \end{figure} \section{Results} \subsection{Supergiants identified} \label{SG_ident} When we studied the distribution of $P(\mathrm{SG})_{\mathrm{PCA}}$ among the components of the calibration sample, we found that only true SGs present values higher than $P(\mathrm{SG})_{\mathrm{PCA}}=0.75$ (see Section~\ref{ident_indiv_prob}). Thus, we were able to obtain a group of SGs a priori free from any non-SG (the ``reliable SGs" set). In addition, it is possible to define an interval of probabilities between $P(\mathrm{SG})_{\mathrm{PCA}}=0.75$ and a lower limit that increases the selection of SGs while keeping the contamination very low (the ``probable SGs" set). The optimal lower limits for the Galactic samples were selected through the diagram shown in Fig.~\ref{prob_pca}, by estimating the inflexion point of the corresponding curve. For the Perseus sample we estimated it at $P(\mathrm{SG})_{\mathrm{PCA}}\sim0.55$. The number of SGs found by these cuts is indicated in Table~\ref{prob_pca_tabla}. \begin{table} \caption{Number of targets tagged as ``reliable SGs" or ``probable SGs" (see~\ref{SG_ident}) through the analysis of $P(\mathrm{SG})_{\mathrm{PCA}}$. The luminosity class was assigned through the manual classification. We also show the fraction that these groups represent with respect to the total number of targets in the sample (594).
The 2-sigma uncertainties for the given fractions are equal to $1/\sqrt{n}$, where $n$ is the total number of targets. Thus, the uncertainty of both fractions is equal to $\pm0.04$.} \label{prob_pca_tabla} \centering \begin{tabular}{c c c | c c c} \hline\hline \noalign{\smallskip} \multicolumn{3}{c |}{Number}&\multicolumn{3}{c}{Fraction}\\ Reliable&Probable&&Reliable&Probable&\\ SGs&SGs&Total&SGs&SGs&Total\\ \noalign{\smallskip} \hline \noalign{\smallskip} 116&75&191&$0.20$&$0.13$&$0.33$\\ \noalign{\smallskip} \hline \end{tabular} \end{table} Classical methods are based on a linear boundary in a two-dimensional space. In consequence, when curves of $P(\mathrm{SG})$ are plotted for them (see Section~\ref{ident_indiv_prob}), there is no hint of a threshold value for ``reliable SGs" as in the case of $P(\mathrm{SG})_{\mathrm{PCA}}$. Thus, the only reasonable minimum value, given the two-dimensional nature of the boundary, is $P(\mathrm{SG})=0.5$. The number of targets identified as SGs is given in Table~\ref{SG_class}. \begin{table} \caption{Number of SGs found by the different methods, and the fraction that they represent with respect to the total number of targets observed (594). For the classical criteria, we used a threshold of $P(\mathrm{SG})=0.5$; for the PCA method, we adopted a threshold of $P(\mathrm{SG})=0.55$ (see Sect.~\ref{prob_per}). The 2-sigma uncertainties of the fractions are equal to $1/\sqrt{n}$, where $n$ is the total number of targets.} \label{SG_class} \centering \begin{tabular}{c c c } \hline\hline \noalign{\smallskip} Criterion&Number of SGs&Fraction\\ \noalign{\smallskip} \hline \noalign{\smallskip} CaT&304&$0.51\pm0.04$\\ Ti/Fe&238&$0.40\pm0.04$\\ PCA&193&$0.32\pm0.04$\\ \noalign{\smallskip} \hline \end{tabular} \end{table} The targets tagged as SGs through $P(\mathrm{SG})_{\mathrm{PCA}}$ represent a significant fraction (almost one third) of the Perseus sample. Moreover, most of them ($\sim61$\%) are tagged as ``reliable SGs"; we can thus consider this group, with good confidence, free of any interloper. The number of SGs found through the PCA method is, however, significantly lower than the numbers found through the CaT and Ti/Fe criteria. We must be cautious with the results obtained using these methods, as their contaminations were higher ($0.17\pm0.13$ for Ti/Fe and $0.20\pm0.13$ for CaT) than for the PCA ($0.08\pm0.13$) among MW stars in the calibration sample (see \citetalias{dor2016b}). The difference in the expected contamination is not enough to explain the number of stars tagged as SGs, but it seems clear that the higher the contamination of a method, the larger the number of stars it identifies as SGs. Moreover, we have to take into account that the Galactic set from the calibration sample is limited in two ways. Firstly, the subsample was relatively small, which causes high uncertainties in our fractions ($\pm0.13$). Secondly, this sample is not comparable to any observed sample, because it was intentionally created by assembling a similar number of well-known SGs and non-SGs. Thus, it will not be at all representative in terms of the number of non-SG stars that one may expect to find as interlopers when using photometric criteria to select SG candidates in the Galactic Plane. In view of these limitations, to study the efficiency and contamination of our methods in the Perseus sample, we resort to a direct calculation in the next Section.
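For reference, the bookkeeping behind Table~\ref{prob_pca_tabla} can be sketched as follows (a minimal sketch in Python; the function name is ours, and the cuts are the values adopted above):
\begin{verbatim}
import numpy as np

def split_by_probability(p_pca, reliable_cut=0.75, probable_cut=0.55):
    # 'Reliable SGs' lie above the contamination-free cut; 'probable
    # SGs' lie between the inflexion point and that cut.  The quoted
    # 2-sigma uncertainty of each fraction is 1/sqrt(n).
    reliable = p_pca >= reliable_cut
    probable = (p_pca >= probable_cut) & ~reliable
    n = len(p_pca)
    for name, mask in (("reliable", reliable), ("probable", probable)):
        print(f"{name} SGs: {mask.sum()} "
              f"({mask.sum() / n:.2f} +/- {1 / np.sqrt(n):.2f})")
    return reliable, probable
\end{verbatim}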
\subsection{Efficiency of the photometric selection} \label{phot_eff} The most important source of contaminants in the photometric selection comes from the magnitude/distance degeneracy. In this case, we are interested in structures relatively close to Earth, and in stars that are intrinsically bright, so we can enforce strict limits in apparent magnitude that will filter out most of the intrinsically dimmer populations along the line of sight. The overall efficiency of the selection criteria outlined in Sect.~\ref{targ_sel} is $47\%$. This includes the $43$~MK standards mentioned in Section~\ref{obs}, as these were not included a posteriori but picked up by the selection algorithm. \begin{figure} \centering \includegraphics[trim=0.9cm 0cm 0cm 0cm,clip,width=8.8cm]{frac.pdf} \caption{Fraction of SGs found in the target sample as a function of apparent $K_{\mathrm{S}}$ magnitude and colour. The dashed line marks the total average fraction, $47\%$. Of these detected SGs, $\sim5\%$ were already known.} \label{selection_eff} \end{figure} As can be seen in Fig.~\ref{selection_eff}, the efficiency decays with magnitude: at $m_{K_{\mathrm{S}}}\sim4.5$ most of the observed stars turn out to be interlopers. This agrees roughly with Paper I, as at the low end of the brightness distribution of SGs the selected sample is dominated by bright giants. Similarly, while the fraction of SGs is more or less homogeneous with colour, the red end of the distribution (stars with $(J-K_{\mathrm{S}})\geq1.7$) is mostly composed of bright carbon stars. These results for a MW sample confirm those obtained in the MCs in \citetalias{gon2015}, and will also be useful for future photometric selections. However, we must caution that such a red cut-off can only be used to discriminate carbon stars in fields of low (such as the MCs) or moderate (like the present sample) extinction. For the high extinctions ($A_{V}\ga5$~mag) found in many lines of sight towards the inner Milky Way, M-type stars would be shifted to very high values of $(J-K_{\mathrm{S}})$, and other discriminants must be found. \subsection{Efficiency and contamination in the PCA method} \subsubsection{Efficiency} \label{efficiency} To estimate directly the efficiency of our survey in the Perseus arm, we used the manual classification previously performed. We have to note that this classification is not \textit{a priori} more reliable than our automated methods. The manual classification was done before we developed the automated process detailed in \citetalias{dor2016b}. For the manual classification we used classical criteria, such as the EW of the Calcium Triplet, the ratio between nearby Ti and Fe lines (Fe\,{\sc{i}}~$8514\:$\AA{} and Ti\,{\sc{i}}~$8518\:$\AA{} among others), and the EW of the blend at $8468\:$\AA{}. In \citetalias{dor2016b}, we demonstrated that the criteria based on these features have an efficiency that is, at best, slightly worse than that of our automated method. The manual classification can be somewhat better than these methods at identifying SGs, as it is a global process (like our PCA method), not based on any single spectral feature. Thus, the efficiency found in this work is useful to estimate the average quality of the classification methods under study with respect to a manual classification done following the classical criteria for the CaT range. In the first place, we calculated the efficiency for each method (see Table~\ref{effi_r_per}).
The efficiency in this case is the fraction of all SGs found through the manual classification that were also tagged as such by a given automated criterion. The PCA method has the lowest global efficiency. It is similar to the value for the Ti/Fe criterion, but significantly lower than the efficiency of the CaT criterion. Nevertheless, when the LC of the targets is taken into account, the results can be seen in a very different light.
\begin{table*}
\caption{Number of targets from the Perseus sample tagged as SGs through the manual classification that were also identified as such by the different methods considered. Note that we found 241 SGs through the manual classification. Among them, 90 were classified as Ia or Iab, 85 as Ib, and 66 as Ib\,--\,II. Thus, the efficiencies, and their uncertainties (which are equal to $1/\sqrt{n}$), are calculated with respect to these values, with the upper uncertainties truncated by the definition of efficiency (an efficiency $>1$ is not possible).}
\label{effi_r_per}
\centering
\begin{tabular}{c | c c c c | c c c c}
\hline\hline
\noalign{\smallskip}
&\multicolumn{4}{c |}{Number of SGs found}&\multicolumn{4}{c}{Efficiency}\\
Method&All&Ia to Iab&Ib&Ib\,--\,II&All&Ia to Iab&Ib&Ib\,--\,II\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PCA&182&86&83&13&$0.76\pm0.06$&$0.96^{+0.04}_{-0.11}$&$0.98^{+0.02}_{-0.11}$&$0.20\pm0.12$\\
CaT&204&86&81&37&$0.85\pm0.06$&$0.96^{+0.04}_{-0.11}$&$0.95^{+0.02}_{-0.11}$&$0.56\pm0.12$\\
Ti/Fe&194&83&80&31&$0.80\pm0.06$&$0.92^{+0.08}_{-0.11}$&$0.94^{+0.06}_{-0.11}$&$0.47\pm0.12$\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table*}
The calibration sample (see \citetalias{dor2016b} for details) is dominated by high- and mid-luminosity SGs (Ia and Iab), with only a small fraction of Ib or less luminous SGs (LC~Ib\,--\,II). In consequence, our PCA method is optimized to find Ia and Iab SGs. In view of this, in the Perseus sample we considered the efficiency for different LCs separately. The efficiencies of the PCA and CaT criteria for high-luminosity SGs are the same, $0.96^{+0.04}_{-0.11}$, and comparable to those found for the calibration sample. The efficiencies for low-luminosity SGs (Ib) are also similar for both methods, and compatible with the results obtained for Ia and Iab stars. However, for the Ib\,--\,II stars the efficiencies are significantly different depending on the criterion used. The higher efficiency of the CaT method in the Ib\,--\,II group stems from the fact that this criterion is much less strict than the PCA one, but at the price of being more susceptible to contamination by red giants (see the following subsection). As the Ib\,--\,II subclass is the boundary between SGs (LC~I) and bright giants (LC~II), the morphology of the objects with this tag is intermediate. Moreover, there are AGB stars, which are not high-mass stars, whose spectra are very similar to those of low-luminosity SGs (Ib). The perfect example of this is $\alpha$~Her. This star is the high-luminosity MK standard with the latest spectral type available \citep[M5\,Ib\,--\,II;][]{kee1989}. However, \citet{mor2013} show that this star is not a high-mass star ($M_{*}\ga10\:$M$_{\odot}$), but an AGB star with a mass around $3\:$M$_{\odot}$, even though its spectral morphology is very close to that of a SG. In view of this, through the manual classification we probably identified as SGs stars that are not really SGs, but are morphologically very similar to them.
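As a concrete example of how the entries in Table~\ref{effi_r_per} are computed: the PCA method recovered 182 of the 241 SGs found through the manual classification, so its global efficiency is
\[
\frac{182}{241}\simeq0.76, \qquad \frac{1}{\sqrt{241}}\simeq0.06,
\]
i.e.\ $0.76\pm0.06$.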
The PCA criterion, instead, is more restrictive, and only selects as SGs those objects similar enough to the luminous (high-mass) SGs (those having LC Ia and Iab) used to calibrate it. Our methods, and especially the PCA method, are very efficient for mid- to high-luminosity SGs (Iab to Ia), and also for lower luminosity supergiants (Ib). However, there is also a small number of stars ($6$) manually classified between Ia and Ib that were not identified as SGs by the PCA. All these 6 stars have mid- to late-M types. All but one of them are M5 or later, with most of them (four) having very late SpTs (M7 or M7.5). In fact, these stars are the majority of the RSGs with SpTs M5 or later in the whole Perseus sample, as there are only two other M5\,Ib stars (which were correctly identified by the PCA method). The only star earlier than M5 (it was classified as M3) that was not identified as a SG is S~Per, an extreme RSG (ERSG). The reason why this object was not correctly identified is clear: its lines are weakened by veiling, an effect that may appear in ERSG stars and that has been reported before for S~Per \citep{hum1974a}. For more details about ERSGs and veiling, see Section~4.4 from \citetalias{dor2016b} and references therein. Just like the PCA method, the CaT and the Ti/Fe criteria fail for mid- to late-M SGs. They failed to identify the same true supergiants that were not found by the PCA. In addition, they also failed for a group of Ia to Ib stars with slightly earlier SpTs (M3 and M4). The obvious conclusion is that all methods fail almost completely in the identification of mid- to late-M RSGs, although the PCA method provides significantly better results for mid-M SGs (up to M5) than the other criteria. This failure cannot be considered a major drawback, however, as the number of mid- to late-M RSGs is very small, with only a handful of supergiants presenting spectral types later than M5 (and most of them presenting spectral variability).
\subsubsection{Contamination}
The three identification methods studied above have similar efficiencies for mid- to high-luminosity subsamples. The advantage of the PCA method over the other two is that it provides significantly lower contamination, at least for the calibration sample. Therefore, we estimated the contamination obtained through each method for the Perseus sample. The contamination in this case is the fraction of the stars selected as SGs by a given automated criterion that were not identified as real SGs through the manual classification. The results are shown in Table~\ref{contamination_per}.
\begin{table*}
\caption{Contaminations obtained through different methods for the Perseus sample. As the contamination is the fraction of targets tagged as SGs that actually are not SGs, its 2-sigma uncertainty is equal to $1/\sqrt{n}$, where $n$ is the number of objects identified as SGs.}
\label{contamination_per}
\centering
\begin{tabular}{c c c c}
\hline\hline
\noalign{\smallskip}
&Number of targets&Number of non-SGs&\\
Method&tagged as SGs&wrongly identified&Contamination\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
PCA&193&11&$0.06\pm0.07$\\
CaT&304&100&$0.33\pm0.06$\\
Ti/Fe&238&43&$0.18\pm0.07$\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table*}
The method with the lowest contamination is by far PCA. All the non-SGs wrongly selected by $P(\mathrm{SG})_{\mathrm{PCA}}$ have LC~II in the manual classification, and therefore their spectra are very similar morphologically to those of low-luminosity RSGs.
Indeed, we cannot dismiss \textit{a priori} the possibility that they may be low-luminosity SGs wrongly identified in the manual classification. The Ti/Fe criterion has a significantly higher contamination, but the CaT criterion works significantly worse than the other two in this respect. This is not completely unexpected, as the strength of the CaT lines is not only a function of luminosity, but also of effective temperature and metallicity \citep{dia1989}. The contamination found in the Perseus sample through the PCA method ($0.06\pm0.07$) is compatible with those obtained for the calibration sample ($0.03\pm0.04$) and its MW subset ($0.08\pm0.13$) in \citetalias{dor2016b}. In the case of the CaT and Ti/Fe methods, their contaminations when applied to the MW subset of the calibration sample are $0.17\pm0.13$ for the Ti/Fe criterion and $0.20\pm0.13$ for the CaT criterion, which are again compatible with those obtained in this work for these methods (see Table~\ref{contamination_per}). Therefore, the results for the Perseus sample corroborate the conclusions that we reached based on the subsample of MW stars in the calibration sample in \citetalias{dor2016b}, this time for a significantly larger sample.
\subsection{The population of cool supergiants in Perseus}
\label{gal_pol}
As explained in Section~\ref{prob_per}, with the values proposed for $P(\mathrm{SG})_{\mathrm{PCA}}$ we identified 191 targets as SGs in Perseus (86 of them having LC~Ia or Iab according to the manual classification), while our manual identification found 258 (96 of them having LC~Ia or Iab), including all the 191 PCA SGs. The difference between both sets is mainly due to Ib\,--\,II stars, which, as discussed above, may in fact not be true SGs, but bright giants. The rest of the difference is due to the late-M stars, which are not correctly selected by any of the automated criteria studied, even though their SG nature is very likely. Thus, for the present analysis we decided to adopt the PCA selection, but also include the five SGs (Ia to Ib) with late subtypes (M5 to M7) that were identified through manual classification, as well as S~Per, which is a well-known ERSG (see Sect.~\ref{efficiency}). The supergiant content of the Perseus arm was studied by \cite{hum1970,hum1978}, who found more than 60 CSGs in this region. Later, \cite{lev2005} studied the RSG population of the Galaxy, adding a handful of new stars to the list of known RSGs in the Perseus arm. We also took into account a small number of CSG standards from \cite{kee1989} located in the Perseus arm. Using these works and cross-matching their lists, we obtained a list of 77 previously known CSGs in the Perseus arm. Among the 197 CSGs we found, there are only six that were included in this list. Thus, our work increases the number of CSGs known in Perseus by 191 stars, more than trebling the size of previous compilations (from 77 to 268 CSGs). This large number of CSGs allows us to study statistically the population of CSGs in the Perseus arm with unprecedented significance. Indeed, this sample permits a direct comparison of the CSG population in the Perseus arm and those in the MCs studied in \citetalias{dor2016a}. For this analysis, we used the SpT and LC given through the manual classification for the CSGs in our Perseus sample, and the classification given in the literature for the rest of the Perseus SGs that had gone into the calibration sample.
Unfortunately, the distances to many of these stars still have significant uncertainties, which do not allow us to compare absolute magnitudes. However, in the near future \textit{Gaia} will provide reliable and homogeneous distances for almost all of them. We will then use these distances together with the radial velocities obtained from our spectra (which can be compared to the \textit{Gaia}/RVS radial velocities to detect binarity) to study in detail the spatial and luminosity distributions for the CSG population in the Perseus arm. In the present work we only analyse the SpT and LC distributions. When previous works have analysed a given population of RSGs, they have typically found their SpTs to be distributed around a central subtype with maximum frequency. In all populations, the frequency of a subtype decreases the farther it lies from the central value. The central subtype is related to the typical metallicity of the population, with later types for higher metallicities \citep{hum1979a,eli1985}. This effect has been confirmed by recent works for different low-metallicity environments \citep{lev2012}. In \citetalias{dor2016a} we confirmed this effect for very large samples in both MCs. The SpT distribution of the Perseus CSGs found in the present work (the PCA selection plus the six late RSGs visually identified) is shown in Fig.~\ref{histo_spt}. The median SpT of this sample is M1. We also studied the global population (268 CSGs), which includes all the previously known RSGs from the Perseus arm together with all our newly-found CSGs. Its histogram is shown in Fig.~\ref{histo_spt_all_per}a. The addition of the set of previously known RSGs not included in our own sample (see Fig.~\ref{histo_spt_all_per}a) shifts the median type very slightly, to M1.5. Both median types are slightly earlier than the values typically given for the MW in the literature \citep[M2;][]{eli1985,lev2013}. However, the difference is not large enough to be truly significant, given the typical uncertainty of one subtype in our manual classifications. We can thus consider our results consistent with the value found in the literature. Despite this, we note that our sample is intrinsically different from any previous sample of Galactic RSGs. With the possible exception of a few background RSGs (which could be present given our magnitude cut, but should be very rare, because of the steeply falling density of young stars towards the outer Milky Way), our sample is volume-limited; it represents the total RSG population for a section of a Galactic arm. Previous works are mostly magnitude-limited and therefore tend to include an over-representation of later-type M supergiants, as these objects tend to be intrinsically brighter (see \citetalias{dor2016a} and references therein). The SpT distribution shown presents a clear asymmetry due to the presence of a local maximum at early-K types. This local maximum was not detected by \cite{eli1985}, but is present in \cite{lev2013}, in their figure~1. The SGs considered in \cite{eli1985} were mainly of luminosity classes Ia and Iab, while most of the early-K SGs used in \cite{lev2013} are of Ib class. This is also the case in our sample; most early-K (K0\,--\,K3) supergiants present low luminosity classes (Ib or less, see Fig.~\ref{histo_spt_all_per}b).
Studies of similar stars in open clusters \citep[e.g.][]{neg2012b,alo2017} show that these low-luminosity supergiants with early-K types are in general intermediate-mass stars (of $6$\,--\,$8\:$M$_{\sun}$), with typical ages ($\sim50\:$Ma) much older than luminous RSGs (typically between $10$ and $25\:$Ma). Therefore, despite their morphological classification as SGs, these stars should not be considered true supergiants, because they are not high-mass stars. These stars are not very numerous, either in our sample (we have 19~stars with early-K types and LC Ib or less luminous) or in the total population (23~stars). Therefore, our median types do not change if we do not consider these stars as part of the CSG population. It is worth stressing that there are very few K-type true supergiants in the Milky Way, to the point that the original list of MK standards contains only one such object (the K3\,Iab standard $o^{1}$~CMa, later moved to K2.5\,Iab; \citealt{mk73}), as opposed to five K\,Ib stars, representative of the lower-mass population discussed above \citep[see][]{johnson53}. This scarcity of K-type SGs represents the main difference between the present catalogue and those from the MCs, as illustrated by Fig.~\ref{histo_spt_all_per}b. In \citetalias{dor2016a}, we found that RSGs in the MCs present a relation between SpTs and LCs, with later typical types for Ia than for Iab stars. As a consequence, we found typical SpTs for each MC a few subtypes earlier than in previous works. This difference was caused by the inclusion in our survey of a large number of Iab CSGs, while previous studies were centred on the brightest RSGs, mostly Ia (see sect.~4.2 of \citetalias{dor2016a}). In contrast, when we analyse the different LC subsamples in Perseus, we do not find any significant difference between Ia and Iab stars, as both groups have the same median SpT: M1 (see Fig.~\ref{histo_spt_all_per}b). When we consider the global population, Iab supergiants have a median type of M1.5, but a difference of half a subtype cannot be considered significant. These results contrast strongly with the trends found in the MCs. It is unclear, though, if we can derive any reliable conclusions from this difference, because the number of Ia stars in the Perseus sample is too low compared to the number of Iab stars: seven~Ia against 83~Iab in our sample; 19~Ia against 116~Iab in the global population.
\begin{figure}
\centering
\includegraphics[trim=1cm 0.4cm 1.8cm 1.3cm,clip,width=\columnwidth]{hist_simp_SpT}
\caption{Distribution of SpTs for the targets identified as SGs using the PCA method.}
\label{histo_spt}
\end{figure}
\begin{figure*}
\centering
\includegraphics[trim=1cm 0.35cm 1.8cm 1.35cm,clip,width=\columnwidth]{hist_global_SpT}
\includegraphics[trim=1cm 0.35cm 1.8cm 1.35cm,clip,width=\columnwidth]{SpT_SG_perseo_global}
\caption{ {\bf Left (\ref{histo_spt_all_per}a):} Distribution of SpTs for Perseus CSGs (our sample plus previous identifications). {\bf Right (\ref{histo_spt_all_per}b):} The same sample as in the left panel, but split by luminosity class, with red for Ia, blue for Iab, green for Ib, and black for Ib\,--\,II. }
\label{histo_spt_all_per}
\end{figure*}
There are a number of factors to consider before attempting any interpretation. Firstly, there are four early K-type Ia SGs pushing the median type towards earlier values.
As mentioned, these spectral types are rare in the MW, and many of these objects present unusual characteristics, such as evidence for binary interaction or heavy mass loss. Due to the small size of the Ia sample, these rare objects may have a disproportionate impact on the average type. Moreover, we may be biasing our sample because of a classification issue: there are no MK SG standards for spectral types later than M4 (except for $\alpha$~Her, mentioned above, which is not a true SG). At these spectral types, luminosity indicators are strongly affected by the molecular bands, especially the TiO bands. In fact, for types later than M3, many luminosity indicators (e.g.\ the Ca Triplet) do not separate RSGs from red giants (\citealt{dor2013}, \citetalias{dor2016a}, and \citetalias{dor2016b}). Our sample contains a number of RSGs with mid- to late-M types, which were given a generic I classification, as it was not possible to give a more accurate luminosity subclass \citep[see discussion in][]{neg2012}. For calculation purposes, these objects have been assigned the intermediate luminosity class Iab. This could be incorrect, as the few late-M RSGs found in open clusters tend to have much higher luminosities than earlier RSGs in the same clusters \citep{neg2013,mar2013}. Within our sample, we have an interesting example of the situation explained above in the cluster NGC~7419. This rich cluster contains five RSG members; four of them have M0 to M2\,Iab types, while the last one, MY~Cep, is M7.5\,I \citep{mar2013}. As can be seen in fig.~13 of \cite{mar2013}, MY~Cep is about one and a half magnitudes more luminous than the other four RSGs. As MY~Cep was the only comparison star available for the manual classification of the late RSGs in our sample, it is reasonable to expect that the three stars classified as M7\,I could also be high-luminosity RSGs, like MY~Cep. Four other Ia stars present types M3 to M4. One of them is S~Per, a known spectral variable that can present types as late as M7, according to \cite{faw1977}. In view of this, it is highly likely that we are underestimating the number of late-M Ia~RSGs. Even though these are also rare objects, given the small size of the Ia sample, they could move the median to later types. In this context, it is important to note that the MC populations studied in \citetalias{dor2016a} include very few mid- or late-M supergiants. Most MC Ia RSGs were M3 or earlier, allowing their LC classification without the complications that affect luminosity indicators at later types. In addition, the distances to the RSGs in the MCs are well known, allowing direct knowledge of their actual luminosities. In the Perseus sample, we have to resort only to morphological characteristics in most cases, at least until accurate distances are provided by \textit{Gaia}. The low number of Ia SGs may be meaningful in itself. On the one hand, magnitude-limited samples will always have a bias towards intrinsically bright stars that is not present in the Perseus sample. On the other hand, the sample of CSGs in the SMC presented in \citetalias{gon2015}, which may not be complete, but is at least representative, has a much higher fraction of Ia supergiants with respect to the Iab cohort. As discussed in \citetalias{dor2016a}, there may be two different pathways leading to high-luminosity CSGs.
Since stellar evolutionary models \citep{eks2012,geo2013,bro2011} indicate that evolution from the hot to the cool side of the HR diagram happens at approximately constant luminosity, the brightest CSGs should be descended from more massive stars (with masses $\sim25\:$M$_{\sun}$ and up to $\sim40\:$M$_{\sun}$). On the other hand, observations of open clusters \citep{neg2013,beasor16} suggest that less massive stars (with masses between $10$ and $\sim20\:$M$_{\odot}$) could evolve from typical Iab CSGs towards higher luminosities and cooler temperatures at some point in their lives. This idea is suggested by the presence in massive clusters of some RSGs with significantly later SpTs and much higher luminosities than most of the other RSGs in the same cluster (as in the example of NGC~7419 mentioned above). The low fraction of Ia CSGs in the Perseus arm may shed some light on these issues. Although there are some very young star clusters and associations (mainly Cep~OB1 and Cas~OB6) in the area surveyed, most of the clusters and OB associations are not young enough to still have any RSGs with high masses ($\ga20\:$M$_{\sun}$). The most massive clusters included in the sample region have ages around $15\:$Ma, with main-sequence turn-offs at B1\,V. This is the case for NGC~7419 \citep{mar2013} or the double Perseus cluster, the core of the Perseus OB1 association \citep{sle2002}, while the clusters in Cas OB8 are even older. For an age $\sim15\:$Ma, according to Geneva evolutionary models \citep{eks2012}, RSGs should be descended from stars with an initial mass $\sim15\:$M$_{\odot}$ and not be much more luminous than $M_{\textrm{bol}}\sim-7$. As can be seen in fig.~16 of \citetalias{dor2016a}, most Ia RSGs are more luminous than this value. Therefore, the scarcity of Ia RSGs in Perseus can be interpreted as a direct consequence of the lack of high-mass RSGs, which supports the idea that Ia CSGs come mainly from stars with initial masses between $20$ and $40\:$M$_{\odot}$. However, there is still a significant fraction ($0.07\pm0.06$) of Ia stars, which are not directly related to any very young cluster. For example, returning to Per~OB1, this association contains the well-known ERSG S~Per \citep{hum1978}, which has been observed to vary from M4 to M7\,Ia. This suggests that indeed some intermediate-mass RSGs may increase their luminosity up to LC~Ia from lower luminosities. Their low number in the sample agrees with the small fraction of very luminous RSGs found in massive open clusters.
\subsection{Candidates for extreme red supergiants}
\begin{figure}
\centering
\includegraphics[trim=1cm 0.4cm 2.3cm 1.2cm,clip,width=\columnwidth]{TiO_CaT_ERSG_per.pdf}
\caption{Depth of the TiO bandhead at $8859$\:\AA{} with respect to the sum of the EWs of the CaT lines, for the Perseus sample. The colour indicates $P(\mathrm{SG})_{\mathrm{PCA}}$, and the shapes indicate epoch (2011 circles, 2012 squares), except for the two star symbols, which mark the reference ERSGs: the green star is S~Per and the red star is UY~Sct. Both ERSGs are represented with their own error bars. The black cross indicates the median uncertainties of the sample.
The scale used in this figure is the same as in Fig.~14a from \citetalias{dor2016b}, which shows the same diagram for the calibration sample, to ease the comparison.}
\label{TiO_CaT_ERSG}
\end{figure}
In \citetalias{dor2016b} we proposed the use of two diagrams to detect RSGs affected by veiling, a characteristic effect that ERSGs present at some points in their spectral variation \citep[for details about veiling see][and Section~4.4 in \citetalias{dor2016b}]{hum1974a}. In Fig.~\ref{TiO_CaT_ERSG} we include the location of the two veiled ERSGs, UY~Sct and S~Per (which indeed is one of the stars in the Perseus sample), that were available to us. They indicate the typical region where veiled ERSGs seem to lie. For the Perseus sample we found only one star close to them, outside the main band of giant and supergiant stars. This object, PER433, was rejected as a SG by $P(\mathrm{SG})_{\mathrm{PCA}}$ (and also by the other methods), but given the effect of veiling on atomic lines, this rejection cannot be considered conclusive. In the literature, this object, known as V627~Cas, has been identified as some kind of symbiotic star \citep{kol1996}. We checked its spectrum and found that it shows the O\,{\sc{i}} line at 8448\:\AA{} in emission, which is usual in Be stars, but not expected in ERSGs, since it requires higher temperatures. It also has its CaT lines in emission, partially filling them, which explains why this star shows an EW(CaT) much smaller than expected for a giant star. Therefore, we can conclude that this star is not an ERSG.
\section{Conclusions and future work}
In \citetalias{dor2016b} we proposed a method for using PCA in the identification of CSGs. In the present work we have developed it further, obtaining a way to estimate the probability that a given spectrum corresponds to a CSG, instead of just giving a binary result (``SG" or ``non-SG"). We have then applied this method to a large sample of Galactic stars selected as likely members of the Perseus arm. We also compared the results obtained through the PCA method with those of two other classical criteria studied in \citetalias{dor2016b} (those based on the CaT and on the Ti/Fe ratio). Summarising, from the analysis presented in this work we can conclude:
\begin{enumerate}
\item We find that the efficiencies of all three automated methods are similarly high ($>90\%$) for objects that were visually classified as certain CSGs (Ia to Ib), and compatible with those obtained for the calibration sample in \citetalias{dor2016b}. The results of the three methods are much worse in the case of those targets visually classified as Ib\,--\,II, especially for the PCA one. However, this group of LC Ib\,--\,II objects is probably formed mostly by non-SGs, and the automated methods could simply be pointing this out. Finally, we find that the efficiency is almost zero for stars visually identified as SGs having subtypes later than M5, independently of the method used.
\item Although the efficiencies are similarly good in the three cases, the contaminations are very different for each method, when manual classification is used as a reference. As in the case of the MCs, the PCA method provides the cleanest sample of SGs, with a contamination fraction as low as $0.06\pm0.07$, against $0.33\pm0.06$ and $0.18\pm0.07$ for the CaT and Ti/Fe criteria. The contamination found for the PCA method is compatible with that obtained for the calibration sample of \citetalias{dor2016b}.
However, the other two methods result in significantly higher values, probably because the Perseus sample has a larger fraction of bright M giants than our MC samples, as in this case we are observing through the Galactic plane.
\item Using the PCA method, we identified 191 targets as CSGs, plus 6 RSGs with late SpTs that were identified through the manual classification. These 197 CSGs are a significant fraction of the total sample ($0.33\pm0.04$), demonstrating that the photometric selection criteria used have a very high efficiency at moderate reddenings. This sample represents the largest catalogue of CSGs in the MW observed homogeneously, increasing the census of catalogued CSGs in the Perseus arm dramatically: to the 77 CSGs contained in previous lists, this catalogue adds 191 more objects. The list of stars observed, with their corresponding probabilities of being a SG according to the different methods, is given in Table~\ref{cat_perseo}.
\end{enumerate}
The final catalogue, with almost 200~CSGs, is the largest coherent sample of CSGs observed to date in the Galaxy. In the future, we will use this sample to study both the CSG population and its relation to the structure of the Perseus arm. We will use the radial velocities that we can obtain from our spectra, along with \textit{Gaia} distances (which will be available for these stars by mid-2018), to study the spatial distribution of the CSGs in the Perseus arm and their relation with nearby clusters and OB associations. In addition, we will also analyse the physical properties of these stars, deriving them from their spectra by using the method that we are developing (Tabernero et al., in prep.). Finally, it is our intention to extend the study of CSG populations toward the inner Galaxy, where we should find higher metallicities, but will also have to fight much higher extinction and stellar densities.
\section*{Acknowledgements}
We thank the referee, Prof. Roberta Humphreys, for the swiftness of her response. The INT is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias. This research is partially supported by the Spanish Government Ministerio de Econom\'{\i}a y Competitividad (MINECO/FEDER) under grant AYA2015-68012-C2-2-P. This research has made use of the Simbad, Vizier and Aladin services developed at the Centre de Donn\'ees Astronomiques de Strasbourg, France. This research has made use of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University. It also makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\bibliographystyle{mnras}
\section{Empirical evaluation}
\label{sec:case-studies}
\subsection{Research questions}
\input{research-questions}
\subsection{Subjects}
\label{subsec:subjects}
\input{subjects}
\subsection{Procedure and metrics}
\input{rq1-proc-metrics}
\input{rq2-proc-metrics}
\input{rq3-proc-metrics}
\subsection{Experimental Results}
\input{rq1-results}
\input{rq2-results}
\input{rq3-results}
\textbf{Reproducibility:} We make \tool openly available~\cite{tool}, together with a replication package including feature models, test cases, and pointers to the subject apps.
\subsection{Threats to validity}
\textbf{External validity:} Our results are based on 6 apps and on the test cases defined for them by the authors of PreFest~\cite{LuPZ0L19}. Such test cases have not been designed explicitly for in-vivo execution, although they are designed to exercise preference-dependent portions of the application code. While our results may not generalise to different apps and to test cases explicitly designed for in-vivo execution, we have chosen the benchmark available from the closest related work, dealing with preference-based testing.
\textbf{Internal validity:} The measures of overhead were obtained while running test scenarios that are supposed to mimic the typical usage scenarios of the apps. While we did our best to produce such scenarios based on the main functionalities advertised for each app under test, we cannot exclude that the overhead may change under different usage conditions.
\section{Conclusion and future work}
\label{sec:concl}
We have presented \tool, a framework for in-vivo testing of Android apps, and we have measured its overhead on the end-user executions, showing that such overhead is compatible with in-field adoption of our approach. In our future work we plan to optimize configuration monitoring, making it adaptive and distributed. We also intend to investigate test case generation in response to newly discovered configurations. We are also considering other application domains for \tool, such as that of web applications. \tool, together with a replication package including feature models, test cases, and pointers to the subject apps, is available online~\cite{tool}.
\section{Motivating Example}
\label{sec:approach}
Let us consider a hypothetical messaging app for Android devices, which we call \textit{ChatApp} (pronounced shut-up). \textit{ChatApp} supports the exchange of messages and multimedia content between its users. Moreover, \textit{ChatApp} can take a picture of the user when the user creates or updates her\changed{/his} profile. To take a picture, \textit{ChatApp} sends an intent (see \changed{Listing~\ref{code:intent}}) to delegate the task to any app that can take a picture using the camera of the mobile device.
\begin{figure}[htb]
\begin{lstlisting}[language=Java, label={code:intent}, caption={Intent sent by \textit{ChatApp} to obtain a profile picture}, captionpos=b]
// Delegate the capture to any installed camera app
Intent cameraIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
// Tell the camera app where to store the resulting image
cameraIntent.putExtra(MediaStore.EXTRA_OUTPUT, outputImgUri);
startActivityForResult(cameraIntent, REQUEST_IMAGE_CAPTURE);
\end{lstlisting}
\end{figure}
Since \textit{ChatApp} relies on external resources (installed camera app; camera hardware) for the successful execution of the add/update profile image functionality, the scenarios in which a failure might occur depend on multiple factors: the hardware installed in the device, since the interaction with some camera models may fail; the configuration of the environment and operating system, since not all camera apps might be compatible with \textit{ChatApp}; the settings of the app itself, since some specific choices might not be well supported by the app; and a combination of all these factors. Hence, adequately testing \textit{ChatApp} requires addressing the combinatorial explosion resulting from all these factors.
\textbf{\emph{Environment configuration}}. We use feature models~\cite{foda1990} to represent and manage the large configuration space that may affect apps. The configuration model of \textit{ChatApp} is shown in Figure~\ref{fig:chatapp_fm}, where inner nodes represent features; leaf nodes represent feature values; and the parent-child edges represent the feature-subfeature decomposition. While the default interpretation of feature decomposition is AND-decomposition, modifiers are available to express OR/XOR-decompositions and to identify a feature as mandatory/optional (see Legend in Figure~\ref{fig:chatapp_fm}). The logical constraints at the bottom-right are added to further constrain the admissible configurations. The configuration of \textit{ChatApp} is decomposed into two main parts: 1) \emph{DeviceConfig}, representing the configuration of the device on which the app is running; and 2) \emph{AppPrefs}, representing the various settings of the app itself. \emph{DeviceConfig} includes the Android version (\textit{OS} feature), the camera apps that can be delegated \changed{to take pictures} (\textit{CameraApp}), and the actual model of the device (\textit{DeviceModel}), all of which are mandatory features. In turn, \textit{CameraApp} can be the default app (\textit{Default}, mandatory feature) or an additional app (\textit{Other}, optional feature). \textit{Default} can be instantiated by a set of mutually exclusive apps (empty arc), while \textit{Other} can be instantiated by a set of non-exclusive apps (filled arc). When the device model is \changed{\textit{Sony}}, the camera hardware (\textit{CameraHw} feature) can be either \textit{IMX300} or \textit{IMX400}. \emph{ChatApp} also has a couple of application-specific settings. The first one (\emph{Upload}) represents a preference of the user to upload photos over wifi, mobile data, or both. The other setting (\emph{Backup}) represents the preference of the user on whether or not to back up chats. The feature model also contains a few cross-tree constraints of type ``implies''. For instance, the cross-tree constraint (v4\_x $\Rightarrow$ N, equivalently shown as $\neg$v4\_x $\lor$ N in Figure~\ref{fig:chatapp_fm}) indicates that version 4\_x of \textit{GoogleCamera} constrains the version of \textit{Android} to be N (\textit{Nougat}); similarly, the camera app \textit{SonyCamera} constrains the device model to be \textit{Sony}.
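To make the role of these constraints concrete, the toy snippet below shows how they could be checked as simple implications over the set of selected features. This is purely illustrative: \tool relies on the feature-model machinery described in Section~\ref{sec:implementation}, and the method name used here is our own.
\begin{lstlisting}[language=Java, caption={Illustrative check of the two cross-tree constraints of \textit{ChatApp} (toy code, not part of \tool)}, captionpos=b]
// A configuration is modelled as the set of selected feature names.
static boolean satisfiesCrossTreeConstraints(Set<String> selected) {
    // v4_x => N: GoogleCamera v4_x requires Android Nougat
    if (selected.contains("v4_x") && !selected.contains("N")) return false;
    // SonyCamera => Sony: the Sony camera app requires a Sony device
    if (selected.contains("SonyCamera") && !selected.contains("Sony")) return false;
    return true;
}
\end{lstlisting}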
In addition to representing the full configuration space, we also need to record the set of configurations that have been tested so far. Let us consider \emph{ChatApp} at the time it is first deployed to its users and let us assume that pre-release testing has been carried out on an LG phone with the default camera on all three Android versions, with user settings specifying that upload is possible only over wifi and that backup is disabled. The set of tested configurations will include the following tuples of feature values:
\vspace{0.2cm}
\begin{footnotesize}
\noindent $\langle$\textit{N}, \textit{LG}, \textit{LGCam}, \textit{OnWifi}, \textit{No}$\rangle$ \\
$\langle$\textit{O}, \textit{LG}, \textit{LGCam}, \textit{OnWifi}, \textit{No}$\rangle$ \\
$\langle$\textit{P}, \textit{LG}, \textit{LGCam}, \textit{OnWifi}, \textit{No}$\rangle$
\end{footnotesize}
\vspace{0.2cm}
\textbf{\emph{In-vivo testing of ChatApp}}. \tool includes a run-time in-vivo test component that can monitor the configuration elements relevant to the app and check whether the current configuration is tested, untested or unknown. This information can be extracted by a run-time probe that queries the device and the app preferences and compares the retrieved information to the tuples of tested configurations. The following are examples of \textit{tested}, \textit{untested} and \textit{unknown} configurations of \textit{ChatApp}:
\begin{footnotesize}
\begin{description}
\item [tested] ~~~$\langle$\textit{N, LG, LGCam, OnWifi, No}$\rangle$
\item [untested] ~~~$\langle$\textit{N, Sony, SonyCamera, v4\_x, IMX400, OnWifi, Yes}$\rangle$
\item [unknown] ~~~$\langle$\textit{P, Xiaomi, XiaomiCamera, Xiaomi/Dual camera, v6\_x, OnWifi, OnMobile, No}$\rangle$
\end{description}
\end{footnotesize}
Different configurations trigger different reactions. A tested configuration triggers no reaction. An untested configuration triggers in-vivo test execution. An unknown configuration triggers feedback to the testers, who are asked to extend the model to incorporate the new cases that were not considered at the beginning, when the full configuration model was produced. In addition, an unknown configuration can be immediately validated with the available test cases. Let us now consider the following hypothetical field failure:
\smallskip
\noindent \textit{A new camera app, \textit{XiaomiCamera}, is installed. The camera hardware is deployed with a driver that, under Android version \textit{N}, does not initialize the camera if not requested explicitly. When \textit{ChatApp} takes a picture of the user, the request goes through \textit{XiaomiCamera}, which does not explicitly initialize the camera when responding to an intent (it initializes the camera only when activated by the user). Consequently, \textit{XiaomiCamera} crashes. In such a case, \textit{ChatApp} times out the request to \textit{XiaomiCamera}, leaving a reference to the requested picture set to null.
When the picture is later used, a null pointer exception is thrown and \textit{ChatApp} stops working.}
\smallskip
In such a scenario, the in-vivo testing component will:
\begin{enumerate}
\item recognize the configuration as \emph{unknown} (in fact, it is not valid according to the feature model depicted in Figure~\ref{fig:chatapp_fm});
\item execute the available in-vivo tests, possibly retrieved from a testing server, to check if \textit{ChatApp} works properly with the camera app \textit{XiaomiCamera};
\item expose a failure of \textit{ChatApp} (null pointer exception);
\item report the failure, and the configuration that triggered it, to the developers.
\end{enumerate}
\section{The \tool Framework}
\label{sec:framework}
The \tool framework provides the architecture and a reference implementation that developers can exploit to add in-vivo testing capabilities to their mobile applications. In this context, we assume that the application under test (AUT) has been designed to support in-vivo testing. Note that this assumption does not necessarily imply that the AUT has been conceived for in-vivo testing from scratch, but rather that the AUT has been at some point extended with the minimal set of features required to support the in-vivo testing process.
\subsection{Functional and Non-Functional Requirements}
\label{sec:requirements}
The functional requirements reported below distinguish between the functionalities that must be implemented directly by the in-vivo framework and those that must be provided by the AUT by implementing interfaces defined in the framework.
\emph{FR-TestSpace}: The framework should be able to read a configuration model and a set of tested configurations from a persistent storage.
\emph{FR-ActualConf}: The framework should expose an interface (\texttt{getConfiguration}) to be implemented by the AUT, by which it can determine the \emph{configuration} of the execution environment in which the AUT is deployed and is operated, as well as an interface by which the AUT can inform the framework of a new/updated configuration (\texttt{sendConfiguration}, \texttt{updateConfiguration}).
\emph{FR-CheckConf}: The framework should identify the \emph{configuration} of the execution environment as \emph{tested}, \emph{untested}, or \emph{unknown}.
\emph{FR-TestExec}: The framework should be able to retrieve and execute in-vivo/ex-vivo tests for the \emph{untested} configurations.
\emph{FR-TestGen}: The framework should be able to generate in-vivo/ex-vivo tests for \emph{unknown} configurations, possibly without, but if necessary with, manual intervention.
\emph{FR-SelfHeal}: The framework should expose an interface to be implemented by the AUT, by which it can activate failure prevention/self-healing mechanisms in the presence of failing in-vivo/ex-vivo test executions.
\emph{FR-Isolation}: The framework must ensure isolation of the in-vivo test executions, so that they do not interfere with regular operation of the AUT and do not have side effects (e.g., modify the persistent data of the user).
\smallskip
In addition to the functional requirements, we also identified a small set of relevant non-functional requirements that an in-vivo framework should satisfy.
\emph{NFR-PerfChecking}: The framework should not impose unacceptable levels of performance overhead when monitoring and checking the test configurations on the deployed AUT.
\emph{NFR-PerfTesting}: The framework should not impose unacceptable levels of performance overhead when running tests on the deployed AUT.
\emph{NFR-Network}: The network data usage (overhead) due to the communication between client and server components of the framework should be acceptable.
\emph{NFR-Energy}: The energy consumption due to the execution of the framework should be low.
\emph{NFR-Privacy}: The framework must ensure privacy of the client user when sharing information with the server.
\emph{NFR-Security}: The framework must ensure security of the client user in handling resources.
Note that, although in this work we focus on mobile applications, our set of requirements is general and can be applied to many different contexts, including desktop and client-side Web applications.
\subsection{\tool Architecture}
\label{sec:architecture}
In order to satisfy the identified requirements, we designed the client-server architecture shown in Figure~\ref{fig:framework}. The client runs on the devices of the users and manages the in-vivo testing process local to the application under test (AUT). In fact, the client side includes both the mobile app under test (AUT) and the \tool in-vivo framework, which is further organized in two layers: a layer of interfaces implemented by the AUT and a layer of managers responsible for both the in-vivo testing process and the interactions with the server. The server runs remotely and controls the in-vivo testing process by interacting with all the devices augmented with the in-vivo framework. When necessary, for instance when test cases cannot be conveniently executed in-vivo, the server side can also run ex-vivo test cases.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.55, trim=1cm 8.5cm 10cm 0.5cm]{img/InvivoAppTestingV2}
\end{center}
\caption{\tool Architecture}
\label{fig:framework}
\end{figure}
\vspace{-0.5cm}
\subsection{Client-Side Components}
\label{sec:client}
There are four client-side components orchestrated by the ClientService, which offers a single entry point to all of them. The \emph{Configuration Manager} is responsible for monitoring configurations, which consist of the hardware and software settings that may influence the behavior of the AUTs. This is done partially autonomously and partially in collaboration with the AUTs. In fact, the Configuration Manager can autonomously extract information about the hardware and the configuration of the operating system available in the client device. However, the Configuration Manager cannot access application-specific data without the collaboration and authorization of the AUT. For instance, the Configuration Manager may neither know where the app preference files are located nor have the right to access these files. To overcome this issue, the AUT must implement the \emph{Configuration Interface}, which is a read-only interface used by the Configuration Manager to extract a representation of the current configuration of the app. The Configuration Manager may simply query the interface when needed. However, a more efficient process may also allow the AUT to notify the Configuration Manager that the current configuration of the app has been modified. This is supported by the \emph{ConfigurationUpdate Interface}, which is defined in the framework and implemented in the AUT so as to generate notifications. The Configuration Manager also has the responsibility to trigger test case execution and to notify the server of the existence of unexpected configurations.
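A minimal sketch of what these two interfaces might look like is shown below; the names and signatures are illustrative assumptions of ours, not the exact API exposed by \tool.
\begin{lstlisting}[language=Java, caption={Illustrative sketch of the Configuration and ConfigurationUpdate interfaces (hypothetical names)}, captionpos=b]
import java.util.Map;

// Implemented by the AUT: read-only access to its current
// app-specific configuration, as feature/value pairs.
public interface ConfigurationProvider {
    Map<String, String> getConfiguration();
}

// Defined by the framework: the AUT calls it to notify that
// the user has changed some preference.
interface ConfigurationUpdateListener {
    void onConfigurationUpdated(Map<String, String> newConfiguration);
}
\end{lstlisting}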
In fact, every time a configuration is extracted, it is compared to both the Configuration Model, which is a representation of the possibly huge space of all the possible configurations, and the Tested Configurations, which is a representation of the configurations globally validated so far. The comparison of the current configuration to the configuration model and to the tested configurations can produce three possible results, as follows.
\begin{definition}[Tested Configuration]
A \textit{tested} configuration is a configuration that is valid according to the Configuration Model and is included in the Tested Configurations, which means it has been exercised either in pre-release testing or in-vivo testing.
\end{definition}
\begin{definition}[Untested Configuration]
An \textit{untested} configuration is a configuration that is valid according to the Configuration Model and is not included in the Tested Configurations, which means it has never been exercised, either in pre-release testing or in-vivo testing.
\end{definition}
\begin{definition}[Unknown Configuration]
An \textit{unknown} configuration is a configuration that is not valid according to the Configuration Model.
\end{definition}
When a tested configuration is discovered, it means that the current configuration has already been validated and nothing is done by the Configuration Manager. When an untested configuration is discovered, the Configuration Manager asks the Test Manager to validate it by running the in-vivo test cases. When an unknown configuration is discovered, the Configuration Manager notifies the server of the incompleteness identified in the Configuration Model, expecting to receive in the future a new version of the Configuration Model where the incompleteness has been fixed.
\smallskip
The \emph{Test Manager} is the component responsible for running the in-vivo testing process, reporting the results to the server, and updating the test suite available locally. When the Test Manager is triggered to validate an untested configuration, it first checks with the server if the configuration is also untested globally. If the current configuration was already tested by another client, the server responds with an updated representation of the tested configurations and the process stops. Otherwise, if the configuration is globally untested, the Test Manager activates the available isolation mechanisms and runs the in-vivo test cases. The results of the testing process and the tested configuration are reported to the server, which can update the set of tested configurations.
\smallskip
The \emph{Storage Manager} is a simple component responsible for storing and updating the persistent data that characterize the in-vivo testing process: the configuration model, the tested configurations, and the in-vivo test cases. The Configuration Manager and Test Manager interact with the Storage Manager when these entities have to be retrieved or updated.
\smallskip
The \emph{Self-Healing Manager} is responsible for activating countermeasures when failures are detected. Some of these countermeasures might be activated by treating the AUT as a black box. However, the most sophisticated strategies may require collaboration from the AUT. To support the latter case, the architecture exposes a \emph{SelfHealing Interface}, to be implemented by the AUT to facilitate self-healing.
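In code, the three-way comparison performed by the Configuration Manager boils down to a validity check against the configuration model followed by a membership test against the tested configurations. The sketch below uses hypothetical types (\texttt{Configuration}, \texttt{ConfigurationModel}) and is only meant to illustrate the logic of the three definitions above:
\begin{lstlisting}[language=Java, caption={Sketch of the three-way configuration check (hypothetical types)}, captionpos=b]
enum ConfigStatus { TESTED, UNTESTED, UNKNOWN }

ConfigStatus classify(Configuration current, ConfigurationModel model,
                      Set<Configuration> tested) {
    if (!model.isValid(current)) {
        return ConfigStatus.UNKNOWN;  // not admitted by the configuration model
    }
    return tested.contains(current)
            ? ConfigStatus.TESTED     // already validated: no reaction
            : ConfigStatus.UNTESTED;  // valid but never exercised: run in-vivo tests
}
\end{lstlisting}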
\section{The Android \tool Framework}
\label{sec:implementation}
The architecture described in Section~\ref{sec:framework} is general and can be instantiated in multiple contexts using different technologies. One of the most relevant use cases for the \tool framework is certainly the Android ecosystem, where apps must work correctly in very heterogeneous environments characterized by different hardware resources, different operating systems, and different user preferences whose combinations are impossible to test exhaustively~\cite{Wei:Fragmentation:TSE:toAppear}. We thus implemented a version of the \tool framework specifically for the Android ecosystem. In our first definition of the Android framework, we focus on the core functional and non-functional capabilities, leaving the implementation of self-healing capabilities and of advanced mechanisms for security and privacy for future work. Our implementation of the framework is publicly available~\cite{tool}. We describe the capabilities concerning configuration management, testing and isolation below.
\subsection{Configuration Management}
\label{sec:configuration}
We specify the set of possible configurations using a feature model~\cite{foda1990}. The feature model is obtained semi-automatically. Part of the configuration space is the same for every app, such as the part of the model that represents the hardware and operating system where an app can be executed. This part can be conveniently specified manually, almost once and for all. However, there are also a number of app-specific settings that may affect the behavior of the apps. The volume of these settings is often significant. For instance, the feature model representing the configuration of \emph{Amaze File Manager}, one of the apps used in our experiment, contains 185 features (131 primitive and 54 compound), resulting in a large configuration space whose size is on the order of $10^{12}$. Another app, \emph{RedReader}, has 567 features (461 primitive and 106 compound) and 4 constraints, resulting in a configuration space of size on the order of $10^{58}$. Manually producing a feature model that represents so many elements might be prohibitive and error-prone. To address this challenge, we designed a technique that can automatically extract the feature model corresponding to the preferences that appear in target preference files. Our approach essentially maps the Android Preference hierarchy into a feature model. To this end, we devised an appropriate feature model relation for each type of preference available in Android. In particular, \texttt{PreferenceCategory} and \texttt{PreferenceScreen} become abstract features, and the individual Preference items under them become their children in the feature model. For example, the Android preference \texttt{ListPreference} (a setting in which the user can select one of the available values) becomes a compound feature where each of the values is a primitive feature, all joined as \texttt{Alternative} features. Similarly, we have identified the appropriate mapping for the most common preference types in Android: \texttt{CheckBoxPreference, ListPreference, MultiSelectListPreference, PreferenceCategory, SwitchPreference}. We also introduce suitable heuristics for preference types where a direct mapping to feature model relations is not appropriate. In particular, for \texttt{EditTextPreference} we consider only two options: 'default value' and 'custom value'.
Similarly, for numeric preference types, we introduce three options: 'negative', 'zero', 'positive'. Clearly, such heuristics could be easily improved or replaced by customized categories depending on the nature of the app under investigation. For generic app preferences whose types we could not determine from their declaration, \tool may not be able to map them automatically into the feature model. Such preferences require manual investigation to determine their type. The two feature models, the one representing the app-independent configurations and the one representing the app-dependent configurations, are merged into a single feature model that is used to \changed{guide} the in-vivo testing process. To manipulate the feature models, we make use of the FAMILIAR framework~\cite{DBLP:journals/scp/AcherCLF13}. In particular, we use FAMILIAR's simple syntax when generating feature models corresponding to Android preferences. Once generated, the feature models are transformed from the FAMILIAR format into the SPLOT\footnote{\url{http://www.splot-research.org/}} format for easy manipulation and visual inspection/editing via the FeatureIDE\footnote{\url{http://www.featureide.com/}} plugin for Eclipse. Hence, while generating the entire feature model manually is quite difficult, the models automatically generated by our mapping can be easily inspected and edited by the engineer as appropriate. While it would be possible to handle the representation of the tested configurations similarly, using FAMILIAR, we noticed that repeated application of the merging functionality of FAMILIAR produces complex models, involving a large number of constraints, eventually slowing down the check for tested configurations. Hence, we developed an alternative, more compact, tree-based representation of the tested configurations. In order to retrieve the current configuration of a client device, our Android implementation exploits two different mechanisms: the app-independent configuration is retrieved by the framework autonomously, without requiring any interaction with the AUT, while the framework interacts with the Configuration Interface to retrieve the content of the preference files that were used to generate the app-dependent part of the feature model.
\subsection{In-Vivo Testing}
The in-vivo testing process is performed entirely on the client side and consists of a set of test cases that are executed to test the untested configurations discovered in the field. Test cases can be unit, integration and system test cases. We implemented the unit and integration test cases with JUnit. Since the existing system testing technologies are not designed to run from the device, we modified Espresso~\cite{Espresso} so that Espresso test cases can be launched and the results collected entirely from the device. Since test cases must be able to stimulate the AUT, the testing interface implemented by the AUT must include a method to launch the test cases; otherwise, the in-vivo testing process may violate the security policies of the device. The implementation of this method is almost always the same and does not need to be designed ad hoc for every target app.
\subsection{Isolation}
Test case execution should be performed with minimal intrusiveness with respect to user activity. To achieve memory isolation, we exploit Managed Profiles~\cite{managedProfiles}, which are designed to support corporate environments (with corporate apps) on private employee devices.
A managed profile represents an ideal technical solution for in-vivo testing, because it supports isolation and sand-boxing. {\em Isolation}: When an app is installed in a managed profile, it shares no data with the same app installed in the regular user profile (they are assigned distinct Linux user-ids). Thus, the act of testing an app in the testing profile does not affect the end-user data in the regular user profile. {\em Sand-boxing}: The profile manager can dynamically install an app into and remove it from a managed profile (e.g., before/after running the test suite) and it can also dynamically grant and revoke permissions to apps in the managed profile, thus limiting the side effects of testing, such as information leakage. The \tool framework defines an in-vivo testing profile where the AUT is copied the first time the in-vivo testing process is triggered. In-vivo testing happens within the in-vivo testing profile, thus producing no side effects on the actual app used by the user or on the user data. If, in addition to testing the app under the same hardware and operating system configuration, the app must be tested with the same software configuration and user preferences, the two copies of the app can be designed to communicate through an intent and exchange configuration data. Operations to achieve isolation with respect to external services, if any, must be implemented in the Testing Interface. Unit and integration testing can be performed transparently for the user. However, system testing requires taking control of the screen and would thus be intrusive with respect to user activities. One way to mitigate this issue is to activate in-vivo testing when the screen is about to be locked: the Test Manager runs a single in-vivo test before allowing the screen to freeze. If more tests are to be executed for a given configuration, they will be run one at a time whenever new screen lock requests occur. Moreover, if the same configuration is observed on multiple devices, in-vivo tests are distributed (by the Server-Side Component of \tool) among different devices, hence reducing the impact on each single user.
\section{Introduction}
\label{sec:intro}
Mobile applications are built to operate on a plethora of devices, each running a different version of the operating system and offering different hardware resources (screen resolution, sensors, etc.). Moreover, user preferences can affect various aspects of such applications, from their visual appearance to the enabled/disabled functionalities. Software testing is expected to check the behaviour of mobile apps in all possible configurations. However, this is practically impossible, because the number of combinations is exponential and because some configurations are difficult to reproduce during pre-release testing. Previous works~\cite{LuPZ0L19,GazzolaMPP17} show that different configurations may lead to a different coverage of the code and that some faults are ``field-intrinsic'', that is, they are inherently difficult to detect in-house. \changed{In particular,} Lu et al.~\cite{LuPZ0L19} developed a technique to identify the test cases whose behaviour is affected by the user preferences. They found that some faults are exercised and exposed only under very specific preference configurations. Gazzola et al.~\cite{GazzolaMPP17} report an empirical investigation of field failures. They observed that the main reason for the leakage of faults from pre-release testing to field usage is \textit{combinatorial explosion},
i.e., the huge number of configurations in which the software should be tested to expose faults that otherwise might give rise to field failures.

Mobile apps often have a very large user base that exercises the software under various configurations. Moving part of the testing activity to the field is therefore an appealing option to deal with the combinatorial nature of configuration-specific mobile app faults. However, the overhead introduced by such a form of testing, which we call ``in-vivo'' testing, should be minimized to make it acceptable for the end user.

In this paper we propose a model to represent the configuration space of a mobile app and we describe \tool, a framework that we developed for the Android operating system, which supports in-vivo monitoring and testing of new app configurations. Our framework resorts to managed profiles~\cite{managedProfiles} to isolate the in-vivo testing session from the normal user session. We measured the impact of configuration monitoring on low-end to high-end devices and found that the runtime overhead ranges from imperceptible to negligible (on average between 0\% and 6\% CPU load overhead). Assuming that the actual execution of in-vivo tests takes place when the device is not under active usage (e.g., when it transitions to the screen lock mode), each test case is expected to introduce a delay of about 3 seconds on average.

\tool is the first attempt to bring some testing activities to the field for mobile apps. The preliminary results obtained on a benchmark of six Android apps show that \changed{\tool} is a promising approach and that its impact on the end user can be acceptable. The \tool tool and the experimental material are publicly available online~\cite{tool}.

\section*{Acknowledgements}
\footnotesize{
This work has been partially supported by the Italian Ministry of Education, University, and Research (MIUR) with the PRIN project GAUSS (grant n. 2015KWREMX); by the H2020 Learn project, funded under the ERC Consolidator Grant 2014 program (ERC Grant Agreement n. 646867); by the H2020 Precrime project, funded under the ERC Advanced Grant 2017 program (ERC Grant Agreement n. 787703). We would like to thank Filip Ivanov Karchev for contributing to the implementation of the initial in-vivo prototype, in particular for engineering a solution for on-device execution of Espresso test cases and for contributing to the initial sketch of the in-vivo server.
}
\balance
\nocite{MurphyKVC09,GazzolaMPP17}
\bibliographystyle{abbrv}

\section{Related work}
\label{sec:rel-work}
While to the best of our knowledge \tool is the first framework that supports in-vivo testing for mobile (specifically, Android) applications, there are previous works that deal with related problems. In particular, the problems of in-vivo monitoring and isolation have already been considered, though not in the mobile domain. Preference-based testing for mobile applications is also related to our work, in particular to the coverage of the feature combinations described in our configuration models.

\textbf{Techniques to isolate in-vivo test execution} Several techniques~\cite{GonzalezSanchezPG09,HuningJSSE2010,LahamiKJ15,MurphyKVC09} have been proposed to support isolation during in-vivo testing.
\textit{Duplication} (also called \textit{Cloning})~\cite{GonzalezSanchezPG09,HuningJSSE2010,MurphyKVC09} consists of cloning the execution state (e.g., by forking a parallel process~\cite{MurphyKVC09}) and executing in-vivo tests on the cloned execution state, hence ensuring that there is no interference with the end-user execution of the application (in-memory side effects are prevented, but of course other side effects on persistent storage are not dealt with). Another proposed isolation mechanism is \textit{test mode execution}~\cite{BartoliniJSS2011,GreilerPESOS2010,KawanoOM89,LahamiSCP2016,ZhuTSC2012}. It requires a way to differentiate between the execution of a component in normal operation mode and in testing mode. In the latter case, countermeasures are taken to ensure that test mode execution does not affect the normal execution state (e.g., by tagging invocations and data with a test tag~\cite{KawanoOM89,LahamiKJ15}). Another clean and elegant solution consists of using a transactional memory~\cite{BobbaPACT2009}. Field tests can perform their operations within a transaction; at the end of their execution, the transaction is rolled back and normal execution restarts exactly from the memory state where it was interrupted for in-vivo test execution. Other authors propose that developers write \textit{built-in tests}~\cite{SammodiSAC2011}, specifically designed for in-vivo test execution. It is then the developers' responsibility to ensure that such tests are side-effect free. Differently from existing works, our solution to the isolation problem takes advantage of the managed profiles available in the Android platform (see Section~\ref{sec:framework}).

\textbf{Preference based testing} Lu et al.~\cite{LuPZ0L19} showed that different preference configurations lead to different code coverage by the same test cases and that a proper selection of which preference configurations to test can increase statement (resp. branch) coverage on average by 6.8\% (resp. 12.3\%). They also provide evidence that some (five, in their experiment) bugs require specific preference settings to be discovered. Such empirical results represent a major motivation for our work: when the configuration space grows and depends on environment/device/user-specific settings, offline, pre-release testing is not enough to exercise the code and expose the faults that depend on such configurations. PreFest~\cite{LuPZ0L19} performs static code analysis to determine the code that is potentially data-dependent on user preferences and selects the test cases that can exercise such code, along with the associated preference values. \tool resorts to in-vivo test execution to cope with the exponential growth of the possible configurations, as well as their unavailability during in-house testing.

\textbf{Combinatorial testing} Testing all valid configurations exhaustively before deploying an app is not feasible, because the number of combinations grows exponentially with the number of features and because some combinations might require very specific hardware/software components. Combinatorial testing (e.g., pairwise testing)~\cite{CohenGMC03} offers a way to systematically explore such a large configuration space. However, by sampling a small representative fraction of all possible cases, it leaves several combinations untested. Some of them might be handled incorrectly at runtime, resulting in field failures.
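To make the residual risk concrete, the following minimal sketch (ours, purely illustrative; practical tools rely on more sophisticated covering-array constructions~\cite{CohenGMC03}) greedily builds a pairwise-covering suite for a toy configuration space:
\begin{verbatim}
# Greedy pairwise sampling over a toy configuration space.
from itertools import combinations, product

features = {"net": ["wifi", "3g"], "theme": ["light", "dark"],
            "lang": ["en", "it", "de"]}
names = sorted(features)
configs = list(product(*(features[n] for n in names)))

def pairs(cfg):
    # the value-pair combinations exercised by one configuration
    return set(combinations(zip(names, cfg), 2))

uncovered = set().union(*(pairs(c) for c in configs))
suite = []
while uncovered:
    best = max(configs, key=lambda c: len(uncovered & pairs(c)))
    suite.append(best)
    uncovered -= pairs(best)

print(len(suite), "of", len(configs), "configurations tested")  # 6 of 12
\end{verbatim}
All two-way interactions are covered by half of the configurations, but the six skipped configurations (and any higher-order interaction specific to them) remain untested in-house - precisely the gap that in-vivo testing targets.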
\textbf{Empirical studies on field failures} Gazzola et al.~\cite{GazzolaMPP17} investigated the nature of field failures by analyzing the bug reports of five applications. They introduce the notion of ``field-intrinsic fault'', that is, a field fault that is inherently hard to detect in-house, before releasing the software. They also identify the reasons why such faults are not detected at testing time. They conclude that there is evidence of a relevant number of faults that cannot be effectively addressed in-house and should be addressed directly in the field. Such findings represent an important motivation for the work presented in this paper.
\section{} We measure differential rotation and meridional flow in the Sun's surface shear layer by tracking the motions of the magnetic network seen in magnetograms from SOHO/MDI and SDO/HMI over solar cycles 23, 24, and the start of 25 (1996-2022). We examine the axisymmetric flows derived from 15-24 daily measurements averaged over individual 27-day Carrington rotations. Variations in the differential rotation include the equatorial torsional oscillation - cyclonic flows centered on the active latitudes, with slower flows on the poleward sides of the active latitudes and faster flows equatorward. The fast flow band starts at $\sim$45$^\circ$ latitude during the declining phase of the previous cycle and drifts equatorward, terminating at the equator at about the time of cycle minimum. Variations in the differential rotation also include a polar oscillation above 45$^\circ$, with faster rotation at cycle maxima and slower rotation at cycle minima. The equatorial variations were stronger in cycle 24 than in cycle 23, but the polar variations were weaker. Variations in the meridional flow include a slowing of the poleward flow in the active latitudes during cycle rise and maximum and a speeding up of the poleward flow during cycle decline and minimum. The slowing in the active latitudes was less pronounced in cycle 24 than in cycle 23. Polar countercells (equatorward flow) extend from the poles down to $\sim$60$^\circ$ latitude from time to time (1996-2000 and 2016-2022 in the south and 2001-2011 and 2017-2022 in the north). Both axisymmetric flows vary in strength with depth. The rotation rate increases inward while the meridional flow weakens inward.

\tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} Solar Velocity Fields, Solar Photosphere, Solar Convection Zone, Rotation, Meridional Flow, Solar Cycle} \end{abstract}

\section{Introduction}
\label{S-Intro}
The Sun's axisymmetric flows (differential rotation and the meridional circulation) are key aspects of the convection zone dynamics and are the principal drivers of the solar activity cycle. The shearing motions of the differential rotation stretch north-south and radially oriented magnetic field into the azimuthal direction, thereby producing the strong toroidal field that eventually is buoyed up to produce the bipolar active regions of the solar activity cycle. The meridional flow at the top of the surface shear layer transports magnetic field to the poles to reverse the polar fields at cycle maximum and build up new polar fields by cycle minimum. Those polar fields appear to determine the strength of the next solar activity cycle \citep{Babcock61, Leighton69, Schatten_etal78, Svalgaard_etal05}. The equatorward meridional flow deeper within the Sun may play a part in the equatorward drift of the active latitudes \citep{Choudhuri_etal95}. These axisymmetric flows are likely produced by the effects of the Sun's rotation on the convective flows and thermal structure in the Sun's convection zone \citep{Busse70, BelvederePaterno76, GlatzmaierGilman82, Hotta_etal15, FeatherstoneMiesch15}. However, \cite{Karak_etal15}, \cite{2019PhDT.......170M} and \cite{Hotta_etal22} have recently shown that Maxwell stresses associated with the Sun's magnetic fields may also play an important part. Variations in these flows occur as consequences of the interactions between the flows and the magnetic field, and conversely the variations in the flows can have consequences for the magnetic field configuration itself.
Ideally we would like to know the variations in both differential rotation and meridional flow at all latitudes and depths within the convection zone over the course of many solar cycles. This would help to answer key questions concerning how the variations result from solar activity (the effects of the magnetic field on the flows) and how the variations influence solar activity (the effects of the flows on the magnetic field). In actuality we have measurements at a good range of latitudes and depths for just the last two cycles. The differential rotation is fairly well characterized throughout the convection zone by global helioseismology (with the exception of a cone of uncertainty extending from the polar regions and expanding inward) \citep{Howe09}. The meridional flow is an order of magnitude weaker and is therefore more poorly characterized - even at the photosphere - but more so deeper into the convection zone, where the equatorward return flow must be \citep{Hathaway12C, Zhao_etal13}. Here we examine variations in these flows over the course of two and a half solar cycles - cycles 23, 24, and the start of 25 - by measuring the flows at a small range of depths in the surface shear layer several times daily using magnetic pattern tracking on magnetograms from the Michelson Doppler Imager (MDI) instrument on the ESA/NASA Solar and Heliospheric Observatory (SOHO) spacecraft \citep{Scherrer_etal95} and from the Helioseismic and Magnetic Imager (HMI) instrument on the NASA Solar Dynamics Observatory (SDO) spacecraft \citep{Scherrer_etal12}. The measurements start in May of 1996 and continue through May of 2022, with a short break in the summer of 1998 when radio contact with SOHO was temporarily lost.

\section{Measurement Method}
\label{S-Obs}
The axisymmetric flows can be measured either by direct Doppler measurements of the photospheric plasma or by tracking photospheric features. Tracked features include white-light intensity features (sunspots, faculae, and granules), spectroscopically derived features (H$\alpha$ filaments and the Ca II network), features identified by spectral line Doppler shifts (supergranules and the acoustic waves used in helioseismology), and features identified by spectral line Zeeman splitting (the magnetic network and its elements). Each method and/or feature gives results for a given range of depths within the Sun. Each has advantages and disadvantages depending on spatial coverage, accuracy, sources of systematic errors, and the flow being measured (differential rotation or meridional flow). Sunspots were used as the earliest tracers of solar rotation and led to the discovery of the latitudinal differential rotation \citep{Carrington59B}. Direct Doppler measurements of solar rotation produced the first indications of possible variations in the rotation rate \citep{Howard76} and led to the discovery of the torsional oscillations \citep{HowardLabonte80} - changes in rotation rate at given latitudes that vary with the 11-year period of the solar cycle. \cite{GilmanHoward84} used six decades (1921-1982) of sunspot measurements from the Mt. Wilson Observatory to find cycle related variations in rotation - an increase in rotation rate just after cycle maximum and another increase near minimum - which are now recognized as manifestations of the torsional oscillation signal at high and low latitudes, respectively.
Helioseismic measurements of the rotation profile with latitude and depth using global oscillation modes now show that the torsional oscillation signal extends deep into the convection zone \citep{Howe09}. Measurements of the meridional flow have been much more difficult, with even the direction of the flow being in question until the 1980s. \cite{HowardLabonte81} found a poleward meridional flow of about 10 m s$^{-1}$ for photospheric magnetic features, with evidence of solar cycle related variations (slower early in the cycle and faster later). \cite{Komm_etal93b} measured the movement of small photospheric magnetic features to find a similar meridional flow and further evidence of variations (slower at cycle maximum and faster at minimum). In previous papers \citep{HathawayRightmire10, HathawayRightmire11, HathawayUpton14, Mahajan_etal21} we have adopted a similar method. Here we measure the differential rotation and meridional flow by tracking the movement of the Sun's magnetic network with our MagTrak program. Tracking the magnetic network pattern has several advantages. These features cover the entire surface of the Sun and persist for hours to days. Tracking their motions provides measurements of both the differential rotation and the meridional flow at a wide range of latitudes and a range of depths within the Sun's surface shear layer. We do this by first mapping full disk magnetograms from SOHO/MDI and SDO/HMI onto heliographic longitude and latitude using a bi-cubic interpolator and then cross-correlating long ($\sim$105$^\circ$ in longitude), thin ($\sim$2$^\circ$ in latitude - the typical width of the supergranules that form the magnetic network) strips of the mapped data with strips from maps at later and earlier times, separated by time lags ranging from 96 minutes to 8 hours. We find the shift in longitude and latitude that maximizes the correlation between the two strips with a precision of about 0.1 pixel or less. In mapping the full disk magnetograms to grids with points equispaced in both latitude and longitude, we make several corrections to the image geometry based on studies of the one-year overlap between SOHO/MDI and SDO/HMI, as well as other reported image issues. The location of disk center in MDI data was found to differ depending upon the orientation of SOHO, with $(x_0,y_0) \rightarrow (x_0+0.45,y_0+0.90)$ when north is up and $(x_0,y_0) \rightarrow (x_0+2.10,y_0+1.00)$ when north is down. (Note: we take the origin of the image to be the lower left corner of the lower left pixel rather than the center of that pixel.) We correct the given latitude at disk center, $B_0$, and the position angle of the rotation axis, $P_0$, using the corrections to the orientation of the Sun's rotation axis given by \cite{BeckGiles05}, along with $P_0 \rightarrow P_0-0.21$ prior to 2003 and $P_0 \rightarrow P_0-0.27$ after 2003. We then correct the plate scale by taking the radius of the image $r_0 \rightarrow r_0\times0.9996$. For HMI data we also take the origin to be at the lower left corner of the lower left pixel and correct $B_0$ and $P_0$ for the changes in the orientation of the Sun's rotation axis given by \cite{BeckGiles05}.
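The core of one tracking step can be illustrated with the following minimal numerical sketch (ours, not the actual MagTrak code; the strip length and noise are arbitrary): two strips are cross-correlated over a range of integer pixel lags and the peak is refined by parabolic interpolation, a standard way to reach the roughly 0.1 pixel precision quoted above.
\begin{verbatim}
import numpy as np

def strip_shift(strip1, strip2, max_lag=10):
    """Pixel shift that best aligns strip2 with strip1."""
    s1 = (strip1 - strip1.mean()) / strip1.std()
    s2 = (strip2 - strip2.mean()) / strip2.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.mean(s1[max_lag:-max_lag] *
                           np.roll(s2, k)[max_lag:-max_lag])
                   for k in lags])
    i = int(np.argmax(cc))
    frac = 0.0
    if 0 < i < len(cc) - 1:  # parabolic sub-pixel refinement
        denom = cc[i - 1] - 2.0 * cc[i] + cc[i + 1]
        if denom != 0.0:
            frac = 0.5 * (cc[i - 1] - cc[i + 1]) / denom
    return lags[i] + frac

rng = np.random.default_rng(1)
strip = rng.standard_normal(480)
print(strip_shift(strip, np.roll(strip, -3)))  # ~ 3.0
\end{verbatim}
In the actual pipeline the shift in longitude (latitude), divided by the time lag, gives the differential rotation (meridional flow) velocity at that latitude.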
Some studies have questioned the accuracy of our measurement technique. \cite{Dikpati_etal10A} and \cite{Guerrero_etal12} examined the effects of diffusion on measurements of the movement of the magnetic pattern by producing artificial data in the form of a diffuse magnetic field with simple dipoles representing active regions, transporting those fields with diffusion and axisymmetric advection, and then measuring the motions. They both conclude that there are large systematic errors in the magnetic feature tracking technique due to the effects of diffusion. Neither of these studies describes precisely how the motions were measured (no mention of time lags or correlation windows), but they do suggest that we should see apparent motion away from the active latitudes. We tested our MagTrak program \citep{HathawayRightmire11} using artificial data that matched the characteristics of the magnetic network itself, advecting magnetic elements with an evolving supergranule flow field. The resulting magnetic pattern is not diffuse but instead faithfully mimics the network pattern seen on the Sun. Our measurements found no evidence of any apparent flows away from the active latitudes. Furthermore, we see no evidence in the actual data of the flow away from the active latitudes suggested by the effects of diffusion. Recently, however, a true source of systematic error has been discovered which was unanticipated by any of these previous tests. In the following section we describe how this error is removed in MagTrak 3.0, along with a number of other improvements to the program.

\section{Improved Measurements from Magnetic Network Feature Tracking}
\label{S-Improvements}
Recently, \cite{Mahajan_etal21} found a spurious signal in flow measurements associated with tracking the Sun's magnetic network. This spurious signal manifests itself as an apparent shift of the magnetic features away from disk center, much like that seen for acoustic waves in time-distance helioseismology studies \citep{Zhao_etal12}. It has the interesting property of reaching a fixed shift within about an hour of solar rotation. The magnitude of this spurious shift as a function of heliocentric angle from disk center can be determined by measuring the apparent shift using three different time lags, as discussed in \cite{Mahajan_etal21}. The correlation tracking measurements can then be corrected by removing this spurious shift from the measured shifts before dividing by the time lag to get flow velocities in m s$^{-1}$. The spurious shift in the north-south direction determined using the HMI data and time lags of 2, 4, and 8 hours is shown in blue in Fig.~\ref{fig:Shift}, while the shift determined using MDI data with time lags of 96, 192, and 480 minutes is shown in black. Note that a shift of 150 km gives a spurious poleward velocity of 5 m s$^{-1}$ for an 8-hour time lag but increases to 20 m s$^{-1}$ for a 2-hour time lag. This spurious shift has influenced previous measurements, introducing large errors for time lags less than 8 hr and small but significant errors for longer time lags. The source of this signal is still somewhat uncertain but is most likely related to the spectral line depth of formation at different angles from disk center. The slight differences between the profiles for MDI and HMI might be attributed to the different spectral lines used, along with the differences in spatial and spectral resolution.
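Because the spurious displacement saturates quickly, it acts essentially as a lag-independent offset, so measurements at several time lags can separate it from the true flow. A minimal sketch of this separation (our reading of the procedure; the displacement values are synthetic, chosen to mimic the 150 km offset and 5 m s$^{-1}$ flow quoted above):
\begin{verbatim}
import numpy as np

lags_s = np.array([2.0, 4.0, 8.0]) * 3600.0  # HMI time lags (s)
shift_m = np.array([186e3, 222e3, 294e3])    # measured N-S shifts (m)

v, s = np.polyfit(lags_s, shift_m, 1)  # slope = flow, intercept = offset
print(f"flow ~ {v:.1f} m/s, spurious offset ~ {s / 1e3:.0f} km")
# flow ~ 5.0 m/s, spurious offset ~ 150 km
\end{verbatim}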
\begin{figure}[htb] \includegraphics[width=1.0\columnwidth]{SystematicShift.png} \caption{The average (spurious) poleward shift as a function of latitude is shown in blue with 2$\sigma$ error bars for the HMI data and in black with 2$\sigma$ error bars for the MDI era. This apparent shift away from disk center must be removed from meridional flow measurements.} \label{fig:Shift} \end{figure}

Further improvements on the measurement method, as described in \cite{Mahajan_etal21}, include broadening the displacement search area for finding the maximum in the cross correlation. This ensures that we are sampling the full range of displacements. We also iteratively shift the data strips by fractional pixels to avoid the ``peak locking'' problem associated with the tendency of these methods to avoid giving half-pixel shifts. We have measured the differential rotation and meridional flow profiles at 255 equispaced latitude positions from pole to pole at 96-minute intervals with the MDI instrument and at 60-minute intervals with the HMI instrument. The several hundred individual measurements made over each 27-day Carrington rotation are averaged and standard errors are calculated. While the 7$^\circ$ tilt of the Sun's rotation axis relative to the plane of the Earth's orbit allows us to see all the way to its north pole in September of each year and all the way to its south pole in March of each year, the foreshortening of our view near the limb makes the polar measurements most uncertain. The differential rotation and meridional flow profiles averaged over nearly 100,000 individual profiles measured with 8-hour, 4-hour, and 2-hour time lags over the 11 years from May 2010 through April 2021 are shown in Fig.~\ref{fig:AveFlows}. The differential rotation profile has a flattened peak at the equator with a slight dip right at the equator. The latitudinal velocity shear reaches a maximum at about 30$^\circ$. The meridional flow profile reaches its maxima of 9 m s$^{-1}$ at about 30$^\circ$. The high latitude meridional flow now appears to drop to zero at about 80$^\circ$ instead of continuing all the way to the poles. This aspect of the meridional flow profile only becomes apparent with the removal of the spurious shift away from disk center. The equatorial differential rotation velocity increases with longer time lags - 30.7 m s$^{-1}$ for 2-hour time lags, 38.1 m s$^{-1}$ for 4-hour time lags, and 40.4 m s$^{-1}$ for 8-hour time lags. The meridional flow speed decreases with increasing time lag, but not as much as was reported in \cite{Hathaway12C}. This is directly attributable to the effects of the spurious systematic shift on the short time lag velocities. The 8-hour time lag average profiles are removed from the individual Carrington rotation profiles to reveal the variations in these flows over solar cycles 23, 24, and the start of 25.

\begin{figure}[htb] \includegraphics[width=1.0\columnwidth]{AverageAxisymmetricFlowsMulti.png} \caption{The average axisymmetric flows. The north/south symmetric differential rotation (positive in the prograde direction) is shown in black with 2$\sigma$ error bars for 8-hour time lags, in gray for 4-hour time lags, and in light gray for 2-hour time lags.
The north/south anti-symmetric meridional flow (multiplied by 10, positive in the northward direction) is shown in dark blue with 2$\sigma$ error bars for 8-hour time lags, in medium blue for 4-hour time lags, and in light blue for 2-hour time lags.} \label{fig:AveFlows} \end{figure}

\section{Variations in the Differential Rotation}
\label{S-DR}
The history of the Carrington rotation averaged profiles of the differential rotation is shown in the top panel of Fig.~\ref{fig:DRhistory}. The history of the variations from the average profile is shown in the central panel. The monthly sunspot numbers (V2.0) are shown for reference in the bottom panel to illustrate the progression of the sunspot cycle. The small black dots in each of the two upper panels mark the locations of the sunspot area centroids in each hemisphere for reference to the locations of the active latitudes.

\begin{figure}[htb] \includegraphics[width=1.0\columnwidth]{DifferentialRotationHistory3Panel.png} \caption{Differential rotation (east-west flow relative to the frame of reference rotating at the Carrington rate and positive in the direction of rotation) as a function of time and latitude is shown in the upper panel. The residual east-west flow found by removing the north/south symmetric average profile is shown in the central panel. The latitudes of the centroids of sunspot areas for each Carrington rotation are shown by the black dots in each upper panel. The monthly average of the daily sunspot number (V2.0) is shown for reference in the bottom panel.} \label{fig:DRhistory} \end{figure}

Variations in the differential rotation are at a level less than about 5\% of the flow speed itself. This is evident in the lack of any obvious variations in the full profile history shown in the top panel of Fig.~\ref{fig:DRhistory}. The variations seen in the residuals relative to the average, shown in the central panel of Fig.~\ref{fig:DRhistory}, have two primary components, one at low latitudes and one at high latitudes. These components may, or may not, be connected. The torsional oscillation (an 11-year oscillation of faster and then slower flow in the active latitudes, first reported by \cite{HowardLabonte80}) is seen straddling the sunspot zones in the center panel of Fig.~\ref{fig:DRhistory}. Faster than average flows are seen on the equatorward side of the active latitudes delineated by the black dots at the latitudes of the sunspot area centroids in each hemisphere. These faster than average flows can be traced to higher latitudes and earlier times, giving rise to the recognition of an extended cycle with several years of overlap between adjacent cycles \citep{MartinHarvey79, Snodgrass87A, Wilson_etal88}. The slower than average flows are seen on the poleward sides of the sunspot zones and become strongest near the equator after the sunspots have disappeared. The second component presents as an oscillation at latitudes above about 45$^\circ$, with faster rotation at times of cycle maxima and slower rotation at times of minima. Both of these oscillating components vary significantly from cycle to cycle. The faster rotation near the equator was stronger in cycle 24 than it was in cycle 23. The faster rotation at high latitudes near cycle maximum was much stronger in cycle 23 than it was in cycle 24, while the slower rotation at high latitudes near cycle minimum was much stronger at cycle 24/25 minimum than it was at cycle 23/24 minimum. We also see significant north/south differences in these flow variations.
The faster rotation at high latitudes near cycle maximum was stronger in the north in cycle 23 but stronger in the south in cycle 24. Notably, this increase in rotation in the north for cycle 24, while present, is almost imperceptible in Fig.~\ref{fig:DRhistory}. The slower rotation at high latitudes near cycle minimum was stronger in the south at cycle 23/24 minimum but stronger in the north at cycle 24/25 minimum. Similar cycle-to-cycle and hemispheric differences in the differential rotation, as well as other features seen in Fig.~\ref{fig:DRhistory}, have also been reported by \cite{Lekshmi_etal18} and \cite{Getling_etal21} from helioseismic studies. The variations in the differential rotation can be quantified by fitting each latitudinal profile with vector spherical harmonics. The coefficient histories for the first three axisymmetric and north/south symmetric components are shown in Fig.~\ref{fig:DRhistoryLegendre}, along with the monthly sunspot numbers for reference.

\begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{LegendreCoefficientHistoriesDR_4panel.png} \caption{Legendre polynomial fit coefficient histories for the differential rotation. The top panel shows the $T^0_1$ coefficient, which gives the solid body rotation relative to the Carrington rotation frame of reference. Values for each Carrington rotation are shown with 2$\sigma$ error bars. The second and third panels show the $T^0_3$ and $T^0_5$ coefficients, respectively. The monthly average of the daily sunspot number (V2.0) is shown for reference in the bottom panel.} \label{fig:DRhistoryLegendre} \end{figure}

The $T_1^0$ coefficients shown in the top panel give the solid body rotation relative to the Carrington rotation frame of reference. They exhibit clear solar cycle related variations, with faster rotation at cycle maxima and slower at cycle minima. These variations must represent a periodic radial transfer of angular momentum between different layers within the Sun's convection zone. The rotation rate peak in cycle 24 was only slightly smaller than that for the much larger cycle 23. The rotation rate minimum at cycle 24/25 minimum was significantly slower than at the previous two minima. The $T_3^0$ and $T_5^0$ coefficients characterize the differential rotation. While $T_5^0$ appears to be directly related to the solar activity cycle, $T_3^0$ indicates a much stronger (more negative) latitudinal shear in cycle 24. This difference can be attributed to the weaker polar spin-up seen in cycle 24 in Fig.~\ref{fig:DRhistory}. The $T_2^0$ coefficient (not shown here) characterizes the north/south asymmetry in the differential rotation. It shows a faster northern hemisphere throughout cycle 23 and a faster southern hemisphere throughout cycle 24 and into cycle 25. These variations in differential rotation are thought to be consequences of feedback from the magnetic structures of the activity cycle (active latitudes and polar fields). This includes flows associated with thermal structures \citep{Spruit03} and Maxwell stresses associated with the magnetic field itself \citep{Schussler81, Yoshimura81, Rempel12}. The variations we observe may help to further constrain the mechanisms involved.
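The projection behind these coefficients can be sketched as follows (our own minimal version, with arbitrary normalisation and a synthetic profile standing in for a measured one); for axisymmetric east-west flows the toroidal components $T^0_\ell$ multiply basis functions proportional to $\partial_\theta P_\ell(\cos\theta)$:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as L

lat = np.deg2rad(np.linspace(-88.0, 88.0, 255))  # latitude grid
theta = np.pi / 2.0 - lat                        # colatitude
x = np.cos(theta)

def t_basis(l):
    # d/dtheta P_l(cos theta) = -sin(theta) * P_l'(cos theta)
    c = np.zeros(l + 1); c[l] = 1.0
    return -np.sin(theta) * L.legval(x, L.legder(c))

B = np.column_stack([t_basis(l) for l in (1, 3, 5)])
profile = 25.0*t_basis(1) - 10.0*t_basis(3) + 3.0*t_basis(5)
coef, *_ = np.linalg.lstsq(B, profile, rcond=None)
print(coef)  # recovers [25, -10, 3]
\end{verbatim}
An analogous least-squares projection onto the poloidal basis yields the $S^0_\ell$ coefficients used for the meridional flow below.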
\section{Variations in the Meridional Circulation}
\label{S-MFx}
The history of the Carrington rotation averaged profiles of the meridional flow is shown in the top panel of Fig.~\ref{fig:MFhistory}. The history of the residual variations relative to the average profile is shown in the central panel. The monthly sunspot numbers (V2.0) are shown for reference in the bottom panel to illustrate the progression of the sunspot cycle. The small black dots in each of the two upper panels mark the locations of the sunspot area centroids in each hemisphere for reference to the locations of the active latitudes.

\begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{MeridionalFlowHistory3Panel.png} \caption{Meridional flow (north-south flow, positive northward) as a function of time and latitude is shown in the upper panel. The residual meridional flow found by removing the north/south antisymmetric average profile is shown in the central panel. The latitudes of the centroids of sunspot areas for each Carrington rotation are shown by the black dots in each upper panel. The monthly average of the daily sunspot number (V2.0) is shown for reference in the bottom panel.} \label{fig:MFhistory} \end{figure}

The variations in the meridional flow are relatively large and are apparent even in the history of the individual flow profiles (top panel in Fig.~\ref{fig:MFhistory}). The residuals relative to the average profile (central panel in Fig.~\ref{fig:MFhistory}) show a slowing of the poleward meridional flow in both hemispheres during the rise and maximum of cycle 23. This slowing during rise and maximum is much less evident in cycles 24 and 25 and appears to be concentrated in the sunspot zones. Note that this is a slowing of the meridional flow over a range of latitudes, including latitudes on the equatorward sides of the sunspot zones. We do not see inflows toward the sunspot zones. Inflows would be characterized by an increased poleward flow on the equatorward sides of the sunspot zones, which is not present here. (This is likely a consequence of the depth associated with these measurements - the middle of the surface shear layer rather than nearer to the surface.) The poleward flow increases during the declining phase of each cycle and the approach to minimum. This increase was stronger in cycle 24 than it was in cycle 23. In addition to the changes in the poleward flow seen in the lower, active latitudes, we also see polar countercells - equatorward flow from the poles down to about 60$^\circ$ latitude appearing from time to time. These countercells can be seen in the full profiles shown in the upper panel of Fig.~\ref{fig:MFhistory}. We see a countercell in the south fully established at the start of the dataset in 1996. This countercell shrinks and disappears by 2001/2002. As this southern countercell disappears, a countercell forms in the north, extends down to 60$^\circ$ by 2006, and then shrinks and disappears by 2011. From 2011 to 2015 there are no countercells, but a cell forms in the south in 2016 and then in the north in 2017, and both continue to exist until the end of the dataset in mid-2022. These countercells are likely real features. They persist for years but come and go without any apparent connection to instrumental changes, spacecraft orientation, or solar magnetic fields. The presence of these countercells and the low latitude variations in the meridional flow can have consequences for the solar dynamo \citep{Dikpati_etal04, Karak10}.

\begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{LegendreCoefficientHistoriesMF_4panel.png} \caption{Legendre polynomial fit coefficient histories for the meridional circulation. Values for each Carrington rotation are shown with 2$\sigma$ error bars.
The top panel shows the $S^0_1$ coefficient, which gives a meridional flow with one cell from pole to pole (positive toward the north). The second panel shows the $S^0_2$ coefficient, which gives two cells from pole to pole, with positive values indicating poleward flow in each hemisphere. The third panel shows the $S^0_4$ coefficient, which gives four cells from pole to pole, with negative values giving flow away from the poles at high latitudes. The monthly averages of the daily sunspot number (V2.0) are shown for reference in the bottom panel.} \label{fig:MFhistoryLegendre} \end{figure}

The variations in the meridional flow can also be quantified by fitting each latitudinal profile with vector spherical harmonics. The coefficient histories for three key components are shown in Fig.~\ref{fig:MFhistoryLegendre}, along with the monthly sunspot numbers for reference. The $S_1^0$ coefficients in the top panel represent a single meridional circulation cell extending from pole to pole, with positive values indicating flow to the north across the equator. This coefficient is near zero on average (0.4 m s$^{-1}$) but does indicate small but significant variations that give both cross-equator flow and north/south asymmetry to the poleward meridional flow. Positive values indicate stronger poleward flow in the northern hemisphere. The $S_2^0$ coefficients in the second panel characterize the dominant two cell meridional circulation, with positive values indicating poleward flow in each hemisphere. The average amplitude of this flow component is 9.6 m s$^{-1}$. These coefficients show the slowing of the poleward flow during the rising phase and maximum of cycle 23, with a less pronounced slowing during the rise of cycle 24. The $S_4^0$ coefficients in the third panel characterize the four cell meridional circulation, with positive values indicating poleward flow at high latitudes and equatorward flow at low latitudes. The average amplitude of -3.0 m s$^{-1}$ is enough to effectively stop the poleward meridional flow at about 80$^\circ$ latitude and to push the peak in the average meridional flow profile from 45$^\circ$ down to about 30$^\circ$, as shown in Fig.~\ref{fig:AveFlows}. The more negative values starting in 2017 are large enough to produce the countercells in both hemispheres.

\begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{MeridionalFlowHistoryCrossEquator.png} \caption{The meridional flow and hemispheric asymmetries in sunspot area. The asymmetry in hemispheric sunspot area, (North - South)/(North + South), is shown multiplied by 30 with the black dots near the equator.} \label{fig:CrossEq} \end{figure}

The cross-equator flow holds some interest in terms of how it might be related to hemispheric differences in solar activity \citep{Komm22}. In Fig.~\ref{fig:CrossEq} we superimpose a measure of hemispheric asymmetry - the difference in sunspot area between the two hemispheres divided by the sum of the sunspot areas. We find periods (2012-2016) when there is flow across the equator away from the dominant hemisphere (northward flow away from the stronger south and southward flow away from the stronger north), but there are also times when the opposite is seen (2017). A statistical test of the relationship between cross-equator flow and hemispheric sunspot area asymmetry indicates a weak, but significant, anti-correlation - i.e., flow across the equator away from the dominant hemisphere.
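Such a test can be sketched as follows (synthetic series shown for definiteness; the actual inputs are the per-rotation $S_1^0$ values and the sunspot area asymmetry):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 340                                # ~ one point per Carrington rotation
asym = rng.standard_normal(n)          # (N - S)/(N + S) sunspot area
s10 = -0.2 * asym + rng.standard_normal(n)  # weak built-in anti-correlation

r, p = stats.spearmanr(asym, s10)
print(f"Spearman r = {r:.2f}, p = {p:.1e}")
\end{verbatim}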
This flow away from the more active hemisphere is opposite to the flows toward the active hemisphere found by \cite{Komm22}. The likely explanation for this discrepancy is the difference in flows around active regions at different depths - inflows near the surface and outflows at greater depths.

\section{Variations with Depth}
\label{S-Depth}
As previously described, we make measurements of the axisymmetric flows using three different time lags for both the MDI and the HMI data. This is necessary for our determination of the systematic shift, but it also provides us with information about the flows at various depths within the surface shear layer. Measurements at longer time lags are dominated by the magnetic pattern associated with larger, longer-lived convection cells that extend to greater depths within the Sun. We can estimate the depths associated with each time lag by comparing the equatorial rotation rate to that found from global helioseismology at depths within the surface shear layer. This comparison (cf. \cite{Hathaway12C}) indicates that the measurements made with the 8-hour time lags represent flows about 25 Mm deep, while the 4-hour time lag measurements represent 22 Mm and the 2-hour time lag measurements represent 18 Mm.

\begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{MagneticFeatureMotionRadialGradients.png} \caption{The radial gradients in differential rotation (top panel) and meridional flow (bottom panel) in m s$^{-1}$ Mm$^{-1}$. Positive values for the differential rotation indicate faster rotation going inward. Positive values for the meridional flow indicate faster northward flow going inward.} \label{fig:GradientHistory} \end{figure}

Subtracting the differential rotation and meridional flow profiles measured at shorter time lags from those measured at 8-hour time lags and dividing by the difference in depth gives a measure of the radial gradients in these flows. Fig.~\ref{fig:GradientHistory} shows these gradients using the 4-hour time lag measurements from HMI and the 192-minute time lag measurements from MDI as the shorter time lags. Virtually the same results are found using the even shorter time lags - 2 hours for HMI and 96 minutes for MDI. These radial gradient measurements show that the rotation rate increases inward while the meridional flow speed decreases inward.
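With this calibration the gradient estimate reduces to a finite difference; for example, using the equatorial rotation values quoted in Sec.~\ref{S-Improvements}:
\begin{verbatim}
# Finite-difference radial gradient between two time-lag depths.
depth_8hr, depth_4hr = 25.0, 22.0  # Mm
v_8hr, v_4hr = 40.4, 38.1          # equatorial rotation (m/s)

grad = (v_8hr - v_4hr) / (depth_8hr - depth_4hr)
print(f"{grad:+.2f} m/s per Mm")   # positive: faster rotation inward
\end{verbatim}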
Fig.~\ref{fig:GradientHistory} shows that, in addition to these well known variations with depth, there are variations in these gradients with both time and latitude. It also reveals artifacts which we don't fully understand. The MDI era data suggest that the rotation rate gradient is stronger in the north than in the south, while the HMI era data indicate that the two hemispheres are very similar. The MDI era data show an annual increase in the gradient at nearly all latitudes, while this signal is absent from the HMI era data. This is an obvious artifact. Both MDI and HMI data show an enhanced gradient in the rotation rate at about 28$^\circ$. This enhancement does not drift equatorward with the active latitudes, but it fades away during the HMI era - suggesting that it may not be an artifact. The radial gradients in the differential rotation do not reveal the torsional oscillation signal. This supports the findings from helioseismology indicating that these flow variations extend relatively unchanged through the surface shear layer \citep{Howe_etal00}. The radial gradient in the meridional flow also holds surprises. During the MDI era the slowing of the meridional flow is evident at all latitudes. During the HMI era the slowing of the meridional flow appears to be concentrated in the active latitude bands for both cycles 24 and 25.

\section{Conclusions}
\label{S-Conclude}
We have used our improved MagTrak program for measuring the differential rotation and meridional flow indicated by the motions of the magnetic network features to determine the latitudinal profiles of the flows several times each day from mid-1996 through mid-2022. This dataset covers the entirety of cycles 23 and 24 and catches the rise of cycle 25. Our findings confirm several aspects of these flows and their variations and introduce new features. Fig.~\ref{fig:MagBfly} shows the axisymmetric flow variations along with the longitudinally averaged magnetic field to help guide our conclusions.

\begin{figure}[htb] \centering \includegraphics[width=1.0\columnwidth]{AxisymmetricFlowHistory3Panel.png} \caption{Axisymmetric flow variations and associated magnetic field variations. The differential rotation flow variations are shown in the top panel. The meridional flow is shown in the bottom panel. The evolution of the Sun's surface magnetic field averaged over longitude for each Carrington rotation is shown for reference in the central panel.} \label{fig:MagBfly} \end{figure}

Our measurements of the differential rotation find the torsional oscillations associated with the sunspot zones and their extensions to higher latitudes earlier than the appearance of sunspots. The cyclonic nature of these flow variations is consistent with either their association with inflows toward the sunspot zones \citep{Spruit03} or Maxwell stresses due to the presence of magnetic fields \citep{Schussler81, Yoshimura81, Rempel12}, and would suggest that they are a response to the activity and not a cause of it. However, we find that the active latitude variations were stronger in cycle 24 than in cycle 23, in spite of the fact that cycle 24 was the weaker cycle. We might expect stronger torsional oscillation flows to be associated with the stronger inflows or magnetic structures in the more active cycle. We also find a weak association with cross-equator flow directed away from the more active hemisphere - an indication of outflow from, not inflow to, the activity bands at the depths we probe. The apparent lack of any signal associated with the torsional oscillation in the radial gradient data shown in Fig.~\ref{fig:GradientHistory} suggests that the torsional oscillation flows don't change with depth. Yet the change from inflows to outflows with depth - if that is the source - would suggest a more significant change. The high latitude spin-up at cycle maxima is most prominent in the north in cycle 23. It is seen weakly in the south in cycle 24 but is virtually nonexistent in the north in that cycle. The spin-ups don't appear to be associated with changes in the meridional flow - a faster poleward flow might tend to spin up the poles, but we see a slowing of the high latitude meridional flow at the maxima in cycle 23 and a speed-up at high latitudes in the north in cycle 24 - the opposite of what would be expected if the meridional flow were the source of this signal. These observations suggest that neither the low latitude nor the high latitude torsional oscillations are caused by the Coriolis force acting on meridional flows.
The meridional flow is known to play a dominant role in transporting magnetic fields poleward, as seen in a multitude of surface magnetic flux transport studies \citep{Jiang_etal14, Wang17}. We now find that the meridional flow tends to disappear above about 80$^\circ$ - a feature employed ad hoc in some surface flux transport models. We also find that the speed of the meridional flow varies by 10-20\% and, more importantly, exhibits polar countercells from time to time. The effects of these meridional flow variations are not immediately obvious when comparing the meridional flow and the magnetic field histories shown in Fig.~\ref{fig:MagBfly}. Further experimentation with these flows in surface flux transport models is needed to explore these connections.

\section{Acknowledgements}
DHH and SSM are supported by NASA contract NAS5-02139 (HMI) to Stanford University. LAU was supported by NASA Heliophysics Living With a Star grants NNH16ZDA010N-LWS and NNH18ZDA001N-LWS and by NASA grant NNH18ZDA001N-DRIVE to the COFFIES DRIVE Center managed by Stanford University. HMI data used in this study are courtesy of NASA-SDO and the HMI science team. MDI data used in this study are courtesy of NASA/ESA-SOHO and the MDI science team.

\bibliographystyle{Frontiers-Harvard}
\section{Introduction and outline}
The Standard Model (SM) is successful in describing the strong suppression of FCNC and CP violating processes, but this success strongly relies on the pattern of fermion masses and mixing angles taken from experiment. It has long been a major theoretical challenge to find extensions of the SM that address the origin of the Yukawa couplings and simultaneously solve the hierarchy problem of the SM, without conflicting with the FCNC and CP violation data. The flavour structure of the new physics needed to explain the pattern of the Yukawa matrices also has to control the new physics at the TeV scale that protects the Higgs potential from large radiative corrections, so that the new sources of FCNC and CP violation are strongly suppressed. It is an old and interesting proposal that the flavour dynamics and the hierarchy problem can be simultaneously addressed in supersymmetric models with spontaneously broken horizontal gauge symmetries and the Froggatt-Nielsen (FN) mechanism for Yukawa couplings \cite{fn,ns,ir,binetruy,dps,dps2,Chankowski:2005qp,buras}. An extensive body of theoretical and phenomenological work shows that such models with Abelian or non-Abelian \cite{nonabelian1,nonabelian2} horizontal symmetries can correctly reproduce the pattern of Yukawa matrices. At the same time, they control the flavour structure of the soft supersymmetry breaking terms in the gravity mediation scenario and can be compatible with the very strong experimental constraints from the FCNC and CP violation sector, without the need to raise the scale of sfermion masses beyond that needed to solve the little hierarchy problem. However, this compatibility often requires a restricted range of supersymmetric parameters and/or some additional structural assumptions \cite{ns,dps2,kawamura} (see \cite{Lalak:2010bk} for a recent discussion). In general, there is not much room for manoeuvre and one may expect FCNC to be close to the present bounds. More recently, it has been proposed that the pattern of Yukawa matrices and the suppression of FCNC effects in supersymmetric theories can be understood as solely due to strong wave function renormalisation (WFR models) of the matter fields, superimposed on an initial flavour anarchical structure at the very high (Planck) scale $M_0$ \cite{Nelson:2000sn}. The origin of such effects could be RG running down to some scale $M$ a few orders of magnitude below $M_0$, with large anomalous dimensions of the matter fields generated by the coupling of the MSSM sector to a conformal sector \cite{Nelson:2000sn,Kobayashi:2001kz,Nelson:2001mq}, or different localisation of different matter fields in a (small) extra dimension introduced just for flavour \cite{Choi:2003fk,nps}. The latter idea has also been extensively discussed as a solution to the flavour problem in non-supersymmetric Randall-Sundrum type models, with strongly warped extra dimensions \cite{csaki}. As already noticed in the original paper by Nelson and Strassler, the predictions of the WFR mechanism in supersymmetric models resemble those of the flavour models based on the FN mechanism, with horizontal abelian symmetries, with all SM fermions carrying charges of the same sign and with one familon field. Indeed, the predictions of the two approaches for the Yukawa matrices are identical, after proper identification of the corresponding parameters.
There is a finite number of FN supersymmetric models (of the horizontal charge assignments) with abelian horizontal symmetry, one familon field and all fermion charges of the same sign that are a) theoretically consistent and b) correctly describe quark and lepton masses and mixings. Each horizontal charge assignment can be identified with concrete values of the set of free parameters in the WFR approach, with the same predictions for the Yukawa matrices. Thus, using the previous results on the FN models we can easily infer a viable set of WFR models. We point out that this set is likely to be the complete set of such models if we require gauge coupling unification. The magnitude of the FCNC and CP violation at the electroweak scale is determined by the coefficients of the dimension 6 operators in the effective SM lagrangian obtained after integrating out the supersymmetric degrees of freedom \cite{masiero}. As we discuss below in detail, the two approaches differ significantly in their predictions for the suppression factors of some of the dimension 6 operators. It is therefore of some interest to compare other predictions, in particular for the FCNC and CP violation suppression, of the WFR models with the predictions of the FN models that successfully describe fermion masses and mixing. This is the purpose of this note. It is easy to make such a comparison for each pair of the models introduced above. However, the FN models that are successful in the Yukawa sector also include models which have no correspondence to the WFR approach, like models with charges of both signs or models with several U(1)'s. Here the comparison is less straightforward, but one can see some general differences. In Sec.~\ref{structure}, putting aside the potential origin of the strong WFR effects that could be responsible for the hierarchy of Yukawa couplings, we compare the structure of the flavour violating operators in the two approaches from a 4d point of view, and discuss their phenomenological predictions. We draw attention to certain important structural differences in favour of WFR, such as no distinction in wave function renormalisation between fermions and antifermions, no $D$-term contribution to sfermion masses and no problem with uncontrolled coefficients of order unity. We notice that whereas significant suppression of FCNC is achieved in the squark sector, the constraints in the leptonic sector coming from $\mu \to e \gamma$ are still difficult to satisfy. Sec.~\ref{uni} is devoted to the discussion of gauge coupling unification in the WFR model. A stunning coincidence with the Green-Schwarz anomaly cancellation conditions in the horizontal symmetry models is pointed out. In Sec.~\ref{origin} we discuss in some detail the possible origin of the strong WFR effects. For the extra dimensional interpretation, we comment on the differences and the benefits compared to RS, and propose a CFT origin of the electron mass which has the virtue of decoupling the $A$-term of the electron from its Yukawa coupling. The end result is a strong suppression of the leptonic FCNC violations ($\mu \to e \gamma$) compared to the 4d discussion in Sec.~\ref{structure}. Sec.~\ref{conclusions} contains our conclusions.
\section{Horizontal symmetry versus WFR: structure and predictions}
\label{structure}
We consider effective supersymmetric models with softly broken supersymmetry, described by a K\"ahler potential and a superpotential, below the flavour symmetry breaking scale $M$ but above the soft supersymmetry breaking scale $M_{susy}$. The flavour structure may be present in the kinetic terms, in the (in general, non-renormalisable) superpotential and in the pattern of soft terms. We concentrate only on models with positive FN charges, which are relevant for the WFR case. The effective action is determined by
\begin{eqnarray}
&& W = \epsilon^{q_i+u_j+h_u} (Y^U_{ij} + A^U_{ij} X ) Q_i U_j H_u + \epsilon^{q_i+d_j+h_d} (Y^D_{ij} + A^D_{ij} X ) Q_i D_j H_d \nonumber \\
&&+ \epsilon^{l_i+e_j+h_d} (Y^E_{ij} + A^E_{ij} X ) L_i E_j H_d \nonumber \\
&& K = \epsilon^{|q_i-q_j|} (1 + C_{ij} X^{\dagger} X ) Q_i^{\dagger} Q_j + \cdots \ , \label{h1}
\end{eqnarray}
where $\epsilon = \theta/M$, with $\theta$ a chiral (super)field of $U(1)$ charge $-1$, $X = \theta^2 F$ is the SUSY breaking spurion and all flavour matrix elements $Y^U_{ij}$, etc. are considered to be of order one. The family charges of the fermion superfields are defined as $q_i$ for the flavour components of the left-handed doublet $Q_L$, and $u_i$ and $d_i$ for the flavour components of the (left-handed) quark singlet fields $U^c$ and $D^c$, the charge conjugates of the right-handed flavour triplets $U_R$ and $D_R$, respectively, and similarly for leptons. Horizontal charges are defined in some electroweak basis. In that basis, flavour mixing is present also in the kinetic terms. However, the rotation to the canonical basis does not change the leading powers of $\epsilon$ in the rest of the lagrangian (we assume all coefficients $C_{ij}$, $Y_{ij}$ and $A_{ij}$ to be of $\mathcal O(1)$) and we shall always refer to the canonical basis. In the WFR case the effective action at the scale $M$ is determined by
\begin{eqnarray}
&& W = (Y^U_{ij} + A^U_{ij} X ) Q_i U_j H_u + (Y^D_{ij} + A^D_{ij} X ) Q_i D_j H_d \nonumber \\
&&+ (Y^E_{ij} + A^E_{ij} X ) L_i E_j H_d \nonumber \\
&& K = \epsilon^{- 2 q_i} Q_i^{\dagger} Q_i + C_{ij} X^{\dagger} X Q_i^{\dagger} Q_j + \cdots \ . \label{h2}
\end{eqnarray}
Here the factors $\epsilon^{- 2 q_i}$ are the wave function renormalisation factors, originating from the physics between $M_0$ and $M$, written in a notation suitable for the comparison of the two approaches. Any order unity flavour mixing in the kinetic terms at the scale $M_0$ has already been rotated away. After rescaling of the wave functions $Q_i \to \epsilon^{q_i} Q_i$, etc. (also including the possibility of rescaling the Higgs fields), the effective action in the WFR case is given by
\begin{eqnarray}
&& W = \epsilon^{q_i+u_j+h_u} (Y^U_{ij} + A^U_{ij} X ) Q_i U_j H_u + \epsilon^{q_i+d_j+h_d} (Y^D_{ij} + A^D_{ij} X ) Q_i D_j H_d \nonumber \\
&& + \epsilon^{l_i+e_j+h_d} (Y^E_{ij} + A^E_{ij} X ) L_i E_j H_d \nonumber \\
&& K = Q_i^{\dagger} Q_i + C_{ij} \epsilon^{q_i+q_j} X^{\dagger} X Q_i^{\dagger} Q_j + \cdots \ . \label{h3}
\end{eqnarray}
The comparison of the two approaches is immediate. For the two models to give identical predictions for the Yukawa couplings at the high scale, the parameters of the supersymmetric WFR models are fixed in terms of the charge assignment in the FN models. However, since the wave function renormalisation does not distinguish between particles and antiparticles, the suppression of the flavour dependent sfermion mass terms is much stronger in the WFR case.
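To make the difference explicit, take the first two generations of quark doublets with charges $q_1=3$, $q_2=2$ (as in the models discussed below). The Yukawa suppression factors $\epsilon^{q_i+u_j+h_u}$ are identical in the two setups, but the flavour off-diagonal soft mass term scales as
\begin{equation}
\left(\tilde m^2\right)_{12} \sim
\begin{cases}
\epsilon^{q_1+q_2}\, m_0^2 = \epsilon^5\, m_0^2 & \textrm{(WFR)}\,,\\
\epsilon^{|q_1-q_2|}\, m_0^2 = \epsilon\, m_0^2 & \textrm{(FN)}\,,
\end{cases}
\end{equation}
i.e., the WFR entry carries an extra suppression $\epsilon^{2\min(q_1,q_2)}=\epsilon^4\approx 2\times 10^{-3}$ for $\epsilon\approx 0.22$.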
A similar observation in the non-SUSY case was made in \cite{davidson}. Actually, the class of FN models that compares directly to WFR models consists of those with only one $U(1)_X$, positive charges and only one familon field of negative charge breaking it, such that all Yukawas are generated by holomorphic couplings to the familon. For any comparison with experimental data we have to be in the basis where the quark mass matrices are diagonal. Since the main experimental constraints come from the down quark sector, it is very convenient to remain in an electroweak basis (for explicit $SU(2)\times U(1)$ gauge invariance), but the one in which the down quark Yukawa matrix is diagonal. Thus, the scalar field terms in the Lagrangian are subject to (appropriate) left and right rotations that diagonalise the down quark Yukawa matrix. Such rotations, acting on the off-diagonal terms in the sfermion mass matrices, do not change their leading suppression factors (powers of $\epsilon$) but generically are the source of additional contributions to the off-diagonal terms, coming from the splitting in the diagonal entries. For FN models, the two obvious sources of the diagonal splitting are potentially flavour dependent order unity coefficients $C_{ii}\neq C_{jj}$ and generically present flavour dependent U(1) $D$-terms. Those additional contributions to the off-diagonal terms are of the order of the rotation angles diagonalising the down quark Yukawa matrix (roughly speaking, of the order of the CKM angles) and are unwelcome. They provide an upper bound (in fact an uncomfortably strong one) on the suppression of the off-diagonal terms. It so happens that in the models discussed here, with all fermions carrying horizontal charges of the same sign, the suppression factors of the original off-diagonal terms are the same as the suppression of the terms originating from the diagonal splitting, so the problem of compatibility with the data is similarly difficult for both components (see \cite{Lalak:2010bk}). However, there are $U(1)$ models with charges of both signs and/or with several $U(1)$ symmetries which do not have a WFR counterpart but are often successful in the Yukawa sector and give strong suppression of the original flavour off-diagonal sfermion mass terms. Still, they face the generic problem mentioned above of the flavour dependent $D$-term contribution to the diagonal masses and of possible diagonal splitting by order unity coefficients uncontrolled by the $U(1)$ symmetry. After rotations, the suppression of the off-diagonal terms in the quark mass eigenstate basis is then similar to that in the models with same sign charges and results in certain tensions in the parameter space of the soft supersymmetry breaking terms \cite{Lalak:2010bk}. It is clear that the WFR approach avoids those problems. There is no $U(1)$ symmetry and no $D$-terms, and the flavour dependent diagonal terms in the sfermion mass matrices are suppressed by powers of $\epsilon$, so there is also no problem of uncontrolled coefficients of order unity. In addition to working in the electroweak basis with diagonal down quarks, for a meaningful comparison with experimental data we have to include all the MSSM-like renormalisation effects for the running from the scale $M$ to the electroweak scale. Finally, the standard analysis of the FCNC and CP violation data is performed in terms of the coefficients of the dimension 6 operators in the effective SM lagrangian obtained after integrating out the supersymmetric degrees of freedom \cite{masiero}.
The coefficients of those operators are calculable in terms of the soft supersymmetry breaking parameters and the discussed above suppression factors have a direct correspondence in the suppression factors of the higher dimension operators. Let us compare the flavour properties of some models of fermion mass hierarchies under the two paradigms of family symmetry and WFR. From our discussion above it is clear that once we have a FN model with all fermion charges of the same sign that correctly reproduces the fermion mass hierarchy it can immediately be translated into a WFR model.\footnote{FN are seemingly more constrained, as some form of anomaly cancelation has to be imposed. We will see in Sec.~\ref{uni} that preservation of the successful MSSM gauge coupling unification in WFR models places very similar constraints on the assignment of "charges"(suppression factors) in the latter case.} In the following we shall compare some of the corresponding pairs of models mentioned above. As we discussed earlier, for a global picture one should also compare the set of viable WFR models with FN models that do not have any WFR correspondence but are successful in the Yukawa sector, too. However, after inclusion of the effects of the splitting on the diagonal and of the $D$-term contributions to the sfermion masses such models give predictions for FCNC very close to the same sign charge models, so we don't discuss them any more. From the point of view of proton decay operators, both approaches can generate some suppression: $U(1)_X$ FN can also completely kill proton decay if for example the lepton charges are $l_i= n_i+ 1/3, e_i = m_i-1/3$, with $n_i,m_i$ integers (all other MSSM charges being integers), since then there is an effective $Z_3$ discrete leptonic symmetry protecting the proton to decay. More generally, both FN and WFR generate some suppression for the first generations due to their large charges. The flavour suppression is parameterised by the variable $\epsilon$ introduced earlier. We have in mind $\epsilon$ to be of the order the Cabbibo angle, $\epsilon\sim 0.22$, but certainly other values can be considered provided one appropriately rescales the charges. Consistent charge assignments have for instance been classified in Refs.~\cite{dps,Chankowski:2005qp}. Here we will consider 3 models:\footnote{A and B are taken from Ref.~\cite{Chankowski:2005qp}, where they are called models 1 and 5 respectively, model C was studied in Ref.~\cite{dps}.} \begin{gather} q=u=e=(3,2,0)\,,\quad d=\ell=(2,0,0)+d_3\,, \tag{Model A}\\ q=u=e=(4,2,0)\,,\quad d=\ell=(1,0,0)+d_3\,,\tag{Model B}\\ q=(3,2,0)\,,\quad u=(5,2,0)\,,\quad d=(1,0,0)+d_3\,,\quad \ell= q + \ell_3\,,\quad e= d-\ell_3 \,.\tag{Model C} \end{gather} In all three cases the horizontal charges of the Higgs fields are zero. Notice that the choice $q_3=u_3=0$ is a requirement for obtaining a heavy top, while the freedom in $d_3$ is related to $\tan\beta $ via the bottom Yukawa: \begin{equation} \epsilon^{-d_3}\,\tan\beta \sim\frac{m_t(M_c)}{m_b(M_c)}\sim \epsilon^{-3}\,. \label{tanb} \end{equation} The resulting Yukawa couplings for model A are displayed in the last column of Tab.~\ref{tabA}. They readily reproduce the observed masses and mixings of the SM fermions for suitable choices of $O(1)$ coefficients. 
\newcommand{\hline\\[-.2cm]}{\hline\\[-.2cm]} \newcommand{\\[-.2cm]\hline\\[-.2cm]}{\\[-.2cm]\hline\\[-.2cm]} \newcommand{\\[-.2cm]\\\hline}{\\[-.2cm]\\\hline} \begin{table} \begin{center} \begin{tabular}{c|ccc} \hline\\[-.2cm] $a$&$\tilde m^2_{a,LL}\,/\,m_0^2$&$\tilde m^2_{a,RR}\,/\,m_0^2$ &$ A_{a}\,/\,m_0\sim Y_a $\\ \\[-.2cm]\hline\\[-.2cm] &$r_q\,\mathbb{1} +\epsm{6}{5}{3}{5}{4}{2}{3}{2}{0}$ &$r_u\,\mathbb{1} +\epsm{6}{5}{3}{5}{4}{2}{3}{2}{0}$ \\[-.8cm] $u$&&&$\epsm{6}{5}{3}{5}{4}{2}{3}{2}{0}$\\[-.8cm] & $r_q\,\mathbb{1} +\epsm{0}{1}{3}{1}{0}{2}{3}{2}{0}$ &$r_u\,\mathbb{1} +\epsm{0}{1}{3}{1}{0}{2}{3}{2}{0}$ \\ \\[-.2cm]\hline\\[-.2cm] &$r_q\,\mathbb{1} +\epsm{6}{5}{3}{5}{4}{2}{3}{2}{0}$ &$r_d\,\mathbb{1} +t_\beta^2\epsm{10}{8}{8}{8}{6}{6}{8}{6}{6}$ \\[-.8cm] $d$&&&$t_\beta \epsm{8}{6}{6}{7}{5}{5}{5}{3}{3}$\\[-.8cm] &$r_q\,\mathbb{1} +\epsm{0}{1}{3}{1}{0}{2}{3}{2}{0}$ &$r_d\,\mathbb{1} +\epsm{0}{2}{2}{2}{0}{0}{2}{0}{0}$ \\ \\[-.2cm]\hline\\[-.2cm] &$r_\ell\,\mathbb{1} +t_\beta^2\epsm{10}{8}{8}{8}{6}{6}{8}{6}{6}$ &$r_e\,\mathbb{1} +\epsm{6}{5}{3}{5}{4}{2}{3}{2}{0}$\\[-.8cm] $e$&&&$t_\beta \epsm{8}{7}{5}{6}{5}{3}{6}{5}{3}$\\[-.8cm] &$r_\ell\,\mathbb{1} +\epsm{0}{2}{2}{2}{0}{0}{2}{0}{0}$ &$r_e\,\mathbb{1} +\epsm{0}{1}{3}{1}{0}{2}{3}{2}{0}$ \\[-.2cm]\\\hline \end{tabular} \caption{Yukawas and soft scalar mass squared matrices for model A \cite{Chankowski:2005qp}: $q=u=e=(3,2,0)$, $\ell=d$ with $d-d_3=(2,0,0)$ and we have used the relation $\tan\beta=\epsilon^{d_3-3}$. The upper row corresponds to a WFR model, while the lower one to a FN one.} \label{tabA} \end{center} \end{table} After inclusion of the renormalisation effects \cite{Louis:1995sp} , the soft mass terms at the electroweak scale are to a good approximation given by \begin{eqnarray} \tilde m_{u, LL, ij}^{2}&\sim& r_q\, m_{1/2}^2\,\delta_{ij}+\hat m_q^2 \epsilon^{|q_i\pm q_j|}\\ \tilde m_{u, RR, ij}^{2}&\sim& r_u\, m_{1/2}^2\,\delta_{ij}+\hat m_u^2 \epsilon^{|u_i\pm u_j|}\\ \tilde m_{u, LR, ij}^{2} &\sim& A_u v\sin\beta\,\epsilon^{q_i+u_j} \end{eqnarray} \begin{eqnarray} \tilde m_{d, LL, ij}^{2}&\sim& r_q\, m_{1/2}^2\,\delta_{ij}+\hat m_q^2 \epsilon^{|q_i\pm q_j|}\\ \tilde m_{d, RR, ij}^{2}&\sim& r_d\, m_{1/2}^2\,\delta_{ij}+\hat m_d^2 \epsilon^{|d_i\pm d_j|}\\ \tilde m_{d, LR, ij}^{2} &\sim& A_d v\cos\beta\,\epsilon^{q_i+d_j} \end{eqnarray} \begin{eqnarray} \tilde m_{e, LL, ij}^{2}&\sim& r_\ell\, m_{1/2}^2\,\delta_{ij}+\hat m_\ell^2 \epsilon^{|\ell_i\pm\ell_j|}\\ \tilde m_{e, RR, ij}^{2}&\sim& r_e\, m_{1/2}^2\,\delta_{ij}+\hat m_e^2 \epsilon^{|e_i\pm e_j|}\\ \tilde m_{e, LR, ij}^{2} &\sim& A_e v\cos\beta\,\epsilon^{\ell_i+e_j} \end{eqnarray} % where we have defined the high scale soft masses $m_{1/2}$, $\hat m_a$ and $A_a$. Baring some additional suppression mechanism, a completely natural theory would require all these terms to be of the same order, and we will henceforth set them all equal to a common mass $m_0$. The terms that are suppressed by powers of $\epsilon$ are multiplied by flavour dependent $\mathcal O(1)$ coefficients that are omitted here for clarity. The charges are all positive or zero, and the positive sign applies to the WFR case whereas the negative one to the FN case. The constants $r_a$ parameterize the leading gauge renormalization and are given approximately by $r_q=6.5$, $r_u=6.2$, $r_d=6.1$, $r_\ell=0.5$ and $r_e=0.15$. 
Yukawa corrections are expected to be important for the third generation but given the unknown $\mathcal O(1)$ coefficients of the tree level soft mass matrices they are irrelevant for our discussion. The resulting soft mass matrices are also displayed in Tab.~\ref{tabA}. Several points deserve to be emphasized. \begin{itemize} \item For WFR all 1st and 2nd generation mass eigenvalues (and, in fact, some 3rd generation ones as well) are predicted from the running of gauge/gaugino loops, while in the FN case the explicit tree level soft masses give non-negligible contribution, particularly to slepton masses. \item Yukawas and chirality changing soft masses ($A$-terms) receive the same suppression, and are in fact the same for both FN and WFR. \item In the LL and RR sectors the off-diagonal masses are more suppressed for WFR than for FN, as explained above. \end{itemize} To compare with experiment, bounds are usually given for the mass insertion parameters $\delta^a_{ij}$ at a reference sfermion mass. They are defined as \begin{equation} \delta^a_{MN,ij}=\frac{\tilde m^{2}_{a,MN,ij}}{\tilde m_{a,M,i}\,\tilde m_{a,N,j}}\,,\qquad \langle\delta^a_{ij}\rangle=\sqrt{\delta^a_{LL,ij}\delta^a_{RR,ij}} \end{equation} for $a=u,d,e$, $M,N=L,R$, $i\neq j$. The expressions are normalized to the diagonal entries $\tilde m_{a,M,i}$. To the $A$ terms one associates analogous parameters (for any $i,j$): \begin{equation} \delta^a_{LR,ij}=(\delta^a_{RL,ji})^*=\frac{\tilde m^2_{a,LR,ij}}{\tilde m_{a,L,i}\,\tilde m_{a,R,j}}\,. \end{equation} Starting with the limits from the hadron sector, we give the bounds and the results for Model A in Tab.~\ref{hadLLA} and \ref{hadLRA}. All bounds in Tab.~\ref{hadLLA} are comfortably satisfied (even for large $\tan\beta$) and in fact would allow for a much smaller squark mass. Notice that in the FN model with analogous charge assignments it is very difficult to satify the bound on $\langle\delta_{12}\rangle$ \cite{Lalak:2010bk}. Since the squark mass mixing between the first two generations is suppressed at most by two powers of $\epsilon$, to satisfy the bound one needs very strong flavour blind renormalisation effects, i.e.~a large ratio of of the initial values of the gluino mass to the squark mass at the very high scale. The chirality flipping mass insertions of Tab~\ref{hadLRA} are more constraining. In particular, the 11 entries are strongly constrained from the EDM measurements of the neutron. Nevertheless, the corresponding predictions of our model for 1 TeV squark mass marginally satisfy the experimental bounds. Turning to leptons, we quote in Tab.~\ref{lepA} the bounds~\footnote{Note that the decay rate depends on the sum $(\delta_{LR,ij})^2+(\delta_{RL,ij})^2$ \cite{masiero}. The $LL$ and $RR$ entries are much less constrained and we will not consider them here.} resulting from LFV decays of the charged leptons, $\mu\to e\gamma$, $\tau\to e\gamma$ and $\tau\to \mu\gamma$ and the theoretical predictions obtained under the assumption of a universal supersymmetry breaking scale $m_0$ at high energy. Then, at the electroweak scale $A_e\sim m_0$ and the typical scale for sleptons is $\tilde m_{sl}=(r_\ell r_e)^{\frac{1}{4}}m_0$. One observes that even for the slepton mass as high as $400$ GeV (corresponding the $m_0=750$ GeV) the contribution to $\mu\to e\gamma$ is not sufficiently suppressed. It is interesting to know how far one can adjust the charges $e_i$ and $\ell_i$ to ameliorate this problem. 
To this end, consider the product \begin{equation} \delta^e_{LR,12}\delta^e_{RL,12}\sim \frac{A_e^2v^2}{\tilde m_{sl}^4}\epsilon^{\ell_1+e_2+\ell_2+e_1}\cos^2\beta \sim \frac{A_e^2m_e m_\mu}{\tilde m_{sl}^4}\,. \label{one} \end{equation} It is therefore clear that this product is independent of the concrete charge assignment and can only be lowered by increasing $\tilde m_{sl}$ or decreasing $A_e$. This means that at least one of the individual contributions is bigger than \begin{equation} \frac{A_e\sqrt{m_e m_\mu}}{\tilde m_{sl}^2}\sim 3.5\times 10^{-5} \label{two} \end{equation} where the numbers are for $\tilde m_{sl}=400$ GeV. This is a robust prediction (up to $\mathcal O(1)$ coefficients), and indeed Tab.~\ref{lepA} shows that it holds in particular for model $A$. A stronger suppression can be obtained only if $A$ terms are smaller than $m_0$ and/or $m_0$ has larger value, i.e. $\tilde m_{sl}>400$ GeV. For instance one can get an acceptable decay rate for $A_e\sim 100$ GeV and $\tilde m_{sl}\sim 400$ GeV. Lowering the slepton mass further requires more and more fine tuning of $A_0$, while $\tilde m_{sl}\sim 400$ GeV implies squark masses of the order of 1.9 TeV which is uncomfortably large for the little hierarchy problem. In conclusion, the leptonic bounds are more constraining than the hadronic ones ( see also \cite{Nelson:2000sn}, \cite{nps}). Finally, let us stress that FN models do possess a similar problem (with identical bounds on the $LR/RL$ sector). In addition, they predict insufficient suppression in the $LL$ and $RR$ sectors. In Sec.~\ref{origin} we will point out a novel mechanism to suppress the $\mu\to e\gamma$ decay rate, opening the possibility to lower the superpartner masses without fine-tuning the leptonic $A$ terms in the WFR models. \begin{table} \begin{center} \begin{tabular}{cc|cc|cc|cc} \hline $a$ &$ij$ &\multicolumn{2}{|c|}{$\delta^a_{LL,ij}$} &\multicolumn{2}{|c|}{$\delta^a_{RR,ij}$} &\multicolumn{2}{c}{$\langle \delta^a_{ij}\rangle$} \\ && Exp.& Th &Exp.&Th.&Exp.&Th.\\ \hline $d$ &12 &0.03 &$8.6\times 10^{-5}$ &0.03 &$9.1\times 10^{-7}\, t_\beta^2 $&0.002 &$8.9\times 10^{-6}\, t_\beta$\\ $d$ &13 &0.2 &$1.8\times 10^{-3}$ &0.2 &$9.1\times 10^{-7}\, t_\beta^2$ &0.07 &$4.0\times 10^{-6}\, t_\beta$\\ $d$ &23 &0.6 &$8.1\times 10^{-3}$ &1.8 &$1.8\times 10^{-5}\, t_\beta^2$ &0.2 &$3.8\times 10^{-4}\, t_\beta$\\ \hline $u$ &12 &0.1 &$8.6\times 10^{-5}$ &0.1 &$8.6\times 10^{-5}\, $ &0.008 &$8.6\times 10^{-5}$\\ \hline \end{tabular} \caption{Bounds on hadronic chirality-preserving mass insertions and results from WFR with model A. Bounds (taken from Tab.~IV of Ref.~\cite{Isidori:2010kg}) are valid for a squark mass of 1 TeV and scale linearly with the latter.} \label{hadLLA} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{cc|cc|cc} \hline $a$ &$ij$ &\multicolumn{2}{|c|}{$\delta^a_{LR,ij}$} &\multicolumn{2}{|c}{$\delta^a_{RL,ij}$} \\ && Exp.& Th. 
&Exp.&Th.\\ \hline $d$ &12 &$2\times10^{-4}$ &$8.1\times 10^{-6}$ &$2\times 10^{-3}$ &$1.8\times 10^{-6}\,$\\ $d$ &13 &0.08 &$8.1\times 10^{-6}$ &0.08 &$3.7\times 10^{-5}\,$\\ $d$ &23 &0.01 &$3.7\times 10^{-5}$ &0.01 &$7.6\times 10^{-4}\,$\\ $d$ &11 &$4.7\times 10^{-6}$ &$3.9\times 10^{-7}$ &$4.7\times 10^{-6}$ &$3.9\times 10^{-7}$\\ \hline $u$ &12 &0.02 &$3.7\times 10^{-5}$ &0.02 &$3.7\times 10^{-5}\, $ \\ $u$ &11 &$9.3\times 10^{-6}$ &$8.1\times 10^{-6}$ &$9.3\times 10^{-6}$ &$8.1\times 10^{-6}$\\ \hline \end{tabular} \caption{Bounds on hadronic chirality-flipping mass insertions and results from WFR with model A. Bounds taken from Tab.~V of Ref.~\cite{Isidori:2010kg} are valid for a squark mass of 1 TeV. While the bounds on the $i\neq j$ ($i= j$) elements grow linearly (quadratically) with the latter, our predictions go down linearly.} \label{hadLRA} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{c|ccc} \hline $ij$ &\multicolumn{3}{|c}{$\delta^e_{MN,ij}$} \\ &Exp. &Th. (LR) &Th. (RL)\\ \hline 12 &$4.8\times 10^{-6}$ &$2.0\times 10^{-5}$ &$9.4\times 10^{-5}$\\ 13 &$1.8\times 10^{-2}$ &$4.3\times 10^{-4}$ &$9.4\times 10^{-5}$\\ 23 &$1.2\times 10^{-2}$ &$8.9\times 10^{-3}$ &$4.3\times 10^{-4}$\\\hline \end{tabular} \caption{Experimental bounds on leptonic mass insertions and results from WFR with model A. Bounds (taken from Tab.~7 of Ref.~\cite{masiero}, using updated bounds on the branching ratios \cite{Amsler:2008zzb}) are valid for a slepton mass of 400 GeV.} \label{lepA} \end{center} \end{table} \section{Unification and wave-function hierarchies} \label{uni} The physical gauge coupling in a supersymmetric field theory is given by \cite{sv,kl} \begin{equation} \frac{4 \pi^2}{g_a^2 (\mu)} \ = \ Re f_a \ + \ \frac{b_a}{4} \log \frac{\Lambda^2}{\mu^2} + \frac{T (G_a)}{2} \log g_a^{-2} (\mu^2) - \sum_r \frac{T_a (r)}{2} \log \det Z_{(r)} (\mu^2) \ , \label{kl1} \end{equation} where \begin{equation} b_a \ = \ \sum_r n_r T_a (r) - 3 T (G_a) \quad , \quad T_a (r) \ = \ Tr_r \ T_{(a)}^2 \ \label{kl2} \end{equation} are the beta function and the Dynkin index of the representation $r$ under the gauge group factor $G_a$, $f_a$ are the holomorphic gauge couplings, $Z_{(r),ij}$ are wave functions of matter fields of flavour indices $i,j$ and the determinant $\det Z_{(r)} (\mu^2)$ is taken in the flavour space. \\ In our case $Z_{(r)} \simeq diag \ (\epsilon^{- 2 q_1^{(r)}} , \epsilon^{- 2 q_2^{(r)}}, \epsilon^{- 2 q_3^{(r)}})$ and therefore \begin{equation} \log \det Z_{(r)} \ = \ - 2 \sum_i q_i^{(r)} \log \epsilon \ , \label{kl3} \end{equation} where $q_i^{(r)}$ are the " $U(1)$ charges" of the matter representations $r = Q,U,D,L,E,H_u,H_d$. Let us define in what follows the quantities \begin{equation} A_a \ = \ - \frac{1}{\log \epsilon} \sum_r \frac{T_a (r)}{2} \log \det Z_{(r)} \ , \label{kl4} \end{equation} which are proportional to the additional contribution to the running coming from a strongly coupled sector, producing the hierarchical wave functions. Notice that usual MSSM unification is preserved if \begin{equation} A_3 \ = \ A_2 \ = \ \frac{3}{5} A_1 \ . \label{kl5} \end{equation} With the field content of MSSM, we find \begin{eqnarray} && SU(3) \ : \qquad A_3 \ = \ \sum_i (2 q_i + u_i + d_i) \ , \nonumber \\ && SU(2) \ : \qquad A_2 \ = \ \sum_i (3 q_i + l_i) + h_u + h_d \ , \nonumber \\ && U(1)_Y \ : \qquad A_1 \ = \ \sum_i (\frac{1}{3} q_i + \frac{8}{3} u_i + \frac{2}{3} d_i + l_i + 2 e_i) + h_u + h_d \ . 
\label{kl6} \end{eqnarray} Notice also that the quantities $A_i$ can be related simply to the determinants of the mass matrices of the quarks and leptons via \begin{eqnarray} && \det (Y_U Y_D^{-2} Y_L^3) \ = \ \epsilon^{\frac{3}{2} (A_1 + A_2 - 2 A_3)} \ , \nonumber \\ && \det (Y_U Y_D) \ = \ \epsilon^{A_3 + 3 (h_u+h_d)} \ . \label{kl7} \end{eqnarray} The reader familiar with the gauged Froggatt-Nielsen $U(1)$ generating Yukawa hierarchies probably recognized already (\ref{kl5})-(\ref{kl7}), see \cite{ir}, \cite{binetruy}, \cite{dps}. It is worth pointing out the interesting analogy with our present case: \begin{itemize} \item In the gauged FN case the quantities (\ref{kl6}) are precisely the coefficients of the $U(1)_X G_a^2$ mixed anomalies, between the gauged $U(1)_X$ and the SM gauge group factors $G_a = SU(3), SU(2), U(1)_Y$. \item In the FN case (\ref{kl5}) represent the universal (for the heterotic strings) Green-Schwarz anomaly cancelation conditions. \end{itemize} In our case, (\ref{kl5}) represent the unification conditions for the gauge couplings at the energy scale where the strong sector decouples from the running. Interestingly enough, even if there is no gauge $U(1)$ symmetry in our case, unification of gauge couplings requires the "charges" determining the wave function renormalisation to satisfy exactly the same constraints as anomaly cancellation for the U(1) charges in the gauged FN case ! \\ By using the results of \cite{binetruy}, \cite{dps} on the structure of quark and lepton masses, one useful relation can be written \begin{equation} A_1 + A_2 - \frac{8}{3} A_3 \simeq 2 (h_u+ h_d) \ . \label{kl8} \end{equation} The unification conditions (\ref{kl5}) lead therefore to $h_u+ h_d=0$. Since in the WFR models all "charges" are positive or zero, this means that $h_u=h_d=0$. Therefore in the extra dimensional interpretation of the WFR models (to be discussed in the next section) both Higgs doublets are localized on the UV brane. \\ Let us notice that, in the FN case, the mixed anomaly conditions (\ref{kl6}) imposed to the model C of Section (\ref{structure}) gives the result $h_u+h_d=0$, $d_3-l_3= 2/3$. A simple solution is $h_u=h_d=0$, $d_3=1,l_3= 1/3$. In this case, the $U(1)_X$ symmetry breaks to a discrete $Z_3^L$ acting on the leptons, which protects proton decay. \\ \section{Extra dimensional model for the WFR} \label{origin} There are various possible origins for the WFR: 4d strongly coupled or higher-dimensional with flavour-dependent wave-function localization. We use here a variant of the RS setup \cite{rs1}, with an UV brane with energy scale $\Lambda_{UV} $ and an IR brane with energy scale $\Lambda_{IR} \sim M_{GUT}$. The fifth dimension is therefore very small and the hierarchy is given by \begin{equation} \epsilon=\frac{\Lambda_{IR}}{\Lambda_{UV}}=e^{-k\pi R} \end{equation} All MSSM fields live in the bulk \cite{tony}. Following \cite{Choi:2003fk}, start with K\"ahler terms ($0<y<\pi R$) \begin{multline} \hat K= e^{(1-2c_{h_u})k y }H_u^\dagger H_u+ e^{(1-2c_{h_d})k y }H_d^\dagger H_d\\ +e^{(1-2c_{q,i})k y }Q^\dagger_iQ_i+ e^{(1-2c_{u_i})k y }U^\dagger_iU_i+ e^{(1-2c_{d_i})k y }D^\dagger_iD_i\\ +\delta(y)k^{-3}X^\dagger X\left( C_{q,ij}Q^\dagger_iQ_i + C_{u,ij}U^\dagger_iU_i + C_{d,ij}D^\dagger_iD_i + C_{h_u}H_u^\dagger H_u+C_{h_d}H_d^\dagger H_d\right) \,. \label{kahler} \end{multline} where $i,j=1,2,3$ running over families and the coefficients are flavour anarchic $\mathcal O(1)$ numbers. 
We have kept only fields with zero modes, the conjugate fields $\phi^c$ have Dirichlet boundary conditions $(--)$ and hence have no zero modes. The leptons have an analogous Lagrangian. Brane localized kinetic terms can also be introduced, even with arbitrary flavour dependence, without changing the outcome. We will introduce a superpotential \begin{multline} \hat W=\delta(y) k^{-\frac{3}{2}}\left( \hat Y^u_{ij}H_u Q_i U_j+\hat Y^d_{ij}H_d Q_i D_j+k^{-1}X \hat A^u_{ij} H_u Q_iU_j + k^{-1}X \hat A^d_{ij} H_d Q_iD_j \right)\\ +\delta(y-\pi R) (k\epsilon)^{-\frac{3}{2}}\left( \hat Y'^u_{ij}\epsilon^{c_{h_u}+c_{q_i}+c_{u_j}}H_u Q_i U_j+\hat Y'^d_{ij}\epsilon^{c_{h_d}+c_{q_i}+c_{d_j}}H_d Q_i D_j \right) \label{super} \end{multline} Notice that we have confined the SUSY breaking spurion $X$ to the UV brane at $y=0$. We have introduced arbitrary dimensionless Yukawa couplings on both branes. After integrating over the extra dimension, the kinetic terms pick up wave function renormalisations \begin{equation} Z_{q}=\frac{1}{(1-2c_q)k}\left( \epsilon^{2c_q-1}-1 \right)\,, \\ \end{equation} and therefore \begin{equation} Z_q \sim \frac{\epsilon^{2c_q-1}}{(1-2c_q)k} \ \ {\rm for } \ c < 1/2 \quad {\rm and} \quad Z_q \sim \frac{1}{(2c_q-1)k} \ \ {\rm for} \ c > 1/2 \ . \end{equation} Notice that \begin{itemize} \item For $c_q<\frac{1}{2}$ the field is localized near the IR brane. We assign it "charges" $q=\frac{1}{2}-c_q>0$ and $q'=0$. \item For $c_q>\frac{1}{2}$ the field is localized near the UV brane. The charges are $q=0$ and $q'=c_q-\frac{1}{2}>0$. \item Exact UV (IR) brane localization is obtained by formally sending $q'$ ($q$) to infinity. \end{itemize} After switching to canonical normalization, this leads to Yukawa couplings \begin{eqnarray} Y_{ij}^u&=& \hat Y^u_{ij}\epsilon^{q_i+u_j+h_u} +\hat Y'^u_{ij}\epsilon^{q'_i+u'_j+h'_u}\,,\label{yuku}\\ Y_{ij}^d&=& \hat Y^d_{ij}\epsilon^{q_i+d_j+h_d} +\hat Y'^d_{ij}\epsilon^{q'_i+d'_j+h'_d}\label{yukd}\,. \end{eqnarray} Each field either suppresses $\hat Y$ or $\hat Y'$, depending on whether it is UV or IR localized. Since we will take the $X$ field localized on the UV brane, the physical soft masses and $A$ terms at the high scale are given by the expressions in Sec.~\ref{structure} with \begin{eqnarray} m_0\sim \frac{|F_X|}{k}\,. \end{eqnarray} We consider the following localisation of the MSSM fields\footnote{Similar localization of the MSSM flavours from a different perspective was also considered recently in \cite{tony}.} : \begin{itemize} \item the first two generations of quarks and leptons are localized near the IR brane. In a holographic 4d interpretation, the first two generations are composite states. \item the top quark is localized on or near the UV brane, whereas bottom and tau are localized near the UV brane or near the IR brane, depending on $\tan \beta$. In the holographic language, the top quark is therefore elementary. \item the two Higgs doublets $H_u,H_d$ are localized near the UV brane and therefore have $h_u,\ h_d=0$. They are then elementary from the 4d holographic point of view. In the scenario below, we will consider a finite $h_d'$ describing a non-negligible "tail" near the IR. \item the spurion $X$ is located on the UV brane \end{itemize} One important point to mention here is that the extra dimensional realisation of the WFR approach allows for certain generalisations. 
They are equivalent (and the analogy with FN is true) only if all Yukawas are localized on the UV brane that is if we neglect the corrections coming from the "tail" of the Higgs fields near the IR brane. By comparing with the standard RS non-SUSY setup with fermion mass hierarchies generated by wave functions overlap, we notice that in the standard RS case, since $\epsilon_{RS} = \frac{\Lambda_{UV}}{\Lambda_{IR}} \sim 10^{-16}$, the bulk masses $c_i$ have to be tuned close to $1/2$ in order not to generate too big hierarchies in the fermion masses. In our case, we choose to work with a very small extra dimension $10^{-3 }\le \epsilon \le 10^{-1}$ and therefore there is no need for such a tuning. Of course, such a small warping does not provide a solution to the hierarchy problem anymore, but since we have low-energy supersymmetry, the strong warping is clearly not needed. Provided $h_u'$ and $h_d'$ are large enough (sharp UV localization), the Yukawa couplings originating from the IR brane (i.e.~the terms proportional to $\hat Y'$ in Eqns.~(\ref{yuku}) and (\ref{yukd})) are always subleading compared to the ones from the UV brane and hence irrelevant. For moderately large values they can become comparable\footnote{This "switching behavior" was exploited in Ref.~\cite{Agashe:2008fe} to generate an anarchical neutrino spectrum and large mixing angles.}, at least for the light generations, and can in fact be exploited to circumvent the $\mu\to e\gamma$ problem pointed out at the end of Sec.~\ref{structure}. For instance, for all 3 generations of leptons IR-localized (small to moderate $\tan\beta$), one has \begin{equation} Y^e_{ij}= \hat Y^e_{ij}\epsilon^{\ell_i+e_j}+\hat Y'^e_{ij}\epsilon^{h'_d}\,, \qquad A^e_{ij}= m_0\hat A^e_{ij}\epsilon^{\ell_i+e_j} \label{YA} \end{equation} Ideally, we would like to suppress the dangerous $A$ terms without suppressing the corresponding Yukawas. This is easy to do: Let us imagine that we increase $\ell_1$ and/or $e_1$ such that $A^e_{12}$ and $A^e_{21}$ are sufficiently small in order to satisfy the bounds for a given slepton mass.\footnote{As a bonus, the $A^e_{11}$ term, responsible for generating an electron EDM, receives additional suppression.} Of course, this will result in a too small electron mass unless we impose that $h_d'$ is responsible for generating $Y^e_{11}$ from the IR brane. We thus choose charges such that \begin{eqnarray} \ell_1+e_1&>& h_d'\\ h_d'& \sim & 5+\ell_3+e_3\\ \ell_2+e_2& \sim & 2+\ell_3+e_3\,. \end{eqnarray} where the last two relations ensure the correct $e-\tau$ and $\mu-\tau$ mass ratios. A possible choice, satisfying also the unification conditions Eq.~(\ref{kl5}), reads \begin{gather} q=(4,2,0)\,,\qquad u=(3,2,0)\,,\qquad e=(5,2,0)\,,\\ d=(5,0,0)+d_3\,,\qquad \ell=(4,0,0)+d_3\,,\qquad h_d'\sim 5+d_3\,, \end{gather} leading to Yukawas \begin{equation} Y^u\sim\epsm{7}{6}{0}{5}{4}{2}{3}{2}{0},\quad Y^d\sim t_\beta \epsm{\underline 8}{7}{7}{\underline 8}{5}{5}{8}{3}{3},\quad Y^e\sim t_\beta \epsm{\underline 8}{\underline 8}{7}{8}{5}{3}{8}{5}{3}\,. \end{equation} The underlined exponents are the ones generated from the new contributions in the second term in Eq.~(\ref{YA}). One sees that only the down and the electron masses are affected by $h_d'$. On the other hand, the A terms are given by \begin{equation} A^u\sim Y^u\,,\quad A^d\sim t_\beta \epsm{12}{7}{7}{10}{5}{5}{8}{3}{3}\,,\quad A^e\sim t_\beta \epsm{12}{9}{7}{8}{5}{3}{8}{5}{3}\,. 
\end{equation} The suppression in the $12$ and $21$ elements of $A^e$ is now sufficient for a slepton mass around 200 GeV. Notice that the FN models, even with multiple $U(1)$'s, have no analogue of this mechanism. Notice that in order to forbid R-parity violating operators we need to impose R-parity as symmetry of the effective action. Once this is done, there are usually still dangerous dimension five operators. In our case, if the triplet Higgs fields are localized on the UV brane along with the doublets, these operators are naturally generated there, and we find \begin{equation} \frac{1}{\Lambda_{UV}} \epsilon^{q_i + q_j + q_l + l_m} Q_i Q_j Q_k L_m \quad , \quad \frac{1}{\Lambda_{UV}} \epsilon^{u_i + u_j + d_k + e_m} U_i U_j D_k E_m \ . \label{xtra2} \end{equation} Due to the localization of the first two generations on the IR brane, we get an additional suppression, as for the UV localized Yukawas of the first two generations, which is enough in order to bring these operators into their experimental bounds \cite{barbier}. Finally, extra dimensional interpretation may shed some light on the stunning coincidence of the anomaly cancelation conditions and the conditions for the gauge coupling unification discussed in the previous section. The charges $q_i = 1/2-c_i$ can be understood, in a holographic 4d picture, as dimensions of CFT operators that the bulk fields couple to. So it seems that the analog of the gauged $U(1)_X$ is actually the SUSY partner of the dilatation current, the R-symmetry current $U(1)_R$. On the other hand, the anomalies $U(1)_R G_a^2$ are indeed related to the beta functions and therefore to the running of the gauge couplings of the SM gauge factors $G_a$. \section{Conclusions} \label{conclusions} Supersymmetric models with WFR reproduce the success of the FN models for fermion masses with mixings, alleviating at the same time their FCNC problems. Whereas from a 4d perspective, the improvement in the quark sector is phenomenologically quite successful, in the leptonic sector there are still problems with $\mu \to e \gamma$. We however showed that in an extra dimensional realization similar to RS but with an IR brane of mass scale of the order of $M_{GUT}$, with the first two generations composite (IR localized) and the third one elementary (UV localized), the problem can be elegantly solved by generating the electron mass by the strong coupling in the CFT sector. Indeed, the A-term for the electron is strongly suppressed since the supersymmetry breaking spurion field is elementary and the corresponding terms in the action (as well as the other soft breaking terms) are localized on the UV brane. More generally, the analogy between FN and WFR is precise in the warped 5d realization when all Yukawa couplings are elementary (UV localized), whereas strong coupling contributions (IR CFT contributions) add new structure compared to the FN setup. As a side comment, we notice that similarly, we can generate a $\mu$-term on the IR boundary with large suppression factor if $h'_u,h'_d \gg 0$ in the sense discussed in Section \ref{origin}. This is of course useful only if for some reason such a term is absent on the UV brane. We showed that whereas the FN gauge $U(1)$ case is constrained by the various gauge anomaly cancelation conditions, in the WFR case most of these conditions re-emerged in Section \ref{uni} as conditions for gauge coupling unification. 
More precisely, the same conditions for the corresponding parameters as the mixed anomaly conditions $A_3 \sim U(1)_X SU(3)^2$, $A_2 \sim U(1)_X SU(2)^2$, $A_1 \sim U(1)_X U(1)_Y^2$ for the U(1) charges appear in the threshold corrections to the gauge couplings (\ref{kl1}), (\ref{kl4}). They are then constrained by the unification of the SM gauge couplings precisely in the same way as the U(1) charges by the universal Green-Schwarz anomaly cancelation conditions in the FN case. The mixed anomaly $U(1)_X^2 U(1)_Y$ does not emerge in the WFR setup however and it is therefore still true that in the WFR case the "charges" $q_i \leftrightarrow c_i $ are less constrained than $U(1)_X$ charges in the FN setup. One should also mention that in the FN case $U(1)_X$ can be broken to discrete symmetries $Z_N$ which can have nice features like suppressing proton decay at acceptable levels. There does not seem to be analog of this phenomenon in the WFR case. On the side of the phenomenological predictions of the WFR scheme and the possibility of its experimental verification, one sees that FCNC effect are much more strongly suppressed than in the FN models. Thus, contrary to the predictions of the FN models, one does not expect the FCNC effects to be close to the present bounds (perhaps with exception of the muon decay). However, there is an interesting correlation between the supersymmetric models for flavour and the pattern of superpartner masses. The WFR scheme predicts all superpartner masses, except the stop masses, in terms of the gluino mass. In particular, also slepton masses are predicted in terms of the gluino mass. Finally, it would be interesting to investigate the issue of flavor violation in F-theory models, where similarly there is an analog of the WFR of generating Yukawa hierarchies \cite{vafa} and a different, gauged FN setup generating them \cite{palti}. \subsection*{Acknowledgments} We thank Tony Gherghetta and Claudio Scrucca for stimulating and helpful discussions. The work presented was supported in part by the European ERC Advanced Grant 226371 MassTeV, by the CNRS PICS no. 3059 and 4172, by the grants ANR-05-BLAN-0079-02, the PITN contract PITN-GA-2009-237920, the IFCPAR CEFIPRA programme 4104-2 and by the MNiSZW grant N N202 103838 (2010-2012). SP thanks the Institute for Advanced Studies at TUM, Munich, for its support and hospitality.
1,941,325,221,218
arxiv
\section{Introduction} A better understanding of the spectroscopic properties of atomic nuclei with an odd number of nucleons still remains a major challenge for both experimental and theoretical low-energy nuclear physics. The existence of an unpaired nucleon in the nucleus implies the observation of many new effects in nuclear dynamics like the weakening of pairing correlations, the increase of level densities around the Fermi level, polarization of collective degrees of freedom, breaking of time reversal symmetry in the intrinsic wave function, and a long list of etc. As a consequence, the microscopic description of an odd-A system is far more challenging than in the traditional even-even case \cite{RS,bender2003,Rob19}. This is manifest in the much slower progress in the implementation of symmetry restoration in odd-A nuclei \cite{bally2014,borrajo2016}. In addition, the quantitative side is strongly affected by tiny details of the nuclear interaction, making this kind of systems the perfect test ground to analyze the suitability of new or existing proposals for effective nuclear interactions/functionals, see \cite{Dobaczewski2015} for a recent analysis focusing on superheavy nuclei. Detailed spectroscopic studies of odd-mass and/or odd-odd nuclei, have already been carried out using microscopic approaches such as the large-scale shell model \cite{caurier2005} and the symmetry-projected generator coordinate method (GCM) \cite{bally2014,borrajo2016}. See \cite{RS} for a general introduction to the latter method. From a computational point of view, systematic applications of these approaches are very demanding, if not impossible, for heavy nuclei, especially when a large number of valence nucleons are involved and/or multiple shape degrees of freedom have to be taken into account in the generator coordinate method (GCM) ansatz. To overcome these difficulties we proposed in Ref.~\cite{nomura2019dodd} (in a study of odd-A Au and Pt and odd-odd Au isotopes) to perform constrained Hartree-Fock-Bogoliubov (HFB) calculations based on the Gogny \cite{Gogny} energy density functional (EDF) with the parametrization D1M \cite{D1M}, to obtain energy surfaces as functions of the ($\beta,\gamma$) quadrupole deformation parameters for the neighboring even-even Pt nuclei. The single-particle energies and occupation numbers were computed for the odd neutron and odd proton in the odd-mass Au and Pt as well as odd-odd Au isotopes. Those quantities were then used, as a microscopic input, to completely determine the interacting boson model (IBM) \cite{IBM} Hamiltonian for the even-even nucleus and most of the parameters of the different boson-fermion coupling terms present in the interacting boson-fermion model (IBFM) \cite{iachello1979,scholten1985,IBFM} and the interacting boson-fermion-fermion model (IBFFM) \cite{brant1984,IBFM} Hamiltonians for the odd-A and odd-odd systems, respectively. Only a few coupling constants of the boson-fermion and the residual neutron-proton interaction terms were treated as free parameters. These parameters were determined so as to reproduce reasonably well the experimental low-lying energy levels of the odd-mass and odd-odd nuclei. Though the method involves a few phenomenological parameters, it allows to study simultaneously the spectroscopy of even-even, odd-mass, and odd-odd nuclei within a unified framework. 
The method reduces significantly the computational cost associated with those calculations and provides the possibility of studying heavy odd and odd-odd nuclei irrespective of their location at the chart of nuclides. In this work, we consider the spectroscopic properties of the odd-odd nuclei $^{124-132}$Cs, using the theoretical framework developed in Ref.~\cite{nomura2019dodd}. The reason for the choice of nuclei is that the $A\approx$130 mass region exhibits a wide variety of structural phenomena. A variety of theoretical models suggested the existence of triaxially-deformed and/or $\gamma$-soft shapes for even-even systems in this mass region \cite{CASTEN1985,sevrin1987,yan1993,vogel1996,mizusaki1997,yoshinaga2004,li2010,nomura2012tri}. A gradual transition, from $\gamma$-soft to nearly spherical shapes, has also been identified \cite{cejnar2010} while several nuclei, such as $^{134}$Ba \cite{casten2000} and $^{128}$Xe \cite{coquard2009}, are suggested to display features of the E(5) critical-point symmetry \cite{iachello2000} of the phase transition. In some odd-odd Cs isotopes, most notably in $^{128}$Cs, chiral doublet bands \cite{frauendorf1997} have been observed \cite{koike2004,grodner2006,starosta2017,grodner2018}. Those bands are associated with nearly degenerate energy levels with equal spins and characteristic electromagnetic properties. The high-spin level structure in the odd-odd nuclei in the mass $A\approx 130$ region, in particular the role of the $(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}$ (neutron hole coupled with proton) or $\nu h_{11/2}\otimes\pi h_{11/2}$ configuration in forming the chiral bands, has been studied by various theoretical approaches, and we particularly mention the IBFFM \cite{brant2004} and shell model \cite{yoshinaga2006,higashiyama2005,higashiyama2007,higashiyama2013} calculations. Furthermore, this mass region represents a challenging testing ground to examine the predictive power of nuclear models for fundamental processes, such as $\beta$-decay and double-$\beta$ decay \cite{zuffi2003,brant2006,mardones2016,engel2017}. Previous phenomenological IBFM and IBFFM spectroscopic studies \cite{arias1985} were also carried out for nuclei in the same mass region considered in this work. The paper is organized as follows. In Sec.~\ref{sec:model}, we outline the theoretical framework used in this study. We begin Sec.~\ref{sec:results}, with a brief discussion of the results obtained for the even-even core nucleus $^{124}$Xe as well as the odd-N and odd-Z nuclei $^{123}$Xe and $^{125}$Cs. In the same section, we discuss the low-energy spectra obtained for the odd-odd systems $^{124-132}$Cs. Moreover, we pay attention to the band structures of higher-spin states to identify features of chirality in some of the considered odd-odd Cs isotopes. Finally, Sec.~\ref{sec:summary} is devoted to the concluding remarks. \section{Theoretical framework\label{sec:model}} \subsection{IBFFM-2 Hamiltonian} Within the employed theoretical scheme, the low-lying structure of the even-even-core nucleus is described in terms of the IBM \cite{IBM}, where correlated pairs of valence nucleons are represented by bosonic degrees of freedom \cite{OAI}. In the IBFM, one unpaired nucleon is explicitly included as an additional degree of freedom to the boson space \cite{iachello1979,scholten1985,IBFM} to handle odd-mass systems. The IBFFM represents a further extension of the IBFM to odd-odd systems that includes, one unpaired neutron and one unpaired proton \cite{brant1984,IBFM}. 
As in our previous study for odd-odd Au isotopes \cite{nomura2019dodd}, we have used a version of the IBFFM that distinguishes between neutron and proton degrees of freedom (denoted hereafter as IBFFM-2). The IBFFM-2 Hamiltonian reads \begin{equation} \label{eq:ham} \hat H_\text{} = \hat H_\text{B} + \hat H_\text{F}^\nu + \hat H_\text{F}^\pi + \hat H_\text{BF}^\nu + H_\text{BF}^\pi + \hat V_\text{res}. \end{equation} where the first term represents the neutron-proton IBM (IBM-2) Hamiltonian \cite{OAI} that describes the even-even core nuclei ($^{124,126,128,130,132}$Xe). The second (third) term is the Hamiltonian for an odd neutron (proton). The fourth (fifth) term corresponds to the interaction Hamiltonian describing the coupling of the odd neutron (proton) to the IBM-2 core. The last term in Eq.~(\ref{eq:ham}) is the residual interaction between the odd neutron and the odd proton. For the boson-core Hamiltonian $\hat H_\text{B}$ in Eq.~(\ref{eq:ham}) the standard IBM-2 Hamiltonian has been adopted: \begin{equation} \label{eq:ibm2} \hat H_{\text{B}} = \epsilon(\hat n_{d_\nu} + \hat n_{d_\pi})+\kappa\hat Q_{\nu}\cdot\hat Q_{\pi} \end{equation} where $\hat n_{d_\rho}=d^\dagger_\rho\cdot\tilde d_{\rho}$ ($\rho=\nu,\pi$) is the $d$-boson number operator while $\hat Q_\rho=d_\rho^\dagger s_\rho + s_\rho^\dagger\tilde d_\rho^\dagger + \chi_\rho(d^\dagger_\rho\times\tilde d_\rho)^{(2)}$ is the quadrupole operator. The parameters of the Hamiltonian are $\epsilon$, $\kappa$, $\chi_\nu$, and $\chi_\pi$. The doubly-magic nucleus $^{100}$Sn is taken as the inert core for the boson space. We have followed the standard way of counting the number of bosons, i.e., the numbers of neutron $N_{\nu}$ and proton $N_{\pi}$ bosons equal the numbers of neutron-hole and proton-particle pairs, respectively. As a consequence, $N_\pi=2$ and $N_{\nu}=6$, 5, 4, 3 and 2 for $^{124,126,128,130,132}$Xe, respectively. In Eq.~(\ref{eq:ham}), the Hamiltonian for the odd nucleon, i.e., $\hat H_{\text F}^{\rho}$ takes the form \begin{equation} \label{one-body} \hat H_\text{F}^\rho = -\sum_{j_\rho}\epsilon_{j_\rho}\sqrt{2j_\rho+1} (a_{j_\rho}^\dagger\times\tilde a_{j_\rho})^{(0)} \end{equation} where $\epsilon_{j_\nu}$ ($\epsilon_{j_\pi}$) and $j_\nu$ ($j_\pi$) stand for the single-particle energy and the angular momentum of the unpaired neutron (proton). On the other hand, $a_{j_\rho}^{(\dagger)}$ ($a_{j_\rho}$) represents the fermion creation (annihilation) operator while $\tilde a_{j_\rho}$ is defined as $\tilde a_{jm}=(-1)^{j-m}a_{j-m}$. For the fermion valence space, we have taken into account the full neutron and proton major shell $N,Z=50-82$, that include the $3s_{1/2}$, $2d_{3/2}$, $2d_{5/2}$, $1g_{7/2}$, and $1h_{11/2}$ orbitals. For the boson-fermion interaction term, $\hat H_{\rm BF}^\rho$ in Eq.~(\ref{eq:ham}), we employ the form that has been formulated within a simple generalized seniority scheme \cite{scholten1985,IBFM}: \begin{equation} \label{eq:ham-bf} \hat H_\text{BF}^\rho = \Gamma_\rho\hat Q_{\rho'}\cdot\hat q_{\rho} + \Lambda_\rho\hat V_{\rho'\rho} + A_\rho\hat n_{d_{\rho}}\hat n_{\rho} \end{equation} where $\rho'\neq\rho$, and the first, second, and third terms are the quadrupole dynamical, exchange, and monopole terms, respectively. The strength parameters of the interaction Hamiltonian are denoted by $\Gamma_\rho$, $\Lambda_\rho$, and $A_{\rho}$. 
As in previous studies \cite{scholten1985,arias1986}, we have assumed that both the dynamical and exchange terms are dominated by the interaction between unlike particles, i.e., between the odd neutron and proton bosons and between the odd proton and neutron bosons. We also assume that for the monopole term the interaction between like-particles, i.e., between the odd neutron and neutron bosons and between the odd proton and proton bosons, plays a dominant role. In Eq.~(\ref{eq:ham-bf}), $\hat Q_\rho$ is the bosonic quadrupole operator identical to the one in the IBM-2 Hamiltonian in Eq.~(\ref{eq:ibm2}) with the same value of the parameter $\chi_{\rho}$. The fermionic quadrupole operator $\hat q_\rho$ reads \begin{equation} \hat q_\rho=\sum_{j_\rho j'_\rho}\gamma_{j_\rho j'_\rho}(a^\+_{j_\rho}\times\tilde a_{j'_\rho})^{(2)}, \end{equation} where $\gamma_{j_\rho j'_\rho}=(u_{j_\rho}u_{j'_\rho}-v_{j_\rho}v_{j'_\rho})Q_{j_\rho j'_\rho}$ and $Q_{j_\rho j'_\rho}=\langle l\frac{1}{2}j_{\rho}||Y^{(2)}||l'\frac{1}{2}j'_{\rho}\rangle$ represents the matrix element of the fermionic quadrupole operator in the considered single-particle basis. The exchange term $\hat V_{\rho'\rho}$ in Eq.~(\ref{eq:ham-bf}) reads \begin{eqnarray} \label{eq:exchange} \hat V_{\rho'\rho} =& -(s_{\rho'}^\+\tilde d_{\rho'})^{(2)} \cdot \Bigg\{ \sum_{j_{\rho}j'_{\rho}j''_{\rho}} \sqrt{\frac{10}{N_\rho(2j_{\rho}+1)}}\beta_{j_{\rho}j'_{\rho}}\beta_{j''_{\rho}j_{\rho}} \nonumber \\ &:((d_{\rho}^\+\times\tilde a_{j''_\rho})^{(j_\rho)}\times (a_{j'_\rho}^\+\times\tilde s_\rho)^{(j'_\rho)})^{(2)}: \Bigg\} + (H.c.), \nonumber \\ \end{eqnarray} with $\beta_{j_{\rho}j'_{\rho}}=(u_{j_{\rho}}v_{j'_{\rho}}+v_{j_{\rho}}u_{j'_{\rho}})Q_{j_{\rho}j'_{\rho}}$. In the second line of the above equation the standard notation $:(\cdots):$ indicates normal ordering. For the monopole term, the number operator for the odd fermion is expressed as $\hat n_{\rho}=\sum_{j_{\rho}}(-\sqrt{2j_{\rho}+1})(a^\+_{j_\rho}\times\tilde a_{j_\rho})^{(0)}$. Finally, we adopted the following form of the residual neutron-proton interaction $\hat V_{\text{res}}$ \begin{equation} \label{eq:res} \hat V_{\text{res}}=4\pi u_{\rm D}\delta({\bf r_\nu}-{\bf r_\pi}) +u_{\rm T}\Bigg\{\frac{3(\sigma_{\nu}\cdot{\bf r_{\nu\pi}})(\sigma_{\pi}\cdot{\bf r_{\nu\pi}})}{r^2_{\nu\pi}}-\sigma_{\nu}\cdot\sigma_{\pi}\Bigg\}, \end{equation} where the first and second terms denote the delta and tensor interactions, respectively. We have found that these two terms are enough to provide a reasonable description of the low-lying states in the considered odd-odd nuclei. Note that by definition ${\bf r_{\nu\pi}}={\bf r_{\nu}}-{\bf r_{\pi}}$ and that $u_{\rm D}$ and $u_{\rm T}$ are the parameters of this term. 
Furthermore, the matrix element $V_\text{res}'$ of the residual interaction $\hat V_\text{res}$ can be expressed as \cite{yoshida2013}: \begin{align} \label{eq:vres} V_\text{res}' &= (u_{j_\nu'} u_{j_\pi'} u_{j_\nu} u_{j_\nu} + v_{j_\nu'} v_{j_\pi'} v_{j_\nu} v_{j_\nu}) V^{J}_{j_\nu' j_\pi' j_\nu j_\pi} \nonumber \\ & {} - (u_{j_\nu'}v_{j_\pi'}u_{j_\nu}v_{j_\pi} + v_{j_\nu'}u_{j_\pi'}v_{j_\nu}u_{j_\pi}) \nonumber \\ &\times \sum_{J'} (2J'+1) \left\{ \begin{array}{ccc} {j_\nu'} & {j_\pi} & J' \\ {j_\nu} & {j_\pi'} & J \end{array} \right\} V^{J'}_{j_\nu'j_\pi j_\nu j_\pi'}, \end{align} where \begin{equation} V^{J}_{j_\nu'j_\pi'j_\nu j_\pi} = \langle j_\nu'j_\pi';J|\hat V_\text{res}|j_\nu j_\pi;J\rangle \end{equation} represents the matrix element between the neutron-proton pairs and $J$ stands for the total angular momentum of the neutron-proton pair. The bracket in Eq.~(\ref{eq:vres}) represents the corresponding Racah coefficient. The terms resulting from contractions are neglected in Eq.~(\ref{eq:vres}), as in Ref.~\cite{morrison1981}. \subsection{Procedure to build the IBFFM-2 Hamiltonian} The basic ingredients of the IBFFM-2 Hamiltonian $\hat H$ in Eq.~(\ref{eq:ham}) are determined as follows \cite{nomura2019dodd}: \begin{enumerate} \item Once the form of the IBM-2 Hamiltonian is fixed, the parameters $\epsilon$, $\kappa$, $\chi_\nu$, and $\chi_\pi$ are uniquely determined \cite{nomura2008,nomura2010} by mapping the $(\beta,\gamma)$-deformation energy surface obtained from the constrained Gogny-D1M \cite{D1M} HFB calculation onto the expectation value of the IBM-2 Hamiltonian in the boson coherent state \cite{ginocchio1980}. \item The single-neutron Hamiltonian $\hat H_{\rm F}^{\nu}$ and the boson-fermion Hamiltonian $\hat H^{\nu}_{\rm BF}$ for odd-N Xe isotopes are built by using the procedure of \cite{nomura2016odd} (see also \cite{nomura2017odd-2} for further details ). In those references, the single-particle energies and occupation probabilities of the odd nucleon, entering both $\hat H_{\rm F}^{\nu}$ and $\hat H^{\nu}_{\rm BF}$, are obtained from Gogny-D1M HFB calculations at zero deformation. The optimal values of the boson-fermion interaction strengths $\Gamma_\nu$, $\Lambda_\nu$, and $A_\nu$ in Eq.~(\ref{eq:ham-bf}), are chosen, separately for positive and negative parity, so as to reproduce with a reasonable accuracy the experimental low-energy levels of each odd-N Xe nucleus. A similar procedure has been employed to determine the parameters $\Gamma_\pi$, $\Lambda_\pi$, and $A_\pi$ for the odd-Z Cs isotopes. \item We use for the IBFFM-2 Hamiltonian in the odd-odd Cs the same strength parameters $\Gamma_\nu$, $\Lambda_\nu$, and $A_\nu$ ($\Gamma_\pi$, $\Lambda_\pi$, and $A_\pi$) obtained for the odd-N Xe (odd-Z Cs) nuclei in the previous step. The single-particle energies and occupation probabilities are, however, computed independently for each of the studied odd-odd systems. \item Finally, the parameters $u_{\rm D}$ and $u_{\rm T}$, in the residual interaction $\hat V_{\text{res}}$, are determined so as to reproduce with reasonable accuracy the low-lying spectra in the odd-odd nuclei under consideration. For simplicity, we have taken the fixed values $u_{\rm D}=0.7$ MeV and $u_{\rm T}=0.02$ MeV for all the considered nuclei and for both parities. \end{enumerate} The values of the IBM-2 parameters adopted for the even-even Xe isotopes are shown in Table~\ref{tab:ibm2para}. 
In particular, the sum $\chi_{\nu}+\chi_{\pi}$ is somewhat close to zero in many of the considered Xe isotopes. This indicates that these nuclei are close to the O(6) limit of the IBM, which is associated with $\gamma$-soft deformation. The fitted strength parameters of the boson-fermion interactions, $\hat H_{\rm BF}^{\rho}$, are shown in Table~\ref{tab:para-dodd}. The values of some of these strength parameters, i.e., $\Gamma_{\rho}$ and $\Lambda_{\rho}$, for a given configuration ($sdg$ or $h_{11/2}$) gradually change with neutron number. For the positive-parity states in $^{128,130,132}$Cs, the values of $\Gamma_{\pi}$ for the proton $h_{11/2}$ configuration ( which are fitted to the odd-mass nuclei $^{129,131,133}$Cs, respectively) have been modified so that the higher-spin positive-parity states, which are mainly composed of the $(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}$ configuration, become lower in energy. We consider a value of $\approx 0.5$ MeV for the excitation energy $E_{\mathrm x}$. The modified $\Gamma_{\pi}$ values, given in parentheses in Table~\ref{tab:para-dodd}, are also different from those employed for the negative-parity states. Finally, the single-particle energies and occupation probabilities for the odd-odd Cs isotopes, obtained using the Gogny-D1M HFB approach, are given in Table~\ref{tab:vsq-dodd}. They are quite similar to the ones obtained in the case of the odd-N Xe and odd-Z Cs nuclei. \begin{table}[htb!] \begin{center} \caption{\label{tab:ibm2para} Parameters of the IBM-2 Hamiltonian $\hat H_\text{B}$ for the even-even isotopes $^{124-132}$Xe.} \begin{ruledtabular} \begin{tabular}{ccccc} & $\epsilon$ (MeV) & $\kappa$ (MeV) & $\chi_\nu$ & $\chi_\pi$ \\ \hline $^{124}$Xe & 0.45 & $-0.336$ & 0.40 & $-0.50$ \\ $^{126}$Xe & 0.52 & $-0.323$ & 0.25 & $-0.50$ \\ $^{128}$Xe & 0.62 & $-0.315$ & 0.25 & $-0.55$ \\ $^{130}$Xe & 0.82 & $-0.308$ & 0.38 & $-0.50$ \\ $^{132}$Xe & 0.90 & $-0.250$ & 0.20 & $-0.55$ \end{tabular} \end{ruledtabular} \end{center} \end{table} \begin{table}[htb!] \caption{\label{tab:para-dodd} Parameters for the boson-fermion coupling Hamiltonians $\hat H^{\nu}_{\mathrm BF}$ and $\hat H^{\pi}_{\mathrm BF}$ (in MeV). These values have been adopted for describing the odd-odd nuclei $^{124-132}$Cs. For the positive-parity states in $^{128,130,132}$Cs, the values of the parameter $\Gamma_{\pi}$ for the $h_{11/2}$ orbital are different compared to those employed for the negative-parity states and are shown in parentheses.} \begin{center} \begin{ruledtabular} \begin{tabular}{lccccccc} & & $\Gamma_\nu$ & $\Lambda_\nu$ & $A_\nu$ & $\Gamma_\pi$ & $\Lambda_\pi$ & $A_\pi$ \\ \hline $^{124}$Cs & $sdg$ & 3.20 & 0.20 & $-0.14$ & 0.80 & 0.51 & $-0.80$ \\ & $h_{11/2}$ & 3.20 & 4.80 & $-0.20$ & 0.60 & 0.51 & $-2.2$ \\ \hline $^{126}$Cs & $sdg$ & 3.00 & 0.40 & $-0.12$ & 0.80 & 0.40 & $-0.70$ \\ & $h_{11/2}$ & 3.00 & 1.85 & 0.00 & 1.00 & 0.50 & $-1.0$ \\ \hline $^{128}$Cs & $sdg$ & 3.00 & 0.60 & $-0.28$ & 1.00 & 0.40 & $-0.70$ \\ & $h_{11/2}$ & 3.00 & 1.33 & 0.00 & 1.00 (2.60) & 0.50 & $-1.3$ \\ \hline $^{130}$Cs & $sdg$ & 1.60 & 2.20 & $-0.30$ & 1.20 & 0.55 & $-0.80$ \\ & $h_{11/2}$ & 1.60 & 0.92 & $-0.48$ & 1.20 (2.40) & 0.55 & $-1.3$ \\ \hline $^{132}$Cs & $sdg$ & 1.00 & 2.00 & $-0.30$ & 1.20 & 0.58 & $-0.50$ \\ & $h_{11/2}$ & 1.00 & 0.95 & $-0.34$ & 1.20 (3.00) & 0.58 & $-0.55$ \\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \begin{table*}[htb!] 
\caption{\label{tab:vsq-dodd} Neutron and proton single-particle energies (in MeV) and occupation probabilities for the odd-odd Cs isotopes.} \begin{center} \begin{ruledtabular} \begin{tabular}{cccccccccccccc} & & $3s_{1/2}$ & $2d_{3/2}$ & $2d_{5/2}$ & $1g_{7/2}$ & $1h_{11/2}$ & & $3s_{1/2}$ & $2d_{3/2}$ & $2d_{5/2}$ & $1g_{7/2}$ & $1h_{11/2}$ \\ \hline $^{124}$Cs & $\epsilon_{j_{\nu}}$ & 1.339 & 1.003 & 3.719 & 3.439 & 0.000 & $\epsilon_{j_{\pi}}$ & 2.555 & 2.476 & 0.122 & 0.000 & 3.674 \\ & $v^2_{j_{\nu}}$ & 0.602 & 0.506 & 0.929 & 0.902 & 0.243 & $v^2_{j_{\pi}}$ & 0.034 & 0.047 & 0.303 & 0.352 & 0.023 \\ \hline $^{126}$Cs & $\epsilon_{j_{\nu}}$ & 1.271 & 0.983 & 3.684 & 3.516 & 0.000 & $\epsilon_{j_{\pi}}$ & 2.680 & 2.525 & 0.207 & 0.000 & 3.674 \\ & $v^2_{j_{\nu}}$ & 0.692 & 0.618 & 0.944 & 0.925 & 0.332 & $v^2_{j_{\pi}}$ & 0.032 & 0.047 & 0.290 & 0.362 & 0.024 \\ \hline $^{128}$Cs & $\epsilon_{j_{\nu}}$ & 1.217 & 0.978 & 3.656 & 3.607 & 0.000 & $\epsilon_{j_{\pi}}$ & 2.809 & 2.580 & 0.298 & 0.000 & 3.668 \\ & $v^2_{j_{\nu}}$ & 0.770 & 0.718 & 0.956 & 0.943 & 0.431 & $v^2_{j_{\pi}}$ & 0.030 & 0.046 & 0.276 & 0.373 & 0.024 \\ \hline $^{130}$Cs & $\epsilon_{j_{\nu}}$ & 1.174 & 0.984 & 3.635 & 3.710 & 0.000 & $\epsilon_{j_{\pi}}$ & 2.942 & 2.642 & 0.392 & 0.000 & 3.655 \\ & $v^2_{j_{\nu}}$ & 0.838 & 0.805 & 0.968 & 0.958 & 0.541 & $v^2_{j_{\pi}}$ & 0.028 & 0.045 & 0.261 & 0.384 & 0.025 \\ \hline $^{132}$Cs & $\epsilon_{j_{\nu}}$ & 1.141 & 1.001 & 3.620 & 3.823 & 0.000 & $\epsilon_{j_{\pi}}$ & 3.081 & 2.712 & 0.5 & 0.000 & 3.637 \\ & $v^2_{j_{\nu}}$ & 0.896 & 0.878 & 0.977 & 0.972 & 0.660 & $v^2_{j_{\pi}}$ & 0.026 & 0.044 & 0.246 & 0.395 & 0.025 \end{tabular} \end{ruledtabular} \end{center} \end{table*} Once the value of all the parameters has been obtained, the IBFFM-2 Hamiltonian is diagonalized in the $|L_\nu L_\pi(L);j_\nu j_\pi(J):I\rangle$ basis characterized by the angular momentum of the neutron (proton) bosons $L_\nu$ ($L_{\pi}$), the total angular momentum for the even-even boson core $L$ and the total angular momentum of the coupled system $I$. \subsection{Transition operators} Using the wave functions obtained after the diagonalization of the IBFFM-2 Hamiltonian, the electric quadrupole (E2) and magnetic dipole (M1) properties can be computed. The corresponding $\hat T^{(E2)}$ and $\hat T^{(M1)}$ operators are given by \cite{nomura2019dodd} \begin{align} \label{eq:e2} \hat T^{(E2)}&= e_\nu^B\hat Q_\nu + e_\pi^B\hat Q_\pi -\frac{1}{\sqrt{5}}\sum_{\rho=\nu,\pi}\sum_{j_{\rho}j'_{\rho}} \nonumber \\ &\times(u_{j_{\rho}}u_{j'_{\rho}}-v_{j_{\rho}}v_{j'_{\rho}})\langle j'_{\rho}||e^F_{\rho}r^2Y^{(2)}||j_{\rho}\rangle(a_{j_{\rho}}^\dagger\times\tilde a_{j'_{\rho}})^{(2)}, \nonumber \\ \end{align} and \begin{align} \label{eq:m1} \hat T^{(M1)}&=\sqrt{\frac{3}{4\pi}} \Big\{ g_\nu^B\hat L^B_\nu + g_\pi^B\hat L^B_\pi -\frac{1}{\sqrt{3}}\sum_{\rho=\nu,\pi}\sum_{jj'} \nonumber \\ &\times (u_{j_{\rho}}u_{j'_{\rho}}+v_{j_{\rho}}v_{j'_{\rho}})\langle j'_{\rho}||g_l^\rho{\bf l}+g_s^\rho{\bf s}||j_{\rho}\rangle(a_{j_{\rho}}^\dagger\times\tilde a_{j'_{\rho}})^{(1)} \Big\}. \nonumber \\ \end{align} In Eq.~(\ref{eq:e2}), $e^B_\rho$ and $e^F_{\rho}$ are the effective charges for the boson and fermion systems. We have employed the fixed values $e^B_\nu=e^B_\pi=0.15$ $e$b, and $e^F_\nu=0.5$ $e$b and $e^F_\pi=1.5$ $e$b. In the case of the M1 operator in Eq.~(\ref{eq:m1}), $g_\nu^B$ and $g_\pi^B$ are $g$-factors for the neutron and proton bosons. 
We have also used the fixed values $g_\nu^B=0\,\mu_N$ and $g_\pi^B=1.0\,\mu_N$ \cite{yoshida1985,IBM}. For the neutron (proton) $g$-factors, the usual Schmidt values $g_l^\nu=0\,\mu_N$ and $g_s^\nu=-3.82\,\mu_N$ ($g_l^\pi=1.0\,\mu_N$ and $g_s^\pi=5.58\,\mu_N$) have been considered. Both the proton and neutron $g_s$ values have been quenched 30 \%. \section{Results and discussion \label{sec:results}} In this section, we will briefly discuss some selected results obtained for even-even Xe and odd-mass Cs nuclei. The nuclei $^{124}$Xe (Sec.~\ref{sec:ee}), $^{123}$Xe and $^{125}$Cs (Sec.~\ref{sec:oe}) will be taken as representative examples. As we are mainly interested in the structure of odd-odd nuclei, most of our discussions will be devoted to the spectroscopic results obtained for such odd-odd systems (Sec.~\ref{sec:oo}). \subsection{Even-even nuclei\label{sec:ee}} \begin{figure}[htb!] \begin{center} \includegraphics[width=\linewidth]{124xe_pes_combined_horizontal.png} \caption{(Color online) The Gogny-D1M and IBM-2 $(\beta,\gamma)$-deformation energy surfaces obtained for $^{124}$Xe are plotted up to 3 MeV from the global minimum. The energy difference between neighboring contours is 100 keV. } \label{fig:124xe-pes} \end{center} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[width=0.8\linewidth]{{124xe.basic}.pdf} \caption{(Color online) Theoretical and experimental \cite{data} low-energy excitation spectra for $^{124}$Xe.} \label{fig:124xe-level} \end{center} \end{figure} In Ref.~\cite{nomura2017odd-3}, we have considered transitions from $\gamma$-soft to nearly spherical shapes in the even-even isotopes $^{126-136}$Xe as well as in the case of odd-mass Xe and Cs nuclei. The same Gogny-D1M energy surfaces for $^{126-132}$Xe used in that work have been used to fix, this time, the parameters of the IBM-2 Hamiltonian for these nuclei. Only the energy surface of $^{124}$Xe has been added to the results obtained in previous calculations. A major difference with respect to Ref.~\cite{nomura2017odd-3} is that now we use the IBFM-2 instead of the IBFM-1 model, which does not distinguish between neutron and proton bosons. Another minor difference with respect to Ref.~\cite{nomura2017odd-3} is that now the even-even $^{A+1}$Xe$_{N+1}$ nucleus is taken as a reference to obtain the results for the odd-$N$ isotope $^{A}$Xe$_{N}$. The Gogny-D1M and the (mapped) IBM-2 energy surfaces obtained for $^{124}$Xe are depicted in Fig.~\ref{fig:124xe-pes}. The HFB energy surface exhibits a shallow triaxial minimum with $\gamma\approx 30^{\circ}$. Such a triaxial minimum can only be obtained in the IBM-2 after including higher-order (e.g., three-body) terms. We are, however, neglecting such higher-order terms in this study because of the lack of IBFFM and IBFM computer codes able to handle them. As seen in Fig.~\ref{fig:124xe-pes}, the IBM-2 surface is much flatter than the HFB far away from the global mean-field minimum. This is a consequence of the reduced IBM model space and it has already been found and discussed in great details in our previous studies \cite{nomura2008,nomura2010}. These are not serious limitations as the most relevant configurations for the study of low-lying collective states are those around the global minimum and we have paid special attention to reproduce them. The energy spectrum provided by the IBM-2 Hamiltonian for $^{124}$Xe is compared in Fig.~\ref{fig:124xe-level} with the experimental data \cite{data}. 
As can be seen, our calculations reproduce well the experimental spectrum without any phenomenological adjustment. Both the theoretical and experimental spectra exhibit features resembling those of the O(6) dynamical symmetry, i.e., $R_{4/2}=E(4^+_1)/E(2^+_1) \approx 2.5$, a low-lying $2^+_2$ level close to the $4^+_1$ one, and the nearly staggered energy systematics of the $\gamma$-band (i.e., $2^+_2$, ($3^+_1$, $4^+_2$), ($5^+_1$, $6^+_2$), etc.).

\subsection{Odd-mass nuclei\label{sec:oe}}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{123xe.basic}.pdf}\\
\includegraphics[width=0.6\linewidth]{{125cs.basic}.pdf}
\caption{Same as Fig.~\ref{fig:124xe-level}, but for the odd-$N$ $^{123}$Xe and odd-$Z$ $^{125}$Cs nuclei. The spins and/or parities in parentheses have not been established experimentally. }
\label{fig:oe-level}
\end{center}
\end{figure*}

Let us turn our attention to the nuclei $^{123}$Xe and $^{125}$Cs. The low-lying positive- and negative-parity states obtained for those nuclei are shown in Fig.~\ref{fig:oe-level}. They are compared with the available experimental data \cite{data}. Our results suggest that the low-lying positive-parity states in $^{123}$Xe are mainly built via the coupling of the odd neutron hole in the $3s_{1/2}$ and $2d_{3/2}$ single-particle orbitals to the even-even boson core ($^{124}$Xe). On the other hand, the negative-parity states are accounted for by the unique-parity $1h_{11/2}$ single-particle configuration. As seen in Fig.~\ref{fig:oe-level}, our results agree well with the experiment for both parities. In the case of $^{125}$Cs, the low-lying positive-parity states are mainly based on the $1g_{7/2}$ and $2d_{5/2}$ single-particle configurations. In the lower panels of Fig.~\ref{fig:oe-level}, a reasonable agreement between the predicted IBFM-2 and the experimental spectra is observed.

\subsection{Odd-odd Cs isotopes\label{sec:oo}}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{124cs.basic}.pdf}
\caption{(Color online) Low-lying positive- and negative-parity states of the odd-odd nucleus $^{124}$Cs. Experimental energy levels are taken from Ref.~\cite{data}. }
\label{fig:124cs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{126cs.basic}.pdf}
\caption{(Color online) Same as in Fig.~\ref{fig:124cs} but for $^{126}$Cs. Experimental data for positive- and negative-parity states are taken from Refs.~\cite{data} and \cite{li2003cs126}, respectively.}
\label{fig:126cs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{128cs.basic}.pdf}
\caption{Same as in Fig.~\ref{fig:124cs} but for $^{128}$Cs.}
\label{fig:128cs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{130cs.basic}.pdf}
\caption{Same as in Fig.~\ref{fig:124cs} but for $^{130}$Cs.}
\label{fig:130cs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{132cs.basic}.pdf}
\caption{Same as in Fig.~\ref{fig:124cs} but for $^{132}$Cs.}
\label{fig:132cs}
\end{center}
\end{figure*}

\subsubsection{Energy spectra for the low-spin low-energy states}

Let us now discuss the results obtained for the odd-odd Cs nuclei. We will consider low-spin low-energy states up to an excitation energy $E_{\rm x} \approx 1$ MeV. Our calculation indicates that those states are mainly based on normal-parity (i.e., $sdg$) orbitals.
The spectra obtained for $^{124,126,128,130,132}$Cs are depicted in Figs.~\ref{fig:124cs}--\ref{fig:132cs}, respectively. In the case of $^{124}$Cs (see Fig.~\ref{fig:124cs}), the predicted positive- and negative-parity states agree well with the experimental ones. The IBFFM-2 wave function of the $1^+_1$ ground state is composed of a mixture of several single-particle configurations, among which the largest contribution (about 50\,\%) comes from the odd neutron hole in the $3s_{1/2}$ orbital. As for the negative-parity states, the predicted IBFFM-2 wave functions for the lowest $4^-_1$, $5^-_1$, and $6^-_1$ states are complex mixtures of different single-particle configurations. In those states, the odd neutron in the $h_{11/2}$ orbital coupled to the odd proton in either the $3s_{1/2}$, $2d_{3/2}$, $2d_{5/2}$, or $1g_{7/2}$ positive-parity orbital plays a dominant role.

In the case of $^{126}$Cs (see Fig.~\ref{fig:126cs}), the agreement with the experiment is as good as for $^{124}$Cs. The structure of the wave functions corresponding to the lowest positive-parity states is similar to the one obtained for $^{124}$Cs (i.e., they are mainly accounted for by the $(\nu s_{1/2})^{-1}\otimes\pi sdg$ configuration). In our calculation the lowest-energy negative-parity state is predicted to be the $6^-_1$ one. The main component (47\,\%) of the IBFFM-2 wave function of this $6^-_1$ state is the configuration $[(\nu h_{11/2})^{-1}\otimes\pi g_{7/2}]^{(J=8^-)}$. Experimentally, the $4^-$ state is suggested to be the lowest negative-parity state, and the tentative $6^-_1$ level is found at a much higher excitation energy than in our calculation. However, for most of the low-lying negative-parity states, neither the spin nor the parity has been firmly established.

The experimental data are scarcer for the $^{128}$Cs nucleus, as well as for the heavier ones, $^{130,132}$Cs. For the nucleus $^{128}$Cs, experimental information is only available for a couple of $1^{+}$ states. Here, we stress that our calculations reproduce the correct ground-state spin, $I=1^{+}$. Note that the predicted $1^+_2$ and $1^+_3$ non-yrast states are found below 200 keV excitation energy, somewhat similar to the experimental situation. Furthermore, we also obtain $2^+$ and $3^+$ states below 200 keV. The structure of the $1^+_1$, $2^+_1$, and $3^+_1$ wave functions is similar to the one in $^{124}$Cs and $^{126}$Cs. Concerning the negative-parity states of $^{128}$Cs, the predicted low-spin levels are in reasonable agreement with the experimental ones. However, our calculations suggest several states near the ground state that have not been observed experimentally (i.e., a $4^-$ and two $5^-$ states, and the second $6^-$ state).

The positive-parity low-spin spectrum obtained for $^{130}$Cs is shown in Fig.~\ref{fig:130cs}. Once more, our calculations predict the correct ground-state spin, $I=1^+$. However, the energies of the two experimental $2^+$ states around 100 keV excitation energy are overestimated by a factor of three. This is not surprising, as excitation energies are often overestimated within the IBM framework for nuclei near a shell closure, the reason being the decreasing number of active bosons. This seems to be the case for both $^{130}$Cs and $^{132}$Cs. In addition, the structure of the IBFFM-2 wave function corresponding to the $1^+_1$ state turns out to be slightly different from that of the ground states of the lighter odd-odd systems $^{124-128}$Cs.
The contribution of the $\nu d_{3/2}$ single-particle configuration becomes larger in $^{130,132}$Cs than in $^{124-128}$Cs. The HFB deformation energy surfaces obtained for the even-even Xe isotopes \cite{nomura2017odd-3} exhibit a structural change from $^{128}$Xe ($\gamma$-soft shape with a shallow triaxial minimum) to $^{130}$Xe (nearly spherical shape with a shallow prolate minimum). Such a structural change in the even-even systems seems to be more or less translated into the structure of the IBFFM-2 wave functions of the odd-odd systems. As can be seen from Fig.~\ref{fig:130cs}, the disagreement with the experimental data is more pronounced for the negative-parity states. The energies of the $2^-_1$ and $5^-_1$ levels, which are experimentally suggested to be the lowest negative-parity states, are, however, too high in our calculations.

Finally, the positive- and negative-parity low-spin low-energy spectra obtained for $^{132}$Cs are depicted in Fig.~\ref{fig:132cs}. Here, the comparison with the experiment is worse, but one should keep in mind that this nucleus is the closest to the $N=82$ shell closure. As a result, the numbers of neutron ($N_{\nu}=2$) and proton ($N_{\pi}=2$) bosons are probably not large enough for a detailed description of the level structure in the framework of the IBM. Other possible reasons are, first, that the single-particle energies and occupation probabilities for the odd nucleons, obtained from the Gogny-D1M calculation, may not be realistic enough in this case and, second, that the fixed values of the strengths and/or the forms of the residual neutron-proton interaction employed in the IBFFM-2 Hamiltonian may be too restrictive.

\subsubsection{E2 and M1 moments of lowest-lying states}

\begin{table}[!htb]
\begin{center}
\caption{\label{tab:oo-mom} Theoretical and experimental quadrupole $Q(I)$ (in $e$b units) and magnetic $\mu(I)$ (in $\mu_N$ units) moments for $^{124-132}$Cs. The experimental values are taken from Ref.~\cite{stone2005}.}
\begin{ruledtabular}
\begin{tabular}{cccc}
 & & Theory & Experiment \\
\hline
$^{124}$Cs & $Q(1^+_1)$ & $-$0.475 & $-$0.74(3) \\
 & $\mu(1^+_1)$ & $+$0.377 & $+$0.673(3) \\
$^{126}$Cs & $Q(1^+_1)$ & $-$0.585 & $-$0.68(2) \\
 & $\mu(1^+_1)$ & $+$0.869 & $+$0.777(4) \\
$^{128}$Cs & $Q(1^+_1)$ & $-$0.471 & $-$0.570(8) \\
 & $\mu(1^+_1)$ & $+$0.794 & $+$0.974(5) \\
$^{130}$Cs & $Q(1^+_1)$ & $-$0.125 & $-$0.059(6) \\
 & $\mu(1^+_1)$ & $+$0.573 & $+$1.460(7) \\
 & $Q(5^-_1)$ & $-$0.314 & $+$1.45(5) \\
 & $\mu(5^-_1)$ & $-$1.062 & $+$0.629(4) \\
$^{132}$Cs & $Q(2^+_1)$ & $-$0.062 & $+$0.508(7) \\
 & $\mu(2^+_1)$ & $+$0.940 & $+$2.222(7)
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}

As for the electromagnetic properties of the lowest-lying states in the odd-odd Cs isotopes, experimental data are only available for the quadrupole $Q(I)$ and magnetic dipole $\mu(I)$ moments. The theoretical and the available experimental $Q(I)$ and $\mu(I)$ values are compared in Table~\ref{tab:oo-mom}. For the $^{124,126,128}$Cs nuclei, the predicted $Q(I)$ and $\mu(I)$ moments agree well with the experimental ones, in both magnitude and sign. However, some of the moments obtained for states in $^{130,132}$Cs are opposite in sign to their experimental counterparts.
This is consistent with the fact that the energy levels of the corresponding states in these nuclei are not well described with respect to the experimental data (see Figs.~\ref{fig:130cs} and \ref{fig:132cs}), and could again be a consequence of the fixed strength parameters adopted for the residual neutron-proton interaction $\hat V_\mathrm{res}$ and/or of the more restricted configuration space for the boson system.

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{124cs.hs}.pdf}
\caption{(Color online) Band structure of the higher-spin higher-energy positive-parity states in $^{124}$Cs.}
\label{fig:124cs-hs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{126cs.hs}.pdf}
\caption{(Color online) Same as in Fig.~\ref{fig:124cs-hs} but for $^{126}$Cs.}
\label{fig:126cs-hs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{128cs.hs.normalised}.pdf}
\caption{(Color online) Same as in Fig.~\ref{fig:124cs-hs} but for $^{128}$Cs. The theoretical and experimental spectra are normalized with respect to the $10^+_1$ and $9^+_1$ states, respectively, which are the lowest states based on the $(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}$ configuration.}
\label{fig:128cs-hs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{130cs.hs}.pdf}
\caption{(Color online) Same as in Fig.~\ref{fig:124cs-hs} but for $^{130}$Cs.}
\label{fig:130cs-hs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{132cs.hs}.pdf}
\caption{(Color online) Same as in Fig.~\ref{fig:124cs-hs} but for $^{132}$Cs. For the experimental level between the $18^+$ and $20^+$ states in the side band, even the tentative spin and parity are not known \cite{data}.}
\label{fig:132cs-hs}
\end{center}
\end{figure*}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.8\linewidth]{{cs.em.hs}.pdf}
\caption{(Color online) The calculated $B(E2;I\rightarrow I-2)$ and $B(M1;I\rightarrow I-1)$ transition strengths (in Weisskopf units) for the positive-parity bands of the $^{124-132}$Cs nuclei. Left column (panels (a1) to (e1)): the intra-band $B(E2;I_{1,2}\rightarrow (I-2)_{1,2})$ transition rates between the yrast states ($I_1$ and $(I-2)_1$) and between the second lowest states ($I_2$ and $(I-2)_2$) with a given spin $I$. Middle column (panels (a2) to (e2)): the intra-band $B(M1;I_{1,2}\rightarrow (I-1)_{1,2})$ transition strengths. Right column (panels (a3) to (e3)): the inter-band $B(M1;I_{1,2}\rightarrow (I-1)_{2,1})$ transition strengths.}
\label{fig:em-hs}
\end{center}
\end{figure*}

\begin{figure}[htb!]
\begin{center}
\includegraphics[width=\linewidth]{{cs.ratio.hs}.pdf}
\caption{(Color online) The calculated and experimental values of the ratio $B(M1;I\rightarrow I-1)/B(E2;I\rightarrow I-2)$ (in $\mu_N^2/e^2$b$^2$ units) are plotted as functions of $I$ for the positive-parity yrast and side bands of the odd-odd nuclei $^{124-132}$Cs. The experimental data are taken from Refs.~\cite{xiong2019,gizon2001,wang2006,paul1989,simons2005,rainovski2003}. Note that calculated $B(M1)/B(E2)$ values lying well outside of the scale of the vertical axis are not shown. }
\label{fig:em-ratio}
\end{center}
\end{figure}

\begin{figure}[htb!]
\begin{center}
\includegraphics[width=\linewidth]{{cs.mom.hs}.pdf}
\caption{(Color online) Electric quadrupole moment $Q(I)$ (in $e$b units) and $g$-factor as functions of the angular momentum $I$ of the higher-spin yrast and side-band states of the studied odd-odd Cs isotopes. In panel (c2), the experimental value of $+0.59\pm 0.01$ for the $g$-factor of the yrast $9^+$ state of $^{128}$Cs \cite{grodner2018} is shown as an open circle.}
\label{fig:mom}
\end{center}
\end{figure}

\begin{figure*}[htb!]
\begin{center}
\includegraphics[width=0.6\linewidth]{{ut.128cs.energies}.pdf}
\caption{(Color online) Excitation energies of the low-lying positive- and negative-parity yrast states of the $^{128}$Cs nucleus as functions of the parameter $u_\mathrm{T}$ for different values of the parameter $u_\mathrm{D}$, i.e., $u_\mathrm{D}=1.4$ MeV (panels (a1,a2)), 0.7 MeV (panels (b1,b2)), 0.0 MeV (panels (c1,c2)), and $-0.7$ MeV (panels (d1,d2)). }
\label{fig:utvd}
\end{center}
\end{figure*}

\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.7\linewidth]{{ut.128cs.mom}.pdf}
\caption{(Color online) The calculated quadrupole (a) and magnetic (b) moments of the $1^+_1$ ground state of the $^{128}$Cs nucleus as functions of the parameter $u_\mathrm{T}$ for different values of the parameter $u_\mathrm{D}$, i.e., $u_\mathrm{D}=1.4$, 0.7, 0.0, and $-0.7$ MeV. }
\label{fig:ut-mom}
\end{center}
\end{figure}

\subsubsection{Band structure of higher-spin states}

We have further studied the detailed band structure of the higher-lying higher-spin states in the considered odd-odd Cs isotopes. We have paid special attention to the possible doublet structure expected as a result of the coupling between a neutron hole and a proton in the unique-parity $1h_{11/2}$ orbital. Our calculations suggest that the higher-spin states in most of the considered odd-odd Cs nuclei are almost entirely composed of $[(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}]^{(J)}$ neutron-proton pairs coupled to the even-even boson core, as expected empirically.

The high-spin bands predicted for the nuclei $^{124,126,128,130,132}$Cs are depicted in Figs.~\ref{fig:124cs-hs} to \ref{fig:132cs-hs}, respectively. In each of these figures, for both the theoretical and experimental states, the two bands on the left-hand side with the $\Delta I=1$ level sequence are identified as the yrast band, and the two bands on the right-hand side with the $\Delta I=1$ level sequence as the side band. As for the theoretical bands of each nucleus, we have simply grouped the calculated states $I_1$ (the lowest states with spin $I$) and $I_2$ (the second lowest states with spin $I$) into the yrast and side bands, respectively.

In the case of $^{124}$Cs (see Fig.~\ref{fig:124cs-hs}), the experimental band structure is well reproduced, including the energies of the band-head states. However, for $^{126}$Cs (see Fig.~\ref{fig:126cs-hs}), the band-head energies of the experimental bands are overestimated by a factor of around two. Also, because of the limited size of the boson space as the $N=82$ shell closure is approached, the theoretical bands look more stretched with increasing spin than the experimentally identified ones. Nevertheless, for $^{126}$Cs the overall structure of the theoretical spectrum agrees reasonably well with the experimental one. The calculated higher-spin bands for $^{126}$Cs shown in Fig.~\ref{fig:126cs-hs} closely resemble doublet bands, i.e., pairs of close-lying states with the same spin $I$.
However, the main components of the wave functions of the yrast states for $I\leq 17^+$ come from the coupling between the odd neutron and the odd proton in the normal-parity $sdg$ orbitals, not from the $[(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}]^{(J)}$ neutron-proton pair configurations as in all the other odd-odd Cs nuclei considered.

As for $^{128}$Cs, the absolute energies of the observed bands have not been established experimentally. Therefore, in Fig.~\ref{fig:128cs-hs} both the experimental and calculated energy levels for $^{128}$Cs are plotted with respect to the experimental $10^+_1$ state, which is suggested to be the band-head of the lowest-energy band based on the $[(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}]^{(J)}$ configuration. In general, the structures of the bands identified experimentally are well reproduced up to $I\approx 16^+$. However, the energies of the higher-spin states are overestimated. With increasing spin, the stretching of the predicted bands, as compared with the experiment, becomes larger than in $^{126}$Cs.

As shown in Figs.~\ref{fig:130cs-hs} and \ref{fig:132cs-hs}, similar doublet-like band structures are also obtained in $^{130,132}$Cs and are in good agreement with the experimental spectra up to relatively low spin, e.g., $I\leq 15^+$. The moments of inertia for the higher-spin states are considerably underestimated, due to the fact that the configuration space of the even-even boson core becomes much smaller for those nuclei close to the neutron shell closure $N=82$. For instance, in $^{132}$Cs there is only one $19^+$ state, which is formed by the configuration where four $d$ bosons and a neutron and a proton in the $h_{11/2}$ orbital are all aligned, i.e., $L=8$ and $[(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}]^{(11)}$. Also, there is no state with spin higher than $I=19^+$. Possible ways to improve the description of the higher-spin states in the IBFFM-2 are, for instance, the inclusion of an additional boson degree of freedom, e.g., the $L=4^+$ ($g$) boson, and of higher quasiparticle excitations or broken pairs. These extensions are, however, out of the scope of the present study.

\subsubsection{$B(E2)$ and $B(M1)$ systematics in the high-spin states}

To identify possible signatures of chirality we have considered, in addition to the energy levels, the systematics of the E2 and M1 transitions with increasing spin. Our analysis of the $B(E2)$ and $B(M1)$ patterns suggests that there are many examples in the odd-odd Cs nuclei that can be considered candidates to display chirality. In particular, the observed $B(E2; I\rightarrow I-2)$ and $B(M1; I\rightarrow I-1)$ intra-band and inter-band transitions in the yrast and second-lowest bands of the $^{128}$Cs nucleus show a definite staggering pattern as a function of the angular momentum \cite{grodner2006}. A selection rule leading to such a staggering has been derived from symmetry considerations applied to a simple particle-rotor model \cite{koike2004}. Even though that model is highly schematic, its predictions can still be used to benchmark our calculations. The predicted $B(E2; I\rightarrow I-2)$ transition rates for most of the considered odd-odd nuclei $^{124-132}$Cs (see panels (a1) to (e1) on the left-hand side of Fig.~\ref{fig:em-hs}) do not show the kind of staggering that appears in the simplified model \cite{koike2004}. For the yrast band, they evolve monotonically or stay rather constant with $I$.
For some of the $B(E2)$ transitions shown, the values almost vanish at particular spins, e.g., the $B(E2; 16^+_2\rightarrow 14^+_2)$ transition rate in $^{128}$Cs in panel (c1). A particularly irregular $I$-dependence of the predicted $B(E2)$ rates is found for the $^{126}$Cs nucleus (see panel (b1)). This is because a certain mixing among states with a given spin tends to occur and, therefore, the assignments of the lowest states $I_1$ to the yrast band and of the second-lowest states $I_2$ to the side band are, in some cases, not adequate. In a number of the odd-odd Cs nuclei, however, a staggering pattern similar to the one in the observed $B(M1; I\rightarrow I-1)$ rates for $^{128}$Cs \cite{grodner2006} has been obtained in the calculated $B(M1; I\rightarrow I-1)$ rates for both the intra-band (middle panels (a2) to (e2) of Fig.~\ref{fig:em-hs}) and inter-band (right panels (a3) to (e3)) transitions.

As yet another indication of chiral bands, we show in Fig.~\ref{fig:em-ratio} the ratio of the calculated $B(M1; I\rightarrow I-1)$ to $B(E2; I\rightarrow I-2)$ rates for the yrast and side bands of all the odd-odd Cs nuclei. A number of experimental values for these quantities are available in Refs.~\cite{gizon2001,wang2006,paul1989,simons2005,rainovski2003,xiong2019}. Our results show a staggering pattern of the $B(M1)/B(E2)$ ratio as a function of the angular momentum $I$ for both the yrast and side bands in all the considered odd-odd Cs nuclei, except $^{124}$Cs (panel (a1) of Fig.~\ref{fig:em-ratio}). The theoretical values follow the empirical trend (shown on the right-hand side of Fig.~\ref{fig:em-ratio}). However, the predicted $B(M1)/B(E2)$ values are much larger in magnitude and show a more irregular $I$-dependence than the experimental data. The quantitative disagreement can be expected from the behavior of the calculated $B(E2; I_{1,2}\rightarrow (I-2)_{1,2})$ (Figs.~\ref{fig:em-hs}(a1-e1)) and $B(M1; I_{1,2}\rightarrow (I-1)_{1,2})$ (Figs.~\ref{fig:em-hs}(a2-e2)) values as functions of $I$.

In order to examine whether the predicted yrast and side bands can be considered partners of a chiral doublet, we show in Fig.~\ref{fig:mom} the quadrupole moment $Q(I)$ (in $e$b units) and the $g$-factor for the corresponding states in the considered odd-odd $^{124-132}$Cs nuclei. The $Q(I)$ values are negative and decrease in magnitude with increasing spin. In addition, we have obtained similar $Q(I)$ values and a similar $I$-dependence for both bands. The above observation does not apply to the results for the $^{126}$Cs nucleus in panel (b1). This is because, as mentioned earlier, the yrast- and side-band states predicted for $^{126}$Cs in the present calculation have different wave-function contents. Namely, the side-band states are mainly composed of the $[(\nu h_{11/2})^{-1}\otimes\pi h_{11/2}]^{(J)}$ single-particle configuration, but this is not the case for the states in the yrast band. In all the odd-odd Cs nuclei, the $g$-factor values, depicted on the right-hand side of the same figure, are quite similar (around 0.5) for both bands. Note that the $g$-factor obtained for the $I=9^+$ yrast state of $^{128}$Cs agrees well with the experimental value ($+0.59\pm 0.01$) \cite{grodner2018}.
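Since the $B(E2)$ and $B(M1)$ values in Fig.~\ref{fig:em-hs} are quoted in Weisskopf units while the ratios in Fig.~\ref{fig:em-ratio} are given in $\mu_N^2/e^2$b$^2$, a unit conversion connects the two figures. The following minimal sketch (in Python) illustrates that conversion with the standard single-particle estimates \mbox{$B_{\rm W}(M1)=1.790\;\mu_N^2$} and \mbox{$B_{\rm W}(E2)=5.940\times 10^{-6}A^{4/3}\;e^2$b$^2$}; it is an illustration only, not part of the IBFFM-2 codes, and the input strengths below are arbitrary placeholders rather than values read off the figures.
\begin{verbatim}
# Convert a B(M1)/B(E2) ratio from Weisskopf units (W.u.) to the
# mu_N^2/(e b)^2 units used in the ratio plots, via the standard
# Weisskopf estimates:
#   B_W(M1) = 1.790 mu_N^2
#   B_W(E2) = 5.940e-6 * A**(4/3)  e^2 b^2
def bm1_over_be2(bm1_wu, be2_wu, mass_number):
    bw_m1 = 1.790                                   # mu_N^2
    bw_e2 = 5.940e-6 * mass_number ** (4.0 / 3.0)   # e^2 b^2
    return (bm1_wu * bw_m1) / (be2_wu * bw_e2)      # mu_N^2 / e^2 b^2

# Placeholder strengths (not taken from the figures), for A = 128:
print(round(bm1_over_be2(0.5, 20.0, 128), 1))       # about 11.7
\end{verbatim}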
\subsubsection{Dependence on the residual neutron-proton interaction}

Finally, we examine how the spectroscopic results from the IBFFM-2 depend on the choice of the strength parameters of the residual neutron-proton interaction $\hat V_\mathrm{res}$ (see Eq.~(\ref{eq:res})). In Fig.~\ref{fig:utvd} we depict the evolution of the calculated excitation energies of a few low-lying positive-parity (panels (a1)--(d1) on the left-hand side of Fig.~\ref{fig:utvd}) and negative-parity (panels (a2)--(d2) on the right-hand side) yrast states of the $^{128}$Cs nucleus as functions of the strength parameter of the tensor interaction, $u_\mathrm{T}$ (in MeV), for different values of the parameter of the delta interaction, i.e., $u_\mathrm{D}=1.4$ MeV (panels (a1,a2)), 0.7 MeV (panels (b1,b2)), 0.0 MeV (panels (c1,c2)), and $-0.7$ MeV (panels (d1,d2)). The calculated excitation energies, especially the positive-parity ones, do not seem to depend strongly on the parameter $u_\mathrm{D}$ for $u_\mathrm{D}>0$ MeV, but are rather sensitive to $u_\mathrm{T}$ for $u_\mathrm{T}>0$ MeV. Our choice, i.e., a value of $u_\mathrm{T}$ in the range \mbox{$0< u_\mathrm{T}<0.05$ MeV} together with a value of $u_\mathrm{D}$ of approximately 0.7 MeV, seems to be optimal for reproducing the ground-state spins of both parities, i.e., $I=1^+_1$ and $6^-_1$, respectively.

Similarly, in Fig.~\ref{fig:ut-mom} we plot the calculated quadrupole $Q(1^+_1)$ and magnetic $\mu(1^+_1)$ moments of the $1^+_1$ ground state of the $^{128}$Cs nucleus as functions of the strength parameter $u_\mathrm{T}$, for different values of the parameter $u_\mathrm{D}=1.4$, 0.7, 0.0, and $-0.7$ MeV. Both the $Q(1^+_1)$ and $\mu(1^+_1)$ values depend rather strongly on the parameter $u_\mathrm{T}$ for $u_\mathrm{T}>0$ MeV, but are much less sensitive to the parameter $u_\mathrm{D}$ for $u_\mathrm{D}>0$ MeV. The chosen value of the tensor interaction strength, $u_\mathrm{T}=0.02$ MeV, gives both $Q(1^+_1)$ and $\mu(1^+_1)$ values close to the corresponding experimental data, $Q(1^+_1)=-0.570\pm 0.008$ $e$b and $\mu(1^+_1)=+0.974\pm 0.005$ $\mu_N$, respectively.

Similar parameter dependences of the excitation energies and moments have been obtained for the other odd-odd Cs isotopes studied here. In principle, one could use values of the $u_\mathrm{D}$ and $u_\mathrm{T}$ parameters that differ from nucleus to nucleus and/or between the two parities. We have used fixed values of the $u_\mathrm{D}$ and $u_\mathrm{T}$ parameters only for the sake of simplicity. Nevertheless, we consider the chosen values of the $u_\mathrm{D}$ and $u_\mathrm{T}$ strength parameters realistic in the sense that the overall description of the energy levels of the low-lying low-spin states of the studied odd-odd nuclei is reasonable.

\section{Summary and concluding remarks\label{sec:summary}}

The spectroscopic properties of the odd-odd nuclei $^{124-132}$Cs have been analyzed using the interacting boson-fermion-fermion (IBFFM-2) framework with microscopic input from mean-field calculations based on the Gogny-D1M energy density functional. The $(\beta,\gamma)$-deformation energy surfaces for the even-even boson-core Xe isotopes, as well as the single-particle energies and occupation probabilities of the unpaired nucleons in the odd-$N$ Xe, odd-$Z$ Cs, and odd-odd Cs nuclei obtained from the mean-field calculations, are used to build, via a mapping procedure, the corresponding IBFFM-2 Hamiltonian.
In its current implementation, the method still requires a few coupling constants of the boson-fermion and residual neutron-proton interactions to be fitted to the experiment. The diagonalization of the corresponding IBFFM-2 Hamiltonian provides wave functions and energy levels, as well as other spectroscopic properties such as E2 and M1 transition rates.

It has been shown that the (mapped) IBFFM-2 model describes reasonably well both the positive- and negative-parity low-lying low-spin states of the considered odd-odd Cs nuclei, especially in the case of $^{124,126,128}$Cs. This is a remarkable result, considering the significant reduction in the number of free parameters with respect to previous IBFFM calculations. However, in some of the odd-odd nuclei (e.g., in $^{132}$Cs) the ordering of both the positive- and negative-parity levels close to the ground state could not be correctly reproduced by our calculations. Possible explanations for this failure are the limited number of active bosons in the even-even core near the shell closure; the possibility that the adopted single-particle energies and occupation numbers provided by the Gogny-D1M HFB approach may not be realistic enough; and, finally, the use of fixed strengths of the residual neutron-proton interaction in the IBFFM-2 Hamiltonian for the whole isotopic chain.

We have also studied the band structure of the higher-spin positive-parity states in the considered odd-odd Cs nuclei. Our calculations provide a reasonable quantitative description of the excitation energies of these bands up to $I\approx 20^+$, except for the band-head energies of $^{126}$Cs, which are overestimated. We have identified many of the odd-odd Cs nuclei as good candidates for the existence of chiral doublet bands. In particular, the calculated $B(M1; I\rightarrow I-1)$ transition rates exhibit staggering patterns with increasing angular momentum. This result agrees well with the selection rule derived from simple symmetry considerations \cite{koike2004}. All in all, the results of this study suggest that the employed theoretical methods can potentially be used to describe even such a subtle type of nuclear excitation as chirality.

\acknowledgments

The work of KN is financed within the Tenure Track Pilot Program of the Croatian Science Foundation and the \'Ecole Polytechnique F\'ed\'erale de Lausanne, and the Project TTP-2018-07-3554 Exotic Nuclear Structure and Dynamics, with funds of the Croatian-Swiss Research Program. The work of LMR was supported by the Spanish Ministry of Economy and Competitiveness (MINECO) Grants No. FPA2015-65929-MINECO and FIS2015-63770-MINECO.
\section{Introduction: Choice of Comparison Comets}

Widespread concerns about the survival of comet C/2012~S1 during its forthcoming close encounter with the Sun imply that every effort should be expended to monitor the comet's physical behavior during its journey to perihelion. One way to contribute to this campaign is to compare, step by step, the light curve of C/2012 S1 in the course of this time with the light curves of other comets with very different histories, yet of the same or similar origin. The first arrival from the Oort Cloud is a trait that C/2012~S1 shares with C/1962~C1 (Seki-Lines) and probably also with C/2002~O4 (H\"{o}nig), which makes these objects intriguing candidates for such comparison.

Besides the light curve, there are also issues linked to the orbital motion of C/2012~S1 that include possible peculiarities and the perihelion distance of 0.0124~AU or 2.67~{$R_{\mbox{\scriptsize \boldmath $\odot$}}$}. Comet C/1962~C1 moved in an orbit with a perihelion distance closer to that of C/2012~S1 than that of any other Oort Cloud comet, merely 0.0314~AU or 6.75~{$R_{\mbox{\scriptsize \boldmath $\odot$}}$}, and was thus subjected to thermal and radiation conditions almost --- though not quite --- as harsh as are going to be experienced by C/2012~S1. It survived the encounter with its physique apparently intact. On the other hand, comet C/2002~O4 became famous (or, rather, infamous) by disappearing (and obviously disintegrating) before the eyes of the observers almost exactly at perihelion, at 0.776~AU from the Sun.

In terms of approach to the Sun, a better choice would have been C/1953~X1 (Pajdu\v{s}\'akov\'a), a disintegrating comet whose perihelion distance was only 0.072~AU or 15.5~{$R_{\mbox{\scriptsize \boldmath $\odot$}}$}. Unfortunately, only a parabolic orbit is available for this object, and the Oort Cloud as the site of its origin is highly questionable. In addition, its light curve is, unlike that for C/2002~O4, only poorly known, appearing rather flat over a period of at least 30~days and possibly as long as 70~days (Sekanina 1984). In fact, because of their early disappearance, the disintegrating long-period comets have generally poor orbits. For the disintegrating comet C/1999 S4 (LINEAR), an exception to this rule, use of the nongravitational terms in the equations of motion (Marsden 2000) renders its original orbit indeterminate (Marsden et al.\ 1973). In summary, C/1962 C1 and C/2002 O4 are the best available representatives of two very different, almost extreme, categories of probable Oort Cloud comets that I am aware of. One disappointment with both comparison objects is that they were discovered relatively late:\ C/1962~C1 only 56~days and C/2002~O4 just 72~days before perihelion.

A sequence of steps pursued in this investigation of C/2012~S1 begins in early October 2013 with charting the outline, describing the primary objectives, and addressing the specific issues examined. This is the content of the paper itself. This first step will be followed by a series of brief contributions, to be appended to the paper at a rate of about one per week until mid-November (two weeks before the comet reaches perihelion), as successive {\it Status Update Reports\/} based on newly available information. No predictions will be attempted, but systematic trends, suggested by the most recent observations, will be pointed out.
While this work is limited to only very particular tasks and is not intended to solve the issue of the comet's survival, it should offer an opportunity to refocus the comet's monitoring programs and to gradually adjust the assessment of its survival chances. Eventually --- at some point after the show is over --- it will be useful to assess the strengths and weaknesses of this approach, with a view to improvements in potential future applications to other exceptional comets.

\begin{figure*}
\vspace{-2.55cm}
\hspace{-0.35cm}
\centerline{
\scalebox{0.67}{
\includegraphics{f1_2012S1.ps}}}
\vspace{-9.9cm}
\caption{Light curves of comets C/2012 S1 and C/1962 C1. The total visual magnitude normalized to 1~AU from the earth, $H_\Delta$, is plotted against time from perihelion. The perihelion times are November 28, 2013 for C/2012~S1 and April~1, 1962 for C/1962~C1. The brightness data for the two objects are plotted with different symbols, as indicated. The brightness enhancement of C/1962~C1 around perihelion was due in part to forward scattering of light by microscopic dust particles in the coma.{\vspace{0.4cm}}}
\end{figure*}

\section{The Light Curves}

In this investigation, a light curve is understood to be a plot of the total brightness (expressed in magnitudes), normalized to a distance $\Delta$ of 1~AU from the earth by employing the usual term \mbox{$5 \log \Delta$}, against time or heliocentric distance. The phase effect is not accounted for, but its potential implications for the light curve are always described in the text. The magnitudes are referred to the visual spectral region.

Because every observer measures a comet's brightness in his own photometric system, this heterogeneity has to be eliminated (or reduced as much as possible) by introducing corrections to a standard photometric system. Its zero point is defined by M.~Beyer's brightness data (see below), whose scale is tied to the {\it International Photovisual System\/} (Ipv; e.g., Seares 1922). His light curves of comets were employed not only to calibrate the personal photometric systems of other observers of those same comets (separately for each instrument used) but also extended --- generally by observer/instrument-to-observer/instrument multiple chain comparisons of overlapping rows of brightness estimates --- to fainter magnitudes reported by observers (whether visual or those using CCD detectors) of other comets, including the ones studied in this paper. By multiply crosschecking time overlaps by the same observers it has been possible, by introducing personal/instrumental corrections, to largely remove major systematic magnitude differences for at least some individuals, who then make up a check list of {\it pivotal\/} observers,\footnote{For example, the total magnitudes that K.\ Kadota, one of many involved in monitoring the motion and brightness of C/2012~S1, reports from his CCD observations with a 25-cm f/5 reflector are fainter than the scale of the adopted photometric system and require for comets of a moderately condensed appearance a correction near $-$1.0~magnitude.} to whose photometric scale the magnitude observations by others are readily linked. Compared to Beyer, most observers underestimate the total brightness. Corrections greater than $\sim$2~magnitudes are, however, suspect, and such data should not generally be employed in light-curve analyses.
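To make the normalization explicit, the following minimal sketch (in Python; not code used in this investigation) computes $H_\Delta$ from a reported total magnitude and applies a personal/instrumental correction of the kind just described. The single correction entry follows the example of K.~Kadota's 25-cm CCD data quoted in the footnote; the magnitude and distance in the usage line are hypothetical.
\begin{verbatim}
import math

# Personal/instrumental corrections (in magnitudes) tying individual
# observers to the adopted standard photometric system.  The single
# entry below is the example quoted in the footnote; a real table
# would hold one entry per pivotal observer/instrument combination.
PERSONAL_CORRECTION = {"Kadota_25cm_f5_CCD": -1.0}

def normalized_magnitude(m_total, delta_au, observer=None):
    """Return H_Delta = m_total - 5 log10(Delta) + correction."""
    h_delta = m_total - 5.0 * math.log10(delta_au)
    if observer is not None:
        h_delta += PERSONAL_CORRECTION.get(observer, 0.0)
    return h_delta

# A hypothetical magnitude-12.3 estimate made at Delta = 2.0 AU:
print(normalized_magnitude(12.3, 2.0, "Kadota_25cm_f5_CCD"))  # 9.79
\end{verbatim}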
\section{Comet C/2012 S1 (ISON)}

To compare comet C/2012 S1 with the two other objects, there is no point in presenting its early light curve, since the reference comets were discovered less than 1.6~AU from the Sun. Besides, the behavior of C/2012 S1 beyond 3~AU was recently investigated by Ferrin (2013). In this paper, the light curve is examined from the time after the comet reemerged in August 2013 from its conjunction with the Sun. A total of 28~magnitude observations, mostly from CCD images, was normalized in accordance with the procedure of Sec.~2. They were made by five observers between August~16 and October~4, or 104 to 55~days before the comet's perihelion on November 28. The observations were difficult, especially before the end of August, when the comet was less than 32$^\circ$ from the Sun. The phase angle stayed between 9$^\circ$ and 29$^\circ$ during the 50~days of observation, having no major effect on the light curve presented in Figure~1.

\section{Comet C/1962 C1 (= 1962 III = 1962{\tiny \bf c})}

Discovered independently by R.\ D.\ Lines and T.\ Seki within 8~hours of each other, this comet was well observed right from the beginning. Its perihelion passage occurred on April 1, 1962. The light curve, presented in Figure~1, is based on 244 brightness estimates by 34~observers, including Beyer (1963). The phase angle was steadily increasing from 34$^\circ$ at the time of the first brightness observation used, on February~7, to 90$^\circ$ some 17~days before perihelion, when the comet was 0.70~AU from the Sun. The Henyey-Greenstein law, modified and applied to cometary dust by Marcus (2007), suggests that if all the light were due to dust, the phase correction would have been between $-$0.5 and $-$0.9 magnitude in this period of time. Forward scattering then began to take over. The phase angle reached 100$^\circ$ about 11~days before perihelion, 110$^\circ$ six days later, and peaked near 115$^\circ$ at \mbox{2--3}~days before perihelion. This is the section of the light curve where the rate of brightness increase accelerated sharply. The phase correction at the peak phase angle is about +0.3~magnitude. If these numbers are valid, the corrected peak brightness would be, relative to much of the preperihelion light curve, about 1~magnitude less prominent. However, most of the near-perihelion data points were obtained in daylight or near sunset and are therefore highly inaccurate. By the time the comet reached perihelion, the phase angle had dropped to $\sim$80$^\circ$; it continued to decrease gradually, attaining 26$^\circ$ at the time of Beyer's last observation 48~days after perihelion, on May 19.
\begin{table}[b]
\noindent
\vspace{0.05cm}
\begin{center}
{\footnotesize
{\bf Table 1}\\[0.08cm]
{\sc Twilight and Daytime Observations of Comet C/1962 C1 by\\A.\,D.\,Thackeray at Radcliffe Observatory, Pretoria,\\Attempted on March 29--April 1, 1962, and\\Bortle's Limiting Daylight Magnitudes.}\\[0.12cm]
\begin{tabular}{l@{\hspace{0.07cm}}r@{\hspace{0.55cm}}c@{\hspace{0.35cm}}c@{\hspace{0.15cm}}c@{\hspace{0.6cm}}c}
\hline\hline\\[-0.25cm]
\multicolumn{2}{@{\hspace{-0.2cm}}c}{Time of} & Time from & \multicolumn{2}{@{\hspace{-0.37cm}}c}{Magnitude (mag)} & \\[-0.03cm]
\multicolumn{2}{@{\hspace{-0.2cm}}c}{observation} & perihelion & \multicolumn{2}{@{\hspace{-0.37cm}}c}{\rule[0.6ex]{2.6cm}{0.4pt}} & Elongation \\[-0.02cm]
\multicolumn{2}{@{\hspace{-0.2cm}}c}{1962 (UT)} & (days) & observed$^{\rm a}$ & limiting & from Sun \\[0.05cm]
\hline\\[-0.22cm]
March & 29.70 & $-$2.96 & \llap{$\sim$}0 & $-$0.4\rlap{$^{\rm b}$} & \llap{1}0$^\circ\!\!$.7 \\
 & 30.67 & $-$1.99 & U & $-$0.7 & 7.9 \\
 & 31.5$\;\:$ & $-$1.16 & U & $-$1.1 & 5.3 \\
April & 1.5$\;\:$ & $-$0.16 & U & $-$1.9 & 2.0 \\[0.05cm]
\hline\\[-0.28cm]
\multicolumn{6}{l}{\parbox{7.5cm}{$^{\rm a}$\, \scriptsize U means the comet was undetected.}}\\[-0.08cm]
\multicolumn{6}{l}{\parbox{8.2cm}{$^{\rm b}$\, \scriptsize Not a daytime observation; Bortle formula not strictly applicable.}}\\[-0.4cm]
\end{tabular}}
\end{center}
\end{table}

One of the observations plotted in Figure 1 was made by A.~D.~Thackeray at the Radcliffe Observatory in Pretoria, South Africa, with an 8-cm refractor on March 29.70 UT, 71 hours before perihelion and only 43~minutes after sunset. The comet was 10$^\circ\!$.7 from the Sun, and the observation, made through a rift in the clouds, was reported by Venter (1962). He remarked that \mbox{Thackeray} also attempted to detect the comet with an 18-cm finder of the 188-cm reflector of the observatory at sunset on March~30 and with binoculars in broad daylight on March~31 and April~1, always unsuccessfully. These observations, positive and negative alike, are interesting to compare with Bortle's (1985) efforts to determine a limiting magnitude $H_{\rm lim}$ for the faintest cometary objects detectable in daylight as a function of the elongation from the Sun (see also Green 1997). While Bortle's test observations were made on Mercury and Venus, he emphasized that, when very slightly defocused, the two planets rather closely mimic the appearance of a daylight comet. He concluded that although his experiments were conducted with 8-cm binoculars, the predicted relationship is not strongly aperture dependent because of the brightness of the sky background. However, the practical result does depend on how well the Sun is occulted during the observation and especially on how the instrument's optical surfaces are protected against direct illumination.
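Bortle's published relationship itself is not reproduced here, but the limiting magnitudes listed in Table~1 vary nearly linearly with the logarithm of the solar elongation, so values at intermediate elongations can be estimated by simple interpolation. The sketch below (in Python) is merely a convenience fit through the four tabulated points, {\it not\/} Bortle's actual formula:
\begin{verbatim}
import math

# Limiting daylight magnitudes vs. solar elongation (Table 1, after
# Bortle 1985).  Piecewise-linear interpolation in log10(elongation);
# a convenience fit, NOT Bortle's published formula.
ELONG = [2.0, 5.3, 7.9, 10.7]       # degrees, increasing
H_LIM = [-1.9, -1.1, -0.7, -0.4]    # magnitudes

def limiting_magnitude(elong_deg):
    x = math.log10(elong_deg)
    xs = [math.log10(e) for e in ELONG]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            f = (x - xs[i]) / (xs[i + 1] - xs[i])
            return H_LIM[i] + f * (H_LIM[i + 1] - H_LIM[i])
    raise ValueError("elongation outside the tabulated range")

print(round(limiting_magnitude(6.0), 2))   # about -1.0
\end{verbatim}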
\begin{table}[b]
\vspace{0.3cm}
\noindent
\begin{center}
{\footnotesize
{\bf Table 2}\\[0.08cm]
{\sc Time from Perihelion and Heliocentric Distance\\for Orbits of Three Investigated Comets.}\\[0.10cm]
\begin{tabular}{c@{\hspace{0.85cm}}c@{\hspace{0.85cm}}c@{\hspace{0.85cm}}c}
\hline\hline\\[-0.22cm]
Time from & \multicolumn{3}{@{\hspace{-0.02cm}}c}{Heliocentric distance (AU)}\\[-0.03cm]
perihelion & \multicolumn{3}{@{\hspace{-0.02cm}}c}{\rule[0.6ex]{6.1cm}{0.4pt}}\\[-0.02cm]
(days) & C/2012 S1 & C/1962 C1 & C/2002 O4\\[0.05cm]
\hline\\[-0.22cm]
$\;\:$0 & 0.012 & 0.031 & 0.776 \\
10 & 0.498 & 0.481 & 0.800 \\
20 & 0.798 & 0.780 & 0.867 \\
30 & 1.050 & 1.032 & 0.965 \\
40 & 1.274 & 1.256 & 1.083 \\
50 & 1.481 & 1.462 & 1.212 \\
60 & 1.674 & 1.655 & 1.347 \\
70 & 1.856 & 1.838 & 1.484 \\
80 & 2.030 & 2.012 & 1.622 \\
90 & 2.197 & 2.179 & 1.760 \\
\llap{1}00 & 2.358 & 2.339 & 1.896 \\
\llap{1}10 & 2.513 & 2.495 & 2.030 \\[0.08cm]
\hline\\[-0.65cm]
\end{tabular}}
\end{center}
\end{table}

The four instances of Thackeray's effort to detect the comet in daylight or twilight are listed in Table~1. The times of his last two observations are not known and are assumed to be early in the afternoon local time. The successful observation on March~29 does not satisfy the conditions for applying Bortle's formula; nevertheless, each of the comparisons provides useful information. We do not know how bright the comet in fact was around perihelion, but the curve fitted through the nearby points in Figure~1 suggests for Thackeray's search times the magnitudes $-$2.2 for March~30, $-$2.5 for March~31, and $-$2.8 for April~1; that is, the comet should have been brighter than the limiting magnitudes by 1.5, 1.4, and 0.9~magnitudes, respectively. However, Thackeray's March~29 estimate is already 1.6~magnitudes fainter than the light curve in Figure~1 suggests, so the disagreement is not surprising. Either Thackeray did not carry out his daytime observations of C/1962~C1 in compliance with Bortle's conditions, or the comet was, in close proximity to the Sun,{\nopagebreak} fainter than the light curve in Figure~1 indicates.

\section{Comet C/2002 O4 (H\"{o}nig)}

The results for C/2002 O4 are taken from my earlier paper (Sekanina 2002), to which the reader is referred in order to learn more about the idiosyncrasies of this object. The phase angle stayed during the observations between 39$^\circ$ and 68$^\circ$, with no major effect on the shape of the light curve, which, however, is not plotted in Figure~1 because the comparison with the other two objects would be meaningless. Since the activity of a comet depends on the Sun's radiation input to the nucleus, the light curves of comets with very different perihelion distances need to be compared against heliocentric distance, not time. This is clearly seen from Table~2, where the relationship between time and heliocentric distance is shown for the three investigated comets. While this relationship is nearly identical for C/2012~S1 and C/1962~C1 because of their similar perihelion distances, the situation is very different for C/2002~O4, whose scale of heliocentric distances is strongly compressed. The comet needs 30 more days than C/2012~S1 to get from 2~AU to perihelion.

\begin{figure*}
\vspace{-2.6cm}
\hspace{-0.4cm}
\centerline{
\scalebox{0.67}{
\includegraphics{f2_2012S1.ps}}}
\vspace{-9.9cm}
\caption{Plot of preperihelion light curves of comets C/2012 S1, C/1962~C1, and C/2002~O4 against heliocentric distance $r$. The total visual magnitude, $H_\Delta$, is again normalized to 1\,AU from the earth.
The perihelion distances of the three comets are, respectively, 0.012\,AU or 2.7\,{$R_{\mbox{\scriptsize \boldmath $\odot$}}$}, 0.031\,AU or 6.7\,{$R_{\mbox{\scriptsize \boldmath $\odot$}}$}, and 0.78\,AU. Their brightness data are plotted with different {\vspace{-0.06cm}}symbols, as indicated. Also depicted are slopes of brightness variations proportional to $r^{-2}$ and $r^{-8}$. The upswing at \mbox{$r < 0.5$\,AU} on the light curve of C/1962~C1 is due in part to forward scattering (at phase angles $>100^\circ$) of light by microscopic dust particles in the coma.{\vspace{0.4cm}}}
\end{figure*}

\section{Brightness Variation with Distance from Sun}

Figure 2 compares the preperihelion light curves of the three comets, with the heliocentric distance being plotted instead of time on the axis of abscissae. For comet C/2002~O4, the discovery magnitude, whose photometric correction is practically impossible to determine, has been omitted from the plot. The figure displays two major differences between the light curves of C/1962~C1 and C/2002~O4 (which happened to be one of the brighter disintegrating comets). The first difference is in the brightness:\ at any heliocentric distance, C/1962~C1 was intrinsically brighter by at least 2~magnitudes. The second difference is in the shape of the light curve:\ the brightness of C/1962 C1 kept continuously climbing with its approach to the Sun, while that of C/2002~O4 began to stall about one month before perihelion, when the comet was at a heliocentric distance of about 0.96~AU, eventually bending downward at an accelerating rate.

The relation between C/2012~S1 and C/1962~C1 is very similar to that in Figure~1. The forthcoming magnitude data will determine whether the light curve of C/2012~S1 will essentially coincide with that of C/1962~C1 or will extend below or above it. The data will also show whether the two light curves will or will not be nearly parallel. Most importantly, the upcoming observations should reveal any possible tendency toward brightness stalling, which would be a sign of disappointing performance near the Sun. By early October, no such effect is obvious from the nearly two-month arc covered by the post-conjunction data set. It is seen from Figure~2 that the rate of brightening with decreasing heliocentric distance is a little steeper than $r^{-2}$, an encouraging indicator. Another gratifying sign is that comet C/2012~S1 is intrinsically much brighter at 1.6~AU from the Sun than C/2002~O4 was at 1.4~AU. And if the early part of the light curve of this latter object was a result of an outburst (Sekanina 2002), then C/2012~S1 is {\it considerably\/} brighter than was C/2002~O4 at the same distance.

\section{Nongravitational Effects in Orbital Motion}

It is true, though odd, that the light curve of comet C/2002~O4 covers a time interval two weeks longer than the arc covered by the astrometric observations. This is partly because the astrometry started only five days after discovery, but mainly because the comet's brightness could still be estimated after its nuclear condensation disappeared and there was nothing to bisect for the position. The loss of the condensation is the most ominous attribute of disintegrating comets. Unfortunately, this unmistakable sign of imminent demise sets in rather suddenly and near the end of a rapidly progressing process, so it is definitely not an early warning sign.
\begin{table}[b]
\noindent
\begin{center}
{\footnotesize
{\bf Table 3}\\[0.08cm]
{\sc Original Reciprocal Semimajor Axis for Comet C/2002 O4\\As Function of Orbital Solution's End Date.}\\[0.10cm]
\begin{tabular}{l@{\hspace{0.12cm}}c@{\hspace{0cm}}r@{\hspace{1.23cm}}c@{\hspace{0.93cm}}c}
\hline\hline\\[-0.22cm]
\multicolumn{3}{@{\hspace{-0.9cm}}c}{End date} & Original reciprocal & Number \\[-0.03cm]
\multicolumn{3}{@{\hspace{-0.9cm}}c}{of orbital} & semimajor axis, & of observa- \\[-0.02cm]
\multicolumn{3}{@{\hspace{-0.9cm}}c}{solution} & $(1/a)_{\rm orig}$\,(AU$^{-1}$) & tions used \\[0.05cm]
\hline\\[-0.22cm]
2002 & Sept. & 2 & $-$0.000\,520$\;\pm\;$0.000\,096 & 946 \\
 & & 10 & $-$0.000\,694$\;\pm\;$0.000\,054 & 984 \\
 & & 13 & $-$0.000\,717$\;\pm\;$0.000\,031 & \llap{1}088 \\
 & & 23 & $-$0.000\,772$\;\pm\;$0.000\,021 & \llap{1}135 \\[0.08cm]
\hline\\[-0.6cm]
\end{tabular}}
\end{center}
\end{table}

A more timely indicator of the impending termination of a comet's existence is a {\it major, progressively increasing\/} deviation of its orbital motion from the gravitational law. Although detection of these {\it nongravitational perturbations\/} requires a fairly high-quality orbit determination, they begin to show up much earlier than the condensation's disappearance. To detect this effect, there is no need to solve for the nongravitational parameters (often a doomed effort); a straightforward approach is to look for temporal variations in the reciprocal semimajor axis, $1/a$, derived from different orbital arcs. A {\it clear\/} systematic trend toward smaller $1/a$ with time, as the end date of the observations included in a set of gravitational orbital solutions is stepwise advanced, is a sign that the comet is in trouble. This trend means that the comet orbits the Sun in a gravity field of decreasing magnitude, the deviations apparently caused by a momentum transferred to the eroding nucleus by sublimation- and fragmentation-driven forces, in the final stage probably also by solar radiation pressure. Negative values of the original reciprocal semimajor axis $(1/a)_{\rm orig}$ are particularly worrisome. One has to be sure, however, that this is not due to inaccurate data and that the magnitude of the effect clearly exceeds the errors of observation.

Even though comet C/2002~O4 was under observation for only about two months, Marsden (2002a, 2002b) successively derived four general orbital solutions. Each of them used astrometric observations that covered an orbital arc starting on July~27 but ending at different times. The resulting values of $(1/a)_{\rm orig}$ that Marsden obtained were summarized in my previous work on the comet (Sekanina 2002) and are presented in abbreviated form in Table~3. The hyperbolic excess, driven by the nongravitational forces, which was already enormous in the first solution, grew further by $\sim$250~units of 10$^{-6}$\,AU$^{-1}$ as the end date of the subsequent solutions advanced by only three weeks.
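The magnitude of this drift can be checked directly against the entries of Table~3. The following minimal sketch (in Python) reproduces the quoted figure from the nominal endpoint values; it is an illustration only, not the orbit-determination software:
\begin{verbatim}
# Drift of (1/a)_orig for C/2002 O4 from the nominal values of
# Table 3; end dates counted in days from 2002 Sept. 2, values in
# units of 1e-6 AU^-1.
days  = [0, 8, 11, 21]
inv_a = [-520, -694, -717, -772]

drift = inv_a[-1] - inv_a[0]   # change over the whole span
span  = days[-1] - days[0]     # 21 days, i.e., three weeks
print(drift, "units over", span, "days")   # -252 units over 21 days
\end{verbatim}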
\begin{table}[t]
\noindent
\vspace{-0.25cm}
\begin{center}
{\footnotesize
{\bf Table 4}\\[0.08cm]
{\sc Original Reciprocal Semimajor Axis for Comet C/2012~S1\\As Function of Orbital Solution's End Date.}\\[0.10cm]
\begin{tabular}{l@{\hspace{0.08cm}}c@{\hspace{0.02cm}}r@{\hspace{0.33cm}}c@{\hspace{-0.04cm}}c@{\hspace{0.04cm}}l}
\hline\hline\\[-0.22cm]
\multicolumn{3}{@{\hspace{0cm}}c}{End date} & Original reciprocal & Number & \\[-0.03cm]
\multicolumn{3}{@{\hspace{0cm}}c}{of orbital} & semimajor axis, & of observa- & \\[-0.02cm]
\multicolumn{3}{@{\hspace{0cm}}c}{solution} & $(1/a)_{\rm orig}$\,(AU$^{-1}$) & tions used & $\;$Reference$^{\rm a}$\\[0.05cm]
\hline\\[-0.22cm]
2012 & Oct. & 25 & $+0.000\:056\,9\pm0.000\:016\,2$ & $\;\:$418 & MPC\,80809 \\
 & Dec. & 24 & $+0.000\:039\,4\pm0.000\:008\,8$ & 1000 & MPC\,81859 \\
2013 & Jan. & 24 & $+0.000\:045\,2\pm0.000\:005\,4$ & 1612 & MPC\,82319 \\
 & Feb. & 20 & $-0.000\:000\,6\pm0.000\:003\,6$ & 2372 & MPC\,82720 \\
 & Mar. & 23 & $-0.000\:012\,2\pm0.000\:002\,5$ & 3181 & MPC\,83144 \\
 & Apr. & 20 & $-0.000\:000\,7\pm0.000\:002\,0$ & 3442 & MPC\,83520 \\
 & June & 8 & $+0.000\:007\,1\pm0.000\:001\,6$ & 3722 & MPC\,84317 \\
 & Aug. & 18 & $+0.000\:008\,5\pm0.000\:001\,6$ & 3740 & MPC\,84625 \\
 & Aug. & 23\rlap{$^{\rm b}$} & $+0.000\:028\,6\pm0.000\:001\,4$ & 3746 & E\,2013-Q27 \\
 & Sept. & 6\rlap{$^{\rm b}$} & $+0.000\:009\,2\pm0.000\:001\,2$ & 3897 & E\,2013-R59 \\
 & Sept. & 16\rlap{$^{\rm b}$} & $+0.000\:008\,6\pm0.000\:001\,1$ & 3997 & MPC\,84932 \\
 & Sept. & 30\rlap{$^{\rm b}$} & $+0.000\:005\,4\pm0.000\:000\,7$ & 4308 & E\,2013-S75 \\
\hline\\[-0.22cm]
\multicolumn{6}{l}{\parbox{8.26cm}{$^{\rm a}$\,\scriptsize MPC = Minor Planet Circular; E = Minor Planet Electronic Circular (MPEC).}}\\[0.12cm]
\multicolumn{6}{l}{\parbox{8.2cm}{$^{\rm b}$\,\scriptsize Start date for this solution was 2011 Sept.\ 30.}}\\[0.3cm]
\end{tabular}}
\end{center}
\end{table}

While the nominal values of $(1/a)_{\rm orig}$ leave no doubt that comet C/2002~O4 could not have had its aphelion nearer than the Oort Cloud, an effort aimed at interpreting the steep rate of change in $(1/a)_{\rm orig}$ in Table~3 led the author to a somewhat more ambiguous conclusion (\mbox{Sekanina} 2002). While the Oort Cloud origin was still the most likely, the uncertainty of the result was much too high. On the other hand, I am unaware of a candidate better than C/2002~O4 for a disintegrating Oort Cloud comet.

For comparison, the original reciprocal semimajor axis from a dozen orbital solutions for C/2012~S1 is listed in Table~4; their start date (that is, the date of the first astrometric observation used) is December~28, 2011, unless stated otherwise. Overall, no clear trend is perceived. However, the last four entries are based on an orbital arc extended further back in time, after the comet's 11~images were identified on CCD frames taken at the Haleakala Pan-STARRS Station on September~30, November~10 and 26, and December~9, 2011 (see MPEC 2013-Q27). When one considers only these entries, a systematic negative trend is apparent in $(1/a)_{\rm orig}$, at an average rate of about 18~units of 10$^{-6}$\,AU$^{-1}$ per month of the end-date advance. This is much less than the rate for C/2002~O4, and most of the change appears to have occurred between the end dates of August~23 and September~6; the accuracy of this result is, however, also much higher than for C/2002~O4. The future development of this potentially unsettling issue needs to be monitored.
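The quoted average rate follows from the same kind of endpoint estimate applied to the last four entries of Table~4 (again a minimal Python sketch, an illustration only):
\begin{verbatim}
# Average drift of (1/a)_orig for C/2012 S1, from the last four
# entries of Table 4; end dates in days from 2013 Aug. 23, values
# in units of 1e-6 AU^-1.
days  = [0, 14, 24, 38]
inv_a = [28.6, 9.2, 8.6, 5.4]

rate = 30.0 * (inv_a[-1] - inv_a[0]) / (days[-1] - days[0])
print(round(rate))   # about -18 units of 1e-6 AU^-1 per month
\end{verbatim}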
\section{Conclusions Based on Observations Up to Early October 2013} Comparison of C/2012~S1 with two very different comets shows that, as of early October, its intrinsic brightness is close to that of C/1962~C1, another Oort Cloud comet, which survived perihelion at a distance 4~{$R_{\mbox{\scriptsize \boldmath $\odot$}}$} greater than is C/2012~S1's. The forthcoming weeks will show whether one can become more confident about potential similarities between these two objects. On the other hand, indications are that C/2012~S1 will intrinsically be significantly brighter in the coming several days than C/2002~O4 was at the same heliocentric distance shortly after discovery (\mbox{$H_\Delta = 10.7$}~AU; equivalent to October~12 for C/2012~S1). On the other hand, comet C/1999~S4, perhaps the most prominent disintegrating comet (other than the disintegrating sungrazers), was between 2.4 and 2.2~AU from the Sun just about as bright intrinsically as C/2012~S1. However, the rate of brightening of C/1999~S4 between 2.2 and 1.6~AU, where it was picked up after conjunction with the Sun, was so sluggish that by 1.6~AU (around October~3) comet C/2012~S1 was already significantly brighter. Throughout the remaining preperihelion orbital arc of C/2012~S1, two issues to pay close attention to --- among others that may come to the forefront of interest --- are:\ (i)~its rate of brightening (or fading?), as mentioned above, and (ii)~the exact nature of its orbital motion, especially in terms of systematic changes in the original semimajor axis. An unwelcome sign would be the need to introduce the nongravitational terms into the equations of motion in order to fit satisfactorily the newly available astrometric observations. If the slight tendency of sliding toward ever more negative values continues or even accelerates, concerns about the prospects for an impressive show near perihelion, will be warranted. In approximately weekly intervals until mid-November, {\it Status Update Reports\/} (SURs) will be appended to this paper, based on results from the most recent relevant observations. The SURs will include updates to Figures~1 and 2 and to a truncated Table 4, which will include only the orbital solutions with the start date of September 30, 2011.\\[-0.2cm] This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\\[-0.2cm] \begin{center} {\footnotesize REFERENCES} \end{center} \vspace{0.1cm} {\footnotesize \parbox{8.6cm}{Beyer, M. (1963). Physische Beobachtungen von Kometen. XIII. {\hspace*{0.3cm}}{\it Astron. Nachr.\/} {\bf 287}, 153--167.}\\[0.04cm] \parbox{8.6cm}{Bortle, J. E. (1985). The Observation of Bodies in Close Proximity {\hspace*{0.3cm}}to the Sun. {\it Int. Comet Quart.\/} {\bf 7}, 7--11.}\\[0.04cm] \parbox{8.6cm}{Ferrin, I. (2013). The Location of Oort Cloud Comets C/2011~L4 {\hspace*{0.3cm}}Panstarrs and C/2012~S1 ISON, on a Comets' Evolutionary Di- {\hspace*{0.3cm}}agram. {\it eprint} {\tt arXiv:1306.5010.}}\\[0.04cm] \parbox{8.6cm}{Green, D. W. E., ed. (1997). Daytime Observations of Comets, in {\hspace*{0.3cm}}{\it The ICQ Guide to Observing Comets\/}, Smithsonian Astrophys- {\hspace*{0.3cm}}ical Observatory, Cambridge, Mass., pp.\ 90--92.}\\[0.04cm] \parbox{8.6cm}{Marcus, J. N. (2007). Forward-Scattering Enhancement of Comet {\hspace*{0.3cm}}Brightness. I.\ Background and Model. {\it Int. Comet Quart.\/} {\bf 29}, {\hspace*{0.3cm}}39--66.}\\[0.04cm] \parbox{8.6cm}{Marsden, B. 
\clearpage \mbox{ }\\[-0.79cm] \begin{center} {\large \bf STATUS UPDATE REPORT \#1}\\ {\large \bf (October 15, 2013)}\\ \end{center} {\normalsize This {\it SUR\/} appends results from the relevant observations of C/2012~S1 made between October 4, the cutoff date in the paper, and October~13. Numerous reports of the comet's total magnitude measurements have been made available, mostly via CCD imaging. The number of observations used in the updated light curve increased from 28 in the paper to 44 now, based on additional data from six observers. The photometric-system correction for one of the six has been refined. The updated light curve is presented in Figure~SUR1-1 in both versions, with the normalized magnitude $H_\Delta$ plotted against time in the upper panel and against heliocentric distance in the lower panel. A few of the new data points date before October~4. The figure shows that the light curve of C/2012~S1 is currently running a little more than 1~magnitude below C/1962~C1. \begin{figure}[b] \vspace{-1.4cm} \hspace{-0.27cm} \centerline{ \scalebox{0.475}{ \includegraphics{f1_2012S1-SUR1.ps}}} \vspace{-8.5cm} \hspace{-0.305cm} \centerline{ \scalebox{0.475}{ \includegraphics{f2_2012S1-SUR1.ps}}} \vspace{-6.9cm} \noindent {\footnotesize {\bf Figure SUR1-1.} Light curve of C/2012 S1, as of October 13, 2013, plotted against time, in comparison with the light curve of C/1962~C1 (upper panel); and plotted against heliocentric distance, in comparison with the light curves of C/1962~C1 and C/2002~O4 (lower panel).}{\vspace{2.1cm}\pagebreak} \end{figure} Since the light curve of C/1962 C1 in the lower panel of Fig.~SUR1-1 shows that between about 1.5 and 1.2~AU from the Sun the comet's intrinsic total brightness varied with heliocentric distance $r$ approximately as $r^{-8}$, it is tempting to estimate the power $n$ of heliocentric distance with which the brightness of C/2012~S1 varies at present. This is readily found from the day-to-day variations in the normalized magnitude $H_\Delta$. In a parabolic approximation, the expression for $n$ in the brightness law $r^{-n}$ is related to the daily rate of intrinsic brightening (or fading), $dH_\Delta/dt$, by
\begin{equation}
n = \mp \, \frac{\sqrt{2}}{5 k \log e} \, r^{\frac{3}{2}} \!\! \left( 1 \! - \! \frac{q}{r} \right)^{\!\!-\!\frac{1}{2}} \! \frac{dH_\Delta}{dt} \simeq \mp \, 37.86 \, r^{\frac{3}{2}} \frac{dH_\Delta}{dt},
\end{equation}
where the minus sign applies before perihelion and the plus sign after {\vspace{-0.05cm}}perihelion, $k$ is the Gaussian gravitational constant, 0.0172021~AU$^{\frac{3}{2}}\!$/day, \mbox{$\log e = 0.434294\,$\ldots}, and $q$ is the comet's perihelion distance in AU. The simplified expression on the right of (1) offers a satisfactory approximation as long as \mbox{$r \gg q$}. \begin{table}[h] \noindent \vspace{-0.1cm} \begin{center} {\footnotesize {\bf Table SUR1-1}\\[0.08cm] {\sc Original Reciprocal Semimajor Axis for Comet C/2012~S1\\As Function of Orbital Solution's End Date.}\\[0.10cm] \begin{tabular}{l@{\hspace{0.08cm}}c@{\hspace{0.02cm}}r@{\hspace{0.33cm}}c@{\hspace{-0.04cm}}c@{\hspace{0.04cm}}l} \hline\hline\\[-0.22cm] \multicolumn{3}{@{\hspace{0cm}}c}{End date} & Original reciprocal & Number & \\[-0.03cm] \multicolumn{3}{@{\hspace{0cm}}c}{of orbital} & semimajor axis, & of observa- & \\[-0.02cm] \multicolumn{3}{@{\hspace{0cm}}c}{solution} & $(1/a)_{\rm orig}$\,(AU$^{-1}$) & tions used & $\;$Reference$^{\rm a}$\\[0.05cm] \hline\\[-0.22cm] 2013 & Aug. & 23 & $+0.000\:028\,6\pm0.000\:001\,4$ & 3746 & E\,2013-Q27 \\ & Sept. & 6 & $+0.000\:009\,2\pm0.000\:001\,2$ & 3897 & E\,2013-R59 \\ & & 16 & $+0.000\:008\,6\pm0.000\:001\,1$ & 3997 & MPC\,84932 \\ & & 30 & $+0.000\:005\,4\pm0.000\:000\,7$ & 4308 & E\,2013-S75 \\ & Oct. & 14 & $+0.000\:003\,8\pm0.000\:000\,6$ & 4677 & E\,2013-T11\rlap{0} \\[0.09cm] \hline\\[-0.22cm] \multicolumn{6}{l}{\parbox{8.26cm}{$^{\rm a}$\,\scriptsize MPC = Minor Planet Circular; E = Minor Planet Electronic Circular (MPEC).}}\\[-0.15cm] \end{tabular}} \end{center} \end{table} An average rate of intrinsic brightening between October~4 and 13, at heliocentric distances 1.59 to 1.40~AU, amounted to \mbox{$-0.030 \pm 0.010$}~mag/day, which, with an average heliocentric distance of 1.50~AU in this period of time, is equivalent to \mbox{$n \simeq 2.1 \pm 0.7$}, a considerably slower rate of upswing than was displayed by C/1962~C1. At this rate of brightening, C/2012~S1 would be appreciably fainter intrinsically than C/1962~C1 near the Sun. However, this trend may not necessarily continue. And even if it does, the smaller perihelion distance of C/2012~S1 and its somewhat stronger forward-scattering effect near perihelion, with the phase angle peaking at 128$^\circ$ around December~1, should help offset the shortfall. The big question is:\ To what extent?
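As a quick numerical cross-check of the simplified form of Eq.~(1), the following short Python sketch (an illustration added here, not part of the original photometric reduction; all input values are the ones quoted above) reproduces both the constant 37.86 and the estimate \mbox{$n \simeq 2.1$}:

\begin{verbatim}
# Check of Eq. (1) in its simplified (r >> q) form
import math

k_gauss = 0.0172021     # Gaussian grav. constant, AU^(3/2)/day
const = math.sqrt(2.0) / (5.0 * k_gauss * math.log10(math.e))

r = 1.50                # mean heliocentric distance, AU
dHdt = -0.030           # brightening rate, mag/day (preperihelion)

n = -const * r**1.5 * dHdt   # minus sign: before perihelion
print(f"constant = {const:.2f} (text: 37.86)")
print(f"n = {n:.1f} (text: 2.1 +/- 0.7)")
\end{verbatim}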
A new orbital solution that includes published astrometric observations up to October 14 has been added to the previous ones in Table SUR1-1. The new solution shows that the negative trend in the original reciprocal semimajor axis continues but is not accelerating. Starting with the end dates in early September, {\vspace{-0.03cm}}the average rate amounts to about \mbox{$-4 \!\times\! 10^{-6}$}~AU$^{-1}$ per month, some two orders of magnitude below that for C/2002~O4. To appraise the comet's current status, I would suggest that, overall, C/2012~S1 is looking reasonably healthy but not exuberant.} \clearpage \mbox{ }\\[-0.8cm] \begin{center} {\large \bf STATUS UPDATE REPORT \#2}\\ {\large \bf (October 22, 2013)}\\ \end{center} {\normalsize This status update report covers the period of time from October~13, the end date of report \#1, to October~21. Very few new total magnitude observations of C/2012~S1 were published.
The count of the data points used in the upgraded light curve, both versions of which are plotted in Figure~SUR2-1, is now 49, based on reports from seven observers. \begin{figure}[b] \vspace{-1.15cm} \hspace{-0.27cm} \centerline{ \scalebox{0.475}{ \includegraphics{f1_2012S1-SUR2.ps}}} \vspace{-8.3cm} \hspace{-0.31cm} \centerline{ \scalebox{0.475}{ \includegraphics{f2_2012S1-SUR2.ps}}} \vspace{-6.9cm} \noindent {\footnotesize {\bf Figure SUR2-1.} Light curve of C/2012 S1, as of October 21, 2013, plotted against time, in comparison with the light curve of C/1962~C1 (upper panel); and plotted against heliocentric distance, in comparison with the light curves of C/1962~C1 and C/2002~O4 (lower panel).}{\vspace{4.64cm}{\pagebreak}} \end{figure} The rate at which the comet's intrinsic brightness has lately been increasing is fittingly described as a crawl. An average daily rate of brightening over the period of October~8--21 equals \mbox{$dH_{\!\Delta}/dt = -0.036 \pm 0.012$}~mag/day, which, with the mean heliocentric distance of 1.39~AU, gives for an equivalent average power of intrinsic brightness variation with heliocentric distance \mbox{$n = 2.2 \pm 0.7$}, practically the same as before (see {\it SUR\,\#1\/}). This means that the comet's activity generates about as much gas (that radiates in the visible spectral region) and dust as is lost, respectively, by photodissociation and escape of dust particles into the tail. Figure~SUR2-1 shows that at a heliocentric distance of 1.26~AU, the comet is about 2~magnitudes fainter intrinsically than comet C/1962~C1 and only marginally brighter than comet C/2002~O4. The latter comparison should not, however, be interpreted to indicate that C/2012~S1 is about to fizzle, as its light curve shows no clear signs of having peaked. On the other hand, excepting a future outburst, the comet's continuing lethargic brightening does not make the prospects for its spectacular appearance near the Sun any likelier than they were a week ago. \begin{table}[h] \noindent \vspace{-0.1cm} \begin{center} {\footnotesize {\bf Table SUR2-1}\\[0.08cm] {\sc Original Reciprocal Semimajor Axis for Comet C/2012~S1\\As Function of Orbital Solution's End Date.}\\[0.10cm] \begin{tabular}{l@{\hspace{0.04cm}}c@{\hspace{0.02cm}}r@{\hspace{0.3cm}}c@{\hspace{-0.04cm}}c@{\hspace{0.04cm}}l} \hline\hline\\[-0.22cm] \multicolumn{3}{@{\hspace{0cm}}c}{End date} & Original reciprocal & Number & \\[-0.03cm] \multicolumn{3}{@{\hspace{0cm}}c}{of orbital} & semimajor axis, & of observa- & \\[-0.02cm] \multicolumn{3}{@{\hspace{0cm}}c}{solution} & $(1/a)_{\rm orig}$\,(AU$^{-1}$) & tions used & $\;$Reference$^{\rm a}$\\[0.05cm] \hline\\[-0.22cm] 2013 & Aug. & 23 & $+0.000\:028\,6\pm0.000\:001\,4$ & 3746 & E\,2013-Q27 \\ & Sept. & 6 & $+0.000\:009\,2\pm0.000\:001\,2$ & 3897 & E\,2013-R59 \\ & & 16 & $+0.000\:008\,6\pm0.000\:001\,1$ & 3997 & MPC\,84932 \\ & & 30 & $+0.000\:005\,4\pm0.000\:000\,7$ & 4308 & E\,2013-S75 \\ & Oct. & 14 & $+0.000\:003\,8\pm0.000\:000\,6$ & 4677 & E\,2013-T11\rlap{0} \\ & & 15 & $+0.000\:003\,6\pm0.000\:000\,5$ & 4688 & MPC\,85336 \\ & & 21 & $+0.000\:004\,6\pm0.000\:000\,5$ & 4789 & E\,2013-U17 \\[0.09cm] \hline\\[-0.22cm] \multicolumn{6}{l}{\parbox{8.26cm}{$^{\rm a}$\,\scriptsize MPC = Minor Planet Circular; E = Minor Planet Electronic Circular (MPEC).}}\\[-0.1cm] \end{tabular}} \end{center} \end{table} Two new orbital solutions have become available since {\it SUR\,\#1\/}. Together with the previous solutions, whose start date was September~30, 2011, they are listed in Table SUR2-1.
The most recent one is based on published astrometric observations up to October~21. The new data for the original reciprocal semimajor axis from the four most recent runs show that the trend toward smaller values has essentially stopped, which means that the sublimation-driven nongravitational perturbations of the comet's orbital motion have not increased with time in the past three weeks, perhaps due in part to the low activity. The trend in the evolution of activity of C/2012~S1 during the past week tends to reinforce the author's statement about the comet's health, as expressed in {\it SUR\,\#1\/}. The comet continues to brighten at a sluggish rate, but has not run out of breath. \clearpage \mbox{ }\\[-0.97cm] \begin{center} {\large \bf STATUS UPDATE REPORT \#3}\\ {\large \bf (November 4, 2013)}\\ \end{center} {\normalsize This status update report covers the period of time from October~21, the end date of report \#2, to November 2. The number of observations has in this period been steadily increasing, with new developments in the comet's activity now apparent. The count of the total-magnitude determinations used in the upgraded light curve, both versions of which are plotted in Figure~SUR3-1, is now 92, based on reports from thirteen observers. Comet C/2012~S1 is currently about 3~magnitudes intrinsically fainter than was C/1962~C1 at the same heliocentric distance. Over the past week or so, the light curve of C/2012~S1 has nearly coincided with that of C/2002~O4. In order to avoid an overlap, the individual data points for C/2002~O4 were replaced with their mean curve in the lower panel. The coincidence should not be interpreted to indicate that C/2012~S1 is near collapse. It cannot even be ruled out that the comet might partly recover its activity. \begin{figure}[b] \vspace{-2cm} \hspace{-0.27cm} \centerline{ \scalebox{0.475}{ \includegraphics{f1_2012S1-SUR3.ps}}} \vspace{-8.3cm} \hspace{-0.31cm} \centerline{ \scalebox{0.475}{ \includegraphics{f2_2012S1-SUR3.ps}}} \vspace{-6.85cm} \noindent {\footnotesize {\bf Figure SUR3-1.} Light curve of C/2012 S1, as of November 2, 2013, plotted against time, in comparison with the light curve of C/1962~C1 (upper panel); and plotted against heliocentric distance, in comparison with the light curves of C/1962~C1 and C/2002~O4 (lower panel).} \end{figure} It appears that the comet's intrinsic brightness has nearly stagnated ever since $\sim$October 13. An average daily rate of brightening over the period of October~20--November~2 equals \mbox{$dH_{\!\Delta}/dt = -0.023 \pm 0.009$}~mag/day, equivalent at an average of 1.11~AU from the Sun to a~power of heliocentric distance of \mbox{$n = 1.0 \pm 0.4$}, which{\vspace{0.2cm}\pagebreak} means that the total cross-sectional area of ejecta in the coma has recently been declining and the comet has failed to fully resupply it and compensate for the losses. This conclusion is fundamentally consistent with the results from water production measurements made on seven occasions between September~14 and October~28 (D.\,Schleicher, IAUC 9260; H.\,Weaver et al., CBET 3680; J.\,V.\,Keane et al., IAUC 9261; M.\,J.\,Mumma et al., IAUC 9261; and N.\,Dello Russo et al., CBET 3686) and with the results from $Af\rho$ measurements made on five occasions between October 5 and 27 (A.\,Fitzsimmons and P.\,Lacerda, IAUC 9261).
To the extent that the data points resulting from three different methods of measuring H$_2$O in the coma can be combined, they show that the water production rate has been stalling at 10$^{28.17 \pm 0.13}$\,molecules/s over the more than six-week period. Assuming that the sublimation rate at a given heliocentric distance depends only on the Sun's zenith angle as seen from the nucleus, an integration of the modeled sublimation rate over the entire sunlit hemisphere suggests that in mid-September, at 1.95~AU from the Sun, the total water production area was 9.5~km$^2$, while by late October, at 1.08~AU from the Sun, it was reduced to 2.4~km$^2$. The equivalent diameters are, respectively, 2.46 and 1.24~km. Fitzsimmons and Lacerda find from the quantity $Af\rho$ that over the three-week period, when the comet was between 1.57 and 1.11~AU from the Sun, the amount of dust in the coma was increasing as $r^{-0.3}$. \begin{table}[h] \noindent \vspace{-0.2cm} \begin{center} {\footnotesize {\bf Table SUR3-1}\\[0.08cm] {\sc Original Reciprocal Semimajor Axis for Comet C/2012~S1\\As Function of Orbital Solution's End Date.}\\[0.10cm] \begin{tabular}{l@{\hspace{0.04cm}}c@{\hspace{0.02cm}}r@{\hspace{0.3cm}}c@{\hspace{-0.04cm}}c@{\hspace{0.04cm}}l} \hline\hline\\[-0.22cm] \multicolumn{3}{@{\hspace{0cm}}c}{End date} & Original reciprocal & Number & \\[-0.03cm] \multicolumn{3}{@{\hspace{0cm}}c}{of orbital} & semimajor axis, & of observa- & \\[-0.02cm] \multicolumn{3}{@{\hspace{0cm}}c}{solution} & $(1/a)_{\rm orig}$\,(AU$^{-1}$) & tions used & $\;$Reference$^{\rm a}$\\[0.05cm] \hline\\[-0.22cm] 2013 & Aug. & 23 & $+0.000\:028\,6\pm0.000\:001\,4$ & 3746 & E\,2013-Q27 \\ & Sept. & 6 & $+0.000\:009\,2\pm0.000\:001\,2$ & 3897 & E\,2013-R59 \\ & & 16 & $+0.000\:008\,6\pm0.000\:001\,1$ & 3997 & MPC\,84932 \\ & & 30 & $+0.000\:005\,4\pm0.000\:000\,7$ & 4308 & E\,2013-S75 \\ & Oct. & 14 & $+0.000\:003\,8\pm0.000\:000\,6$ & 4677 & E\,2013-T11\rlap{0} \\ & & 15 & $+0.000\:003\,6\pm0.000\:000\,5$ & 4688 & MPC\,85336 \\ & & 21 & $+0.000\:004\,6\pm0.000\:000\,5$ & 4789 & E\,2013-U17 \\ & & 28 & $+0.000\:007\,0\pm0.000\:000\,5$ & 4978 & E\,2013-U73 \\ & Nov. & 2 & $+0.000\:009\,2\pm0.000\:000\,5$ & 5194 & E\,2013-V07 \\[0.09cm] \hline\\[-0.22cm] \multicolumn{6}{l}{\parbox{8.26cm}{$^{\rm a}$\,\scriptsize MPC = Minor Planet Circular; E = Minor Planet Electronic Circular (MPEC).}}\\[-0.19cm] \end{tabular}} \end{center} \end{table} In the light of discouraging news on the activity of C/2012~S1, it is of particular interest to see whether the comet's nucleus has still been holding together, with no perturbations showing up in its gravitational motion. Over the period since October 21, two new orbital solutions became available, as seen from Table SUR3-1. There was no need to introduce the nongravitational terms into the equations of motion, and the original reciprocal semimajor axis actually continued to increase very slightly, suggesting that, as of November~2, the nucleus was in good shape. In summary, although the performance of C/2012 S1 is close to anemic, there still is no hard evidence that the comet is about to fall apart. However, prospects of a spectacular show near perihelion appear now to be less likely than ever before. \clearpage \mbox{ }\\[-0.9cm] \begin{center} {\large \bf STATUS UPDATE REPORT \#4}\\ {\large \bf (November 12, 2013)}\\ \end{center} {\normalsize This status update report, the last of the series, covers the period of time from November~2, the end date of report \#3, to November 10.
The count of the total-magnitude determinations used for the light curve from the period of time starting on August~16 is now 111, based on reports by fifteen observers. The updated light curve, both versions of which are plotted in Figure~SUR4-1, shows a new, encouraging sign:\ after about two weeks of stagnation and more than three weeks of near-stagnation, a few days before the end of October the comet's intrinsic brightening resumed and has now been sustained at a modest rate of \mbox{$\langle dH_\Delta/dt \rangle = -0.065 \pm 0.007$}~mag/day, judging from 39 observations spanning the two weeks between October~27 and November~10, which translates into an average variation of \mbox{$r^{-2.24 \pm 0.24}$}. Also significantly, the comet's light curve now runs parallel to, though nearly 3~magnitudes below, the light curve of C/1962~C1. On the other hand, C/2012~S1 is now, in terms of heliocentric distance, beyond the point of disintegration of C/2002~O4. ({\bf See an alert below, dated November~14.}) \begin{figure}[b] \vspace{-1cm} \hspace{-0.27cm} \centerline{ \scalebox{0.475}{ \includegraphics{f1_2012S1-SUR4.ps}}} \vspace{-8.6cm} \hspace{-0.31cm} \centerline{ \scalebox{0.475}{ \includegraphics{f2_2012S1-SUR4.ps}}} \vspace{-6.95cm} \noindent {\footnotesize {\bf Figure SUR4-1.} Light curve of C/2012 S1, as of November 10, 2013, plotted against time, in comparison with the light curve of C/1962~C1 (upper panel); and plotted against heliocentric distance, in comparison with the light curves of C/1962~C1 and C/2002~O4 (lower panel).} ({\bf Updated November 14.}) \end{figure} The brightening is consistent with additional information on the activity of C/2012~S1 that became available{\nopagebreak} in the past several days. C.~Opitom et al.\ (CBET 3693) reported that the production of gas, such as C$_2$ and CN, has increased rapidly since November~3. Judging from the OH emission, the water production grew as well, but because of large error bars, this increase only slightly exceeded 1$\sigma$ between October~31 and November~5. No increase was detected in the production rate of dust. Very recently, H.~A.~Weaver et al.\ (web message) reported a water production rate slightly exceeding \mbox{$2 \times 10^{28}$}\,molecules/s, derived from Hubble Space Telescope observations of the OH(0,0) band during November~1, comparable to the rates they measured using the same technique on October~8 and 21 (CBET 3680). However, employing the Keck-2 telescope, L.~Paganini et al.\ (IAUC 9263) determined from the near-infrared H$_2$O emissions a water production rate of \mbox{$(3.1 \pm 0.2) \times 10^{28}$}\,molecules/s on November~7, at least twice as high as on October~22--25 (IAUC 9261). This increase implies an $r^{-2.3 \pm 0.5}$ variation. Some issues related to the comet's overall light curve, from 9.4~AU at the end of September 2011 down to less than 0.8~AU before perihelion, are briefly addressed in the Appendix to this paper, which follows.
\begin{table}[h] \noindent \vspace{-0.3cm} \begin{center} {\footnotesize {\bf Table SUR4-1}\\[0.08cm] {\sc Original Reciprocal Semimajor Axis for Comet C/2012~S1\\As Function of Orbital Solution's End Date.}\\[0.10cm] \begin{tabular}{l@{\hspace{0.04cm}}c@{\hspace{0.02cm}}r@{\hspace{0.3cm}}c@{\hspace{-0.04cm}}c@{\hspace{0.04cm}}l} \hline\hline\\[-0.22cm] \multicolumn{3}{@{\hspace{0cm}}c}{End date} & Original reciprocal & Number & \\[-0.03cm] \multicolumn{3}{@{\hspace{0cm}}c}{of orbital} & semimajor axis, & of observa- & \\[-0.02cm] \multicolumn{3}{@{\hspace{0cm}}c}{solution} & $(1/a)_{\rm orig}$\,(AU$^{-1}$) & tions used & $\;$Reference$^{\rm a}$\\[0.05cm] \hline\\[-0.22cm] 2013 & Aug. & 23 & $+0.000\:028\,6\pm0.000\:001\,4$ & 3746 & E\,2013-Q27 \\ & Sept. & 6 & $+0.000\:009\,2\pm0.000\:001\,2$ & 3897 & E\,2013-R59 \\ & & 16 & $+0.000\:008\,6\pm0.000\:001\,1$ & 3997 & MPC\,84932 \\ & & 30 & $+0.000\:005\,4\pm0.000\:000\,7$ & 4308 & E\,2013-S75 \\ & Oct. & 14 & $+0.000\:003\,8\pm0.000\:000\,6$ & 4677 & E\,2013-T11\rlap{0} \\ & & 15 & $+0.000\:003\,6\pm0.000\:000\,5$ & 4688 & MPC\,85336 \\ & & 21 & $+0.000\:004\,6\pm0.000\:000\,5$ & 4789 & E\,2013-U17 \\ & & 28 & $+0.000\:007\,0\pm0.000\:000\,5$ & 4978 & E\,2013-U73 \\ & Nov. & 2 & $+0.000\:009\,2\pm0.000\:000\,5$ & 5194 & E\,2013-V07 \\ & & 8 & $+0.000\:009\,6\pm0.000\:000\,5$ & 5363 & E\,2013-V48 \\[0.09cm] \hline\\[-0.22cm] \multicolumn{6}{l}{\parbox{8.26cm}{$^{\rm a}$\,\scriptsize MPC = Minor Planet Circular; E = Minor Planet Electronic Circular (MPEC).}}\\[-0.2cm] \end{tabular}} \end{center} \end{table} The resumed activity of comet C/2012 S1 appears to have had, as of November~8, no effect on its orbital motion, as is shown by comparing the original reciprocal semimajor axis from the most recent orbital solution with the previous entries in Table SUR4-1. The comet's motion is still satisfactorily matched by a purely gravitational orbit. To summarize, the immediate future of C/2012~S1 looks a little brighter now than a week ago, but the prospects of an exceptionally striking display near and after perihelion are still not good, unless the nucleus suddenly disintegrates near or shortly after perihelion into a cloud of dust. The forward scattering effect should increase the brightness by up to $\sim$1~magnitude around December~1, but very little (probably $<$0.2 magnitude) before perihelion. {\bf Alert}: Starting at $\sim$5$^{\rm h}$ UT on November~14, comet C/2012~S1 has been reported to be in major outburst. From early data, a preliminary estimate for its onset is November~$14.0 \pm 0.2$~UT, with an amplitude of at least 2~mag. Intrinsically the comet is now almost as bright as C/1962~C1 at the same heliocentric distance. It is unclear whether the nature of this event is benign or cataclysmic. \begin{table*}[ht] \vspace{0.2cm} \begin{center} {\large \bf Appendix}\\[0.2cm] {\large \bf LIGHT CURVE OF COMET C/2012 S1 (ISON) AS\\[-0.01cm] A SEQUENCE OF SEVERAL CONSECUTIVE CYCLES OF\\[0.06cm] TWO-STAGE ACTIVITY EVOLUTION}{\nopagebreak} \end{center} \end{table*} In the paper and all the updates I was concerned only with the light curve since the reappearance of comet C/2012~S1 after the July 2013 conjunction with the Sun, because the main issues were the comet's comparison with the other two objects and its changing behavior on relatively short time scales.
In this Appendix I present the comet's light curve in its entirety, starting with the prediscovery observations at the end of September 2011.\footnote{See {\tt http://www.minorplanetcenter.net/db\_search}.} The sources of data were the web site of the {\it International Comet Quarterly\/},\footnote{See {\tt http://www.icq.eps.harvard.edu/CometMags.html}.} the web site of the {\it Comet Observations Group\/},\footnote{See {\tt http://groups.yahoo.com/neo/groups/CometObs/info}.} and some of the CCD sets of {\it total\/} magnitudes (T) reported to the {\it Minor Planet Center\/}.$^2$ The procedure that established a common photometric system was already described in the paper itself (Sec.~2). The assembled light curve, presented in the following --- just as in the paper and the updates --- both as a plot against time (reckoned from the perihelion time) and as a plot against heliocentric distance, is based on 227~total-magnitude determinations\footnote{Sets of brightness data points from the same day by the same observer(s) have been consistently averaged into a~single data point and are counted and plotted as such. The only exceptions are the rather discordant prediscovery magnitudes reported from the November 26 and December 9, 2011 observations by the Pan-STARRS~1 Station at Haleakala; all four data points, two on each date, were averaged into a~single data point.} by 29~observers (or groups of observers). As a dynamically new comet, C/2012 S1 is believed to have originated, with countless others, by accretion of primordial material of the protoplanetary nebula in regions near Jupiter's orbit, and to have afterward been ejected by perturbations into the Oort Cloud, where, over billions of years, it resided in an extremely cold environment, mercilessly bombarded by galactic cosmic rays. A~relatively thin outer layer of the nucleus was saturated with free radicals and other chemically active species. During a~journey to the inner solar system, even slight warming of the surface of comets like C/2012~S1 at very large heliocentric distances leads to release of reactive species from the surface layer, making these objects unusually active (compared to dynamically old comets). For example, their apparently icy tails made up of submillimeter- to millimeter-sized grains are formed at heliocentric distances of up to at least 15~AU, as is readily inferred from the tails' strong deviation from the prolonged radius vector for objects with perihelia beyond 2--3~AU. Hyperactivity of dynamically new comets results, however, in their highly volatile species being rapidly exhausted, which in turn brings about a~decline in the ejection rate of material from the nucleus and sometimes even a temporary drop in the comet's brightness during the continuing approach to the Sun. The light curve of comet C/2012~S1 is a~superb illustration of the complexity of a new comet's activity and physical behavior. Figure A-1 shows the variations in the comet's total intrinsic brightness as a function of time. It is immediately evident that the light curve consists of at least four (and possibly five) consecutive periods, in each of which the brightness first increases and reaches a~local maximum, before stagnating or subsiding. This means that the comet's activity evolves literally in cycles, each of which begins with an {\it ignition point\/}, introducing an {\it active stage\/}, and terminates with a~{\it stage of progressive deactivation\/}.
In the first cycle (A), the activity is controlled by the sublimation of the most volatile ice available in abundance and continues until its supplies are depleted. From that time on, the activity is governed by the sublimation of the next ice in a~succession of diminishing volatility, etc., until eventually the least volatile, water ice, takes its turn. Each cycle requires a new source. Even though the light-curve data are very incomplete before the discovery of C/2012~S1, it is possible to estimate the brightness variations in the major gap between 430 and 670 days before perihelion thanks to the insignificant changes over those 240~days. A~similar development apparently took place during the comet's conjunction with the Sun between mid-June and mid-August 2013 (110--160 days before perihelion). Besides, the comet's behavior throughout a~stage of progressive deactivation is readily perceived from the observations in cycle B between mid-January and the beginning of May 2013. The properties of a cycle are suggested by the ignition-point position. The beginning of cycle~A is unknown, but it must have occurred more than 790~days before perihelion, more than 9.4~AU from the Sun. The ignition point of cycle~B cannot be determined accurately, but it took place most probably between 460 and 550~days before perihelion, 6.5 to 7.4~AU from the Sun. Cycle~C began around 212~days before perihelion and 3.9~AU from the Sun, and cycle~D some 115~days before perihelion and 2.6~AU from the Sun. It is well known that the sublimation of water ice usually begins to dominate comet activity at heliocentric distances between 2 and 3~AU, so that cycle~D in Figures~A-1 and A-2 is very probably water-ice controlled. In a so-called isothermal water sublimation model an ice-covered surface of the nucleus has a~temperature of 170~K at a distance of 2.6~AU from the Sun. The ignition points in cycles~A and B are deemed to refer to activity governed by highly volatile ices, such as carbon monoxide and carbon dioxide. In reality, the situation is more complex. In an extreme case, with the Sun constantly near the zenith, the water sublimation can control activity even near 5~AU from the Sun. Figure A-2 shows that during the active stage of each cycle (with a~possible exception of the less confidently identified cycle~E) the brightness grew with a fairly high power of heliocentric distance, between $\sim$4 and $\sim$6. This brightening was, however, largely mitigated by a~stagnation or drop of activity during each deactivation stage, so that overall the intrinsic brightness grew only as $\sim \! r^{-2}$. \clearpage \begin{figure*} \vspace{-3.6cm} \hspace{-0.4cm} \centerline{ \scalebox{0.867}{ \includegraphics{fA1_2012S1.ps}}} \vspace{-12.15cm} {\footnotesize {\bf Figure A-1}. The entire light curve of C/2012~S1, as it appeared as of November 10, 2013, 18~days before perihelion, plotted as a function of time, reckoned from the time of perihelion passage. Temporal variations in the normalized total brightness, expressed by the magnitude $H_\Delta$, consist of at least four, and possibly five, consecutive sections --- quasi-periodic cycles A, B, C, D, and apparently also E --- each of which begins with an ignition point, introducing an active stage, and terminates with a stagnation or drop in a stage of progressive deactivation. Each cycle requires a new source of activity.} \vspace{-2.6cm} \hspace{-0.38cm} \centerline{ \scalebox{0.87}{ \includegraphics{fA2_2012S1.ps}}} \vspace{-12.25cm} {\footnotesize {\bf Figure A-2}.
The same light curve of C/2012~S1 plotted as a function{\vspace{-0.06cm}} of heliocentric distance. This relationship is approximated in the active stages of the cycles A, B, C, D, as well as E by an inverse dependence on a power $n$ of the distance, $r^{-n}$, where $n$ is seen to be confined to a range from 2.24 to 6. Note a stagnation of activity after the active stages B and D. Also note that the periods of stagnation (or drop) drag down the average rate of brightening with decreasing heliocentric distance, which is close to $r^{-2}$.} \end{figure*} \end{document}
\section{Introduction}\label{sec:intro} The Sun emits energetic particles during coronal mass ejections (CMEs) and solar flares. The observed energies of these particles reach GeVs in extreme cases. At first, solar energetic particle (SEP) events were linked exclusively to solar flares \citepads{forbush46}, but once CMEs were first observed in the seventies, it became clear that they play a major role in the genesis of SEP events \citepads{kahler-hildner}. The CME-driven shocks are now believed to be the major acceleration agent during the strongest SEP events \citepads[e.g.,][]{1999SSRv...90..413R}, and the most plausible acceleration mechanism in operation is diffusive shock acceleration (DSA) \citepads{bell78}. For further reading on observations and models of CMEs we recommend \citetads{2006SSRv..123..251F} and \citetads{2011LRSP....8....1C}. In DSA, particles are accelerated through repeated crossings of the shock compression front, each crossing giving a small boost to the particle energy. The shock crossings are mediated through interactions of particles with background waves. Furthermore, the particles themselves amplify wavemodes. This yields a coupled system of waves and particles, as described, e.g., in \citetads{2007ApJ...658..622V}. Since DSA is a resonant wave-particle process, it is interesting to see whether energy transfer between wavemodes will affect different particle energies. A detailed treatment of the DSA process itself is given in \citetads{1983RPPh...46..973D} and \citetads{Schlickeiser2002}. To enable acceleration to GeV energies through DSA, the upstream medium needs to be very turbulent. The ambient solar wind turbulence levels are generally too low for this acceleration mechanism to operate beyond the MeV regime \citepads{2006GMS...165..253V}, but acceleration to the highest energies can still proceed through the strong amplification of Alfv\'en waves in the ambient medium by streaming instabilities driven by the accelerated particles themselves. Analytical \citepads{2005ApJS..158...38L} and numerical models of the self-consistent particle acceleration in coronal plasma have been presented \citepads{2007ApJ...658..622V,2008ApJ...686L.123N}, showing that the wave generation process is strong enough to account for the turbulence responsible for fast scattering of particles from one side of the shock to the other. The models also show that if particles are accelerated to hundreds of MeVs, the waves will grow to nonlinear amplitudes close to the shock. It is crucial to understand the processes that govern the turbulent waves because the mechanism of DSA is strongly connected to the wave-particle interactions. Nonlinear Alfv\'en waves may interact with each other, which may lead to three important effects from the point of view of particle scattering: firstly, wave decay through three-wave interactions may limit the wave amplitudes in the shock environment; secondly, the wavenumber spectrum of the Alfv\'en waves may be altered so that the waves fall out of resonance with the particles; thirdly, the cross-helicity state of the resonant waves may also change, which affects the scattering-center compression ratio of the shock and thus the accelerated particle spectrum \citepads{1998A&A...331..793V}. To our knowledge, nonlinear wave transport models have not yet been applied to the SEP acceleration problem.
In this paper we take the first steps towards understanding the nonlinear evolution of waves generated by energetic particle beams in the solar corona. We concentrate on a simplified model, where the beam-generated wave component is represented by a Gaussian peak in parallel wavenumber, and follow the interaction of this spectral component with a quasi-isotropic background turbulence driven randomly in an incompressible magnetohydrodynamic (MHD) simulation. The turbulent plasma environment mimics the fast solar wind, where Alfv\'en waves are observed to be the dominant species \citepads{2009ApJ...706..238T,brunocarbone-livrev}; hence incompressibility can be assumed. It is especially interesting to follow the energy transport in the parallel and perpendicular directions. Taking the resonance condition into account, only energy transport parallel to the background magnetic field will alter the transport of particles with an energy different from the incident particles' energy. On the other hand, most turbulence theories for incompressible plasmas predict perpendicular energy transport. \section{Numerical model}\label{sec:theory} Although MHD turbulence has been studied for roughly 70 years, since the field was initiated by Hannes Alfv\'en, it is still a controversial topic. We focus on incompressible turbulence, for which some promising progress has been made in recent years. One commonly observed property is the characteristic energy spectrum $E(k)$ following a power-law with a slope of $-5/3$, which is commonly referred to as the Kolmogorov-type spectrum. \citetads{kolmogorov} predicted this power-law for hydrodynamic turbulence by using dimensional analysis and scaling behaviour, assuming isotropy. The basic picture of turbulence evolution was also given by Kolmogorov. On large scales, i.e. small wavenumbers, energy is injected into the turbulent fluid. This energy cascades to smaller structures, down to the smallest scales, where dissipation becomes dominant. Consequently, a steady energy flow is maintained from small towards large $\vec k $. This is also the reason for dividing the spectrum into \emph{driving-}, \emph{inertial-}, and \emph{dissipation range}. Although the Kolmogorov model of turbulence was first discussed in connection with neutral fluids, it seems to be valid in the magnetohydrodynamic case as well. A different approach, taken by \citetads{iroshnikov} and \citetads{kraichnan}, which assumes a local mean magnetic field and interacting Alfv\'en wave packets, leads to an exponent of $-3/2$. The problem of the Iroshnikov-Kraichnan (IK) model is the assumption of isotropy, because a background magnetic field will lead to a preferential direction in space caused by wave interaction resonances. The IK model implies resonant three-wave interactions within a weakly turbulent regime. In magnetised incompressible plasmas, however, the corresponding three-wave resonance conditions cannot be fulfilled \citepads{gsweak}. Goldreich and Sridhar (GS) describe anisotropic turbulence and distinguish between its weak \citepads{gsweak} and strong state \citepads{gsstrong}. Their assumption of strict separation between three- and four-wave interactions was controversially discussed, and an intermediate state was introduced \citepads{gsrev}. These theories describe Alfv\'enic turbulence evolution towards the perpendicular direction. In the weak turbulence regime, four-wave interactions are the underlying process in the GS framework. Due to the resonance condition, energy transfer to parallel wavenumbers is not possible.
Whether four-wave interactions are indeed the basic mechanism in weak turbulence is still debated. In recent theories the intermediate turbulence \citepads{gsrev}, which is based on three-wave interactions, replaces the weak four-wave interaction model \citepads{lithwick2003}. However, strong turbulence is dominated by nonresonant three-wave interactions, which leads to an anisotropic energy cascade in the perpendicular direction. Parallel evolution is not caused by cascading. One of the main achievements of the Goldreich and Sridhar theory is that it explains the Kolmogorov-type energy spectrum for an anisotropic regime. This could explain the observed $-5/3$ slope in parts of the solar wind \citepads{brunocarbone-livrev} where Kolmogorov's theory is not applicable. The region of the heliosphere we are interested in is within the weak turbulence regime, with magnetic field fluctuations defined as \begin{align} \vec{dB} \equiv \vec{B} - \vec{B_0}, \end{align} with a mean value of $\langle \vec{dB} \rangle = 0$, which leads to $\langle \vec{B} \rangle \sim \vec B_0$. It is observed that the solar wind magnetic fluctuations decrease as $dB^2\propto r^{-3}$, while the background field decreases as $B^2_0\propto r^{-4}$ \citepads{1982SoPh...78..373B,brunocarbone-livrev}. Consequently, the $dB/B_0$ ratio within the heliosphere increases with distance from the Sun \citepads{hollweg10}. A remark on notation: the magnetic background field $\vec{B_0}$ points in the z-direction within our simulations and hence is also written as $B_0\vec{e}_z$. The parallel direction is therefore the z-direction, and the x- and y-directions are the perpendicular directions. For symmetry reasons there will be no further distinction between the two perpendicular directions, and all plots show values averaged over the azimuthal angle in cylindrical coordinates of x and y. For small perpendicular wavenumbers the transport in the perpendicular direction dominates until the perpendicular eddy rate $k_\perp v_\perp$ is of the same order as the Alfv\'en cascading rate $k_\parallel v_A$. This means that in addition to the perpendicular cascade, a cascade in the parallel direction will occur as well, and energy will be transferred towards smaller parallel spatial scales. Accordingly, the parameter \begin{align} \zeta \sim \frac{k_\perp v_\perp}{k_\parallel v_A}, \label{eq:zeta} \end{align} where $v_A$ is the Alfv\'en velocity, is of the order of unity. This state is called \emph{critical balance} and was first introduced by \citetads{gsstrong}. In this state the linear wave period of the Alfv\'en waves is comparable to the intrinsically nonlinear timescale. If $\zeta \sim 1$, the fluctuations become more correlated along the parallel direction, up to $l_\parallel \sim v_A/(k_\perp v_\perp)$, as indicated by Eq. \ref{eq:zeta}. Then the turbulence is clearly within the strong regime. This means that the fluctuations become comparable to $B_0$ and the nonlinear term is not small anymore \citepads{2008ApJ...672L..61P}. High Reynolds numbers in combination with massive energy injection, as seen in, e.g., the solar wind, are strong indicators of a highly turbulent state. \textit{In situ} measurements of the energy spectrum \citepads{tu-marsch} agree with this fact. To simulate conditions within the turbulent heliospheric plasma, the research group at the University of W\"urzburg has developed a simulation code, \textsc{Gismo}.
\textsc{Gismo} is an incompressible pseudospectral MHD--code that is fully parallelised and capable of efficiently using massive computing clusters. The basis of the simulation software is to solve the following set of incompressible MHD-equations: \begin{align} \pa{\vec{u}}{t} &= \vec{b} \cdot \nabla \vec{b} -\vec{u} \cdot \nabla \vec{u} -\nabla P + \nu_v \nabla^{2h} \vec{u} \nonumber \\ \pa{\vec{b}}{t} &= \vec{b} \cdot \nabla \vec{u} -\vec{u} \cdot \nabla \vec{b} + \eta \nabla^{2h} \vec{b} \nonumber \\ \nabla \cdot \vec{u} &= 0 \nonumber \\ \nabla \cdot \vec{b} &= 0, \label{eq:mhdset} \end{align} where $\vec{b}={\vec{B}}/{\sqrt{4\pi \varrho}}$ is the normalised magnetic field, $\vec{u}$ is the fluid velocity, and $\varrho$ is the constant mass density. The diffusion coefficients related to viscous and Ohmic dissipation are denoted by $\nu_v$ and $\eta$. A common approach in pseudospectral methods is to raise the Laplacian in the diffusion term to a power $h$, resulting in hyperdiffusivity. This artificial enhancement of the dissipation is necessary to reach a saturated state of turbulence within a reasonable timescale. It is a purely methodological measure, needed because pseudospectral approaches do not strongly suffer from dissipative numerical effects. The only intrinsic energy loss of the system is caused by \emph{antialiasing}, which we discuss below. Furthermore, the parameter $\nu$ is introduced as a global diffusivity with $\eta=\nu_v\equiv\nu$. Hence magnetic resistivity and viscous damping are not distinguishable anymore. This corresponds to a magnetic Prandtl number of the order of unity, which is valid within the regime of Alfv\'en wave turbulence, where an equipartition between magnetic and kinetic energy can be assumed \citepads{PhysRevE_66_046410,2008AA...490..325B}. The pressure term $\nabla P$ fulfills the closure condition for incompressibility \citepads{marongold} \begin{align} \nabla^2 P = \nabla \vec{b} : \nabla \vec{b} -\nabla \vec{u} : \nabla \vec{u}. \label{eq:pressureclosure} \end{align} These equations are solved in Fourier space by using pseudospectral methods, which leads to the componentwise equations \begin{align} \pa{\tilde u_\alpha}{t} &= -ik_\gamma\left( \delta_{\alpha \beta} - \frac{k_\alpha k_\beta}{k^2} \right) \left( \widetilde{u_\beta u_\gamma} - \widetilde{b_\beta b_\gamma} \right) - \nu k^{2h} \tilde u_\alpha, \nonumber \\ \pa{\tilde b_\alpha}{t} &= -ik_\beta \left( \widetilde{u_\beta b_\alpha} - \widetilde{b_\beta u_\alpha} \right)- \nu k^{2h} \tilde b_\alpha, \nonumber \\ k_\alpha \tilde u_\alpha &= 0, \nonumber \\ k_\alpha \tilde b_\alpha &= 0, \label{eq:fft-mhdset} \end{align} where the tilde notation stands for quantities in Fourier space. The components of the wavevector are written as $k_\alpha$.
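To make the role of the solenoidal projection operator $\delta_{\alpha\beta} - k_\alpha k_\beta/k^2$ in Eqs. (\ref{eq:fft-mhdset}) concrete, the following minimal NumPy sketch (a toy illustration added here, not \textsc{Gismo} code) applies the projection to an arbitrary Fourier-space vector field and verifies that the result is divergence-free:

\begin{verbatim}
# Toy sketch of the solenoidal projection
# P_ab(k) = delta_ab - k_a k_b / k^2
import numpy as np

N = 32                            # toy grid of N^3 Fourier modes
k1 = np.fft.fftfreq(N) * N        # integer wavenumbers
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                 # avoid 0/0 at the mean mode

def project_solenoidal(Fx, Fy, Fz):
    """Remove the compressive part of a Fourier-space field."""
    div = (kx*Fx + ky*Fy + kz*Fz) / k2
    return Fx - kx*div, Fy - ky*div, Fz - kz*div

F = [np.random.randn(N, N, N) + 1j*np.random.randn(N, N, N)
     for _ in range(3)]
Px, Py, Pz = project_solenoidal(*F)
# k . P(F) vanishes to machine precision:
print(np.abs(kx*Px + ky*Py + kz*Pz).max())
\end{verbatim}

Applying such a projection to the nonlinear term is what keeps $k_\alpha \tilde u_\alpha = 0$ satisfied during the evolution.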
In the incompressible regime of a magnetised plasma the MHD turbulence consists of only two types of waves, which propagate along the parallel direction - the so-called pseudo- and shear Alfv\'en waves. The former are the incompressible limit of slow magnetosonic waves and play a minor role within anisotropic turbulence \citepads{marongold}. The pseudo-Alfv\'en waves' polarisation vector lies in the plane spanned by the wavevector $\vec{k}$ and $\vec{B_0}$. The shear waves are transversal modes with a polarisation vector perpendicular to the $\vec{k}$--$\vec{B_0}$ plane. They are circularly polarised for parallel-propagating waves. Both species exhibit the dispersion relation $\omega^2=(v_A k_\parallel)^2$. Note that the shear mode seems to be dominant because pseudo waves are heavily damped by the \emph{Barnes} damping process within weakly turbulent regimes \citepads{gsweak}. The damping weakens in strong turbulence, but according to \citetads{gsstrong}, the generation of pseudo-Alfv\'enic wavemodes is only possible via three-wave interactions of two shear wavemodes. Barnes damping is important for high-$\beta$ plasmas. Since this condition is not fulfilled in the solar corona, the role of pseudo waves should not be ignored. Because the model consists only of these two wave types, it is suitable to use a description with Alfv\'enic waves moving either forwards or backwards. This is achieved by introducing the Els\"asser variables \citepads{elsasser} \begin{align} \vec w^- &= \vec v + \vec b - v_A \vec e_\parallel \nonumber \\ \vec w^+ &=\vec v - \vec b + v_A \vec e_\parallel, \end{align} and transforming Eqs. (\ref{eq:fft-mhdset}) into the suitable form \begin{align} \left(\partial_t - v_A k_z\right) \tilde w_\alpha^- &= \frac{i}{2} \frac{k_\alpha k_\beta k_\gamma}{k^2} \left( \widetilde{w_\beta^+ w_\gamma^-} + \widetilde{w_\beta^- w_\gamma^+}\right) \nonumber \\ &-ik_\beta \widetilde{w_\alpha^- w_\beta^+} - \frac{\nu}{2} k^{2h} \tilde w_\alpha^- \nonumber \\ \left( \partial_t + v_A k_z \right) \tilde w_\alpha^+ &= \frac{i}{2} \frac{k_\alpha k_\beta k_\gamma}{k^2} \left(\widetilde{w_\beta^+ w_\gamma^-} + \widetilde{w_\beta^- w_\gamma^+}\right) \nonumber \\ &-ik_\beta \widetilde{w_\alpha^+ w_\beta^-} - \frac{\nu}{2} k^{2h} \tilde w_\alpha^+. \label{eq:fft-wpm-mhdset} \end{align} Obviously, the nonlinearities of Eqs. (\ref{eq:fft-wpm-mhdset}) that describe the turbulent behaviour of the MHD plasma cannot be evaluated efficiently in Fourier space. Hence the main numerical load is the transformation between real and wavenumber space in each iterative step. For this purpose we used the P3DFFT algorithm, an MPI-parallelised fast Fourier transformation based on FFTW3 \citepads{p3dfft}. One basic problem of spectral methods that use the discrete Fourier transformation is the aliasing effect. Because of discrete sampling in wavenumber space, high $k$-values exhibit errors that depend on the structure of the real-space fields. Therefore we used zero padding, also referred to as Orszag's $2/3$ rule: $2/3$ of the wavenumbers below the Nyquist frequency have to be truncated to achieve maximum anti-aliasing, hence reducing the Fourier-space resolution to $1/3$ of the original wavenumber range \citepads{orszag}. This truncation is repeated in each step, immediately before calculating the nonlinearities and, accordingly, the right-hand side of the MHD equations. Consequently, the change within the antialiasing range during one MHD step is physically correct, but the long-term evolution there is not.
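The following toy sketch (our own illustration, not taken from \textsc{Gismo}) builds the corresponding truncation mask; the retained fraction of the wavenumber range is a free parameter and is set here to $1/3$ of the Nyquist range, matching the resolution quoted in Sect.~\ref{sec:simsetup}:

\begin{verbatim}
# Toy sketch of dealiasing by spectral truncation
import numpy as np

N = 256                        # real-space points per dimension
k_nyq = N // 2                 # Nyquist wavenumber (128)
keep = 1.0 / 3.0               # retained fraction of the k-range

k = np.fft.fftfreq(N) * N      # integer wavenumbers
mask1d = np.abs(k) <= keep * k_nyq

# 3-D mask as an outer product of the 1-D masks
mask = (mask1d[:, None, None] & mask1d[None, :, None]
        & mask1d[None, None, :])

def dealias(field_hat):
    """Zero all modes beyond the truncation edge."""
    return np.where(mask, field_hat, 0.0)

print(int(keep * k_nyq))       # highest active mode: 42
\end{verbatim}

Applying such a mask before every evaluation of the nonlinear products removes the aliased contributions, at the price of the intrinsic energy loss mentioned above.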
\textsc{Gismo} is capable of using different forward-in-time schemes, namely Euler as well as Runge-Kutta second and fourth order. All the simulations in this paper have been performed using RK--4. \begin{figure}[ht] \begin{center} \includegraphics[width=0.8\columnwidth]{peak-evo.pdf} \caption{Possible mechanisms that might influence the evolution of an amplified wavemode embedded in a turbulent plasma with a Kolmogorov-type power-law energy spectrum.} \label{fig:peak-evo-sketch} \end{center} \end{figure} To mimic an SEP event, understanding the underlying mechanisms is crucial. Before simulating particle scattering off the modified field fluctuations, we focussed on the evolution of amplified wavemodes within the background turbulence. Since streaming particles ejected by the Sun, e.g., protons from coronal mass ejections, will not sharply amplify a discrete mode but rather a broader interval, a Gaussian-distributed shape was assumed. The streaming instability has the highest growth rate in the background field direction. Consequently, we assumed that only purely parallel-propagating Alfv\'en waves are modified. The Alfv\'en wave generation mechanism by SEPs is not investigated in detail here. The driving mechanism assumed for our simulations is the streaming instability. The estimation of the wave growth rate is described in \citetads{rami-ongenofalfvs}. The streaming instability is caused by energetic protons scattering off the interplanetary Alfv\'en waves. During the scattering process the particle changes its pitch-angle cosine by $\Delta \mu$ while its momentum in the wave frame remains constant. Thus the particle's energy in the plasma frame is changed by $v_A p \Delta \mu$, and, due to energy conservation, the opposite change occurs in the Alfv\'en wave energy. Another important instability in the solar corona is the electrostatic instability. This is caused by an electron current as well as by streaming ions. Ion acoustic waves would be generated by this process. However, for these modes to grow, a sufficiently high ratio $T_e/T_i \gg1$ of the electron and ion temperatures is crucial. Observations and simulations in the vicinity of three solar radii indicate temperature ratios of the order of unity \citepads{2007ApJ...663.1363L,2012ApJ...745....6J}. In this regime the ion acoustic waves are also efficiently suppressed by Landau damping. For further reading about the streaming instability we refer to \citetads{1993tspm.book.....G}. These parallel peaked modes are influenced by dissipation, diffusion, and convection. An illustration of this is shown in Fig. \ref{fig:peak-evo-sketch}. Regarding the evolution within Fourier space, dissipation will damp the wavemode, hence lowering the maximum without altering the position or width of the peak. The dissipation of wavemodes is caused by the spatial diffusion term in Eqs. (\ref{eq:fft-wpm-mhdset}). Convection in Fourier space would shift the position of the peak. If diffusive transport in Fourier space were the dominant mechanism, it would result in a broader energy distribution. The dynamics of convection and diffusion lie within the nonlinear terms; therefore one cannot distinguish exactly between the responsible terms. To investigate the effects of spatial diffusion on the peaks, we solved the dissipation equation in wavenumber space \begin{align} \pa{e}{t}=-k^2 D e \end{align} and calculated its dissipation coefficient \begin{align} D = \frac{\frac{e_0}{e}-1}{k^2 \Delta t}. \label{eq:diffusioncoeff} \end{align} If spatial diffusion were the only dissipative effect in wavenumber space, this wavenumber dissipation coefficient would equal the spatial diffusion coefficient. We emphasise that diffusion in wavenumber space is a different process and is hence explicitly distinguished from spatial diffusion. In the context discussed above, however, the spatial diffusion is clearly connected to the wavenumber dissipation. We concentrated our investigation on those wavemodes into which the peak energy was initially injected. For other modes this approach is not feasible because spatial diffusion is not necessarily the dominant process on an arbitrary wavemode.
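A short sketch (our own illustration with made-up numbers) of how the estimate of Eq. (\ref{eq:diffusioncoeff}) is applied to a single peaked mode, together with a consistency check against a purely damped mode:

\begin{verbatim}
# Toy application of the dissipation-coefficient estimate
import numpy as np

def dissipation_coefficient(e0, e, k, dt):
    """D = (e0/e - 1) / (k^2 dt) for a single wavemode."""
    return (e0/e - 1.0) / (k**2 * dt)

# consistency check with a purely damped mode,
# e(t) = e0 exp(-k^2 D t), over a small decay step:
D_true = 2.0e-3
k, dt, e0 = 2*np.pi*8, 1.0e-2, 1.0
e = e0 * np.exp(-k**2 * D_true * dt)
print(dissipation_coefficient(e0, e, k, dt))
# ~2.05e-3, close to D_true for a small exponent
\end{verbatim}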
\section{Simulation setup}\label{sec:simsetup} To simulate the turbulent plasma in which the SEP event is set, we performed the following type of magnetohydrodynamic turbulence simulation. \begin{table*} \caption{ Parameter setup for the simulations. \label{tab:simparameter}} \begin{center} \begin{tabular}{c c c c c c c}\hline \hline & $L_\text{scale}[\text{cm}]$ & $n_d[\text{cm}^{-3}]$ & $B_0[\text{G}]$ & $v_A[\text{cm } \text{s}^{-1}]$ & $\nu [\text{num}]$ & $k$-grid \\ \hline \rule{0mm}{3mm} SI & $3.4\cdot 10^8$& $10^5$ & $0.174$ & $1.2\cdot 10^8$ & $1$ & $128^3$\\ \hline \rule{0mm}{3mm} SII& $3.4\cdot 10^8$& $10^5$ & $0.174$ & $1.2\cdot 10^8$ & $10$ & $128^3$\\ \hline \rule{0mm}{3mm} SIII& $3.4\cdot 10^8$& $10^5$ & $1.74$ & $1.2\cdot 10^9$ & $10$ & $128^3$\\ \hline \rule{0mm}{3mm} SIV & $3.4\cdot 10^8$& $10^5$ & $0.00174$ & $1.2\cdot 10^{6}$ & $1$ & $128^3$\\ \hline \end{tabular} \tablefoot{The wavenumber grid is defined as $k=2\pi/L_\text{scale} [\text{grid position}]$. The number density $n_d$ connects the background field $B_0$ with the Alfv\'en speed $v_A$ via $\sqrt{4 \pi m n_d}$.} \end{center} \end{table*} We used an anisotropically driven turbulence with a driving range in k-space extending up to the first five numerical wavenumbers in the perpendicular ($k'_{\perp}=2 \pi [0\cdots4 ] $) and 15 in the parallel direction ($k'_{\parallel}=2 \pi [0\cdots14 ] $). A remark on notation: the wavenumber is in general defined as $k = (2 \pi n)/L$, where $n$ stands for the numerical grid position. For simplicity we used the normalised wavenumbers $k' = (2 \pi \cdot n)$ throughout. The anisotropy was chosen for two reasons. First, to mimic the preferential direction of the solar wind, where particles stream radially away from the Sun along the magnetic field forming the Parker spiral. Consequently, these particles can deposit their energy in the parallel direction on different scales. This is mainly valid in the vicinity of the Sun, in which we are interested. Second, a slab component of solar wind turbulence is observed also at small scales in the parallel direction. To ensure turbulence evolution up to high parallel wavenumbers, the driving range was extended along the parallel axis. This is necessary because the parallel evolution is much weaker than the perpendicular one. Even though this is primarily a technical measure to ensure the extent of the spectrum to higher $k_\parallel$, it is still in line with observations. An isotropic driver would not yield sufficiently turbulent modes at high $k_\parallel$. The turbulence driving is performed by allocating an amplitude with a phase to the Els\"asser fields within Fourier space. The amplitude follows a power-law of $|\vec{k}|^{-2.5}$ and is initialised using a random normal distribution. The phase was randomly chosen between zero and $2\pi$. These settings are divergence-free and hermitian symmetric. After this initialisation the values were scaled to the desired scenario, which in our case is a $dB/B_0$ ratio of roughly $10^{-2}$. Note that both species, pseudo- and shear Alfv\'en waves, are excited by this type of turbulence driving, but as presented by \citetads{marongold}, the pseudo-wave evolution is strongly suppressed. In this driving range energy is injected at discrete times, which leads to a saturated turbulence - an equilibrium between dissipation and injection.
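The following NumPy sketch illustrates one way of realising such an initialisation (a toy construction of our own, not the \textsc{Gismo} driver; starting from real white noise yields random phases automatically and makes the hermitian symmetry exact):

\begin{verbatim}
# Toy initialisation: |k|^-2.5 amplitudes, random phases,
# divergence-free, hermitian symmetric
import numpy as np

N, slope = 64, -2.5
rng = np.random.default_rng(0)

kx = (np.fft.fftfreq(N) * N)[:, None, None]
ky = (np.fft.fftfreq(N) * N)[None, :, None]
kz = (np.fft.rfftfreq(N) * N)[None, None, :]
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0              # avoid 0/0 at the mean mode

# real white noise -> hermitian spectra with random phases
w_hat = [np.fft.rfftn(rng.standard_normal((N, N, N)))
         for _ in range(3)]
# impose the power-law amplitude scaling, zero mean mode
w_hat = [f * np.sqrt(k2)**slope for f in w_hat]
for f in w_hat:
    f[0, 0, 0] = 0.0           # <dB> = 0

# solenoidal projection: subtract the part parallel to k
div = (kx*w_hat[0] + ky*w_hat[1] + kz*w_hat[2]) / k2
w_hat = [w_hat[0] - kx*div, w_hat[1] - ky*div,
         w_hat[2] - kz*div]

w_x = np.fft.irfftn(w_hat[0], s=(N, N, N))  # real field
print(w_x.dtype, w_x.shape)
\end{verbatim}

A field generated this way would then be rescaled so that the resulting $dB/B_0$ matches the desired value of roughly $10^{-2}$.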
The spatial resolution is $256^3$ gridpoints, resulting in $128^3$ points in k-space, of which the modes with $|\vec{k'}| \le 2 \pi \cdot 42$ are active modes that remain unaffected by (anti)aliasing. The hyperdiffusivity coefficient was set to $h=2$. The simulations of the turbulent background plasma were performed assuming an outer scale of $L_\text{scale}=3.4\cdot 10^8 \text{ cm}$. This value was estimated using the growth rate from \citetads{rami-ongenofalfvs}, \begin{align} \Gamma (k) = \frac{\pi \omega_{cp}}{2 n_p v_A} \int \text{d}^3 p \text{ } v\; \mu |k| \text{ } \delta(|k|-\frac{\omega_{cp}}{\gamma v_p}) \, f_p, \label{eq:growthraterami} \end{align} with the proton cyclotron frequency $\omega_{cp}$, the proton speed $v_p$, the Lorentz factor $\gamma$, the proton number density $n_p$, the pitch-angle cosine $\mu$, and the proton distribution function $f_p$. Here the resonance condition for the $n$th order of interaction, \begin{align} k_{\parallel} v_{\parallel} - \omega - n \Omega = 0, \quad n \in \mathds{Z} \label{eq:wave-particle-res} \end{align} (cf. \citealt{schlickeiser89}), was used, where $\omega$ is the wave frequency and $ k_{\parallel}$ its parallel wavenumber. $\Omega$ is the particle's gyrofrequency and $v_{\parallel}$ its parallel velocity component. Note that Eq. \ref{eq:growthraterami} is only valid for purely parallel waves and $n=\pm 1$. Orders of $|n|>1$ can only be generated by oblique waves. The perpendicular components of the wave would then modify the scattering process through nonvanishing Bessel functions. This is discussed in detail by \citetads{Schlickeiser2002}. We used peaks at $k=2\pi\cdot 8$ and $k=2\pi\cdot24$, which represent proton energies of $E\approx 64 \text{ MeV}$ and $E\approx 7 \text{ MeV}$, respectively. Using the resonance condition, this leads to a length scale of \begin{align} L_\text{scale}= \frac{2 \pi n}{e B_0} \gamma m_p c v \approx 10^8 \text{ cm}. \end{align} The Alfv\'en speed was assumed to be $v_A = 1.2 \cdot 10^8 \text{ cm}\text{ s}^{-1}$, which, along with a particle number density of $10^5 \text{ cm}^{-3}$, leads to a background magnetic field $B_0 = 0.174 \text{ G}$. These values resemble the solar wind environment at three solar radii \citepads{ramigramm}, where particle acceleration by CME-driven shocks is strongest. The discretisation of the timestep is stable for values up to $\Delta t' = 1 \cdot 10^{-11}$ in numerical units, or $\Delta t = 3.4 \cdot 10^{-3}\text{ s}$, for the background turbulence. Once the background plasma simulation has reached the saturated state, a Gaussian-distributed energy peak with purely parallel $\vec{k}=k \vec{e_\parallel}$ is injected. We chose two different positions of the peak in wavenumber space. To investigate the physics of an SEP event, a wavenumber of $k_\parallel=1.5\cdot10^{-7} \text{ cm}^{-1}$ was used. This corresponds to a numerical wavenumber of 8, which is still within the driving range of the turbulence. The injection at smaller scales was represented by a peak at $k_\parallel=4.4\cdot10^{-7} \text{ cm}^{-1}$. This value lies at the numerical position 24, which is roughly midway between the maximum driven wavemode and the anti-aliasing truncation edge (which was at 43). We injected the SEP energy gradually over a certain time interval to develop a realistic scenario. Multiple situations were explored by using simulations with peaks at either position, with either small (growth rate $\Gamma_1$) or large (growth rate $\Gamma_2$) total amplitude of the Gaussian at the final driving step.
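As a back-of-the-envelope cross-check of these estimates (our own script in Gaussian cgs units; all input values are the ones quoted above), the resonant parallel wavenumber $k_\parallel = eB_0/(\gamma m_p c\, v)$ and the Alfv\'en speed can be reproduced as follows:

\begin{verbatim}
# Cross-check of the resonance estimates (cgs units)
import math

e_esu = 4.8032e-10    # statC
m_p   = 1.6726e-24    # g
c     = 2.9979e10     # cm/s
mpc2  = 938.272       # proton rest energy, MeV

B0, n_d, L = 0.174, 1.0e5, 3.4e8   # G, cm^-3, cm (setup SI)

v_A = B0 / math.sqrt(4*math.pi*m_p*n_d)
print(f"v_A = {v_A:.2e} cm/s")     # ~1.2e8

for E in (64.0, 7.0):              # proton energies, MeV
    gamma = 1.0 + E/mpc2
    v = c * math.sqrt(1.0 - 1.0/gamma**2)
    k_par = e_esu*B0 / (gamma*m_p*c*v)   # resonant k, 1/cm
    print(f"E = {E:4.0f} MeV: k = {k_par:.2e} /cm, "
          f"grid n = {k_par*L/(2*math.pi):.1f}")
\end{verbatim}

This reproduces $k_\parallel \approx 1.5\cdot 10^{-7}\ \mathrm{cm^{-1}}$ (grid position $\approx 8$) for 64~MeV protons and $k_\parallel \approx 4.5\cdot 10^{-7}\ \mathrm{cm^{-1}}$ (grid position $\approx 25$) for 7~MeV, consistent, within the rounding of the quoted energies, with the positions 8 and 24 used above.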
Because the velocities increase near the peaks, the discretisation of the timestep had to be decreased to values of $\Delta t' = 5 \cdot 10^{-12}$ in numerical units or $\Delta t' = 1.7 \cdot 10^{-3}\text{ s}$ to maintain stability. To allow even more diverse case studies, four different initial conditions were used, as described in table \ref{tab:simparameter}. In each setup a complete evolution of the background turbulence was simulated. The first setup SI uses the standard parameters as described above. The main parameters of interest are the resistivity of the plasma and the background magnetic field. The results of changing these will reveal their effects on the mechanisms described in Fig. \ref{fig:peak-evo-sketch}. Setup SII uses an increased resistivity compared to simtype SI. A higher value for $\nu$ is expected to make a difference in the spatial diffusion behaviour and would make wavenumber dissipation more dominant than the other transport processes. As indicated in Fig. \ref{fig:peak-evo-sketch}, this would lead to a significant damping of the peak. The dissipation range of the background turbulence will likely be increased by this parameter as well. The third setup SIII has a magnetic field increased by a factor of 10. This is to examine the influence of a more anisotropic turbulence, because the perpendicular evolution should be much stronger according to \citetads{gsstrong}. In general, these values may only be achieved in magnetic clouds, but this setup gives valuable information on the mechanisms of turbulent transport. The high resistivity is necessary because of stability problems with the accompanying high Alfv\'en speeds. The last variation of the SI scenario is a strongly decreased magnetic background field $B_0$ (simtype SIV). The aim of this artificial scenario is to investigate strong turbulence at $\zeta \approx 1$. \section{Results}\label{sec:results} \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{backgroundturbs.pdf} \caption{Magnetic energy spectrum of the simulated background turbulence setups. The plot was made by total integration within the Fourier space.} \label{fig:backgroundturb} \end{center} \end{figure} The evolution of the anisotropic background turbulence was simulated up to 30 Alfv\'en wave crossing times, which corresponds to a simulation of 85 s in physical time. At this point, the turbulence has reached a saturated state and a Kolmogorov-type power-law has evolved over a wide range of wavenumbers (see Fig. \ref{fig:backgroundturb}). According to \citetads{gsweak} and \citetads{gsstrong}, the 5/3-spectrum is dominant for perpendicular $k_\perp$, whereas the parallel evolution is significantly slower. Our simulations confirm this behaviour very clearly. Note that the total spectra in Fig. \ref{fig:backgroundturb} deviate from the Kolmogorov shape, especially at high wavenumbers, because the shape of the parallel spectrum is not 5/3. As expected, the spectra are very sensitive to $\nu$. An increase by a factor of ten extends the dissipation range drastically. This is also due to the hyperdiffusivity we used, in which higher wavenumbers are damped by a higher power of $k$ (dissipation term $\propto k^4$) (see Sect. \ref{sec:theory}). The $dB/B_0$ ratio of the developed turbulence is about $10^{-2}$. The magnetic field fluctuations are defined in Fourier space by \begin{align} (dB)^2 = \int \limits_{|\vec{k}|>0} \text{d}^3 k \, \frac{1}{4}(\tilde w^-(\vec{k}) - \tilde w^+(\vec{k}))^2.
\end{align} The influence of the turbulence strength on the energy transport is an important aspect for the peak simulations. We show this below. \begin{table} \caption{ Evolution timesteps of the peaks. \label{tab:peaktimes}} \begin{center} \begin{tabular}{c c c c }\hline \hline $t[\text{s}]$ & $k'_\text{Peak}= 2\pi \cdot 8$ & $k'_\text{Peak}= 2\pi \cdot 24$ & \\ \hline $t_1 \equiv t_\text{start}$ & 0 & 0 & \multirow{3}{*}{SI} \\ $t_2 \equiv t_\text{mid}$ & $51 \, (25.5)$ & $8.5$ & \\ $t_3 \equiv t_\text{end}$ & $102 \, (51)$ & $17$ & \\ \hline $t_1 \equiv t_\text{start}$ & 0 & 0 & \multirow{3}{*}{SII} \\ $t_2 \equiv t_\text{mid}$ & $20.4$ & $1.28$ & \\ $t_3 \equiv t_\text{end}$ & $40.8$ & $2.55$ & \\ \hline $t_1 \equiv t_\text{start}$ & 0 & 0 & \multirow{3}{*}{SIII} \\ $t_2 \equiv t_\text{mid}$ & $22.53 \, (21.42)$ & $2.04$ & \\ $t_3 \equiv t_\text{end}$ & $45.05 \, (42.84)$& $4.08$ & \\ \hline $t_1 \equiv t_\text{start}$ & 0 & 0 & \multirow{3}{*}{SIV} \\ $t_2 \equiv t_\text{mid}$ & $54 \, (28.9)$ & $12.75$ & \\ $t_3 \equiv t_\text{end}$ & $108 \, (57.8) $& $25.5$ & \\ \hline \end{tabular} \tablefoot{The labels \emph{start}, \emph{mid} and \emph{end} stand for the times of the decay until the final dissipation of the peak mode. The time $t_\text{mid}$ is defined as the half-time of the decay cycle. Note that the peak at the smaller wavenumber $k'_\parallel=2\pi \cdot 8$ remains visible significantly longer than the other peak. The values denoted in brackets are the middle and end of the simulations that were not performed until the final dissipation of the peak due to the long computational times. In these cases we estimated the total decay time by an exponential fit of the decay curve.} \end{center} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v31-small24-1D.pdf} \caption{Simulation setup SI: Time evolution of a Gaussian--distributed amplification at numerical wavenumber 24 ($\hat= 4.4\cdot 10^{-7} \text{ cm}^{-1}$) at the lower growth rate $\Gamma_1$. The spectrum is a one--dimensional cut along the parallel wavenumber axis where the peak is located. The peak is clearly shifted towards smaller $k_\parallel$.} \label{fig:peak24smallgauss} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v31-big24-1D.pdf} \caption{Simulation setup SI: One--dimensional energy spectrum $E(k_\parallel)$ of the peak at numerical wavenumber 24 ($\hat= 4.4\cdot 10^{-7} \text{ cm}^{-1}$) with the higher growth rate $\Gamma_2$. The influence of the diffusion process is significant because the peak broadens during the time evolution. Adjoining maxima develop, e.g. at $k'_\parallel = 2 \pi \cdot 16 $ and 38, highlighted by red circles.} \label{fig:peak24biggauss} \end{center} \end{figure} Once the background plasma had been simulated, a peak was driven over a time period of ca. 1.7 s. An exemplary time evolution of the peak at normalised wavenumber 24 is shown in Figs. \ref{fig:peak24smallgauss} and \ref{fig:peak24biggauss}. This is a one-dimensional spectrum of the magnetic field energy in numerical units that shows a cut along the parallel axis. The starting time $t_\text{start}$ corresponds to the end of the driving interval, hence the maximum amplification of the wavemode. The timesteps are shown in table \ref{tab:peaktimes}. For subsequent use the times $t_\text{mid}$ and $t_\text{end}$ are introduced.
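For the runs that were not carried to the final dissipation of the peak, the bracketed values in table \ref{tab:peaktimes} were obtained from an exponential fit of the decay curve. A minimal sketch of such a fit (Python with scipy; the sample data points and the 1\% cutoff are placeholders of our own, not simulation output) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, E0, tau):
    # simple exponential model for the peak-energy decay curve
    return E0 * np.exp(-t / tau)

# placeholder measurements (time in s, normalised peak energy)
t_obs = np.array([0.0, 8.5, 17.0, 25.5])
E_obs = np.array([1.0, 0.62, 0.40, 0.25])

(E0, tau), _ = curve_fit(exp_decay, t_obs, E_obs, p0=(1.0, 20.0))
# extrapolated time at which the peak falls below 1% of its initial
# energy, taken here as a proxy for t_end (the cutoff is an assumption)
t_end = tau * np.log(100.0)
print(f"tau = {tau:.1f} s, estimated t_end = {t_end:.1f} s")
\end{verbatim}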
Note that the decay time interval of the excited mode at $k'_\parallel=2\pi \cdot 8$ is significantly longer than that of the $k'_\parallel=2\pi \cdot 24$ peak. This is again because of the hyperdiffusivity, which damps higher modes more strongly because the dissipation term is $\propto k^4$. The time $t_\text{end}$ is defined as the state at which the peak has lost nearly all of its energy in the parallel direction. We took this as the final point of the energy diffusion or dissipation. Both peaks at $k'_\parallel=2\pi \cdot 24$ show a broadening of their shape and a shift from the initial value to $k'_\parallel=2\pi \cdot 23$ within 17 s, while the peaks at $k'_\parallel=2\pi \cdot 8$ are only slightly shifted to $k'_\parallel=2\pi \cdot 7.7$ within 17 s; their broadening, however, is clearly visible, as shown in Fig. \ref{fig:peak8smallgauss}. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v31-small8-1D.pdf} \caption{Simulation setup SI: Time evolution of a Gaussian--distributed amplification at numerical wavenumber 8 ($\hat= 1.5\cdot 10^{-7} \text{ cm}^{-1}$) at the lower growth rate $\Gamma_1$. The spectrum is a one--dimensional cut along the parallel wavenumber axis where the peak is located. Only a slight shift towards small $k_\parallel$ is observed. The broadening is clearly visible, especially on the flanks of the Gaussian curve.} \label{fig:peak8smallgauss} \end{center} \end{figure} The next step is to investigate the development of the amplified wavemodes. If the evolution of the peak were solely governed by spatial diffusion, the dissipation coefficient $D$ of Eq. (\ref{eq:diffusioncoeff}) would stay constant in time. Therefore the change in peak energy was measured. Two intervals were used for the calculation of $D$, denoted as time intervals $\tau_1$ and $\tau_2$. These intervals differ between the two peak positions because of the faster decay at high wavenumbers. At $k'_\parallel = 2 \pi \cdot 8$ the time intervals are $\tau_1 = 8.5$ s and $\tau_2 = 17$ s, whereas at $k'_\parallel = 2 \pi \cdot 24 $ the intervals $\tau_1 = 1.7$ s and $\tau_2 = 3.4$ s are used. The results for the diffusion coefficients are given in table \ref{tab:diffcoeffs}. \vspace{0.5cm} \begin{table} \caption{Dissipation coefficients according to Eq. (\ref{eq:diffusioncoeff}).
\label{tab:diffcoeffs}} \begin{center} \begin{tabular}{ c c c c c }\hline \hline \rule[-0.15cm]{0pt}{0.5cm} &\multicolumn{2}{c}{$k'_\text{Peak}= 2\pi \cdot 8$} & \multicolumn{2}{c}{$k'_\text{Peak}=2\pi \cdot 24$} \\ &$\Gamma_1$ & $\Gamma_2$ & $\Gamma_1$ & $\Gamma_2$\\ \hline \multicolumn{5}{c}{\textsc{Simtype SI}\rule[-0.2cm]{0pt}{0.5cm}} \\ $\tau_1$\rule[-0.15cm]{0pt}{0.5cm}& $5.09\cdot10^{14}$ & $1.76\cdot10^{16}$ & $4.12\cdot10^{15}$ & $7.92\cdot10^{15}$\\ $\tau_2$\rule[-0.15cm]{0pt}{0.5cm}& $7.23\cdot10^{15}$ & $8.79\cdot10^{16}$ & $1.55\cdot10^{16}$ & $2.92\cdot10^{16}$\\ \hline \multicolumn{5}{c}{\textsc{Simtype SII}\rule[-0.2cm]{0pt}{0.6cm}} \\ $\tau_1$\rule[-0.15cm]{0pt}{0.5cm}& $1.41\cdot10^{15}$ & $8.33\cdot10^{15}$ & $9.79\cdot10^{24}$ & $1.23\cdot10^{25}$\\ $\tau_2$\rule[-0.15cm]{0pt}{0.5cm}& $3.28\cdot10^{17}$ & $7.60\cdot10^{15}$ & $4.85\cdot10^{30}$ & $4.91\cdot10^{32}$\\ \hline \multicolumn{5}{c}{\textsc{Simtype SIV}\rule[-0.2cm]{0pt}{0.6cm}} \\ $\tau_1$\rule[-0.15cm]{0pt}{0.5cm}& $1.25\cdot10^{14}$ & $6.74\cdot10^{13}$ & $5.84\cdot10^{14}$ & \dots \\ $\tau_2$\rule[-0.15cm]{0pt}{0.5cm}& $4.43\cdot10^{14}$ & $8.00\cdot10^{13}$ & $1.47\cdot10^{16}$ & \dots \\ \hline \end{tabular} \tablefoot{The coefficients are calculated for different simulation types (SI, SII, SIV) at different time intervals ($\tau_1, \tau_2$), for different growth rates ($\Gamma_1, \Gamma_2$), and for the two driving wave numbers ($k=2\pi\cdot 8, 2\pi\cdot 24$). The coefficients for SIII were not calculated because two parameters were changed in this setup; it is therefore not directly comparable to the other simulations. The diffusion coefficients are given in $\text{cm}^{2}\text{s}^{-1}$. } \end{center} \end{table} \vspace{0.5cm} \begin{table} \caption{Energy deposited at the driven peak wave number.
\label{tab:peakamplitudes}} \begin{center} \begin{tabular}{c c c c}\hline \hline \multicolumn{2}{c}{ \rule[-0.15cm]{0pt}{0.5cm}$k'_\text{Peak}= 2\pi \cdot 8$} & \multicolumn{2}{c}{$k'_\text{Peak}=2\pi \cdot 24$} \\ $\Gamma_1$ & $\Gamma_2$ & $\Gamma_1$ & $\Gamma_2$\\ \hline \multicolumn{4}{c}{\textsc{Simtype SI}\rule[-0.2cm]{0pt}{0.6cm}} \\ \rule[-0.15cm]{0pt}{0.5cm} $1.90\cdot10^{7}$ & $1.95\cdot10^{9}$ & $1.66\cdot10^{18}$ & $2.29\cdot10^{20}$\\ \hline \multicolumn{4}{c}{\textsc{Simtype SII}\rule[-0.2cm]{0pt}{0.6cm}} \\ \rule[-0.15cm]{0pt}{0.5cm} $9.17\cdot10^{7}$ & $9.22\cdot10^{9}$ & $2.10\cdot10^{18}$ & $2.10\cdot10^{20}$\\ \hline \multicolumn{4}{c}{\textsc{Simtype SIII}\rule[-0.2cm]{0pt}{0.6cm}} \\ \rule[-0.15cm]{0pt}{0.5cm}$4.41\cdot10^{7}$ & $4.39\cdot10^{9}$ & $4.98\cdot10^{19}$ & $4.98\cdot10^{21}$\\ \hline \multicolumn{4}{c}{\textsc{Simtype SIV}\rule[-0.2cm]{0pt}{0.6cm}} \\ \rule[-0.15cm]{0pt}{0.5cm} $1.45\cdot10^{6}$ & $1.54\cdot10^{8}$ & $5.17\cdot10^{9}$ & \dots \\ \hline \end{tabular} \tablefoot{The energy is given as the ratio $E(k_\text{Peak},t_\text{max})/E(k_\text{Peak},t_\text{start})$.} \end{center} \end{table} To relate the growth rates $\Gamma_{1/2}$ to the total resulting amplitudes of the Gaussians in the simulations SI--SIII, the energy was measured and compared to the background at timestep $t_\text{start}$ in each case. The results are presented in table \ref{tab:peakamplitudes}. To investigate the direction of the peak evolution in the parallel and perpendicular directions, two--dimensional contour plots were produced. Fig. \ref{fig:sphereplot-timeevo} shows the time evolution of a peak at normalised $k'_\parallel=2 \pi \cdot 8$. The two--dimensional spectrum is a contour plot of the power spectral density of the magnetic field that was calculated by cylindrical integration in $k$-space. The single contours are scaled logarithmically. \begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{v31-smallpeak8-evo.pdf} \caption{Two-dimensional magnetic energy spectra of a peak at normalised wavenumber 8. Red regions are at higher energies compared to the blue ones. The parallel and perpendicular wavenumbers are given as absolute values. The time development is shown for mid-drive ($\Delta t \approx 0.85 \text{ s}$), max-peak ($\Delta t \approx 1.7 \text{ s}$) and the decay 17 s after the driving. The colours of the contours were normalised for comparison between the three plots. The colours indicate the logarithm of the total spectral energy. The simulation setup SI was used.} \label{fig:sphereplot-timeevo} \end{center} \end{figure*} During the evolution, higher harmonics of the initial peak arise. To investigate these in greater detail, we measured the energy of the initial peak and its first harmonic. The result is shown in Fig. \ref{fig:peak-harmonics}. Note that the generation of these modes starts at higher perpendicular wavenumbers (see Fig. \ref{fig:sphereplot-timeevo}, left-most panel, first harmonic at $k_\perp \approx 15$) and not at purely parallel $k$. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{peak-harmonics.pdf} \caption{Energy dependency of the first harmonic on the initial peak energy. A fit resulted in a quadratic function.
The simulation setup SI was used.} \label{fig:peak-harmonics} \end{center} \end{figure} In addition to the observed higher harmonics, another basic result of the simulation type SI is the discrete generation of other modes along the $k_\parallel$ axis, especially at higher amplitudes. As seen in Fig. \ref{fig:peak24biggauss}, adjoining maxima develop next to the main Gaussian curve, e.g. at positions $k'_\parallel = 2 \pi \cdot 16$ and 38. The way the peak develops appears to vary with the amplitude: the peak with $\Gamma_1$ clearly generates fewer additional modes than the larger peak with $\Gamma_2$. Both peaks show a significant change of their original wavenumber position and a clear broadening. The two--dimensional spectra reveal a strong perpendicular development in k-space, especially at large $|\vec{k}|$. During the decay the evolution becomes a little more isotropic, but the preferential direction of evolution is clearly perpendicular. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v31-24er-sphereplots.pdf} \caption{Simulation setup SI: Comparison between growth rates $\Gamma_1$ and $\Gamma_2$ at peak position $k'_\parallel = 2 \pi \cdot 24$ at maximum drive time. Although the effective energy input varies by a factor of 100 (see table \ref{tab:peakamplitudes}), another transport mechanism becomes dominant. The smaller peak ($\Gamma_1$) develops dominantly in the perpendicular direction, while the evolution of the bigger peak ($\Gamma_2$) is more isotropic and tends towards smaller $k_\parallel$. The colours indicate the logarithm of the total spectral energy.}\label{fig:v31-24er-sphereplots} \end{center} \end{figure} The importance of the growth rates $\Gamma_{1/2}$ is also visible in Fig. \ref{fig:v31-24er-sphereplots}. The development of each peak is very different. An interesting result is a more dominant evolution of the $k'_\parallel = 2 \pi \cdot 24$ peak with the high growth rate $\Gamma_2$ that is directed towards smaller $k_\parallel$. In the peak-dominated region the turbulence seems to increase. Therefore the $\zeta$ parameter (Eq. \ref{eq:zeta}) is of interest. Fig. \ref{fig:v31-big24-critbalmap} shows a map of values of $\zeta = [0.01 \cdots 0.15]$, i.e. near the critical balance. The same plot for the lower growth rate would be empty. This also indicates that the high $\zeta$ values along the $k_\perp$-axis in Fig. \ref{fig:v31-big24-critbalmap} stem from interactions with the peak. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v31-big24-Critbalmap-26000.pdf} \caption{Map of the critical balance parameter $\zeta$ for the right--hand plot in Fig. \ref{fig:v31-24er-sphereplots}. The contours linearly represent values between 0.01 and 0.14 of the integral values of $\zeta(k_\parallel,k_\perp)$. The peak structure and values near the $k_\perp$-axis are clearly visible.}\label{fig:v31-big24-critbalmap} \end{center} \end{figure} The peaks at $k'_\parallel=2 \pi \cdot 8$ remain much longer than the higher modes, see table \ref{tab:peaktimes}. This is because the dissipation process depends on $|\vec{k}|$, which is higher for the latter. Furthermore, the hyperdiffusivity damps higher modes more strongly ($\propto k^4$). \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v34-8er-sphereplots.pdf} \caption{Two--dimensional peak evolution in setup SII at maximum drive time. The higher harmonics develop at lower $k_\perp$ compared to SI. The higher growth rate ($\Gamma_2$, right panel) shows a strong parallel evolution.
The smaller amplitude ($\Gamma_1$, left panel) develops dominantly towards higher $k_\perp$. The colours indicate the logarithm of the total spectral energy.}\label{fig:v34-8er-sphereplots} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v34-small24-1D.pdf} \caption{Simulation setup SII: Time evolution of a Gaussian--distributed amplification at $k'_\parallel=2\pi \cdot 24$ at the smaller growth rate $\Gamma_1$. Again, a one--dimensional cut along the $k_\parallel$ axis is shown. The shift of the peak is stronger compared to SI.}\label{fig:v34-peak24smallgauss} \end{center} \end{figure} The simulation of the same peaks in setup SII with higher $\nu$ reveals one key feature. The shift towards smaller $k_\parallel$ positions occurs for both the $k'_\parallel=2\pi \cdot 8$ and the $2\pi \cdot 24$ peak and is stronger than in SI. The shifting is significant, especially at $k'_\parallel=2\pi \cdot 24$, which is shown in Fig. \ref{fig:v34-peak24smallgauss}. Within 2.55 s the original position changes from $k'_\parallel=2\pi \cdot 24$ to 22. The peak amplitude is again important for the evolution. We observed that higher growth rates lead to an isotropic evolution, whereas the lower growth rates show a strongly perpendicular development. The effective peak energy is lower, and consequently the energy transport towards higher wavenumbers is more restricted. There are also fewer higher harmonics. This can be observed by direct comparison of the left plot in Fig. \ref{fig:v34-8er-sphereplots} with the middle plot in Fig. \ref{fig:sphereplot-timeevo}. As expected, the decay of the energy is faster compared to SI (see table \ref{tab:peaktimes}). \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v35-8u24-sphereplots.pdf} \caption{Evolution of the two peak positions at $t_\text{mid}$ in setup SIII. Very strong perpendicular development in all SIII simulations is observed. The edge of the parallel driving range at $k'_\parallel=2\pi \cdot 14$ is clearly visible in the right--hand panel.}\label{fig:v35-8u24-sphereplots} \end{center} \end{figure} The simulation setup SIII reveals a very strong perpendicular evolution for all peaks. Two examples are given in Fig. \ref{fig:v35-8u24-sphereplots}. The growth rates seem not to have a strong influence on the development. Only with $\Gamma_2$ are more higher harmonics of the $k'_\parallel=2\pi \cdot 8$ peak visible. The time of total energy decay of the peaks is slightly longer than in the simulations SII. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v36-big8_u_critbal.pdf} \caption{Two--dimensional peak evolution in setup SIV. Left: Development of the peak at $k'_\parallel=2\pi \cdot 8$ with growth rate $\Gamma_2$. The figure presents the state 1.28 s after the maximum driven peak ($t_0$). Right panel: Corresponding map of the critical balance parameter $\zeta$. Each contour represents integral values above $\zeta=0.1$. The colour scaling is linear.}\label{fig:v36-big8_u_critbal} \end{center} \end{figure} In the last simulation setup SIV the magnetic background field was reduced by two orders of magnitude. The resulting $dB/B_0$ ratio is of the order of 10 and the turbulence development is highly isotropic. The peaks at $k'_\parallel=2\pi \cdot 8$ and $k'_\parallel=2\pi \cdot 24$, both at growth rate $\Gamma_2$, show very interesting features. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{v36-small24_u_critbal.pdf} \caption{Two--dimensional peak evolution in setup SIV.
Left: Development of the peak at $k'_\parallel=2\pi \cdot 24$ with growth rate $\Gamma_1$. The figure presents the state 0.68 s after the maximum driven peak ($t_0$). Right panel: Corresponding map of the critical balance parameter $\zeta$. Each contour represents integral values above $\zeta=0.1$. The colour scaling is linear.}\label{fig:v36-small24_u_critbal} \end{center} \end{figure} In addition to the typical peak evolution and generation of higher harmonics, other structures also arise at high $k_\perp$. As shown in Figs. \ref{fig:v36-big8_u_critbal} and \ref{fig:v36-small24_u_critbal}, both peak positions generate these structures, but at various positions. During the development of the peak at position $k'_\parallel=2\pi \cdot 8$ these structures arise predominantly at high $(k'_\parallel,k'_\perp)$ locations, e.g. $(2\pi \cdot 10,2\pi \cdot 18)$, $(2\pi \cdot 10,2\pi \cdot 38)$, $(2\pi \cdot 18,2\pi \cdot 32)$ and $(2\pi \cdot 24,2\pi \cdot 18)$, see Fig. \ref{fig:v36-big8_u_critbal}. The structures evolving in the $k'_\parallel=2\pi \cdot 24$ simulations are instead located at intermediate $k_\perp$ but low $k_\parallel$, e.g. $(2\pi \cdot 5,2\pi \cdot 13)$, $(2\pi \cdot 5,2\pi \cdot 17)$ and $(2\pi \cdot 8,2\pi \cdot 22)$, see Fig. \ref{fig:v36-small24_u_critbal}. In contrast to the higher harmonics, these structures are not necessarily integral multiples of the initial peaked mode. A map of the critical balance parameter $\zeta$ was calculated for both plots. Around the wavenumber of the peaks and along the $k_\perp$ axis this parameter is of order 1. Interestingly, during the $k'_\parallel=2\pi \cdot 24$ simulation this parameter increases, especially along the $k_\perp$ axis. In contrast to the other setups, the higher harmonics are located along the $k_\parallel$ axis at low or zero $k_\perp$. A significant shift along this axis towards smaller parallel wavenumbers is observed, e.g. as shown in Fig. \ref{fig:vglsmallpeak8setups}. We discuss possible explanations for these phenomena below. \section{Discussion}\label{sec:discuss} As discussed in Sect. \ref{sec:theory}, there are three possibilities for how an excited wavemode can develop: through diffusion, dissipation and convection. A general conclusion from our simulations is that the dynamics of these mechanisms are strong at high wavenumbers. Especially for the dissipation process this is not unexpected because it strongly depends on the wavenumber. The hyperdiffusivity might amplify this effect because of the higher power in $k$. Figs. \ref{fig:peak24smallgauss} and \ref{fig:peak24biggauss} clearly show a rapid dissipation of energy because the peak loses amplitude very fast. Also table \ref{tab:peaktimes} indicates a decay on short timescales at high wavenumbers. However, a broadening also arises at the flanks of the Gaussian distribution. The broadening is strong for the higher growth rate $\Gamma_2$. Within a time interval of 1.7 s after the driving phase the FWHM is increased by roughly 70\% at $\Gamma_2$, whereas the peak with the smaller amplitude is broadened by ca. 15\%. A possible explanation is the equilibrium between the energy and enstrophy cascades, which causes a similar flow of energy to large and small wavenumbers \citepads{mininni09}. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{enstrophy.pdf} \caption{Time evolution of the enstrophy for the simulation setups SI, SIII and SIV.
The strong magnetic background field of SIII prevents the enstrophy from developing, while a clear change in SIV is visible.}\label{fig:enstrophy} \end{center} \end{figure} The convective mechanism shifts the maximum of the peak. Convection is a slow process compared to dissipation and diffusion within the simulated regime. Nevertheless we were able to observe it within our simulations, e.g. in Fig. \ref{fig:peak24smallgauss}. The transport is towards smaller wavenumbers, which indicates an enstrophy cascade. This effect is more typical for two--dimensional plasmas. The MHD-development in anisotropic plasmas is mostly effectively two--dimensional. This leads to inverse energy cascades as well as upscaling enstrophy cascades. These mechanisms generate larger vortices and consequently transfer energy to smaller wavenumbers. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{vgl-small8-10k_steps_v313536.pdf} \caption{Comparison of the convective peak shifting between the setups with changing background field $B_0$: SI, SIII and SIV. The strong $B_0$ in SIII causes no observable shifting at all, whereas the weak $B_0$ leads to a significant change of the original peak position. One possible explanation is an enstrophy cascade.} \label{fig:vglsmallpeak8setups} \end{center} \end{figure} To investigate this behaviour in more detail, we concentrated on the peak at $k'_\parallel = 2 \pi \cdot 8$ with growth rate $\Gamma_1$ within the setups with changing magnetic background field, SI, SIII and SIV. The comparison of these setups is shown in Fig. \ref{fig:vglsmallpeak8setups}. We observe a slight shift of the peak in setup SI from position 8 to 7.7 within 17 s after the maximum drive. The simulation SIII with increased $B_0$ shows no transport of the peak. After the same time interval it is still at position 8. The evolution in setup SIV with a small $B_0$ is very strong. After 17 s the peak has shifted by roughly 1.5 grid positions from $k'_\parallel = 2 \pi \cdot 8$ to 6.5. The development of an enstrophy cascade explains this behaviour. The strong magnetic field effectively makes the MHD-evolution one-dimensional. As $B_0$ decreases, the evolution becomes less restricted by the magnetic background field. Consequently, the enstrophy cascade increases. Fig. \ref{fig:enstrophy} clearly supports this explanation. The enstrophy was calculated by \begin{align} \varepsilon = \int \text{d}^3 k \; |\vec k\times \vec u(\vec k)|^2. \end{align} All two--dimensional spectra show a strong evolution in the perpendicular direction. This is consistent with the theories of \citetads{gsweak} and \citetads{gsstrong} within a turbulent plasma, as explained in Sect. \ref{sec:theory}. This perpendicular evolution is caused by the energy cascade. In particular, Fig. \ref{fig:v35-8u24-sphereplots} shows strong perpendicular behaviour, whereas the evolution clearly becomes more isotropic in SIV (see Figs. \ref{fig:v36-big8_u_critbal} and \ref{fig:v36-small24_u_critbal}). This is due to the increasing strength of the turbulence for the cascade, which is expressed by the $dB/B_0$ ratio. The dissipation coefficients presented in table \ref{tab:diffcoeffs} do remain roughly constant, especially for all simulations with $\Gamma_2$ at $k'_\parallel = 2 \pi \cdot 8$. This implies that spatial diffusion is the dominant process for these simulations. The most significant change of the dissipation coefficient occurs at $k'_\parallel = 2 \pi \cdot 24$ for setup SII, between $\tau_1$ and $\tau_2$.
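As an aside on the enstrophy diagnostic used in Fig. \ref{fig:enstrophy}: on the discrete Fourier grid the integral above reduces to a sum over modes, as in the following minimal sketch (Python/numpy; the array names are illustrative, not those of our solver):
\begin{verbatim}
import numpy as np

def enstrophy(u_hat, kx, ky, kz):
    # Discrete version of eps = int d^3k |k x u(k)|^2.
    # u_hat: complex velocity field in k-space, shape (3, n, n, n);
    # kx, ky, kz: wavenumber grids of shape (n, n, n).
    cx = ky * u_hat[2] - kz * u_hat[1]   # (k x u)_x
    cy = kz * u_hat[0] - kx * u_hat[2]   # (k x u)_y
    cz = kx * u_hat[1] - ky * u_hat[0]   # (k x u)_z
    return np.sum(np.abs(cx)**2 + np.abs(cy)**2 + np.abs(cz)**2)
\end{verbatim}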
We interpret this change of the dissipation coefficient as a nonlinear effect in which wavemodes are generated, which in turn trigger the cascade. This influences the energy of the initialised Gaussian very strongly. As expected, the dissipation coefficients for setup SII are stronger for the $k'_\parallel = 2 \pi \cdot 24$ peaks. The spatial diffusion coefficient is connected with the wavenumber dissipation process via $\nu$. This is because the spatial diffusion process is the only mechanism that leads to energy losses in k-space (see Sect. \ref{sec:theory}). This is at least valid for wavemodes far below the antialiasing edge, where energy is artificially removed as well. A connection of the wavenumber diffusion to $\nu$ is observed in Fig. \ref{fig:v34-8er-sphereplots}, where the Gaussian shape is significantly broadened, which is caused by the diffusion process. The observed higher harmonics could resemble a three-wave process $k_{8}+k_{8} \rightarrow k_{16}$. This is supported by Fig. \ref{fig:peak-harmonics}: the energy dependency between the peak and the next higher harmonic is quadratic. This is also the case for three-wave interactions \citepads{1969npt..book.....S}. As pointed out before, this is forbidden for Alfv\'en waves by wave interaction processes: only oppositely directed wavepackets can collide, hence momentum conservation would be violated by the process described above. This wave interaction can only take place with oppositely directed waves of the background plasma. This must also be true since a cross-check simulation of the peak without turbulent background did not show these harmonics. Another explanation is given by \citetads{galinsky97}: Alfv\'en waves interact with themselves, which leads to wave steepening. Investigating the strong turbulence evolution in simulation SIV shows unexpected structures at high $k_\perp$ (see Figs. \ref{fig:v36-big8_u_critbal} and \ref{fig:v36-small24_u_critbal}). This effect might be caused by the critical balance of strong turbulence within the Goldreich and Sridhar description. This requires the parameter $\zeta$ to be $\sim 1$, which is the case for locations in the vicinity of the peaks and along the perpendicular axis. Mainly the $k'_\parallel=2\pi \cdot 24$ peak seems to amplify the region at $k'_\perp =2\pi \cdot 12$ and $k'_\parallel = 0$. Values of $\zeta > 0.1$ lead to the development of these structures, but it is not possible to conclude whether the structures arise first and then generate higher values of $\zeta$, or vice versa. Nevertheless, the turbulence strength increases within these regions, which agrees with \citetads{gsstrong}, where $\zeta$ is assumed to become unity during the turbulence evolution. In addition to setup SIV, SI also shows this behaviour, as presented in Figs. \ref{fig:v31-24er-sphereplots} and \ref{fig:v31-big24-critbalmap}. More investigation is needed to clarify the underlying processes. \section{Conclusion} We analysed the evolution of waves generated by proton beams in a turbulent medium. This evolution may play an important role in diffusive shock acceleration in the heliosphere. Our study has revealed that the three different processes sketched in Fig. \ref{fig:peak-evo-sketch} are taking place. The most interesting question is whether wavemodes excited by particles of a certain energy yield wavemodes that interact with particles of other energies. The observed shifting of the initial parallel wavenumber position towards smaller $k_\parallel$ influences the particle acceleration at these modes. As shown in Eq.
\ref{eq:wave-particle-res} and in the corresponding section, this means that particles of higher energies can be accelerated because the wavemode develops towards larger spatial scales \citepads{2007ApJ...658..622V}. The shift towards smaller $k_\parallel$ is indeed fairly minor in this simulation. This is because of the injection of energy at only one single wavenumber and the limited simulation time. We also expect an effect through nonlinear amplification: waves at smaller $k_\parallel$ accelerate particles of higher energies, which again inject energy at lower $k_\parallel$. On the other hand, a strong evolution towards high $k_\parallel$ has also been observed in terms of the development of higher harmonics of the initialised mode. Consequently, particle acceleration at lower energies is also possible. It should be noted, however, that this is not consistent with isotropic diffusion of energy in wavenumber space as assumed in \citetads{2007ApJ...658..622V}. This means that the previous models of the wave-particle system will have to be updated accordingly to account for the strong dependence of the energy transport on the direction in wavenumber space. Especially the development of the higher harmonics contradicts an identical forward and backward cascade. The strong perpendicular evolution of the peak initialised with purely parallel propagating waves causes higher orders ($|n|>1$, see Eq. (\ref{eq:wave-particle-res})) of resonance between solar particles and the amplified mode. This is because Eq. (\ref{eq:growthraterami}) has to be modified in this case \citepads{Schlickeiser2002}. Owing to limited computational power we have not been able to investigate the effect of critical balance in detail. But we note that under certain conditions ($k_\parallel/k_\perp$, amplitude) the evolution may be governed by the critical balance. \begin{acknowledgements} We express our gratitude to Rami Vainio, Timo Laitinen and Markus Battarbee for their cooperation and contributions to this work.\\ We acknowledge support from the Deutsche Forschungsgemeinschaft through grant SP 1124/3.\\ SL additionally acknowledges support from the European Framework Program 7 Grant Agreement SEPServer - 262773.\\ We thank the anonymous referee for her/his detailed comments, which improved the paper significantly. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Consider a randomly moving object that tries to cross a closed area or to run away from a known position. We want to compute the best spatial and temporal deployment of elementary search efforts in order to maximize the detection probability of this intelligent and randomly moving object. In our article, we call this object a \emph{target} and the search efforts are assigned to \emph{sensors}. Our work is closely linked to search theory \cite{Souris99a}, Stackelberg games \cite{Fudenberg91} (which belong to the game theory domain) and operational research. It is also related to sensor networks \cite{Chen:2007:DRD:1290539.1290593}. Stackelberg games \cite{Pita09} could have been used to model our problem; however, due to their limitations, this was not possible. In this two-person game, the first player, the \emph{leader}, commits to a strategy first. Then, the second player, the \emph{follower}, tries to optimize his reward, considering the action chosen by the leader. There are many security domains for which Stackelberg games are appropriate. For example, the ARMOR system \cite{armor08} has been used for years at the Los Angeles International Airport. ARMOR casts a patrolling and monitoring problem as a Bayesian Stackelberg game (an extension of Stackelberg games). Moreover, these games have potential applications for network deployment and routing. Since we aim to deal with continuous temporal and spatial spaces, we are faced with two difficulties. In our case, expressing the players' strategies as a vector or a function would be difficult. In addition, the number of strategies is expected to be quite large or even infinite. Search theory is another domain of game theory, which deals with the optimization of search efforts. Most of the problems in this field are usually modeled in discrete space and time, and therefore operational research methods are often chosen to solve them. Eagle \cite{Eagle96} and Washburn \cite{Washburn98} were the first to study the problem of the optimization of spatio-temporal search efforts. Using a \emph{branch and bound} solver, Eagle proposed to maximize a bound on the detection probability rather than the probability itself. Other approaches \cite{Boussetta06,Bartolini08} only focus on spatial optimization without taking the target intelligence into account. Golen \cite{Golen09}, meanwhile, recently used a stochastic approach, more precisely an evolutionary algorithm \cite{Back:1993:OEA:1326623.1326625}, but only optimized the sensors' discrete positions. Our approach, based on the splitting framework \cite{cerou07a}, is different because it allows us to optimize both space and time, and we do not have to discretize these spaces anymore. In the sequel, we will try to point out that the temporal aspect is the most difficult part of our problem. This population-based optimization method can be linked to evolutionary algorithms. In this article, we first present and model our problem. Then, we give the expression of the detection probability of interest and explain how we have applied rare event simulation and splitting algorithms to our problem, resulting in the generalized splitting for research efforts scheduling (GSRES) algorithm. Lastly, we illustrate our method with the \emph{datum} problem and give prospects.
\section{Problem presentation} In order to maximize the detection probability, until time $T$, of a smart and reactive moving target whose trajectory is unknown (and depends in practice on random variables), we have to deploy both spatially and temporally a limited number of active sensors in an area $\Omega$ (called the \emph{operational theater}). Generally, $\Omega$ is a convex polygon with at least 3 vertices, but in most cases it is a simple rectangle. As soon as a sensor is deployed, it is powered on and starts consuming energy; it becomes out of service when its battery is empty. In this article, we do not study the sensors' power consumption, so we only assume that a battery is powerful enough to make the sensor emit a maximum number of times during a fixed period. Because sensors are active, we assume they can only detect a target when they send a signal. Since they are not autonomous, they only ping when a moving commanding station requests it. \subsection{The target dynamic} We are only given an \emph{a priori} on the starting point of the target trajectory. This \emph{a priori} is weak if the starting point is randomly sampled in the search space $\Omega$. On the contrary, a strong \emph{a priori} means its initial position is sampled from a Gaussian pdf centered on the last seen position. We then define a feasible trajectory $\bm Y_T \in \mathcal Y$ by the set of $K$ points (or $K-1$ legs) composing it. Thus \cite{RongLi08a} \begin{equation} \begin{array}{c} \bm Y_T = \{\bm y_k\}_{k = 0\ldots K-1} \text{ where }\\ \displaystyle \sum\limits_{k=0}^{K-3} \frac{{\|r_{k+1} - r_k \|}_2}{v_k} + \frac{{\|r_{K-1} - r_{K-2} \|}_2}{v_{K-1}}= T, \end{array} \end{equation} with $v_k$ the target speed on leg $k$. In this formula we denote $\bm y_k \triangleq \left[r_{x,k}, v_{x,k}, r_{y,k}, v_{y,k}\right]_{k = 0\ldots K-1}$, where $r_{x,k}$ and $r_{y,k}$ define the target position and where $v_{x,k}$ and $v_{y,k}$ define the target velocity. \begin{figure} \begin{center} \includegraphics[scale=0.35]{trajectory} \caption{Example of a leg-by-leg rectilinear trajectory (3 legs)} \label{fig:trajectory} \end{center} \end{figure} Moreover, if the target detects a signal from a sensor but is too far from it, it is able to avoid it before being detected, and it is also able to memorize its position. Basically, a target is instantly and surely detected if it enters the sensor's detection range. However, since the target is smart, if it comes close enough to a sensor which has just sent a signal, it detects it and learns all of its specifications. Thus, it may decide to avoid this threat or to come closer and start an avoidance later. In all cases, when the target is notified of the existence of a sensor, it changes its course before it enters the detection range. The target trajectory is then directly influenced by the partial knowledge of the target on the deployed search efforts $\mu_t(\bm X)$. For instance, $\mu_t(\bm X) = 0$ means that until time $t \leq T$, the target has not detected any search effort yet. On the contrary, if $\mu_t(\bm X) \in \{1,2\}$, the target has learned about a sensor's existence and is trying to avoid it ($\mu_t(\bm X) = 1$) or to run away from it after a detection ($\mu_t(\bm X) = 2$). The instant of detection by a sensor $\bm s$ is denoted by $t_{\bm s}^{detect}$. \begin{figure} \begin{center} \includegraphics[scale=0.5]{detection_nonsmart} \caption{The target starts from $\bm y_0$. After a short time, it is detected by sensor 2 and does not change its course.
The trajectory ends at $\bm y_T$. In red, the detection circle. In orange, the counter-detection circle.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.3]{detection_w_escape} \caption{The target starts from $\bm y_0$. After a while, it is detected by an active sensor (sensor 1) and immediately escapes by following a radial course.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.3]{avoidance} \caption{The target starts from $\bm y_0$. After a while, it detects an active sensor (sensor 2) and avoids it.} \end{center} \end{figure} We also introduce the control vector $\bm u(\beta)$, which contains information about the change of course $\beta$. Lastly, we can give an expression of the motion state equation of our target: \begin{equation} \label{eq:yk+1} \bm y_{k+1} = \bm F(\varDelta t_{k}(\mu_k(\bm X))) \bm y_k + \bm u_t(\beta_k(\mu_k(\bm X)))\ . \end{equation} Here, $\varDelta t_{k}$ and $\beta_k$ are two random variables and $\bm F(\varDelta t_{k}(\mu_k(\bm X)))$ is a state transition matrix associated with a rectilinear motion: \small \begin{equation} \bm F(\varDelta t_{k}(\mu_k(\bm X))) = \left[ \begin{array}{c c c c} 1 & \varDelta t_{k}(\mu_k(\bm X))& 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & \varDelta t_{k}(\mu_k(\bm X))\\ 0 & 0 & 0 & 1\\ \end{array} \right]. \end{equation} \normalsize More precisely, $\varDelta t_k$ is sampled from a truncated Gaussian law if $\mu_k(\bm X) = 0$ ($\varDelta t_k \sim \mathcal N_t(\mu_{\varDelta t}, \sigma^2_{\varDelta t}, \mu_{\varDelta t} - a, \mu_{\varDelta t} + a)$, $a \in \mathbb R$). When a detection occurs (\emph{i.e.} $\mu_k(\bm X) \in \{1,2\}$), we end the current leg at $t_{s}^{detect}$ and add an extra one, so that $K = K+1$, $t_{k+2} = t_{k+1}$ and $t_{k+1} = t_{s}^{detect}$. $\varDelta t_{k+1}$ is then sampled from a uniform law as $\varDelta t_{k+1} \sim \mathcal U ([t_{s}^{detect}, t_{k+2}])$ (with $t_{s}^{detect} \sim \mathcal U([t_k, t_{k+2}])$). The course change $\beta_k$ is sampled from a truncated Gaussian law whose mean and bounds may vary depending on the last most threatening sensor met ($\beta_k \sim \mathcal N_t(\mu_\beta, \sigma^2_{\beta}, \mu_\beta - a, \mu_\beta + a)$, $a \in \mathbb R$) when $\mu_k(\bm X) \in \{0,1\}$, and from a uniform law when $\mu_k(\bm X) = 2$. More precisely, when $\mu_k(\bm X) = 0$, the mean of the Gaussian law $\mu_\beta$ is equal to the initial course of the trajectory $\beta_0$ or to its last course $\beta_{k-1}$, depending on the chosen dynamic. When $\mu_k(\bm X) = 1$, $\mu_\beta$ is directly defined by the last most threatening sensor met. Lastly, if the target is detected, \emph{i.e.} $\mu_k(\bm X) = 2$, the new course of the target is defined by its previous course and the position of the detecting sensor. \subsection{The solution} Remember that we have a set of $P$ sensors $\bm s_i$ that we may spatially and temporally deploy in our operational theater $\Omega \times [0,T]$ in order to detect a smart and reactive target. A sensor is only operational for a limited amount of time (\emph{e.g.} its battery life) and can therefore send a limited number of pings ($E_{max}$ times at most). Moreover, sensors are not autonomous and they are only able to send a signal if they are given the order by a commanding station. This is denoted by the visibility parameter until time $t \leq T$, $\varphi_t(\bm X)$.
Before going further, we need to explain the following points: \begin{itemize} \item all the sensors must be in the search space $\Omega$, \item sensors must have been set up before they can be activated, \item instants of activation must be in $[0,T]$. \end{itemize} Denote a solution by $\bm X \in \mathcal X$ (the set of feasible solutions), so we can define: \begin{equation} \bm X \triangleq \{\bm s_i(\bm \tau_i)\}_{i = 1,\ldots, P} \text{ with } \bm \tau_i = \{t_{i,1},\ldots,t_{i,j}\}_{j = 1,\ldots, np_i}. \end{equation} $\bm s_i$ corresponds to the position of sensor $i$ ($i \in \{1,\ldots,P\}$), while $\bm \tau_i$ is a vector containing its $np_i$ instants of activation (in $[0,T]$). We also denote by $\mathcal C$ all the spatial and temporal constraints on the feasible solutions. \section{Evaluating the detection probability} We want to maximize the detection probability of a target until time $T$. This quantity is denoted by $S_T(\bm X)$ and is given by the following equation: \small \begin{equation} S_T(\bm X) \triangleq \int_{Y_T \in \mathcal Y} \! f\left(\bm Y_T | \varphi_T(\bm X);\mathcal C\right) p\left(\bm Y_T | \mu_T(\bm X);\mathcal C\right)\, d\bm Y_T\ . \end{equation} \normalsize Here, $f\left(\bm Y_T | \varphi_T(\bm X);\mathcal C\right)$ is a cookie-cutter cost function that takes the value $1$ if the studied trajectory $\bm Y_T$ satisfies some defined criteria (such as a number of detections and a number of avoidances) and $0$ otherwise, \emph{i.e.} $f\left(\bm Y_T | \varphi_T(\bm X);\mathcal C\right) = \mathds 1_{\{\bm Y_T \in A(\bm X, \mathcal C)\}}$, where $A(\bm X, \mathcal C)$ is the set of trajectories that are detected by the solution $\bm X$ and that respect the constraints $\mathcal C$. It depends on the visibility of the solution $\varphi_T(\bm X)$. $p\left(\bm Y_T | \mu_T(\bm X);\mathcal C\right)$ is the conditional pdf used to generate the target trajectories and depends on the target intelligence $\mu_T(\bm X)$. As this is not the goal of this article, we do not give any more information on how we have implemented this cost function. Unfortunately, $S_T(\bm X)$ is an integral with respect to the probability distribution of the (random, solution-dependent) target trajectory $\bm Y$ and its analytical expression is not available. A first approach would be to use the crude Monte Carlo (CMC) method to obtain an unbiased estimator of $S_T(\bm X)$, $\widehat S_T(\bm X)$: \begin{equation} \begin{array}{cc} \widehat S_T(\bm X) = \frac{1}{N} \displaystyle\sum_{i=1}^N f(\bm Y^i_T |\varphi_T(\bm X);\mathcal C), \text{ where }\\ \bm Y_T^i \sim p(\bm Y_T | \mu_T(\bm X);\mathcal C). \end{array} \end{equation} The trajectories $\bm Y_T^i$ are recursively generated using the motion state equation (\ref{eq:yk+1}). To be concrete, we generate a large number of feasible trajectories $\bm Y_T^i,i=1,\ldots,N$ and evaluate $f\left(\bm Y_T^i | \varphi_T(\bm X);\mathcal C\right)$. Note that the relative error associated with $\widehat S_T(\bm X)$ given by the CMC estimator is \begin{equation} \label{eq:er} RE_{CMC}(\widehat S_T(\bm X)) = \frac{\sqrt{1- S_T(\bm X)}}{\sqrt{N S_T(\bm X)}}\ . \end{equation} Also remark that the smaller the probability to be estimated, the larger the relative error. To reduce this error, we have to increase the number of trajectories $N$. Knowing that the probabilities we encounter are above $10^{-3}$ (if they were lower, planning would be useless), we have chosen $N \geq 50000$.
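As an illustration of this CMC estimator, the following sketch (Python; \texttt{sample\_trajectory} and \texttt{detected} are placeholders for the problem-specific trajectory sampler $p(\bm Y_T | \mu_T(\bm X);\mathcal C)$ and the cookie-cutter cost function $f$, which we do not detail here) returns both $\widehat S_T(\bm X)$ and its relative error:
\begin{verbatim}
import numpy as np

def cmc_detection_probability(solution, sample_trajectory, detected,
                              N=50000, seed=0):
    # Crude Monte Carlo estimate of S_T(X): draw N feasible trajectories
    # conditional on the solution and average the cost function f.
    rng = np.random.default_rng(seed)
    hits = sum(detected(sample_trajectory(solution, rng), solution)
               for _ in range(N))
    p_hat = hits / N
    # relative error of the CMC estimator, cf. the equation above
    rel_err = (np.sqrt((1.0 - p_hat) / (N * p_hat))
               if p_hat > 0 else np.inf)
    return p_hat, rel_err
\end{verbatim}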
The optimal solution we search for, denoted by $\bm X^\star$, is defined by the following formula: \begin{equation} \bm X^\star \triangleq {\arg\max}_{\bm X \in \mathcal X} \{S_T(\bm X)\}. \end{equation} We also define $\gamma^{\star} \triangleq S_T(\bm X^\star)$ as the maximum detection probability we can reach. Let $\bm X^\dag \triangleq \arg\max \{\widehat S_T(\bm X)\}$ be the solution obtained when maximizing the approximated probability, and $\gamma^{\dag}=\widehat S_T(\bm X^\dag)$. In the sequel we focus on computing $\gamma^{\dag}$ and $\bm X^\dag$. Indeed, if $N$ is large enough, we can expect that these are good approximations of $\gamma^\star$ and $\bm X^\star$ respectively. A sufficient condition for this to hold is that $\widehat S_T(\bm X)$ converges to $S_T(\bm X)$ when $N \rightarrow \infty$ uniformly in $\bm X \in \mathcal X$. Splitting theory is based on the observation that maximizing $S_T(\bm X)$ (in practice $\widehat S_T(\bm X)$) is similar \begin{itemize} \item to estimating the probability \begin{equation} \label{eq:l_gamma} \ell(\gamma) = \mathbb P(\widehat S_T(\bm X) \geq \gamma) = \int_{\mathcal X} \mathds 1_{\{\widehat S_T(\bm X) \geq \gamma\}} q(\bm X, \mathcal C) d\bm X, \end{equation} where $q(\bm X, \mathcal C)$ is some positive probability density on $\mathcal X$, with the idea that this probability decreases and reaches zero when $\gamma$ increases toward the (unknown) value $\gamma^\dag$ and is identically zero beyond this value, hence the characterization \begin{equation} \gamma^\dag = \inf\{\gamma \geq 0 : \ell(\gamma) = 0\}\ , \end{equation} \item and simultaneously to sampling the set \begin{equation} \mathcal X_\gamma = \{\bm X \in \mathcal X : \widehat S_T(\bm X) \geq \gamma\} \subset \mathcal X\ , \end{equation} with the idea that this set decreases toward $\{\bm X^\dag\}$ when $\gamma$ increases toward the (unknown) value $\gamma^\dag$ and reduces to the empty set beyond this value. \end{itemize} However, when $\gamma$ goes to $\gamma^\dag$, the event $\{\widehat S_T(\bm X) \geq \gamma \}$ becomes rarer and rarer, and consequently the CMC estimator \begin{equation} \ell(\gamma) = \frac{1}{C} \sum_{i=1}^C \mathds 1_{\{\widehat S_T(\bm X_i) \geq \gamma\}}\ \text{where } \bm X_i \sim q(\bm X, \mathcal C) \end{equation} has a relative error \begin{equation} RE_{CMC}(\ell(\gamma)) = \frac{\sqrt{1-\ell(\gamma)}}{\sqrt{C\ell(\gamma)}} \end{equation} that increases to infinity. In order to reduce this relative error, we should increase the sample size $C$, but then we would be faced with a computational explosion. Moreover, when $\gamma$ goes to $\gamma^\dag$, it becomes harder and harder to produce samples from $q(\bm X, \mathcal C)$ that would be close to $\bm X^\dag$. To address this problem, a technique called generalized splitting \cite{BotevKroese08}, derived from the research of Diaconis, Holmes and Ross \cite{Diaconis94} on MCMC (Markov chain Monte Carlo), allows us to compute $\ell(\gamma)$ in an easier and more precise way. For an optimization problem, we will find at least one solution that maximizes our criterion among all the solutions sampled to compute our probability of interest.
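Before formalizing this idea, here is a schematic of the resulting optimization loop (a minimal Python sketch of the general principle only; \texttt{sample\_q}, \texttt{score} and \texttt{mutate} are placeholders for the initial sampler $q$, the estimator $\widehat S_T$ and the Markov kernel described in the next sections, and the default parameter values are illustrative):
\begin{verbatim}
import numpy as np

def generalized_splitting(sample_q, score, mutate,
                          C=1000, rho=0.1, L_max=50, seed=0):
    # Schematic splitting loop: keep the best rho-fraction of the
    # population, resample it with replacement up to size C, and
    # refresh each copy with a score-preserving Markov kernel
    # (the Gibbs sampler of the next section).
    rng = np.random.default_rng(seed)
    pop = [sample_q(rng) for _ in range(C)]         # X_i ~ q(X; C)
    best, best_score = None, -np.inf
    for l in range(L_max):
        scores = np.array([score(x) for x in pop])
        gamma_l = np.quantile(scores, 1.0 - rho)    # adaptive threshold
        elite = [x for x, s in zip(pop, scores) if s >= gamma_l]
        if scores.max() > best_score:
            best, best_score = pop[int(scores.argmax())], scores.max()
        pop = [mutate(elite[rng.integers(len(elite))], gamma_l, rng)
               for _ in range(C)]                   # bootstrap + refresh
    return best, best_score
\end{verbatim}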
If we define a sequence of increasing thresholds $\gamma_l$, such that $\gamma_0 \leq \gamma_1 \leq \ldots \leq \gamma_L$ (with $\gamma_L \leq \gamma^\star$), we can rewrite $\ell(\gamma)$ as the following product of conditional probabilities: \begingroup \everymath{\displaystyle} \scriptsize \begin{equation} \begin{array}{lll} \ell(\gamma) & = & \mathbb P_q \left(\widehat S_T(\bm X) \geq \gamma_0\right) \prod_{l=1}^L \mathbb P_q \left(\widehat S_T(\bm X) \geq \gamma_l | \widehat S_T(\bm X) \geq \gamma_{l-1} \right)\\ \\ & = & \displaystyle c_0 \prod_{l=1}^L c_l\ . \end{array} \end{equation} \normalsize \endgroup where \begingroup \everymath{\displaystyle} \begin{equation} \begin{array}{lll} \displaystyle c_l & = & \mathbb P_q\left(\widehat S_T(\bm X) \geq \gamma_l | \widehat S_T(\bm X) \geq \gamma_{l-1}\right)\\ \\ & = & \frac{\mathbb P_q\left(\widehat S_T(\bm X) \geq \gamma_l, \widehat S_T(\bm X) \geq \gamma_{l-1}\right)}{\ell(\gamma_{l-1})}\\ \\ & = & \int_{\bm X \in \mathcal X} \mathds 1_{\{\widehat S_T(\bm X) \geq \gamma_l\}} \frac{\mathds 1_{\{\widehat S_T(\bm X) \geq \gamma_{l-1}\}} q(\bm X, \mathcal C)}{\ell(\gamma_{l-1})}\ d\bm X\\ \\ & = & \int_{\bm X \in \mathcal X} \mathds 1_{\{\widehat S_T(\bm X) \geq \gamma_l\}} g_{l-1}^\star(X;\gamma_{l-1}, \mathcal C)\ d\bm X \\ \\ & = & \mathbb P_{g^\star_{l-1}}(\widehat S_T(\bm X) \geq \gamma_l)\ . \end{array} \end{equation} \endgroup and where the importance sampling density \cite{rubinstein09a} $g_{l-1}^\star(X;\gamma_{l-1}, \mathcal C) = \displaystyle\frac{\mathds 1_{\{\widehat S_T(\bm X) \geq \gamma_{l-1}\}} q(\bm X, \mathcal C)}{\ell(\gamma_{l-1})}$ is precisely the conditional density of $\bm X$, given that $\widehat{S}_T(\bm X)\geq \gamma_{l-1}$. Remark that the support of this density $g_{l-1}^\star(\bm X; \gamma_{l-1}, \mathcal C)$ is precisely the set $\{\bm X \in \mathcal X : \widehat S_T(\bm X) \geq \gamma_{l-1}\}$. If we know how to draw independent and identically distributed random variables $\bm X_i$ over $\mathcal X_{l-1} \subset \mathcal X$ from this importance sampling function, $\ell(\gamma)$ can be rewritten: \begin{equation} \begin{array}{lll} \ell(\gamma) & = & \mathbb P_q \left(\widehat S_T(\bm X) \geq \gamma_0\right) \displaystyle\prod_{l=1}^L \mathbb P_{g^\star_{l-1}} \left(\widehat S_T(\bm X) \geq \gamma_l \right)\\ & = & c_0 \displaystyle\prod_{l=1}^L c_l\ . \end{array} \end{equation} With a judicious choice or a fair estimation of the $\{\gamma_l\}$ sequence, the event $\{S_T(\bm X) \geq \gamma_l\}$ is no longer a rare event (generally, $c_l \in\left[10^{-3},10^{-2}\right]$) under the distribution $g^\star_{l-1}(\bm X, \gamma_{l-1}, \mathcal C)$, and therefore the quantities $c_l$ can now be well approximated through a CMC estimator. Hence, a CMC estimator of $\ell(\gamma)$ is: \begin{equation} \widehat \ell(\gamma) = \prod_{l=0}^L \widehat c_l, \end{equation} where $\widehat c_l = \frac{1}{C} \sum_{i=1}^{C} \mathds 1_{\{\widehat S_T(\bm X_i) \geq \gamma_l\}}$ and where $\bm X_i \sim g^\star_{l-1}(\bm X;\gamma_{l-1}, \mathcal C)$. \section{The splitting algorithm} We will now describe the general splitting algorithm applied to our optimization problem. Generate a population $\mathcal X_0 = \{\bm X_1, \ldots, \bm X_C\} \sim q(\bm X; \mathcal C)$ of $C$ feasible solutions (that respect all the constraints in $\mathcal C$) and initialize an iteration counter $l = 1$.
Evaluate the scores of the solutions $\mathcal S_{0}=\{\widehat S_{T}(\bm X_i)\}$ and sort $\mathcal S_0$ in decreasing order such that $\widehat S_{T}(\bm X_{j(1)}) \geq \widehat S_{T}(\bm X_{j(2)}) \geq\ldots\geq\widehat S_{T}(\bm X_{j(C)})$. We obtain $\widehat \gamma_0=\widehat S_{T}(\bm X_{j(C_0)})$ with $C_0=\lfloor\rho C\rfloor$. Set $\mathcal{\widetilde X}_{0} = \{\widetilde{\bm X_i}\}_{i = 1,\ldots,C_{0}} \subset \mathcal X_{0}$ such that $\widehat S_T(\widetilde{\bm X}_i) \geq \gamma_{0}$. Remark that $\widetilde{\bm X_i} \sim g_{0}^\star(\bm X;\gamma_{0},\mathcal C)$ for $i=1,\ldots,C_{0}$. At iteration $l$ of the algorithm, $\mathcal{\widetilde X}_{l-1}$ contains only the fraction $\rho$ of samples of the population $\mathcal{X}_{l-1}$ whose scores are above the current threshold $\widehat\gamma_{l-1}$. Since we want to work with a constant number $C$ of solutions, we have to complete this set. This is done in two steps. First, we use the bootstrap method or the ADAM cloning method to repopulate our set of solutions. The bootstrap method consists of sampling uniformly with replacement $C$ times from the population $\mathcal{\widetilde X}_{l-1}$. The ADAM cloning method, meanwhile, consists of making $\left\lfloor\frac{C}{C_{l-1}}\right\rfloor + B_i$ ($i = 1,\ldots,C_{l-1}$) copies of each sample. Here $B_1, \ldots, B_{C_{l-1}}$ are $Ber(1/2)$ random variables conditional on $\sum^{C_{l-1}}_{i=1} B_i = C \mod C_{l-1}$ and $[B_1, \ldots, B_{C_{l-1}}]$ is a binary vector with joint pdf $\mathbb P(B_1 = b_1, \ldots, B_{C_{l-1}} = b_{C_{l-1}}) = \frac{(C_{l-1} - r)! r!}{C_{l-1}!} \mathds 1_{\{b_1+\ldots+b_{C_{l-1}}=r\}},\ b_i \in \{0,1\}$, where $r = C \mod C_{l-1}$. Unfortunately, the samples of the completed set, denoted by $\mathcal{\widetilde X}_{l-1}^{boot/clon}$, are identically distributed but not independent. To address this problem, we apply a \emph{random} Gibbs sampler $\pi_{l-1}(\bm X | \widetilde{\bm X}_{l-1} ; \mathcal C) = \frac{1}{C_{l-1}} \sum_{i = 1}^{C_{l-1}} \kappa_{l-1} (\bm X | \widetilde{\bm X}_i ; \mathcal C)$ to each sample $\widetilde X_i$ of $\mathcal{\widetilde X}_{l-1}^{boot/clon}$ to obtain $\mathcal X_{l}=\{\bm X_i\}$ such that $\bm X_i \sim g_{l-1}^\star(\bm X;\widehat\gamma_{l-1}, \mathcal C)$ for $i=1,\ldots,C$. Here, $\kappa_{l-1}$ is the transition density of a Markov chain starting from $\mathcal{\widetilde X}_{l-1}^{boot/clon}$ and with stationary pdf $g^\star_{l-1}$. For our problem, we have the transition density $\kappa_{l-1}$ defined by: \begin{equation} \kappa_{l-1}\left(\bm X | \widetilde{\bm X}_i\right) = \sum_{j=1}^6 \lambda_j \prod_{r=1}^{b_l} m_j\left(\bm X_i^r | \widetilde{\bm X}_i^{-r}\right)\ , \end{equation} where $\bm X_i^r$ denotes the component $r$ of a solution and $\bm X_i^{-r}$, all the components of $\widetilde{\bm X}_i$ but the $r$th one. The $\lambda_j$ are the probabilities of updating one component at a time, with $\sum_j\lambda_j = 1$, and the $m_j$ are the conditional pdfs associated with the 6 moves (described below). We could also use a \emph{systematic} Gibbs sampler, whose transition density is defined by \begin{equation} \kappa_{l-1}\left(\bm X | \widetilde{\bm X}_i\right) = \sum_{j=1}^6 \lambda_j \prod_{r=1}^{P_{max}} m_j\left(\bm X_i^r | \widetilde{\bm X}^{-r}_i\right)\ . \end{equation} Notice that we have chosen a random Gibbs sampler with $b_l$ random updates of components of a solution $\widetilde X_i$, while with a systematic Gibbs sampler, we would have updated all the components of $\widetilde X_i$ in a fixed order.
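One refresh step of this random Gibbs sampler can be sketched as follows (Python; \texttt{moves}, \texttt{score} and \texttt{feasible} are placeholders for the six conditional updates $m_1,\ldots,m_6$, the estimator $\widehat S_T$ and the constraint check against $\mathcal C$; the schedule for the number of updates $b_l$ is given below):
\begin{verbatim}
import numpy as np

def random_gibbs_step(x, moves, lambdas, score, feasible, gamma, b_l,
                      rng, max_tries=20):
    # One random-scan refresh: perform b_l component updates, each with
    # a move j drawn with probability lambda_j; a proposal is kept only
    # if it is feasible and its score stays above the current threshold
    # (acceptance-rejection, at most max_tries attempts per update).
    for _ in range(b_l):
        move = moves[rng.choice(len(moves), p=lambdas)]
        for _ in range(max_tries):
            proposal = move(x, rng)        # updates one component of x
            if feasible(proposal) and score(proposal) >= gamma:
                x = proposal
                break
    return x
\end{verbatim}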
The number of random updates $b_l$ varies during the simulation as $b_l = b_0 + \alpha l$, where $\alpha \in \mathbb{R}_+^\star$. For the first iterations, $b_l < P$ and therefore this approach is faster than a systematic Gibbs sampler. On the contrary, when $l$ is close to $L$, $b_l \geq P$: we then do more updates than a systematic Gibbs sampler would do, but we maintain more diversity in our solutions. Since we do not know how to update a solution in such a way that it still satisfies the constraints $\mathcal C$, we first recursively propagate the modifications through the sequence of activations, starting from the sensor/activation we have modified. Then we check its feasibility, that is, whether it respects all the spatial and temporal constraints $\mathcal C$. We apply acceptance--rejection (a limited number of times) to each updated component until we find a feasible solution. Considering that the cost function $S_T$ also verifies the consistency of a solution, an updated solution $\bm X_i$ from $\mathcal{\widetilde X}_{l-1}^{boot/clon}$ is then accepted with probability $\mathds 1_{\{\widehat S_T(\bm X_i) \geq \widehat\gamma_{l-1}\}}$. As a result, all the $\bm X_i$ should be approximately iid. In the same way as before, evaluate the scores $\mathcal S_{l}=\{\widehat S_{T}(\bm X_i)\}$, $\bm X_i\in \mathcal X_{l}$, and sort this set in decreasing order such that $\widehat S_{T}(\bm X_{j(1)}) \geq \widehat S_{T}(\bm X_{j(2)}) \geq\ldots\geq\widehat S_{T}(\bm X_{j(C)})$. We obtain $\widehat \gamma_l=\widehat S_{T}(\bm X_{j(C_l)})$ with $C_l=\lfloor\rho C\rfloor$. Deduce that $\widetilde{\bm X}_l= \bm X_{j(1)} = \arg\max \{\widehat S_T({\bm X}_i)\}$, $i=1,\ldots,C$, and $\widetilde{\gamma}_l=\widehat S_T(\widetilde{\bm X}_l)$ are respectively the best solution at iteration $l$ and its score. In addition, denote by $\widehat \gamma_{0:l}=\max\{\widetilde{\gamma}_l,\widehat \gamma_{0:l-1}\}$ and $\widehat{\bm X}_{0:l}=\widetilde{\bm X}_l$ if $\widetilde{\gamma}_l>\widehat \gamma_{0:l-1}$ or $\widehat{\bm X}_{0:l}=\widehat{\bm X}_{0:l-1}$ otherwise, respectively the best detection probability encountered up to iteration $l$ and its associated solution. If one of the stopping criteria is satisfied, stop the algorithm and give $\widehat{\bm X}_{0:l}$ as an estimator of the optimal solution. In practice, the threshold $\widehat{\gamma}_l$ is the $\rho$ quantile of the sample ${\mathcal X}_l$. \subsection{The Gibbs sampler moves}\label{subsec:gs_moves} Before we go further, let us introduce and recall a few notations. $\bm s_i \triangleq [s_{i_x};s_{i_y}]$ denotes the $i$th sensor position (and more generally the $i$th sensor), $P$ is the number of sensors in the current solution, $P_{max}$ is the maximum number of sensors, $np_i$ stands for the number of activations of sensor $i$, while $t_{i,{\{1,...,np_i\}}}$ and $\bm \tau_i$ respectively are the instants of activation of sensor $i$ and the set of activation times associated with sensor $i$. Also denote by $t_{\bm s_i}$ the set-up duration of the sensor $\bm s_i$. Remark that a sensor whose instants of activation are all negative is considered as disabled. Consequently, deleting an instant of activation consists of assigning a negative value to this instant. Removing a sensor is then equivalent to deleting all of its instants of activation and ignoring it. We now give more details on the 6 moves of our Gibbs sampler and the associated conditional pdfs. \begin{enumerate} \item \textbf{Add a sensor}. The conditional pdf $m_1$ can be defined in two steps.
Sample a position $\bm s'_{P+1}$ from $\mathcal U(\Omega; \mathcal C)$ for the new sensor. Then draw its first instant of activation $t'_{P+1,1} \sim \mathcal U([t_{\bm s_{P+1}},T])$. $m_1$ is proportional to (up to a normalization constant): \small \begin{equation} \begin{array}{l} m_1\left(\bm X_i^{P+1} | \widetilde{\bm X}_i^{-(P+1)}\right) \propto\\ \mathcal U(\Omega; \mathcal C)\ \mathcal U([t_{\bm s_{P+1}},T])\ \mathds 1_{\{S_T(\bm X_i) \geq \gamma_{l-1}\}}\ . \end{array} \end{equation} \normalsize \item \textbf{Add one instant of activation}. First, choose a sensor randomly, \emph{i.e.} draw $j$ uniformly in $\{1, \ldots, P\}$. If $np_j < np_{max}$ then draw $t'_{j,np_j+1} \sim \mathcal U([t_{j,1},T])$. $m_2$ is proportional to: \small \begin{equation} \begin{array}{l} m_2\left(\bm X_i^{j} | \widetilde{\bm X}_i^{-j}\right) \propto\\ \mathcal U(\{1,\ldots, P\})\ \mathcal U([t_{j,1},T])\ \mathds 1_{\{S_T(\bm X_i) \geq \gamma_{l-1}\}}\ . \end{array} \end{equation} \normalsize \item \textbf{Remove a sensor}. To apply the move $m_3$, we take the following steps: choose a sensor randomly, \emph{i.e.} draw $j$ uniformly in $\{1, \ldots, P\}$. Then delete all of its instants of activation and mark it as disabled. So we have \small \begin{equation} m_3\left(\bm X_i^j | \widetilde{\bm X}_i^{-j}\right) \propto \mathcal U(\{1,\ldots, P\})\ \mathds 1_{\{S_T(\bm X_i) \geq \gamma_{l-1}\}}\ . \end{equation} \normalsize \item \textbf{Remove an instant of activation}. Choose a sensor randomly, \emph{i.e.} draw $j$ uniformly in $\{1, \ldots, P\}$. We assume that $np_j > 1$. Choose an instant of activation $t_{j,k}$, \emph{i.e.}, draw $k$ uniformly in $\{2,\ldots, np_j\}$. Delete $t_{j,k}$. The conditional pdf $m_4$ is defined as below: \small \begin{equation} \begin{array}{l} m_4\left(\bm X_i^j | \widetilde{\bm X}_i^{-j}\right) \propto\\ \mathcal U(\{1,\ldots, P\})\ \mathcal U(\{2,\ldots,np_j\})\ \mathds 1_{\{S_T(\bm X_i) \geq \gamma_{l-1}\}}\ . \end{array} \end{equation} \normalsize \item \textbf{Move a sensor}. Select a sensor $\bm s_j$ randomly, \emph{i.e.} draw $j$ uniformly in $\{1, \ldots, P\}$. Then, draw $\bm s'_j \sim \sum_{k=1}^2 w_k\ \mathcal N(\bm s_j, \bm \Sigma^2_k)$ with $\sum_{k=1}^2 w_k= 1$. Notice that the weights $w_k$ may evolve during the optimization in order to favour one of the moves over the other. For this mixture of two Gaussian pdfs, the covariance of the first Gaussian defines a small move while the covariance of the second Gaussian defines a larger move. Here is the formula of this conditional pdf: \small \begin{equation} \begin{array}{l} m_5\left(\bm X_i^j | \widetilde{\bm X}_i^{-j}\right) \propto\\ \mathcal U(\{1,\ldots, P\})\ \sum_{k=1}^2 w_k\ \mathcal N(\bm s_j, \bm \Sigma^2_k)\ \mathds 1_{\{S_T(\bm X_i) \geq \gamma_{l-1}\}}\ . \end{array} \end{equation} \normalsize \item \textbf{Swap two sensors}. Assuming there are at least 2 active sensors, select two sensors $\bm s_k$ and $\bm s_r$ with $k$ uniformly drawn in $\{1, \ldots, P\}$ and $r$ uniformly drawn in $\{1, \ldots, P\} \setminus \{k\}$. Delete $t_{k,j}$ for all $j = 2,\ldots,np_k$ and $t_{r,j}$ for all $j = 2,\ldots,np_r$, then swap their first instants of activation $t_{k,1}$ and $t_{r,1}$.
The conditional bivariate pdf $m_6$ is then defined by: \small \begin{equation} \begin{array}{l} m_6\left(\bm X_i^k, \bm X_i^r | \widetilde{\bm X}_i^{-k, -r}\right) \propto\\ \mathcal U(\{1,\ldots, P\})\ \mathcal U(\{1,\ldots, P\}\setminus \{k\})\ \mathds 1_{\{S_T(\bm X_i) \geq \gamma_{l-1}\}}\ , \end{array} \end{equation} \normalsize where $(\bm X_i^k, \bm X_i^r) \in \{(\widetilde{\bm X}_i^k, \widetilde{\bm X}_i^r), (\widetilde{\bm X}_i^r, \widetilde{\bm X}_i^k)\}$. \end{enumerate} \subsection{The GSRES algorithm} \begin{alg}[GSRES] Given the parameter $\rho$, the sample number $C$ and the number of burn-in iterations $b_l$ of the Gibbs sampler, follow the forthcoming steps: \label{algo:cloning} \textbf{1. Initialization}. Set a counter $l = 1$. Generate $C$ feasible solutions $\{\bm X_i\},i=1,\ldots,C$ and denote by $\mathcal X_{0}$ the set containing them. Note that $\bm X_i \sim q(\bm X; \mathcal C)$. Evaluate the scores $\mathcal S_{0}=\{\widehat S_{T}(\bm X_i)\}$ and sort $\mathcal S_{0}$ in decreasing order such that $\widehat S_{T}(\bm X_{j(1)}) \geq \widehat S_{T}(\bm X_{j(2)}) \geq\ldots\geq\widehat S_{T}(\bm X_{j(C)})$. We obtain $\widehat \gamma_0=\widehat S_{T}(\bm X_{j(C_0)})$ with $C_0=\lfloor\rho C\rfloor$. Define $\widetilde{\bm X}_0 = \widehat{\bm X}_{0:0} = \bm X_{j(1)}$, $\widetilde \gamma_0 = \widehat \gamma_{0:0}=\widehat S_{T}(\bm X_{j(1)})$. \textbf{2. Selection}. Let $\widetilde{\mathcal X}_{l-1} = \{\widetilde{\bm X}_1,...,\widetilde{\bm X}_{C_{l-1}}\}$ be the subset of the population $\{\bm X_1,...,\bm X_C\}$ for which $\widehat S_T(\bm{\widetilde X}_i) \geq \widehat\gamma_{l-1}$. $\widetilde{\mathcal X}_{l-1}$ contains a fraction $\rho$ of the population. Notice that $\widetilde{\bm X}_i \sim g_{l-1}^\star(\bm X;\widehat\gamma_{l-1}, \mathcal C)$ for $i=1,\ldots,C_{l-1}$. \textbf{3. Repopulation}. Apply one of these methods: \begin{itemize} \item Bootstrapping: sample uniformly with replacement $C$ times from the population $\widetilde{\mathcal X}_{l-1}$ to define the temporary set of $C$ solutions $\mathcal X_{l-1}^{boot}$. \item ADAM Cloning: make $\left\lfloor\frac{C}{C_{l-1}}\right\rfloor + B_i$ ($i = 1,\ldots,C_{l-1}$) copies of each sample of $\widetilde{\mathcal X}_{l-1}$. Here $B_1 ,\ldots, B_{C_{l-1}}$ are $Ber(1/2)$ random variables conditional on $\sum^{C_{l-1}}_{i=1} B_i = C \mod C_{l-1}$. We then define the temporary set of $C$ solutions $\mathcal X_{l-1}^{clon}$. \end{itemize} \textbf{4. Gibbs sampler}. Apply a random Gibbs sampler $\pi_{l-1}(\bm X | \widetilde{\bm X}_{l-1} ; \mathcal C) = \frac{1}{C_{l-1}} \sum_{i = 1}^{C_{l-1}} \kappa_{l-1} (\bm X | \widetilde{\bm X}_i ; \mathcal C)$ with $b_l$ burn-in iterations and the transition density $\kappa_{l-1}$ to each sample of $\mathcal X_{l-1}^{boot/clon}$ (see section \ref{subsec:gs_moves}) to obtain $\mathcal X_{l}=\{\bm X_i\}$ such that $\bm X_i \sim g_{l-1}^\star(\bm X;\widehat\gamma_{l-1}, \mathcal C)$ for $i=1,\ldots,C$. Notice that the $\bm X_i, i=1,\ldots,C$ should be approximately independent and identically distributed. \textbf{5. Estimation}. Evaluate the scores $\mathcal S_{l}=\{\widehat S_{T}(\bm X_i)\}$, $\bm X_i\in \mathcal X_{l}$. Sort $\mathcal S_{l}$ in decreasing order such that $\widehat S_{T}(\bm X_{j(1)}) \geq \widehat S_{T}(\bm X_{j(2)}) \geq\ldots\geq\widehat S_{T}(\bm X_{j(C)})$. We obtain $\widehat \gamma_l=\widehat S_{T}(\bm X_{j(C_l)})$ with $C_l=\lfloor\rho C\rfloor$.
Deduce that $\widetilde{\bm X}_l = \bm X_{j(1)}$, $\widetilde \gamma_l=\widehat S_{T}(\bm X_{j(1)})$, $\widehat{\bm X}_{0:l} = \widetilde{\bm X}_l$ if $\widetilde \gamma_l>\widehat\gamma_{0:l-1}$, else $\widehat{\bm X}_{0:l} = \widehat{\bm X}_{0:l-1}$, and $\widehat \gamma_{0:l}=\max\{\widetilde \gamma_l,\widehat \gamma_{0:l-1}\}$. \textbf{6. Stopping condition}. If one of the stopping conditions is reached, stop the algorithm and give $\widehat{\bm X}_{0:l}$ as an estimator of the optimal solution. Else set $l = l+1$ and go back to step 2. \end{alg} For our problem, we implement a customized version of the splitting method. To begin the computation, we generate an initial pool of feasible solutions with $q(.;\mathcal C)$. Since a solution and the carrier trajectory are closely linked, we use our trajectory generator to obtain a pool of initial solutions that respect the whole constraint set $\mathcal C$. Lastly, to ensure our algorithm will not converge to and remain stuck in a local extremum, we developed a simple heuristic. If the current maximum score and the current threshold do not increase for a chosen number of iterations, we automatically reduce the value of the threshold. Through the decrease of the threshold, we start again to accept the feasible solutions generated by the moves and therefore reintroduce some diversity in the pool of solutions. \section{Illustrative example} The first result we present here concerns a scenario in which a target is running away from the position at which it has just been detected. Its initial position is drawn from a Gaussian law centered on $\Omega/2$ and with a variance $\sigma_{target}^2$. Moreover, the target is supposed to be smart and reactive and therefore, while it is running away, it tries to avoid being detected another time. Considering that the search starts with a delay of $t_c^{aoz}$, which represents the time of arrival of the hunter, we aim to maximize the chances of detecting the target during the time $T$. We use $P_{max}=10$ sensors that are able to ping only once. For this simulation, we use $C=800$ solutions, $N=70000$ trajectories, $b_0 = 2$, $b_l = b_0 + 0.2\ l$, and decide to keep $10\%$ of elites ($\rho=0.1$). We also let the algorithm perform up to 50 iterations. Because our algorithm is not yet able to adjust the number of sensors considering the cost of their deployment, we have chosen to work with a constant number of sensors. However, we have allowed the removal of a sensor if it is directly followed by the addition of a new sensor. We have used two of the six moves defined above: moving a sensor, and a combination of removing a sensor followed by the addition of a new one. Each of the two moves occurs with probability 0.5. In the best solution we obtain, the sensor positions and activations describe a spiral. This result, illustrated in figure \ref{fig:simul_sensors_position_spiral}, is related to the studies of Washburn \cite{Washburn02} and Son \cite{Son07} for a purely spatial optimization case, \emph{i.e.}, when the target is not able to avoid the sensor (``myopic'' case). In this context, the best spatial sensor deployment describes an Archimedean spiral.
\begin{figure} \begin{center} \includegraphics[scale=0.5]{simul_sensors_position_spiral01} \caption{Graphic of $\bm X^\dag$: position and activation order of the 10 sensors.} \label{fig:simul_sensors_position_spiral} \end{center} \end{figure} In the sequel, we plot the evolution of the optimization versus iterations: $\widehat \gamma_{0:l} = \widehat{S}_{T}(\widehat{\textbf X}_{0:l})$ and $\mathbb E[\widehat S_{T}(\widehat{\textbf X}_{l})]$, which represents the mean score of the current population (figure \ref{fig:simul_maxval}). Both increase smoothly, in a roughly logarithmic fashion. In figure \ref{fig:simul_soldistrib} we see that the support of the scores' pdf, quite large at the beginning ($l = 0$), becomes thinner and thinner and converges toward a Dirac pdf as the optimization proceeds. Moreover, $\widehat{\gamma}_{0:l}$ increases and the standard deviation of the distribution decreases. Once the optimization is over, we observe that the detection probability reaches $0.9406$, whereas when $l=0$ the best score is below 0.25 and the mean score $\mathbb E\left[\widehat S_{T}(\bm X)\right]$ is equal to $0.0187$ with a large standard deviation. This gap illustrates the efficiency of our approach. \begin{figure} \begin{center} \includegraphics[scale=0.2]{simul_maxval_spiral01} \caption{In blue $\widehat{S}_{T}(\widehat{\textbf X}_{0:l})$ and in red $\mathbb E[\widehat S_{T}(\widehat{\textbf X}_{l})]$ with $C=800$, $N=70000$, $L=50$.} \label{fig:simul_maxval} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{simul_soldistrib_spiral01} \caption{Scores densities support versus GSRES iterations. In blue $l = 0$, in green $l = 5$, in red $l = 10$ and in black $l = 50$.} \label{fig:simul_soldistrib} \end{center} \end{figure} Figure \ref{fig:simul_sensors_position} visualizes the results of two other simulations for which the planning is not as optimal as in the previous example. Nevertheless, the detection probabilities associated with these deployments are above $0.93$. \begin{figure} \begin{center} \includegraphics[scale=0.43]{simul_sensors_position_spiral03} \includegraphics[scale=0.43]{simul_sensors_position_spiral02} \caption{Graphic of $\bm X^\dag$: position and activation order of the 10 sensors for two other sub-optimal solutions.} \label{fig:simul_sensors_position} \end{center} \end{figure} To compare our best solution (see figure \ref{fig:simul_sensors_position_spiral}) with Son's solution, we have computed the best track spacing of the Archimedean spiral (denoted by $TS$) for our scenario (with the CMC method for detection probability evaluation), considering the target has a myopic behaviour. According to Son's work, the best track spacing is computed thanks to the following equation: \begin{equation} TS = \max\left\{2R, \min\{\alpha TS_{ray}, TS^\star+\beta\}\right\}. \end{equation} $TS_{ray}$ is considered a good track spacing for a random tour target\footnote{Assuming that the course change frequency is large enough versus $\frac{1}{T}$ and that the carrier speed is much larger than the target speed.} that is approximately normally distributed, $TS^\star$ is the largest track spacing (the so-called ``furthest-on-disk'' solution) and $R$ is the sensor detection radius. In \cite{Son07}, $\alpha$ and $\beta$ have been obtained by using a quasi Monte Carlo framework, more precisely a nearly orthogonal Latin hypercube (NOLH) method.
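In code, the rule reads as in the sketch below; the numerical inputs are purely illustrative placeholders, the actual values of $\alpha$ and $\beta$ being the fitted constants of Son's NOLH study.
\begin{verbatim}
def track_spacing(R, ts_ray, ts_star, alpha, beta):
    # TS = max( 2R, min(alpha * TS_ray, TS_star + beta) )
    return max(2.0 * R, min(alpha * ts_ray, ts_star + beta))

# illustrative values only -- not the fitted constants of Son's study
print(track_spacing(R=1.0, ts_ray=3.0, ts_star=2.5, alpha=0.9, beta=0.4))
\end{verbatim}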
After that, we have fitted an Archimedean spiral and a logarithmic spiral to our solution (see \cite{mishra10a}). Figure \ref{fig:spiralfit} shows the comparison between the myopic Archimedean spiral, our solution, the fitted Archimedean spiral and the fitted logarithmic spiral. We remark that the sensor positions and activations describe a logarithmic spiral rather than an Archimedean one. This may be explained by the differences between the target dynamic models, and by the fact that we do not use a single, always-active sensor but 10 sensors instead. \begin{figure} \begin{center} \includegraphics[scale=0.3]{spiralfit} \caption{Datum optimal solution and fitted Archimedean spiral} \label{fig:spiralfit} \end{center} \end{figure} Figure \ref{fig:simul_sensors_hist} shows that the first sensor deployed detects most of the targets ($N = 70000$). \begin{figure} \begin{center} \includegraphics[scale=0.2]{simul_sensors_hist_spiral01} \caption{Sensors detection rate for the best solution $\bm X^\dag$.} \label{fig:simul_sensors_hist} \end{center} \end{figure} The following pictures describe how our algorithm works. Figure \ref{fig:sensors_spatial_density_ite00} shows the spatial distribution of the $C=800$ solutions at initialization. We remark that for the first four sensors, the spatial distribution densities follow a Rayleigh distribution. As the carrier trajectory bounces in the search space for the next sensors, the spatial distribution densities converge toward a uniform distribution. \begin{figure} \begin{center} \includegraphics[scale=0.2]{sensors_spatial_density_ite00} \caption{Spatial sensors' pdf at initialization} \label{fig:sensors_spatial_density_ite00} \end{center} \end{figure} In figure \ref{fig:sensors_spatial_density_ite05} we notice that the first sensor's position is almost settled. For the other sensors, some saddle points emerge. \begin{figure} \begin{center} \includegraphics[scale=0.2]{sensors_spatial_density_ite05} \caption{Spatial sensors' pdf at iteration 5} \label{fig:sensors_spatial_density_ite05} \end{center} \end{figure} At the final iteration (see figure \ref{fig:sensors_spatial_density_ite50}), the sensors are positioned and all the spatial distributions are unimodal. \begin{figure} \begin{center} \includegraphics[scale=0.2]{sensors_spatial_density_ite50} \caption{Spatial sensors' pdf at final iteration} \label{fig:sensors_spatial_density_ite50} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{sensors_temporal_density_ite00} \caption{Temporal sensors' pdf at initialization} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{sensors_temporal_density_ite05} \caption{Temporal sensors' pdf at iteration 5} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{sensors_temporal_density_ite50} \caption{Temporal sensors' pdf at final iteration} \end{center} \end{figure} During the Gibbs sampler step of the optimization, at iteration $l$ we reject a new solution if it is not feasible or if its score is below $\widehat\gamma_{l-1}$. Otherwise, we consider that the move is accepted. The two following graphics (figures \ref{fig:acceptance_rate_delete} and \ref{fig:acceptance_rate_move}) show the evolution of the acceptance rates of the two moves during the optimization. The acceptance rates decrease as the optimization proceeds. The two peaks correspond to the threshold decreases.
\begin{figure} \begin{center} \includegraphics[scale=0.2]{acceptance_rate_delete} \caption{Acceptance rate for the move ``delete a sensor''} \label{fig:acceptance_rate_delete} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{acceptance_rate_move} \caption{Acceptance rate for the move ``move a sensor''} \label{fig:acceptance_rate_move} \end{center} \end{figure} The next ones (figures \ref{fig:rejection_rate_delete} and \ref{fig:rejection_rate_move}) focus on the moves' rejections. More precisely, the blue curves describe the rate of non-feasible solutions generated by the move, and the green curves describe the rate of solutions whose score is below the threshold $\widehat\gamma_{l-1}$, conditional on the fact that they are feasible. Here again, the sharp drops in the rejections shown by the green curves correspond to the threshold decreases. \begin{figure} \begin{center} \includegraphics[scale=0.2]{rejection_rate_delete} \caption{Rejection rates for the move ``delete a sensor''} \label{fig:rejection_rate_delete} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.2]{rejection_rate_move} \caption{Rejection rates for the move ``move a sensor''} \label{fig:rejection_rate_move} \end{center} \end{figure} \section{Conclusion and prospects} In this paper, we presented a method to compute the best deployment of sensors in order to optimize the detection probability of a smart and reactive target. Unlike existing works that use discrete constraints or only focus on the optimization of one aspect at a time, we proposed to optimize both space and time with continuous constraints. This was made possible through the use of a novel stochastic optimization algorithm based on the generalized splitting method. We tested our algorithm on the datum search problem and the results we obtained were very satisfying. In addition, these results were confirmed by existing works on much simpler cases. This demonstrates the efficiency of our dedicated Gibbs sampler and of the splitting framework for this type of problem. To reduce the rejection rate and consequently reduce the computing time, we will try to improve the efficiency of our moves. At the same time, we should focus on cooperation between sensors (\emph{i.e.} the \emph{multistatic} case) and use a more complex model for sensors, signal propagation and detections (\emph{i.e.} not cookie-cutter sensors). We also aim to take the cost of each solution into account and perform a true multiobjective optimization. To achieve this, we are currently investigating two different ways. The first one would be based on a Pareto-ranking algorithm \cite{DBLP:journals/eor/BekkerA11} and the second would use the Choquet integral for the aggregation of the user preferences \cite{Grabisch05,Grabisch06}. \section*{Acknowledgement} This work was partially supported by DGA (Direction g\'en\'erale de l'armement). All four authors gratefully acknowledge Emile Vasta (DGA/Techniques navales) for his friendly support and interest in this work. \bibliographystyle{plain}
\section*{Acknowledgements} We would like to thank B. Acharya, T. Banks, K. Bobkov, R. Bousso, O. DeWolfe, M. Dine, G. Kane, D. Marolf, L. McAllister, J. Polchinski and S. Watson for many useful discussions. KF thanks N. Arkani-Hamed for pointing out that chain inflation with four forms is generic in the string landscape. This work was supported in part by the US Department of Energy under grant DE-FG02-95ER40899, the National Science Foundation under grant PHY99-07949, and the Michigan Center for Theoretical Physics under grant MCTP-06-27. KF would like to thank the Miller Institute at the University of California, Berkeley, for support and hospitality, and JTL wishes to acknowledge the hospitality of the KITP.
\section{Introduction}\label{Intro} The spin-0 particle of mass around 125 GeV discovered by the ATLAS~\cite{Aad:2012tfa} and CMS~\cite{Chatrchyan:2012xdj} collaborations at the Large Hadron Collider (LHC) apparently completes the particle spectrum of the Standard Model (SM). Moreover, the couplings of this particle to the other SM particles are progressively getting closer to the corresponding SM values. However, issues ranging from the presence of dark matter in the universe to the naturalness problem of the electroweak scale keep alive the hope of finding physics beyond the SM (BSM). While the search for such new physics remains on, a rather pertinent question to ask is whether the SM by itself can ensure vacuum stability at scales above that of electroweak symmetry breaking (EWSB). This is because the Higgs quartic coupling, evolving via SM interactions alone, tends to turn negative between the electroweak (EW) and Planck scales, thereby making the scalar potential unbounded from below. This is particularly true if the top quark mass is on the upper edge of its allowed band~\cite{Degrassi:2012ry,Buttazzo:2013uya}. While a \emph{metastable} EW minimum remains a possibility, stabilising the EW vacuum calls for the introduction of additional bosonic fields, preferably by extending the SM Higgs sector. A number of new-physics scenarios have been suggested for restoring vacuum stability; a representative list is~\cite{PhysRevD.86.043511,Gonderinger2010,Chen:2012faa,PhysRevD.78.085005,Chakrabortty2016361, He:2013tla,PhysRevD.92.055002}. One important demonstration in this context is that stability till the Planck scale is restored, irrespective of the top-mass uncertainty, just by switching over to two Higgs doublet models (2HDM). 2HDMs open up a world of enriched collider phenomenology, CP-violation from the scalar sector and also dark matter candidates in special cases. However, a challenge faced while alleviating the vacuum instability problem using 2HDMs (or any extended Higgs sector for that matter) is that the quartic couplings so introduced tend to become non-perturbative while evolving under the renormalisation group (RG). A balance between these two extremes is struck through judicious boundary conditions, which in turn leads to strong constraints on the masses and mixing angles. Elaborate accounts of this can be found in recent works. Two important points emerge from such studies. First, the spectrum of the non-standard scalars allows for only a small splitting. Secondly, the couplings of the 125 GeV Higgs with the gauge bosons should have a rather small deviation from the SM values. On the other hand, the gauge interactions of the non-standard scalars become suppressed. In this work, we aim to investigate the observability of a 2HDM at the present\cite{PhysRevD.90.015008,Keus2016,Kanemura2014524,Basso:2012st,PhysRevD.94.095005, Patrick:2016rtw,Li:2016umm,Arhrib:2016rlj,Akeroyd:2016ymd} and upcoming colliders\cite{LopezVal:2009qy,PhysRevD.88.115003} within the parameter region that allows for high-scale validity (including both vacuum stability and perturbativity). This could turn out to be challenging, since the search prospects could be severely inhibited by the constraints. For instance, to discern a 2HDM from the SM background through resonances, fully reconstructible final states need to be looked at.
The corresponding event rates tend to be small, owing to the constraints on the interaction strengths that come from the dual requirement of high-scale vacuum stability and perturbative unitarity. Moreover, removal of the backgrounds requires event selection criteria which further lower the signal strength. To be more specific, the CP-even heavier neutral Higgs could lead to a four-lepton cascade at the LHC via the $ZZ$ state. In parallel, the CP-odd scalar leaves its signature in the completely reconstructible channel $hZ$, where $h$ denotes the SM-like Higgs. The two final states mentioned above are indicative of the opposite CP-properties of the decaying Higgses, which, by our requirement, are destined to have closely spaced masses. We adopt a cut-based analysis to calculate the statistical significance of the respective signals. We perform this analysis for both the Type-I and the Type-II 2HDM. The allowed parameter space for the latter scenario was obtained via extensive investigation in reference~\cite{Chakrabarty:2014aya}. For the former, though an analysis is found in~\cite{Das:2015mwa}, for the sake of completeness we present a set of results here that go beyond what has been reported. It is found that the constraints from flavour changing neutral current (FCNC) phenomena put a strong lower limit on the Type-II 2HDM charged scalar mass (and, via the correlation demanded by high-scale validity, on the heavy neutral scalar and pseudoscalar masses as well). Thus, while obtaining LHC signals, the region of the parameter space in the Type-II case is relatively more restricted. Keeping this in mind, we also present a brief discussion on the prospects at other types of colliders. In particular, we find that muon colliders can be useful in this respect. This study comprises the following parts. In section~\ref{model}, we briefly review the 2HDM and survey its candidature as a UV-complete scenario. Section~\ref{param} highlights the intrinsic features of the parameter space that permits high-scale stability. The search prospects at the LHC and at future leptonic colliders are elaborated in sections~\ref{lhc} and \ref{other} respectively. We summarise our findings and conclude in section~\ref{con}. \section{2HDM and high scale validity.}\label{model} Type-I and II 2HDMs, as well as the constraints on them, have already been discussed in the literature \cite{Branco:2011iw}. We present a brief summary here for completeness. We consider the most general renormalizable scalar potential for two doublets $\Phi_1$ and $\Phi_2$, each having hypercharge $(+1)$: \begin{eqnarray} V(\Phi_1,\Phi_2) &=& m^2_{11}\, \Phi_1^\dagger \Phi_1 + m^2_{22}\, \Phi_2^\dagger \Phi_2 - m^2_{12}\, \left(\Phi_1^\dagger \Phi_2 + \Phi_2^\dagger \Phi_1\right) + \frac{\lambda_1}{2} \left( \Phi_1^\dagger \Phi_1 \right)^2 + \frac{\lambda_2}{2} \left( \Phi_2^\dagger \Phi_2 \right)^2 \nonumber\\ & & + \lambda_3\, \Phi_1^\dagger \Phi_1\, \Phi_2^\dagger \Phi_2 + \lambda_4\, \Phi_1^\dagger \Phi_2\, \Phi_2^\dagger \Phi_1 + \frac{\lambda_5}{2} \left[ \left( \Phi_1^\dagger\Phi_2 \right)^2 + \left( \Phi_2^\dagger\Phi_1 \right)^2 \right] \nonumber\\ & & +\lambda_6\, \Phi_1^\dagger \Phi_1\, \left(\Phi_1^\dagger\Phi_2 + \Phi_2^\dagger\Phi_1\right) + \lambda_7\, \Phi_2^\dagger \Phi_2\, \left(\Phi_1^\dagger\Phi_2 + \Phi_2^\dagger\Phi_1\right).
\label{treepot} \end{equation} We parametrise the doublets as \begin{equation} \Phi_{i} = \frac{1}{\sqrt{2}} \begin{pmatrix} \sqrt{2} w_i^{+} \\ v_i + h_i + i z_i \end{pmatrix}~ \rm{for}~\textit{i} = 1, 2. \label{e:doublet} \end{equation} One defines tan$\beta = \frac{v_2}{v_1}$. In such a case, the scalar spectrum consists of a pair of neutral CP even scalars ($h,H$), a CP odd neutral scalar ($A$) and a charged scalar ($H^+$). The mass matrices are brought into diagonal form by the action of unitary matrices parametrized by the mixing angles $\alpha$ and $\beta$. This scenario in general allows for CP-violation in the scalar sector~\cite{Grzadkowski:2013rza,Shu:2013uua,PhysRevD.72.095002}, through the phases in $m^2_{12}$ and $\lambda_5$. However, this would lead to a contamination of our proposed search channels due to interference effects coming from $H-A$ mixing. Thus we restrict ourselves to a CP conserving scenario only. A particular fermion generation can couple to both $\Phi_1$ and $\Phi_2$ in a 2HDM without violating the gauge symmetry. However, this leads to flavour changing neutral currents (FCNC) mediated by the Higgses, which are tightly constrained by experimental data. A way to annul the FCNCs is to adhere to specific schemes of Yukawa interactions\cite{PhysRevD.15.1966,PhysRevD.15.1958} that are consequences of discrete symmetries. An example is the $\mathbb{Z}_2$ symmetry under which $\Phi_1 \rightarrow -\Phi_1$ and $\Phi_2 \rightarrow \Phi_2$. This demands $m_{12} = \lambda_6 = \lambda_7 = 0$. Assigning appropriate $\mathbb{Z}_2$ charges to the fermions gives rise to the celebrated Type-I and Type-II models~\cite{Branco:2011iw}. While the primary motivation of the above is to suppress FCNCs~\cite{Kim2015,Crivellin:2013wna,Baum:2008qm}, it also reduces the number of free parameters in the Yukawa sector.\footnote{It was reported in~\cite{PhysRevD.58.116003} that this FCNC-free structure is stable under the renormalisation group.} This also simplifies the expressions for the one-loop beta functions. Note that one could introduce $\mathbb{Z}_2$ violation in the scalar potential only. This would ultimately lead to FCNCs, which would however be radiatively suppressed. In this study, we consider both the case of an exactly $\mathbb{Z}_2$-symmetric 2HDM and one that violates the symmetry in the scalar potential. We choose $\{\text{tan}\beta,m_h,m_H,m_A,m_{H^+},m_{12},c_{\beta -\alpha},\lambda_6,\lambda_7\}$ as the set of independent input parameters. The remaining quartic couplings are expressed in terms of these for convenience.
With $v = 246 ~\rm GeV$ and writing $c_{\alpha}$ = cos$\alpha$, $s_{\alpha}$ = sin$\alpha$ and $t_\beta$ = tan$\beta$, the remaining couplings can be expressed as \begin{subequations} \begin{eqnarray} \label{e:l1} \lambda_1 &=& \frac{1}{v^2 c^2_\beta}~\Big(c^2_\alpha m^2_H + s^2_\alpha m^2_h - m^2_{12}\frac{s_\beta}{c_\beta} - \frac{3}{2} \lambda_6 v^2 s_{\beta} c_{\beta} - \frac{1}{2} \lambda_7 v^2 \frac{s^3_{\beta}}{c_{\beta}} \Big),\\ \label{e:l2} \lambda_2 &=& \frac{1}{v^2 s^2_\beta}~\Big(s^2_\alpha m^2_H + c^2_\alpha m^2_h - m^2_{12}\frac{c_\beta}{s_\beta} - \frac{3}{2} \lambda_7 v^2 s_{\beta} c_{\beta} - \frac{1}{2} \lambda_6 v^2 \frac{c^3_{\beta}}{s_{\beta}}\Big),\\ \label{e:l4} \lambda_4 &=& \frac{1}{v^2}~(m^2_A - 2 m^2_{H^+}) + \frac{m^2_{12}}{v^2 s_\beta c_\beta} - \frac{1}{2 t_{\beta}} \lambda_6 - \frac{1}{2} t_{\beta} \lambda_7,\\ \label{e:l5} \lambda_5 &=& \frac{m^2_{12}}{v^2 s_\beta c_\beta} - \frac{m^2_A}{v^2} - \frac{1}{2 t_{\beta}} \lambda_6 - \frac{1}{2} t_{\beta} \lambda_7, \\ \label{e:l3} \lambda_3 &=& \frac{1}{v^2 s_\beta c_\beta}((m^2_H - m^2_h)s_\alpha c_\alpha + m^2_A s_\beta c_\beta - \lambda_6 v^2 c^2_\beta - \lambda_7 v^2 s^2_\beta) - \lambda_4. \end{eqnarray} \label{e:Couplings} \end{subequations} The mass parameters $m^2_{11}$ and $m^2_{22}$ in the scalar potential are traded off using the EWSB conditions. A given set of input parameters serves as boundary conditions for the $\lambda_i$ in the analysis using RG equations. While carrying out the analysis, several constraints coming from both theory and experiments must be satisfied. \subsubsection{Perturbativity, unitarity and vacuum stability} For the 2HDM to remain a perturbative theory at a given energy scale, one requires $\lvert \lambda_{i} \rvert \leq 4\pi~(i=1,\ldots,5)$ and $ \lvert y_{i} \rvert \leq \sqrt{4\pi}~(i=t,b,\tau)$ at that scale. This translates into upper bounds on the model parameters at low as well as high energy scales. The matrix containing 2$\rightarrow$2 scattering amplitudes of longitudinal gauge bosons can be mapped to a corresponding matrix for the scattering of the Goldstone bosons\cite{Akeroyd:2000wc,Horejsi:2005da,Kanemura:2015ska,Ginzburg:2005dt}, by virtue of the EW equivalence theorem. The theory is deemed unitary if each eigenvalue of the aforementioned amplitude matrix does not exceed 8$\pi$ in magnitude. The expressions for the eigenvalues are given below. \begin{subequations} \begin{eqnarray} a_{\pm}&=& \frac32(\lambda_1+\lambda_2)\pm \sqrt{\frac94 (\lambda_1-\lambda_2)^2+(2\lambda_3+\lambda_4)^2},\\ b_{\pm}&=& \frac12(\lambda_1+\lambda_2)\pm \sqrt{\frac14 (\lambda_1-\lambda_2)^2+\lambda_4^2},\\ c_{\pm}&=& d_{\pm} = \frac12(\lambda_1+\lambda_2)\pm \sqrt{\frac14 (\lambda_1-\lambda_2)^2+\lambda_5^2},\\ e_1&=&(\lambda_3 +2\lambda_4 -3\lambda_5),\\ e_2&=&(\lambda_3 -\lambda_5),\\ f_1&=& f_2 = (\lambda_3 +\lambda_4),\\ f_{+}&=& (\lambda_3 +2\lambda_4 +3\lambda_5),\\ f_{-}&=& (\lambda_3 +\lambda_5). \end{eqnarray} \label{e:LQTeval} \end{subequations} When the quartic part of the scalar potential preserves the CP and $\mathbb{Z}_2$ symmetries, the aforementioned eigenvalues are discussed in \cite{Kanemura:1993hm,Akeroyd:2000wc,Horejsi:2005da}.
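As an illustration of how a parameter-space point is vetted, the following minimal sketch maps an input set to the quartic couplings of Eq.~(\ref{e:Couplings}) and tests perturbativity and unitarity. It assumes the $\mathbb{Z}_2$-symmetric quartic sector ($\lambda_6 = \lambda_7 = 0$), for which the eigenvalue expressions above apply, and resolves the mixing angle through one branch, $\alpha = \beta - \arccos c_{\beta-\alpha}$; the sample input values are illustrative (in particular, $m_{H^+}$ is set degenerate with $m_H$ by hand).
\begin{verbatim}
from math import sin, cos, atan, acos, sqrt, pi

def quartics(mh, mH, mA, mHp, m12, tanb, cba, v=246.0):
    # coupling relations above, with lambda_6 = lambda_7 = 0
    b = atan(tanb); a = b - acos(cba)
    sb, cb, sa, ca = sin(b), cos(b), sin(a), cos(a)
    l1 = (ca**2*mH**2 + sa**2*mh**2 - m12**2*sb/cb) / (v**2 * cb**2)
    l2 = (sa**2*mH**2 + ca**2*mh**2 - m12**2*cb/sb) / (v**2 * sb**2)
    l4 = (mA**2 - 2*mHp**2)/v**2 + m12**2/(v**2*sb*cb)
    l5 = m12**2/(v**2*sb*cb) - mA**2/v**2
    l3 = ((mH**2 - mh**2)*sa*ca + mA**2*sb*cb)/(v**2*sb*cb) - l4
    return l1, l2, l3, l4, l5

def tree_level_ok(l1, l2, l3, l4, l5):
    # perturbativity: |lambda_i| <= 4 pi
    if any(abs(l) > 4*pi for l in (l1, l2, l3, l4, l5)):
        return False
    # unitarity: all eigenvalues of the scattering matrix within 8 pi
    s1 = sqrt(2.25*(l1 - l2)**2 + (2*l3 + l4)**2)
    s2 = sqrt(0.25*(l1 - l2)**2 + l4**2)
    s3 = sqrt(0.25*(l1 - l2)**2 + l5**2)
    eig = [1.5*(l1 + l2) + s1, 1.5*(l1 + l2) - s1,
           0.5*(l1 + l2) + s2, 0.5*(l1 + l2) - s2,
           0.5*(l1 + l2) + s3, 0.5*(l1 + l2) - s3,
           l3 + 2*l4 - 3*l5, l3 - l5,
           l3 + l4, l3 + 2*l4 + 3*l5, l3 + l5]
    return all(abs(e) < 8*pi for e in eig)

# illustrative point: mh, mH, mA, mHp, m12, tan(beta), cos(beta-alpha)
print(tree_level_ok(*quartics(125., 500., 501., 500., 280., 2.5, -0.05)))
\end{verbatim}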
Demanding high-scale positivity of the 2HDM potential along various directions in the field space leads to the following conditions on the scalar potential \cite{Branco:2011iw,Ferreira:2004yd,PhysRevD.18.2574,Nie199989}: \begin{subequations} \begin{eqnarray} \label{e:vsc1} \rm{vsc1}&:&~~~\lambda_{1} > 0, \\ \label{e:vsc2} \rm{vsc2}&:&~~~\lambda_{2} > 0, \\ \label{e:vsc3} \rm{vsc3}&:&~~~\lambda_{3} + \sqrt{\lambda_{1} \lambda_{2}} > 0, \\ \label{e:vsc4} \rm{vsc4}&:&~~~\lambda_{3} + \lambda_{4} - |\lambda_{5}| + \sqrt{\lambda_{1} \lambda_{2}} > 0. \end{eqnarray} \label{eq:vsc} \end{subequations} Meeting the above positivity criteria at each scale of evolution effectively rules out deeper vacua at high energy scales. In addition to the above, the splitting amongst the scalar masses is restricted by invoking the $T$-parameter constraint. We have used $\Delta T = 0.05 \pm 0.12$ following \cite{Baak:2014ora}, where $\Delta T$ measures the departure from the SM contribution. We have filtered all points in our parameter space through the above constraints and retained only those points that satisfy them. Measurement of the rate for $b \rightarrow s \gamma$ leads to $m_{H^+} \geq 480$ GeV in case of the Type-II 2HDM\cite{Mahmoudi:2009zx,Olive:2016xmw}. In case of Type-I, there is no such lower bound. The constraint $m_{H^+} \geq 80$ GeV originating from direct searches however still persists. \section{Type-I 2HDM: Allowed parameter space for stable vacuum}\label{param} We start by completing the existing studies~\cite{Eberhardt:2014kaa,Chakrabarty:2014aya,Das:2015mwa,Ferreira:2015rha} on the parameter space allowing for high-scale vacuum stability and perturbativity for a Type-I 2HDM. A corresponding discussion for the Type-II 2HDM can be seen in \cite{Chakrabarty:2014aya}. We fix $m_h = 125$ GeV and $M_t = 175$ GeV; the rest of the parameters are generated randomly in the following ranges (masses in GeV):\\ $\rm tan\beta \in [1,20]$, $m_H \in [200,1000]$, $m_A \in [200,1000]$, $m_{H^+} \in [200,1000]$, $\rm cos(\beta - \alpha) \in [-0.4,0.4]$, $\lambda_6 \in [-1,1]$, $\lambda_7 \in [-1,1]$. The generated values of the masses and mixing angles are translated to the basis of the quartic couplings using Eqs.~\ref{e:l1}--\ref{e:l3}. \begin{figure} \begin{center} \includegraphics[scale=0.40]{mHmA_tb2.pdf}~~~ \includegraphics[scale=0.40]{mHmHp_tb2.pdf} \includegraphics[scale=0.40]{l6-l7-1.pdf}~~~ \includegraphics[scale=0.40]{l6-l7-2.pdf} \caption{Distribution of the parameter points valid till $\Lambda$ in the $m_H-m_A$ (left) and $m_H-m_{H^+}$ (right) planes for the Type-I 2HDM. The colour coding can be read from the legends. We fix tan$\beta$ = 2 as a benchmark. The upper (lower) plots correspond to $\lambda_6 = \lambda_7 = 0$ ($\lambda_6, \lambda_7 \neq 0$). We have varied $\lambda_6,\lambda_7$ in the interval [-1,1] for the lower plots.} \label{f:m-m} \end{center} \end{figure} The strong correlation among the masses, namely $m_H \simeq m_A \simeq m_{H^+}$, is revealed in Fig.~\ref{f:m-m}. This can itself be traced back to Eqs.~\ref{e:l1}--\ref{e:l3}: any large mass gap results in large values of the $\lambda_i$ at the EWSB scale itself, such that they turn non-perturbative rather early in the course of the evolution. This feature is also corroborated in \cite{Das:2015mwa}. It is important to note that the mass-splitting depends, albeit weakly, on the chosen value of tan$\beta$.
For instance, in case of tan$\beta = 2$, the maximum splitting allowed is $\simeq 15$ GeV for $\Lambda = 10^{19}$ GeV. This goes down to $\simeq 10$ GeV in case of tan$\beta = 10$ for the same value of $\Lambda$. It should be noted here that the bound on the mass splitting that comes from the requirement of perturbativity till high scales is much more stringent than what is obtained by the imposition of the $T$-parameter constraint alone. Also important is the ensuing constraint on cos($\beta - \alpha$), which decides the interaction strengths between $W,Z$ and the non-standard scalars. The more suppressed cos($\beta - \alpha$) is, the closer the $h$-interactions are to the corresponding SM values. Thus, measurement of the signal strengths of $h$ leads to constraints on this parameter\cite{Bernon:2015qea,Cheon:2012rh,PhysRevD.90.095006}. Models valid up to $10^{19}$ GeV could allow for $|\rm cos(\beta - \alpha)|$ $\leq$ 0.15 and $|\rm cos(\beta - \alpha)|$ $\leq$ 0.05 for tan$\beta$ = 2 and tan$\beta$ = 10 respectively. This bound can be amply relaxed by choosing a lower $\Lambda$; for example, one finds $|\rm cos(\beta - \alpha)|$ $\leq$ 0.14 in case of tan$\beta$ = 10 if validity is demanded only up to a lower cut-off scale. This apparent correlation between the UV cut-off scale and the maximum allowed value of $\rm cos(\beta - \alpha)$ could lead us to predict the maximal extrapolation scale up to which such a 2HDM could be probed at the colliders. Of course, such a correlation can be noticed for the Type-II scenario as well. The additional result presented here, over and above what is found in the literature, is the establishment of the mass correlations for $\lambda_6,\lambda_7 \neq$ 0, as shown in Fig.~\ref{f:m-m}. \section{Signals at the LHC: Types I and II.}\label{lhc} The previous section illustrates that the higher the UV cutoff of a 2HDM is, the tighter become the mass-splitting and the bound on $|\rm cos(\beta - \alpha)|$. Such a constrained scenario makes its observability at the LHC a rather challenging task, as also emphasized in section~\ref{Intro}. In particular, if we probe $H$ and $A$ via their decays into reconstructible final states, then the invariant mass distributions of the decay products would coincide. However, probing $H$ and $A$ in reconstructible but distinct final states could enable one to tag the CP of the decaying boson. Given that, we propose the following signals:\\ (i) $p p \longrightarrow H \longrightarrow Z Z \longrightarrow 4 l$ \\ (ii) $p p \longrightarrow A \longrightarrow h Z \longrightarrow l^+ l^- b \bar{b}$ We have implemented the model using \texttt{FeynRules}\cite{Alloul20142250}. The generated Universal FeynRules Output (UFO) files are then fed to the Monte-Carlo (MC) event generator MadGraph~\cite{Alwall:2014hca} for the generation of event samples. The parton-showering and hadronisation are carried out in the \texttt{PYTHIA-6} \cite{1126-6708-2006-05-026} framework. We simulated $H$ and $A$ production through the gluon-gluon fusion (ggF) channel using the CTEQ6L1 parton distribution functions, since ggF offers higher rates compared to the other channels. The renormalisation and factorisation scales have been set at $m_H$ and $m_A$ for the first and second signals respectively. We mention in this context that detector simulation and analysis of the events were done using \texttt{Delphes}\cite{deFavereau2014}. For simulating the proposed final states, we hold $m_H$ and $m_A$ fixed and scan over the remaining input quantities.
From the randomly generated parameter sets, we select an illustrative assortment of benchmark points (Table~\ref{Benchmark}) to highlight the main findings of the analysis. \begin{table}[h] \centering \begin{tabular}{|c c c c c|} \hline Benchmark & $m_{H}$(GeV) & $m_{A}$(GeV) & $m_{12}$(GeV) & cos($\beta - \alpha$) \\ \hline \hline BP1a & 350 & 351 & 200 & -0.18 \\ BP1b & 350 & 351 & 200 & -0.12 \\ \hline BP2a & 400 & 401 & 230 & -0.15 \\ BP2b & 400 & 401 & 230 & -0.10 \\ \hline BP3a & 500 & 501 & 280 & -0.095 \\ BP3b & 500 & 501 & 280 & -0.070 \\ BP3c & 500 & 501 & 280 & -0.050 \\ \hline BP4a & 550 & 551 & 320 & -0.075 \\ BP4b & 550 & 551 & 320 & -0.060 \\ BP4c & 550 & 551 & 320 & -0.050 \\ \hline BP5a & 600 & 601 & 350 & -0.050 \\ BP5b & 600 & 601 & 350 & -0.035 \\ BP5c & 600 & 601 & 350 & -0.025 \\ \hline \end{tabular} \caption{Benchmarks chosen for simulating the proposed channels. We have taken $m_h = 125$ GeV and tan$\beta$ = 2.5 throughout. Any higher tan$\beta$ would lead to a lower ggF rate and so was not chosen.} \label{Benchmark} \end{table} The benchmarks are distinct from one another \emph{vis-a-vis} their RG evolution patterns. While choosing them, it was ensured that the UV-cutoff of a given benchmark does not change upon switching between the Type-I and Type-II models. For instance, in the case where $\lambda_6 = \lambda_7$ = 0, BP1b, BP2b, BP3c, BP4c and BP5c are conservative input sets ensuring a stable vacuum and a perturbative model till $\sim 10^{19}$ GeV. This can be read from the small values of $|\rm cos(\beta - \alpha)|$ characterizing them. The other benchmarks are not that conservative, but they still manage to stabilise the vacuum till at least $10^{11}$ GeV. Likewise, BP3b and BP4b are included to estimate the statistical significance of scenarios valid till $10^{14}$ GeV. For a given set of couplings, elevating the masses of $H$ and $A$ progressively diminishes the intensity of the signals and also narrows the allowed band of $|\rm cos(\beta - \alpha)|$. The choice of the benchmarks is thus guided by the aim to understand the maximum $m_H, m_A$ as well as the highest UV cut-off up to which the scenario can be experimentally observed. \subsubsection{$p p \longrightarrow H \longrightarrow Z Z \longrightarrow 4 l$} $H$ is produced through gluon fusion and decays to two on-shell $Z$ bosons. We look for a final state where the $Z$ bosons subsequently decay into four leptons\cite{Aad:2015kna}. The dominant background for this process comes from $ZZ(^*)$ production. Taking into account subleading contributions from the $Z\gamma$ and $\gamma \gamma$ channels and multiplying by appropriate next-to-leading order (NLO) K-factors \cite{Alwall:2014hca}, the total background cross section is $\simeq$ 42 fb. Some basic cuts, as listed below, were applied during event generation. \textbf{Basic-cuts:} \begin{itemize} \item All leptons have a minimum transverse momentum of 10 GeV, $p_T^{l} \geq 10$ GeV. \item Pseudorapidity of the leptons must lie within the window $|\eta^{l}| \leq 2.5$. \item All possible lepton-pairs are resolved using $\Delta R_{ll} > 0.3$. \end{itemize} We multiply the ggF cross sections of $H$ production by an NLO K-factor of 1.5. The cuts listed below were further imposed. \textbf{Selection cuts:} \begin{itemize} \item \textbf{SC1}: The invariant mass of the final state leptons lies within the window $m_H - 15 \rm ~GeV \leq m_{4l} \leq m_H$ + \rm 15 GeV.
\item \textbf{SC2}: The transverse momenta of the leptons lie above the thresholds $p_T^{l_1} > p_{T,\rm min}^{l_1}$, $p_T^{l_2} > p_{T,\rm min}^{l_2}$, $p_T^{l_3} > 30$ \rm GeV, $p_T^{l_4} > 20$ \rm GeV. \item \textbf{SC3}: Transverse momenta of the reconstructed $Z$-bosons satisfy $p_T^{Z_1} > p_{T, \rm min}^{Z_1}$, $p_T^{Z_2} > p_{T, \rm min}^{Z_2}$. \end{itemize} We take $p_{T,\rm min}^{Z_1/Z_2} = 20, 20, 40, 50, 70$ GeV and $\{p_{T,\rm min}^{l_1}, p_{T,\rm min}^{l_2}\}$ = $\{50 \rm ~GeV,30 \rm ~GeV\}$, $\{50 \rm ~GeV,30 \rm ~GeV\}$, $\{80 \rm ~GeV,50 \rm ~GeV\}$, $\{90 \rm ~GeV,70 \rm ~GeV\}$, $\{100 \rm ~GeV, 70 \rm ~GeV\}$ for BP1, BP2, BP3, BP4, BP5 respectively, the decisive factor in this choice of $p_{T,\rm min}^{Z_1/Z_2}$ being $m_H$ for any given benchmark point. For $m_H > 500$ GeV, the leading and the subleading leptons are strongly boosted, and thus have a good probability of surviving the strong $p_T$ cuts. In addition, appropriate cuts on the $p_T$ of the $Z$-bosons also contribute towards improving the signal-to-background ratio. Denoting the number of signal and background events as $\mathcal{N}_S$ and $\mathcal{N}_B$ at a given integrated luminosity ($\mathcal{L}$), the statistical significance or confidence level (CL) is defined as CL = $\frac{\mathcal{N}_S}{\sqrt{\mathcal{N}_S + \mathcal{N}_B}}$. \begin{table}[h] \centering \begin{tabular}{|c c c c c c c c c|} \hline Benchmark & $\sigma^{SC}_{S}$ (fb) & $\sigma^{SC}_{B}$ (fb) & $\mathcal{N}_S^{100}$ & $\mathcal{N}_B^{100}$ & $\mathcal{N}_S^{3000}$ & $\mathcal{N}_B^{3000}$ & $\rm CL_{100}$ & $\rm CL_{3000}$\\ \hline \hline BP1a & 0.173 & 0.334 & 17.36 & 33.40 & 520.94 & 1002.18 & 2.43 & 13.34\\ BP1b & 0.145 & 0.334 & 14.54 & 33.40 & 436.31 & 1002.18 & 2.10 & 11.503 \\ \hline BP2a & 0.104 & 0.194 & 10.42 & 19.46 & 312.73 & 584.00 & 1.90 & 10.44 \\ BP2b & 0.071 & 0.194 & 7.11 & 19.46 & 213.38 & 584.00 & 1.37 & 7.55 \\ \hline BP3a & 0.026 & 0.064 & 2.59 & 6.48 & 77.99 & 194.60 & 0.86 & 4.72\\ BP3b & 0.016 & 0.064 & 1.68 & 6.48 & 50.52 & 194.60 & 0.58 & 3.22 \\ BP3c & 0.009 & 0.064 & 0.97 & 6.48 & 29.37 & 194.60 & 0.35 & 1.96 \\ \hline BP4a & 0.011 & 0.041 & 1.13 & 4.16 & 34.06 & 124.91 & 0.49 & 2.70\\ BP4b & 0.008 & 0.041 & 0.81 & 4.16 & 24.52 & 124.91 & 0.36 & 2.00\\ BP4c & 0.006 & 0.041 & 0.61 & 4.16 & 18.33 & 124.91 & 0.27 & 1.53\\ \hline BP5a & 0.004 & 0.029 & 0.41 & 2.96 & 12.32 & 89.09 & 0.22 & 1.22\\ BP5b & 0.002 & 0.029 & 0.22 & 2.96 & 6.70 & 89.09 & 0.12 & 0.68\\ BP5c & 0.001 & 0.029 & 0.12 & 2.96 & 3.61 & 89.09 & 0.06 & 0.37\\ \hline \end{tabular} \caption{A record of the number of surviving events in the $H \rightarrow 4 l$ channel after the selection cuts at the $\sqrt{s} = 14 $ TeV LHC for a Type-I 2HDM. Here $\mathcal{N}_S^{100(3000)}$ and $\mathcal{N}_B^{100(3000)}$ respectively denote the numbers of signal and background events at $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$. Besides, $\rm CL_{100(3000)}$ denotes the confidence level at $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$.
} \label{4l_TypeI} \end{table} \begin{table}[h] \centering \begin{tabular}{|c c c c c c c c c|} \hline Benchmark & $\sigma^{SC}_{S}$ (fb) & $\sigma^{SC}_{B}$ (fb) & $\mathcal{N}_S^{100}$ & $\mathcal{N}_B^{100}$ & $\mathcal{N}_S^{3000}$ & $\mathcal{N}_B^{3000}$ & $\rm CL_{100}$ & $\rm CL_{3000}$\\ \hline \hline BP3a & 0.025 & 0.064 & 2.56 & 6.48 & 76.99 & 194.60 & 0.85 & 4.67 \\ BP3b & 0.016 & 0.064 & 1.65 & 6.48 & 49.65 & 194.60 & 0.58 & 3.17 \\ BP3c & 0.009 & 0.064 & 0.95 & 6.48 & 28.73 & 194.60 & 0.35 & 1.92\\ \hline BP4a & 0.011 & 0.041 & 1.12 & 4.16 & 33.64 & 124.91 & 0.48 & 2.67 \\ BP4b & 0.008 & 0.041 & 0.80 & 4.16 & 24.15 & 124.91 & 0.36 & 1.97\\ BP4c & 0.006 & 0.041 & 0.60 & 4.16 & 18.02 & 124.91 & 0.27 & 1.50\\ \hline BP5a & 0.004 & 0.029 & 0.40 & 2.96 & 12.15 & 89.09 & 0.22 & 1.20\\ BP5b & 0.002 & 0.029 & 0.21 & 2.96 & 6.58 & 89.09 & 0.12 & 0.67\\ BP5c & 0.001 & 0.029 & 0.11 & 2.96 & 3.54 & 89.09 & 0.06 & 0.36 \\ \hline \end{tabular} \caption{A record of the number of surviving events in the $H \rightarrow 4 l$ channel after the selection cuts at the $\sqrt{s} = 14 $ TeV LHC for a Type-II 2HDM. Here $\mathcal{N}_S^{100(3000)}$ and $\mathcal{N}_B^{100(3000)}$ respectively denote the numbers of signal and background events at $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$. Besides, $\rm CL_{100(3000)}$ denotes the confidence level at $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$.} \label{4l_TypeII} \end{table} Tables~\ref{4l_TypeI} and~\ref{4l_TypeII} contain the estimated CL for all the benchmarks. The following features thus emerge:\\ (i) The statistical significance diminishes as $m_H$ is increased. This is due to two reasons. First, the ggF cross section for a single $H$ drops. Secondly, the higher is $m_H$, the smaller is the upper limit on $|\rm cos(\beta - \alpha)|$ consistent with high-scale stability, and hence, the lower is the $H \rightarrow Z Z$ branching ratio.\\ (ii) The Type-I 2HDM offers a marginally higher significance as compared with Type-II. This is entirely attributed to the persistence of a slightly higher $H \rightarrow Z Z$ branching ratio in Type-I.\\ (iii) For $m_H \simeq 350$ GeV, an integrated luminosity of 100 $\rm fb^{-1}$ is sufficient to yield a 3$\sigma$ significance. \\ (iv) To observe an $H$ of mass around 500 GeV that originates from a 2HDM valid till $10^{19}$ GeV with a minimum of 3$\sigma$ confidence level, one needs to gather 3000 $\rm fb^{-1}$ of data at the LHC. The statistical significance decreases for higher masses. In short, the observability of a given $H$ can be improved either by lowering $m_H$ while holding the UV cutoff fixed, or by lowering the cutoff while holding $m_H$ fixed. This interplay is illustrated in Fig.~\ref{4l_500} and Fig.~\ref{4l_550}. \begin{figure} \begin{center} \includegraphics[scale=0.48]{m_500_z2_I_4l.pdf}~~~ \includegraphics[scale=0.48]{m_500_z2_II_4l.pdf} \includegraphics[scale=0.48]{m_500_noz2_I_4l.pdf}~~~ \includegraphics[scale=0.48]{m_500_noz2_II_4l.pdf} \caption{The parameter space in the tan$\beta$ vs. $c_{\beta - \alpha}$ plane for $m_H = 500 \rm ~GeV$ and $m_A = 501$ GeV that allows for validity till $10^{11}$ GeV (red), $10^{14}$ GeV (green) and $10^{19}$ GeV (black). The region inside the blue curve corresponds to a signal significance greater than or equal to 3$\sigma$.
The upper and lower plots are for $\lambda_6 = \lambda_7 = 0$ and $\lambda_6,\lambda_7 \neq 0$ respectively.} \label{4l_500} \end{center} \end{figure} Fig.~\ref{4l_500} corroborates the previous observation that an $H$ with $m_H = 500$ GeV can lead to a 3$\sigma$ signal at the LHC, consistently with perturbativity as well as a stable vacuum till $10^{19}$ GeV. This is true for both the Type-I and Type-II 2HDM. Note that the parameter space relaxes upon the introduction of non-vanishing $\lambda_6$ and $\lambda_7$. This marginally helps in elevating the UV cutoff without compromising on the strength of the signal. For $m_H = 550$ GeV, on the other hand, a 2HDM (of either Type-I or Type-II) cannot be extrapolated beyond $10^{11}$ GeV if a 3$\sigma$ statistical significance has to be maintained. This is confirmed by an inspection of Fig.~\ref{4l_550}. \begin{figure} \begin{center} \includegraphics[scale=0.48]{m_550_z2_I_4l.pdf}~~~ \includegraphics[scale=0.48]{m_550_z2_II_4l.pdf} \includegraphics[scale=0.48]{m_550_noz2_I_4l.pdf}~~~ \includegraphics[scale=0.48]{m_550_noz2_II_4l.pdf} \caption{The parameter space in the tan$\beta$ vs. $c_{\beta - \alpha}$ plane for $m_H = 550 \rm ~GeV$ and $m_A = 551$ GeV that allows for validity till $10^{11}$ GeV (red), $10^{14}$ GeV (green) and $10^{19}$ GeV (black). The region inside the blue curve corresponds to a signal significance greater than or equal to 3$\sigma$. The upper and lower plots are for $\lambda_6 = \lambda_7 = 0$ and $\lambda_6,\lambda_7 \neq 0$ respectively.} \label{4l_550} \end{center} \end{figure} We examine the prospects of reconstructing $A$ through the proposed $l^+ l^- b \bar{b}$ final state in the following section. \subsubsection{$p p \longrightarrow A \longrightarrow h Z \longrightarrow l^+ l^- b \bar{b}$} In the absence of CP-violation (as assumed here), $hZ$ pair production points towards a CP-odd parent particle\cite{Khachatryan:2015lba}, and a peak in the invariant mass close to the aforementioned $ZZ$ peak should be the smoking-gun signal of the near degeneracy of a scalar and a pseudoscalar. However, $p p \longrightarrow t\bar{t}$ generates the dominant background for this final state. Subleading contributions come from the production of $ZWW$ and $Z b \bar{b}$. Similar to the previous analysis, we adopt a $K$-factor of 1.5 for pseudoscalar production for all the benchmarks. The following cuts are applied during event generation. \textbf{Basic cuts:} \begin{itemize} \item $p_T^{l} \geq 10$ GeV, $p_T^{b} \geq 20$ GeV \item $|\eta^{l}| \leq 2.5$, $|\eta^{b}| \leq 2.5$ \item $\Delta R_{ll} > 0.3$, $\Delta R_{lb} > 0.4$, $\Delta R_{bb} > 0.4$ \end{itemize} On applying the above cuts, the NLO background cross section turns out to be $\simeq$ 32 pb. The following cuts are imposed for an efficient background rejection. \textbf{Selection cuts:} \begin{itemize} \item C1: The invariant mass of the leptons satisfies $85.0 ~\rm GeV \leq m_{ll} \leq 100 ~\rm GeV$. \item C2: The invariant mass of the b-jets satisfies $95.0 ~\rm GeV\leq m_{bb} \leq 155 ~\rm GeV$. \item C3: The scalar sum of the transverse momenta of the leptons and b-jets satisfies $\sum_{l,b} p_T > (\sum_{l,b} p_T)_{\rm min}$. \item C4: An upper bound on the missing transverse momentum, $\cancel{E_T} \leq 30$ GeV. \item C5: The invariant mass of the $2l-2b$ system lies within the range $m_A - 30$ GeV $\leq m_{llbb} \leq m_A + 30$ GeV.
\item C6: The $p_T$ of the reconstructed $Z$-boson satisfies $p_T^Z > 120$ GeV for BP5, and $p_T^Z > 100$ GeV for the rest. \item C7: A lower bound on the $p_T$ of the leading b-jet, $p_T^{b_1} > p_{T,\rm min}^{b_1}$. \end{itemize} The cuts on the $p_T$ of the leading b-jet as well as on the scalar sum of the $p_T$ of the b-jets and leptons are appropriately strengthened with increase in $m_A$. We opt for $\{(\sum_{l,b} p_T)_{\rm min},p_T^{b_1}\}$ = $\{270 ~\rm GeV, 40 ~GeV\}$ for BP1, $\{320 ~\rm GeV, 40 ~GeV\}$ for BP2, $\{350 ~\rm GeV, 50 ~GeV\}$ for BP3 and BP4, and $\{380 ~\rm GeV, 70 ~GeV\}$ for BP5. The selection cuts involve reconstructing the invariant masses of not only the decaying $A$, but also of the $Z$ and the $h$, appropriately in each case. A lower limit on the scalar sum of the $p_T$ of the leptons and the b-jets also aids in increasing the significance. All the $\cancel{E_T}$ in the signal is generated from mis-measurement of the momenta of the visible particles, thus giving a soft $\cancel{E_T}$ distribution. On the other hand, the corresponding background has a harder $\cancel{E_T}$ spectrum, since the $t \bar{t}$ and $ZWW$ channels always lead to neutrinos in the final state. Therefore, a suitable upper bound on the missing transverse energy reduces a portion of these backgrounds. \begin{table}[h] \centering \begin{tabular}{|c c c c c c c c c|} \hline Benchmark & $\sigma^{SC}_{S}$ (fb) & $\sigma^{SC}_{B}$ (fb) & $\mathcal{N}_S^{100}$ & $\mathcal{N}_B^{100}$ & $\mathcal{N}_S^{3000}$ & $\mathcal{N}_B^{3000}$ & $\rm CL_{100}$ & $\rm CL_{3000}$\\ \hline \hline BP1a & 1.65 & 10.94 & 164.60 & 1094.05 & 4938.02 & 32821.48 & 4.64 & 25.41 \\ BP1b & 0.90 & 10.94 & 89.55 & 1094.05 & 2686.45 & 32821.48 & 2.60 & 14.26 \\ \hline BP2a & 0.55 & 4.30 & 55.22 & 430.24 & 1656.63 & 12907.32 & 2.51 & 13.73 \\ BP2b & 0.28 & 4.30 & 27.92 & 430.24 & 837.64 & 12907.32 & 1.30 & 7.14 \\ \hline BP3a & 0.132 & 1.387 & 13.24 & 138.73 & 397.11 & 4161.95 & 1.07 & 5.88 \\ BP3b & 0.076 & 1.387 & 7.63 & 138.73 & 228.91 & 4161.95 & 0.63 & 3.45 \\ BP3c & 0.041 & 1.387 & 4.05 & 138.73 & 121.52 & 4161.95 & 0.34 & 1.86 \\ \hline BP4a & 0.066 & 0.632 & 6.56 & 63.22 & 196.86 & 1896.59 & 0.79 & 4.30\\ BP4b & 0.044 & 0.632 & 4.35 & 63.22 & 130.50 & 1896.59 & 0.53 & 2.90\\ BP4c & 0.031 & 0.632 & 3.08 & 63.22 & 92.53 & 1896.59 & 0.38 & 2.07\\ \hline BP5a & 0.021 & 0.334 & 2.07 & 33.37 & 62.19 & 1000.98 & 0.35 & 1.91\\ BP5b & 0.010 & 0.334 & 1.05 & 33.37 & 31.38 & 1000.98 & 0.18 & 0.98\\ BP5c & 0.005 & 0.334 & 0.54 & 33.37 & 16.27 & 1000.98 & 0.09 & 0.51\\ \hline \end{tabular} \caption{A record of the number of surviving events in the $A \rightarrow l^+ l^- b \bar{b}$ channel after the selection cuts at the $\sqrt{s} = 14 $ TeV LHC for a Type-I 2HDM. Here $\mathcal{N}_S^{100(3000)}$ and $\mathcal{N}_B^{100(3000)}$ respectively denote the numbers of signal and background events at $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$. Besides, $\rm CL_{100(3000)}$ denotes the confidence level at $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$.} \label{llbb_TypeI} \end{table} In this channel, too, Type-I fares slightly better than Type-II, largely due to the same reason outlined in the preceding discussion. The statistical significance of BP1--BP5 is also enhanced \emph{w.r.t.} the $4l$ case, albeit marginally. The confidence level corresponding to $m_A$ = 500 GeV hovers around 3$\sigma$, for both Type-I and Type-II.
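For transparency, the CL entries of these tables follow directly from the definition $\mathrm{CL} = \mathcal N_S/\sqrt{\mathcal N_S + \mathcal N_B}$ with $\mathcal N = \sigma \times \mathcal L$; the minimal sketch below reproduces the BP1a row of the Type-I table above (small differences in the last digit stem from rounding of the quoted cross sections).
\begin{verbatim}
from math import sqrt

def confidence_level(sigma_sig_fb, sigma_bkg_fb, lumi_fb):
    # CL = N_S / sqrt(N_S + N_B), with N = cross section * luminosity
    n_s = sigma_sig_fb * lumi_fb
    n_b = sigma_bkg_fb * lumi_fb
    return n_s / sqrt(n_s + n_b)

# BP1a, l+l- b b-bar channel (Type-I): sigma_S = 1.65 fb, sigma_B = 10.94 fb
print(confidence_level(1.65, 10.94, 100))   # ~4.6  (CL_100)
print(confidence_level(1.65, 10.94, 3000))  # ~25.4 (CL_3000)
\end{verbatim}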
\begin{table}[h] \centering \begin{tabular}{|c c c c c c c c c|} \hline Benchmark & $\sigma^{SC}_{S}$ (fb) & $\sigma^{SC}_{B}$ (fb) & $\mathcal{N}_S^{100}$ & $\mathcal{N}_B^{100}$ & $\mathcal{N}_S^{3000}$ & $\mathcal{N}_B^{3000}$ & $\rm CL_{100}$ & $\rm CL_{3000}$\\ \hline \hline BP3a & 0.130 & 1.387 & 13.01 & 138.73 & 390.23 & 4161.95 & 1.06 & 5.78 \\ BP3b & 0.075 & 1.387 & 7.49 & 138.73 & 224.80 & 4161.95 & 0.62 & 3.39\\ BP3c & 0.040 & 1.387 & 3.98 & 138.73 & 119.30 & 4161.95 & 0.33 & 1.82\\ \hline BP4a & 0.065 & 0.632 & 6.45 & 63.22 & 193.61 & 1896.59 & 0.77 & 4.23 \\ BP4b & 0.043 & 0.632 & 4.28 & 63.22 & 128.30 & 1896.59 & 0.52 & 2.85\\ BP4c & 0.030 & 0.632 & 3.03 & 63.22 & 90.95 & 1896.59 & 0.37 & 2.04\\ \hline BP5a & 0.020 & 0.334 & 2.04 & 33.37 & 61.18 & 1000.98 & 0.34 & 1.88\\ BP5b & 0.010 & 0.334 & 1.03 & 33.37 & 30.87 & 1000.98 & 0.18 & 0.96\\ BP5c & 0.005 & 0.334 & 0.53 & 33.37 & 16.00 & 1000.98 & 0.09 & 0.50 \\ \hline \end{tabular} \caption{A record of the number of surviving events in the $A \rightarrow l^+ l^- b \bar{b}$ channel after the selection cuts at the $\sqrt{s} = 14 $ TeV LHC for a Type-II 2HDM. Here $\mathcal{N}_S^{100(3000)}$ and $\mathcal{N}_B^{100(3000)}$ respectively denote the number of signal and background events with $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$. Besides, $\rm CL_{100(3000)}$ denotes the confidence level for $\mathcal{L} = 100(3000)$ $\rm fb^{-1}$.} \label{BP_II} \end{table}
A clearer picture regarding the observability of an $A$ of mass 500 GeV or 550 GeV emerges upon inspection of Fig.~\ref{llbb_500} and Fig.~\ref{llbb_550} respectively. We display the 5$\sigma$ contour as well in the case of the $l^+ l^- b \bar{b}$ channel. For $m_A = 550$ GeV with non-zero $\lambda_6$ and $\lambda_7$, the $l^+ l^- b \bar{b}$ channel offers sensitivity at the level of 3$\sigma$ for a scenario valid till 10$^{14}$ GeV or even higher. On the contrary, the corresponding cut-off cannot be pushed above $10^{11}$ GeV if one demands similar observability in the case of the 4$l$ final state from $H$-decay. Overall, a violation of the $Z_2$ symmetry via $\lambda_6$ and $\lambda_7$ aids the effort of observing a 2HDM valid up to high cut-off scales.
\begin{figure} \begin{center} \includegraphics[scale=0.48]{m_500_z2_I_llbb.pdf}~~~ \includegraphics[scale=0.48]{m_500_z2_II_llbb.pdf} \includegraphics[scale=0.48]{m_500_noz2_I_llbb.pdf}~~~ \includegraphics[scale=0.48]{m_500_noz2_II_llbb.pdf} \caption{The parameter space in the tan$\beta$ vs. $c_{\beta - \alpha}$ plane for $m_H = 500 ~\rm GeV$ and $m_A = 501$ GeV that allows for validity till $10^{11}$ GeV (red), $10^{14}$ GeV (green) and $10^{19}$ GeV (black). The region inside the solid (broken) blue curve corresponds to a signal significance greater than or equal to 3(5)$\sigma$. The upper and lower plots are for $\lambda_6 = \lambda_7 = 0$ and $\lambda_6,\lambda_7 \neq 0$ respectively.} \label{llbb_500} \end{center} \end{figure}
We mention that the analysis of this channel is subject to uncertainties, albeit small, that are introduced while estimating the background cross section. Upon considering the errors in the $t \bar{t}$ production rates and the background NLO K-factors\cite{Alwall:2014hca}, the total background cross section can deviate by up to $\simeq \pm 20\%$. This, however, does not modify the overall conclusions made in this section.
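To quantify this statement (using the same $\mathcal{N}_S/\sqrt{\mathcal{N}_S+\mathcal{N}_B}$ estimate as above), inflating the background for BP3a at $\mathcal{L} = 3000$ $\rm fb^{-1}$ by $20\%$ gives
$$
\frac{390.23}{\sqrt{390.23 + 1.2 \times 4161.95}} \simeq 5.3,
$$
compared to the quoted 5.78, so the benchmark remains above the 5$\sigma$ level.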
\begin{figure} \begin{center} \includegraphics[scale=0.48]{m_550_z2_I_llbb.pdf}~~~ \includegraphics[scale=0.48]{m_550_z2_II_llbb.pdf} \includegraphics[scale=0.48]{m_550_noz2_I_llbb.pdf}~~~ \includegraphics[scale=0.48]{m_550_noz2_II_llbb.pdf} \caption{The parameter space in the tan$\beta$ vs. $c_{\beta - \alpha}$ plane for $m_H = 550 ~\rm GeV$ and $m_A = 551$ GeV that allows for validity till $10^{11}$ GeV (red), $10^{14}$ GeV (green) and $10^{19}$ GeV (black). The region inside the solid (broken) blue curve corresponds to a signal significance greater than or equal to 3(5)$\sigma$. The upper and lower plots are for $\lambda_6 = \lambda_7 = 0$ and $\lambda_6,\lambda_7 \neq 0$ respectively.} \label{llbb_550} \end{center} \end{figure}
\section{Prospects at other colliders}\label{other}
With the prospects of observing non-standard scalars with masses above the 500 GeV range at the LHC turning bleak, we resort to future lepton colliders for better observability. These include not only the $e^+ e^-$ colliders, but also a muon collider\cite{PhysRevD.88.115003}. The principal heavy Higgs production channels at an $e^+ e^-$ machine are associated production (VH) and Vector-Boson-Fusion (VBF)\cite{Hodgkinson:2009uj}. The production rate in both of these modes is controlled by the value of $\rm cos(\beta-\alpha)$. As elaborated in the previous sections, $\rm cos(\beta-\alpha)$ is tightly bounded by the requirement of a stable vacuum till the Planck scale. In addition, the maximum $\sqrt{s}$ proposed for the ILC is 1 TeV\cite{Behnke:2013lya}, which hampers a probe of heavy scalars due to kinematical limitations. For instance, the VH production cross section for an $H$ of mass 600 GeV could be at most $\simeq$ 0.01 fb at an ILC with $\sqrt{s} = 1$ TeV. This does not result in the requisite signal significance when the backgrounds are estimated and the cut efficiencies are folded in.
\subsection{$\mu^+ \mu^-$ collisions and radiative return}\label{radret}
A particularly interesting process at a muon collider is that of radiative return (RR)\cite{PhysRevD.91.015008}, where one does not need to know the mass of the resonantly produced scalar precisely. In our context, the processes under consideration are \begin{equation} \mu^+ \mu^- \longrightarrow H ~\gamma, A ~\gamma \end{equation} Note here that $H/A$ can be produced in association with a $\gamma$ in t-channel $\mu^+ \mu^-$ annihilations. When the centre-of-mass energy of the muon collider is above the heavy resonance, the photon emission from the initial state provides an opportunity to reconstruct the mass of the heavy scalar or pseudoscalar, without prior knowledge of the mass of the (unknown) heavy resonance. The final state then consists of a soft photon and other visible products exhibiting an invariant mass peak. The closer the mass of the heavy scalar is to the centre-of-mass (COM) energy of the $\mu^+ \mu^-$ collisions, the higher the cross section. Thus tagging a heavy scalar state through the invariant mass peak of its decay products helps reduce the background and increase the statistical significance. Moreover, in order to obtain information on the CP of the heavy resonance, the CP-even and the CP-odd states must be allowed to decay into different final states following their production through RR. We propose $H \longrightarrow Z Z \longrightarrow 4 l$ and $A \longrightarrow h Z \longrightarrow l^+ l^- b \bar{b}$, which resemble the signals studied in the previous sections, to distinguish the CP-even scalar from the CP-odd one.
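To make the kinematics explicit: for resonant production of a narrow state of mass $m_{H/A}$ recoiling against a single photon, energy-momentum conservation fixes the photon energy to
$$
E_\gamma = \frac{s - m_{H/A}^2}{2\sqrt{s}},
$$
a standard two-body relation that we record here for clarity. The photon is therefore soft precisely when $\sqrt{s}$ lies close to the resonance mass, and a measurement of $E_\gamma$ reconstructs $m_{H/A}$ without prior knowledge of the mass.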
In order to study the observability of the benchmarks BP3a - BP5c in RR, we choose the COM energy of the $\mu^+ \mu^-$ collisions to be just 10 GeV above $m_H$ in each case. For BP3a, the RR cross section for $H$ production is $\simeq$ 1.3 fb for a Type-II 2HDM. Upon multiplying by the branching ratios corresponding to $H \rightarrow Z Z$ and $Z \rightarrow l l$, the cross section for the $4l + \gamma$ final state turns out to be $\mathcal{O}(10^{-4})$ fb. The cross section for the $l^+ l^- b \bar{b} + \gamma$ final state could still be $\mathcal{O}(10^{-3})$ fb; however, it is further reduced once kinematical cuts are applied. In a Type-II 2HDM, though the $\mu\mu H$ coupling is proportional to tan$\beta$, opting for a higher value of tan$\beta$ does not help in this regard, since in that case the allowed value of $|\rm cos(\beta-\alpha)|$ decreases owing to the demand of validity till high scales (see Fig.). This diminishes the $H \rightarrow Z Z$ and $A \rightarrow h Z$ branching ratios and ultimately leads to even lower rates. The other BPs too predict negligibly small RR rates for both Type-I and Type-II. With such meagre RR rates in the $4l + \gamma$ as well as $l^+ l^- b \bar{b} + \gamma$ channels, chances of observing the heavy resonances are obliterated. The fermionic decay channels of $H/A$ could still be promising in this regard. For instance, the $b \bar{b} A$ coupling in a Type-(I)II 2HDM is proportional to cot$\beta$(tan$\beta$), and for sufficiently small $|\rm cos(\beta-\alpha)|$ the fermionic couplings of $H$ and $A$ are nearly equal. The advantage of a muon collider over the LHC is that the $b \bar{b}$ final state can rise above the background more effectively. As we shall see below, this enhances the mass reach. One can thus probe the observability of the heavy scalars in the $\mu^+ \mu^- \rightarrow H/A ~\gamma \rightarrow b \bar{b} ~\gamma$ channel\footnote{In view of the large top Yukawa coupling, one could in principle also look at $\mu^+ \mu^- \rightarrow H/A ~\gamma \rightarrow t \bar{t} ~\gamma$. However, that channel ultimately leads to lower rates compared to the $b \bar{b}$ mode owing to the smaller $t \bar{t}$ branching fraction.}. It is readily seen that for tan$\beta > 1$, Type-II has higher production rates of $H/A$ through RR compared to Type-I. This could give a handle in distinguishing between Types I and II. Therefore, to test the potency of RR in the $H/A \rightarrow b \bar{b}$ mode, we tabulate two additional benchmarks, as shown in Table~\ref{BP_radret}.
\begin{table}[h] \centering \begin{tabular}{|c c c c c|} \hline Benchmark & $\sqrt{s}$ (GeV) & tan$\beta$ & $m_{H}$(GeV) & $m_{A}$(GeV) \\ \hline \hline BP6 & 500 & 12 & 492 & 493 \\ \hline BP7 & 1000 & 12 & 992 & 993 \\ \hline \end{tabular} \caption{The values of $m_H$, $m_A$ and tan$\beta$ chosen to probe the radiative return channel. The values of $\sqrt{s}$ are also shown.} \label{BP_radret} \end{table}
The values of the other 2HDM parameters have been fixed appropriately so as to ensure stability till the Planck scale. For instance, we chose $m_{12} = 150$, $c_{\beta - \alpha} = 0.01$ and $m_{12} = 500$, $c_{\beta - \alpha} = 0.005$ for BP6 and BP7 respectively. We take 500 GeV and 1 TeV to be the COM energies for these two cases. Accordingly, $\sqrt{s} - m_{H/A}$ is maintained at $\sim$ 7 GeV to maximize the efficiency of the radiative return mechanism.
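With the relation recorded above, these choices correspond to recoil photon energies of
$$
E_\gamma = \frac{500^2 - 492^2}{2 \times 500} \simeq 7.9 ~{\rm GeV} ~~\mbox{(BP6)}, \qquad E_\gamma = \frac{1000^2 - 992^2}{2 \times 1000} \simeq 8.0 ~{\rm GeV} ~~\mbox{(BP7)}
$$
for the $H$, i.e. well within the soft-photon regime exploited by the selection described below.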
In addition, we have purposefully chosen a somewhat large value for tan$\beta$ to elevate the $H/A$ production rate to the order of 10 fb. Moreover, we also get a sizeable branching ratio for the $H/A \rightarrow b \bar{b}$ channel for both BP6 and BP7 ($> 70 \%$). The SM background comes from the processes $\mu^+ \mu^- \rightarrow b \bar{b}$ and $\mu^+ \mu^- \rightarrow b \bar{b} \gamma$. The cut $m_H - 30 ~\text{GeV} < m_{bb} < m_H + 30 ~\text{GeV}$ is imposed on the invariant mass of the $b$-pair. The softness of the photon in the case of RR can be exploited to reduce the background by putting an upper bound on the photon $p_T$, which we take to be 30 GeV. Effects arising from smearing the photon energy are small, so we keep the photon energy the same as the simulated value. The confidence levels obtained for BP6 and BP7 are listed in Table~\ref{sig_radret}.
\begin{table}[h] \centering \begin{tabular}{|c c c c c c c c c|} \hline Benchmark & $\sigma^{SC}_{S}$ (fb) & $\sigma^{SC}_{B}$ (fb) & $\mathcal{N}_S^{500}$ & $\mathcal{N}_B^{500}$ & $\mathcal{N}_S^{1000}$ & $\mathcal{N}_B^{1000}$ & $\rm CL_{500}$ & $\rm CL_{1000}$\\ \hline \hline BP6 & 2.02 & 32.22 & 1011.28 & 16110.05 & 2022.56 & 32220.08 & 7.72 & 10.92\\ \hline BP7 & 0.26 & 2.52 & 133.92 & 1264.28 & 267.85 & 2528.57 & 3.58 & 5.06\\ \hline \end{tabular} \caption{Number of signal and background events surviving the selection in the radiative return process at the muon collider. Here $\mathcal{N}_S^{500(1000)}$ and $\mathcal{N}_B^{500(1000)}$ respectively denote the number of events with $\mathcal{L} = 500(1000)$ $\rm fb^{-1}$. Besides, $\rm CL_{500(1000)}$ denotes the confidence level at $\mathcal{L} = 500(1000)$ $\rm fb^{-1}$.} \label{sig_radret} \end{table}
Table~\ref{sig_radret} shows that it is possible to experimentally observe an $H$ as heavy as 1 TeV through radiative return. The corresponding signal rates are almost identical for a near-degenerate $A$ decaying to $b \bar{b}$, and thus are not shown separately. Radiative return in the $b \bar{b}$ channel therefore does succeed in predicting abundant signal events in the case of heavy scalars. This is reflected by the sizable statistical significance of $\sim$ 5$\sigma$ that can be obtained in the case of a scalar of mass 1 TeV when the $\mu^+ \mu^-$ machine is operated at an integrated luminosity of 1000 fb$^{-1}$. More importantly, this is found to be in perfect agreement with high-scale stability and perturbativity up to $M_{Pl}$. However, in this channel one faces the difficulty of distinguishing a $b \bar{b}$ resonance coming from an $H$ from one coming from an $A$. This is in sharp contrast with the results obtained for the 14 TeV LHC: there, though the CP of the scalar can be tagged, its observability does not exceed 3$\sigma$ in terms of confidence level for masses beyond 500 GeV.
\section{Summary and conclusions}\label{con}
By virtue of the additional bosonic fields, a 2HDM can ensure the stability of the EW vacuum for a cut-off scale all the way up to the Planck scale. This holds true for both the Type-I and Type-II cases. However, stringent constraints on the parameter space apply in the process. This is especially true when vacuum stability and perturbative unitarity are demanded up to the Planck scale. Then, the couplings of the non-standard scalars to other bosonic states become very small because of the suppressed cos($\beta - \alpha$). In addition, the mass spectrum of the non-standard scalar bosons becomes quasi-degenerate.
These constraints limit the observability of such a 2HDM at colliders. We have studied in detail the interplay between high-scale validity and the discernibility of the scenario at the LHC and at a future muon collider. At the LHC, signatures of the CP-even boson $H$ and the CP-odd boson $A$ are studied through their decays into the $4l$ and $l^+ l^- b \bar{b}$ channels respectively. The search turns challenging due to the stringent upper bound on cos($\beta - \alpha$). A sizable signal significance demands an upper bound on tan$\beta$, in contrast to the high-scale validity constraints, which predict no such bound. An analysis at the 14 TeV LHC including detector effects reveals that $H$ and $A$ of masses around 500 GeV can be simultaneously observed in their respective channels with at least 3$\sigma$ confidence when the integrated luminosity is 3000 fb$^{-1}$. The observability improves upon lowering the cut-off scale: attaining a 5$\sigma$ statistical significance becomes possible when the cut-off is near $10^{11}$ GeV. Radiative return at the muon collider yields sizable production rates of $H$ or $A$. We have studied their observation prospects through their subsequent decay to the $b \bar{b}$ final state. Contrary to the results obtained for the LHC, the $\mu^+ \mu^-$ machine can lead to a 5$\sigma$ statistical significance even if the scalar mass is 1 TeV. Thus a certain complementarity of roles between the LHC and a muon collider is noticed. The former has a relatively lower mass reach but clearly differentiates the $H$-peak from the $A$-peak, while the latter loses this distinction by being forced to look at the $b \bar{b}$ decay mode, though up to higher (pseudo)scalar masses.
\section{Acknowledgements}
We thank Subhadeep Mondal and Jyotiranjan Beuria for useful discussions. This work was partially supported by funding available from the Department of Atomic Energy, Government of India, for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute.
\bibliographystyle{JHEP}
\section{Introduction}
Let $G$ denote a smooth quadric in $\mathbb{P}^5$ over the field of complex numbers, considered as the Pl\"ucker quadric parametrizing lines in $\mathbb{P}^3$. A {\it quadratic complex} or, to be more precise, a {\it quadratic line complex} is by definition a complete intersection $X = F \cap G$ with a quadric $F \subset \mathbb{P}^5$ different from $G$. A quadratic complex can equivalently be described by the pencil of quadrics spanned by $F$ and $G$. Hence the Segre symbol $\sigma = \sigma(X)$ of a quadratic complex is well defined. For the definition see Section 2. The first aim of the paper is to construct the moduli spaces of quadratic complexes with a fixed Segre symbol. Let $SO(G)$ denote the special orthogonal group associated to the quadric $G$. Two quadratic complexes $X_1$ and $X_2$ are isomorphic if and only if there is a matrix $A \in SO(G)$ such that $X_2 = A(X_1)$. This gives an action of $SO(G)$ on the space of quadratic complexes, and the notion of semistable quadratic complexes is well defined. It turns out (Corollary \ref{cor3.3}) that a quadratic complex $X$ is semistable if and only if its discriminant admits at least two different roots, or equivalently if its Segre symbol consists of at least two brackets. On the other hand, a quadratic complex is non-reduced (respectively reducible) if and only if its Segre symbol contains a bracket of length 5 (respectively 4). Hence all irreducible and reduced quadratic complexes are semistable. The irreducible and reduced quadratic complexes with a fixed Segre symbol $\sigma$ form an irreducible variety on which the group $SO(G)$ acts and such that all orbits are of the same dimension. This implies that the coarse moduli space $\mathcal{M}_{qc}(\sigma)$ of these quadratic complexes exists and is a quasiprojective variety of dimension $r-2$, where $r$ denotes the number of brackets of the Segre symbol $\sigma$ (see Theorem \ref{teo4.7}).\\ The classical method for investigating quadratic complexes is to study the set of lines in $\mathbb{P}^3$ parametrized by the points of $X$. For any point $p \in \mathbb{P}^3$, the lines in $\mathbb{P}^3$ passing through $p$ are parametrized by a plane $\alpha(p)$ contained in $G$. For a general point $p \in \mathbb{P}^3$ the intersection $\alpha(p) \cap F$ is a smooth conic. The set $S = S(X) = \{p \in \mathbb{P}^3: \mbox{rk}(\alpha(p) \cap F) \leq 2 \}$ is a surface in $\mathbb{P}^3$, not necessarily irreducible. It is called the {\it singular surface in $\mathbb{P}^3$} associated to the quadratic complex $X$. The surface $S(X)$ is a quartic whose singularities depend on the Segre symbol $\sigma(X)$. The next aim of the paper is to construct the moduli space of these quartics. The group $SL(4) = SL(4,\mathbb{C})$ acts in a natural way on the space of quartics in $\mathbb{P}^3$. Hence it makes sense to talk about semistability of these quartics. We show (see Proposition \ref{prop6.2}) that a quartic surface in $\mathbb{P}^3$ is semistable with respect to the action of $SL(4)$ if and only if it does not admit a triple point whose tangent cone is a cone over a cuspidal plane cubic (possibly degenerate). We use this to construct the moduli space $\mathcal{M}_{ss}(\sigma)$ of quartics with a singularity of type $\sigma$ (Theorem \ref{teo6.4}). The moduli space $\mathcal{M}_{qc}(\sigma)$ is constructed using the action of the group $SO(G) \simeq SO(6)$, whereas the moduli space $\mathcal{M}_{ss}(\sigma)$ is constructed with respect to the action of the group $SL(4)$. Note that there is an isomorphism $$ PSO(6) \simeq PSL(4).
$$ We use this isomorphism to show that the map associating to a quadratic complex $X$ its singular surface $S(X)$ induces a morphism $$ \pi: \mathcal{M}_{qc}(\sigma) \longrightarrow \mathcal{M}_{ss}(\sigma). $$ In the classical literature (see e.g. \cite{J}) two quadratic complexes are called {\it cosingular} if their singular surfaces are isomorphic. So the fibres of the morphism $\pi$ are just the varieties of cosingular complexes. In \cite{K} Klein showed that the variety of cosingular complexes in the generic case, i.e. for $\sigma = [111111]$, is just the projective line. We show (see Table 7.3) that the varieties of cosingular complexes are generically curves, except in one case. Finally, in the last section we reprove Klein's result using our setup.\\ We would like to thank G.-M. Greuel for pointing out the paper \cite{FK} to us.\\ {\it Notation}: A quadric $F \subset \mathbb{P}^n$ is, up to a nonzero constant, given by a symmetric matrix of degree $n+1$. By a slight abuse of notation we denote the matrix by the same letter as the quadric it defines. We work over the field of complex numbers. Most of the results are valid over an arbitrary algebraically closed field of characteristic $\neq 2$; however, some of the computations use complex numbers. Hence the groups $SO(G), SO(6), SL(4)$ etc. always mean the corresponding complex Lie groups. \\
\section{The Segre symbol of a quadratic complex}
Let $G \subset \mathbb{P}^5$ be a smooth quadric, which we fix in the sequel. We consider $G$ as the Pl\"ucker quadric, although not always with Pl\"ucker coordinates. In fact, we choose the coordinates appropriate to the statement we want to prove. However, in any case, a point $x \in G$ represents a line in $\mathbb{P}^3$, denoted by $l_x$. Let $F$ denote a quadric in $\mathbb{P}^5$, different from $G$. The complete intersection $$ X = F \cap G $$ parametrizes a set of lines in $\mathbb{P}^3$, which is classically called a quadratic line complex. We call the variety $X$ itself the {\it quadratic line complex} or, for short, the {\it quadratic complex} defined by $F$. A quadratic line complex determines a pencil of quadrics in $\mathbb{P}^5$, namely $$ {\cal P} = \{Q_{(\lambda:\mu)} = \lambda F + \mu G\; | \; (\lambda:\mu) \in \mathbb{P}^1 \} $$ which we call the pencil {\it associated} to the quadratic line complex. Note that the space of pencils of quadrics is by definition the Grassmannian $Gr(2,21)$ of lines in $\mathbb{P}^{20}$ and thus of dimension 38, whereas, as we shall see, the space of quadratic line complexes is of dimension 19 only. Pencils of quadrics are classified according to their Segre symbol. The {\it Segre symbol of the quadratic complex} is by definition the Segre symbol of the associated pencil of quadrics. Let us recall the definition of the Segre symbol: The {\it discriminant of the pencil} ${\cal P}$ is by definition the binary sextic $$ \Delta=\Delta(\lambda,\mu ):=\det(\lambda F + \mu G). $$ The discriminant $\Delta$ of ${\cal P}$ depends on the choice of the matrices $F$ and $G$. The roots of $\Delta$, however, are uniquely determined up to an isomorphism of $\mathbb{P}^1$. In particular, the multiplicities of the roots are uniquely determined. Suppose $({\bar \lambda}:{\bar \mu})$ is a root of $\Delta$. It may also happen that all the subdeterminants of ${\bar \lambda} F +{\bar \mu} G$ of a certain order vanish.
Suppose that all subdeterminants of order $6-d$ vanish for some $d \geq 0$, but not all subdeterminants of order $5-d$. This means that the quadric $Q_{({\bar \lambda}:{\bar \mu})}$ is a $d$-cone with vertex a linear space of dimension $d$ and directrix a smooth quadric in a linear subspace of dimension $4-d$ in $\mathbb{P}^5$. Let $l_i$ denote the minimum multiplicity of the root $({\bar \lambda}:{\bar \mu})$ in the subdeterminants of order $6-i$, for $i=0,1,\dots,d$. Then $l_i > l_{i+1}$ for all $i$, so that $e_i:=l_i-l_{i+1} > 0$, and we have: $$ \Delta(\lambda,\mu) = (\lambda {\bar \mu} - {\bar \lambda} \mu)^{e_0} \dots (\lambda {\bar \mu} - {\bar \lambda} \mu)^{e_{d}} \Delta_1(\lambda,\mu), $$ with $\Delta_1({\bar \lambda},{\bar \mu}) \neq 0$. The numbers $e_i$ are called the {\it characteristic numbers} of the root $({\bar \lambda}:{\bar \mu})$ and the factors $(\lambda {\bar \mu} -{\bar \lambda} \mu)^{e_i}$ are called the {\it elementary divisors} of the pencil ${\cal P}$. If $(\lambda_i:\mu_i)\mbox{ for }i=1,\dots,r$ are the roots of $\Delta$ and $e_0^j,\dots, e_{d_j}^j$ the characteristic numbers associated to the root $(\lambda_j:\mu_j)$ and $d_1 \geq d_2 \geq \dots \geq d_r$, then $$ \sigma_X = \sigma_{\cal P} = [(e_0^1\dots e_{d_1}^1)(e_0^2\dots e_{d_2}^2) \dots(e_0^r \dots e_{d_r}^r)] $$ is called the {\it Segre symbol} of the quadratic complex $X$ or of the pencil ${\cal P}$. The parentheses are omitted if $d_i=0$. In order to make it unique, we assume that the expressions $(e_0^i,\ldots,e_{d_i}^i)$ are ordered lexicographically if $d_i = d_j$. We call these expressions the {\it brackets} of the Segre symbol $\sigma_X$ or of the pencil ${\cal P}$ (even if the parentheses are omitted, i.e. $d_i = 0$). It is a classical fact (see e.g. \cite[p. 278]{HP}) that two pencils of quadrics ${\cal P}_1$ and ${\cal P}_2$ in $\mathbb{P}^n$, whose discriminants have roots exactly at $(\lambda_i^1:\mu_i^1)$ and $(\lambda_i^2:\mu_i^2)$, are isomorphic, that is, projectively equivalent in $\mathbb{P}^n$, if and only if they have the same Segre symbol and there is an automorphism of $\mathbb{P}^1$ taking $(\lambda_i^1:\mu_i^1)$ to $(\lambda_i^2:\mu_i^2)$ for all $i$, where the brackets corresponding to $(\lambda_i^1:\mu_i^1)$ and $(\lambda_i^2:\mu_i^2)$ are of the same type. This can be used to define a normal form for those pencils ${\cal P}$ whose discriminant is not identically zero (see \cite[p. 280]{HP}): For every $e^i_j$ occurring in the Segre symbol of $X$ consider the $e^i_j \times e^i_j$-matrices $$ F_{ij} =\left(\begin{array}{ccccc}0&0&\dots&1& \frac{\lambda_i}{\mu_i}\\ 0&\dots&1& \frac{\lambda_i}{\mu_i}&0\\ \dots&\dots&\dots&\dots&\dots\\ 1&\frac{\lambda_i}{\mu_i}&0&\dots&0\\ \frac{\lambda_i}{\mu_i}&0&0&\dots&0 \end{array}\right)\; \mbox{ and }\; G_{ij}=\left(\begin{array}{ccccc}0&0&\dots&0&1\\ 0&0&\dots&1&0\\ \dots&\dots&\dots&\dots&\dots\\ 0&1&\dots&0&0\\ 1&0&\dots&0&0 \end{array}\right). $$ The coordinates of $\mathbb{P}^5$ can be chosen in such a way that $F$ and $G$ are given as block diagonal matrices as follows $$ F = \mbox{diag} (F_{11},\cdots,F_{rd_r}) \quad \mbox{and} \quad G = \mbox{diag} (G_{11},\cdots,G_{rd_r}). $$ We call these coordinates {\it Segre coordinates} of the quadratic complex $X$. Note that Segre coordinates are not uniquely determined. \begin{rem} {\em If $F$ and $G$ are given in Segre coordinates, the matrix $(G^{-1}F)^t$ will be in Jordan normal form.
This gives another way to determine the Segre normal form: If $X = F \cap G$, choose the coordinates in such a way that the matrix $(G^{-1}F)^t$ is in Jordan normal form. Then the Segre normal form can be read off from this.} \end{rem} From the Segre normal form it is easy to derive the following lemma. \begin{lem} Let $(e_0,\cdots,e_d)$ denote a bracket in the Segre symbol of a quadratic complex. Then $e_0 \geq e_1 \geq \cdots \geq e_d$. \end{lem} The lemma is valid for any pencil of quadrics in $\mathbb{P}^n$ whose general quadric is smooth. The proof is the same. \begin{proof} The Segre symbol of a pencil $\lambda F + \mu G$ does not depend on the coordinates chosen. Hence we may choose Segre coordinates for the pencil. In particular, we can choose the coordinates in such a way that the matrix ${\bar F}$ of the block in the matrix $F$ corresponding to the root $({\bar \lambda}:1)$, which belongs to the bracket $(e_0, \cdots,e_d)$, is of the form $$ {\bar F} = \mbox{diag}({\bar F}_0, \cdots, {\bar F}_d) \quad \mbox{with} \quad {\bar F}_i = \left(\begin{array}{ccccc}0&0&\dots&1& {\bar \lambda}\\ 0&\dots&1& {\bar \lambda}&0\\ \dots&\dots&\dots&\dots&\dots\\ 1&{\bar \lambda}&0&\dots&0\\ {\bar \lambda}&0&0&\dots&0 \end{array}\right) $$ such that the sizes ${\bar e}_i$ of the matrices ${\bar F}_i$ are ordered as follows: ${\bar e}_0 \geq {\bar e}_1 \geq \cdots \geq {\bar e}_d$. We have to show that $e_i = {\bar e_i}$ for all $i$. For this it suffices to show that the minimum multiplicity of the root ${\bar \lambda}$ in the subdeterminants of order $6-i$ of the matrix ${\bar F}$ is $l_i= \sum_{j=i}^d {\bar e_j}$. Clearly we have $l_i \geq \sum_{j=i}^d {\bar e_j}$ for all $i$, and it only remains to show that the minimum is attained. For $i=0$ this is clear. For $i=1$, cancel the last row and column of the matrix $\bar F$ and compute the corresponding minor to see this. Then proceed successively, always cancelling the row and column given by the last row and column of the submatrix ${\bar F}_i$. The corresponding minor always realizes the minimum. \end{proof} The quadratic line complex $X$ is by definition the base locus of the pencil ${\cal P}$. Thus $X$ is the intersection of any two different quadrics of the pencil. The brackets in the Segre symbol correspond 1-1 to the cones in the pencil. We call the cone $Q_{(\lambda_i:\mu_i)}$ corresponding to the bracket $(e_0^i,\dots,e_{d_i}^i)$ a cone {\it of type} $(e_0^i,\dots,e_{d_i}^i)$. The quadric $Q_{(\lambda_i:\mu_i)}$ is then a $d_i$-cone and the corresponding root in the discriminant $\Delta$ is a root of multiplicity $e^i:=\sum_{j=0}^{d_i} e^i_j$. By a slight abuse of notation we call $Q_{(\lambda_i:\mu_i)}$ a {\it $d$-cone of multiplicity} $e^i$. The following table gives a list of the singularities of $X$ corresponding to the brackets occurring in this paper. \begin{center} \begin{tabular}{c|c|c|c} bracket& dim of vertex & vertex $\cap \; X$ & type of singularities\\ \hline \hline 1 & 0 & $\emptyset$ & no singularities\\ \hline 2 & 0 & 1 point & $A_1$ \\ \hline 3 & 0 & 1 point & $A_2$ \\ \hline 4 & 0 & 1 point & $A_3$ \\ \hline $(11)$ & 1 & 2 different points & $A_1$ \\ \hline $(21)$ & 1 & 1 point & $A_2$ \\ \hline $(22)$ & 1 & 1 point & $A_3$ \\ \hline $(111)$ & 2 & smooth conic $C$ & $X$ singular along $C$\\ \hline $(211)$ & 2 & rank 2 conic $C$ & $X$ singular along $C$ \\ \hline \end{tabular} \end{center}
\section{Semistable quadratic complexes}
Recall that we fixed a smooth quadric $G$ in $\mathbb{P}^5$.
Considering the projective space $\mathbb{P}^{20}$ as the space parametrizing nontrivial quadrics in $\mathbb{P}^5$, a quadratic line complex is given by a line in $\mathbb{P}^{20}$ passing through $G$. Thus the space of quadratic line complexes can be considered as the closed subvariety $$ LC = \{ L \in Gr(2,21) \;|\; G \in L \} $$ of the Grassmannian of lines in $\mathbb{P}^{20}$. Two quadratic line complexes $X_1$ and $X_2$ are called {\it isomorphic} if there is an automorphism $A$ of $\mathbb{P}^5$ with $X_2 = A(X_1)$ which fixes the quadric $G$. The group of automorphisms of $\mathbb{P}^5$ fixing the quadric $G$ is by definition the group $PSO(G) \simeq PSO(6,\mathbb{C})$. We work instead with the finite covering $SO(G)$. Hence we get an action of the reductive group $SO(G)$ on the projective variety $LC$. Since $LC$ clearly admits an $SO(G)$-linearized line bundle, the notion of semistability is well defined for quadratic line complexes. In order to determine the semistable quadratic line complexes, we will use various coordinates of $\mathbb{P}^5$. Let us recall the relation between the corresponding special orthogonal groups. Let $G$ and $G'$ denote the matrices of the Pl\"ucker quadric with respect to two coordinate systems. We normalize the matrices such that the determinants of $G$ and $G'$ are $1$ and denote the corresponding groups by $SO(G)$ and $SO(G')$. Let $A$ denote a matrix of the coordinate change, so that $A^t G A = G'$ and $A^t = A^{-1}$. Then \begin{equation} \label{eq2.1} SO(G) \rightarrow SO(G'), \qquad g \mapsto h=A^t g A \end{equation} is an isomorphism of groups. Now choose the coordinates in such a way that $G$ is given by the matrix $1_6$. In classical terminology the corresponding coordinates are called {\it Klein coordinates}. Here the corresponding group $SO(G)$ coincides with the usual special orthogonal group $SO(6)$. Let $S_0$ denote the space of quadrics in $\mathbb{P}^5$ with trace 0, i.e. the vector space of nonzero symmetric $6 \times 6$-matrices of trace 0 modulo $\mathbb{C}^*$. Obviously $S_0 \simeq \mathbb{P}^{19}$ and the group $SO(6)$ acts on $S_0$ by $(g,M) \mapsto g^tMg$. \begin{prop} \label{prop3.1} There is a canonical isomorphism $$ \Phi: LC \rightarrow S_0 $$ compatible with the actions of $SO(G)$ and $SO(6)$. In particular the variety of quadratic complexes is isomorphic to $\mathbb{P}^{19}$. \end{prop} Note that the coordinates for $LC$ can be chosen arbitrarily. That is the reason for denoting the group acting on $LC$ by $SO(G)$. \begin{proof} Given any quadratic complex $X$, choose Klein coordinates. Then the associated pencil of quadrics $\{\lambda F + \mu G \;|\; (\lambda:\mu) \in \mathbb{P}^1 \}$ contains exactly one quadric of trace 0, namely $$ F_0 = F - \frac{{\rm tr}\, F}{6}G. $$ Certainly this definition does not depend on the choice of $F$. Conversely, if $F_0$ is a non-zero quadratic form of trace 0, then $F_0$ and $G \;(=1_6)$ are linearly independent and thus determine a quadratic line complex. Certainly the maps $X \mapsto F_0$ and $F_0 \mapsto X$ are algebraic and inverse to each other, giving the canonical isomorphism as stated. In both cases the special orthogonal group acts by conjugation, and according to (\ref{eq2.1}) this is independent of the chosen coordinates. Hence the maps are compatible with the given actions.
\end{proof} As a consequence of Proposition \ref{prop3.1} we get that a quadratic complex is semistable with respect to the action of $SO(G)$ if and only if the associated quadric of trace 0 in $\mathbb{P}^5$ is semistable with respect to the action of $SO(6)$. The next proposition gives a criterion for an arbitrary quadric $$ F = \sum_{i,j=1}^6 f_{ij}x_ix_j $$ with $f_{ij}=f_{ji}$ for $i \neq j$ in $\mathbb{P}^5$ to be semistable with respect to the action of $SO(6)$. \begin{prop} \label{prop3.2} The quadric $F$ in $\mathbb{P}^5$ is not semistable with respect to the action of $SO(6)$ if and only if it is equivalent under this action to a quadric $Q = (q_{ij})$ with $$ q_{ij} =0 \;\mbox{ for all }\; 1 \leq i,j \leq 3 \quad \mbox{and} \quad q_{14} = q_{15} = q_{16} = q_{25} = q_{35} = q_{45} = 0. $$ \end{prop} In other words, a quadric $F$ in $\mathbb{P}^5$ is semistable with respect to the action of $SO(6)$ if and only if there is no $g \in SO(6)$ such that \begin{equation} \label{eq3.2} g^tFg = \left(\begin{array}{cccccc}0&0&0&0&0&0\\ 0&0&0&*&0&0\\ 0&0&0&*&*&0\\ 0&*&*&*&*&*\\ 0&0&*&*&*&*\\ 0&0&0&*&*&* \end{array}\right). \end{equation} \begin{proof} Choose the coordinates of $\mathbb{P}^5$ in such a way that \begin{equation} \label{eq3.3} G = \left(\begin{array}{cc}0&1_3\\ 1_3&0 \end{array}\right). \end{equation} In classical terminology these coordinates are called {\it Pl\"ucker coordinates}. Then the matrices $diag(x_1,x_2,x_3,x_1^{-1},x_2^{-1},x_3^{-1})$ with $x_i \in \mathbb{C}^*$ form a maximal torus of $SO(6)$. This fact and the form of the Weyl group of $SO(6)$ imply that every 1-parameter subgroup of $SO(6)$ is conjugate to one of the form $$ \lambda: \mathbb{C}^* \rightarrow SO(6), \qquad t \mapsto diag(t^{r_1},t^{r_2},t^{r_3},t^{r_4},t^{r_5},t^{r_6}) $$ with integers $r_1 \geq r_2 \geq r_3 \geq 0$ and $r_4 = -r_1, r_5=-r_2, r_6=-r_3$, acting on the space of quadrics in the usual way. In particular it acts on a monomial $x_ix_j$ of degree 2 by $$ \lambda(t)(x_ix_j) = t^{-r_i-r_j}x_ix_j \qquad \mbox{for} \qquad 1 \leq i \leq j \leq 6. $$ Defining $$ \mu(F,\lambda) = \max\{r_i + r_j \;| \; f_{ij} \ne 0\}, $$ the Hilbert-Mumford criterion implies that it suffices to show that for a given quadric $Q = (q_{ij})$ there exists a $\lambda$ as above with $\mu(Q,\lambda) < 0$ if and only if the coefficients in the statement of the proposition vanish. It is easy to see that if there exists a 1-parameter group $\lambda$ as above with $\mu(Q,\lambda) < 0$, then the coefficients vanish. For example, if $q_{15} \neq 0$, then $\mu(Q,\lambda) \geq r_1 + r_5 = r_1 - r_2 \geq 0$, a contradiction. Conversely, suppose all these coefficients vanish. So $$ Q = 2q_{24}x_2x_4 + 2q_{34}x_3x_4 + 2q_{35}x_3x_5 + \sum_{i,j=4}^6 q_{ij}x_ix_j. $$ Taking $r_1=3, r_2=2$ and $r_3 = 1$ we get $$ \mu(Q,\lambda) \leq \max(2r_4,2r_5,2r_6,r_4+r_5,r_4+r_6,r_5+r_6,r_2+r_4,r_3+r_4,r_3+r_5)= -1 < 0. $$ This completes the proof of the proposition. \end{proof} \begin{cor} \label{cor3.3} A quadratic complex $X$ is semistable with respect to the action of $SO(G)$ if the discriminant $\Delta(X)$ has at least two different roots, i.e. if its Segre symbol consists of at least 2 brackets. \end{cor} \begin{proof} According to Proposition \ref{prop3.1} a quadratic complex in Klein coordinates is semistable if and only if the quadric $\Phi(X)$ of trace 0 is semistable. Changing to Pl\"ucker coordinates, the statement remains true, since according to (\ref{eq2.1}) the matrix $A$ of the coordinate change satisfies $A^t=A^{-1}$.
Then $G$ is given by (\ref{eq3.3}). Let $F_0$ denote the matrix $\Phi(X)$ transformed into Pl\"ucker coordinates, i.e. $F_0 = A^t\Phi(X)A$. According to Propositions \ref{prop3.1} and \ref{prop3.2}, $X$ is not semistable if and only if $F_0$ is equivalent under the action of $SO(6)$ to a matrix of the form of the right hand side of (\ref{eq3.2}). Since the multiplicities of the roots of $\Delta(X)$ stay the same under a change of coordinates, we may even assume that $F_0$ is of this form. But then clearly $\Delta(X)(\lambda,\mu) = \lambda^6$. In particular $\Delta$ has only one root. \end{proof} \begin{rem} {\em One could even work out which irreducible quadratic complexes (see Lemma \ref{lem4.1} below) are not semistable. They are exactly those with Segre symbols $[6], [(51)], [(42)], [(33)], [(411)],$ $[(321)]$ and $[(222)]$.} \end{rem}
\section{Moduli spaces of quadratic complexes}
In this section we construct the moduli spaces of quadratic complexes with a fixed Segre symbol. First we need some preliminaries. Recall that a quadratic complex is called {\it irreducible} (respectively {\it non-reduced}) if it is irreducible (respectively non-reduced) as a variety in $\mathbb{P}^5$. \begin{lem} \label{lem4.1} (1) A quadratic complex is non-reduced if and only if its Segre symbol contains a bracket of length 5;\\ (2) A quadratic complex is reducible if and only if its Segre symbol contains a bracket of length $4$. \end{lem} \begin{proof} (1): Suppose the Segre symbol of a quadratic complex $X$ contains a bracket of length 5. It corresponds to a 4-cone, which is a double plane $F$ in $\mathbb{P}^5$. Hence $X = F \cap G$ is non-reduced. Conversely, if $X$ is non-reduced, it follows from the Jacobi criterion that the associated pencil contains a double plane. The bracket corresponding to it is of length 5. (2): Suppose the Segre symbol of $X$ contains a bracket of length 4. It corresponds to a 3-cone $F$. The directrix of $F$ is a nonsingular quadric in $\mathbb{P}^1$, i.e. consists of 2 different points. Hence $F$, and thus $X$, is reducible. Conversely, suppose $X$ is reducible. No component of $X$ can be of degree 1, since otherwise the smooth quadric $G$ would contain a projective space of dimension 3. Hence $X = X_1 \cup X_2$ with smooth 3-dimensional quadrics $X_1$ and $X_2$. Let $H_i \;(\simeq \mathbb{P}^4)$ denote the linear span of $X_i$ for $i=1$ and 2. Then $F_0 := H_1 \cup H_2$ is a quadric in $\mathbb{P}^5$ such that $$ X = F_0 \cap G. $$ Two different $\mathbb{P}^4$'s in $\mathbb{P}^5$ intersect in a $\mathbb{P}^3$. This means that $F_0$ is a 3-cone. The bracket corresponding to it is of length 4. \end{proof} \begin{lem} \label{lem4.2} Let $X$ and $X'$ be quadratic complexes with associated pencils $\{\lambda F + \mu G \;|\; (\lambda: \mu) \in \mathbb{P}^1\}$ and $\{\lambda' F' + \mu' G \;|\; (\lambda':\mu') \in \mathbb{P}^1\}$ and roots $(\lambda_i:\mu_i)$ and $(\lambda'_j:\mu'_j)$ of the corresponding discriminants. Then the quadratic complexes $X$ and $X'$ are isomorphic if and only if they have the same Segre symbol and there is an automorphism of $\mathbb{P}^1$ fixing $(0:1)$ and carrying $(\lambda_i:\mu_i)$ to $(\lambda'_i:\mu'_i)$ for all $i$, where $(\lambda_i:\mu_i)$ and $(\lambda'_i:\mu'_i)$ correspond to brackets of the same type. \end{lem} \begin{proof} For the proof we choose Segre coordinates.
As quoted already in Section 2, the corresponding pencils are isomorphic if and only if they have the same Segre symbol and there is an automorphism $(x_0:x_1) \mapsto (ax_0 + bx_1:cx_0+dx_1)$ of $\mathbb{P}^1$ carrying $(\lambda_i:\mu_i)$ into $(\lambda'_i:\mu'_i)$ for all $i$, where the brackets of $(\lambda_i:\mu_i)$ and $(\lambda'_i:\mu'_i)$ are of the same type. Now an isomorphism of quadratic complexes maps $G$ onto $G$. This just means that the automorphism of the associated pencils fixes the point $(0:1)$ of $\mathbb{P}^1$. \end{proof} \begin{cor} \label{cor4.3} If $\sigma$ is a Segre symbol with at most 2 brackets, then all quadratic complexes with Segre symbol $\sigma$ are isomorphic. \end{cor} \begin{proof} Let $X_1$ and $X_2$ be quadratic complexes with Segre symbol $\sigma$. Since the discriminants $\Delta(X_1)$ and $\Delta(X_2)$ admit $r \leq 2$ roots, there is an automorphism of $\mathbb{P}^1$ fixing $(0:1)$ and carrying the roots of $\Delta(X_1)$ onto the roots of $\Delta(X_2)$. So the assertion follows from Lemma \ref{lem4.2}. \end{proof} Hence the moduli space of quadratic line complexes with a fixed Segre symbol consists of a point only whenever the corresponding discriminant admits at most 2 different roots. In particular the varieties of cosingular complexes are not interesting for these Segre symbols. We assume in the sequel that $\sigma$ is a Segre symbol with the following 2 properties: \begin{equation} \label{eq4} \sigma \; \mbox{does not contain any brackets of length} \geq 4; \end{equation} \begin{equation}\label{eq5} \sigma \; \mbox{consists of at least 3 brackets}. \end{equation} According to Lemma \ref{lem4.1}, (\ref{eq4}) implies that every quadratic complex with Segre symbol $\sigma$ is irreducible and reduced, and (\ref{eq5}) means, as we shall see, that the corresponding moduli space is positive dimensional. There are exactly 23 Segre symbols with the properties (\ref{eq4}) and (\ref{eq5}), see Table 7.3 below. Let $\sigma$ be one of them. We want to construct the moduli space ${\cal M}(\sigma)$ of quadratic complexes with Segre symbol $\sigma$. \begin{lem} \label{lem4.4} The quadratic line complexes with Segre symbol $\sigma$ are parametrized by a quasiprojective subvariety $R(\sigma)$ of the variety $LC \simeq \mathbb{P}^{19}$ of all quadratic complexes. \end{lem} \begin{proof} Clearly the quadratic complexes of Segre normal form with Segre symbol $\sigma$ are parametrized by a quasiprojective variety $\tilde{R}(\sigma)$. In fact, $\tilde{R}(\sigma) \simeq (\mathbb{P}^1)^r \setminus \{\mbox{diagonals}\}$, where $r$ denotes the number of brackets in $\sigma$. Since every quadratic complex is isomorphic to one in Segre normal form, $R(\sigma)$ is the image of the map $$ SO(G) \times {\tilde R}(\sigma) \rightarrow LC, \qquad (g,X) \mapsto g^tXg $$ and as such a quasiprojective subvariety of $LC$. \end{proof} The group $SO(G)$ acts on the variety $R(\sigma)$ in an obvious way. We have to determine the stabilizer of any $X \in R(\sigma)$. \begin{lem} \label{lem4.5} Let the Segre symbol $\sigma$ satisfy (\ref{eq4}) and (\ref{eq5}) and suppose that $\sigma$ consists of $r_i$ brackets of length $i$ for $i = 1,2$ and 3. Then the stabilizer of any quadratic complex $X \in R(\sigma)$ in $SO(G)$ is of dimension $$ \dim Stab(X) = r_2 + 3r_3, $$ except in the case $\sigma = [(22)11]$, where it has dimension 2. In particular the dimension of the stabilizer depends only on the Segre symbol $\sigma$ and not on the quadratic complex $X \in R(\sigma)$.
\end{lem} \begin{proof} A matrix $A \in SL(6)$ is in the stabilizer of $X$ if and only if $A^tGA=G$ and $A^tFA=F$. Since the pencil is in Segre's normal form, $G=G^{-1}$, and these equations are equivalent to \begin{equation} \label{eqn6} A^t=GA^{-1}G \quad \mbox{and} \quad GFA=AGF. \end{equation} Suppose first that every cone is a $0$-cone. We have to show that the stabilizer is zero-dimensional. It suffices to show that the stabilizer of every pair of blocks corresponding to a $0$-cone of multiplicity $d$ is zero-dimensional. The blocks $G_i$ of $G$ and $F_i$ of $F$ in the Segre normal form are given by the $d \times d$ matrices: \[ G_i=\left(\begin{array}{ccccc}0&0&\dots&0&1\\ 0&0&\dots&1&0\\ \dots&\dots&\dots&\dots&\dots\\ 0&1&\dots&0&0\\ 1&0&\dots&0&0 \end{array}\right) \mbox{ and }\; F_i=\left(\begin{array}{ccccc}0&0&\dots&1&\lambda\\ 0&\dots&1& \lambda&0\\ \dots&\dots&\dots&\dots&\dots\\ 1&\lambda&0&\dots&0\\ \lambda&0&0&\dots&0 \end{array}\right)\, \] Consider the $d\times d$ matrix $A=\left(a_{ij}\right)$. Computing $G_iF_iA$ and $AG_iF_i$, we see from the second equation of (\ref{eqn6}) that $A$ must be of the form: \[ A=\left(\begin{array}{ccccc}a_{11}&0&\dots&0&0\\ a_{21}&a_{11}&\dots&0&0\\ \dots&\dots&\dots&\dots&\dots\\ a_{(d-1),1}&a_{(d-2),1}&\dots&a_{11}&0\\ a_{d1}&a_{(d-1),1}&\dots&a_{21}&a_{11} \end{array}\right)\] Using this, we deduce from $A^tG_iA=G_i$ that \[G_i = \left(\begin{array}{cccccc}2a_{11}a_{d1}+2a_{21}a_{(d-1),1}+\dots&\dots&\dots&\dots&2a_{11}a_{21}&a_{11}^2\\ 2a_{11}a_{(d-1),1}+2a_{21}a_{(d-2),1}&\dots&\dots&\dots&a_{11}^2&0\\ \dots&\dots&\dots&\dots&\dots&\dots\\ 2a_{11}a_{31}+a_{21}^2&2a_{11}a_{21}&a_{11}^2&0&\dots&0\\ 2a_{11}a_{21}&a_{11}^2&0&0&\dots&0\\ a_{11}^2&0&0&0&\dots&0 \end{array}\right)\] This implies that $A = \pm \mathrm{id}$, and thus the assertion in this case. Note that the result does not depend on $\lambda$, which means that the dimension of the stabilizer does not depend on the chosen quadratic complex. Since the proof in the remaining cases is analogous, we omit the details. To be more precise, we distinguish the following cases, and the proof always uses the equations (\ref{eqn6}). First assume the pencil has $k_1$ $1$-cones, none with a bracket of type $(22)$, and all other cones are $0$-cones. In this case the stabilizer is $k_1$-dimensional. In case the pencil $\mathcal{P}$ has a $2$-cone, it can have only this one $2$-cone according to our hypotheses. Segre's normal form has therefore one block of either of these 2 forms: $G_i= \mathds{1}$ and $F_i = \lambda \mathds{1}$ or $G_i=\left(\begin{array}{ccc}0&1&0\\ 1&0&0\\ 0&0&\mathds{1} \end{array}\right)\; \mbox{ and }\; F_i=\left(\begin{array}{ccc}1&\lambda&0\\ \lambda&0&0\\ 0&0&\lambda\mathds{1} \end{array}\right)$. In the first case, the equation $A^t F_i A=F_i$ implies $A^t=A^{-1}$ and the stabilizer is $SO(3)$, which is clearly 3-dimensional. In the second case the stabilizer is also seen to be 3-dimensional. It follows that if the pencil has $k_0$ $0$-cones and $k_1$ $1$-cones (none with a bracket of type $(22)$) and $k_2 \; (= 1)$ $2$-cones, the stabilizer has dimension $k_1+ 3k_2$. Finally consider the exceptional case $\sigma = [(22)11]$. The reason for the increase of the dimension of the stabilizer in this case is that there are two $2 \times 2$ blocks corresponding to the same root of the discriminant and neither of these blocks is diagonal.
\end{proof} As a consequence of Lemmas \ref{lem4.2}, \ref{lem4.4} and \ref{lem4.5} we obtain \begin{cor} \label{cor4.6} With the above assumptions, let $r = r_1 + r_2 + r_3$ be the number of brackets in $\sigma$. Then we have $$ \dim R(\sigma) = r + 13 - \dim Stab(X) $$ where $X$ is any quadratic complex with Segre symbol $\sigma$. \end{cor} The main result of this section is the following theorem. \begin{teo} \label{teo4.7} Let $\sigma$ be a Segre symbol satisfying (\ref{eq4}) and (\ref{eq5}) consisting of $r$ brackets. Then the moduli space ${\cal M}(\sigma)$ of quadratic complexes exists and is a quasiprojective variety of dimension $r-2$. \end{teo} \begin{proof} According to Lemma \ref{lem4.4} the variety $R(\sigma)$ parametrizes all quadratic complexes with Segre symbol $\sigma$. The group $SO(G)$ acts on $R(\sigma)$ and two quadratic complexes are isomorphic if and only if they differ by this action. According to Corollary \ref{cor3.3} every element of $R(\sigma)$ is semistable with respect to the action of $SO(G)$. Hence according to \cite[Theorem 3.14]{Ne} a good quotient ${\cal M}(\sigma)$ of $R(\sigma)$ modulo the action of $SO(G)$ exists, is a quasiprojective variety and parametrizes the closed orbits. Since by Lemma \ref{lem4.5} all orbits are of the same dimension, all orbits are closed in $R(\sigma)$. Hence ${\cal M}(\sigma)$ parametrizes the isomorphism classes of quadratic complexes with Segre symbol $\sigma$. For the dimension we have according to Corollary \ref{cor4.6} $$ \dim {\cal M}(\sigma) = \dim R(\sigma) - \dim SO(G) + \dim Stab(X) = r-2. $$ \end{proof}
\section{The singular surface of a quadratic complex}
Recall that a point $x \in G$ represents a line in $\mathbb{P}^3$, which we denote by $l_x$. The points of $G$ which correspond to the lines in $\mathbb{P}^3$ passing through a particular point $p \in \mathbb{P}^3$ form a plane $\alpha(p)$ contained in $G$. Similarly, the points of $G$ corresponding to lines in $\mathbb{P}^3$ lying in a plane $h \subset \mathbb{P}^3$ form a plane $\beta(h)$ contained in $G$. Therefore we have two systems of planes on $G$, and in fact these are the only planes in $G$. Two distinct planes of the same system meet exactly in one point, while two planes of different systems are either disjoint or meet exactly in a line. Conversely, every line on $G$ is contained in exactly one plane of each of the two systems. Following Newstead \cite{N}, we call the planes $\alpha(p)$ {\it $\alpha$-planes} and the planes $\beta(h)$ {\it $\beta$-planes}. Now let $\sigma$ denote a Segre symbol satisfying properties (\ref{eq4}) and (\ref{eq5}) and consider a quadratic line complex $$ X = F \cap G $$ with Segre symbol $\sigma$. For a general point $p \in \mathbb{P}^3$ the intersection $\alpha(p) \cap F$ is a smooth conic in $\alpha(p)$. The set $$ S = \{p \in \mathbb{P}^3 : \mbox{rk} (\alpha(p) \cap F) \leq 2\} $$ is a surface in $\mathbb{P}^3$, not necessarily irreducible. It is called the {\it singular surface in $\mathbb{P}^3$} associated to the complex $X$. Clearly it does not depend on the quadric $F$ defining the quadratic complex, but only on the complex $X$ itself. The set $$ R = \{p \in \mathbb{P}^3 : \mbox{rk} (\alpha(p) \cap F) \leq 1\} $$ is an algebraic subset of dimension $\leq 1$ of $S$. For $p \in S \setminus R$, the intersection $\alpha(p) \cap F$ parametrizes two different pencils of lines in $\mathbb{P}^3$ intersecting in the common point $p$, called the {\it focus} of the two {\it cofocal pencils}.
For $p \in R$, either the intersection $\alpha(p) \cap F$ parametrizes one pencil of lines in $\mathbb{P}^3$ counted twice, or the plane $\alpha(p)$ is contained in $X$. In particular $S$ is the set of foci of pencils of lines in the complex $X$.\\ We now define a surface $\Sigma \subset X \subset \mathbb{P}^5$ closely related to $S$. For any $x \in X$ the line $l_x \subset \mathbb{P}^3$ is called a {\it singular line} of the complex $X$ {\it at a point} $p \in l_x$ if the plane $\alpha(p)$ is contained in the tangent space $T_xF$. This means that the line $l_x$ belongs to more than one pencil of the complex (or to one pencil counted twice). If the line $l_x$ is singular at the point $p$, the point $p$ is certainly contained in the surface $S$. Conversely, for $p \in S \setminus R$ there is a unique singular line at $p$, namely the line of intersection of the two corresponding cofocal pencils. If $p \in R$, any line $l_x$ through $p$ is singular at $p$. We will see that the set $$ \Sigma := \{x \in X : l_x \; \mbox{is a singular line of the complex}\; X \} $$ is a surface in $X$, not necessarily irreducible. We call it the {\it singular surface in} $\mathbb{P}^5$ associated to $X$. In order to work out the relation between $\Sigma$ and $S$ we need the following lemma. \begin{lem} \label{lem5.1} Let $x \in \Sigma$.\\ {\em (a)} If $x$ is a smooth point of $X$, the line $l_x$ is singular at exactly one point $p \in l_x$.\\ {\em (b)} If $x$ is a singular point of $X$, the line $l_x$ is singular at any point $p \in l_x$. \end{lem} \begin{proof} (a): Suppose the line $l_x$ is singular at the points $p \neq q$. Since $\alpha(p) \cap \alpha(q) = \{x\}$, the linear span of $\alpha(p)$ and $\alpha(q)$ in $\mathbb{P}^5$ is the whole tangent space $T_xG$. Hence $\alpha(p)$ and $\alpha(q)$ cannot both be contained in $T_xF \neq T_xG$.\\ (b): If $x$ is a singular point of $X$, we have $T_xG \subset T_xF$. So for any point $p \in l_x$, $\alpha(p) \subset T_xG \subset T_xF$. \end{proof} \begin{rem} \label{rem5.2} {\em If the $\alpha$-plane $\alpha(p)$ is contained in $X$, any line passing through $p$ is singular exactly at $p$. If the $\beta$-plane $\beta(h)$ is contained in $X$, any line in the plane $h$ is singular at exactly one point $p$, and this gives a bijection $\beta(h) \setminus Sing(X) \rightarrow h \setminus Sing(S)$.} \end{rem} As a consequence of Lemma \ref{lem5.1} we can define a map $$ \pi: \Sigma \setminus Sing(X) \rightarrow S $$ by associating to each $x \in \Sigma \setminus Sing(X)$ the unique point $p \in l_x$ at which the line $l_x$ is singular. \begin{lem} \label{lem5.3} For any smooth point $x \in X$ the following conditions are equivalent:\\ {\em (1)} $x \in \Sigma$;\\ {\em (2)} there is a $y \in G, \; y \neq x$, such that $T_xF = T_{y}G$. \end{lem} \begin{proof} (1) $\Rightarrow$ (2): Suppose $l_x$ is singular at $p \in \mathbb{P}^3$, i.e. $\alpha(p) \subset T_xF$. Since $x$ is a smooth point of $X$, the tangent space $T_xF$ has projective dimension 4 and $G$ vanishes on $\alpha(p)$, which has projective dimension 2. This implies that the restriction of $G$ to $T_xF$ is singular. So $T_xF$ is tangent to $G$ at some point $y \in G$. Clearly $y \neq x$, since otherwise $x$ would be a singular point of $X$. (2) $\Rightarrow$ (1): Suppose $T_xF = T_yG$ for some $y \neq x$. The line $\overline{xy} \subset \mathbb{P}^5$ is contained in $G$ and hence it is contained in exactly one plane of each system of planes of $G$. Let $\alpha(p)$ be the one corresponding to a point $p \in \mathbb{P}^3$.
Then $\alpha(p) \subset T_xF$, i.e. $x \in \Sigma$. \end{proof} We may assume that $F$ is a smooth quadric, as is the case for the Segre normal form. The following theorem shows that $\Sigma$ is a complete intersection surface in $\mathbb{P}^5$. \begin{teo} \label{teo5.4} Let $H$ denote the quadric defined by the symmetric matrix $H = FG^{-1}F$. Then \begin{equation} \label{eq6} \Sigma = F \cap G \cap H. \end{equation} \end{teo} \begin{proof} According to Lemma \ref{lem5.1} the line $l_x$ is singular for any singular point $x$ of $X$, i.e. whenever $T_xF =T_xG$. Together with Lemma \ref{lem5.3} we get that for any $x \in X$ we have: $x \in \Sigma$ if and only if $T_xF = T_{x'}G$ for some point $x' \in G$ (not necessarily different from $x$). The dual coordinates of the tangent space $T_xF$, considered as a point in ${\mathbb{P}^5}^*$, are $x^* = Fx$, and this hyperplane is tangent to $G$ if and only if $(x^*)^t G^{-1}x^* = 0$. This implies the assertion. \end{proof} \begin{rem} {\em According to the definition, $\Sigma$ depends only on $X$ and not on the choice of $F$. This can be seen also from the description in the theorem, since, if $F$ is replaced by $F + \lambda G$, then $H$ is replaced by $H + 2\lambda F + \lambda^2 G$.} \end{rem} Theorem \ref{teo5.4} implies that the map $\pi : \Sigma \setminus Sing(X) \rightarrow S$ as defined above can be described in a way which can be applied to compute the singular surface. For this we choose the coordinates of $\mathbb{P}^5$ in such a way that the matrix $G$ satisfies $G = G^{-1}$. This is the case, for example, for Pl\"ucker, Klein and Segre coordinates. Then we have \begin{prop} \label{prop5.5} For any $x \in \Sigma \setminus Sing(X)$ the point $Fx$ is in $G$ and the map $\pi: \Sigma \setminus Sing(X) \rightarrow S$ is given by \begin{equation} \label{eq7} \pi(x) = l_x \cap l_{Fx}. \end{equation} \end{prop} \begin{proof} Suppose $x \in \Sigma$ is a smooth point of $X$. Since $G = G^{-1}$, we have according to (\ref{eq6}) that $x \in \Sigma$ if and only if $$ x^tGx=0, \quad x^tFx = 0 \quad \mbox{and} \quad x^tFGFx = 0. $$ The last equation can also be read as $(Fx)^tG(Fx) = 0$, which means that $Fx \in G$. Now with the coordinates $Y = (Y_1, \ldots, Y_6)$ of $\mathbb{P}^5$, $$ (Fx)^tY = 0 $$ is the equation of $T_xF$ as well as the equation of $T_{Fx}G$, so that $T_xF = T_{Fx}G$. This implies that the line $\overline{x,Fx}\subset \mathbb{P}^5$ is contained in $G$. It follows that the lines $l_x$ and $l_{Fx}$ intersect in a point $p \in \mathbb{P}^3$ and that the line $l_x$ is singular at $p$. This completes the proof of the proposition. \end{proof}
\section{Semistable quartics in $\mathbb{P}^3$}
A quartic surface in $\mathbb{P}^3$ is, up to a nonzero constant, defined by a quartic form \begin{equation} \label{eq6.1} S = \sum_{i_0+i_1+i_2+i_3=4} a_{i_0i_1i_2i_3}X_0^{i_0}X_1^{i_1}X_2^{i_2}X_3^{i_3}. \end{equation} As usual we denote the quartic surface and the quartic form by the same letter. Two quartic surfaces $S$ and $S'$ are isomorphic if there is a $\varphi \in SL(4)$ such that $\varphi(S) = S'$. This defines an action of $SL(4)$ on the projective space $\mathbb{P}^{34}$ parametrizing all quartic surfaces, which we want to analyze. The line bundle $L = \mathcal{O}_{\mathbb{P}^3}(4)|_S$ is $SL(4)$-linearizable, so that we can speak about semistability of points in $\mathbb{P}^{34}$.
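Before stating the instability criterion, it is convenient to make the relevant Hilbert-Mumford computation explicit. The following sketch (in Python, with a hypothetical dictionary representation of the coefficients $a_{i_0i_1i_2i_3}$; it is an illustration added here, not part of the construction) computes the weight $\mu(f,\lambda)$ of a quartic against a diagonal one-parameter subgroup, as used in the proofs below.
\begin{verbatim}
# Weight mu(f, lambda) of a quartic form on P^3 against the diagonal
# one-parameter subgroup t -> diag(t^r0, t^r1, t^r2, t^r3), sum(r) = 0.
# "quartic" is a dict mapping exponent tuples (i0, i1, i2, i3) with
# i0 + i1 + i2 + i3 = 4 to their coefficients; f is assumed nonzero.

def mu(quartic, r):
    """Return max of r0*i0 + r1*i1 + r2*i2 + r3*i3 over all monomials
    with nonzero coefficient; the quartic is unstable for this subgroup
    iff this maximum is negative."""
    return max(sum(ri * ii for ri, ii in zip(r, i))
               for i, a in quartic.items() if a != 0)
\end{verbatim}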
\begin{lem} \label{lem6.1} A quartic surface in $\mathbb{P}^3$ is semistable with respect to the action of $SL(4)$ if and only if it is not isomorphic to a quartic (\ref{eq6.1}) with $$ a_{4000} = a_{3100} = a_{3010}=a_{3001} = a_{2200} = a_{2110} = a_{2101} = a_{2020} = a_{2011} = a_{2002} $$ $$ = a_{1300} = a_{1210} = a_{1201} = a_{1120} = a_{1111} = 0. $$ \end{lem}
\begin{proof} Consider the 1-parameter groups $$ \lambda : \mathbb{C}^* \rightarrow SL(4), \qquad t \mapsto diag(t^{r_0},t^{r_1},t^{r_2},t^{r_3}) $$ with integers $r_0 \geq r_1 \geq r_2 \geq r_3$, $\sum r_i =0$, acting on the space $\mathbb{P}^{34}$ of quartics in $\mathbb{P}^3$ in the usual way. In particular $\lambda(t)$ acts on a monomial $x_0^{i_0}x_1^{i_1}x_2^{i_2}x_3^{i_3}$ of degree 4 by $$ \lambda(t)(x_0^{i_0}x_1^{i_1}x_2^{i_2}x_3^{i_3}) = t^{-(r_0i_0+r_1i_1+r_2i_2+r_3i_3)}x_0^{i_0}x_1^{i_1}x_2^{i_2}x_3^{i_3}. $$ Defining $$ \mu(f,\lambda) = \max\{r_0i_0+r_1i_1+r_2i_2+r_3i_3 \;| \; a_{i_0i_1 i_2 i_3} \ne 0\}, $$ the Hilbert-Mumford criterion implies that it suffices to show that for a given quartic $f$ there exists a $\lambda$ as above with $\mu(f,\lambda) < 0$ if and only if the coefficients $a_{i_0i_1 i_2 i_3}$ of the lemma vanish. It is easy to see that if there exists a one-parameter group $\lambda$ as above with $\mu(f,\lambda) < 0$, then the coefficients vanish. For example, if $a_{3001} \neq 0$, then $\mu(f,\lambda) \geq 3r_0+ r_3 = (r_0-r_1)+(r_0-r_2) \geq 0$. Conversely, suppose all these coefficients vanish; if we take $r_0=8,r_1=-1,r_2=-3,r_3=-4$, then $\mu(f,\lambda) = -1 < 0$. This completes the proof of the lemma. \end{proof}
\begin{prop} \label{prop6.2} A quartic surface $S \subset \mathbb{P}^3$ is semistable with respect to the action of $SL(4)$ if and only if it does not admit a triple point whose tangent cone is a cone over a cuspidal plane cubic (possibly degenerate). \end{prop}
\begin{proof} According to Lemma \ref{lem6.1}, $S$ is not semistable if and only if it is isomorphic to a surface $S' \subset \mathbb{P}^3$ with a triple point at $e_0 = (1:0:0:0)$ whose tangent cone $TC(e_0)$ is given in coordinates $y_i = \frac{x_i}{x_0}$ for $i=1,2,3$ by \begin{equation} \label{eq6.2} TC(e_0) = a_{1102}y_1y_3^2 + a_{1030}y_2^3 + a_{1021}y_2^2y_3 + a_{1012}y_2y_3^2 + a_{1003}y_3^3. \end{equation} Considered as a plane projective curve, $TC(e_0)$ is a cubic with a cusp at $(1:0:0)$ (or a degeneration of it). Being isomorphic to $S'$, the surface $S$ itself has a singularity of this type. Conversely, since all cuspidal plane cubics are isomorphic, every surface $S \subset \mathbb{P}^3$ with a singularity of this type is isomorphic to a surface $S' \subset \mathbb{P}^3$ with a triple point at $(1:0:0:0)$ whose tangent cone is of the form (\ref{eq6.2}). By Lemma \ref{lem6.1}, $S$ is not semistable. \end{proof}
Using this we can construct various moduli spaces of quartic surfaces in $\mathbb{P}^3$. Recall that the projective space $\mathbb{P}^{34}$ parametrizes all quartics in $\mathbb{P}^3$. Equisingularity induces a stratification of $\mathbb{P}^{34}$ into locally closed algebraic varieties (not necessarily irreducible). Certainly the group action of $SL(4)$ restricts to an action on the strata. We may assume that the dimension of the stabilizers is fixed on the strata by refining the stratification if necessary. Since semistability of a quartic depends only on the singularities of the quartic, we may call a stratum {\it semistable} if one quartic in it is.
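The weight computation in the proof of Lemma \ref{lem6.1} is mechanical and easy to check by machine. The following Python fragment is a minimal sketch of such a check (all names are ours; the set \texttt{vanishing} encodes the fifteen coefficients of the lemma): it enumerates the 35 monomials of degree 4 and confirms that, for the weights $(r_0,r_1,r_2,r_3)=(8,-1,-3,-4)$, every surviving monomial has negative weight, so that $\mu(f,\lambda)=-1<0$.
\begin{verbatim}
from itertools import product

# exponent vectors (i0,i1,i2,i3) of all monomials of degree 4 in x0,...,x3
monomials = [e for e in product(range(5), repeat=4) if sum(e) == 4]

# the fifteen coefficients required to vanish in Lemma 6.1
vanishing = {(4,0,0,0),(3,1,0,0),(3,0,1,0),(3,0,0,1),(2,2,0,0),
             (2,1,1,0),(2,1,0,1),(2,0,2,0),(2,0,1,1),(2,0,0,2),
             (1,3,0,0),(1,2,1,0),(1,2,0,1),(1,1,2,0),(1,1,1,1)}

r = (8, -1, -3, -4)                     # weights of the 1-parameter group
weight = lambda e: sum(ri*ei for ri, ei in zip(r, e))

print(max(weight(e) for e in monomials if e not in vanishing))  # -> -1
print(min(weight(e) for e in vanishing))                        # -> 0
\end{verbatim}
The first printed value is exactly the $\mu(f,\lambda)=-1$ used in the proof; the second shows that, for this particular one-parameter group, none of the vanishing monomials could have contributed a negative weight.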
Let $\sigma$ be a Segre symbol satisfying the assumptions (\ref{eq4}) and (\ref{eq5}), i.e. $\sigma$ consists of at least 3 brackets and does not contain any bracket of length $\geq 4$. One deduces from the equations of the normal forms given in \cite{J} and Section 7 that the quartics occurring as singular surfaces of quadratic complexes with Segre symbol $\sigma$ all have the same type of singularities. Let $Z_{\sigma} \subset \mathbb{P}^{34}$ denote the corresponding stratum. We call the quartics of $Z_{\sigma}$ {\it singular surfaces of type $\sigma$}.
\begin{lem} \label{lem6.3} {\em (a)} The strata $Z_{\sigma}$ are locally closed subsets of the projective space $\mathbb{P}^{34}$ parametrizing all quartics in $\mathbb{P}^3$. {\em (b)} Any singular surface $S$ of a quadratic complex of type $\sigma$ is semistable with respect to the action of $SL(4)$. \end{lem}
\begin{proof} (a): The singularities of the singular quartics of the quadratic complexes with Segre symbol $\sigma$ are (analytically) locally trivial in the sense of \cite{FK}. It is shown in \cite{FK} (see Corollary 0.2 and the proof of 0.3) that the locus of these quartics is analytically locally closed in the base space $\mathbb{P}^{34}$. Since $\mathbb{P}^{34}$ is projective, this implies the assertion. (b): According to Proposition \ref{prop6.2}, $S$ is not semistable if and only if it admits a triple point whose tangent cone is a cone over a cuspidal cubic or a degeneration of it. Checking the equations of $S$ given in \cite{J} and Section 7 below, one sees that this is not the case (it suffices to check this for the most degenerate cases). \end{proof}
\begin{teo} \label{teo6.4} The moduli space of singular surfaces of type $\sigma$ $$ \mathcal{M}_{ss}(\sigma) = Z_{\sigma}/SL(4) $$ exists and is a quasiprojective variety. \end{teo}
\begin{proof} According to Lemma \ref{lem6.3} all elements of $Z_{\sigma}$ are semistable. As for Lemma \ref{lem4.5} one checks that all stabilizers are of the same dimension. This implies that all orbits are of the same dimension. As in Theorem \ref{teo4.7} we conclude that a geometric quotient $\mathcal{M}_{ss}(\sigma) = Z_{\sigma}/SL(4)$ exists and that its points parametrize the classes of isomorphic quartics in $Z_{\sigma}$. \end{proof}
\begin{rem} \label{rem6.5} {\em It is well known that the singular surface of a generic quadratic complex is a {\it Kummer surface}, i.e. a quartic surface in $\mathbb{P}^3$, smooth apart from 16 ordinary double points. Moreover, every Kummer surface appears as the singular surface of a generic quadratic complex. So in particular we have constructed the moduli space $\mathcal{M}_{\kappa} := \mathcal{M}_{ss}(\sigma)$, $\sigma = [111111]$, of Kummer surfaces. Using the normal form for a Kummer surface (see \cite{J} or \cite{Hu}) one checks that $\dim Z_{\sigma} = 18$. On the other hand, it is easy to see that Kummer surfaces have finite stabilizer in $SL(4)$ (see e.g. \cite[Exercise V.5.1 (3)]{Be}). From this we conclude $$ \dim \mathcal{M}_{\kappa} = 3. $$ Gonzalez-Dorrego \cite{GD} uses the normal form for Kummer surfaces (see \cite[p.~98]{J} or \cite[p.~81]{Hu}) to construct the moduli space $\mathcal{M}_{\kappa}$ as follows: The normal forms parametrize a 3-dimensional quasiprojective variety $\tilde{Z}$. There is a finite group $N$ (an extension of the symmetric group $S_6$ by ${\mathbb F}_2^4$), which is a subgroup of $SL(4)$ in a natural way and thus acts on $\tilde{Z}$.
The quotient $\tilde{\mathcal{M}}_{\kappa} = \tilde{Z}/N$ is the moduli space of Kummer surfaces. The embedding $\tilde{Z} \rightarrow Z_{\sigma}$ induces a canonical isomorphism $\tilde{\mathcal{M}}_{\kappa} \simeq \mathcal{M}_{\kappa}$.} \end{rem}
\section{The varieties of cosingular complexes}
Let $\sigma$ be a Segre symbol satisfying the assumptions (\ref{eq4}) and (\ref{eq5}), i.e. $\sigma$ consists of at least 3 brackets and does not contain any bracket of length $\geq 4$. In Theorem \ref{teo4.7} we constructed the moduli space $\mathcal{M}_{qc}(\sigma)$ of quadratic complexes of type $\sigma$ and in Theorem \ref{teo6.4} the moduli space $\mathcal{M}_{ss}(\sigma)$ of quartic surfaces of type $\sigma$. As above let $R(\sigma)$ and $Z_{\sigma}$ denote the spaces parametrizing quadratic complexes and singular surfaces of type $\sigma$ as in Sections 4 and 6. In Section 5 we associated to every quadratic complex in $R(\sigma)$ a singular surface in $Z_{\sigma}$. This induces a map $$ \pi: R(\sigma) \rightarrow Z_{\sigma}, $$ which is certainly holomorphic. According to Sections 4 and 6 the groups $SO(6)$ and $SL(4)$ act on $R(\sigma)$ and $Z_{\sigma}$ in a natural way. Certainly these actions factorize via actions of $PSO(6)$ and $PSL(4)$ respectively.
\begin{lem} \label{lem7.1} There is an isomorphism $\iota: PSO(6) \rightarrow PSL(4)$ such that the map $\pi: R(\sigma) \rightarrow Z_{\sigma}$ is equivariant with respect to the actions of $PSO(6)$ and $PSL(4)$, i.e. $$ \pi(A \cdot X) = \iota(A) \cdot \pi(X) $$ for every $A \in PSO(6)$ and $X \in R(\sigma)$. \end{lem}
\begin{proof} This is a well-known fact (see \cite[Section 19.1]{FH}). In fact, the equivariance of the map $\pi$ can be used to define the isomorphism $\iota$: The points of $\mathbb{P}^3$ parametrize a family of planes, namely the $\alpha$-planes, in the Pl\"ucker quadric $G$, and the action of $PSO(6)$ on $\mathbb{P}^5$ induces an action on this $\mathbb{P}^3$. This gives exactly the isomorphism $\iota$ of the lemma. \end{proof}
\begin{rem} {\em The equivariance of the map $\pi: R(\sigma) \rightarrow Z_{\sigma}$ can also be expressed in terms of the actions of $SO(6)$ and $SL(4)$: there is a surjective homomorphism $\kappa: SL(4) \rightarrow SO(6)$ with kernel of order 2 such that $\pi(\kappa(\alpha) \cdot X) = \alpha \cdot \pi(X)$ for every $\alpha \in SL(4)$ and $X \in R(\sigma)$.} \end{rem}
Lemma \ref{lem7.1} implies that the map $\pi: R(\sigma) \rightarrow Z_{\sigma}$ induces a morphism of the corresponding moduli spaces, which we denote by the same letter $$ \pi: \mathcal{M}_{qc}(\sigma) \rightarrow \mathcal{M}_{ss}(\sigma). $$ Two quadratic complexes $X$ and $X'$ in $\mathcal{M}_{qc}(\sigma)$ are called {\it cosingular} if their singular surfaces are isomorphic, i.e. if $\pi(X) = \pi(X')$. The {\it variety $CS(X)$ of quadratic complexes cosingular to} $X$ is by definition the fibre of the map $\pi$ over the surface $\pi(X)$: $$ CS(X) := \pi^{-1}(\pi(X)). $$ In the generic case $\sigma = [111111]$ the varieties $CS(X)$ have been investigated by Klein in \cite{K}. It is the aim of this section to compute the dimension of $CS(X)$ for a generic complex $X \in \mathcal{M}_{qc}(\sigma)$ for every Segre symbol $\sigma$ satisfying equations (\ref{eq4}) and (\ref{eq5}). The result is given in Table 7.3 below.\\ The order of the Segre symbols is chosen as in \cite[pp.~230--232]{J}. We omit here, however, the quadratic complexes whose Segre symbol has fewer than 3 brackets.
We only give the equation of the singular surface when it is either not given or given incorrectly (i.e. with a typo) in \cite{J}. For the other cases we refer to the corresponding section of \cite{J}. \newpage $$ {\bf Table \; 7.3} $$
\begin{center} \begin{tabular}{c|c|c|c|c|c} & Segre symbol $\sigma$ & singular surface $S$& $\dim \mathcal{M}_{qc}(\sigma)$ & $\dim \mathcal{M}_{ss}(\sigma)$ & $\dim \pi^{-1}(S)$\\ \hline \hline 1 & $[111111]$ & see \cite[p.~98]{J} & 4 & 3 & 1 \\ \hline 2 & $[21111]$ & see \cite[No 171]{J} & 3 & 2 & 1 \\ \hline 3 & $[3111]$ & see \cite[No 180]{J} & 2& 1 & 1\\ \hline 4 & $[411]$ & see \cite[No 194]{J} & 1& 0 & 1\\ \hline 5 & $[2211]$ & see \cite[No 186]{J} & 2& 1 & 1\\ \hline 6 & $[321]$ & see \cite[No 198]{J} & 1 & 0 & 1\\ \hline 7 & $[222]$ & see \cite[No 203]{J}& 1 & 0 & 1\\ \hline 8 & $[(11)1111]$ & see \cite[No 162]{J} &3 & 2 & 1\\ \hline 9 & $[(11)211]$ & see \cite[No 173]{J} &2 & 1 & 1\\ \hline 10 & $[(11)31]$ & see \cite[No 182]{J} & 1& 0 & 1\\ \hline 11 & $[(11)22]$ & see Case 11 below& 1 &0 & 1\\ \hline 12 & $[(21)111]$ & see \cite[No 172]{J} & 2& 1 & 1\\ \hline 13 & $[(21)21]$ & see \cite[No 188]{J}& 1 & 0 & 1\\ \hline 14 & $[(31)11]$ & see Case 14 below &1 & 0 &1 \\ \hline 15 & $[(22)11]$ & see \cite[No 187]{J} & 1& 0 & 1\\ \hline 16 & $[(11)(11)11]$ & see \cite[No 167]{J} & 2& 1 & 1\\ \hline 17 & $[(11)(11)2]$ & see \cite[No 177]{J} & 1& 0 & 1\\ \hline 18 & $[(21)(11)1]$ & see Case 18 below & 1& 0 & 1\\ \hline 19 & $[(11)(11)(11)]$ & coordinate tetrahedron &1 & 0 & 1\\ \hline 20 & $[(111)111]$ & see Case 20 below & 2& 0 & 2\\ \hline 21 & $[(111)(11)1]$ & same as in Case 19 &1 & 0 & 1 \\ \hline 22 & $[(111)21]$ & see \cite[No 176]{J} &1 & 0 &1 \\ \hline 23 & $[(211)11]$ & see Case 23 below &1 & 0 & 1\\ \end{tabular} \end{center}
\noindent {\bf Case 11}: $\sigma = [(11)22]$\\ The equations of $G$ and $F$ are $$ G = x_1^2 + x_2^2 + 2x_3x_4 + 2x_5x_6 $$ $$ F= \lambda_1(x_1^2 + x_2^2) + 2\lambda_2x_3x_4 + 2\lambda_3x_5x_6 + x_3^2 + x_5^2 $$ and the equation of the singular surface is $$ S = \lambda_1^2y_1^2y_3^2 + (\lambda_1-\lambda_2)^2y_1^2y_4^2 -4\lambda_1\lambda_2(\lambda_1-\lambda_2)y_1y_2y_3y_4. $$
\noindent {\bf Case 14}: $\sigma = [(31)11]$\\ The equations of $G$ and $F$ are $$ G = x_1^2 + x_2^2 + x_3^2 + x_5^2 + 2x_4x_6 $$ $$ F = \lambda_1x_1^2 + \lambda_2x_2^2 + \lambda_3(x_3^2 + x_5^2 + 2x_4x_6) + 2x_4x_5. $$ Then the equation of the singular surface $S$ is $$ S= (\lambda_1 - \lambda_2)(y_1^4 + y_4^4) + 8(\lambda_1 - \lambda_3)(\lambda_2 - \lambda_3)y_1y_4(y_1y_3-y_2y_4) + 2(\lambda_1 + \lambda_2 - 2 \lambda_3)y_1^2y_4^2. $$
\noindent {\bf Case 18}: $\sigma = [(21)(11)1]$\\ The equations of $G$ and $F$ are \begin{equation} \label{eqn1} G = x_1^2 + x_2^2 + x_3^2 + x_4^2 + 2x_5x_6 \end{equation} $$ F = \lambda_1(x_1^2 + x_2^2) + \lambda_3x_3^2 + \lambda_4(x_4^2 + 2x_5x_6) +x_5^2. $$ Then $S$ is given by $$ S = (\lambda_1 - \lambda_4)(\lambda_3 - \lambda_4)(y_1y_3+y_2y_4)^2 - 4(\lambda_3- \lambda_1)y_1^2 y_4^2. $$
\noindent {\bf Case 20}: $\sigma = [(111)111]$\\ The quadric $G$ is given by $\sum_{i=1}^{6} x_i^2$ whereas $$ F = \lambda_1(x_1^2 + x_2^2 + x_3^2)+ \lambda_4 x_4^2 + \lambda_5 x_5^2 + \lambda_6 x_6^2. $$ Then the equation of $S$ is $$ S = (y_1y_3 - y_2y_4)^2. $$
\noindent {\bf Case 23}: $\sigma = [(211)11]$\\ Let $G$ be as in (\ref{eqn1}) and $F$ be given by $$ F = \lambda_1 x_1^2 + \lambda_2 x_2^2 + \lambda_3(x_3^2 + x_4^2 + 2x_5x_6) + x_5^2. $$ Then $$ S = y_1^2y_4^2.
$$
\begin{proof} For the proof of the second column (the equations of the singular surfaces) we applied the classical method as outlined in \cite{J}, but using Maple 9.5. The third column is a consequence of Theorem \ref{teo4.7}. For the proof of the fourth column we showed, in all the cases where $\dim \mathcal{M}_{ss}(\sigma) =0$, again using Maple, that any two singular surfaces of that type are isomorphic. Since it is well known that the moduli space of Kummer surfaces is 3-dimensional, we can conclude the remaining dimensions by general arguments. To give an example, the variety $\mathcal{M}_{ss}([21111])$ is in the closure of the irreducible variety $\mathcal{M}_{ss}([111111])$ and the variety $\mathcal{M}_{ss}([(21)111])$ is in the closure of $\mathcal{M}_{ss}([21111])$. So $\dim \mathcal{M}_{ss}([(21)111]) \leq 1$. But $\mathcal{M}_{ss}([(21)111])$ cannot be of dimension 0, since it is irreducible and the 0-dimensional variety $\mathcal{M}_{ss}([(21)(11)1])$ is in its closure. Hence $\dim \mathcal{M}_{ss}([(21)111]) = 1$ and we can conclude $\dim \mathcal{M}_{ss}([21111]) = 2$. Finally, the dimension of $CS(X)$ for a general $X \in \mathcal{M}_{qc}(\sigma)$ is given as the difference of the dimensions of $\mathcal{M}_{qc}(\sigma)$ and $\mathcal{M}_{ss}(\sigma)$. \end{proof}
\noindent {\bf Remark 7.4.} It is easy to work out the singularities of the singular surface in every case. For example, the singularities of the singular surface of type $[21111]$ consist of one line and 8 points. The classical authors in general did not mention the points (see \cite{J}; some singular points are, however, computed in \cite{W}).
\section{Cosingular complexes in the generic case}
In this section we investigate the variety of cosingular complexes of a generic quadratic complex. Let $\sigma = [111111]$, which we assume throughout this section. In particular $\mathcal{M}_{ss}(\sigma)$ is the moduli space of Kummer surfaces. Since $\pi : \mathcal{M}_{qc}(\sigma) \rightarrow \mathcal{M}_{ss}(\sigma)$ is surjective (see Remark \ref{rem6.5}) with $\dim \mathcal{M}_{qc}(\sigma) = 4$ and $\dim \mathcal{M}_{ss}(\sigma) = 3$, a general fibre of $\pi$ is of dimension 1. We will see that all fibres are curves in this case. The main ideas of this section are due to Klein (see \cite{K}). We reformulate his arguments using our set-up.\\
Consider a fixed generic complex $X = F \cap G$ in Segre normal form (which in this case is the same as the Klein normal form), i.e. \begin{equation} \label{eq7.0} G = \sum_{i=1}^6 x_i^2=0 \quad \mbox{and} \quad F = \sum_{i=1}^6 \lambda_i x_i^2 = 0 \end{equation} with $\lambda_i \in \mathbb{C}$ pairwise different. For any $\rho \in \mathbb{C}$ with $\rho \neq \lambda_i$ for $i = 1, \ldots,6$, consider the quadric $F_{\rho}$ with equation \begin{equation} \label{eq7.1} F_{\rho} = \sum_{i=1}^6 \frac{x_i^2}{\lambda_i - \rho} = 0. \end{equation}
\begin{lem} \label{lem7.3} Let $\Sigma$ (respectively $\Sigma_{\rho}$) denote the singular surface in $\mathbb{P}^5$ of the complex $X = F \cap G$ (respectively $X_{\rho} = F_{\rho} \cap G$). Then there is an automorphism $\varphi$ of $\mathbb{P}^5$ such that $\varphi(\Sigma) = \Sigma_{\rho}$. \end{lem}
\begin{proof} According to Theorem \ref{teo5.4}, the surface $\Sigma$ is the complete intersection in $\mathbb{P}^5$ with equations \begin{equation} \label{eq7.2} \sum_{i=1}^6 x_i^2 = \sum_{i=1}^6 \lambda_ix_i^2 = \sum_{i=1}^6 \lambda_i^2 x_i^2 = 0.
\end{equation} Similarly $\Sigma_{\rho}$ is given by the equations \begin{equation} \label{eq7.3} \sum_{i=1}^6 x_i^2 = \sum_{i=1}^6 \frac{x_i^2}{\lambda_i - \rho} = \sum_{i=1}^6 \frac{x_i^2}{(\lambda_i - \rho)^2} = 0. \end{equation} Define the automorphism $\varphi$ of $\mathbb{P}^5$, with $y = \varphi(x)$, by $$ x_i = \frac{y_i}{\lambda_i - \rho} \quad \mbox{for} \quad i = 1, \ldots 6. $$ If $x \in \Sigma$, then $$ \begin{array}{c} 0 = \sum x_i^2 = \sum \frac{y_i^2}{(\lambda_i - \rho)^2},\\ 0 = \sum \lambda_i x_i^2 = \sum \lambda_i \frac{y_i^2}{(\lambda_i - \rho)^2} - \rho \sum \frac{y_i^2}{(\lambda_i - \rho)^2} = \sum \frac{y_i^2}{\lambda_i - \rho},\\ 0 = \sum \lambda_i^2 x_i^2 = \sum \lambda_i^2 \frac{y_i^2}{(\lambda_i - \rho)^2} - 2 \rho \sum \lambda_i \frac{y_i^2}{(\lambda_i - \rho)^2} + \rho^2 \sum \frac{y_i^2}{(\lambda_i - \rho)^2} = \sum y_i^2. \end{array} $$ Hence $y = \varphi(x) \in \Sigma_{\rho}$. Similarly one checks $\varphi^{-1}(\Sigma_{\rho}) \subset \Sigma$. \end{proof}
Lemma \ref{lem7.3} implies that all quadratic complexes $X_{\rho}$ are contained in the fibre $\pi^{-1}(S) = \pi^{-1}(\pi(X))$.
\begin{lem} \label{lem7.4} The quadratic complex $X$ is the limit of the complexes $X_{\rho}$ as $\rho \rightarrow \infty$. \end{lem}
\begin{proof} Fix $\rho_0 \in \mathbb{C} \setminus \{\lambda_1, \ldots , \lambda_6\}$. The complex $X_{\rho}$ can be described as $X_{\rho} = F'_{\rho} \cap G$ with $$ F'_{\rho} = -\rho ( G + (\rho_0 + \rho) F_{\rho}) = -\rho \sum_{i=1}^6 \frac{\lambda_i + \rho_0}{\lambda_i - \rho} x_i^2 = \sum_{i=1}^6 \frac{\lambda_i + \rho_0}{1-\frac{\lambda_i}{\rho}}x_i^2. $$ Hence $\lim_{\rho \rightarrow \infty} F'_{\rho} = \sum_i \lambda_i x_i^2 + \rho_0 \sum_i x_i^2 = F + \rho_0 G$, which gives the assertion. \end{proof}
For any $X \in \mathcal{M}_{qc}(\sigma)$ consider the family of quadratic complexes $$ {\cal C}_X = \{ X, X_{\rho} \in \mathcal{M}_{qc}(\sigma) \;|\;X_{\rho} = F_{\rho} \cap G \; \mbox{with} \; F_{\rho} \; \mbox{as in} \; (\ref{eq7.1}) \}. $$ Certainly the index $\rho$ in $X_{\rho}$ depends on the chosen quadric $F_{\rho}$. However, we have
\begin{prop} \label{prop7.5} For a quadratic complex $X' = F' \cap G \in \mathcal{M}_{qc}(\sigma)$ with $ F' = \sum_{i=1}^6 \lambda'_i x_i^2$ the following statements are equivalent\\ {\em (1)} $X' \in {\cal C}_X$;\\ {\em (2)} there is a matrix $A = (a_{ij}) \in SL(2)$ such that for $i=1, \ldots,6$, $$\lambda'_i = \frac{a_{11}\lambda_i + a_{12}}{a_{21}\lambda_i + a_{22}}.$$ \end{prop}
\begin{proof} (1) $\Rightarrow$ (2): If $X' = X$, choose $A = {\bf 1}_2$. So let $X' = X_{\rho}$ with $\rho \neq \lambda_i$ for $i = 1, \ldots, 6$. Then $A = \left( \begin{array}{cc} 0&-1\\ 1&-\rho \end{array} \right) \in SL(2)$ with $X' = F' \cap G$ where $F' = \sum_i \frac{-1}{\lambda_i - \rho} x_i^2$, i.e. $\lambda'_i = \frac{-1}{\lambda_i - \rho}$. (2) $\Rightarrow$ (1): Let $A \in SL(2)$ be as in (2). If $a_{21} =0$, then $F' = \frac{a_{11}}{a_{22}}F + \frac{a_{12}}{a_{22}}G$ and thus $X' = X$. If $a_{21} \neq 0$, then $$ \lambda'_i = \frac{\frac{a_{11}}{a_{21}}(a_{21} \lambda_i + a_{22}) + a_{12} - \frac{a_{11}a_{22}}{a_{21}}}{a_{21}\lambda_i + a_{22}} = \frac{a_{11}}{a_{21}} - \frac{1}{a_{21}^2(\lambda_i + \frac{a_{22}}{a_{21}})} $$ and thus $F' = -\frac{1}{a_{21}^2} F_{(-\frac{a_{22}}{a_{21}})} + \frac{a_{11}}{a_{21}}G$, i.e. $X' \in {\cal C}_X$. \end{proof}
Call two quadratic complexes $X'$ and $X$ {\it equivalent}, written $X' \sim X$, if and only if $X' \in {\cal C}_X$.
Since $SL(2)$ is a group, we obtain as an immediate consequence of the proposition
\begin{cor} \label{cor7.6} The relation $\sim$ is an equivalence relation on the set $\mathcal{M}_{qc}(\sigma)$. \end{cor}
Using this we can determine the fibres of the morphism $\pi: \mathcal{M}_{qc}(\sigma) \rightarrow \mathcal{M}_{ss}(\sigma)$.
\begin{teo} \label{teo7.7} For any quadratic complex $X \in \mathcal{M}_{qc}(\sigma)$, $$ \pi^{-1}\pi(X) = {\cal C}_X. $$ \end{teo}
\begin{proof} As quotients of normal varieties by reductive groups, the moduli spaces $\mathcal{M}_{qc}(\sigma)$ and $\mathcal{M}_{ss}(\sigma)$ are normal varieties. Since the generic fibre of $\pi$ is irreducible, every fibre is connected by Zariski's connectedness theorem. Now Corollary \ref{cor7.6} implies the assertion. \end{proof}
As a consequence of Proposition \ref{prop7.5} and Theorem \ref{teo7.7} we obtain
\begin{cor} \label{cor7.8} Two quadratic complexes of $\mathcal{M}_{qc}(\sigma)$ have the same singular surface if and only if they are isomorphic as pencils of quadrics. \end{cor}
\begin{prop} \label{prop7.9} Let $X \in \mathcal{M}_{qc}(\sigma)$ be a quadratic complex. For any $\rho_1, \rho_2 \in \mathbb{C}$, $\rho_1 \neq \rho_2$, the quadratic complexes $X_{\rho_1}$ and $X_{\rho_2}$ are non-isomorphic and not isomorphic to $X$. \end{prop}
\begin{proof} It suffices to show that for any $\rho \in \mathbb{C}$ with $\rho \neq \lambda_i$ for all $i$, the complex $X_{\rho}$ is not isomorphic to $X$. Suppose $X = F \cap G$ with $F$ and $G$ as in (\ref{eq7.0}). Then $X_{\rho} = F_{\rho} \cap G$ with $F_{\rho}$ as in (\ref{eq7.1}). The roots of $\det(\lambda F + \mu G) = \prod_{i=1}^6 (\lambda \lambda_i + \mu)$ are $(\lambda: \mu) = (1:-\lambda_i)$ and the roots of $\det(\lambda F_{\rho} + \mu G) = \prod_{i=1}^6 (\lambda \frac{1}{\lambda_i - \rho} + \mu)$ are $(\lambda: \mu) = (1:-\frac{1}{\lambda_i - \rho})$. Hence according to Lemma \ref{lem4.2} the complexes $X$ and $X_{\rho}$ are isomorphic if and only if there is a permutation $\sigma$ of the indices $1, \ldots, 6$ such that the system of linear equations in $c$ and $d$ \begin{equation} \label{eq7.4} c - d \lambda_i = - \frac{1}{\lambda_{\sigma(i)} - \rho} \qquad \mbox{for} \qquad i = 1, \ldots, 6 \end{equation} admits a solution.\\ Since $\lambda_1 \neq \lambda_2$, there is a unique solution of the first two equations. If this were also a solution for $i = 3, \ldots, 6$, we would have \begin{equation} \label{eq7.5} \det \left( \begin{array}{ccc} 1& -\lambda_1& - \frac{1}{\lambda_{\sigma(1)} - \rho}\\ 1& -\lambda_2& - \frac{1}{\lambda_{\sigma(2)} - \rho}\\ 1& -\lambda_i& - \frac{1}{\lambda_{\sigma(i)} - \rho}\\ \end{array} \right) = 0 \end{equation} for $i = 3, \ldots, 6$. But for fixed $\lambda_1$ and $\lambda_2$ this has at most 2 solutions in $\lambda_i$ and $\lambda_{\sigma(i)}$, whereas $\lambda_3, \ldots, \lambda_6$ are 4 different values. Hence for any permutation $\sigma$ the linear system (\ref{eq7.4}) is unsolvable, which implies the assertion. \end{proof}
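The coordinate change used in the proof of Lemma \ref{lem7.3} can also be verified symbolically. The following \texttt{sympy} fragment is a minimal sketch (all names are ours): it substitutes $x_i = y_i/(\lambda_i - \rho)$ into the three quadrics (\ref{eq7.2}) cutting out $\Sigma$ and checks that the results are exactly the linear combinations of the three quadrics (\ref{eq7.3}) cutting out $\Sigma_{\rho}$ that appear in the proof.
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda1:7')          # lambda_1, ..., lambda_6
y   = sp.symbols('y1:7')
rho = sp.symbols('rho')

x = [yi/(li - rho) for yi, li in zip(y, lam)]   # substitution x_i = y_i/(lambda_i - rho)

# the three quadrics (7.2) of Sigma, evaluated after the substitution
q = [sum(xi**2 for xi in x),
     sum(li*xi**2 for li, xi in zip(lam, x)),
     sum(li**2*xi**2 for li, xi in zip(lam, x))]

# the three quadrics (7.3) of Sigma_rho, in the y coordinates
p = [sum(yi**2/(li - rho)**2 for yi, li in zip(y, lam)),
     sum(yi**2/(li - rho) for yi, li in zip(y, lam)),
     sum(yi**2 for yi in y)]

# q1 = p1,  q2 = p2 + rho*p1,  q3 = p3 + 2*rho*p2 + rho^2*p1
checks = [q[0] - p[0],
          q[1] - (p[1] + rho*p[0]),
          q[2] - (p[2] + 2*rho*p[1] + rho**2*p[0])]
print([sp.simplify(c) for c in checks])   # -> [0, 0, 0]
\end{verbatim}
Since these linear combinations are triangular with nonzero diagonal, the vanishing of one triple of quadrics is equivalent to the vanishing of the other, which is the content of the lemma.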
\section{Introduction} \label{sec:intro}
Error exponent analysis has been an active area of research for quite a few decades. The vast literature in this area can be categorized based on (i) whether the channel is memoryless or with memory; (ii) whether there is or is not channel output feedback to the transmitter; (iii) whether the employed coding is fixed-length or variable-length; and (iv) whether upper (converse) or lower (achievable) bounds are analyzed. In the case of memoryless channels with noiseless feedback, Schalkwijk and Kailath~\cite{ScKa66} proposed a transmission scheme for the additive white Gaussian noise (AWGN) channel with infinite error exponent. On the other hand, Dobrushin~\cite{Do62} and later Haroutunian~\cite{Ha77}, by deriving an upper bound on the error exponent for discrete memoryless channels (DMCs), showed that at least for symmetric channels there is no gain to be expected through feedback when fixed-length codes are employed. This was a strong negative result since it suggested that for DMCs, noiseless feedback can neither improve the capacity (as was well known) nor the error exponent when fixed-length codes are used. A remarkable result was derived by Burnashev in~\cite{Bu76}, where matching upper and lower bounds on the error exponent were derived for DMCs with feedback and variable-length codes. The error exponent has a simple form $E(\overline{R})=C_1(1-\overline{R}/C)$, where $\overline{R}$ is the average rate, $C$ is the channel capacity and $C_1$ is the maximum divergence that can be obtained in the channel for a binary hypothesis testing problem. Berlin et al.~\cite{BeNaRiTe09} have provided a simpler derivation of the Burnashev bound that emphasizes the link between the constant $C_1$ and the binary hypothesis testing problem. Several variable-length transmission schemes have been proposed in the literature for DMCs and their error exponents have been analyzed~\cite{Ho63, YaIt79, ShFe11}. In the case of channels with memory and feedback, the capacity was studied in~\cite{TaMi09, PeCuVaWe08, BaAn10}, and a number of capacity-achieving schemes have recently been studied in the literature~\cite{CoYuTa09, BaAn10, An12a}. The only work that studies error exponents for variable-length codes for channels with memory and feedback is~\cite{CoYuTa09}, where the authors consider finite-state channels with channel state known causally to both the transmitter and the receiver. In this work, we consider channels with memory and feedback, and derive a straight-line upper bound on the error exponent for variable-length codes. We specifically look at unifilar channels since for this family, the capacity has been characterized in an elegant way through the use of Markov decision processes (MDPs)~\cite{PeCuVaWe08}. Our technique is motivated by that of~\cite{Bu76}, i.e., studying the rate of decay of the posterior message entropy using martingale theory in two distinct regimes: large and small message entropies. A major difference between this work and~\cite{Bu76} is that we analyze the multi-step drift behavior of the communication system instead of the one-step drift that is analyzed for DMCs. This is necessitated by the fact that a one-step analysis cannot capture the memory inherent in the channel and thus results in extremely loose bounds. It is not surprising that the parameter $C_1$ in our case also relates to the maximum discrimination that can be achieved in this channel in a binary hypothesis testing problem.
In order to evaluate this quantity, we formulate two MDPs with decreasing degree of complexity, the solutions of which are upper bounds on the quantity $C_1$, with the former being tighter than the latter. The tightness of the bounds is argued based on the fact that asymptotically this is the expected performance of the best system, and by achievability results presented in the companion paper~\cite{AnWu17}. An additional contribution of this work is a complete reworking of some of the more opaque proofs of~\cite{Bu76}, resulting in a significant simplification of the exposition. We finally provide some numerical results for a number of interesting unifilar channels including the trapdoor, chemical, and other two-input/output/state unifilar channels. The main difference between our work and that in~\cite{CoYuTa09} is that for unifilar channels, the channel state is not observed at the receiver. This complicates the analysis considerably, as is evidenced by the different approaches in evaluating the constant $C_1$ in these two works. Furthermore, our results indicate that optimal policies for achieving the maximum divergence are very different when the receiver knows or does not know the channel state. The remaining part of this paper is organized as follows. In section~\ref{sec:model}, we describe the channel model for the unifilar channel and the class of encoding and decoding strategies. In section~\ref{sec:bound}, we analyze the drifts of the posterior message entropy in the large- and small-entropy regimes. In section~\ref{sec:C_1}, we formulate two MDPs in order to study the problem of one-bit transmission over this channel. Section~\ref{sec:example} presents numerical results for several unifilar channels. Final conclusions are given in section~\ref{sec:conclusions}.
\section{Channel Model and preliminaries} \label{sec:model}
Consider a family of finite-state point-to-point channels with input $X_t\in\mathcal{X}$, output $Y_t\in\mathcal{Y}$ and state $S_t\in\mathcal{S}$ at time $t$, with all alphabets being finite and initial state $S_1=s_1$ known to both the transmitter and the receiver. The channel conditional probability is \begin{equation} P(Y_t,S_{t+1}|X^t, Y^{t-1}, S^t) = Q(Y_t|X_t, S_t) \delta_{g(S_t,X_t,Y_t)}(S_{t+1}), \end{equation} for a given stochastic kernel $Q: \mathcal{X}\times\mathcal{S}\rightarrow \mathcal{P}(\mathcal{Y})$ and deterministic function $g: \mathcal{S}\times\mathcal{X} \times\mathcal{Y}\rightarrow \mathcal{S}$, where $\mathcal{P}(\mathcal{Y})$ denotes the space of all probability measures on $\mathcal{Y}$, and $\delta_{a}(\cdot)$ is the Kronecker delta function centered at $a$. This family of channels is referred to as unifilar channels~\cite{PeCuVaWe08}. The authors in~\cite{PeCuVaWe08} have derived the capacity $C$ under certain conditions in the form \begin{equation} C = \lim_{N\rightarrow \infty}\sup_{(p(x_t|s_t,y^{t-1},s_1))_{t\geq 1}}\frac{1}{N}\sum_{t=1}^{N} I(X_t,S_t;Y_t|Y^{t-1},S_1). \label{eq:capacity} \end{equation} In this paper, we restrict our attention to such channels with strictly positive $Q(y|x,s)$ for all $(y,x,s)\in \mathcal{Y} \times \mathcal{X} \times \mathcal{S}$ and with ergodic behavior, so that the above limit indeed exists. Let $W\in\{1,2,3,\cdots,M=2^K\}\stackrel{\scriptscriptstyle \triangle}{=} [M]$ be the message to be transmitted. In this system, the transmitter receives perfect feedback of the output with unit delay and decides the input $X_t$ based on $(W,Y^{t-1},S_1)$ at time $t$.
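For concreteness, a single use of this channel law is straightforward to simulate. The following Python fragment is a minimal sketch (all function and variable names are ours); the kernel values are those of the trapdoor channel used in Section~\ref{sec:example}, with state update $g(s,x,y) = s\oplus x\oplus y$.
\begin{verbatim}
import random

def unifilar_step(s, x, Q0, g):
    """One channel use: draw Y ~ Q(.|x,s), then update the state to g(s,x,Y).
    Q0[(x, s)] = Q(Y=0 | x, s) for binary output alphabets."""
    y = 0 if random.random() < Q0[(x, s)] else 1
    return y, g(s, x, y)

# trapdoor channel: Q(0|0,0)=1, Q(0|1,0)=0.5, Q(0|0,1)=0.5, Q(0|1,1)=0
Q0 = {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 0.5, (1, 1): 0.0}
g = lambda s, x, y: s ^ x ^ y          # unifilar state update

s = 0                                  # initial state known to both terminals
for t in range(5):
    x = random.randint(0, 1)           # placeholder for an encoder decision
    y, s = unifilar_step(s, x, Q0, g)
\end{verbatim}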
The transmitter can adopt randomized encoding strategies, where $X_t \sim e_t(\cdot|W,Y^{t-1},S_1)$ with a collection of distributions $(e_t: [M] \times \mathcal{Y}^{t-1} \times \mathcal{S} \rightarrow \mathcal{P}(\mathcal{X}))_{t\geq 1}$. Without loss of generality we can represent the randomized encoder through deterministic mappings $(e_t: [M] \times \mathcal{Y}^{t-1} \times \mathcal{S} \times \mathcal{V} \rightarrow \mathcal{X})_{t\geq 1}$ with $X_t = e_t(W,Y^{t-1},S_1,V_t)$ involving the random variables $(V_t)_{t\geq 1}$, which are generated as $P(V_t|V^{t-1},Y^{t-1},X^{t-1},S^{t-1},W)=P_V(V_t)$. Furthermore, since we are interested in error exponent upper bounds, we can assume that the random variables $V_t$ are causally observed common information between the transmitter and the receiver. The decoding policy consists of a sequence of decoding functions $(d_t: (\mathcal{Y}\times \mathcal{V})^t \times \mathcal{S} \rightarrow [M])_{t\geq 1}$, with estimated message $\hat{W}_t = d_t(Y^t,V^t,S_1)$ at every time $t$, and a stopping time $T$ w.r.t. the filtration $(\mathcal{F}_t\triangleq \sigma(Y^t,V^t,S_1))_{t\geq 0}$. The final message estimate is defined as $\hat{W}=\hat{W}_T$. The average rate $\overline{R}$ and error probability $P_e$ of this scheme are defined as $\overline{R}=\frac{K}{\mathbb{E}[T]}$ and $P_e = P(\{\hat{W}_T \neq W\} \cup \{T= \infty\})$. The channel reliability function (the highest achievable error exponent) is defined as $E^*(\overline{R})=\sup -\frac{\log P_e}{\mathbb{E}[T]}$. Since transmission schemes with $P(T=\infty)>0$ result in the trivial error exponent $-\frac{\log P_e}{\mathbb{E}[T]}=0$, we restrict attention to those schemes that have a.s. finite decision times.
\section{Error-exponent upper bound}\label{sec:bound}
Our methodology is inspired by the analysis in~\cite{Bu76} for DMCs. The analysis involves lower-bounding the rate of decrease of the posterior message entropy which, through a generalization of Fano's Lemma, provides lower bounds on the error probability. The entropy can decrease at a rate no faster than the channel capacity. However, this bound becomes trivial at low values of entropy, which necessitates switching to lower-bounding the corresponding logarithmic drift. The log drift analysis is quite involved in~\cite{Bu76} even for the DMC. The fundamental difference in our work compared to the DMC case is the presence of memory in unifilar channels. A single-step drift analysis would not be able to capture this memory, resulting in loose bounds. For this reason we analyze multi-step drifts; in fact, we consider the asymptotic behavior as the step size becomes larger and larger. The outline of the analysis is as follows. Lemma~\ref{lemma:driftentropy} and Lemma~\ref{lemma:driftlogentropy} describe the overall decreasing rate of the entropy induced by the posterior belief on the message in terms of drifts in the linear and logarithmic regime, respectively. The former relates the drift to the capacity, $C$, while the latter relates it to a quantity $C_1$ which can be interpreted as the largest discrimination that can be achieved in this channel for a binary hypothesis testing problem, as elegantly explained in~\cite{BeNaRiTe09}. The result presented in Lemma~\ref{lemma:newsubmartingale} shows that based on a general random process that satisfies the two above-mentioned drift conditions one can create an appropriate submartingale.
These three results are then combined in Theorem~\ref{th:main} to provide a lower bound on the stopping time of an arbitrary system employing variable-length coding, and equivalently an upper bound on the error exponent. Let us define the following random processes \begin{align} \Pi_t(i) &= P(W=i|\mathcal{F}_{t}) , \qquad i\in [M], t\geq 0\\ H_t &= -\sum_{i=1}^{M}\Pi_t(i)\log \Pi_t(i). \end{align} Denoting by $h(\cdot)$ the binary entropy function, we have the following result. \begin{lemma}~\label{lemma:genFano} Generalized Fano's inequality: If $P(T<\infty)=1$ then \begin{align} \mathbb{E}[H_T] \leq h(P_e) + P_e \log (M-1). \end{align} \end{lemma} \begin{proof} Please refer to Appendix~\ref{app:genFano}. \end{proof} This is essentially~\cite[Lemma 1]{Bu76} and the proof is presented here for completeness. In view of the above, in order to estimate the rate of decrease of $P_e$, we study the corresponding rate for $H_t$. The next lemma gives a first estimate of the drift of $(H_t)_{t\geq 0}$. \begin{lemma} \label{lemma:driftentropy} For any $t\geq 0$ and $\epsilon>0$, there exists an $N=N(\epsilon)$ such that \begin{equation} \mathbb{E}[H_{t+N} - H_{t}|\mathcal{F}_{t}] \geq -N(C+\epsilon) \qquad a.s. \end{equation} \end{lemma} \begin{IEEEproof} Please see appendix~\ref{app:lemma1}. \end{IEEEproof} Since for small values of $H_t$ the above result does not give any information, we now analyze the drifts of the process $(\log H_t)_{t\geq 0}$. \begin{lemma} \label{lemma:driftlogentropy} For any given $\epsilon>0$, there exists an $N=N(\epsilon)$ such that if $H_t<\epsilon$ \begin{equation} \mathbb{E}[\log(H_{t+N})-\log(H_t)|\mathcal{F}_t] \geq -N(C_1+\epsilon) \qquad a.s. \end{equation} where the constant $C_1$ is given by \begin{align}\label{eq:defC1} &C_1 =\max_{s_1,y^t,v^t,k}\limsup_{N'\rightarrow \infty} \max_{(e_i)_{i=t+1}^{t+N'}}\frac{1}{N'} \nonumber \\ &\sum_{Y^{t+N'}_{t+1},V^{t+N'}_{t+1}} P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W=k,y^t,v^t,s_1) \log \frac{P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W=k,y^t,v^t,s_1)}{ P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W\neq k,y^t,v^t,s_1)}. \end{align} \end{lemma} \begin{IEEEproof} Please see appendix~\ref{app:lemma2}. \end{IEEEproof} We comment at this point that the proof of this result is significantly simpler than the corresponding one in~\cite[Lemma~3]{Bu76}. The reason is that we develop the proof directly in the asymptotic regime and thus there is no need for complex convexity arguments such as the ones derived in~\cite[Lemma~7, and eq. (A8)-(A12)]{Bu76}. At this point one can bound the quantity in~\eqref{eq:defC1} by $\max_{x,s,x',s'} D(Q(\cdot|x,s)||Q(\cdot|x',s'))$ using convexity. Such a bound, however, can be very loose since it does not account for channel memory. In Section~\ref{sec:C_1}, we will discuss how to evaluate $C_1$. Before we continue, we also note that $|\log H_{t+1} - \log H_t|$ is bounded above by a positive number $C_2$ almost surely, due to the fact that the kernel $Q(\cdot|\cdot,\cdot)$ is strictly positive. The proof is similar to that in~\cite[Lemma~4]{Bu76}. In the following lemma, we propose a submartingale that connects the drift analysis and the stopping time in the proof of our main result.
\begin{lemma} \label{lemma:newsubmartingale} Suppose a random process $(H_t)_{t\geq 0}$ has the following properties \begin{subequations} \begin{align} \mathbb{E}[H_{t+1}-H_{t}|\mathcal{F}_t] &\geq -K_1 \label{ineq:entropy}\\ \mathbb{E}[\log H_{t+1}-\log H_{t}|\mathcal{F}_t] &\geq -K_2 \label{ineq:logentropysmall} \qquad \text{if } H_t<H^* \\ |\log H_{t+1}-\log H_{t}| &< K_3 \label{ineq:logentropyall} \qquad \text{if } H_t<H^* \end{align} \end{subequations} almost surely for some positive numbers $K_1,K_2,K_3,H^*$ where $K_2>K_1$. Define a process $(Z_t)_{t\geq 0}$ by \begin{align} \label{def:newsubmartingale} Z_t &= (\frac{H_t-H^*}{K_1}+t)1_{\{H_t>H^*\}} \nonumber \\ &\ \ + (\frac{\log \frac{H_t}{H^*}}{K_2}+t+f(\log \frac{H_t}{H^*}))1_{\{H_t\leq H^*\}} \quad \forall t\geq 0, \end{align} where $f: \mathbb{R} \rightarrow \mathbb{R} $ is defined by \begin{equation} f(y) = \frac{1-e^{\lambda y}}{K_2\lambda} \end{equation} with a positive constant $\lambda$. Then, for sufficiently small $\lambda$, $(Z_t)_{t\geq 0}$ is a submartingale w.r.t. $\mathcal{F}_t$. \end{lemma} \begin{IEEEproof} Please see appendix~\ref{app:lemma3}. \end{IEEEproof} Two comments are in order regarding the proof of this result. First, the main difficulty in proving such results is to take care of what happens in the ``transition'' range (around $H^*$) where $H_t$ and $H_{t+1}$ are not both above or below the threshold. The choice of the function $f(\cdot)$ is what makes the proof work. The proof offered here is quite concise compared to the one employed in~\cite{Bu76} (which consists of Lemma~5 and an approximation argument given in Theorem~1). The reason for that is the specific definition of the $Z_t$ process, and in particular the choice of the $f(\cdot)$ function, which simplifies the proof considerably. The second, and related, comment is that this lemma is not a straightforward extension of~\cite[Lemma, p.~50]{BuZi75}, since there the purpose was to bound from below a positive rate of increase of a process. In our case, the proof hinges on the additional constraint~\eqref{ineq:subdiff} we impose on the choice of the $f(\cdot)$ function. We are now ready to present our main result. \begin{theorem} \label{th:main} Any transmission scheme with $M=2^K$ messages and error probability $P_e$ satisfies \begin{equation} -\frac{\log P_e}{\mathbb{E}[T]} \leq C_1(1-\frac{\overline{R}}{C})+ U(\epsilon,K,P_e,\overline{R},C,C_1,C_2,\lambda), \end{equation} for any $\epsilon>0$. Furthermore, $\lim_{P_e\rightarrow0}\lim_{K\rightarrow\infty}U(\epsilon,K,P_e,\overline{R},C,C_1,C_2,\lambda)=o_{\epsilon}(1)$. \end{theorem} \begin{IEEEproof} Please see appendix~\ref{app:proposition}. \end{IEEEproof}
\section{Evaluation of $C_1$} \label{sec:C_1}
In this section we evaluate the constant $C_1$. As noted in \cite{Bu76,BeNaRiTe09}, the quantity $C_1$ relates to a binary hypothesis testing problem. When the posterior entropy is small, the receiver has very high confidence in a certain message. In this situation, the transmitter is essentially trying to inform the receiver whether or not this candidate message is the true one. Since the unifilar channel has memory, it is not surprising that the constant $C_1$ may be connected to a Markov decision process related to the aforementioned binary hypothesis testing problem, as was the case in~\cite{CoYuTa09}.
Recall that $C_1$ is defined as \begin{align} C_1 &= \max_{s_1,y^t,v^t,k} \limsup_{N'\rightarrow \infty} \max_{\{e_i\}_{i=t+1}^{t+N'}} \frac{1}{N'} \sum_{Y^{t+N'}_{t+1},V^{t+N'}_{t+1}} \nonumber \\ &P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W=k,y^t,v^t,s_1) \log \frac{P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W=k,y^t,v^t,s_1)}{ P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W\neq k,y^t,v^t,s_1)} \\ &= \max_{s_1,y^t,v^t,k} \limsup_{N'\rightarrow \infty} \max_{\{e_i\}_{i=t+1}^{t+N'}} \frac{1}{N'} \nonumber \\ & D( P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W=k,y^t,v^t,s_1) || P(Y^{t+N'}_{t+1},V^{t+N'}_{t+1}|W\neq k,y^t,v^t,s_1) ). \label{eq:divergence} \end{align} We now look into the quantities $P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1)$ and $P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W\neq k,y^t,v^t,s_1)$. Let us define $X^k_{t} \stackrel{\scriptscriptstyle \triangle}{=} e_{t}(k,Y^{t-1},S_1,V_t)$ and $S^k_{t} \stackrel{\scriptscriptstyle \triangle}{=} g_{t}(k,Y^{t-1},V^{t-1},S_1)$, where $g_t$ denotes the iterated application of the state update $g$ along the trajectory; these are the input and the state at time $t$, respectively, conditioned on $W=k$. Then, \begin{align} P(Y_{t+1}^{t+N},V^{t+N}_{t+1}|y^t,v^t,s_1,W=k) &= \prod_{i=t+1}^{t+N} P(Y_i|Y^{i-1}_{t+1},V^{i}_{t+1},y^t,v^t,s_1,W=k)P(V_i|Y^{i-1}_{t+1},V^{i-1}_{t+1},y^t,v^t,s_1,W=k)\nonumber \\ &= \prod_{i=t+1}^{t+N} Q(Y_i|S^k_i,X^k_i)P_V(V_i) \end{align} and \begin{align} P&(Y_{t+1}^{t+N},V_{t+1}^{t+N}|y^t,v^t,s_1,W\neq k) \nonumber \\ &= \prod_{i=t+1}^{t+N} P(Y_i|Y^{i-1}_{t+1},V^{i}_{t+1},y^t,v^t,s_1,W \neq k) \nonumber \\ & \qquad P(V_i|Y^{i-1}_{t+1},V^{i-1}_{t+1},y^t,v^t,s_1,W \neq k) \nonumber \\ &= \prod_{i=t+1}^{t+N} P_V(V_i) \sum_{x,s} Q(Y_{i}|x,s) P(X_i=x|S_i=s,Y^{i-1}_{t+1},V^{i}_{t+1},y^t,v^t,s_1,W \neq k) \nonumber \\ &\qquad P(S_i=s|Y^{i-1}_{t+1},V^{i-1}_{t+1},y^t,v^t,s_1,W \neq k) \nonumber \\ &= \prod_{i=t+1}^{t+N} P_V(V_i) \sum_{x,s} Q(Y_{i}|x,s) X^{\overline{k}}_i(x|s) B^{\overline{k}}_{i-1}(s), \end{align} where $X^{\overline{k}}_i(x|s)$ and $B^{\overline{k}}_{i-1}(s)$ are given by \begin{align} X^{\overline{k}}_i(x|s) &\stackrel{\scriptscriptstyle \triangle}{=} P(X_i=x|S_i=s,Y^{i-1}_{t+1},V^{i}_{t+1},y^t,v^t,s_1,W \neq k) \\ B^{\overline{k}}_{i-1}(s) &\stackrel{\scriptscriptstyle \triangle}{=} P(S_i=s|Y^{i-1}_{t+1},V^{i-1}_{t+1},y^t,v^t,s_1,W \neq k). \end{align} Moreover, $B^{\overline{k}}_i$ can be updated by \begin{align} B^{\overline{k}}_i(s) &= \frac{\sum_{\tilde{x},\tilde{s}} \delta_{g(\tilde{s},\tilde{x},Y_i)}(s)Q(Y_i|\tilde{x},\tilde{s})X^{\overline{k}}_i(\tilde{x}|\tilde{s})B^{\overline{k}}_{i-1}(\tilde{s})}{\sum_{\tilde{x},\tilde{s}} Q(Y_i|\tilde{x},\tilde{s})X^{\overline{k}}_i(\tilde{x}|\tilde{s})B^{\overline{k}}_{i-1}(\tilde{s})}, \end{align} which we can concisely express as $B^{\overline{k}}_i = \phi(B^{\overline{k}}_{i-1},X^{\overline{k}}_i,Y_i)$.
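The update $\phi$ is a standard nonlinear filtering recursion and is directly implementable; it enters the transition kernel of the MDP formulated below. The following Python fragment is a minimal sketch (all names are ours; \texttt{b} plays the role of $B^{\overline{k}}_{i-1}$ and \texttt{pol} that of the randomized input map $X^{\overline{k}}_i$):
\begin{verbatim}
def phi(b, pol, y, Q, g, states, inputs):
    """Belief update B_i = phi(B_{i-1}, X_i, Y_i = y).
    b[s] = current belief on the state, pol[s][x] = P(X=x|S=s),
    Q(y, x, s) = channel kernel, g = unifilar state update."""
    new_b = {s: 0.0 for s in states}
    norm = 0.0
    for s in states:
        for x in inputs:
            w = b[s] * pol[s][x] * Q(y, x, s)
            new_b[g(s, x, y)] += w
            norm += w
    return {s: v / norm for s, v in new_b.items()}
\end{verbatim}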
With the above derivation, the divergence in~\eqref{eq:divergence} can be expressed as \begin{align} \label{eq:divergence1} D&(P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1)||P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W\neq k,y^t,v^t,s_1)) \nonumber \\ &= \sum_{i=t+1}^{t+N}\mathbb{E}[ \log \frac{Q(Y_i|S^k_i,X^k_i) }{\sum_{x,s} Q(Y_{i}|x,s) X^{\overline{k}}_i(x|s) B^{\overline{k}}_{i-1}(s)} \nonumber \\ & \hspace*{6cm} |y^t,v^t,s_1,W=k] \nonumber \\ &= \sum_{i=t+1}^{t+N}\mathbb{E}[ \mathbb{E}[\log \frac{Q(Y_i|S^k_i,X^k_i) }{\sum_{x,s} Q(Y_{i}|x,s) X^{\overline{k}}_i(x|s) B^{\overline{k}}_{i-1}(s)} \nonumber \\ &\hspace*{1cm} |S^k_i,B^{\overline{k}}_{i-1},X^k_i,X^{\overline{k}}_i,y^t,v^t,s_1,W=k] |y^t,v^t,s_1,W=k]\nonumber \\ &= \sum_{i=t+1}^{t+N}\mathbb{E}[ R(S^k_i,B^{\overline{k}}_{i-1},X^k_i,X^{\overline{k}}_i)|y^t,v^t,s_1,W=k], \end{align} where the function $R(s^0,b^1,x^0,x^1)$ is given by \begin{align} R(s^0,b^1,x^0,x^1) &= \sum_y Q(y|s^0,x^0) \nonumber \\ &\qquad \log \frac{Q(y|s^0,x^0) }{\sum_{\tilde{x},\tilde{s}} Q(y|\tilde{x},\tilde{s}) x^{1}(\tilde{x}|\tilde{s}) b^1(\tilde{s})} . \end{align} This inspires us to define a controlled Markov process with state $(S^0_t,B^1_{t-1})\in \mathcal{S} \times \mathcal{P}(\mathcal{S})$, action $(X^0_t,X^1_t) \in \mathcal{X} \times (\mathcal{S} \rightarrow \mathcal{P}(\mathcal{X}))$, instantaneous reward $R(S^0_t,B^1_{t-1},X^0_t,X^1_t)$ at time $t$ and transition kernel \begin{align} Q'&(S^0_{t+1},B^1_{t}|S^0_{t},B^1_{t-1},X^0_{t},X^1_{t}) \nonumber \\ &= \sum_y \delta_{g(S^0_{t},X^0_t,y)}(S^0_{t+1})\delta_{\phi(B^1_{t-1},X^1_t,y)}(B^1_{t}) Q(y|X^0_t,S^0_{t}). \end{align} That this is indeed a controlled Markov process can be readily established. Note that at time $t=0$ the process starts with initial state $(S^0_0,B^1_{-1})$. Let $V^N(s^0,b^1)$ be the (average) reward in $N$ steps of this process \begin{align} V^N&(s^0,b^1) \nonumber \\ &\stackrel{\scriptscriptstyle \triangle}{=} \frac{1}{N} \mathbb{E}[\sum_{i=1}^{N}R(S^0_i,B^1_{i-1},X^0_i,X^1_i)|S^0_0=s^0,B^1_{-1}=b^1], \end{align} and denote by $V^{\infty}(s^0,b^1)$ the corresponding $\limsup$, i.e., $V^{\infty}(s^0,b^1) =\limsup_{N\rightarrow\infty}V^N(s^0,b^1)$. Then, the constant $C_1$ is given by \begin{align} C_1 &= \sup_{s^0,b^1} V^{\infty}(s^0,b^1). \end{align}
\subsection{A computationally efficient upper bound on $C_1$} \label{sec:simpleC_1}
The MDP defined above has uncountably infinite state and action spaces. In this section, we propose an alternative upper bound on $C_1$ and formulate an MDP with finite state and action spaces to evaluate it. This provides a looser but more computationally efficient upper bound. As it turns out, there are several instances of interest where this upper bound can be achieved~\cite{AnWu17}. Consider again the divergence term \begin{align} D&( P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1) || P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W\neq k,y^t,v^t,s_1) ) \\ &=D( P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1) || \nonumber \\ &\qquad \sum_{j\neq k}\frac{P(W=j|y^t,v^t,s_1)}{1-P(W=k|y^t,v^t,s_1)} P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=j,y^t,v^t,s_1) ) \\ &\overset{(a)}{\leq} \sum_{j\neq k}\frac{P(W=j|y^t,v^t,s_1)}{1-P(W=k|y^t,v^t,s_1)} \nonumber \\ &\qquad D( P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1) || P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=j,y^t,v^t,s_1) ) \\ &\leq \max_{j\neq k} D( P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1) || P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=j,y^t,v^t,s_1) ) \end{align} where (a) is due to convexity.
We now look into the first distribution in the divergence, \begin{align} P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1) &= \prod_{i=1}^{N} P(Y_{t+i}|W=k,Y^{t+i-1}_{t+1},V^{t+i}_{t+1},y^t,v^t,s_1) P(V_{t+i}|W=k,Y^{t+i-1}_{t+1},V^{t+i-1}_{t+1},y^t,v^t,s_1) \nonumber \\ &= \prod_{i=1}^{N} Q(Y_{t+i}|X^k_{t+i},S^k_{t+i})P_V(V_{t+i}). \end{align} Then we have \begin{align} D&(P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=k,y^t,v^t,s_1)||P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=j,y^t,v^t,s_1)) \nonumber \\ &= \sum_{i=1}^{N} \mathbb{E}[\mathbb{E}[\log\frac{Q(Y_{t+i}|X^k_{t+i},S^k_{t+i})}{Q(Y_{t+i}|X^j_{t+i},S^j_{t+i})} \nonumber \\ &\qquad \qquad |Y^{t+i-1}_{t+1},V^{t+i}_{t+1},y^t,v^t,s_1,W=k]|y^t,v^t,s_1,W=k] \nonumber \\ &= \sum_{i=1}^{N} \mathbb{E}[\tilde{R}(S^k_{t+i},S^j_{t+i},X^k_{t+i},X^j_{t+i})|y^t,v^t,s_1,W=k], \end{align} where $\tilde{R}$ is defined by \begin{align} \tilde{R}(s^0,s^1,x^0,x^1) = \sum_y Q(y|x^0,s^0)\log\frac{ Q(y|x^0,s^0)}{ Q(y|x^1,s^1)}. \end{align} Similar to the previous development, we define a controlled Markov chain with state $(S^0_t,S^1_t)\in \mathcal{S}^2$, action $(X^0_t,X^1_t) \in \mathcal{X}^2$, instantaneous reward $\tilde{R}(S^0_t,S^1_t,X^0_t,X^1_t)$ at time $t$ and transition kernel \begin{align} \tilde{Q}'&(S^0_{t+1},S^1_{t+1}|S^0_{t},S^1_{t},X^0_{t},X^1_{t}) \nonumber \\ & = \sum_y \delta_{g(S^0_{t},X^0_t,y)}(S^0_{t+1})\delta_{g(S^1_{t},X^1_t,y)}(S^1_{t+1}) Q(y|X^0_t,S^0_{t}). \end{align} Let $\tilde{V}^N(s^0,s^1)$ denote the average $N$-stage reward for this MDP, i.e., \begin{align} \tilde{V}^N(s^0,s^1) &\stackrel{\scriptscriptstyle \triangle}{=} \frac{1}{N} \mathbb{E}[\sum_{i=1}^{N}\tilde{R}(S^0_i,S^1_i,X^0_i,X^1_i)|S^0_0=s^0,S^1_0=s^1], \end{align} and let $\tilde{V}^{\infty}(s^0,s^1) \stackrel{\scriptscriptstyle \triangle}{=} \limsup_{N\rightarrow\infty}\tilde{V}^N(s^0,s^1)$. Combining the above with the definition of $C_1$, we have \begin{align} C_1 &\leq \max_{s^0,s^1} \tilde{V}^{\infty}(s^0,s^1), \end{align} which gives an easier-to-evaluate upper bound on $C_1$.
\section{Numerical Result for unifilar channels} \label{sec:example}
In this section, we provide numerical results for the expressions $V^{\infty}$ and $\tilde{V}^{\infty}$ for some binary input/output/state unifilar channels. We consider the trapdoor channel (denoted as channel $A$), the chemical channel (denoted as channel $B(p_0)$), symmetric unifilar channels (denoted as channel $C(p_0,q_0)$), and asymmetric unifilar channels (denoted as channel $D(p_0,q_0,p_1,q_1)$). All of these channels have $g(s,x,y) = s\oplus x \oplus y$ and kernel $Q$ characterized as shown in Table~\ref{t:Q}. \begin{table}[h] \centering \caption{Kernel definition for binary unifilar channels} \begin{tabular}{|c|c|c|c|c|} \hline Channel & $Q(0| 0,0)$& $Q(0|1,0)$ & $Q(0|0,1)$& $Q(0|1,1)$\\ \hline A & 1 & 0.5 & 0.5 & 0\\ \hline B($p_0$) & $1$ & $p_0$ & $1-p_0$ & 0\\ \hline C($p_0,q_0$) & $1-q_0$ & $p_0$ & $1-p_0$ & $q_0$\\ \hline D($p_0,q_0,p_1,q_1$) & $1-q_0$ & $p_0$ & $1-p_1$ & $q_1$\\ \hline \end{tabular} \label{t:Q} \end{table} The numerical results are shown in the following table and were obtained by numerically solving the corresponding MDPs. The results for $V^{\infty}$ were obtained by quantizing the state and input spaces using uniform quantization with $n=100$ points. The results are tabulated in Table~\ref{t:R}.
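Computing the bound $\max_{s^0,s^1}\tilde{V}^{\infty}(s^0,s^1)$ amounts to solving a finite average-reward MDP, for which standard relative value iteration can be used. The following Python fragment is a minimal sketch of such a computation for channel $C(0.5,0.1)$ (all names are ours; we assume the controlled chain is unichain so that the iteration converges, and we take logarithms base 2, consistent with the values in Table~\ref{t:R}):
\begin{verbatim}
import math
from itertools import product

p0, q0 = 0.5, 0.1    # kernel of channel C(0.5,0.1)
Q0 = {(0, 0): 1 - q0, (1, 0): p0, (0, 1): 1 - p0, (1, 1): q0}
Q = lambda y, x, s: Q0[(x, s)] if y == 0 else 1 - Q0[(x, s)]
g = lambda s, x, y: s ^ x ^ y

def r_tilde(s0, s1, x0, x1):   # one-step divergence reward \tilde{R}
    return sum(Q(y, x0, s0) * math.log2(Q(y, x0, s0) / Q(y, x1, s1))
               for y in (0, 1))

states = list(product((0, 1), repeat=2))
actions = list(product((0, 1), repeat=2))
h = {s: 0.0 for s in states}   # relative value function
for _ in range(10000):         # relative value iteration
    Th = {s: max(r_tilde(*s, x0, x1)
                 + sum(Q(y, x0, s[0]) * h[(g(s[0], x0, y), g(s[1], x1, y))]
                       for y in (0, 1))
                 for (x0, x1) in actions)
          for s in states}
    gain = Th[states[0]] - h[states[0]]  # approximates the optimal average reward
    h = {s: Th[s] - Th[states[0]] for s in states}
print(gain)
\end{verbatim}
The printed gain should approach the value $1.637$ reported for channel $C(0.5,0.1)$ in Table~\ref{t:R}. The trapdoor and chemical channels are deliberately excluded from this sketch, since their kernels contain zeros and the reward is unbounded.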
\begin{table}[h] \centering \caption{Asymptotic reward per unit time} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Channel & $\inf_{s^0,b^1}V^{\infty}(s^0,b^1)$ & $\sup_{s^0,b^1}V^{\infty}(s^0,b^1)$ & $\min_{s^0,s^1}\widetilde{V}^{\infty}(s^0,s^1)$ & $\max_{s^0,s^1}\widetilde{V}^{\infty}(s^0,s^1)$ & $C_1$ & $C_1^*$ \\ \hline A & $\infty$ & $\infty$ & $\infty$ & $\infty$ & $\infty$ & $\infty$ \\ \hline B($0.9$) & $\infty$ & $\infty$ & $\infty$ & $\infty$ & $\infty$ & $3.294$ \\ \hline C($0.5,0.1$)& 1.633 & 1.637 & 1.637 & 1.637 & 1.637 & 1.533 \\ \hline C($0.9,0.1$)& 2.459 & 2.536 & 2.533 & 2.536 & 2.536 & 2.459 \\ \hline D($0.5,0.1,0.1,0.1$)& 2.274 & 2.303 & 2.298 & 2.298 & 2.303 & 2.247 \\ \hline D($0.9,0.1,0.1,0.1$)& 2.459 & 2.536 & 2.533 & 2.536 & 2.536 & 2.459 \\ \hline \end{tabular} \label{t:R} \end{table}
It is not surprising that the trapdoor and chemical channels have infinite upper bounds. This is also true for the Z channel in the DMC case, and it is related to the fact that the transition kernel has a zero entry. Intuitively, discrimination between the two hypotheses can be perfect by always transmitting $X_t=1\oplus S_t$ under hypothesis $H_0$ and $X_t=S_t$ under hypothesis $H_1$: with probability that does not depend on the message size or the target error rate, the receiver will, under hypothesis $H_0$, observe an output $Y_t=1\oplus S_t$ which is impossible under hypothesis $H_1$, and will thus make a perfect decision. For each MDP the rewards do not seem to depend on the initial state, within the accuracy of our calculations. Similarly, the differences between the results of the first and second MDPs are within the accuracy of our calculations, and so we cannot make a conclusive statement regarding the difference between the two MDP solutions. There is a strong indication, however, that they both result in the same average reward asymptotically. Also shown in the above table is the quantity $C_1^*$, which is the average reward received in the MDP with the instantaneous reward \begin{align} R^*(s^0,b^1,x^0,x^1) = \sum_{\tilde{x},\tilde{s}} x^1(\tilde{x}|\tilde{s}) b^1(\tilde{s})\sum_y Q(y|\tilde{x},\tilde{s})\log \frac{Q(y|\tilde{x},\tilde{s})}{Q(y|x^0,s^0)}, \nonumber \end{align} which is of interest in the design of transmission schemes in~\cite{AnWu17}.
\section{Conclusions} \label{sec:conclusions}
In this paper, we derive an upper bound on the error exponent of unifilar channels with noiseless feedback and variable-length codes. We generalize Burnashev's techniques by performing a multi-step drift analysis and by deriving a lower bound on the stopping time by means of an appropriately constructed submartingale. The constant $C_1$, which is the zero-rate exponent, is evaluated through an MDP and further upper bounded through a more computationally tractable MDP. Numerical results show that for all unifilar channels tested, the two MDPs give similar results. A future research direction is the analytical solution of these MDPs. In addition, the presented analysis can be easily generalized to channels with finite state and inter-symbol interference (ISI) with the state known only to the receiver.
\appendices
\section{Proof of Lemma~\ref{lemma:genFano}}\label{app:genFano}
We will first establish that under the condition $P(T<\infty)=1$ the limit $\lim_{n\rightarrow \infty} \mathbb{E}[H_{T\wedge n}]$ exists, where $T\wedge n=\min\{T,n\}$. We have \begin{equation} \mathbb{E}[H_{T\wedge n}] = \sum_{t=1}^n \mathbb{E}[H_t|T=t]P(T=t) + \mathbb{E}[H_n|T>n]P(T>n).
\end{equation} Take $m<n$; using the fact that $0 \leq H_t \leq \log M$ a.s. for all $t$, we have \begin{subequations} \begin{align} |\mathbb{E}[H_{T\wedge n}]-\mathbb{E}[H_{T\wedge m}]| &\leq \mathbb{E}[H_n|T>n]P(T>n) + \mathbb{E}[H_m|T>m]P(T>m) + \sum_{t=m+1}^n \mathbb{E}[H_t|T=t]P(T=t) \\ & \leq (P(T>n)+P(T>m)+\sum_{t=m+1}^n P(T=t))\log M \\ & = 2 P(T>m) \log M \\ & \stackrel{m\rightarrow \infty}{\longrightarrow} 0. \end{align} \end{subequations} Defining the event $\mathcal{E} = \{W\neq \hat{W}\}$, we have from Fano's inequality \begin{subequations} \begin{align} H(W|\hat{W},T=n) &\leq h(P(\mathcal{E}|T=n)) + P(\mathcal{E}|T=n) \log(M-1) \Leftrightarrow \\ \sum_{j=1}^M H(W|\hat{W}=j,T=n) P(\hat{W}=j|T=n) &\leq h(P(\mathcal{E}|T=n)) + P(\mathcal{E}|T=n) \log(M-1) \label{eq:h_ineq2} \end{align} \end{subequations} Now consider the probability $P(W=i|\hat{W}=j,T=n)$: \begin{subequations} \begin{align} P(W=i|\hat{W}=j,T=n) & = \sum_{y^n,v^n} P(W=i|\hat{W}=j,T=n,Y^n=y^n,V^n=v^n,S_1=s_1)P(Y^n=y^n,V^n=v^n,S_1=s_1|\hat{W}=j,T=n) \\ &\stackrel{(a)}{=} \sum_{y^n,v^n} P(W=i|Y^n=y^n,V^n=v^n,S_1=s_1) P(Y^n=y^n,V^n=v^n,S_1=s_1|\hat{W}=j,T=n) \\ &\stackrel{(b)}{=} \mathbb{E}[ \Pi_n(i) | \hat{W}=j,T=n], \end{align} \end{subequations} where (a) is due to $1_{T=n\text{ and } \hat{W}=j}=1_{T=n\text{ and } \hat{W}_n=j}$ being measurable w.r.t. $\mathcal{F}_n$, and (b) is due to the definition of the rv $\Pi_n$. Using the concavity of entropy and the definition of the rv $H_n$, we now have \begin{align}\label{eq:h_ineq1} \mathbb{E}[ H_n | \hat{W}=j,T=n] \leq H(W|\hat{W}=j,T=n). \end{align} We can now write \begin{subequations} \begin{align} \mathbb{E}[ H_n | T=n] &= \sum_{j=1}^M \mathbb{E}[ H_n | \hat{W}=j,T=n] P(\hat{W}=j|T=n) \\ &\stackrel{(a)}{\leq} \sum_{j=1}^M H(W|\hat{W}=j,T=n) P(\hat{W}=j|T=n) \\ &\stackrel{(b)}{\leq} h(P(\mathcal{E}|T=n)) + P(\mathcal{E}|T=n) \log(M-1), \end{align} \label{eq:h_ineq3} \end{subequations} where (a) is due to~\eqref{eq:h_ineq1} and (b) is due to~\eqref{eq:h_ineq2}. Averaging over $T$ and using the fact that the limit $\lim_{n\rightarrow \infty} \mathbb{E}[H_{T\wedge n}]$ exists, we obtain \begin{subequations} \begin{align} \mathbb{E}[ H_T ] &= \sum_{n=1}^\infty \mathbb{E}[H_n|T=n] P(T=n) \\ &\stackrel{(a)}{\leq} \sum_{n=1}^\infty h(P(\mathcal{E}|T=n))P(T=n) + P(\mathcal{E}|T=n)P(T=n) \log(M-1) \\ &\stackrel{(b)}{\leq} h(\sum_{n=1}^\infty P(\mathcal{E}|T=n)P(T=n)) + P(\mathcal{E}) \log(M-1) \\ &= h(P_e) + P_e \log(M-1), \end{align} \end{subequations} where (a) is due to~\eqref{eq:h_ineq3} and (b) is due to the concavity of the binary entropy function $h(\cdot)$.
\section{Proof of Lemma~\ref{lemma:driftentropy}}\label{app:lemma1}
Given any $y^{t}\in \mathcal{Y}^{t}$, $v^{t}\in \mathcal{V}^{t}$ and $s_1\in \mathcal{S}$, \begin{align} \label{eq:onedrift} \mathbb{E}&[H_{t+1} - H_{t}|Y^{t}=y^{t},V^t=v^t,S_1=s_1] \nonumber\\ &= -I(W;Y_{t+1},V_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1)\nonumber \\ &= -H(Y_{t+1}|V_{t+1},Y^{t}=y^{t},V^t=v^t,S_1=s_1)-H(V_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1) \nonumber \\ & \qquad + H(Y_{t+1}|V_{t+1},Y^{t}=y^{t},V^t=v^t,S_1=s_1,W) + H(V_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1,W) \nonumber \\ &\overset{(a)}{=} -H(Y_{t+1}|V_{t+1},Y^{t}=y^{t},V^t=v^t,S_1=s_1)-H(V_{t+1}) \nonumber \\ & \qquad + H(Y_{t+1}|V_{t+1},Y^{t}=y^{t},V^t=v^t,S_1=s_1,W) + H(V_{t+1}) \nonumber \\ &\overset{(b)}{\geq} -H(Y_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1) + H(Y_{t+1}|V_{t+1},Y^{t}=y^{t},V^t=v^t,S_1=s_1,W) \nonumber \\ &\overset{(c)}{=} -H(Y_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1) + H(Y_{t+1}|V_{t+1},Y^{t}=y^{t},V^t=v^t,S_1=s_1,W,S_{t+1},X_{t+1}) \nonumber \\ &\overset{(d)}{=} -H(Y_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1) + H(Y_{t+1}|S_{t+1},X_{t+1}, Y^{t}=y^{t},V^t=v^t,S_1=s_1) \nonumber \\ &= -I(X_{t+1},S_{t+1};Y_{t+1}|Y^{t}=y^{t},V^t=v^t,S_1=s_1), \end{align} where (a) is due to the way the common random variables are selected, (b) is due to the fact that conditioning reduces entropy, (c) is due to the encoding and the deterministic channel state update, and (d) is due to the channel properties. Note that the last term is the mutual information between $X_{t+1},S_{t+1}$ and $Y_{t+1}$ conditioned on $Y^{t}=y^{t},V^t=v^t,S_1=s_1$, which is different from the conditional mutual information $I(X_{t+1},S_{t+1};Y_{t+1}|Y^{t},V^t,S_1)$. Now the $N$-step drift becomes \begin{align} \mathbb{E}&[H_{t+N} - H_{t}|Y^t=y^t,V^t=v^t,S_1=s_1] \nonumber \\ &= \sum_{k=t}^{t+N-1} \mathbb{E}[ \mathbb{E}[H_{k+1} - H_{k}|Y^t=y^t,V^t=v^t, Y_{t+1}^k,V_{t+1}^k,S_1=s_1]|Y^t=y^t,V^t=v^t,S_1=s_1] \nonumber \\ &= \sum_{k=t}^{t+N-1} \sum_{y^k_{t+1},v^k_{t+1}} P(Y^k_{t+1}=y^k_{t+1},V^k_{t+1}=v^k_{t+1}|Y^t=y^t,V^t=v^t,S_1=s_1) \mathbb{E}[H_{k+1} - H_{k}|Y^k=y^k,V^k=v^k,S_1=s_1] \nonumber \\ &\overset{(a)}{\geq} -\sum_{k=t}^{t+N-1} \sum_{y^k_{t+1},v^k_{t+1}} P(Y^k_{t+1}=y^k_{t+1},V^k_{t+1}=v^k_{t+1}|Y^t=y^t,V^t=v^t,S_1=s_1) I(X_{k+1},S_{k+1} ;Y_{k+1}|Y^{k}=y^k,V^{k}=v^k,S_1=s_1)\nonumber \\ &= -\sum_{k=t}^{t+N-1} I(X_{k+1},S_{k+1} ;Y_{k+1}|Y^{k}_{t+1},V^{k}_{t+1},Y^{t}=y^t,V^{t}=v^t,S_1=s_1)\nonumber \\ &\overset{(b)}{\geq} -N(C+\epsilon), \end{align} where (a) is due to~\eqref{eq:onedrift} and (b) is due to~\eqref{eq:capacity}.
\section{Proof of Lemma~\ref{lemma:driftlogentropy}}\label{app:lemma2}
Given any $y^t \in \mathcal{Y} ^t$, $v^t \in \mathcal{V} ^t$ and $s_1\in \mathcal{S} $, \begin{align} \mathbb{E}&[\log(H_{t+N})-\log(H_t)|Y^t=y^t,V^t=v^t,S_1=s_1] =\nonumber \\ & \mathbb{E}[\log\frac{-\sum_{i}P(W=i|Y^{t+N}_{t+1},V^{t+N}_{t+1},y^t,v^t,s_1) \log P(W=i|Y^{t+N}_{t+1},V^{t+N}_{t+1},y^t,v^t,s_1)}{-\sum_{i}P(W=i|y^{t},v^t,s_1) \log P(W=i|y^t,v^t,s_1)}|Y^t=y^t,V^t=v^t,S_1=s_1]. \end{align} For convenience, we define the following quantities \begin{subequations} \begin{align} f_i &= P(W=i|y^t,v^t,s_1) \\ f_i(Y^{t+N}_{t+1},V^{t+N}_{t+1}) &= P(W=i|Y^{t+N}_{t+1},V^{t+N}_{t+1},y^t,v^t,s_1) \\ \hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|i) &= P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=i,y^t,v^t,s_1). \end{align} \end{subequations} Since $H_t < \epsilon$, there exists a $k$ such that $f_k>1-\epsilon/2$ while $f_j<\epsilon/2$ for $j\neq k$. We further define $\hat{f}_j \triangleq f_j/(1-f_k) $ for $j\neq k$. The following approximations are valid for $f_k$ close to 1.
\section{Proof of Lemma~\ref{lemma:driftlogentropy}}\label{app:lemma2} Given any $y^t \in \mathcal{Y} ^t$, $v^t \in \mathcal{V} ^t$ and $s_1\in \mathcal{S} $, \begin{align} \mathbb{E}&[\log(H_{t+N})-\log(H_t)|Y^t=y^t,V^t=v^t,S_1=s_1] =\nonumber \\ & \mathbb{E}[\log\frac{-\sum_{i}P(W=i|Y^{t+N}_{t+1},V^{t+N}_{t+1},y^t,v^t,s_1) \log P(W=i|Y^{t+N}_{t+1},V^{t+N}_{t+1},y^t,v^t,s_1)}{-\sum_{i}P(W=i|y^{t},v^t,s_1) \log P(W=i|y^t,v^t,s_1)}|Y^t=y^t,V^t=v^t,S_1=s_1]. \end{align} For convenience, we define the following quantities \begin{subequations} \begin{align} f_i &= P(W=i|y^t,v^t,s_1) \\ f_i(Y^{t+N}_{t+1},V^{t+N}_{t+1}) &= P(W=i|Y^{t+N}_{t+1},V^{t+N}_{t+1},y^t,v^t,s_1) \\ \hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|i) &= P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|W=i,y^t,v^t,s_1). \end{align} \end{subequations} Since $H_t < \epsilon$, there exists a $k$ such that $f_k>1-\epsilon/2$ while $f_j<\epsilon/2$ for $j\neq k$. We further define $\hat{f}_j \triangleq f_j/(1-f_k) $ for $j\neq k$. The following approximations are valid for $f_k$ close to 1. \begin{subequations} \begin{align} f_k(Y^{t+N}_{t+1},V^{t+N}_{t+1}) \log f_k(Y^{t+N}_{t+1},V^{t+N}_{t+1}) &= -(1-f_k) \frac{\sum_{j\neq k} \hat{f}_j \hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|j)}{\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k)} + o(1-f_k) \\ f_j(Y^{t+N}_{t+1},V^{t+N}_{t+1}) \log f_j(Y^{t+N}_{t+1},V^{t+N}_{t+1}) &= (1-f_k) (\log(1-f_k) + o(\log(1-f_k)))\frac{\hat{f}_j\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|j)}{\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k)} \\ P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|y^t,v^t,s_1) &= \hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k) +o(1). \end{align} \end{subequations} Substituting these approximate expressions back into the drift expression, we have \begin{align} \label{ineq:logHtlowerboundprimitive} \mathbb{E}&[\log (H_{t+N})-\log (H_t)|Y^t=y^t,V^t=v^t,S_1=s_1] \nonumber \\ &= \sum_{Y^{t+N}_{t+1},V^{t+N}_{t+1}} P(Y^{t+N}_{t+1},V^{t+N}_{t+1}|y^t,v^t,s_1) \log \frac{\sum_i f_i(Y^{t+N}_{t+1},V^{t+N}_{t+1})\log f_i(Y^{t+N}_{t+1},V^{t+N}_{t+1})}{\sum_i f_i\log f_i} \nonumber \\ &=\sum_{Y^{t+N}_{t+1},V^{t+N}_{t+1}} \hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k) \log \frac{(1-f_k) (\log(1-f_k) + o(\log(1-f_k)))\sum_{j\neq k}\frac{\hat{f}_j\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|j)}{\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k)}}{(1-f_k)(\log (1-f_k) + o(\log(1-f_k)))} \nonumber \\ &= -\sum_{Y^{t+N}_{t+1},V^{t+N}_{t+1}}\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k) \log \frac{\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|k)}{\sum_{j\neq k}\hat{f}_j\hat{Q}(Y^{t+N}_{t+1},V^{t+N}_{t+1}|j)} +o(1)\nonumber \\ &\geq -N(C_1 + \epsilon), \end{align} where the last inequality is due to the definition of $C_1$. \section{Proof of Lemma~\ref{lemma:newsubmartingale}}\label{app:lemma3} We can always choose a sufficiently small positive $\lambda$ such that \begin{subequations} \begin{align} \frac{H^*}{K_1}(e^y-1) &< \frac{y}{K_2} + f(y) &\qquad -K_3 < y < 0 \label{ineq:subnegy} \\ \frac{H^*}{K_1}(e^y-1) &> \frac{y}{K_2} + f(y) &\qquad 0 < y < K_3 \label{ineq:subposy} \\ \frac{1}{K_2} + f'(y) &> 0 &\qquad -K_3 < y < 0 \label{ineq:subdiff} \\ 1-\frac{\lambda e^{\lambda K_3}}{2K_2}K_3^2 &> 0. & \label{ineq:subsub} \end{align} \end{subequations} 
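It is easy to convince oneself numerically that such a $\lambda$ exists. The sketch below checks the four conditions on a grid, assuming for illustration the exponential form $f(y)=(1-e^{\lambda y})/(\lambda K_2)$, which is consistent with steps (b) and (e) below and with the upper bound on $f$ used in Appendix~\ref{app:proposition}, together with placeholder values for the constants (not the values used in the main text):
\begin{verbatim}
import numpy as np

Hstar, K1, K2, K3, lam = 1.0, 1.0, 1.0, 2.0, 0.05   # illustrative constants

f  = lambda y: (1 - np.exp(lam*y)) / (lam*K2)       # assumed form of f
fp = lambda y: -np.exp(lam*y) / K2                  # its derivative

yneg = np.linspace(-K3 + 1e-6, -1e-6, 10000)
ypos = np.linspace(1e-6, K3 - 1e-6, 10000)

assert np.all(Hstar/K1*(np.exp(yneg)-1) < yneg/K2 + f(yneg))  # ineq:subnegy
assert np.all(Hstar/K1*(np.exp(ypos)-1) > ypos/K2 + f(ypos))  # ineq:subposy
assert np.all(1/K2 + fp(yneg) > 0)                            # ineq:subdiff
assert 1 - lam*np.exp(lam*K3)/(2*K2)*K3**2 > 0                # ineq:subsub
print("all four conditions hold for lambda =", lam)
\end{verbatim}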
We first consider the case $H_t > H^*$. \begin{align} Z_{t+1} &= (\frac{H_{t+1}-H^*}{K_1}+t+1)1_{\{H_{t+1}>H^*\}}+(\frac{\log \frac{H_{t+1}}{H^*}}{K_2}+t+1+f(\log \frac{H_{t+1}}{H^*}))1_{\{H_{t+1}\leq H^*\}} \nonumber\\ &\overset{(a)}{\geq} (\frac{H_{t+1}-H^*}{K_1}+t+1)1_{\{H_{t+1}>H^*\}}+(\frac{H_{t+1}-H^*}{K_1}+t+1)1_{\{H_{t+1}\leq H^*\}} \nonumber\\ & = \frac{H_{t+1}-H^*}{K_1}+t+1, \end{align} where (a) is due to \eqref{ineq:subnegy}. Therefore we have \begin{align} \mathbb{E}[Z_{t+1}-Z_t|\mathcal{F}_t] &= \mathbb{E}[Z_{t+1}1_{\{H_{t} > H^*\}}-Z_t1_{\{H_{t} > H^*\}}|\mathcal{F}_t] \nonumber\\ &\geq \mathbb{E}[(\frac{H_{t+1}-H^*}{K_1}+t+1)1_{\{H_{t} > H^*\}}-(\frac{H_{t}-H^*}{K_1}+t)1_{\{H_{t} > H^*\}}|\mathcal{F}_t] \nonumber \\ &\geq 0, \label{ineq:subHtbig} \end{align} where the last inequality is due to \eqref{ineq:entropy}. Similarly, for the case $H_t \leq H^*$, from \eqref{ineq:subposy} we have \begin{equation} Z_{t+1} \geq \frac{\log \frac{H_{t+1}}{H^*}}{K_2}+t+1+f(\log \frac{H_{t+1}}{H^*}), \end{equation} and therefore \begin{align} \mathbb{E}&[Z_{t+1}-Z_t|\mathcal{F}_t] \nonumber \\ &\geq \mathbb{E}[\frac{\log \frac{H_{t+1}}{H^*}}{K_2}+t+1+f(\log \frac{H_{t+1}}{H^*})-\frac{\log \frac{H_{t}}{H^*}}{K_2}-t-f(\log \frac{H_{t}}{H^*})|\mathcal{F}_t]\nonumber \\ &\overset{(a)}{=} \mathbb{E}[(\frac{1}{K_2}+f'(\log\frac{H_{t+1}}{H^*}))(\log\frac{H_{t+1}}{H^*}-\log\frac{H_{t}}{H^*})+1+\frac{f''(Z(H_{t+1},H_t))}{2}(\log\frac{H_{t+1}}{H^*}-\log\frac{H_{t}}{H^*})^2|\mathcal{F}_t]\nonumber \\ &\overset{(b)}{\geq} \mathbb{E}[-K_2f'(\log\frac{H_{t+1}}{H^*})+\frac{f''(Z(H_{t+1},H_t))}{2}(\log\frac{H_{t+1}}{H^*}-\log\frac{H_{t}}{H^*})^2|\mathcal{F}_t] \nonumber \\ &= \mathbb{E}[e^{\lambda\log\frac{H_{t+1}}{H^*}}+\frac{-\lambda e^{\lambda (Z(H_{t+1},H_t)-\log\frac{H_{t+1}}{H^*}+\log\frac{H_{t+1}}{H^*})}}{2K_2}(\log\frac{H_{t+1}}{H^*}-\log\frac{H_{t}}{H^*})^2|\mathcal{F}_t] \nonumber \\ &\overset{(c)}{\geq} \mathbb{E}[e^{\lambda\log\frac{H_{t+1}}{H^*}}-\frac{\lambda e^{\lambda (K_3+\log\frac{H_{t+1}}{H^*})}}{2K_2}(\log\frac{H_{t+1}}{H^*}-\log\frac{H_{t}}{H^*})^2|\mathcal{F}_t] \nonumber \\ & \overset{(d)}{\geq} (1-\frac{\lambda e^{\lambda K_3}}{2K_2}K_3^2) \mathbb{E}[e^{\lambda\log\frac{H_{t+1}}{H^*}}|\mathcal{F}_t] \nonumber \\ & \overset{(e)}{\geq} 0,\label{ineq:subHtsmall} \end{align} where (a) is from the second-order Taylor expansion of $f$ at $\log\frac{H_t}{H^*}$, (b) is due to \eqref{ineq:logentropysmall} and \eqref{ineq:subdiff}, (c) is due to the fact that $Z(H_{t+1},H_t)$ lies between $\log\frac{H_t}{H^*}$ and $\log\frac{H_{t+1}}{H^*}$, (d) is due to \eqref{ineq:logentropyall}, and (e) is due to \eqref{ineq:subsub}. From \eqref{ineq:subHtbig} and \eqref{ineq:subHtsmall}, we have $\mathbb{E}[Z_{t+1}-Z_t|\mathcal{F}_t]\geq 0$ and thus $(Z_t)_{t\geq 0}$ is a submartingale. \section{Proof of Theorem~\ref{th:main}}\label{app:proposition} The proof essentially applies Lemma~\ref{lemma:newsubmartingale} to the ``block'' submartingale. Given any $\epsilon>0$, there exists an $N=N(\epsilon)$ such that by Lemma~\ref{lemma:driftentropy} and Lemma~\ref{lemma:driftlogentropy}, \begin{subequations} \begin{align} \mathbb{E}[H_{N(t+1)}-H_{Nt}|\mathcal{F}_{Nt}] &\geq -N(C+\epsilon) \\ \mathbb{E}[\log H_{N(t+1)}-\log H_{Nt}|\mathcal{F}_{Nt}] &\geq -N (C_1 + \epsilon) \qquad \text{if } H_{Nt}<\epsilon \\ |\log H_{N(t+1)}-\log H_{Nt}| &< NC_2 \qquad\qquad \text{if } H_{Nt}<\epsilon. \end{align} \end{subequations} Define $M_{t'}=Z_{N t'}$, where $Z_t$ is defined in~\eqref{def:newsubmartingale}, and the filtration $\mathcal{F}'_{t'} = \sigma(Y^{Nt'},V^{Nt'},S_1)$. Then $(M_{t'})_{t'\geq 0}$ is a submartingale w.r.t. $(\mathcal{F}'_{t'})_{t'\geq 0}$ by Lemma~\ref{lemma:newsubmartingale}. Notice that the index $t'$ here labels blocks of $N$ consecutive transmissions. Furthermore, define the stopping time $\hat{T}$ w.r.t. $(\mathcal{F}'_{t'})_{t'\geq 0}$ by $\hat{T} = \min\{k \,|\, T \leq Nk\}$. By the definition of $\hat{T}$, we have \begin{equation} \label{ineq:twostoppingtime} (\hat{T}-1)N \leq T \qquad a.s. 
\end{equation} Now we apply the optional sampling theorem to the submartingale $(M_{t'})_{t'\geq 0}$ as follows \begin{align} \frac{K-\epsilon}{N(C+\epsilon)}& =M_0 \nonumber \\ &\leq \mathbb{E}[M_{\hat{T}}]\nonumber \\ &= \mathbb{E}[(\frac{\log H_{N\hat{T}}-\log \epsilon}{N (C_1 + \epsilon) } +f(\log \frac{H_{N\hat{T}}}{\epsilon}))1_{H_{N\hat{T}}\leq\epsilon}] + \mathbb{E}[( \frac{H_{N\hat{T}}-\epsilon}{N(C+\epsilon)} )1_{H_{N\hat{T}}>\epsilon}] + \mathbb{E}[\hat{T}] \nonumber \\ &\leq \mathbb{E}[\frac{\log H_{N\hat{T}}+|\log \epsilon|}{N (C_1 + \epsilon) } +f(\log \frac{H_{N\hat{T}}}{\epsilon})] + \mathbb{E}[ \frac{H_{N\hat{T}}+\epsilon}{N(C+\epsilon)} ] + \mathbb{E}[\hat{T}] \nonumber \\ &\overset{(a)}{=} \mathbb{E}[\frac{\log H_{T}+|\log \epsilon|}{N (C_1 + \epsilon) } +f(\log \frac{H_{T}}{\epsilon})] + \mathbb{E}[ \frac{H_{T}+\epsilon}{N(C+\epsilon)} ] + \mathbb{E}[\hat{T}] \nonumber \\ &\overset{(b)}{\leq} \frac{\log \mathbb{E}[H_{T}]+|\log \epsilon|}{N (C_1 + \epsilon) } +\frac{1}{\lambda NC_1} + \frac{\mathbb{E}[H_{T}]+\epsilon}{N(C+\epsilon)} + \mathbb{E}[\hat{T}] \nonumber \\ &\overset{(c)}{\leq} \frac{\log \mathbb{E}[H_{T}]+|\log \epsilon|}{N (C_1 + \epsilon) } +\frac{1}{\lambda NC_1} + \frac{\mathbb{E}[H_{T}]+\epsilon}{N(C+\epsilon)} + \frac{\mathbb{E}[T]}{N}+1 \nonumber \\ &\overset{(d)}{\leq} \frac{\log(P_e(K-\log P_e)-\log(1-P_e) )+|\log \epsilon|}{N (C_1 + \epsilon) } + \frac{1}{\lambda NC_1} + \frac{P_e(K-\log P_e)-\log(1-P_e) +\epsilon}{N(C+\epsilon)}+\frac{\mathbb{E}[T]}{N}+1 \nonumber \\ &\leq \frac{\log P_e + \log(K-\log P_e)+\Delta+|\log \epsilon | }{N (C_1 + \epsilon) } + \frac{1}{\lambda NC_1}+ \frac{P_e(K-\log P_e)-\log(1-P_e) +\epsilon}{N(C+\epsilon)} +\frac{\mathbb{E}[T]}{N}+1, \end{align} where (a) is due to the fact that the receiver no longer performs actions after time $T$, (b) is due to the concavity of $\log(\cdot)$ and the fact that $f$ is upper-bounded by $\frac{1}{\lambda NC_1}$, (c) is due to \eqref{ineq:twostoppingtime}, (d) is due to Fano's inequality, and $\Delta=\log(1-\frac{\log(1-P_e)}{P_e(K-\log P_e)})$. Multiplying both sides of the above inequality by $N$ and rearranging terms, we get \begin{align} -\frac{\log P_e}{\mathbb{E}[T]} & \leq C_1(1-\frac{\overline{R}}{C}) + \frac{ \log(K-\log P_e)+ \Delta + |\log \epsilon|}{K/\overline{R}} \nonumber \\ & +\frac{C_1+\epsilon}{K/\overline{R}}( \frac{1}{\lambda C_1}+\frac{-P_e \log P_e -\log(1-P_e)+2\epsilon}{C+\epsilon}+N) \nonumber \\ & + \frac{\overline{R}(C_1+\epsilon)P_e}{C+\epsilon} + \epsilon(1-\frac{1+C_1/C}{C+\epsilon}\overline{R}). \label{ineq:result} \end{align} Taking the limit $K\rightarrow \infty$, the error terms on the RHS reduce to $\frac{\overline{R}(C_1+\epsilon)P_e}{C+\epsilon} + \epsilon(1-\frac{1+C_1/C}{C+\epsilon}\overline{R})$; after also taking the limit $P_e \rightarrow 0$ we are left with $\epsilon(1-\frac{1+C_1/C}{C+\epsilon}\overline{R})=o_{\epsilon}(1)$. \bibliographystyle{IEEEtran} \input{allerton17_aw_final.bbl} \end{document}
\section{Introduction} \label{sec:intro} Lattice QCD is the method of choice for the accurate calculation of hadronic matrix elements needed for a huge range of precision particle physics phenomenology aimed at uncovering new physics. Compelling evidence of new physics in the comparison of experiment to the Standard Model (SM) has so far proved elusive, however, and this is driving the need for smaller and smaller uncertainties on both sides. This means that the error bars from lattice QCD calculations must be reduced to sub-1\% levels. Here we address uncertainties coming from the renormalisation of lattice QCD operators to match their continuum QCD counterparts. This renormalisation is needed so that the hadronic matrix elements of the operators calculated in lattice QCD can be used in continuum phenomenology. Ideally the uncertainty from the renormalisation factors, $Z$, should be much less than other lattice QCD uncertainties (such as statistical errors) in the hadronic matrix element calculation. Defining QCD on a space-time lattice provides an ultraviolet cutoff on the theory of $\pi/a$ where $a$ is the lattice spacing. This is a different regularisation than that used in continuum formulations of QCD and hence we expect a finite renormalisation to be required to match lattice QCD and continuum operators. This renormalisation takes account of the differing ultraviolet behaviour in the two cases and hence can be calculated as a perturbative series in the strong coupling constant, $\alpha_s$, at a scale related to the ultraviolet cutoff. Lattice QCD perturbation theory is notoriously difficult, however, and very few renormalisation constants have been calculated beyond $\mathcal{O}(\alpha_s)$ (for an example of a two-loop renormalisation in lattice QCD perturbation theory see~\cite{Mason:2005bj}). It therefore seems clear that this route will not give accurate enough results for the future. Instead we concentrate here on other approaches that can be implemented using results from within the nonperturbative lattice QCD calculation. These approaches will typically still need to make use of perturbation theory to provide a full matching to a preferred continuum scheme such as $\overline{\text{MS}}$, but if this perturbation theory can be done in the continuum to high order then much improved accuracy should be possible. At the heart of these nonperturbative-on-the-lattice approaches is always the idea that we can construct a short-distance operator on the lattice whose leading term in an operator product expansion is the operator that we wish to study. The matrix elements that we calculate on the lattice, and use to determine $Z$, will be dominated by those from the leading operator. There will inevitably be contamination, however, from subleading terms in the expansion, i.e. higher-dimension operators multiplied by inverse powers of some scale. This means then that nonperturbative artefacts can enter the determination of $Z$ and these must be understood and controlled in order to make use of the $Z$ obtained~\cite{Lytle:2018evc}. Here we will study the renormalisation factor $Z_V$ associated with the flavour-diagonal vector current that couples to the photon. This current is conserved in continuum QCD and has no anomalous dimension. Hence we can study the lattice QCD determination of $Z_V$ directly, and its dependence on the lattice spacing, without having to combine it with a matrix element for the vector current determined in lattice QCD. 
$Z_V$ is a special case of a renormalisation constant that can be calculated exactly in lattice QCD, i.e. without the need for any continuum perturbation theory and without nonperturbative artefact contamination. It is important to use a method that allows for such a calculation if we want an accurate normalisation. It is possible to write down conserved vector currents in lattice QCD and use these, knowing that they do not require renormalisation because there is an exact vector Ward-Takahashi identity. Conserved vector currents are not generally used, however, because they are complicated objects, especially for discretisations of QCD that are highly improved. The removal of tree-level discretisation errors at $\mathcal{O}(a^2)$ from the covariant derivative in the Dirac equation requires the addition of operators that extend over three links~\cite{Naik:1986bn}. The conserved current then contains both one-link and three-link terms and this is the case for the Highly Improved Staggered Quark (HISQ) action that we will use here (see Appendix~\ref{app:cons-curr}). We demonstrate explicitly how the vector Ward-Takahashi identity works in this case. The HISQ action was designed~\cite{Follana:2006rc} to have very small discretisation effects and this allows its use to study both light and heavy quark phenomenology~\cite{Follana:2007uv}. Whenever a vector current is needed for phenomenology, however, it is much easier to use a nonconserved local (or simple one-link point-split) vector current than the conserved one~\cite{Donald:2012ga, Donald:2013pea, Donald:2013sra}. This must then be renormalised. Renormalisation schemes for nonconserved currents that make use (not necessarily explicitly) of ratios of matrix elements for conserved and nonconserved vector currents have a special status because nonperturbative contributions from higher dimension operators are suppressed by powers of $a^2$. They give renormalisation constants, $Z_V$, for nonconserved lattice vector currents that are exact in the $a \rightarrow 0$ limit. Such a $Z_V$ can then be combined with a matrix element of that nonconserved current in the lattice QCD calculation and the result extrapolated to zero lattice spacing. The same answer will be obtained in that limit with any such $Z_V$. Following the discussion of perturbative matching earlier we can think of an exact $Z_V$ as consisting of a perturbative series in $\alpha_s$ that depends on the form of the vector current (and also in principle terms arising from small instantons~\cite{Novikov:1984rf} or other nonperturbative effects of this kind) plus discretisation effects that depend on the scheme and vanish as $a \rightarrow 0$~\cite{Vladikas:2011bp}. Note that we do not need to know what the perturbative series is; the method is completely nonperturbative. Which exact $Z_V$ to use is then simply an issue of numerical cost to achieve a given uncertainty and/or convenience. One standard exact method for renormalising nonconserved vector currents in lattice QCD is to require (electric) charge conservation, i.e.\ that the vector form factor between identical hadrons at zero momentum transfer should have value 1. Since this result would be obtained for the conserved current, $Z_V$ is implicitly a ratio of nonconserved to conserved current matrix elements between the two hadrons. This method is numerically fairly costly because it requires the calculation of two-point and three-point correlation functions. 
It can give numerically accurate results ($\mathcal{O}(0.1\%)$ uncertainties) when averaged over a sufficiently large sample (hundreds) of gluon field configurations. As above, we expect the $Z_V$ determined from this method (which we will denote $Z_V(\mathrm{F(0)})$) to be equal to a perturbative matching factor up to discretisation effects. This was tested by HPQCD in Appendix B of~\cite{Chakraborty:2017hry} for the local vector current made of HISQ quarks. Values for $Z_V^{\mathrm{loc}}(\mathrm{F(0)})$ were calculated at multiple values of the lattice spacing and gave a good fit to a perturbative expansion in $\alpha_s$ plus discretisation effects, constraining the $\mathcal{O}(\alpha_s)$ coefficient to have the known value determined in lattice QCD perturbation theory. Alternative methods of determining renormalisation factors by defining a variety of momentum-subtraction schemes on the lattice~\cite{Martinelli:1994ty,Chetyrkin:1999pq,Aoki:2007xm,Sturm:2009kb} can produce precise results for $Z$ factors at lower computational cost. However, only some of these schemes are exact for $Z_V$ in the sense defined above. The momentum-subtraction schemes define $Z_V$ from the ratio of two matrix elements calculated between external quark states of large virtuality, $\mu^2$, in a fixed gauge. Working at large $\mu^2$ is part of the definition of these schemes because nonperturbative contributions from higher-dimension operators will in general be suppressed by inverse powers of $\mu^2$, and not by powers of $a^2$ as above. A wavefunction renormalisation factor is determined from the quark propagator. A vertex renormalisation factor comes from an amputated vertex function for the vector current, on which momentum-subtraction renormalisation conditions have been imposed. $Z_V$ is then obtained as the ratio of these two factors, with tiny statistical errors from a handful of gluon field configurations if `momentum sources' are used~\cite{Gockeler:1998ye}. The momentum-subtraction scheme known as RI-SMOM~\cite{Sturm:2009kb} is constructed around the Ward-Takahashi identity and so designed to give $Z_V=1$ for the lattice conserved current. We show explicitly that this is true for the HISQ action. This means that implementing the RI-SMOM scheme for nonconserved currents is equivalent to taking a ratio of vector vertex functions for conserved and nonconserved currents. We compare the $Z_V$ values obtained in the RI-SMOM scheme, $Z_V({\text{SMOM}})$, to those from the form factor method for the local vector HISQ current. We are able to show that, as expected, $Z_V^{\text{loc}}({\text{SMOM}})$ differs from $Z_V^{\text{loc}}({F(0)})$ only by discretisation effects so that the two methods will give the same answer for physical matrix elements in the continuum limit. A popular momentum-subtraction scheme that does {\it not} make use of the vector Ward-Takahashi identity is the RI$'$-MOM scheme~\cite{Martinelli:1994ty,Chetyrkin:1999pq}. We show that in this scheme the $Z_V$ values for both the conserved and local vector currents are not exact but have contamination from nonperturbative (condensate) artefacts that survive the continuum limit. To make use of this scheme $Z_V$ must be redefined to use instead a ratio of the vector vertex functions for conserved and nonconserved currents. We show the results from implementing this method. We stress here that we are determining $Z_V$ very precisely and hence comparing values with uncertainties at the 0.1\% level. 
Previous work has compared values for $Z_V$ for nonconserved currents from methods that use Ward identities and the RI$'$-MOM scheme (for example~\cite{Becirevic:2004ny, Constantinou:2010gr}) and concluded that there was agreement at the 1\% level. Our more accurate results show clear disagreement, most obviously in the analysis for the conserved current. Our earlier argument that 0.1\% accuracy is needed for renormalisation constants in pure lattice QCD can be extended when we study the impact of adding QED effects. When we allow the valence quarks to have electric charge (i.e. adding quenched QED to lattice QCD) we see a tiny impact (less than 0.1\%) on $Z_V$ using the HISQ action. We can now quantify and analyse this effect using the RI-SMOM scheme, having established that the nonperturbative $Z_V$ values behave correctly. The paper is laid out as follows: We first discuss in Section~\ref{sec:ward} the exact lattice vector Ward-Takahashi identity that gives the conserved vector current for the HISQ action. We then give a brief overview of the momentum-subtraction schemes, called RI-SMOM and RI$'$-MOM, that we will use (abbreviating the names to SMOM and MOM) in Section~\ref{sec:MOM-schemes} and, following that, a brief description of our lattice set-up in Section~\ref{sec:latt}. We show how the Ward-Takahashi identity works for the HISQ action in Section~\ref{subsec:wti-latt} so that the conserved current is not renormalised. This is then translated into the RI-SMOM scheme in Section~\ref{subsec:SMOMcons} where $Z_V=1$ is obtained for the conserved current at all $a$ and $\mu$ values. For RI$'$-MOM, however, condensate contributions are clearly evident in the $Z_V$ values for the conserved current as shown in Section~\ref{subsec:MOMcons}. In Sections~\ref{subsec:SMOMloc} and~\ref{subsec:MOMloc} we demonstrate the impact of the protection from the Ward-Takahashi identity on the renormalisation factors for the simple local vector current ($\overline{\psi}\gamma_{\mu}\psi$ with the fields at the same space-time point). The difference between $Z_V^{\text{loc}}(\text{SMOM})$ and $Z_V^{\text{loc}}(\text{F(0)})$ is purely a discretisation effect; in the RI$'$-MOM scheme we demonstrate how to achieve the same outcome with a renormalisation factor that is a ratio between that for the local and conserved currents. In Section~\ref{sec:QED} we show the impact of quenched QED on the $Z_V$ values obtained for the local current in the RI-SMOM scheme and compare to our expectations based on the work in earlier sections. Finally, in Section~\ref{sec:conclusions} we discuss the implications of these results for ongoing and future calculations and give our conclusions. A similar picture to that for the local current is seen for the 1-link point-split vector current and we give the RI-SMOM results for this case in~Appendix~\ref{app:1link}. We reiterate the shorthand notation that we will use for the renormalisation constants for clarity. $Z_V^{\text{x}}(\text{A})$ renormalises the lattice vector current x (cons, loc, 1link) to match the continuum current (in e.g. $\overline{\text{MS}}$) and has been calculated in the scheme A (F(0), SMOM, MOM). 
\section{The vector Ward-Takahashi identity on the lattice} \label{sec:ward} For both continuum and lattice~\cite{Karsten:1980wd, Bochicchio:1985xa} actions the derivation of the vector Ward-Takahashi identity proceeds from the observation that the path integral is invariant under a local change of the fermion field variables $\psi$ and $\overline{\psi}$ (only) that has unit Jacobian. Then \begin{equation} \label{eq:pathint} \int \mathcal{D}\psi\mathcal{D}\overline{\psi}e^{-S[\psi^{\epsilon},\overline{\psi}^{\epsilon}]}f(\psi^{\epsilon},\overline{\psi}^{\epsilon})=\langle f(\psi,\overline{\psi}) \rangle \, . \end{equation} An example of such a transformation is to multiply $\psi$, say at point $x$, by a phase $e^{i\epsilon}$ and $\overline{\psi}(x)$ by $e^{-i\epsilon}$: \begin{equation} \label{eq:psieps} \psi(z) \rightarrow \psi^{\epsilon}(z) \equiv \begin{cases} e^{i\epsilon} \psi(x) & \mbox{for $z = x$} \\ \psi(z) & \mbox{for $z \ne x$} \, . \end{cases} \end{equation} Expanding Eq.~(\ref{eq:pathint}) to first order in $\epsilon$ and denoting $\Delta X = X^{\epsilon}-X$ to this order gives \begin{equation} \label{eq:deltapi} \langle - \Delta S \cdot f + \Delta f \rangle = 0 \, . \end{equation} If we consider the path integral for the two-point correlator $\langle \overline{\psi}(y_1) \psi(y_2) \rangle$ then $\Delta f$ becomes the difference of propagators from the points $y_1$ and $y_2$ to $x$. $\Delta S$ can be recast into the form $\Delta_{\mu}J^{\mu}$, allowing us to identify the conserved current $J$ associated with $S$. We have \begin{align} \label{eq:ward-pos} \langle \Delta_{\mu} J^{\mu}(x) \overline{\psi}(y_1) \psi(y_2) \rangle &= \delta_{y_2,x} \langle \overline{\psi}(y_1)\psi(x) \rangle \\ &- \delta_{y_1,x} \langle \overline{\psi}(x)\psi(y_2) \rangle \nonumber . \end{align} The right-hand side is zero unless $y_1$ or $y_2$ overlaps with $x$ (and not with each other). Note that $\Delta_{\mu} J^{\mu}(x)$ is centred on the point $x$. On the lattice $\Delta_{\mu}$ can be a simple forward ($\Delta_{\mu}^+$) or backward ($\Delta_{\mu}^-$) finite difference over one link. The current $J^{\mu}$ must then be chosen appropriately so that \begin{eqnarray} \label{eq:deltadef} \Delta^{\mu}J_{\mu}(x) &\equiv& \Delta^{\mu, +}J_{\mu}^- = \sum_{\mu} \left(J_{\mu}^-(x+\hat{\mu}) - J_{\mu}^-(x)\right) \\ &\equiv& \Delta^{\mu, -}J_{\mu}^+ = \sum_{\mu} \left(J_{\mu}^+(x) - J_{\mu}^+(x-\hat{\mu})\right) . \nonumber \end{eqnarray} We give $J_{\mu}^+$ for the HISQ action~\cite{Follana:2006rc} that we use in Appendix~\ref{app:cons-curr}. As discussed in the Introduction, it is rather complicated. It contains a number of 3-link terms because of the Naik term~\cite{Naik:1986bn} that removes tree-level $a^2$ errors in the action. The position-space Ward-Takahashi identity of Eq.~(\ref{eq:ward-pos}) provides a test of the implementation of the conserved current and we have checked that this works for our implementation exactly on a single gluon-field configuration for a variety of choices of $y_1$ and $y_2$. We can perform the exact Fourier transform on the lattice of Eq.~(\ref{eq:ward-pos}). The left-hand side becomes \begin{eqnarray} \label{eq:wardmomlhs} &&(1-e^{iaq_{\mu}}) \times \\ &&\int d^4xd^4y_1 d^4y_2 e^{iqx}e^{-ip_1y_1}e^{ip_2y_2}\langle J^{\mu,+}(\tilde{x})\overline{\psi}(y_1)\psi(y_2) \rangle \nonumber \end{eqnarray} where $a$ is the lattice spacing and we take $q=p_1-p_2$. $\tilde{x}$ is the mid-point of the link between $x$ and $x+\hat{\mu}$. 
The right-hand side becomes \begin{eqnarray} \label{eq:wipropmom} &&\int d^4x d^4y_1 e^{ip_1x}e^{-ip_1y_1} \langle \overline{\psi}(y_1)\psi(x) \rangle \nonumber \\ &&-\int d^4x d^4y_2 e^{-ip_2x}e^{ip_2y_2} \langle \overline{\psi}(x)\psi(y_2) \rangle \nonumber \\ &&\hspace{3em}\equiv S(p_1)-S(p_2) \end{eqnarray} where $S$ is the quark propagator. Then, multiplying on both sides by the product of inverse quark propagators, we reach the lattice version of the standard expression for the Ward-Takahashi identity, \begin{equation} \label{eq:ward-mom} \frac{-2i}{a} \sin\left(\frac{aq_{\mu}}{2}\right) \Lambda_V^{\mu,+}(p_1,p_2) = -S^{-1}(p_1) + S^{-1}(p_2) . \end{equation} $\Lambda_V^{\mu,+}$ is the amputated vertex function for the vector current $J^{\mu,+}$ (absorbing a factor of $e^{iaq_{\mu}/2}$ into the vertex function since $J^{\mu,+}$ sits on a link). This equation is exact, gluon-field configuration by configuration, in lattice QCD and we will demonstrate this for the HISQ action in Section~\ref{subsec:wti-latt}. As is well-known, Eq.~(\ref{eq:ward-mom}) tells us that any rescaling of the vertex by renormalisation on the left-hand side has to match rescaling of the inverse propagators on the right-hand side. This means that $J^{\mu,+}$ is not renormalised, i.e. that the renormalisation factor for this conserved current, $Z_V^{\text{cons}}$=1. Since this is also true for the conserved current in the continuum $\overline{\text{MS}}$ scheme then the matrix elements of the lattice conserved current will agree in the continuum limit with those in the $\overline{\text{MS}}$ scheme. A renormalised nonconserved vector current, written for example as $Z_V^{\text{loc}}V^{\text{loc},\mu}$ for a local current, obeys the same equations as $J^{\mu,+}$ since it is by definition the same operator up to discretisation effects on the lattice~\cite{Vladikas:2011bp}. For the HISQ action \begin{equation} \label{eq:cons-loc} J_{\mu}^+ = Z_V^{\text{loc}}V^{\text{loc}}_{\mu} + \mathcal{O}(a^2) . \end{equation} Again this is well-known, but we point it out here because it has implications for the accuracy of the determination of $Z_V^{\text{loc}}$ on the lattice. It means that, if $Z_V^{\text{loc}}$ is determined by a procedure which uses the Ward-Takahashi identity and gives 1 for the renormalisation of $J^{\mu,\pm}$, then $Z_V^{\text{loc}}$ must be free of systematic errors from nonperturbative (condensate) artefacts in the continuum limit because these must cancel between the left- and right-hand sides of Eq.~(\ref{eq:ward-mom}). $Z_V^{\text{loc}}$ can in principle be determined by substituting $Z_V^{\text{loc}}V^{\text{loc}}$ into the left-hand side of Eq.~(\ref{eq:ward-mom}) for any $p_1$ and $p_2$. Hadronic matrix elements of $Z_V^{\text{loc}}V^{\text{loc}}$ will then differ from the results in the continuum purely by discretisation effects (which will depend on $p_1$ and $p_2$) that can be extrapolated away straightforwardly using results at multiple values of the lattice spacing. The $Z_V$ so obtained is completely nonperturbative. Using Eq.~(\ref{eq:ward-mom}) in its full generality is unnecessarily complicated and there are lattice QCD methods that make use of it in specific, and simpler, kinematic configurations. As $q \rightarrow 0$ the identity of Eq.~(\ref{eq:ward-mom}) can be used to show that the vector form factor for the conserved current between quark or hadron states of the same momentum will be unity. The inverse of the vector form factor at the same kinematic point for a nonconserved current then gives its $Z_V$ value. This method clearly satisfies the criteria above for an exact determination of $Z_V$. 
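To illustrate the mechanics of Eq.~(\ref{eq:ward-mom}), the short Python sketch below checks it at tree level for the simpler, unimproved one-link (naive) discretisation, for which the free inverse propagator is $S^{-1}(p)=m+(i/a)\sum_{\mu}\gamma_{\mu}\sin(ap_{\mu})$ and the conserved-current vertex is $\Lambda_V^{\mu,+}=\gamma_{\mu}\cos(a(p_1+p_2)_{\mu}/2)$ once the $e^{iaq_{\mu}/2}$ phase is absorbed as above. The HISQ conserved current adds three-link Naik pieces (Appendix~\ref{app:cons-curr}), but the cancellation works in the same way:
\begin{verbatim}
import numpy as np

# a Euclidean gamma-matrix basis with {g[mu], g[nu]} = 2*delta_{mu,nu}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g = [np.block([[Z2, -1j*s], [1j*s, Z2]]) for s in (sx, sy, sz)]
g.append(np.block([[Z2, I2], [I2, Z2]]))

a, m = 1.0, 0.1                       # lattice spacing and quark mass (arbitrary)
rng = np.random.default_rng(0)
p1 = rng.uniform(-np.pi/2, np.pi/2, 4)
p2 = rng.uniform(-np.pi/2, np.pi/2, 4)
q = p1 - p2

def Sinv(p):                          # free one-link inverse propagator
    return m*np.eye(4) + (1j/a)*sum(np.sin(a*p[mu])*g[mu] for mu in range(4))

# tree-level conserved-current vertex with the e^{iaq/2} phase absorbed
Lam = [np.cos(a*(p1[mu] + p2[mu])/2) * g[mu] for mu in range(4)]

lhs = sum((-2j/a)*np.sin(a*q[mu]/2) * Lam[mu] for mu in range(4))
rhs = -Sinv(p1) + Sinv(p2)
print(np.max(np.abs(lhs - rhs)))      # ~1e-16: the identity holds exactly
\end{verbatim}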
We now discuss momentum-subtraction renormalisation schemes on the lattice and the extent to which they make use of Eq.~(\ref{eq:ward-mom}). \section{Momentum-subtraction schemes used on the lattice} \label{sec:MOM-schemes} Momentum-subtraction schemes are useful intermediate schemes between the lattice regularisation and the continuum $\overline{\text{MS}}$ scheme in which it is now standard to quote results for scheme-dependent quantities. If the same momentum-subtraction scheme is implemented both in lattice QCD and in continuum QCD then the continuum limit of the lattice results will be in the continuum momentum-subtraction scheme (and should be independent of lattice action at that point). They can then be converted to the $\overline{\text{MS}}$ scheme using continuum QCD perturbation theory. A momentum-subtraction scheme imposes renormalisation conditions on matrix elements between (in the cases we consider) external quark states so that the tree-level result is obtained, i.e.\ $Z_{\Gamma}$ is defined by \begin{equation} \label{eq:momgen} Z_{\Gamma} \langle p_1 | O_{\Gamma} | p_2 \rangle = \langle p_1 | O_{\Gamma} | p_2 \rangle_{\text{tree}} \end{equation} for some operator $O_{\Gamma} = \overline{\psi} \Gamma \psi$, and $\langle p_1 |$ and $|p_2 \rangle$ external quark states with momenta $p_1$ and $p_2$, typically taken to have large magnitude. This calculation must of course be done in a fixed gauge, and this is usually taken to be Landau gauge, which can be straightforwardly implemented in lattice QCD. Effects from the existence of Gribov copies under the gauge-fixing could arise in general; here we show that there are no such effects for $Z_V$ determined using the Ward-Takahashi identity. Here we will concentrate on the RI-SMOM scheme~\cite{Aoki:2007xm,Sturm:2009kb}. This scheme uses a symmetric kinematic configuration with only one scale so that $p_1^2=p_2^2=q^2=\mu^2$ (where $q=p_1-p_2$). The wavefunction renormalisation is defined (using continuum notation) by \begin{equation} \label{eq:zqdef} Z_q = - \frac{i}{12p^2}\mathrm{Tr}(\slashed{p}S^{-1}(p)) . \end{equation} The vector current renormalisation follows from requiring \begin{equation} \label{eq:zvdef} \frac{Z_V}{Z_q}\frac{1}{12q^2} \mathrm{Tr}(q_{\mu}\Lambda_V^{\mu}(p_1,p_2)\slashed{q}) = 1 . \end{equation} The traces here are over spin and colour and normalisations are chosen so that $Z_q=Z_V=1$ at tree level. The equations above are given for the continuum SMOM scheme. On the lattice we must take care to define the appropriate discretisation for $q_{\mu}$ and $q^2$ in the various places that they appear. Below we will see what form $q_{\mu}$ must take in combination with the vertex function for the conserved current. The RI-SMOM scheme was defined with the vector Ward-Takahashi identity in mind~\cite{Sturm:2009kb}. This reference shows how the identity defines the projectors needed for the vector vertex function in the continuum (given in Eq.~(\ref{eq:zvdef})) so that $Z_V=1$ for the conserved current. Here we repeat this exercise, but now on the lattice. Returning to the Ward-Takahashi identity in Eq.~(\ref{eq:ward-mom}) we can multiply both sides by $\slashed{\hat{q}}$ and take the trace whilst dividing by $\hat{q}^2$ (with $\hat{q}$ a discretisation of $q$ to be defined later). 
This gives \begin{align} \label{eq:rismom-der} &\frac{1}{12\hat{q}^2}\frac{-2i}{a}\sin(aq_{\mu}/2) \mathrm{Tr}(\Lambda_V^{\mu,+}\slashed{\hat{q}})\\ &= \frac{1}{12\hat{q}^2} [-\mathrm{Tr}(S^{-1}(p_1)\slashed{\hat{q}}) + \mathrm{Tr}(S^{-1}(p_2)\slashed{\hat{q}})] \nonumber . \end{align} We can simplify the right-hand side assuming that the inverse propagator takes the general form $S^{-1}(p) = i\slashed{p}\Sigma_V(p^2) + \Sigma_S(p^2)$ in the continuum (from relativistic invariance). Then, for the SMOM kinematics, \begin{equation} \label{eq:rismom-req} \mathrm{Tr}(S^{-1}(p_1)\slashed{q}) - \mathrm{Tr}(S^{-1}(p_2)\slashed{q}) = \mathrm{Tr}(S^{-1}(q)\slashed{q}) . \end{equation} On the lattice this formula could be broken by discretisation effects. We do not see noticeable effects of this kind with the HISQ action, however, as we will discuss in Section~\ref{subsec:SMOMcons}. Using Eq.~(\ref{eq:rismom-req}) in Eq.~(\ref{eq:rismom-der}) and multiplying by $i$ then gives, from the Ward-Takahashi identity \begin{equation} \label{eq:rismom-zv1} \frac{1}{12\hat{q}^2}\frac{2}{a}\sin(aq_{\mu}/2) \mathrm{Tr}(\Lambda_V^{\mu,+}\slashed{\hat{q}}) = -\frac{i}{12\hat{q}^2} \mathrm{Tr}(S^{-1}(q)\slashed{\hat{q}}) \, . \end{equation} From Eq.~(\ref{eq:zqdef}) we see that the right-hand side of this expression is $Z_q$ in the RI-SMOM scheme. Comparing the left-hand side to Eq.~(\ref{eq:zvdef}) we see that this is $Z_q/Z_V^{\text{cons}}$ in the RI-SMOM scheme where $Z_V^{\text{cons}}$ is the $Z_V$ factor for the conserved current and the Ward-Takahashi identity requires us to discretise $q_{\mu}$ as $2\sin(aq_{\mu}/2)/a$ ($\hat{q}$ is defined in Eq.~(\ref{eq:qhatdef})). Then, from Eq.~(\ref{eq:rismom-zv1}), we expect that $Z_V^{\text{cons}}(\text{SMOM})=1$ on the lattice and no further renormalisation is needed to match to $\overline{\text{MS}}$. Notice that this works for any value of $q$. We will show by explicit calculation that $Z_V^{\text{cons}}(\text{SMOM})=1$ for the HISQ action in Section~\ref{subsec:SMOMcons}. This is not true configuration by configuration, however; it requires an average over gluon fields. Another popular momentum-subtraction scheme is RI$'$-MOM~\cite{Martinelli:1994ty, Chetyrkin:1999pq}, abbreviated here to MOM. In this scheme $Z_q$ is defined in the same way, by Eq.~(\ref{eq:zqdef}), but $Z_V$ is defined by a different projector for the vector vertex function and the kinematic configuration for the MOM case is $p_1=p_2=p$ so that $q=0$. Instead of Eq.~(\ref{eq:zvdef}) we have, in the MOM scheme, \begin{equation} \label{eq:zvmomdef} \frac{Z_V}{Z_q}\frac{1}{12} \mathrm{Tr}(\gamma_{\mu}\Lambda_V^{\mu}(p)) = 1 . \end{equation} Since this scheme does not correspond to a Ward-Takahashi identity, $Z_V$ determined this way needs further renormalisation to match to the $\overline{\text{MS}}$ scheme. More problematically, as we will show in Section~\ref{sec:latt}, $Z_V^{\text{cons}}(\text{MOM})$ for the HISQ action is significantly different from 1 and is contaminated by nonperturbative condensate effects. The RI-SMOM$_{\gamma_{\mu}}$ scheme~\cite{Sturm:2009kb} is similar to RI$'$-MOM above but uses the SMOM kinematics with $p_1^2=p_2^2=q^2$. To calculate the renormalisation constants for nonconserved currents we must combine the calculation of the vector vertex function for that current (Eq.~(\ref{eq:zvdef}) and appropriate modifications of it as described in the text) with the calculation of the wave-function renormalisation (Eq.~(\ref{eq:zqdef})). 
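As a minimal numerical sketch of how these projector normalisations work, the Python fragment below evaluates Eqs.~(\ref{eq:zqdef}) and~(\ref{eq:zvdef}) for a massless tree-level inverse propagator with trivial colour structure at a symmetric point $p_1^2=p_2^2=q^2$, confirming $Z_q=Z_V=1$ at tree level:
\begin{verbatim}
import numpy as np

# Euclidean gamma matrices, {g[mu], g[nu]} = 2*delta_{mu,nu}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
g = [np.block([[Z2, -1j*s], [1j*s, Z2]]) for s in (sx, sy, sz)]
g.append(np.block([[Z2, I2], [I2, Z2]]))

def slash(p):
    return sum(p[mu]*g[mu] for mu in range(4))

mu_val = 2.0                                      # the scale mu (arbitrary)
p1 = np.array([mu_val, 0.0, mu_val, 0.0]) / np.sqrt(2)
p2 = np.array([mu_val, -mu_val, 0.0, 0.0]) / np.sqrt(2)
q = p1 - p2                                       # |p1| = |p2| = |q| = mu
q2 = q @ q

Nc = 3                                            # trivial colour trace -> factor 3
Sinv_q = 1j * slash(q)                            # massless tree-level S^{-1}(q)
Zq = (-1j/(12*q2)) * Nc * np.trace(slash(q) @ Sinv_q)     # Eq. (zqdef)
# projector of Eq. (zvdef): at tree level q_mu Lambda^mu -> qslash
proj = (1/(12*q2)) * Nc * np.trace(slash(q) @ slash(q))
ZV = Zq / proj
print(f"Z_q = {Zq.real:.6f}, Z_V = {ZV.real:.6f}")  # both 1.000000
\end{verbatim}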
We describe the results for the HISQ local vector current in the SMOM scheme in Section~\ref{subsec:SMOMloc}. We are able to show that the renormalisation factor for the local vector current in the SMOM scheme differs from that using the form factor method purely by discretisation effects, demonstrating that it is an exact form of $Z_V$. The discretisation effects depend on $q$ but the method is exact for any $q$; this is in contrast to the usual idea of a `window' of $q$ values to be used in momentum-subtraction schemes on the lattice~\cite{Martinelli:1994ty}. The RI$'$-MOM scheme is not exact, as discussed above. We show in Section~\ref{subsec:MOMloc} that a modification of the method (reverting to one of the original suggestions in~\cite{Martinelli:1994ty}) does, however, give an exact $Z_V$. There are technical issues associated with implementing momentum-subtraction schemes for staggered quarks that we will not discuss here. We use the techniques developed in~\cite{Lytle:2013qoa} and summarised again in~\cite{Lytle:2018evc} in the context of the RI-SMOM scheme. We will only discuss here specific issues that arise in the context of the vector current renormalisation. \section{The Lattice QCD calculation} \label{sec:latt} We perform calculations on $n_f=2+1+1$ gluon field configurations generated by the MILC collaboration~\cite{Bazavov:2010ru,Bazavov:2012xda} listed in Table~\ref{tab:ensembles}. These ensembles use an improved gluon action which removes discretisation errors through $\mathcal{O}(\alpha_sa^2)$~\cite{Hart:2008sq}. They include the effect of $u/d$, $s$ and $c$ quarks in the sea using the HISQ action~\cite{Follana:2006rc}. All gauge field configurations used are numerically fixed to Landau gauge by maximising the trace over the gluon field link with a gauge-fixing tolerance of $\epsilon=10^{-14}$. This is enough to remove the difficulties related to loose gauge fixing discussed in~\cite{Lytle:2018evc}. We use broadly the same calculational set-up as in \cite{Lytle:2018evc} but here we are considering vector current vertex functions rather than scalar ones. To implement momentum-subtraction schemes for staggered quarks we need to use momenta within a reduced Brillouin zone~\cite{Lytle:2013qoa} \begin{equation} \label{eq:redbrill} -\pi/2 \le ap \le \pi/2 . \end{equation} For each momentum $ap_1$ or $ap_2$ we then calculate propagators or vertex functions with 16 copies of that momentum, $ap_1+\pi A$ and $ap_2 +\pi B$ where $A$ and $B$ are four-vectors composed of 0s and 1s. This then enables us to do the traces over spin for specific `tastes' of vector current implied by equations such as Eq.~(\ref{eq:rismom-zv1}). There is also a trace over colour in this equation so the $S^{-1}(q)$ factor on the right-hand side, for example, is actually a $48\times48$ matrix. Where necessary we will use the notation of~\cite{Lytle:2013qoa} to denote specific spin-tastes. As an example $\overline{\overline{\gamma_{\mu} \otimes I}}$ is the $16 \times 16$ matrix of 0s and 1s that projects onto a taste-singlet vector in $AB$ space. \begin{table} \caption{Simulation parameters for the MILC gluon field ensembles that we use, labelled by set number in the first column. $\beta=10/g^2$ is the bare QCD coupling and $w_0/a$ gives the lattice spacing~\cite{Borsanyi:2012zs}, using $w_0$ = 0.1715(9) fm~\cite{fkpi} determined from the $\pi$ meson decay constant, $f_{\pi}$. 
Note that, for each group of ensembles at a given value of $\beta$, we use the $w_0/a$ value corresponding to the physical sea quark mass limit~\cite{Lytle:2018evc}, using results from~\cite{Chakraborty:2014aca}. $L_s$ and $L_t$ give the lattice dimensions. $am_l^{\text{sea}}$, $am_s^{\text{sea}}$ and $am_c^{\text{sea}}$ give the sea quark masses in lattice units. Set 1 will be referred to in the text as `very coarse', sets 2--5 as `coarse', set 6 as `fine' and set 7 as `superfine'. } \label{tab:ensembles} \begin{ruledtabular} \begin{tabular}{lllllllll} Set & $\beta$ & $w_0/a$ & $L_s$ & $L_t$ & $am_l^{\text{sea}}$ & $am_s^{\text{sea}}$ & $am_c^{\text{sea}}$ \\ \hline 1 & 5.80 & 1.1322(14) & 24 & 48 & 0.00640 & 0.0640 & 0.828 \\ 2 & 6.00 & 1.4075(18) & 24 & 64 & 0.0102 & 0.0509 & 0.635 \\ 3 & 6.00 & 1.4075(18) & 24 & 64 & 0.00507 & 0.0507 & 0.628 \\ 4 & 6.00 & 1.4075(18) & 32 & 64 & 0.00507 & 0.0507 & 0.628 \\ 5 & 6.00 & 1.4075(18) & 40 & 64 & 0.00507 & 0.0507 & 0.628 \\ 6 & 6.30 & 1.9500(21) & 48 & 96 & 0.00363 & 0.0363 & 0.430 \\ 7 & 6.72 & 2.994(10) & 48 & 144 & 0.0048 & 0.024 & 0.286 \\ \end{tabular} \end{ruledtabular} \end{table} Twisted boundary conditions are utilised to give the incoming and outgoing quarks arbitrary momenta~\cite{twist, Arthur:2010ht}. For the SMOM kinematics we take, with ordering $(x,y,z,t)$, \begin{eqnarray} \label{eq:SMOMmom} ap_1 &=& (a\mu, 0, a\mu, 0)/\sqrt{2} \\ ap_2 &=& (a\mu, -a\mu, 0, 0)/\sqrt{2} \, .\nonumber \end{eqnarray} For the MOM kinematics we take $ap_2=ap_1$. A range of $a\mu$ values is chosen at each lattice spacing, satisfying Eq.~(\ref{eq:redbrill}). This allows us to reach $\mu$ values of 3 GeV on coarse lattices and 4 GeV on fine and superfine lattices~\cite{Lytle:2018evc}. The $\mu$ values can be tuned very accurately (to 3 decimal places). Relatively small samples (20 configurations) give small statistical uncertainties for $Z_V$ at the $\mu$ values that we use (with momentum sources for the propagators). A bootstrap method is used to estimate all uncertainties and include correlations between results at different $\mu$ values on a given ensemble. Bootstrap samples are formed for each $Z_q$ and each ${\Lambda_V}$ and the bootstrap averages are then fed into the ratio to determine $Z_V$. All of our results are determined at small but non-zero valence quark mass. Degenerate masses are used for the incoming and outgoing quarks (but note that there is no need for the calculation of disconnected contributions). As the momentum-subtraction schemes that we consider are in principle defined at zero valence quark mass (but direct calculation at this point will have finite-volume issues), it is necessary to calculate each $Z_V$ at different quark masses and then extrapolate to the $am_{\mathrm{val}}=0$ point. To do this we perform all calculations at three masses corresponding to the light sea quark mass on a given ensemble, $am_l$, and at $2am_l$ and $3am_l$. Dependence on $am_{\mathrm{val}}$ can come from discretisation effects and from the contribution of nonperturbative condensate terms. 
We follow the procedure used for $Z_m$ in~\cite{Lytle:2018evc} and extrapolate $Z_V$ results using a polynomial in $am_{\mathrm{val}}/am_s$: \begin{align} \label{eq:massextrap} Z_V(am_{\mathrm{val}},\mu) = Z_V(\mu) &+ d_1(\mu)\frac{am_{\mathrm{val}}}{am_s} \\ &+ d_2(\mu) \left( \frac{am_{\mathrm{val}}}{am_s} \right)^2 .\nonumber \end{align} We find no need for higher powers of $am_{\mathrm{val}}/am_s$ here as the valence mass dependence of $Z_V$ is observed to be very mild in all cases. For the priors for the coefficients $d_i$ we use $\{0\pm0.1,0\pm0.01 \}$ at $\mu=2$ GeV, with the widths decreased in proportion to $\mu^{-2}$. Any sea quark mass dependence should be suppressed relative to the valence mass dependence by powers of $\alpha_s$ and this was observed in~\cite{Lytle:2018evc}. As the valence mass dependence is already negligible, the sea mass dependence should be tiny here and we ignore it. \subsection{The Ward-Takahashi identity on the lattice} \label{subsec:wti-latt} \begin{figure}[ht] \includegraphics[width=0.47\textwidth]{imag_ward_inverses.pdf} \caption{ Demonstration of the vector Ward-Takahashi identity in momentum space (Eq.~(\ref{eq:ward-mom})) for HISQ quarks on the lattice on a single gluon field configuration from Set 2. The plot shows the ratio of the right-hand side of this equation to the amputated vertex function for the conserved vector current on the left-hand side for the SMOM kinematic configuration, Eq.~(\ref{eq:SMOMmom}). This is a matrix equation and this plot shows the result of averaging over all matrix components (which agree) and the two non-zero components of $q_{\mu}$. The solid line is the value of $2\sin(aq_{\mu}/2)$ for a non-zero component of $q_{\mu}$. The points correspond to lattice results for the ratio on a single configuration with crosses giving Coulomb gauge-fixed results and the circles Landau gauge-fixed results. Orange points correspond to a valence mass of $am_{\mathrm{val}}=0.0306$ while purple points correspond to 0.0102. The Ward-Takahashi identity requires these points to lie on the line, as they do.} \label{fig:ward-id} \end{figure} In this Section we test the exact lattice Ward-Takahashi identity for HISQ quarks, i.e. Eq.~(\ref{eq:ward-mom}). If we have correctly implemented the lattice conserved vector current, this equation is true as a $3 \times 3$ matrix in colour space. It is also true for any $p_1$ and $p_2$ (except that it reduces to $0=0$ for $p_1=p_2$), any values of the quark mass and any gauge. We test it for the SMOM kinematic configuration of Eq.~(\ref{eq:SMOMmom}). Figure~\ref{fig:ward-id} shows the results as a ratio of the difference of inverse propagators on the right-hand side of Eq.~(\ref{eq:ward-mom}) to the amputated vertex function for the conserved vector current on the left-hand side. This is averaged over colour components (which all agree) and summed over the two non-zero components of $q_{\mu}$ (which take the same value $a\mu/\sqrt{2}$ in each of the $y$ and $z$ directions for the SMOM kinematics). The Ward-Takahashi identity (Eq.~(\ref{eq:ward-mom})) requires this ratio to be exactly equal to $2\sin[a{\mu}/(2\sqrt{2})]$, which is plotted as the line. The plot shows that this expectation works to high precision (double precision accuracy here), on a single configuration taken as an example from Set 2. Results are given for three different $a\mu$ values with two different valence quark masses and in two different gauges. 
The agreement between the points and the line demonstrates the Ward-Takahashi identity working explicitly on the lattice for the conserved HISQ current of Eq.~(\ref{eq:Jcons}). The agreement seen in two different gauges is evidence that the Ward-Takahashi identity works in any gauge, as it must, and therefore its operation is also independent of any Gribov copy issue in the gauge-fixing procedure. \subsection{$Z_V$ for the conserved current in the RI-SMOM scheme} \label{subsec:SMOMcons} \begin{figure}[ht] \includegraphics[width=0.47\textwidth]{SMOM_final_expression_test.pdf} \caption{ A test of the expression for the difference of inverse propagators with momentum $p_1$ and $p_2$ in Eq.~(\ref{eq:rismom-req}). We show results on coarse, fine and superfine lattices (sets 2, 5 and 7) for a variety of $\mu$ values in lattice units where $|ap_1|=|ap_2|=|aq|=a\mu$. } \label{fig:prop-wi-test} \end{figure} \begin{figure}[ht] \includegraphics[width=0.47\textwidth]{ZV_con_SMOM.pdf} \caption{ The $Z_V$ value obtained for the conserved vector current in the RI-SMOM scheme on coarse, fine and superfine gluon field configurations (sets 2, 5 and 7). Values are given for a variety of $\mu$ values in lattice units where $|ap_1|=|ap_2|=|aq|=a\mu$. } \label{fig:Zv-con-smom} \end{figure} To determine $Z_V$ for the HISQ conserved current in the RI-SMOM scheme we adapt Eqs.~(\ref{eq:zqdef}) and~(\ref{eq:zvdef}) to the case of staggered quarks on the lattice, as partly discussed already in Section~\ref{sec:MOM-schemes}. For staggered quarks the inverse propagator is a taste-singlet~\cite{Lytle:2013qoa} and so the HISQ version of Eq.~(\ref{eq:zqdef}) is \begin{equation} \label{eq:zqdefhisq} Z_q(q) = -\frac{i}{48} \sum_{\mu} \frac{a\hat{q}_{\mu}}{(a\hat{q})^2}\Tr\left[ \overline{\overline{(\gamma_{\mu}\otimes I)}}S^{-1}(q)\right] . \end{equation} The trace is now over colour and the $AB$-space index. $\hat{q}$ is given by \begin{equation} \label{eq:qhatdef} a\hat{q}_{\mu} = \sin(aq_{\mu}) + \frac{1}{6}\sin^3(aq_{\mu}) . \end{equation} This choice is dictated by the momentum-subtraction requirement that $Z_q$ should be 1 in the non-interacting (tree-level) case and the fact that the derivatives in the HISQ action are improved through $\mathcal{O}(a^2)$~\cite{Follana:2006rc}. Likewise the HISQ expression for $Z_V$ in this case is given by \begin{equation} \label{eq:zvdefhisq} \frac{Z_q(q)}{Z_V(q)} = \frac{1}{48} \sum_{\mu,\nu} 2\sin(aq_{\mu}/2)\frac{a\hat{q}_{\nu}}{(a\hat{q})^2}\Tr\left[ \overline{\overline{(\gamma_{\nu}\otimes I)}}\Lambda^{\mu,+}_V\right] . \end{equation} In Sec.~\ref{sec:MOM-schemes} it was shown how the Ward-Takahashi identity leads to the exact expression of Eq.~(\ref{eq:rismom-der}) on the lattice when the conserved current is used in the vertex function. In order to obtain $Z_V=1$ for the conserved current we also need Eq.~(\ref{eq:rismom-req}) to be satisfied exactly. In Fig.~\ref{fig:prop-wi-test} we give a test of this relationship. The figure shows the ratio of the difference of the two inverse propagators with momentum $p_1$ and $p_2$ to that of the propagator with momentum $q$, where the inverse propagators are multiplied by $\slashed{\hat{q}}$ and the trace taken. We use $\hat{q}$ here (Eq.~(\ref{eq:qhatdef})) instead of simply $q$ to be consistent with what we use in the determination of $Z_q$ in Eq.~(\ref{eq:zqdefhisq}) above. The results for the ratio plotted would be the same for $q$ as for $\hat{q}$. 
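This insensitivity is expected: $\hat{q}$ differs from $q$ only at $\mathcal{O}((aq)^5)$, with leading coefficient $3/40$, as the following short numerical check of Eq.~(\ref{eq:qhatdef}) illustrates:
\begin{verbatim}
import math
for aq in (0.1, 0.2, 0.4):
    aqhat = math.sin(aq) + math.sin(aq)**3 / 6    # Eq. (qhatdef)
    print(aq, (aq - aqhat) / aq**5)               # -> 3/40 = 0.075 as aq -> 0
\end{verbatim}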
The results for the ratio in Fig.~\ref{fig:prop-wi-test} are seen to be consistent with 1.0 to better than 0.05\%. The statistical uncertainties plotted are from a bootstrap over results from 20 gluon field configurations. Figure~\ref{fig:prop-wi-test} shows that discretisation effects in the HISQ action have no effect on Eq.~(\ref{eq:rismom-req}) at the level of accuracy to which we are working. There are no tree-level $a^2$ errors with HISQ~\cite{Follana:2006rc} and there is a U(1) axial symmetry; both of these constrain the form that discretisation effects can take~\cite{Lytle:2013qoa}. A further constraint comes from the form for the $p_1$ and $p_2$ momenta (and $q$) to achieve the SMOM kinematics. Each has only two non-zero momentum components, as shown in Eq.~(\ref{eq:SMOMmom}). This means, for example, that discretisation errors in $S^{-1}$ containing 3 different $\gamma$ matrices and associated momenta are zero. Figure~\ref{fig:Zv-con-smom} shows the resulting $Z_V$ value obtained for the conserved vector current in the RI-SMOM scheme, combining the results from Eqs.~(\ref{eq:zvdefhisq}) and~(\ref{eq:zqdefhisq}) and performing the extrapolation to zero quark mass as described in Sec.~\ref{sec:latt} (this has very little impact). The value obtained for $Z_V$ for the conserved current is 1 to better than 0.05\% at all $\mu$ values. Fitting the results shown in Fig.~\ref{fig:Zv-con-smom} to a constant value of 1.0 returns a $\chi^2/\text{dof}$ of 1.3 for 8 degrees of freedom ($Q=0.26$). \subsection{$Z_V$ for the conserved current in the RI$'$-MOM scheme} \label{subsec:MOMcons} \begin{table} \caption{Conversion factors from the continuum RI$'$-MOM scheme to $\overline{\mathrm{MS}}$ at the $\mu$ values used in this calculation, calculated with $n_{\mathrm{f}}=4$ using the results of~\cite{Chetyrkin:1999pq}. Results for $Z_V$ obtained on the lattice with the standard RI$'$-MOM approach must be multiplied by these values to give results in the $\overline{\mathrm{MS}}$ scheme in the continuum limit.} \label{tab:RI-MSbar-conv} \begin{tabular}{ll} $\mu$ [GeV] & $Z_V^{\overline{\mathrm{MS}}/\mathrm{RI}'\mhyphen \mathrm{MOM}}$ \\ \hline 2 & 0.99118(38) \\ 2.5 & 0.99308(26) \\ 3 & 0.99420(20) \\ 4 & 0.99549(14) \\ \end{tabular} \end{table} \begin{figure} \includegraphics[width=0.47\textwidth]{ZV_con_MOM.pdf} \caption{Points labelled `MOM' show the renormalisation factor for the HISQ conserved vector current, which takes results from the lattice scheme to the $\overline{\text{MS}}$ scheme, obtained using a lattice calculation in the RI$'$-MOM scheme. These should be contrasted with results obtained using a lattice calculation in the RI-SMOM scheme, which give value 1.0, shown as the black line labelled `SMOM'. Points are given for $\mu$ values 2, 2.5, 3 and 4 GeV, indicated by different colours, and on coarse, fine and superfine lattices. The fit shown (see Eq.~(\ref{eq:MOM-con-fit})) accounts for discretisation errors and condensate contributions; the latter prove to be necessary for a good fit. The separation of the results for different $\mu$ values in the continuum limit (at $a=0$) is a result of the condensate contributions that appear in $Z_V$ when calculated in the RI$'$-MOM scheme on the lattice. } \label{fig:MOM-con} \end{figure} We now turn to the renormalisation of the conserved current in the standard RI$'$-MOM scheme, where a very different picture emerges. 
The kinematic conditions in the MOM scheme are that the incoming and outgoing quark fields for the vertex function should have the same momentum, $ap_1=ap_2$, so that $aq=0$. We will denote this momentum by $ap$ with $|ap|=a\mu$. We take the form of $ap$ to be that of $ap_1$ in the SMOM scheme (Eq.~(\ref{eq:SMOMmom})). To implement the RI$'$-MOM scheme we determine the wave-function renormalisation, $Z_q(p)$, in the same way as for the RI-SMOM scheme using Eq.~(\ref{eq:zqdefhisq}). To determine $Z_V$ we use \begin{equation} \label{eq:zvmomdefhisq} \frac{Z_q}{Z_V} = \frac{1}{48} \frac{1}{V^{\text{cons}}_{\gamma\otimes I}}\sum_{\mu} \Tr\left[ \overline{\overline{(\gamma_{\mu}\otimes I)}}\Lambda^{\mu,+}_V\right] . \end{equation} This uses the RI$'$-MOM vector vertex projector, which is simply $\gamma_{\mu}$ (see Eq.~(\ref{eq:zvmomdef})), expressed here in the appropriate taste-singlet form for implementation with staggered quarks. Because the conserved current is a point-split operator, the tree-level vertex function is not simply 1. We therefore need to divide by the tree-level matrix element for the conserved current that we denote here $V^{\text{cons}}_{\gamma \otimes I}$. How to calculate these tree-level factors is discussed in~\cite{Lytle:2013qoa}. We have \begin{equation} \label{eq:constree} V^{\text{cons}}_{\gamma \otimes I}= \prod_{\mu} \left ( \frac{9}{8} \mathrm{cos}(ap_{\mu} (S-T)_{\mu}) - \frac{1}{8} \mathrm{cos}(3ap_{\mu} (S-T)_{\mu}) \right) . \end{equation} The spin-taste 4-vector $S-T$ is composed of 1s and 0s. For the taste-singlet vector it takes value 1 for component $\mu$ and 0 otherwise. So the only factor in the product that does not take value 1 is the one for the component $\mu$ that matches the direction of the current, provided that $ap$ has a non-zero component in that direction. Because the RI$'$-MOM scheme is not based on the Ward-Takahashi identity, $Z_V$ will not be 1 for the conserved current. This means that to reach the $\overline{\text{MS}}$ scheme, even for the continuum RI$'$-MOM scheme, requires an additional renormalisation factor. The renormalisation factor that takes the lattice vector current to the continuum is then \begin{equation} \label{eq:msbarmom} Z_V(\mathrm{MOM}) = Z_V^{\overline{\text{MS}}/{\text{RI}'\text{-MOM}}}Z_V^{\text{MOM,raw}} . \end{equation} $Z_V^{\text{MOM,raw}}$ is the raw renormalisation factor calculated using Eq.~(\ref{eq:zvmomdefhisq}) on the lattice. The factor $Z_V^{\overline{\text{MS}}/{\text{RI}'\text{-MOM}}}$ can be determined from the perturbative QCD expansions in the continuum for the conversion between RI$'$-MOM and $\overline{\text{MS}}$ given in~\cite{Chetyrkin:1999pq} (see~\cite{Huber:2010zza} and the Appendix of~\cite{Lytle:2013qoa}). The values needed for our $\mu$ values are given in Table~\ref{tab:RI-MSbar-conv}; they are all close to 1 since the expansion starts at $\mathcal{O}(\alpha_s^2)$. Figure~\ref{fig:MOM-con} shows our results for $Z_V$ for the conserved HISQ current obtained by implementing the RI$'$-MOM scheme on the lattice. We have converted the $Z_V$ to the value that takes the lattice results to the $\overline{\text{MS}}$ scheme using Eq.~(\ref{eq:msbarmom}). Results are shown, after extrapolation to zero valence quark mass, at a variety of $\mu$ values from 2 GeV to 4 GeV and at three different values of the lattice spacing. It is immediately clear that the values of $Z_V^{\text{cons}}(\text{MOM})$ are not 1. 
These nonunit values are in sharp contrast to the results in the RI-SMOM scheme where, as we showed in Section~\ref{subsec:SMOMcons}, the value 1 is obtained. This result is shown by the black line at 1 in Fig.~\ref{fig:MOM-con}. To understand the discrepancy from 1 for $Z_V^{\text{cons}}$ in the RI$'$-MOM case, we fit the points shown in Fig.~\ref{fig:MOM-con} (including the correlations between them) to a form that allows for both discretisation effects and condensate contributions: \begin{eqnarray} \label{eq:MOM-con-fit} Z_V^{\mathrm{cons}}(\mathrm{MOM})(a,\mu) &=& 1 + \sum_{i=1}^5 c_{a^2\mu^2}^{(i)} (a\mu/\pi)^{2i} \\ &&\hspace{-7.0em} + \sum_{i=1}^5 c_{\alpha a^2\mu^2}^{(i)} (a\mu/\pi)^{2i} \alpha_{\overline{\mathrm{MS}}}(1/a) + c_{\alpha}(\alpha_{\overline{\mathrm{MS}}}(\mu)/\pi)^4 \nonumber \\ &&\hspace{-8.0em}+ \sum_{j=1}^5 c_{\mathrm{cond}}^{(j)} \alpha_{\overline{\mathrm{MS}}}(\mu)\frac{(1\ \mathrm{GeV})^{2j}}{\mu^{2j}} \times [1 + c_{\mathrm{cond},a^2}^{(j)}(a\Lambda/\pi)^2] \,.\nonumber \end{eqnarray} Note that this constrains $Z_V^{\mathrm{cons}}(\mathrm{MOM})$ to be 1 in the continuum once condensates are removed. Here $\alpha_{\overline{\mathrm{MS}}}(\mu)$ is the value of the strong coupling constant in the $\overline{\mathrm{MS}}$ scheme at the scale $\mu$ calculated from running the value obtained in \cite{Chakraborty:2014aca} using the four-loop QCD $\beta$ function. The fit allows for discretisation errors of the generic form $(a\mu)^{2i}$ and terms $\mathcal{O}(\alpha_s(a\mu)^{2i})$; only even powers of $a$ appear due to the remnant chiral symmetry of staggered quarks. Note that in principle we have removed $(a\mu)^2$ terms by dividing by $V^{\text{cons}}_{\gamma \otimes I}$; the fit returns only a small coefficient for this term. The $\alpha_s$-suppressed discretisation terms are included because the very small statistical uncertainties on the results mean that these terms can have an effect in the fit. The fourth term allows for systematic uncertainty from the missing $\alpha_s^4$ term in the RI$'$-MOM to $\overline{\mathrm{MS}}$ conversion factor (Eq.~(\ref{eq:msbarmom})). The condensate terms on the final line of Eq.~(\ref{eq:MOM-con-fit}) start at $1/\mu^2$ to allow for the gauge-noninvariant $\langle A^2 \rangle$ condensate present in the operator product expansion (OPE) of the quark propagator~\cite{Chetyrkin:2009kh}. For the MOM kinematic setup it is not possible to perform an OPE for the vertex functions as they are not short-distance quantities ($q=0$), so a complete analysis of what nonperturbative artefacts we expect to see in $Z_V$ is not possible. However, on general grounds we expect terms with inverse powers of $\mu$ to appear and we allow these terms also to have discretisation effects. We include even inverse powers of $\mu$ up to $1/\mu^{10}$. We use a Bayesian fit approach~\cite{Lepage:2001ym} in which coefficients are constrained by priors with a Gaussian distribution of a given central value and width. All coefficients in the fit form of Eq.~(\ref{eq:MOM-con-fit}) are given priors of $0 \pm 1$, except for that of the $(\alpha_s/\pi)^4$ term, which has prior $0 \pm 5$ based on the lower-order coefficients. The choices for the priors are based on reasonable values for the coefficients of the terms in the fit. For example, discretisation effects are expected to appear as even powers of a physical scale (such as $\mu$ or $\Lambda$ here) divided by the ultraviolet cutoff ($\pi/a$) with coefficients of order one.
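To make the fitting strategy concrete, the sketch below (ours, with synthetic, purely illustrative data and only the leading terms of Eq.~(\ref{eq:MOM-con-fit})) shows how such a prior-constrained fit can be set up using the lsqfit and gvar Python packages that implement the constrained-fitting approach of~\cite{Lepage:2001ym}:
\begin{verbatim}
# Sketch of a prior-constrained fit (leading terms only; synthetic data).
import numpy as np
import gvar as gv
import lsqfit

x = dict(                                            # illustrative inputs
    amu2=np.array([0.04, 0.09, 0.16, 0.02, 0.05]),   # (a mu / pi)^2
    alpha=np.array([0.30, 0.25, 0.22, 0.25, 0.22]),  # alpha_s(mu)
    mu=np.array([2.0, 3.0, 4.0, 3.0, 4.0]),          # mu in GeV
)
y = gv.gvar(['1.0095(5)', '1.0071(5)', '1.0060(5)',  # synthetic Z_V values
             '1.0043(5)', '1.0035(5)'])

def fcn(x, p):
    # 1 + discretisation terms + leading 1/mu^2 condensate term
    return (1 + p['c_a2mu2'] * x['amu2'] + p['c_a4mu4'] * x['amu2']**2
              + p['c_cond'] * x['alpha'] / x['mu']**2)

prior = gv.gvar(dict(c_a2mu2='0(1)', c_a4mu4='0(1)', c_cond='0(1)'))
fit = lsqfit.nonlinear_fit(data=(x, y), prior=prior, fcn=fcn)
print(fit)                  # chi^2/dof and posterior coefficient values
\end{verbatim}
The full analysis includes all the terms of Eq.~(\ref{eq:MOM-con-fit}) and retains the correlations between the data points.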
The results of the fit are shown as the coloured dashed lines in Fig.~\ref{fig:MOM-con}. The fit has a $\chi^2/\text{dof}$ of 0.6. It is already obvious from the figure that discretisation effects are not the only source of the discrepancy in $Z_V$ from 1. This is emphasised by attempting the fit without condensate terms (i.e.\ omitting the last line of Eq.~(\ref{eq:MOM-con-fit})). Without the condensate terms the quality of the fit is very poor, with a $\chi^2/{\text{dof}}$ of 7.7, in contrast to the fit of Eq.~(\ref{eq:MOM-con-fit}). The sizeable contribution from the lowest-order condensate is reflected in the coefficient found by the fit: \begin{align} \label{eq:conserved-conds} &c_{\mathrm{cond}}^{(1)} = 0.154(54) . \end{align} The higher-order condensates cannot be pinned down by the fit. The correct answer for $Z_V$ for the conserved current in the continuum limit is, of course, 1. Our results and fit show that this can only be obtained from a calculation in the RI$'$-MOM scheme by working at multiple $\mu$ values and multiple values of the lattice spacing and fitting as a function of $\mu$ and $a$ to identify and remove the condensate contributions. If this is not done, systematic errors of $\mathcal{O}(1\%)$ (depending on the $\mu$ value) are present in $Z_V$, as is clear from Fig.~\ref{fig:MOM-con}. The issue will resurface when we discuss the use of the RI$'$-MOM scheme to renormalise nonconserved currents, specifically the HISQ local vector current, in Section~\ref{subsec:MOMloc}. \subsection{$Z_V$ for the local current in the RI-SMOM scheme} \label{subsec:SMOMloc} \begin{figure} \includegraphics[width=0.47\textwidth]{ZV_local_SMOM_coarse5_mass_dep.pdf} \caption{Valence mass dependence of $Z_V^{\text{loc}}(\text{SMOM})$ values obtained in the RI-SMOM scheme. Results and extrapolation are shown for $\mu=3$ GeV on set 2. } \label{fig:mass-dep-loc-SMOM} \end{figure} We now turn to the calculation of the renormalisation constant for a nonconserved vector current using the RI-SMOM scheme. We will study the local current constructed from HISQ quarks since this is the simplest current and it is used in many analyses, such as the connected hadronic vacuum polarisation contribution to the anomalous magnetic moment of the muon~\cite{Chakraborty:2014mwa}. In~\cite{Chakraborty:2017hry} the renormalisation constant for the HISQ local current was calculated using the form factor method discussed in Section~\ref{sec:ward}. Results are given for very coarse, coarse and fine lattices in Table IV of that reference. The calculation was done using valence $s$ quarks and the form factor was determined for the local temporal vector current between two $s\overline{s}$ pseudoscalar mesons at rest\footnote{Note that the `spectator' quark used the clover formalism in this case, in order for the staggered tastes to cancel in the correlation function.}. From the discussion in Section~\ref{sec:ward} we expect such a determination of $Z_V$ to be exact, so that $Z_V^{\text{loc}}(\text{F(0)})$ is equal to a perturbative series in $\alpha_s$ that matches the lattice scheme to the $\overline{\text{MS}}$ scheme, up to discretisation effects. This was tested in~\cite{Chakraborty:2017hry} (Appendix B) by fitting the $Z_V$ results to this form, including the known $\mathcal{O}(\alpha_s)$ coefficient in the perturbative series. A good fit was obtained that allowed values for $Z_V^{\text{loc}}(\text{F(0)})$ to be inferred on finer lattices.
Here we will calculate $Z_V^{\text{loc}}(\text{SMOM})$ and compare it to $Z_V^{\text{loc}}(\text{F(0)})$. They should both contain the same perturbative series (since this is unique for a given operator) and differ only by discretisation effects. \begin{table*} \caption{Local vector current renormalisation factors, $Z_V^{\text{loc}}$ for a variety of $\mu$ values (given in column 2) on gluon field configurations at different lattice spacing values (denoted by the set number in column 1). Column 3 gives results using the RI-SMOM scheme and column 4 gives results using the standard RI$'$-MOM scheme. Note that the RI$'$-MOM results include the additional renormalisation factor of Eq.~(\ref{eq:msbarmom}) (Table~\ref{tab:RI-MSbar-conv}) that is needed to take the lattice current all the way to the $\overline{\text{MS}}$ scheme. Results are extrapolated to zero valence quark mass. Columns 5 and 6 give results for the modified (denoted by Rc) RI$'$-MOM and RI-SMOM$_{\gamma_{\mu}}$ schemes in which a ratio to the value for the conserved current renormalisation in that scheme has been taken (Eq.~(\ref{eq:Rcdef})). } \label{tab:local} \begin{ruledtabular} \begin{tabular}{llllll} Set & $\mu$ [GeV] & $Z_V^{\mathrm{loc}}(\mathrm{SMOM})$ & $Z_V^{\mathrm{loc}}(\mathrm{MOM})$ & $Z_V^{\mathrm{loc}}(\mathrm{MOM}_{\text{Rc}})$ & $Z_V^{\mathrm{loc}}(\mathrm{SMOM}_{\gamma_{\mu},\text{Rc}}) $\\ \hline 1 & 1 & 0.9743(11) & - & - & - \\ 2 & 1 & 0.9837(20) & - & - & - \\ \hline 1 & 2 & 0.95932(18) & - & - & - \\ 2 & 2 & 0.97255(22) & 0.98771(85) & 0.97012(25) & 0.91864(25) \\ 6 & 2 & 0.98445(11) & 0.99784(79) & 0.98292(44) & 0.959434(58) \\ 7 & 2 & 0.99090(36) & 1.00202(89) & 0.99012(19) & 0.982435(21) \\ \hline 2 & 2.5 & 0.96768(12) & 0.97968(34) & 0.96447(17) & 0.89506(19) \\ \hline 2 & 3 & 0.964328(75) & 0.97434(26) & 0.96027(23) & 0.87733(21) \\ 6 & 3 & 0.977214(35) & 0.98785(28) & 0.97608(14) & 0.930025(40) \\ 7 & 3 & 0.98702(11) & 0.99651(43) & 0.98633(11) & 0.969563(42) \\ \hline 6 & 4 & 0.972415(18) & 0.98090(16) & 0.971009(90) & 0.905823(40) \\ 7 & 4 & 0.983270(54) & 0.99241(21) & 0.982942(40) & 0.954992(30) \\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure} \includegraphics[width=0.47\textwidth]{SMOM_FF_diff_withcond.pdf} \includegraphics[width=0.47\textwidth]{adjusted_ZV_local_SMOM.pdf} \caption{The top plot shows $Z_V^{\mathrm{loc}}(\mathrm{SMOM})$ for $\mu$ values between 1 GeV and 4 GeV, plotted as a difference to the corresponding $Z_V$ at that lattice spacing obtained from the vector form factor at zero momentum-transfer. The fit shown (see text) accounts for discretisation errors and condensate contributions, but no condensate contributions are seen and they are strongly constrained to zero by the fit. The lower plot shows the same difference but for a $Z_V^{\text{loc}}(\mathrm{SMOM})$ derived from results at $\mu$ = 2 GeV and 3 GeV in such a way as to reduce discretisation effects (see Eq.~(\ref{eq:zv-adjust})). The fit here is to a simple (constant + $a^4$) form as described in the text. } \label{fig:SMOM-loc} \end{figure} To calculate $Z_V^{\text{loc}}(\text{SMOM})$ a little care is required in the construction of the SMOM vector vertex function with HISQ quarks. The operator $\slashed{q}q_{\mu}\Lambda_V^{\mu}$ of Eq.~(\ref{eq:zvdef}) must be constructed to be a taste singlet. For a local (in spin-taste notation, $\gamma_{\mu}\otimes \gamma_{\mu}$) current $\Lambda_V^{\mu}$ will have taste $\gamma_{\mu}$. 
This means that the $\slashed{q}$ in the vertex function must also have this taste. The correct construction is \begin{equation} \label{eq:loc-SMOM} \sum_{\mu,\nu} \hat{q}_{\nu} \overline{\overline{(\gamma_{\nu} \otimes \gamma_{\mu})}} \hat{q}_{\mu} \Lambda_{V,\text{loc}}^{\mu} . \end{equation} Taking the spin-colour trace of this operator and dividing by $48\hat{q}^2$ then gives $Z_q/Z_V$. The wavefunction renormalisation is calculated in the same way as for the conserved current, Eq.~(\ref{eq:zqdefhisq}). Results for $Z_V^{\text{loc}}(\text{SMOM})$ are given in Table~\ref{tab:local} (column 3). This is after extrapolation to zero valence quark mass. Figure~\ref{fig:mass-dep-loc-SMOM} shows that the impact of this is very small (we expect in this case that the mass dependence is purely a discretisation effect). Figure~\ref{fig:SMOM-loc} (top plot) shows our results as a difference between $Z_V^{\text{loc}}({\text{SMOM}})$ and $Z_V^{\text{loc}}(\text{F(0)})$. $Z_V^{\text{loc}}(\text{F(0)})$ values are from~\cite{Chakraborty:2017hry} and obtained on the same gluon field configurations that we use here. We plot the difference for the multiple $\mu$ values used for the $Z_V^{\text{loc}}(\text{SMOM})$ determination as a function of lattice spacing in Fig.~\ref{fig:SMOM-loc}. Results are shown from very coarse to superfine lattice spacings, noting that higher $\mu$ values are only accessible on finer lattices because of the constraint in Eq.~(\ref{eq:redbrill}). We can readily fit this difference of $Z_V^{\text{loc}}$ values, $\Delta Z_V^{\text{loc}}$, to a function constructed from possible discretisation effects. To keep the fit as general as possible we also allow for the existence of condensate terms to see to what extent they are constrained by the fit. We also allow for condensate terms multiplied by discretisation effects that would vanish in the continuum limit (and are therefore benign). We use \begin{eqnarray} \label{eq:fitzvdiff-disc} \Delta Z_V^{\mathrm{loc}}(a,\mu) &=& \sum_{i=1}^{3} \left[ c_{a^2\mu^2}^{(i)} (a\mu/\pi)^{2i} \right.\\ && \left. \hspace{2.0em}+ c_{\alpha a^2\mu^2}^{(i)} (a\mu/\pi)^{2i} \alpha_{\overline{\mathrm{MS}}}(1/a) \right] \nonumber \\ &&\hspace{-8.0em}+ \sum_{j=1}^{3} c_{\mathrm{cond}}^{(j)} \alpha_{\overline{\mathrm{MS}}}(\mu)\frac{(1\ \mathrm{GeV})^{2j}}{\mu^{2j}} \times [1 + c_{\mathrm{cond},a^2}^{(j)}(a\Lambda/\pi)^2] \,.\nonumber \end{eqnarray} All coefficients are given priors $0\pm 1$. This fit has a $\chi^2/\text{dof}$ value of 0.18 and finds no significant condensate contribution. The lowest-order ($1/\mu^2$) condensate term is constrained by the fit to have a very small coefficient compatible with zero, $-0.020(44)$ (compare Eq.~(\ref{eq:conserved-conds})). Thus we see that $\Delta Z_V^{\text{loc}}$ is compatible with being, as expected, purely a discretisation effect. We have shown here that the $Z_V$ value obtained for the nonconserved local HISQ current using the RI-SMOM scheme is indeed exact, i.e.\ it has no nonperturbative condensate contributions (visible at our high level of accuracy) that would survive the continuum limit as a source of systematic error. This can be traced to the fact that the condensate contributions present in the vector vertex function for the conserved vector current and in the inverse propagator must cancel because of the Ward-Takahashi identity. This identity also protects $Z_V$ from any effects arising from the gauge-fixing procedure.
This means that there is in fact no lower limit in principle to the $\mu$ value that can be used for the vector current renormalisation in the RI-SMOM scheme. In Fig.~\ref{fig:SMOM-loc} (top plot) we include values corresponding to $\mu$ = 1 GeV. These show smaller discretisation effects than those for the higher $\mu$ values and so may be preferable on these grounds if only one $\mu$ value is used (which is all that is necessary in principle, since no allowance needs to be made for condensate effects). The statistical errors possible with 20 configurations grow as $\mu$ is reduced. However, for $\mu = 1$~GeV the uncertainties could still readily be reduced to the 0.1\% level with higher statistics. Smaller discretisation effects are possible by extrapolating in $\mu$ to $\mu=0$. A simple method that removes $\mu^2a^2$ terms in $Z_V^{\text{loc}}(\text{SMOM})$ combines results at two different $\mu$ values (for a given lattice spacing) to determine a new value \begin{eqnarray} \label{eq:zv-adjust} Z_V^{\text{loc}}(\text{SMOM})(\mu_1,\mu_2) &=& \\ &&\hspace{-5.0em} \frac{\mu^2_1 Z_V^{\text{loc}}(\text{SMOM})(\mu_2) - \mu_2^2 Z_V^{\text{loc}}(\text{SMOM})(\mu_1)}{\mu^2_1-\mu^2_2} \, . \nonumber \end{eqnarray} This can always be done, given that $Z_V^{\text{loc}}(\text{SMOM})$ only depends on $\mu$ through discretisation effects; indeed, substituting $Z_0 + k(a\mu)^2$ for $Z_V^{\text{loc}}(\text{SMOM})(\mu)$ in Eq.~(\ref{eq:zv-adjust}) returns exactly $Z_0$. We use $\mu_1 = 3$ GeV and $\mu_2 = 2$ GeV; Eq.~(\ref{eq:zv-adjust}) returns a precise result because the statistical uncertainties are very small at these $\mu$ values. We show the results of taking a difference to $Z_V^{\text{loc}}(\text{F(0)})$ for this new $Z_V$ value in the lower plot of Fig.~\ref{fig:SMOM-loc}. The points clearly have smaller discretisation effects compared to the original $Z_V$ values that they were derived from. Given that the discretisation effects in $Z_V^{\text{loc}}(\text{F(0)})$ were relatively small~\cite{Chakraborty:2017hry}, we interpret this as a reduction of discretisation effects in $Z_V^{\text{loc}}(\text{SMOM})$. We can fit the points in the lower plot of Fig.~\ref{fig:SMOM-loc} to a very simple curve: $C + D(a \times 1\,{\text{GeV}})^4$, where $C$ is found to be 0.00008(66). The smaller discretisation effects seen using Eq.~(\ref{eq:zv-adjust}) may make this approach preferable to that of using $Z_V^{\text{loc}}(\text{SMOM})$ for a single $\mu$ value, although it doubles the cost. Using three values of $\mu$, a higher-order scheme could obviously be devised to reduce discretisation effects further. \subsection{$Z_V$ for the local current in the RI$'$-MOM scheme} \label{subsec:MOMloc} \begin{figure} \includegraphics[width=0.47\textwidth]{ZV_local_MOM_coarse5_mass_dep.pdf} \caption{Valence mass dependence of our raw results for $Z_V^{\text{loc}}$ calculated in the RI$'$-MOM scheme, before multiplication by the additional renormalisation factor needed to match to $\overline{\text{MS}}$. Results and extrapolation are shown for $\mu=3$ GeV on set 2. } \label{fig:mass-dep-loc-MOM} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{MOM_FF_diff.pdf} \caption{$Z_V^{\mathrm{loc}}(\mathrm{MOM})$ for $\mu$ values between 2 GeV and 4 GeV, plotted as a difference to the corresponding $Z_V$ at that lattice spacing obtained from the vector form factor at zero momentum-transfer. The fit shown (see text) accounts for discretisation errors and condensate contributions, with condensate contributions being necessary to obtain a good fit.
} \label{fig:MOM-loc} \end{figure} We now turn to the determination of the renormalisation constant for the nonconserved local vector current using the RI$'$-MOM scheme, $Z_V^{\text{loc}}(\text{MOM})$. Again, the vector vertex function must be a taste singlet. The RI$'$-MOM scheme uses a simple $\gamma_{\mu}$ projector (Eq.~(\ref{eq:zvmomdef})), which for the HISQ local vector current needs to have spin-taste $\gamma_{\mu}\otimes \gamma_{\mu}$. Then we use \begin{equation} \label{eq:loc-MOM} \sum_{\mu} \overline{\overline{(\gamma_{\mu} \otimes \gamma_{\mu})}} \Lambda_{V,\text{loc}}^{\mu} \end{equation} to determine $Z_q/Z_V$, along with Eq.~(\ref{eq:zqdefhisq}) to determine $Z_q$. Figure~\ref{fig:mass-dep-loc-MOM} shows the valence mass extrapolation for one set of raw results. Despite having a more significant mass extrapolation than for the RI-SMOM results (Fig.~\ref{fig:mass-dep-loc-SMOM}), this is still very mild. Table~\ref{tab:local} gives our results in column 4, where we note that the values given for $Z_V^{\text{loc}}(\text{MOM})$ include the additional renormalisation factor shown in Eq.~(\ref{eq:msbarmom}) and given in Table~\ref{tab:RI-MSbar-conv}. Figure~\ref{fig:MOM-loc} shows our results given, following the discussion in Section~\ref{subsec:SMOMloc}, as a difference to the renormalisation constants obtained for the local current using the form factor method in~\cite{Chakraborty:2017hry}. This figure is very different from Fig.~\ref{fig:SMOM-loc}, with the results showing no sign of converging to zero in the continuum limit, which would demonstrate agreement between the form factor and RI$'$-MOM schemes for $Z_V$. This shows the presence of condensate contributions in $Z_V^{\text{loc}}(\text{MOM})$; to fit these results we need to include condensate terms that survive the continuum limit in the fit form. For the difference of $Z_V^{\text{loc}}$ values shown in Fig.~\ref{fig:MOM-loc} we use the same fit form as that used earlier for the RI-SMOM results in Eq.~(\ref{eq:fitzvdiff-disc}) (with the addition of an $\alpha_s^4$ term to allow for uncertainty in the matching from RI$'$-MOM to $\overline{\text{MS}}$, as used in Eq.~(\ref{eq:MOM-con-fit}); this term has very little effect). This fit, with $\chi^2/\text{dof} = 0.14$, is shown by the dashed lines in Fig.~\ref{fig:MOM-loc}. It returns a coefficient for the leading-order condensate term of $-0.209(63)$, which is consistent with the leading-order condensate term seen in the conserved current $Z_V$ calculated in the RI$'$-MOM scheme (Eq.~(\ref{eq:conserved-conds}), with opposite sign because of our definition of $\Delta Z_V$ here). Note the contrast with the results in the RI-SMOM case. The results of Fig.~\ref{fig:MOM-loc} show that the standard RI$'$-MOM scheme cannot be used to determine an accurate result for $Z_V$ for nonconserved currents. If no attention is paid to the contamination of $Z_V$ by condensate contributions then $\mathcal{O}(1\%)$ systematic errors will be made. \begin{figure} \includegraphics[width=0.47\textwidth]{MOMrat_FF_diff.pdf} \caption{$Z_V^{\mathrm{loc}}(\mathrm{MOM}_{\text{Rc}})$ from the modified RI$'$-MOM scheme of Eq.~(\ref{eq:Rcdef}), plotted as a difference to the corresponding $Z_V$ at that lattice spacing obtained from the vector form factor at zero momentum-transfer. Results are shown for $\mu$ values from 2 GeV to 4 GeV. The fit shown (see text) accounts for discretisation errors and condensate contributions, but condensate contributions are strongly constrained to be zero.
} \label{fig:MOMrat-loc} \end{figure} We can modify the RI$'$-MOM scheme to address this issue, however. We know that the conserved current and the renormalised local current are the same operator in the continuum limit and so their vertex functions must contain the same nonperturbative contributions from the RI$'$-MOM scheme in that limit. We can therefore calculate $Z_V^{\text{loc}}$ by taking a ratio of the vertex functions of the conserved and local currents. We call this scheme the RI$'$-$\text{MOM}_{\text{Rc}}$ scheme. Specifically we calculate \begin{equation} \label{eq:Rcdef} Z_V^{\text{loc}}(\text{MOM}_{\text{Rc}}) = \frac{\Tr(\gamma_{\mu}\Lambda^{\mu}_{V,\text{cons}})}{\Tr(\gamma_{\mu}\Lambda^{\mu}_{V,\text{loc}})} = \frac{Z_V^{\text{loc}}(\text{MOM})}{Z_V^{\text{cons}}(\text{MOM})} . \end{equation} Taking the ratio also means that no additional renormalisation is needed in this case. Our results from implementing this scheme are given in Table~\ref{tab:local} (column 5). Figure~\ref{fig:MOMrat-loc} shows the results, given once again as a difference to the renormalisation constant obtained for the local current using the form factor method. We now see that the difference of $Z_V$ values clearly approaches 0 in the continuum limit and there is no sign of condensate contamination in that limit. The results in the RI$'$-MOM$_{\text{Rc}}$ scheme look very similar to those in the RI-SMOM scheme (see Fig.~\ref{fig:SMOM-loc}). We can fit the values for $\Delta Z_V^{\text{loc}}$ in Fig.~\ref{fig:MOMrat-loc} to the same form as that used for the RI-SMOM results (Eq.~(\ref{eq:fitzvdiff-disc})). The fit gives $\chi^2/\text{dof}$ = 0.32 and constrains the lowest-order condensate coefficient that would survive the continuum limit to $-0.01(5)$. We conclude that the modified RI$'$-MOM scheme of Eq.~(\ref{eq:Rcdef}) does provide a method to determine an accurate renormalisation for the local vector current. The method does require calculations with the conserved current and so is more complicated than the RI-SMOM scheme. \subsection{$Z_V$ for the local current in the RI-SMOM$_{\gamma_{\mu}}$ scheme} \label{subsec:SMOMgloc} \begin{figure} \includegraphics[width=0.47\textwidth]{SMOMgam_FF_diff.pdf} \caption{$Z_V^{\mathrm{loc}}(\mathrm{SMOM}_{\gamma_{\mu},\text{Rc}})$ from the modified RI-SMOM$_{\gamma_{\mu}}$ scheme, plotted as a difference to the corresponding $Z_V$ at that lattice spacing obtained from the vector form factor at zero momentum-transfer. Results are shown for $\mu$ values from 2 GeV to 4 GeV. The fit shown (see text) accounts for discretisation errors and condensate contributions, but condensate contributions are strongly constrained to be zero. } \label{fig:SMOMgam-loc} \end{figure} An alternative momentum-subtraction scheme is the RI-SMOM$_{\gamma_{\mu}}$ scheme, which uses the same vertex function (and wavefunction renormalisation) as the RI$'$-MOM scheme but uses RI-SMOM kinematics (i.e.\ $q=p_1-p_2 \ne 0$, $p_1^2=p_2^2=q^2=\mu^2$). To obtain an accurate result for $Z_V$ for the local current (as an example of a nonconserved current) we must modify the scheme as was done for the RI$'$-MOM scheme in Eq.~(\ref{eq:Rcdef}). The only difference is that we must also modify the tree-level vertex function factor for the conserved current from that of Eq.~(\ref{eq:constree}) to reflect the SMOM kinematics. Table~\ref{tab:local} gives our results from this modified RI-SMOM$_{\gamma_\mu,\text{Rc}}$ scheme in column 6.
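Because both vertex functions in Eq.~(\ref{eq:Rcdef}) are measured on the same gluon field configurations, their statistical fluctuations are expected to be strongly correlated, and the ratios in these Rc schemes benefit from keeping those correlations. A minimal sketch of such a correlated ratio (ours, with purely illustrative numbers, not our data) using the gvar package:
\begin{verbatim}
# Correlated ratio of projected vertex-function traces, cf. Eq. (Rcdef).
import numpy as np
import gvar as gv

rng = np.random.default_rng(0)
common = rng.standard_normal(20)      # shared fluctuation per configuration
samples = dict(
    cons=1.010 + 0.004 * common,
    loc=0.975 + 0.004 * common + 0.001 * rng.standard_normal(20),
)
avg = gv.dataset.avg_data(samples)    # means with the full covariance
print(avg['cons'] / avg['loc'])       # shared fluctuations largely cancel
\end{verbatim}
Dividing the correlated averages in this way propagates the shared fluctuations, which largely cancel in the ratio.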
Figure~\ref{fig:SMOMgam-loc} plots the difference of these $Z_V$ values with those from using the form factor method. We see that, as for the SMOM scheme in Fig.~\ref{fig:SMOM-loc} and the modified RI$'$-MOM scheme in Fig.~\ref{fig:MOMrat-loc}, the values converge to zero as $a\rightarrow 0$, as discretisation effects should. Discretisation effects are significantly larger here than in the previous schemes, however. We fit the results to the same functional form as used for the other schemes (i.e.\ Eq.~(\ref{eq:fitzvdiff-disc})) and obtain a good fit (we double the prior width on the $(a\mu)^n$ terms to allow for the larger discretisation effects). The fit has $\chi^2/\text{dof} = 0.56$ and the lowest-order condensate coefficient is constrained very tightly, as in the other exact cases, to $-0.03(5)$. The same conclusions apply as for the RI$'$-MOM scheme, i.e.\ that defining $Z_V$ from the ratio of vertex functions with the conserved current gives an exact result. \subsection{Renormalisation of the axial vector current} \label{subsec:ZA} \begin{figure} \includegraphics[width=0.47\textwidth]{VminusAsfine.pdf} \caption{The difference between the vertex functions for the local vector and local axial vector currents as a function of $\mu$ in the RI$'$-MOM and RI-SMOM schemes. The RI-SMOM scheme gives a much smaller nonperturbative contribution to $Z_A-Z_V$ than the RI$'$-MOM scheme, but neither scheme gives $Z_V=Z_A$, a relation that holds to all orders in perturbation theory. The values plotted result from an extrapolation to zero valence quark mass and are shown for the finest lattice we use (set 7 in Table \ref{tab:ensembles}). } \label{fig:VminusA} \end{figure} The renormalisation factors for axial vector currents can also be calculated using momentum-subtraction schemes. However, for actions with sufficient chiral symmetry the axial vector current renormalisation, $Z_A$, can be related to the vector current renormalisation at zero quark mass. For example, for staggered quarks, $Z_{S\otimes T}=Z_{S5\otimes T5}$ to all orders in perturbation theory~\cite{Sharpe:1993ur}. Here $S\otimes T$ indicates the operator spin-taste and $S5=\gamma_5S$. This means that the local axial vector current and local vector current have the same renormalisation factor. Having shown that the local vector current renormalisation factor can be calculated accurately and without contamination by condensate contributions in the RI-SMOM scheme in Section~\ref{subsec:SMOMloc}, it makes sense to use this value also for the local axial vector current. Indeed, doing a separate calculation of $Z_A^{\text{loc}}$ risks introducing condensate contributions where none would be found using $Z_A=Z_V$. Figure~\ref{fig:VminusA} shows the difference between the local vector and local axial vector vertex functions after extrapolation to zero quark mass, on the superfine lattices, set 7. Each point plotted is the difference of the local vector and local axial vector vertex functions, i.e.\ $Z_q/Z_V-Z_q/Z_A$. We see that the difference in the RI-SMOM scheme is small but not zero. The results demonstrate the approximately $\mu^{-6}$ behaviour expected on the basis of a chiral-symmetry-breaking condensate contribution~\cite{Aoki:2007xm}. Note that this contribution comes from $Z_q/Z_A$. For the RI$'$-MOM scheme the difference is much larger than for RI-SMOM and has a smaller slope in this log-log plot. This reflects the known impact of chiral symmetry breaking nonperturbative artefacts in this scheme~\cite{Aoki:2007xm}.
In both cases it would be preferable to use $Z_A=Z_V$, in the RI$'$-MOM case using the modified RI$'$-MOM$_\text{Rc}$ approach of Eq.~(\ref{eq:Rcdef}). \section{Including quenched QED effects} \label{sec:QED} As lattice QCD calculations reach sub-percent precision it will become necessary to evaluate the electromagnetic corrections expected at this level. If QED effects are included in calculations involving nonconserved vector currents, such as the ongoing Fermilab/HPQCD/MILC calculations of the hadronic vacuum polarisation contribution to the anomalous magnetic moment of the muon~\cite{Davies:2019efs}, then consistency requires that QED effects are also included in the vector current renormalisation. Here we will study the impact of the valence quarks having electric charge on the renormalisation of the local vector current using the RI-SMOM scheme (for earlier results using different methods see~\cite{Boyle:2017gzv, Giusti:2019xct}). We include `quenched QED' in our lattice calculations by multiplying our QCD gauge fields by a U(1) gauge field representing the photon. The photon field, $A_{\mu}(k)$, is randomly generated in momentum space from a Gaussian distribution with variance $1/\hat{k}^2$ to yield the correct $\mathcal{O}(a^2)$-improved Feynman gauge propagator on the lattice (the definition of $\hat{k}$ is given in Eq.~(\ref{eq:qhatdef})). $A_{\mu}(k)$ is then converted to Landau gauge and transformed to position space. To ensure the correct gauge covariance in position space it is important to remember that the gauge fields live at the centres of the links, not at the sites~\cite{Drummond:2002yg}. The $A_{\mu}$ field in position space is then used as the phase to construct a U(1) field~\cite{Duncan:1996xy} in the form $\exp(ieQA_{\mu})$, where $Q$ is the charge of the quark that will interact with the field, in units of the charge on the proton, $e$. We use the $\mathrm{QED}_L$ formulation of compact QED~\cite{Hayakawa:2008an}, in which all zero modes are set to zero, $A_{\mu}(k_0,\mathbf{k}=0)=0$, with $A_{\mu}$ in Landau gauge (for a review of approaches to handling zero modes in QED on the lattice see~\cite{Patella:2017fgk}). We multiply the gluon field for each link of the lattice by the appropriate U(1) field before applying the HISQ smearing. The valence quarks can then interact with the photon via the standard HISQ action. Note that the sea quarks remain electrically neutral, so this is not a fully realistic scenario. Nevertheless it allows us to evaluate the most important QED effects. \begin{figure} \includegraphics[width=0.47\textwidth]{ZV_con_qed_against_amu.pdf} \caption{Results for the renormalisation factor, $Z_V$, for the QCD+QED conserved current for the HISQ action, calculated using the RI-SMOM scheme. Results are given for coarse, fine and superfine gluon field configurations for quark electric charge $Q=2e/3$ and a variety of momenta with magnitude $a\mu$ in lattice units. } \label{fig:SMOM-con-u1} \end{figure} We have tested that the U(1) configurations we generate correctly reproduce the $\mathcal{O}(\alpha_{\mathrm{QED}})$ perturbation theory prediction for the average plaquette~\cite{Portelli:2010yn}, independent of gauge choice.
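To make the generation procedure concrete, the following minimal numpy sketch (ours) produces one Feynman-gauge $\mathrm{QED}_L$ photon configuration. For simplicity it uses the unimproved $\hat{k}_{\mu}=2\sin(k_{\mu}/2)$ rather than the improved form of Eq.~(\ref{eq:qhatdef}), and the Landau-gauge rotation, the construction of U(1) links at the link centres and the overall normalisation conventions are all omitted:
\begin{verbatim}
# One Feynman-gauge QED_L photon field A_mu(x) on a T x L^3 lattice.
import numpy as np

def photon_field(T, L, rng):
    shape = (4, T, L, L, L)
    eta_k = np.fft.fftn(rng.standard_normal(shape), axes=(1, 2, 3, 4))
    ks = np.meshgrid(*(2 * np.pi * np.fft.fftfreq(n) for n in (T, L, L, L)),
                     indexing='ij')
    khat2 = sum((2 * np.sin(k / 2))**2 for k in ks)  # unimproved khat^2
    khat2[0, 0, 0, 0] = 1.0               # avoid dividing by zero
    A_k = eta_k / np.sqrt(khat2)          # Gaussian, variance prop. 1/khat^2
    A_k[:, :, 0, 0, 0] = 0.0              # QED_L: zero spatial-zero modes
    return np.real(np.fft.ifftn(A_k, axes=(1, 2, 3, 4)))

A = photon_field(16, 8, np.random.default_rng(1))
# U(1) links for a quark of charge Q (in units of e): exp(1j * e * Q * A)
\end{verbatim}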
Our results for the average value of the U(1) link field also agree with the $\mathcal{O}(\alpha_{\mathrm{QED}})$ expectations: \begin{eqnarray} \label{eq:u1link} \text{Landau \, gauge} \, &:& 1-0.0581\,\alpha_{\text{QED}}Q^2 \\ \text{Feynman \, gauge} \, &:& 1-0.0775\,\alpha_{\text{QED}}Q^2 \nonumber \end{eqnarray} Note that the Landau gauge $\mathcal{O}(\alpha_{\text{QED}}Q^2)$ coefficient is $1/C_F=3/4$ that of the corresponding QCD result for the $a^2$-improved gluon action~\cite{Hart:2004jn}, since the gluon propagator then has the same form as that of the photon here. The Feynman gauge coefficient is then 4/3 of the Landau gauge coefficient. Although we have tested calculations as a function of quark charge, $Q$, the results we will show here are all for $Q=2/3$. The results are not extrapolated to zero valence quark mass and are instead the values at a valence quark mass equal to the light sea quark mass on each ensemble. The valence mass dependence of the results is observed to be negligibly small, as was the case in pure QCD. An important test of the interaction between the quarks and the QCD+QED gauge fields is that $Z_V=1$ for the QCD+QED conserved current in the RI-SMOM scheme, as expected from a trivial extension of the Ward-Takahashi identity to this case. This is demonstrated in Fig.~\ref{fig:SMOM-con-u1}. Our analysis for the renormalisation of the local vector current in the RI-SMOM scheme will study the ratio of the $Z_V^{\mathrm{loc}}$ calculated with and without the inclusion of electromagnetic effects. We proceed exactly as for the pure QCD case discussed in Section~\ref{subsec:SMOMloc}. The strong correlations between the QCD and QCD+QED calculations allow very precise determination of this ratio (a typical correlation being $\sim 0.99$). We will denote a quantity $X$ calculated in pure QCD as $X[\mathrm{QCD}]$ while the same quantity calculated with the inclusion of QED effects will be denoted $X[\mathrm{QED+QCD}]$. We will also employ the notation $X[\mathrm{(QCD+QED)/QCD}] \equiv X[\mathrm{QED+QCD}]/X[\mathrm{QCD}]$. Because QED is a long-range interaction it is important to test finite-volume effects, although we do not expect them to be large here since we are studying the renormalisation of electrically neutral currents. The finite-volume effects in the self-energy function of fermions have been studied in~\cite{Davoudi:2018qpl}, with the result that for off-shell quarks the finite-volume effects start at order $1/L_s^2$, where $L_s$ is the lattice spatial extent. This implies that even the finite-volume effects for quantities such as $Z_q$ should be small. Figure~\ref{fig:volume} confirms both of these expectations with results on the three lattice sets with the same parameters but different volumes (sets 3, 4 and 5, ranging in spatial extent from 2.9 fm to 4.9 fm). Negligible effects are seen here and we therefore ignore finite-volume issues in the following analysis. \begin{figure} \includegraphics[width=0.47\textwidth]{Z_q_qed_2gev_vol_dep.pdf} \includegraphics[width=0.47\textwidth]{Z_V_qed_2gev_vol_dep.pdf} \caption{The impact of quenched QED (with quark charge $2e/3$) on the determination of $Z_V^{\text{loc}}$ and $Z_q$ using the RI-SMOM scheme as a function of the lattice spatial extent, $L_s$, in lattice units. Results are for coarse lattices, sets 3, 4 and 5, and $\mu$ = 2 GeV. The volume dependence is negligible. } \label{fig:volume} \end{figure} \begin{table} \caption{The ratio of renormalisation factors $Z_V$ for the QCD + quenched QED case to the pure QCD case.
These are for the local HISQ vector current calculated in the RI-SMOM scheme on gluon field configuration sets listed in column 1 and at $\mu$ values listed in column 2 (and at a valence quark mass of $m_l$). } \label{tab:ZV-qed-data} \begin{ruledtabular} \begin{tabular}{lll} Set & $\mu$ [GeV] & $Z_V^{\mathrm{loc}}(\mathrm{SMOM})[\mathrm{(QED+QCD)/QCD}]$\\ \hline 3 & 2 & 0.999631(24) \\ 6 & 2 & 0.999756(32) \\ 7 & 2 & 0.999831(43) \\ \hline 3 & 2.5 & 0.999615(12) \\ \hline 3 & 3 & 0.999622(13) \\ 6 & 3 & 0.9997043(39) \\ 7 & 3 & 0.9997797(92) \\ \hline 6 & 4 & 0.9996754(26) \\ 7 & 4 & 0.9997425(24) \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics[width=0.47\textwidth]{ZV_u1_against_asq.pdf} \caption{The ratio of $Z_V^{\text{loc}}$ values for QCD+QED to QCD calculated in the RI-SMOM scheme. Results are given for coarse to superfine lattices at $\mu$ values from 2 to 4 GeV and plotted against the square of the lattice spacing. The dashed lines give the result of a fit described in the text that shows that the results are fully described by a perturbative series (of which the leading coefficient is known) up to discretisation effects. The dips in the fit functions close to $a=0$ are the result of the fact that the argument of $\alpha_s$ in the fit function (Eq.~(\ref{eq:qedfit})) is inversely related to $a$. } \label{fig:SMOM-loc-u1} \end{figure} Our results for the effect of quenched QED on $Z_V$ for the local HISQ current in the RI-SMOM scheme are given for $\mu$ values from 2 GeV to 4 GeV and at three values of the lattice spacing in Table~\ref{tab:ZV-qed-data}. The results are plotted in Fig.~\ref{fig:SMOM-loc-u1}. Given our results for the pure QCD case in Section~\ref{subsec:SMOMloc} we expect the results for $Z_V$ for QCD+QED to be similarly well-behaved. We therefore perform a fit to the ratio of $Z_V$ for QCD+QED to that for pure QCD that allows for discretisation effects along with a perturbative expansion for the ratio of renormalisation constants. The leading QCD effects will cancel between the numerator and denominator of the ratio and so the leading term in this expansion will be $\mathcal{O}(\alpha_{\text{QED}})$. We can even fix the coefficient of the leading-order term based on the QCD perturbation theory for the pure QCD case. The $\mathcal{O}(\alpha_s)$ coefficient for $Z_V^{\text{loc}}$ for pure QCD is $-0.1164(3)$~\cite{Chakraborty:2017hry}. We therefore expect that the coefficient of $\alpha_{\text{QED}}Q^2$ in the QED case is $-0.1164\times 3/4 = -0.0873$. For $Q=2e/3$, i.e.\ $Q^2=4/9$, this corresponds to an $\mathcal{O}(\alpha_{\text{QED}})$ coefficient of $-0.0873 \times 4/9 = -0.0388$. With $\alpha_{\text{QED}} \simeq 1/137$ this gives a leading-order result for the ratio of $Z_V^{\text{loc}}$ values of 0.9997, very close to 1. There will in principle be $\alpha_s\alpha_{\text{QED}}$ corrections to this, which are likely to have an even smaller impact. We therefore take a fit form for the ratio of $Z_V$ values given in Table~\ref{tab:ZV-qed-data} of \begin{eqnarray} \label{eq:qedfit} Z_V^{\mathrm{loc}}(\mathrm{SMOM})[\mathrm{(QED+QCD)/QCD}] &=& 1 + \\ && \hspace{-10em}\alpha_{\mathrm{QED}} \left( \sum_{i } c_i \alpha_s^i (1+\sum_j d_{ij}(a\mu)^{2j} ) \right) .\nonumber \end{eqnarray} We use $i=0,1,2,3$ and $j=1,2,3$, fixing $c_0$ to the value given above. Note that $\alpha_{\text{QED}}$ does not run in this expression because we are using quenched QED. $\alpha_s$ in Eq.~(\ref{eq:qedfit}) is taken as $\alpha_{\overline{\text{MS}}}(1/a)$. This fit returns a $\chi^2/\text{dof}$ value of 0.25.
The fit is plotted with the results in Fig.~\ref{fig:SMOM-loc-u1}. Figure~\ref{fig:SMOM-loc-u1} shows that the results for $Z_V$ behave as expected. The impact of quenched QED on the value of $Z_V^{\text{loc}}$ is tiny and indeed negligible if we imagine working to an accuracy of 0.1\%. Note that this follows directly from the analysis above in which we derive the $\mathcal{O}(\alpha_{\text{QED}})$ coefficient for the QCD+QED case from the pure QCD case. Because the HISQ action is so highly improved, $Z_V^{\text{loc}}$ is very close to 1 in the pure QCD case. It then has to be true that the difference from 1 in $Z_V$ induced by QED will be over 100 times smaller than that induced by QCD. For the HISQ action this means that the impact of QED in $Z_V^{\text{loc}}$ is of order 0.03\%. This should be contrasted with the domain-wall case, where the $Z_V$ value in pure QCD is 0.7 and so the impact of quenched QED for $Q=2e/3$ is to change $Z_V$ by approximately 0.3/100, in this case 0.3\% (see Table 6 of~\cite{Boyle:2017gzv}); this is not negligible. The effect of having electrically charged sea quarks would appear in $Z_V$ at $\mathcal{O}(\alpha_s^2\alpha_{\text{QED}})$, i.e.\ suppressed by two powers of $\alpha_s$ relative to the leading term; it arises from a photon exchanged across a quark bubble created from a gluon. This is unlikely to change the picture significantly. The effect of QED on $Z_V$ is of course not a physical result and it needs to be combined with hadronic matrix elements for the vector current to understand the physical effect of QED. For this we simply take the values for $Z_V$ at a fixed $\mu$ value for the ensembles for which we have matrix element results, multiply them and extrapolate to the continuum limit. Different quark formalisms should agree on the physical effect (on an uncharged sea). We will give an analysis of the impact of quenched QED on vector current matrix elements calculated with the HISQ action elsewhere. \section{Conclusions} \label{sec:conclusions} We have shown by explicit calculation how the vector Ward-Takahashi identity works for the HISQ action in lattice QCD. Renormalisation methods that make use of this identity will give a renormalisation constant of 1 for the conserved current, as would be obtained in continuum QCD. The RI-SMOM momentum-subtraction scheme is such a scheme, but the RI$'$-MOM scheme is not, and this has implications for the accuracy achievable for $Z_V$ for nonconserved currents within each scheme. Our calculations have used the HISQ action but our conclusions are not specific to this action. The RI-SMOM scheme provides precise values for $Z_V$ for nonconserved currents (using momentum sources) that are completely nonperturbative. Our results show that the $Z_V$ values are `exact' in being free of condensate contamination. This means that we can simply determine $Z_V$ at a given momentum scale $\mu$ on a given gluon-field ensemble, multiply our vector current hadronic matrix element by it and then extrapolate results for the renormalised matrix element to the continuum limit. Because there is no condensate contamination there is no lower limit to the $\mu$ value that can be used. Statistical errors grow as $\mu$ is reduced but discretisation effects become smaller. In Section~\ref{subsec:SMOMloc} we demonstrated a simple method to reduce discretisation effects, if they are an issue, by combining results from two different $\mu$ values.
The RI$'$-MOM scheme can also provide precise values for $Z_V$ for nonconserved currents, but it is not completely nonperturbative since a perturbative conversion to $\overline{\text{MS}}$ (Eq.~(\ref{eq:msbarmom})) is required. A more critical problem with this scheme is that the $Z_V$ values for both conserved and nonconserved currents have condensate contributions that begin at $1/\mu^2$. This means that the $Z_V$ values cannot be used to obtain accurate renormalised vector current matrix elements in the continuum limit without an analysis of these condensate contributions. This requires numbers for $Z_V$ at multiple $\mu$ values and a fit that includes condensate terms. If this analysis is not done, the results obtained in the continuum limit will be incorrect at the 1\% level. An alternative to the standard RI$'$-MOM scheme that avoids this problem is to determine $Z_V$ from a ratio of vector vertex functions for the conserved and nonconserved currents. We call this scheme RI$'$-MOM$_{\text{Rc}}$. A similarly modified RI-SMOM$_{\gamma_{\mu}}$ scheme can also be used to obtain an exact $Z_V$. These schemes are discussed in Sections~\ref{subsec:MOMloc} and~\ref{subsec:SMOMgloc}. It is straightforward to include quenched QED effects in the determination of the vector current renormalisation factor in a fully nonperturbative way using the RI-SMOM scheme and to obtain a full understanding of the results (including consistency with perturbation theory). We see only very small (below 0.1\%) effects for the local HISQ vector current, reflecting the fact that the renormalisation factors in the pure QCD case are already very close to 1. We will include the QCD+QED $Z_V$ values in a future QCD+QED determination of hadronic vector current matrix elements. \subsection*{\bf{Acknowledgements}} We are grateful to the MILC collaboration for the use of their configurations and their code base. We thank E. Follana and E. Royo-Amondarain for gauge-fixing the superfine and fine configurations and we are grateful to E. Follana, S. Sharpe and A. Vladikas for useful discussions. Computing was done on the Darwin supercomputer at the University of Cambridge High Performance Computing Service as part of the DiRAC facility, jointly funded by the Science and Technology Facilities Council, the Large Facilities Capital Fund of BIS and the Universities of Cambridge and Glasgow. We are grateful to the Darwin support staff for assistance. Funding for this work came from the Science and Technology Facilities Council and the National Science Foundation.
\def\baselinestretch{1.2} \def\psnormal{\textwidth=16cm\textheight=21.5cm \oddsidemargin=0.5cm\evensidemargin=0cm \topmargin=0cm\parindent=1cm} \psnormal \begin{document} \pagestyle{empty} \hspace{3cm} \vspace{-3.4cm} {\vbox{\baselineskip=12pt{\rightline{\small{CERN--TH.6958/93}} \rightline{\small{NEIP--93--006}}} \rightline{{\small{IEM--FT--75/93}}}} \vspace{1cm} \begin{center} {\bf {\large M}ODEL-{\large I}NDEPENDENT {\large P}ROPERTIES AND\\ {\large C}OSMOLOGICAL {\large I}MPLICATIONS OF THE {\large D}ILATON AND {\large M}ODULI\\ {\large S}ECTORS OF {\large 4-D} {\large S}TRINGS} \vspace{1cm} B. de CARLOS${}^*$, \ J.A. CASAS${}^{**,*}\;$, \ F. QUEVEDO${}^{***} \footnote{Supported by the Swiss National Science Foundation}\;$, \ E. ROULET${}^{**}$ \vspace{0.7cm} ${}^{*}$ {\it Instituto de Estructura de la Materia (CSIC),\\ Serrano 123, 28006--Madrid, Spain} \vspace{0.4cm} ${}^{**}$ {\it CERN, CH--1211 Geneva 23, Switzerland } \vspace{0.3cm} ${}^{***}$ {\it Institut de Physique, Universit\'e de Neuch\^atel,\\ CH--2000 Neuch\^atel, Switzerland } \vspace{0.3cm} \end{center} \vspace{0.7cm} {\vbox{\baselineskip=15pt \noindent We show that if there is a realistic 4-d string, the dilaton and moduli supermultiplets will generically acquire a small mass $\sim O(m_{3/2})$, providing the only vacuum-independent evidence of low-energy physics in string theory beyond the supersymmetric standard model. The only assumptions behind this result are (i) softly broken supersymmetry at low energies with zero cosmological constant; (ii) these particles interact with gravitational strength and the scalar components have a flat potential in perturbation theory, which are well-known properties of string theories; (iii) they acquire a $vev$ of the order of the Planck scale (as required for the correct value of the gauge coupling constants and the expected compactification scale) after supersymmetry gets broken. We explore the cosmological implications of these particles. Similar to the gravitino, the fermionic states may overclose the Universe if they are stable, or destroy nucleosynthesis if they decay, unless their masses belong to a certain range or inflation dilutes them. For the scalar states it is known that the problem cannot be entirely solved by inflation, since oscillations around the minimum of the potential, rather than thermal production, are the main source of their energy and can lead to a huge entropy generation at late times. We discuss some possible ways to alleviate this entropy problem, which favour low-temperature baryogenesis, and also comment on the possible role of these particles as dark matter candidates or as sources of the baryon asymmetry through their decay.}} \vspace{0.5cm} \begin{flushleft} {\vbox{\baselineskip=12pt {\small{CERN--TH.6958/93}} \\ {\small{July 1993}}}} \end{flushleft} \psnormal \newpage \baselineskip=16pt \pagestyle{plain} \pagenumbering{arabic} \section{Introduction} One of the major problems facing string theories is the lack of predictive power for low-energy physics. This problem is mostly due to the immense number of consistent supersymmetric vacua that can be constructed.
Concentrating on model-independent properties of these vacua, it is well known that all of them include in the spectrum, besides the gravity sector, a dilaton ($S$) multiplet. The tree-level couplings of this field are well understood and are independent of the vacuum. In particular it couples to the gauge kinetic terms, thus its $vev$ gives the gauge coupling constant at the string scale. It is also known to have vanishing potential to all orders in perturbation theory and to couple to every other light field only by non-renormalizable interactions. Another generic property of 4-d string vacua is that each of them belongs to classes of models labelled by continuous parameters called moduli ($T_i$). These parameters are fields in the effective field theory whose potential is flat. The standard moduli fields characterize a particular compactification, e.g. the size and shape of a Calabi--Yau manifold or toroidal orbifold. But they are also known to exist even in models without a clear geometric interpretation like asymmetric orbifolds \footnote{Examples of models without moduli have been constructed, but none of them corresponds to a 4-d string vacuum \cite{HMV}. }. Even though their existence is generic, their couplings are not as model-independent as those of the dilaton, except for having vanishing potential and non-renormalizable couplings to the other light fields. Independent of the particular mechanism of supersymmetry breaking, any realistic string model is expected to lead to a low-energy theory with softly broken supersymmetry at a low scale in order to solve the hierarchy problem. Experimental constraints also require that the cosmological constant is essentially zero and that the gauge coupling constant at the unification scale is of order one. Also the size and shape (moduli) of the compact dimensions are expected to be of the order of the Planck scale. These phenomenological requirements, together with the assumption that the supersymmetry breaking mechanism is responsible for fixing those $vev$'s, are enough to prove a very simple but strong result in string theory: the scalar and fermionic components of the dilaton and moduli superfields can take masses at most of the order of the gravitino mass, which is expected to be $\leq 1$ TeV in order to solve the hierarchy problem. This means that any realistic string theory (as defined above) should have at low energies not only the supersymmetric standard model spectrum, but also the dilaton and moduli superfields \footnote{Strictly speaking, not all of the moduli fields have to take $vev$'s of order 1. Our result applies only to those that have a $vev$ of that order, having geometrical interpretation or not. The logical alternatives are that non-perturbative effects do not lift the degeneracy completely, thus leaving some flat directions and hence massless fields, or that the $vev$'s are smaller (larger) than the Planck scale, in which case the corresponding field is heavier (lighter). For an example in this class see \cite{AMQ}.}. This fact is actually not surprising since before supersymmetry breaking these fields have flat potentials (and are therefore massless). The interesting observation is that this can be claimed in a completely vacuum-independent way and is therefore a `prediction' of any model satisfying the above hypotheses, which are almost imperative in any realistic string model (note, however, that in ordinary field theories the $S$ and $T_i$ fields are not necessarily present).
It is then natural to ask what the physical implications of these light particles are. Since all of their couplings to the observable sector are suppressed by powers of the Planck scale, it is probably fruitless to look for direct phenomenological consequences. But they can play an important role in cosmology, as very weakly interacting light particles do. In general, particles with these properties and masses in the TeV range are problematic for cosmology \cite{PWW,PP,CFKRR}. If they are stable, they can overclose the Universe, and if they decay late, they can dilute or alter the light element abundances. For the fermionic fields this problem could be solved if there is a period of inflation, which would dilute their initial number density, making them harmless. For scalar fields, the problem is more serious since after inflation they are usually sitting far from their zero-temperature minimum and they store a large amount of energy in the associated oscillations. These problems have been noticed before in the context of particular supergravity \cite{CFKRR} and superstring \cite{ENQ} models, but according to the arguments above they are truly generic in string theory. We examine alternative mechanisms to solve these problems, finding that some of them could be naturally implemented in phenomenologically interesting string models. We find that the entropy problem is significantly alleviated if the inflaton decays at late times giving a small reheating temperature, and that the cosmological bounds then favour low-energy scenarios of baryogenesis (e.g. at the electroweak scale or below). Finally, we point out that the dilaton and moduli superfields may provide a dark matter candidate and, in $R$-parity violating models, they could actually trigger low-temperature baryogenesis \cite{CR}. \vspace{0.6cm} \section{Masses of the Dilaton and Moduli Superfields} \vspace{0.2cm} Following the standard notation \cite{peter}, the scalar potential for a $d=4,\ N=1$ supergravity theory can be conveniently written in $M_P$ units as \begin{eqnarray} V=\left[F_k F^l{\cal G}^k_l -3 e^{{\cal G}}\right] + \frac{1}{2}f_{\alpha\beta}^{-1}D^\alpha D^\beta\;\;, \label{V} \end{eqnarray} where \begin{eqnarray} D^\alpha=-{\cal G}^i T^{\alpha j}_i z_j\;\;,\;\;\; F^l = e^{{\cal G}/2} ({\cal G}^{-1})^l_j {\cal G}^j \;. \label{FD} \end{eqnarray} Here $z_j$ denote the chiral fields, $T^\alpha$ are the gauge group generators, ${\cal G}=K+\log|W|^2$ where $K$ is the K\"{a}hler potential and $W$ the superpotential, $f_{\alpha\beta}$ are the gauge kinetic functions (in superstrings $f_{\alpha\beta}=S\delta_{\alpha\beta}$ up to threshold corrections), and ${\cal G}^i$ (${\cal G}_j$) denotes $\partial {\cal G}/\partial z_i$ ($\partial {\cal G}/\partial z^*_j$). $F_l$ and $D^\alpha$ are the $F$ and $D$ auxiliary fields. Supersymmetry is broken if at least one of the $F_l$ or $D^\alpha$ fields takes a $vev$. The scale of supersymmetry breakdown, $M_S$, is usually defined as $M_S^2=\langle F\rangle$ or $\langle D\rangle$ (depending on the type of breaking).\vspace{0.4cm} Let us now see why the masses of the dilaton and moduli fields are of order $m_{3/2}$. The assumptions needed to show this are the following: \begin{description} \item[i)] The $S$ and $T_i$ fields must take $vev$'s of the order of the Planck scale $M_P$. This is certainly mandatory for the $S$ field, whose $vev$ gives (up to threshold corrections) the gauge coupling constant at the unifying string scale ($\langle {\rm Re}\, S \rangle= g_{string}^{-2}$).
If the model can be interpreted as coming from a higher-dimensional compactification, the size and shape of these compact dimensions are parametrized by $\langle T_i \rangle$ and are also expected to be of the order of the Planck scale. \item[ii)] Supersymmetry is softly broken at a low scale with vanishing cosmological constant. More precisely, the mechanism for supersymmetry breakdown should yield a gravitino mass $m_{3/2}\stackrel{<}{{}_\sim} 1\ {\rm TeV}\ll M_P$ (without unnatural fine-tunings), as is required for phenomenological reasons. \end{description} We emphasize here that we do not assume any particular scenario for achieving (i) and (ii). We just assume that if the observable world is the low-energy limit of a string theory, the previous conditions must be fulfilled. Besides points (i) and (ii), the following well-known property of string perturbation theory should be added: \begin{description} \item[iii)] The fields $S$ and $T_i$ interact with gravitational strength and have a flat potential at string tree-level which remains flat to all orders of perturbation theory \cite{witten}. This means that the only source of masses will be non-perturbative effects, which on the other hand should also be responsible for the supersymmetry breaking process. Hence, we will assume these fields to be massless in the absence of supersymmetry breaking\footnote{It is conceivable that non-perturbative effects which trigger masses for the $S$ and $T_i$ fields without breaking supersymmetry could exist. However, supersymmetry has to be broken in any realistic model and non-perturbative effects would be responsible for that. The way to relax this assumption is the existence of a hierarchy of non-perturbative effects: Planck-scale effects which fix {\it all} the $vev$'s without breaking supersymmetry, and low-energy effects which break supersymmetry, as suggested for the dilaton in \cite{FILQ}.}. \end{description} Once the assumptions are stated, it is straightforward to find the order of magnitude of the masses. Roughly speaking (see for instance \cite{joe}), after supersymmetry breaking the mass splittings in a given supermultiplet are given by the square of the supersymmetry breaking scale times the coupling strength between that field and the goldstino ($\sim M_S^2/M_P$), which is just the gravitino mass. Let us see this more explicitly\footnote{Notice that $S$ and $T_i$ share the properties of the hidden sector fields in the general analysis of reference \cite{SW} in the context of $N=1$ supergravity, although that analysis also includes cases such as linear superpotentials of the Polonyi type which do not hold for the moduli and dilaton fields. It is straightforward to prove our result using that formalism also. }. For simplicity of notation, let us denote collectively by $\phi$ the scalar components of the $S$ and $T_i$ fields and by $z$ the observable fields. Since the $vev$'s of the $\phi$ fields are of the order of the Planck scale it is convenient to define the dimensionless fields $\chi \equiv \phi/M_P$. The K\"{a}hler potential can be written as $K=M_P^2 K_0(\chi) + K_1(\chi)z^*z+\cdots$ where $K_0$ and $K_1$ are arbitrary real functions. The important point here is that $K$ is of order $M_P^2$ and not higher, since the $\phi$ fields have only gravitational strength interactions. Hence ${\cal G}^k_l=K^k_l\stackrel{<}{{}_\sim} O(1)$ in (\ref{V}).
In flat space the gravitino mass is given by \cite{peter} \begin{eqnarray} m_{3/2}^2=e^{{\cal G}}=e^K|W|^2 \label{m32} \end{eqnarray} From (\ref{V}), (\ref{m32}) and the cancellation of the cosmological constant, it is clear that \begin{eqnarray} m_{3/2}^2\sim M_S^4 M_P^{-2} \label{m322} \end{eqnarray} Hence $V$, as given in eq.(\ref{V}), is a sum of terms \begin{eqnarray} V=\sum_a{\cal V}_a \label{VVa} \end{eqnarray} with $\langle{\cal V}_a\rangle\ \stackrel{<}{{}_\sim}\ m_{3/2}^2M_P^2\;\; (=M_S^4)$. Some terms can have $\langle{\cal V}_a\rangle=0$ (or $\ll m_{3/2}^2M_P^2$), but never $\langle {\cal V}_a\rangle\gg m_{3/2}^2M_P^2$, since this would imply a fine-tuning in order to cancel the cosmological constant. Of course, $\langle{\cal V}_a\rangle=0$ in the absence of supersymmetry breaking. The $\phi$ masses are then given by \begin{eqnarray} m_\phi^2=\langle\partial^2V/\partial \phi^2\rangle=\sum_a\langle\frac{\partial^2{\cal V}_a} {\partial \phi^2}\rangle=\frac{1}{M_P^2}\sum_a\langle\frac{\partial^2{\cal V}_a} {\partial \chi^2}\rangle\sim M_P^{-2} (m_{3/2}^2 M_P^2)\sim O(m_{3/2}^2) \label{Mass} \end{eqnarray} The additional factor coming from the normalization of the kinetic term depends on ${\cal G}^k_l$ and is $O(1)$. Notice that if the breaking of supersymmetry were explicit rather than spontaneous, the same conclusion would hold as long as the terms in the potential are of order $M_S^4$. \vspace{0.4cm} \vfill\eject \leftline{\it Dilatino and Modulino Masses} \vspace{0.2cm} As usual, the fermion component of the field whose $F$ (or $D$) auxiliary field takes a $vev$ is the goldstino, which in principle is massless but is eaten by the gravitino through the super-Higgs effect. This field could well be the dilatino $\tilde{S}$ or one of the modulinos $\tilde{T_i}$, or perhaps a certain combination of them. For the remaining fermionic components, $\psi_i$, of the chiral superfields $z_i$ the mass term is $[M_\psi]^{ij}\bar\psi_{Li} \psi_{Lj}$, where the fermionic mass matrix $M_\psi$ can be written as $[M_\psi]^{ij} = \sum_{n=1}^4 [M_\psi^{(n)}]^{ij}$, with \begin{eqnarray} & [ M_\psi^{(1)}]^{ij} & = -e^{K/2}|W|\left\{ K^{ij} + \frac{1}{3}K^iK^j\right\} \nonumber \\ &[M_\psi^{(2)}]^{ij} & = -e^{K/2}|W|\left\{\frac{ K^{i}W^j + K^jW^i} {3~ W}-\frac{2W_iW_j}{3W^2}\right\} \nonumber \\ &[M_\psi^{(3)}]^{ij} & = -e^{K/2}\sqrt{\frac{W^*}{W}}\ W^{ij} \nonumber \\ &[M_\psi^{(4)}]^{ij} & = e^{{\cal G}/2}{\cal G}^l({\cal G}^{-1})^k_l {\cal G}^{ij}_k \label{Mfer} \end{eqnarray} where $W^i=\partial W/\partial z_i$, $K^i=\partial K/\partial z_i$. Notice that the canonically normalized fermion fields $\hat \psi_{Ln}$ are given by $\psi_{Lj}=U^*_{jn}\Lambda_{nn}\hat \psi_{Ln}$, where $U$ is the unitary matrix diagonalizing the $K^i_j$ matrix and $\Lambda={\rm diag}(\lambda_n^{-1/2})$ with $\lambda_n$ the corresponding eigenvalues. Then the mass matrix for the $\hat \psi_{Ln}$ fields is given by \begin{eqnarray} \hat M = \Lambda U^+ M (\Lambda U^+)^T \label{Mfer2}\;\;. \end{eqnarray} Since in general the non-vanishing $K_j^i$ derivatives are $\stackrel{<}{{}_\sim} O(1)$, $M$ and $\hat M$ are of the same order of magnitude. Of course, we are interested in the case where the $\psi_i, \psi_j$ fields are the dilatino $\tilde S$ and the modulinos $\tilde T_i$. Let us then analyse the magnitude of each $[M_\psi^{(n)}]^{ij}$ term individually.
The first term $[M_\psi^{(1)}]^{ij}$ is clearly of order $m_{3/2}$ ($=e^{K/2}|W|$) since the possible non-vanishing $K^{ij}, K^i, K^j$ derivatives are in general $\stackrel{<}{{}_\sim} O(1)$ in Planck units. (Notice that for normal observable fields with zero $vev$'s these quantities are usually vanishing.) The second term $[M_\psi^{(2)}]^{ij}$ is $O(m_{3/2})$ for analogous reasons, together with the fact that $W^i/W\stackrel{<}{{}_\sim} O(1)$ in Planck units. To see this, note that an $F$-term, $F^k$, can be written (see eq.(\ref{FD})) as \begin{eqnarray} F^k=e^{K/2}|W|(K^{-1})^k_j\left( K^{j} + \frac{W^j}{W}\right) \label{Fk} \end{eqnarray} where the derivatives of $K$ are $\stackrel{<}{{}_\sim} O(1)$ and $\langle F_k \rangle\stackrel{<}{{}_\sim} M_Pm_{3/2}$. Thus $W^i/W\stackrel{<}{{}_\sim} O(1)$. The third term $[M_\psi^{(3)}]^{ij}$ contains the contribution to the fermion masses which in principle is not triggered by supersymmetry breaking (notice that this term is not proportional to $m_{3/2}$). On the other hand, in the absence of supersymmetry breaking we know that $m_{\tilde S}=m_{\tilde T_i}=0$. Consequently, for the dilaton and the moduli, $W^{ij}$ can only be different from zero as an effect of supersymmetry breaking. Following reasoning similar to that used to estimate the value of $\partial^2 V/\partial \phi^2$, it is easy to see that $W^{ij}=O(W)M_P^{-2}=O(m_{3/2})$, and thus $[M_\psi^{(3)}]^{ij}=O(m_{3/2})$. Finally, $[M_\psi^{(4)}]^{ij}$ can be written as $[M_\psi^{(4)}]^{ij}=F^kK^{ij}_k$, where $K^{ij}_k\stackrel{<}{{}_\sim} O(1)$ in Planck units and $\langle F^k\rangle\stackrel{<}{{}_\sim} M_Pm_{3/2}$. Therefore, $[M_\psi^{(4)}]^{ij}=O(m_{3/2})$. To summarize, under the assumptions (i)--(iii) the masses of the scalar and fermionic components of the dilaton and the moduli are $O(m_{3/2})$\footnote{Notice that this justifies the procedure of ref. \cite{KL} where intermediate-mass hidden fields are integrated out whereas the moduli fields are kept in the low-energy theory.}. \vspace{0.5cm} \leftline{\it A Class of Models} \vspace{0.2cm} We now illustrate the previous results in a class of orbifold models in which all of the moduli are frozen except for the overall one ($T$). The K\"{a}hler potential in Planck scale units is \cite{DKL} \begin{eqnarray} K(S,T,\Phi_\alpha)=K_0(S,T)+\sum_{\alpha} K_1^{\alpha}(T){|\Phi_\alpha|}^2+ O({|\Phi_\alpha|}^4) \end{eqnarray} where \begin{eqnarray} & K_0(S,T) & = -\log Y- 3\,\log(T_R)\nonumber\\ & K_1^\alpha & = (T_R)^{n_\alpha} \label{Orbkahl} \end{eqnarray} Here $Y = S_R+\delta\,\log(T_R)$, where $\phi_R=\phi + \bar\phi$, $\phi=S,T$. Also $\delta\equiv\frac{\delta^{GS}}{4\pi^2}$ is a model-dependent constant coming from the one-loop anomaly-cancelling Green--Schwarz counterterms, and $n_\alpha$ are the modular weights of the given matter superfield and depend on the sector (twisted or untwisted) of the corresponding field. For the superpotential we will take an arbitrary function $W(S,T,\Phi_\alpha)=W^{pert}(T,\Phi_\alpha)+W^{np}(S,T,\Phi_\alpha)$ where, as we know, the perturbative part does not depend on $S$ and vanishes with $\Phi_\alpha$, and the non-perturbative part is unknown. To write the explicit expression for the fermion masses, we find it convenient to work with a general $W$ which, for vanishing matter fields, reduces to the non-perturbative part $W=W^{np}(S,T)$.
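For later reference, it is useful to record the K\"{a}hler metric components that follow from (\ref{Orbkahl}) at vanishing matter fields; a short computation (which we include here for the reader's convenience) gives \begin{eqnarray} K^S_S=\frac{1}{Y^2}\;,\;\;\; K^T_S=\frac{\delta}{Y^2\,T_R}\;,\;\;\; K^T_T=\frac{1}{(T_R)^2}\left(3+\frac{\delta}{Y}+\frac{\delta^2}{Y^2}\right)\;\;. \end{eqnarray} These are precisely the entries (with $K^T_T=E$ in the notation defined below) that are diagonalized when normalizing the fields.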
The mass matrix (\ref{Mfer}) for the fermionic components of the $S$ and $T$ fields is then given by \begin{eqnarray} &M^{SS} & =m_{3/2} \left[\frac{1}{Y^2}-\frac{2}{Y}\frac{W^S}{W}-(C^{SS}+\frac{1}{3}~ B^2)\right]\nonumber\\ &M^{TS} &= -m_{3/2} \left[ C^{ST}+\frac{B~D}{3}+ \frac{\delta~B}{Y(T_R)}(1+\frac{ 3}{A})+\frac{\delta~W^T}{Y^2~A~W} \right]\\ &M^{TT} & = m_{3/2}\left[E\,(1-\frac{2(T_R)~W^T}{A~W})-\left( C^{TT}+\frac{D^2}{3}\right)+\frac{\delta~B}{(T_R)^2} (1-\frac{6~\delta}{A~Y})\right]\nonumber \;, \label{Orbmfer} \end{eqnarray} where $A\equiv 3+\delta /Y$, $B\equiv W^S/W-1/Y$, $C^{IJ}\equiv W^{IJ}/W- W^IW^J/W^2$, $D\equiv W^T/W-A/(T_R)$ and $E=\frac{A}{(T_R)^2} (1+\frac{\delta^2}{A~Y^2})$. For the correctly normalized fields $\tilde \phi_1,\tilde \phi_2$ the mass matrix is given by (\ref{Mfer2}), where in our case \begin{eqnarray} U^+=\left( \begin{array}{cc} \frac{K_S^T}{\sqrt{(\lambda_1-K_S^S)^2+(K^T_S)^2}} & \frac{\lambda_1-K_S^S}{\sqrt{(\lambda_1-K_S^S)^2+(K^T_S)^2}} \\ \frac{\lambda_2-K_T^T}{\sqrt{(\lambda_2-K_T^T)^2+(K^T_S)^2}} & \frac{K_S^T}{\sqrt{(\lambda_2-K_T^T)^2+(K^T_S)^2}} \end{array} \right) \label{Umas} \end{eqnarray} and $\lambda_{1,2}=\frac{1}{2}\left(\frac{1}{Y^2}+E\right) \pm \sqrt{\frac{1}{Y^4}+E-\frac{2}{Y^2~T_R}(A-\frac{\delta^2}{Y^2})}$. It is clear that the masses are of $O(m_{3/2})$, and their precise values will depend on the non-perturbative superpotential. Notice that if $\delta=0$ and $W^{np}\neq W^{np}(T)$, then $M^{TT}=0$\footnote{An apparent discrepancy with the calculation of \cite{DIN} in this limit is due to the fact that we are using the mass matrix after the super-Higgs effect, which is reflected in the $1/3$ factors in eqs.(\ref{Mfer}).}. For the bosonic fields we have also calculated the masses and found them to be of the same order. For simplicity we will present here only the final result in the limit $\delta =0$ and for the particular case in which the superpotential factorizes as $W(S,T)=\Omega (S)\Gamma(T)$. In this case the physically normalized mass matrix does not have off-diagonal $(S,T)$ components and its eigenvalues are, for the dilaton field: \begin{eqnarray} M_{S_{\pm}}^2 = m_{3/2}^2~S_R^2~ \left( ~ |\frac{2\Omega^S}{\Omega}+S_R~C^{SS}|^2+ |S_R~C^{SS}|^2+(2~C^{SS}~S_R~B+h.c.)\right.\nonumber\\ \left.\pm ~ 2|S_R~\bar B~(2~C^{SS}+ S_R~C^{SS}_S ) +\frac{2~\bar\Omega_{\bar S}}{\Omega}~(\frac{\Omega^S}{\Omega}+S_R~C^{SS})|~\right), \label{Orbmboss} \end{eqnarray} whereas for the modulus we find: \begin{eqnarray} M_{T_{\pm}}^2 = m_{3/2}^2~\frac{T_R^2}{9}~ \left(~|\frac{2~\Gamma^T}{\Gamma}+T_R~C^{TT}|^2+ |T_R~C^{TT}|^2-(2~C^{TT}~ \bar D+ h.c.)\right. \nonumber\\ \left.\pm ~ 2~|2\frac{\bar\Gamma_T}{\bar\Gamma}~(\frac{\Gamma^T}{\Gamma}+T_R~C^{TT})- \bar D~(2C^{TT}+C^{TT}_T)|~ \right). \label{Orbmbost} \end{eqnarray} Notice that, unlike the observable fields, for which usually one member of the supermultiplet acquires a mass of $O(m_{3/2})$ while the mass of the other vanishes (being protected by gauge invariance), for these fields both members of the supermultiplet have similar masses of $O(m_{3/2})$, unless there is a cancellation in one of the equations above or, as we said before, the non-perturbative effects do not lift the flatness of the potential and some of the fields remain massless. This is the case if the superpotential is independent of any of the fields, as can be easily verified. \section{Cosmological Implications} \vspace{0.2cm} It is natural to ask what the physical implications of the lightness of the dilaton and moduli sector are.
First of all, it is necessary to note that the couplings of these particles to the observable sector are suppressed by powers of the Planck scale, so it is perhaps useless to look for direct phenomenological consequences unless they happen to be very light and could mediate long-range forces \cite{mirjam}. However, the dilaton and the moduli can play an important role in cosmology, as very weakly interacting light particles do. Let us first consider the {\em fermionic} components ($\tilde S$ and $\tilde T_i$) of these fields. Their cosmology is very similar to that of the gravitino \cite{PP,quien}. In particular, if they were stable and in the absence of inflation, they would overclose the Universe unless their masses are in the range $m_{\tilde S}, m_{\tilde T_i}\stackrel{<}{{}_\sim}1$ keV \cite{PP}. A gravitino mass of this order is an interesting possibility which has received some attention recently \cite{BH}, but most standard phenomenological scenarios assume $m_{3/2}\sim$ TeV, which seems to be in conflict with cosmology. Nevertheless, these particles are actually expected to decay (unless they are the lightest supersymmetric particles) with a rate $\Gamma_{\tilde \phi}\sim m_{\tilde \phi}^3M_P^{-2}$ ($\phi=S,T_i$). They should decay before nucleosynthesis, otherwise their decay products would unacceptably alter the primordial ${}^4$He and D abundances. This imposes a lower bound on their masses \cite{PP,quien} \begin{eqnarray} m_{\tilde S}, m_{\tilde T_i}>O(10)\ {\rm TeV} \label{cotanuc} \end{eqnarray} This bound is not very comfortable, but may be fulfilled since $m_{\tilde S}, m_{\tilde T_i}=O(m_{3/2})$ includes the possibility of masses one or two orders of magnitude higher than $m_{3/2}$. The previous cosmological problems associated with light, very weakly interacting (but decaying) fermionic fields are obviated if the Universe undergoes a period of inflation that dilutes them, provided that the reheating temperature after inflation satisfies \begin{eqnarray} T_{RH}\stackrel{<}{{}_\sim}10^8 \left(\frac{100~{\rm GeV}}{m_{3/2}}\right)\ {\rm GeV}\label{TRH} \end{eqnarray} in order for them not to be regenerated in large amounts. In this case, low-temperature mechanisms for baryogenesis, which have recently received much attention, become strongly favoured (for a recent review see \cite{CKN}). On the other hand, it is worth mentioning a mechanism \cite{CR} in which baryogenesis is driven by the late decay of a particle such as the gravitino or the axino. It is based on the possible existence of baryon-number-violating terms in the superpotential of the form $u_L^cd_L^cd_L^c$ (for the third generation) and exploits the sources of CP violation appearing in supersymmetric models. This mechanism can also be implemented for the dilatino and modulino fields, since they have axino-like couplings with gravitational strength, and could work under condition (\ref{cotanuc}), but it requires a reheating temperature $O(M_P)$ or no inflation at all (as is the case for the gravitino). Let us now turn to the {\em scalar} (dilaton and moduli) fields. They present much more severe problems than their supersymmetric partners since in general, when they start their relevant cosmological evolution, they sit far from their zero-temperature minimum, and the excess energy associated with their oscillations around the minimum tends to be problematic.
This well-known situation \cite{CFKRR} has been called in the literature the `Polonyi problem' or the `entropy crisis', and has been noticed in the context of particular supergravity or superstring models \cite{ENQ}. However, according to the arguments of the previous section, this is a truly generic problem in string theory. The dilaton and moduli fields are expected to be initially shifted from their zero-temperature minimum, either due to thermal fluctuations, due to quantum fluctuations during inflation \cite{GLV}, or because their coupling to the inflaton will generally modify, during inflation, the value corresponding to the minimum of the potential \cite{DFN}. The shift produced by these effects may even be as large as $O(M_P)$, but its magnitude depends on the particular form of the scalar potential and should be estimated case by case. The evolution of the $\phi$ field (canonically normalized dilaton or moduli) is determined by \begin{eqnarray} \ddot{\phi}+(3H+\Gamma_\phi )\dot {\phi}+{\partial V\over \partial\phi}=0 \label{eqphi} \end{eqnarray} where $\Gamma_\phi$ is the $\phi$ width ($\sim m_\phi^3 M_P^{-2}$) and $H$ is the Hubble constant. Although $V$ is not known, if we consider just oscillations around the minimum, taking $V\simeq {1\over 2}m_\phi^2\phi^2$, one sees that the friction dominates until the time $t_{in}\sim m_\phi^{-1}$ ($T_{in}\sim\sqrt{m_\phi M_P}$ in the case of radiation domination), after which the field oscillates and its density evolves as $T^3$ \cite{PWW}. Hence, taking as $\phi_{in}$ the value of the shift in the fields at the end of inflation (and neglecting its evolution up to $t_{in}$), the initial density $\rho_\phi(T_{in})\simeq m_\phi^2\phi_{in}^2$ increases with respect to the density in radiation as \begin{eqnarray} \frac{\rho_\phi(T)}{\rho_{rad}(T)}= \frac{\rho_\phi(T_{in})}{\rho_{rad}(T_{in})}\frac{T_{in}}{T} \label{roro} \end{eqnarray} If $\phi$ were stable ($\Gamma_\phi\sim 0$), the constraint that $\phi$ does not overclose the Universe today imposes \begin{eqnarray} \phi_{in}\leq 10^{-10}M_P(m_\phi/100 {\rm GeV})^{-1/4}, \label{bound} \end{eqnarray} which is much smaller than the typical shifts in the fields expected from the mechanisms mentioned above. However, $\phi$ will generally decay. For instance, the dilaton $S$ is coupled to all the gauge bosons of the theory through the term ${Re}\ f~ F_{\mu\nu}F^{\mu\nu} + {Im}\ f~ F_{\mu\nu}\tilde F^{\mu\nu}$, where $f=S+ $ threshold corrections. Also, the moduli fields appear in the threshold corrections to $f$ and/or in the Yukawa couplings between the charged fields. Therefore, $\rho_\phi$ will eventually be converted into radiation. As long as $\phi_{in}\geq 10^{-8}M_P\sqrt{m_\phi/{\rm TeV}}$, the field $\phi$ will dominate the energy density at the moment it decays, which corresponds to a temperature $T_D\approx m_\phi^{11/6}\phi_{in}^{-2/3} M_P^{-1/6}$, and its decay will reheat the Universe to $T_{RH}$ given by \begin{eqnarray} T_{RH}\sim m_\phi^{3/2} M_P^{-1/2} \;\;. \label{TR} \end{eqnarray} (Notice that $T_{RH}$ is essentially independent of $\phi_{in}$, provided the Universe has been $\phi$-dominated and the total density is $\sim\rho_c$.) Now, the decay products of $\phi$ will destroy the ${}^4$He and D nuclei, and thus the successful nucleosynthesis predictions, unless $T_{RH}>1$ MeV, in which case nucleosynthesis can take place again after the decay.
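To get a feeling for the numbers involved, the following back-of-the-envelope check (ours, written as a small Python snippet, with all $O(1)$ factors in (\ref{TR}) set to unity and $M_P\simeq 1.2\times 10^{19}$ GeV) shows where the requirement $T_{RH}>1$ MeV puts the mass:
\begin{verbatim}
# Rough numerical check of T_RH ~ m_phi^{3/2} M_P^{-1/2} (all in GeV),
# with every O(1) prefactor set to unity (an assumption of this
# estimate, not a result derived in the text).
M_P = 1.2e19

def T_RH(m_phi):
    return m_phi ** 1.5 / M_P ** 0.5

print(T_RH(1e4))                          # ~ 3e-4 GeV for m_phi = 10 TeV
m_min = (1e-3 * M_P ** 0.5) ** (2. / 3.)  # solve T_RH(m) = 1 MeV
print(m_min)                              # ~ 2e4 GeV, i.e. O(10) TeV
\end{verbatim}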
This implies \begin{eqnarray} m_{\phi}>O(10)\ {\rm TeV} \label{cotanuc2} \end{eqnarray} This bound is, of course, similar to that of eq.(\ref{cotanuc}) for the dilatino and modulinos (and for the gravitino); however, it cannot be escaped with the help of inflation. On the other hand, the entropy increase when $\phi$ decays, $\Delta$, is given by \begin{eqnarray} \Delta=\left(\frac{T_{RH}}{T_{D}}\right)^3\sim {\phi_{in}^2\over m_\phi M_P} \label{Delta} \end{eqnarray} If $\Delta$ is very large, it would erase any pre-existing baryon asymmetry. Denoting by $\Delta_{max}$ the maximum tolerable entropy production (this will depend on the specific baryogenesis mechanism, but $\Delta_{max}\sim 10^5$ might be acceptable), we obtain the constraint \begin{eqnarray} \phi_{in}^2<\Delta_{max}M_P m_\phi \label{Delta2} \end{eqnarray} We want to note here that a possible way out of this dilemma would be to have the baryogenesis generated by the $\phi$ decays, which would take place just before nucleosynthesis. This could be done, for instance, if the $\phi$ decay into gaugino pairs is allowed (which seems reasonable in view of (\ref{cotanuc2})), implementing then a mechanism similar to that of ref.\cite{CR} discussed above. Unlike the baryogenesis scenarios based on decays of fermions (gravitino, dilatino, etc.), those based on the $\phi$ decays would not imply any severe constraint on the reheating after inflation, since it is easier for $\phi$ to dominate the energy density at the moment of decay. This situation can however be ameliorated: the previous discussion assumed that at the moment in which $\phi$ starts to oscillate, the inflaton has already decayed and produced the main reheating of the Universe. However, this is not necessary, especially in view of the many scenarios of low-temperature baryogenesis being considered nowadays. If the inflaton $\varphi$ were instead to decay at $t_{RH}\gg t_{in}$, the situation would change significantly due to the fact that between those times both the inflaton and $\phi$ oscillate, so that their energies both evolve as $a^{-3}$, where $a$ is the scale factor. Hence, using the fact that, at $t_{in}$, $H=m_\phi$ so that $\rho_\varphi=3 M_P^2m_\phi^2/(8\pi)$, we have \begin{eqnarray} {\rho_\phi\over\rho_\varphi}(t_{RH})\simeq {8\pi\phi_{in}^2\over 3M_P^2}\;\;. \end{eqnarray} Only after the inflaton energy is converted into radiation ($\rho_\varphi(t_{RH})\sim T_{RH}^4$) does the relative contribution of $\phi$ to the density start to increase as \begin{eqnarray} {\rho_\phi\over\rho_\varphi}(T)\simeq {8\pi\phi_{in}^2\over 3M_P^2}{T_{RH}\over T}\;\; \end{eqnarray} and, for a stable $\phi$, the condition that at $T=3$ K the Universe is not overclosed becomes \begin{eqnarray} \phi_{in}\leq\sqrt{1 {\rm TeV}\over T_{RH}}10^{-6}M_P \;\;, \label{stable} \end{eqnarray} which is much less severe than the previous bound if the reheating temperature is of the order of the weak scale. In the more plausible situation in which $\phi$ decays, by a similar reasoning we obtain that $\phi$ will dominate the density of the Universe at the decay time only if \begin{eqnarray} \phi_{in}\geq\left({m_\phi\over 10\ {\rm TeV}}\right)^{3/4}\left( {T_{RH}\over 100\ {\rm GeV}}\right)^{-1/2}\ 10^{-3} M_P \;\;, \end{eqnarray} and in that case the Universe has a further reheating to $T_{RH}'\simeq \sqrt{m_\phi^3/M_P}$ with an increase in the entropy by a factor \begin{eqnarray} \Delta\simeq{\phi_{in}^2T_{RH}\over (M_Pm_\phi)^{3/2}} \;\;. \end{eqnarray} The analogue of eq.
(\ref{Delta2}) is $\phi_{in}^2<\Delta_{max} (M_P m_\phi)^{3/2}/T_{RH}$, which relaxes the bounds for small enough values of $T_{RH}$. For the indicative values used before (remember that $m_\phi>10$ TeV eliminates the problems associated with nucleosynthesis and $T_{RH}\sim 100$ GeV could allow for electroweak baryogenesis to occur), $\Delta$ is still not unreasonably large even for $\phi_{in}=O(M_P)$ ($\Delta \leq 10^5$), although clearly for those values of $\phi_{in}$ the approximation of a quadratic potential goes badly wrong and the detailed form of the potential would have to be taken into account. Hence, although the problem associated with the scalar field oscillations is severe, we have seen that there are ways to alleviate it. Furthermore, in the context of supergravity models, there are two interesting proposals to solve this `entropy crisis' \cite{DFN,OS}. The first method \cite{DFN} is based on a scenario in which supersymmetry is broken through an O'Raifeartaigh superpotential. Then, the scalar field has almost vanishing $vev$ and a rather large mass ($\sim (m_{3/2}M_P)^{1/2}$) at zero temperature, so the contributions to the scalar potential during inflation do not appreciably change the position of the minimum. Consequently, the energy stored in the scalar field is very small, avoiding cosmological problems. Besides the fact that this mechanism does not address the problem of quantum fluctuations, it cannot be implemented for the $S$ and $T$ fields since they must have $O(M_P)$ (non-vanishing) zero-temperature $vev$'s and their masses should be $O(m_{3/2})$. A further mechanism that has been proposed \cite{OS} is to couple the problematic fields to heavy ones that decay promptly into radiation. In this way the coupled evolution of the scalar fields may allow for a transfer of the energy stored in the oscillations to the radiation. This method might be accommodated in the context of string theories for the $S$ and $T$ fields since there are heavy fields with mass terms in the perturbative superpotential that, in general, will depend on the moduli fields. Likewise, the factor $e^K$ in front of the scalar potential provides a non-suppressed coupling between the dilaton and the massive fields. This simply reflects the fact that all the physical couplings are proportional to the string coupling constant. There is still an additional source of couplings between the dilaton and massive fields. It is known that if the four-dimensional gauge group of the string has an `anomalous' $U(1)$ factor, a Fayet--Iliopoulos $D$-term is generated in string perturbation theory \cite{DSW}. The corresponding term in the scalar potential has the form \begin{eqnarray} V_{FI}\sim\frac{1}{S_R}\left| \frac{Tr\ Q^{a}}{48\pi^2}\frac{1}{S_R}\ + \ \sum_i Q^a_i |C_i|^2 \right|^2 \label{FI} \end{eqnarray} where $Q^a_i$ are the anomalous charges of the scalar fields $C_i$. Thus, there appear effective mass terms of the form $\sim (S+\bar S)^{-2} |C_i|^2$. Furthermore, in this scenario there appear masses for the dilatino and ${Im} \ S$, through their couplings to the `anomalous' photino and photon respectively, so the cosmological problems associated with these fields seem to be reduced (although there remains a combination of dilaton and matter fields which is massless at this level and to which the previous analysis applies). It is quite suggestive that most of the phenomenologically interesting superstring models have been constructed within this framework \cite{CM}.
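Coming back to the entropy released in the late-reheating scenario, a quick numerical check (again ours, taking the order-of-magnitude formulas above at face value) illustrates the estimate $\Delta\stackrel{<}{{}_\sim}10^5$ quoted earlier:
\begin{verbatim}
# Entropy factor Delta ~ phi_in^2 T_RH / (M_P m_phi)^{3/2} (all in GeV),
# evaluated at phi_in = M_P; order-of-magnitude only.
M_P, T_RH = 1.2e19, 100.0               # T_RH at the electroweak scale

def Delta(m_phi, phi_in=M_P):
    return phi_in ** 2 * T_RH / (M_P * m_phi) ** 1.5

print(Delta(1e4))   # ~ 3e5 for m_phi = 10 TeV
print(Delta(3e4))   # ~ 7e4 for m_phi = 30 TeV: within Delta_max ~ 10^5
\end{verbatim}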
For the remaining fields, the possibility that the transfer of energy mentioned previously is efficient would be worthy of a detailed analysis, although that is beyond the scope of the present paper. Let us also mention that another danger coming from the dilaton and moduli fields sitting initially far from the zero-temperature minimum is that they might never fall into it if there is a barrier in between. (This problem has been recently stressed in ref.\cite{BS}.) This can typically happen for the real part of the dilaton (${Re}\ S$) if it initially sits at ${Re}\ S\gg{Re}\ S_o$, since for ${Re}\ S\rightarrow \infty$ all the gauge and gravitational interactions are switched off, and thus supersymmetry is restored and $V\rightarrow 0$. Therefore it is necessary to assume that the initial value of the generic scalar field, $\phi_{in}$, lies basically on the slope leading to the zero-temperature minimum. A further problem is that, for the typical potentials arising from gaugino condensation, the kinetic energy of the dilaton may be so large as to prevent inflation from taking place \cite{BS}, unless some mechanism exists to push $\phi$ close to the minimum (this problem may be characteristic only of those potentials and not necessarily generic in string theory). Finally, we mention that the dilaton and moduli-sector fields may also provide candidates for dark matter. As is apparent from the previous discussion, if the fermionic fields $\tilde \phi$ were stable they would close the Universe for $m_{\tilde \phi}\sim 1$ keV in the absence of inflation or, if there is an inflationary epoch, for a $T_{RH}$-dependent value of the $\tilde \phi$ mass, in a similar way as for the gravitino \cite{MMY} (for instance $m_{\tilde\phi}\simeq 100$ GeV for $T_{RH}\simeq 10^9$ GeV). If some of the scalar fields $\phi$ were stable, they would close the Universe if the bounds in eqs. (\ref{bound}) or (\ref{stable}) are saturated. In that case, the missing energy of the Universe would be stored in the coherent oscillations of the $\phi$-field, in a similar way to the more standard axionic dark matter. \vspace{0.6cm} We would like to acknowledge useful conversations with C. Burgess, L. Ib\'a\~nez, J. Louis, S. Mollerach and M. Quir\'os. After this work was finished, we received a preprint by T. Banks, D. Kaplan and A. Nelson where the cosmological problems associated with dynamical supersymmetry breaking are addressed. \small
\section{Introduction}\label{sec_intro} \vspace{-0.14em} Because of their excellent performance, graph-based codes are among the most attractive error correction techniques deployed in modern storage devices \cite{jia_sym, maeda_fl}. Non-binary (NB) codes offer superior performance over binary codes, and are thus well suited for modern Flash~memories. The nature of the detrimental objects that dominate the error floor region of non-binary graph-based codes depends on the underlying channel of the device. Unlike in the case of canonical channels, in recent research \cite{ahh_jsac}, it was revealed that general absorbing sets of type two (GASTs) are the objects that dominate the error floor of NB graph-based codes over practical, inherently asymmetric Flash channels \cite{ahh_jsac, mit_nl}. We analyzed GASTs, and proposed a combinatorial framework, called the weight consistency matrix (WCM) framework, that removes GASTs from the Tanner graph of NB codes, and results in at least $1$ order of magnitude performance gain over asymmetric Flash channels \cite{ahh_jsac, ahh_tit}. A particular class of graph-based codes that has received recent attention is the class of spatially-coupled (SC) codes \cite{fels_sc}. SC codes are constructed via partitioning an underlying LDPC code into components, and then coupling them together multiple times. Recent results on SC codes include asymptotic analysis, e.g., \cite{kud_sc}, and finite-length designs, e.g., \cite{pus_sc, mitch_sc, iye_sc}. Non-binary SC (NB-SC) codes designed using cutting vector (CV) partitioning and optimized for 1-D magnetic recording applications were introduced in \cite{homa_sc}. The idea of partitioning the underlying block code by minimizing the overlap of its rows of circulants (so-called minimum overlap (MO)) was recently introduced and applied to AWGN channels in \cite{homa_mo}. In this paper, we present the first study of NB-SC codes designed for practical Flash channels. The underlying block codes we focus on are circulant-based (CB) codes. Our combinatorial approach to designing NB-SC codes comprises three stages. The first two stages aim at optimizing the unlabeled graph of the SC code (the graph of the SC code with all edge weights set to $1$), while the third stage aims at optimizing the edge weights. The three consecutive stages are: \begin{enumerate} \item We operate on the binary protograph of the SC code, and express the number of subgraphs we want to minimize in terms of the \textit{\textbf{overlap parameters}}, which characterize the partitioning of the block code. Then, we solve this discrete optimization problem to determine the optimal overlap parameters. We call this new partitioning technique the \textit{\textbf{optimal overlap (OO) partitioning}}. \item Given the optimal partitioning, we then apply a new heuristic program to optimize the \textit{\textbf{circulant powers}} of the underlying block code to further reduce the number of problematic subgraphs in the unlabeled graph of the SC code. We call this heuristic program the \textit{\textbf{circulant power optimizer (CPO)}}. \item Having optimized the underlying topology using the first two stages (OO-CPO), in the last stage, we focus on the edge weight processing in order to remove as many as possible of the remaining detrimental GASTs in the NB-SC code. To achieve this goal, we use the \textit{\textbf{WCM framework}} \cite{ahh_jsac, ahh_tit}. We also enumerate the minimum-cardinality sets of edge weight changes that are candidates for the GAST removal.
\end{enumerate} All three stages are necessary for the NB-SC code design procedure. We demonstrate the advantages of our code design approach over approaches that use CV partitioning and MO partitioning in the context of column weight $3$ SC codes. The rest of the paper is organized as follows. In Section~\ref{sec_prelim}, we present some preliminaries. In Section \ref{sec_oo}, we detail the theory of the OO partitioning in the context of column weight $3$ SC codes. The CPO is then described in Section \ref{sec_cpo}. Next, in Section \ref{sec_wcm}, we provide a further discussion of the WCM framework. Our NB-SC code design steps and simulation results are presented in Section \ref{sec_sims}. Finally, the paper is concluded in Section \ref{sec_conc}. \vspace{-0.3em} \section{Preliminaries}\label{sec_prelim} \vspace{-0.1em} In this section, we review the construction of NB-SC~codes, as well as the CV and MO partitioning techniques. Furthermore, we recall the definition of GASTs and the key idea of the WCM framework. Throughout this paper, each column (resp., row) in a parity-check matrix corresponds to a variable node (VN) (resp., check node (CN)) in the equivalent graph of the matrix. Moreover, each non-zero entry in a parity-check matrix corresponds to an edge in the equivalent graph of the matrix. Let $\bold{H}$ be the parity-check matrix of the underlying regular non-binary CB code that has column weight (VN degree) $\gamma$ and row weight (CN degree) $\kappa$. The binary image of $\bold{H}$, which is $\bold{H}^b$, consists of $\gamma \kappa$ circulants. Each circulant is of the form $\sigma^{f_{i, j}}$, where $i$, $0 \leq i \leq \gamma-1$, is the row group index, $j$, $0 \leq j \leq \kappa-1$, is the column group index, and $\sigma$ is the $p \times p$ identity matrix cyclically shifted one unit to the left (a circulant permutation matrix). The circulant powers are $f_{i, j}$, $\forall i$ and $\forall j$. Array-based (AB) codes are CB codes with $f_{i, j} = ij$, $\kappa = p$, and $p$ prime. In this paper, the underlying block codes we use to design SC codes are CB codes with no zero circulants. The NB-SC code is constructed as follows. First, $\bold{H}^b$ is partitioned into $m+1$ disjoint components (of the same size as $\bold{H}^b$): $\bold{H}^b_0, \bold{H}^b_1, \dots, \bold{H}^b_m$, where $m$ is defined as the memory of the SC code. Each component $\bold{H}^b_y$, $0 \leq y \leq m$, contains some of the $\gamma \kappa$ circulants of $\bold{H}^b$ and zero circulants elsewhere, such that $\bold{H}^b = \sum_{y=0}^{m} \bold{H}^b_y$. In this work, we focus on $m=1$, i.e., $\bold{H}^b = \bold{H}^b_0 + \bold{H}^b_1$. Second, $\bold{H}^b_0$ and $\bold{H}^b_1$ are coupled together $L$ times (see \cite{mitch_sc} and \cite{homa_sc}) to construct the binary image of the parity-check matrix of the NB-SC code, $\bold{H}^b_{SC}$, which is of size $(L+1)\gamma p \times L\kappa p$. A \textit{\textbf{replica}} is any $(L+1)\gamma p \times \kappa p$ submatrix of $\bold{H}^b_{SC}$ that contains $\left [\bold{H}^{bT}_0 \text{ } \bold{H}^{bT}_1 \right ]^T$ and zero circulants elsewhere (see \cite{homa_mo}). Replicas are denoted by $\bold{R}_r$, $1 \leq r \leq L$. Overlap parameters for partitioning as well as circulant powers can be selected to enhance the properties of $\bold{H}^b_{SC}$. Third, the matrix $\bold{H}$ is generated by replacing each $1$ in $\bold{H}^b$ with a value $\in$ GF($q$)$\backslash \{0\}$ (we focus on $q=2^\lambda \geq 4$).
Fourth, the parity-check matrix of the NB-SC code, $\bold{H}_{SC}$, is constructed by applying the partitioning and coupling scheme described above to $\bold{H}$. The \textit{\textbf{binary protograph matrix (BPM)}} of a general binary CB matrix is the matrix resulting from replacing each $p \times p$ non-zero circulant with $1$, and each $p \times p$ zero circulant with $0$. The BPMs of $\bold{H}^b$, $\bold{H}^b_0$, and $\bold{H}^b_1$ are $\bold{H}^{bp}$, $\bold{H}^{bp}_0$, and $\bold{H}^{bp}_1$, respectively, and they are all of size $\gamma \times \kappa$. The BPM of $\bold{H}^b_{SC}$ is $\bold{H}^{bp}_{SC}$, and it is of size $(L+1)\gamma \times L\kappa$. This $\bold{H}^{bp}_{SC}$ also has $L$ replicas, $\bold{R}_r$, $1 \leq r \leq L$, but with $1 \times 1$ circulants. A technique for partitioning $\bold{H}^b$ to construct $\bold{H}^b_{SC}$ is the CV partitioning \cite{mitch_sc, homa_sc}. In this technique, a vector of ascending non-negative integers, $\boldsymbol{\zeta} = [\zeta_0 \text{ } \zeta_1 \text{ } \dots \text{ } \zeta_{\gamma-1}]$, is used to partition $\bold{H}^b$ into $\bold{H}^b_0$ and $\bold{H}^b_1$. The matrix $\bold{H}^b_0$ has all the circulants in $\bold{H}^b$ with the indices $\{(i, j): j<\zeta_i\}$, and zero circulants elsewhere, and the matrix $\bold{H}^b_1$ is $\bold{H}^b-\bold{H}^b_0$. Another recently introduced partitioning technique is the MO partitioning \cite{homa_mo}, in which $\bold{H}^b$ is partitioned into $\bold{H}^b_0$ and $\bold{H}^b_1$ such that the overlap of each pair of rows of circulants in both $\bold{H}^b_0$ and $\bold{H}^b_1$ is minimized. Moreover, the MO partitioning assumes balanced partitioning between $\bold{H}^b_0$ and $\bold{H}^b_1$, and also a balanced distribution of circulants among the rows in each of them. The MO partitioning significantly outperforms the CV partitioning \cite{homa_mo}. In this paper, we demonstrate that the new OO-CPO technique outperforms the MO technique. GASTs are the objects that dominate the error floor of NB codes on asymmetric channels, e.g., practical Flash channels. We recall the definitions of GASTs and unlabeled GASTs. \begin{definition}\label{def_gast} (cf. \cite{ahh_jsac}) Consider a subgraph induced by a subset $\mathcal{V}$ of VNs in the Tanner graph of an NB code. Set all the VNs in $\mathcal{V}$ to values $\in$ GF($q$)$\backslash \{0\}$ and set all other VNs to $0$. The set $\mathcal{V}$ is said to be an $(a, b, d_1, d_2, d_3)$ \textbf{general absorbing set of type two (GAST)} over GF($q$) if the size of $\mathcal{V}$ is $a$, the number of unsatisfied CNs connected to $\mathcal{V}$ is $b$, the number of degree-$1$ (resp., $2$ and $> 2$) CNs connected to $\mathcal{V}$ is $d_1$ (resp., $d_2$ and $d_3$), $d_2 > d_3$, all the unsatisfied CNs connected to $\mathcal{V}$ (if any) have either degree $1$ or degree $2$, and each VN in $\mathcal{V}$ is connected to strictly more satisfied than unsatisfied neighboring CNs (for some set of given VN values). \end{definition} \begin{definition}\label{def_ugas} (cf. \cite{ahh_jsac}) Let $\mathcal{V}$ be a subset of VNs in the unlabeled Tanner graph of an NB code. Let $\mathcal{O}$ (resp., $\mathcal{T}$ and $\mathcal{H}$) be the set of degree-$1$ (resp., $2$ and $> 2$) CNs connected to $\mathcal{V}$.
This graphical configuration is an $(a, d_1, d_2, d_3)$ \textbf{unlabeled GAST (UGAST)} if it satisfies the following two conditions: \vspace{-0.2em} \begin{enumerate} \item $|\mathcal{V}| = a$, $\vert{\mathcal{O}}\vert=d_1$, $\vert{\mathcal{T}}\vert=d_2$, $\vert{\mathcal{H}}\vert=d_3$, and $d_2 > d_3$. \item Each VN in $\mathcal{V}$ is connected to strictly more neighbors in $\{\mathcal{T} \cup \mathcal{H}\}$ than in $\mathcal{O}$. \end{enumerate} \end{definition} Examples of GASTs and UGASTs are shown in Fig. \ref{Fig_gasts}. The WCM framework \cite{ahh_jsac, ahh_tit} removes a GAST by careful processing of its edge weights. The key idea of this framework is to represent the GAST in terms of a set of submatrices of the GAST adjacency matrix. These submatrices are the WCMs, and they have the property that once the edge weights of the GAST are processed to force the null spaces of the WCMs to have a particular property, the GAST is completely removed from the Tanner graph of the NB code (see \cite{ahh_jsac} and \cite{ahh_tit}). \vspace{-0.1em} \section{OO Partitioning: Theoretical Analysis}\label{sec_oo} \vspace{-0.1em} In order to reduce the number of multiple UGASTs simultaneously, we determine a common substructure in them, and then minimize the number of instances of this substructure in the unlabeled Tanner graph of the SC code (the graph of $\bold{H}^b_{SC}$) \cite{homa_sc}. We propose our new partitioning scheme in the context of SC codes with $\gamma = 3$ (the scheme can be extended to higher column weights). For the overwhelming majority of dominant GASTs we have encountered in NB codes with $\gamma = 3$ simulated over Flash channels, the $(3, 3, 3, 0)$ UGAST occurs as a common substructure most frequently \cite{ahh_jsac, ahh_tit} (see Fig.~\ref{Fig_gasts}). Thus, we focus on the removal of $(3, 3, 3, 0)$ UGASTs. \begin{figure}[H] \vspace{-1.5em} \center \includegraphics[width=3.3in]{The_configs.pdf}\vspace{-1.6em} \vspace{-0.5em} \text{\hspace{4em}\footnotesize{(a) \hspace{14em} (b)}} \caption{(a) Two dominant GASTs for NB codes with $\gamma = 3$ over Flash; a $(4, 2, 2, 5, 0)$ GAST and a $(6, 0, 0, 9, 0)$ GAST. Appropriate edge weights ($w$'s) are assumed. (b) A $(3, 3, 3, 0)$ UGAST ($\gamma=3$).} \label{Fig_gasts} \vspace{-0.5em} \end{figure} A cycle of length $2z$ in the graph of $\bold{H}^{bp}_{SC}$ (the binary protograph of the SC code), which is defined by the non-zero entries $\{(h_1, \ell_1), (h_2, \ell_2), \dots, (h_{2z}, \ell_{2z})\}$ in $\bold{H}^{bp}_{SC}$, results in $p$ cycles of length $2z$ in the graph of $\bold{H}^b_{SC}$ if and only if \cite{baz_qc, fos_qc}: \vspace{-0.1em} \begin{align}\label{eq_cycle} \sum_{e=1}^{z} f_{h_{2e-1}, \ell_{2e-1}} \equiv \sum_{e=1}^{z} f_{h_{2e}, \ell_{2e}} \text{ } (\text{mod } p), \end{align} where $f_{h, \ell}$ is the power of the circulant indexed by $(h, \ell)$ in $\bold{H}^b_{SC}$. Otherwise, this cycle results in $p/\beta$ cycle(s) of length $2z\beta$ in the graph of $\bold{H}^b_{SC}$, where $\beta$ is an integer $\geq 2$ that divides $p$ \cite{baz_qc}. It is clear from Fig. \ref{Fig_gasts}(b) that the $(3, 3, 3, 0)$ UGAST is a cycle of length $6$. Thus, and motivated by the above fact, our OO partitioning aims at deriving the overlap parameters of $\bold{H}^{bp}$ that result in the minimum number of cycles of length $6$ in the graph of $\bold{H}^{bp}_{SC}$, which is the binary protograph of the SC code.
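To make the condition in (\ref{eq_cycle}) concrete, the following small sketch (ours, in Python; the function and variable names are ours and merely illustrative) checks whether a given protograph cycle survives the lifting:
\begin{verbatim}
# A sketch of the lifting condition above. "cycle" lists the 2z
# circulant indices (h, l) traversed in order; f[(h, l)] are the
# circulant powers; p is the circulant size.
def lifts_to_p_cycles(cycle, f, p):
    alt_sum = sum(((-1) ** e) * f[hl] for e, hl in enumerate(cycle))
    return alt_sum % p == 0   # True: p cycles of length 2z survive

# Illustration with AB-code powers f_{i,j} = i*j and gamma = 3, p = 7:
f = {(i, j): i * j for i in range(3) for j in range(7)}
cycle6 = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (0, 2)]
print(lifts_to_p_cycles(cycle6, f, p=7))  # False: this cycle is broken
\end{verbatim}
Here a return value of \texttt{False} means the protograph cycle is broken into longer cycles in the lifted graph.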
Then, we run the CPO to further reduce the number of $(3, 3, 3, 0)$ UGASTs in the graph of $\bold{H}^{b}_{SC}$ (which is the unlabeled graph of the SC code) by breaking the condition in (\ref{eq_cycle}) (with $z=3$ for cycles of length $6$) for as many cycles in the optimized graph of $\bold{H}^{bp}_{SC}$ as possible. The goal here is to minimize the number of cycles of length $6$ in the binary protograph of the SC code via the OO partitioning of $\bold{H}^{bp}$, which is also the OO partitioning of $\bold{H}^b$. To achieve this goal, we establish a discrete optimization problem by expressing the number of cycles of length $6$ in the graph of $\bold{H}^{bp}_{SC}$ as a function of the overlap parameters and standard code parameters, then solve for the optimal overlap parameters. We start off with the following lemma. \vspace{-0.2em} \begin{lemma}\label{lem_total} In the Tanner graph of an SC code with parameters $\gamma = 3$, $\kappa$, $p=1$, $m=1$, and $L$ (which is the binary protograph), the number of cycles of length $6$ is given by: \begin{equation}\label{eq_Ftot} F = LF_s + (L-1)F_d,\vspace{-0.1em} \end{equation} where $F_s$ is the number of cycles of length $6$ that have their VNs spanning only one particular replica (say $\bold{R}_1$), and $F_d$ is the number of cycles of length $6$ that have their VNs spanning two particular consecutive replicas (say $\bold{R}_1$ and $\bold{R}_2$). \end{lemma} \begin{proof} From \cite[Lemma 1]{homa_mo}, the maximum number of consecutive replicas spanned by the $3$ VNs of a cycle of length $6$ in an SC code with $m = 1$ is $2$. Thus, the VNs of any cycle of length $6$ span either one replica or two consecutive replicas. Since there exist $L$ replicas and $L-1$ distinct pairs of consecutive replicas, and because of the repetitive nature of the SC code, (\ref{eq_Ftot}) follows. \end{proof} \vspace{-0.4em} Let the \textit{\textbf{overlapping set}} of $x$ rows of a binary matrix be the set of positions in which all the $x$ rows have $1$'s simultaneously (overlap). Now, define the overlap parameters as follows: \begin{itemize} \item $t_i$ (resp., $t_{i+3}$), $0 \leq i \leq 2$, is the number of $1$'s in row $i$ of $\bold{H}^{bp}_0$ (resp., $\bold{H}^{bp}_1$). From the definitions of $\bold{H}^{bp}_0$ and $\bold{H}^{bp}_1$, $t_{i+3}=\kappa-t_i$. \item $t_{i_1,i_2}$, $0 \leq i_1 \leq 2$, $0 \leq i_2 \leq 2$, and $i_2 > i_1$, is the size of the overlapping set of rows $i_1$ and $i_2$ of $\bold{H}^{bp}_0$. \item $t_{i_3,i_4}$, $i_3=i_1+3$, $i_4=i_2+3$, and $i_4 > i_3$, is the size of the overlapping set of rows $i_1$ and $i_2$ of $\bold{H}^{bp}_1$. From the definitions, $t_{i_3,i_4}=\kappa-t_{i_1}-t_{i_2}+t_{i_1,i_2}$. \item $t_{0,1,2}$ (resp., $t_{3,4,5}$) is the size of the overlapping set of rows $0$, $1$, and $2$ of $\bold{H}^{bp}_0$ (resp., $\bold{H}^{bp}_1$). Moreover, $t_{3,4,5} = \kappa - (t_0+t_1+t_2) + (t_{0,1}+t_{0,2}+t_{1,2})-t_{0,1,2}$. \end{itemize} Let $[x]^+=\max(x, 0)$. 
We define the following functions to be used in Theorem \ref{th_fsfd}: \vspace{-0.1em} \begin{align} &\mathcal{A}(t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}) = \left [ t_{0,1,2}(t_{0,1,2}-1)(t_{1,2}-2) \right ]^+ \nonumber \\ &+ \left [ t_{0,1,2}(t_{0,2}-t_{0,1,2})(t_{1,2}-1) \right ]^+ \nonumber \\ &+ \left [ (t_{0,1}-t_{0,1,2})t_{0,1,2}(t_{1,2}-1) \right ] ^+ \nonumber \\ &+ \left [ (t_{0,1}-t_{0,1,2})(t_{0,2}-t_{0,1,2})t_{1,2} \right ]^+, \end{align} \begin{align} &\mathcal{B}(t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}) \nonumber \\ &= \left [ t_{0,1,2}(t_{0,1}-t_{0,1,2})(t_1-t_{1,2}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,1,2}(t_0-t_{0,1}-t_{0,2}+t_{0,1,2})(t_1-t_{1,2}) \right ]^+ \nonumber \\ &+ \left [ (t_{0,1}-t_{0,1,2})(t_{0,1}-t_{0,1,2}-1)(t_1-t_{1,2}-2) \right ]^+ \nonumber \\ &+ \left [ (t_{0,1}-t_{0,1,2})(t_0-t_{0,1}-t_{0,2}+t_{0,1,2})(t_1-t_{1,2}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,1,2}(t_{0,2}-t_{0,1,2})(t_0-t_{0,1}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,1,2}(t_2-t_{0,2}-t_{1,2}+t_{0,1,2})(t_0-t_{0,1}) \right ]^+ \nonumber \\ &+ \left [ (t_{0,2}-t_{0,1,2})(t_{0,2}-t_{0,1,2}-1)(t_0-t_{0,1}-2) \right ]^+ \nonumber \\ &+ \left [ (t_{0,2}-t_{0,1,2})(t_2-t_{0,2}-t_{1,2}+t_{0,1,2})(t_0-t_{0,1}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,1,2}(t_{1,2}-t_{0,1,2})(t_2-t_{0,2}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,1,2}(t_1-t_{0,1}-t_{1,2}+t_{0,1,2})(t_2-t_{0,2}) \right ]^+ \nonumber \\ &+ \left [ (t_{1,2}-t_{0,1,2})(t_{1,2}-t_{0,1,2}-1)(t_2-t_{0,2}-2) \right ]^+ \nonumber \\ &+ \left [ (t_{1,2}-t_{0,1,2})(t_1-t_{0,1}-t_{1,2}+t_{0,1,2})(t_2-t_{0,2}-1) \right ]^+, \\ &\mathcal{C}(\kappa, t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}) \nonumber \\ &= \left [ t_{3,4}t_{0,1,2}(t_{1,2}-1) \right ]^+ + \left [ t_{3,4}(t_{0,2}-t_{0,1,2})t_{1,2} \right ]^+ \nonumber \\ &+ \left [ t_{3,5}t_{0,1,2}(t_{0,1}-1) \right ]^+ + \left [ t_{3,5}(t_{1,2}-t_{0,1,2})t_{0,1} \right ]^+ \nonumber \\ &+ \left [ t_{4,5}t_{0,1,2}(t_{0,2}-1) \right ]^+ + \left [ t_{4,5}(t_{0,1}-t_{0,1,2})t_{0,2} \right ]^+, \textit{ and} \\ &\mathcal{D}(t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}) \nonumber \\ &= \left [ t_{0,1}(t_2-t_{0,2}-t_{1,2}+t_{0,1,2})(t_2-t_{1,2}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,1}(t_{1,2}-t_{0,1,2})(t_2-t_{1,2}) \right ]^+ \nonumber \\ & + \left [ t_{0,2}(t_1-t_{0,1}-t_{1,2}+t_{0,1,2})(t_1-t_{0,1}-1) \right ]^+ \nonumber \\ &+ \left [ t_{0,2}(t_{0,1}-t_{0,1,2})(t_1-t_{0,1}) \right ]^+ \nonumber \\ &+ \left [ t_{1,2}(t_0-t_{0,1}-t_{0,2}+t_{0,1,2})(t_0-t_{0,2}-1) \right ]^+ \nonumber \\ &+ \left [ t_{1,2}(t_{0,2}-t_{0,1,2})(t_0-t_{0,2}) \right ]^+. \end{align} \vspace{-1.3em} Theorem \ref{th_fsfd} uses combinatorics to give the exact expressions for $F_s$ and $F_d$ in terms of the above overlap parameters. 
\vspace{-0.1em} \begin{thm}\label{th_fsfd} In the Tanner graph of an SC code with parameters $\gamma = 3$, $\kappa$, $p=1$, $m=1$, and $L$ (which is the binary protograph), $F_s$ and $F_d$ are computed as follows: \vspace{-0.2em}\begin{align} F_s &= F_{s,0} + F_{s,1} + F_{s,2} + F_{s,3}, \textit{ and} \\ \vspace{-0.2em} F_d &= F_{d,0} + F_{d,1} + F_{d,2} + F_{d,3},\vspace{-0.6em} \end{align} where $F_{s,0}$, $F_{s,1}$, $F_{s,2}$, $F_{s,3}$, $F_{d,0}$, $F_{d,1}$, $F_{d,2}$, and $F_{d,3}$ are: \vspace{-0.3em}\begin{align} F_{s,0} &= \mathcal{A}(t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}), \nonumber \\ F_{s,1} &= \mathcal{A}(t_{3,4}, t_{3,5}, t_{4,5}, t_{3,4,5}), \nonumber \\ F_{s,2} &= \mathcal{B}(t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}), \nonumber \\ F_{s,3} &= \mathcal{B}(t_3, t_4, t_5, t_{3,4}, t_{3,5}, t_{4,5}, t_{3,4,5}), \\ F_{d,0} &= \mathcal{C}(\kappa, t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}), \nonumber \\ F_{d,1} &= \mathcal{C}(\kappa, t_3, t_4, t_5, t_{3,4}, t_{3,5}, t_{4,5}, t_{3,4,5}), \nonumber \\ F_{d,2} &= \mathcal{D}(t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}), \textit{ and} \nonumber \\ F_{d,3} &= \mathcal{D}(t_3, t_4, t_5, t_{3,4}, t_{3,5}, t_{4,5}, t_{3,4,5}). \end{align} \end{thm} \begin{proof} The term $F_s$ represents the number of cycles of length $6$ that have their VNs spanning only one replica. The non-zero submatrix of a replica is $\left [\bold{H}_0^{bpT} \text{ } \bold{H}_1^{bpT} \right ]^T$. There are four possible cases of arrangement for the CNs of a cycle of length $6$ that has its VNs spanning only one replica. These cases are listed below: \begin{enumerate} \item All the three CNs are within $\bold{H}_0^{bp}$. The number of cycles of length $6$ that have all their CNs inside $\bold{H}_0^{bp}$ is denoted by $F_{s,0}$. \item All the three CNs are within $\bold{H}_1^{bp}$. The number of cycles of length $6$ that have all their CNs inside $\bold{H}_1^{bp}$ is denoted by $F_{s,1}$. \item Two CNs are within $\bold{H}_0^{bp}$, and one CN is within $\bold{H}_1^{bp}$. The number of cycles of length $6$ in this case is denoted by $F_{s,2}$. \item Two CNs are within $\bold{H}_1^{bp}$, and one CN is within $\bold{H}_0^{bp}$. The number of cycles of length $6$ in this case is denoted by $F_{s,3}$. \end{enumerate} These four different cases of arrangement are illustrated in the upper panel of Fig. \ref{Fig_oo}. Next, we find the number of cycles of length $6$ in each of the four cases in terms of the overlap parameters and standard code parameters, particularly, $\{\kappa,t_0,t_1,t_2,t_{0,1},t_{0,2},t_{1,2},t_{0,1,2}\}$. In case 1, a cycle of length $6$ is comprised of an overlap between rows $0$ and $1$, an overlap between rows $0$ and $2$, and an overlap between rows $1$ and $2$ of $\bold{H}_0^{bp}$. Note that each overlap must have a distinct associated column index (position) to result in a valid cycle of length $6$. The overlap between rows $0$ and $1$ can be selected among $t_{0,1}$ possible choices. Among these $t_{0,1}$ overlaps, there exist $t_{0,1,2}$ overlaps that have the same associated column indices as some overlaps between other pairs of rows. Thus, these $t_{0,1,2}$ overlaps need to be considered separately to avoid incorrect counting. The same argument applies when we choose the overlap between the other two pairs of rows. As a result, the number of different ways to choose these overlaps and form a cycle of length $6$ is $F_{s,0}=\mathcal{A}(t_{0,1},t_{0,2},t_{1,2},t_{0,1,2})$, and $\mathcal{A}$ is defined in (3). 
In case 2, the number of cycles of length $6$, $F_{s,1}$, is computed exactly as in case 1, but using the overlap parameters of the matrix $\bold{H}_1^{bp}$. Thus, $F_{s,1}=\mathcal{A}(t_{3,4},t_{3,5},t_{4,5},t_{3,4,5})$. In case 3, one overlap solely belongs to $\bold{H}_0^{bp}$, and the two other overlaps cross $\bold{H}_0^{bp}$ to $\bold{H}_1^{bp}$ (see Fig. \ref{Fig_oo}). For the overlap in $\bold{H}_0^{bp}$, we have three options to choose two rows out of three. For example, suppose that the overlap is chosen between rows $0$ and $1$ of $\bold{H}_0^{bp}$. Then, the cross overlaps will be between row $0$ of $\bold{H}_0^{bp}$ and row $2$ of $\bold{H}_1^{bp}$, and also between row $1$ of $\bold{H}_0^{bp}$ and row $2$ of $\bold{H}_1^{bp}$. Note that since $\bold{H}_0^{bp}$ and $\bold{H}_1^{bp}$ are the result of partitioning $\bold{H}^{bp}$, there are no overlaps between row $i$ of $\bold{H}_0^{bp}$ and row $i$ of $\bold{H}_1^{bp}$, $0 \leq i \leq 2$. Based on which option of the three is chosen, the number of cycles of length $6$ is computed using the overlap parameters of $\bold{H}_0^{bp}$ and $\bold{H}_1^{bp}$. The total number of cycles of length $6$ in this case is $F_{s,2}=\mathcal{B}(t_0,t_1,t_2,t_{0,1},t_{0,2},t_{1,2},t_{0,1,2})$, and $\mathcal{B}$ is defined in (4). In case 4, the number of cycles of length $6$, $F_{s,3}$, is computed as in case 3. The only difference is that in case 4, one overlap solely belongs to $\bold{H}_1^{bp}$, and the two other overlaps cross $\bold{H}_0^{bp}$ to $\bold{H}_1^{bp}$ (see Fig. \ref{Fig_oo}). Consequently, $F_{s,3}=\mathcal{B}(t_3,t_4,t_5,t_{3,4},t_{3,5},t_{4,5},t_{3,4,5})$. On the other hand, the term $F_d$ represents the number of cycles of length $6$ that have their VNs spanning two consecutive replicas. The non-zero submatrix of two consecutive replicas is: \begin{equation*} \left[\begin{array}{cc} \bold{H}_0^{bp}&\bold{0}\\ \bold{H}_1^{bp}&\bold{H}_0^{bp}\\ \bold{0}&\bold{H}_1^{bp} \end{array}\right]. \end{equation*} There are four possible cases of arrangement for the CNs and VNs of a cycle of length $6$ that has its VNs spanning two consecutive replicas. These cases are listed below: \begin{enumerate} \item All the three CNs are within $[\bold{H}_1^{bp} \text{ } \bold{H}_0^{bp}]$, two VNs belong to the first replica, and one VN belongs to the second replica. The number of cycles of length $6$ in this case is denoted by $F_{d,0}$. \item All the three CNs are within $[\bold{H}_1^{bp} \text{ } \bold{H}_0^{bp}]$, one VN belongs to the first replica, and two VNs belong to the second replica. The number of cycles of length $6$ in this case is denoted by $F_{d,1}$. \item One CN is within $[\bold{H}_0^{bp}\text{ }\bold{0}]$, and two CNs are within $[\bold{H}_1^{bp} \text{ } \bold{H}_0^{bp}]$. Besides, two VNs belong to the first replica, and one VN belongs to the second replica. The number of cycles of length $6$ in this case is denoted by $F_{d,2}$. \item Two CNs are within $[\bold{H}_1^{bp} \text{ } \bold{H}_0^{bp}]$, and one CN is within $[\bold{0}\text{ }\bold{H}_1^{bp}]$. Besides, one VN belongs to the first replica, and two VNs belong to the second replica. The number of cycles of length $6$ in this case is denoted by $F_{d,3}$. \end{enumerate} These four different cases of arrangement are illustrated in the lower panel of Fig. \ref{Fig_oo}. 
Next, we find the number of cycles of length $6$ in each of the four cases in terms of the overlap parameters and standard code parameters, particularly, $\{\kappa,t_0,t_1,t_2,t_{0,1},t_{0,2},t_{1,2},t_{0,1,2}\}$. In case 1, two overlaps belong to $\bold{H}_1^{bp}$ in the first replica, and one overlap belongs to $\bold{H}_0^{bp}$ in the second replica (see Fig.~\ref{Fig_oo}). For the overlap in $\bold{H}_0^{bp}$, we have three options to choose two rows out of three. For each option, the two overlaps inside $\bold{H}_1^{bp}$ must have distinct associated column indices (positions) to result in a valid cycle of length $6$ (the overlap inside $\bold{H}_0^{bp}$ cannot have the same column index as any of the other two overlaps). Thus, the number of different ways to choose these overlaps and form a cycle of length $6$ is given by $F_{d,0}=\mathcal{C}(\kappa,t_0,t_1,t_2,t_{0,1},t_{0,2},t_{1,2},t_{0,1,2})$, and $\mathcal{C}$ is defined in (5). In case 2, the number of cycles of length $6$, $F_{d,1}$, is computed as in case 1. The only difference is that in case~2, one overlap belongs to $\bold{H}_1^{bp}$ in the first replica, and two overlaps belong to $\bold{H}_0^{bp}$ in the second replica (see Fig. \ref{Fig_oo}). Thus, $F_{d,1}=\mathcal{C}(\kappa,t_3,t_4,t_5,t_{3,4},t_{3,5},t_{4,5},t_{3,4,5})$. In case 3, one overlap solely belongs to $\bold{H}_0^{bp}$ in the second replica, and the two other overlaps cross $\bold{H}_0^{bp}$ to $\bold{H}_1^{bp}$ in the first replica (see Fig. \ref{Fig_oo}). For the overlap in $\bold{H}_0^{bp}$ of the second replica, we have three options to choose two rows out of three. The two overlaps that belong to the first replica must have distinct corresponding column indices (positions). Consequently, the total number of cycles of length $6$ in this case is given by $F_{d,2}=\mathcal{D}(t_0,t_1,t_2,t_{0,1},t_{0,2},t_{1,2},t_{0,1,2})$, and $\mathcal{D}$ is defined in (6). In case 4, the number of cycles of length $6$, $F_{d,3}$, is computed as in case 3. The only difference is that in case~4, one overlap solely belongs to $\bold{H}_1^{bp}$ in the first replica, and the two other overlaps cross $\bold{H}_0^{bp}$ to $\bold{H}_1^{bp}$ in the second replica (see Fig. \ref{Fig_oo}). Thus, $F_{d,3}=\mathcal{D}(t_3,t_4,t_5,t_{3,4},t_{3,5},t_{4,5},t_{3,4,5})$. Note that the operator $[\cdot]^+$ is used to avoid counting options that are not valid. \end{proof} \vspace{-0.2em} \begin{figure}[H] \vspace{-1.2em} \center \includegraphics[trim={1.3in 7.6in 1.2in 0.9in},clip,width=3.5in]{OO_cases.pdf} \vspace{-1.5em} \caption{Different cases for the cycle of length $6$ (in red) in a $\gamma=3$ SC binary protograph. The upper panel (resp., lower panel) is for the case of the VNs spanning $\bold{R}_1$ (resp., $\bold{R}_1$ and $\bold{R}_2$).} \label{Fig_oo} \vspace{-0.3em} \end{figure} The main idea of Theorem \ref{th_fsfd} is that both $F_s$ and $F_d$ can be computed by decomposing each of them into four more tractable terms. Each term represents a distinct case for the existence of a cycle of length $6$ in the SC binary protograph, and the union of these cases covers all the existence possibilities. Each case is characterized by the locations of the CNs and VNs comprising the cycle with respect to $\bold{H}^{bp}_0$ and $\bold{H}^{bp}_1$ of the replica $\bold{R}_1$ (for $F_s$) or the replicas $\bold{R}_1$ and $\bold{R}_2$ (for $F_d$). Fig. \ref{Fig_oo} illustrates these eight cases, along with the terms in $F_s$ and $F_d$ that correspond to each case.
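As an independent sanity check on Theorem \ref{th_fsfd}, the count $F$ can also be obtained by brute force directly from the protograph. The following sketch (ours; the names and the partition are hypothetical, chosen only for illustration, and this is not the code used to produce our results) assembles $\bold{H}^{bp}_{SC}$ from a candidate partition and counts its cycles of length $6$, which can be compared against $LF_s+(L-1)F_d$ for small $\kappa$ and $L$:
\begin{verbatim}
from itertools import combinations

def sc_protograph(H0, H1, L):
    # Assemble the (L+1)*gamma x L*kappa SC binary protograph from
    # the partition H^bp = H0 + H1 (memory m = 1).
    g, k = len(H0), len(H0[0])
    H = [[0] * (L * k) for _ in range((L + 1) * g)]
    for r in range(L):                     # replica R_{r+1}
        for i in range(g):
            for j in range(k):
                H[r * g + i][r * k + j] = H0[i][j]
                H[(r + 1) * g + i][r * k + j] = H1[i][j]
    return H

def count_length_6_cycles(H):
    # A cycle of length 6 uses 3 distinct rows plus one distinct
    # column per pair of rows, taken from that pair's overlapping set.
    n = len(H[0])
    def ov(a, b):
        return [c for c in range(n) if H[a][c] and H[b][c]]
    total = 0
    for r1, r2, r3 in combinations(range(len(H)), 3):
        total += sum(1 for x in ov(r1, r2) for y in ov(r2, r3)
                     for z in ov(r1, r3) if x != y != z != x)
    return total

# Hypothetical partition with kappa = 4 and L = 3:
H0 = [[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]
H1 = [[0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
print(count_length_6_cycles(sc_protograph(H0, H1, 3)))
\end{verbatim}
Minimizing this count over all valid partitions is exactly the discrete optimization problem formulated next.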
\vspace{-0.1em} \begin{remark} Consider the special situation of $t_{0,1,2}=0$ (rows $0$, $1$, and $2$ in $\bold{H}^{bp}_0$ do not have a $3$-way overlap). Here, $F_{s,0}$ reduces to $t_{0,1}t_{0,2}t_{1,2}$, which is simply the number of ways to select one position from the overlapping set of each pair. \end{remark} Now, define $F^*$ to be the minimum number of cycles of length $6$ in the graph of $\bold{H}^{bp}_{SC}$ (the binary protograph). Thus, our \textit{\textbf{discrete optimization problem}} is formulated as follows: \begin{equation}\label{eq_optp} F^* = \min_{t_0, t_1, t_2, t_{0,1}, t_{0,2}, t_{1,2}, t_{0,1,2}} F. \end{equation} The constraints of our optimization problem are the conditions under which the overlap parameters are valid. Thus, these constraints on the seven parameters in (\ref{eq_optp}) are: \begin{align}\label{eq_const} &0 \leq t_0 \leq \kappa, \text{ } 0 \leq t_{0,1} \leq t_0, \text{ } t_{0,1} \leq t_1 \leq \kappa-t_0+t_{0,1}, \nonumber \\ &0 \leq t_{0,1,2} \leq t_{0,1}, \text{ } t_{0,1,2} \leq t_{0,2} \leq t_0-t_{0,1}+t_{0,1,2}, \nonumber \\ &t_{0,1,2} \leq t_{1,2} \leq t_1-t_{0,1}+t_{0,1,2}, \nonumber \\ &t_{0,2}+t_{1,2}-t_{0,1,2} \leq t_2 \leq \kappa-t_0-t_1+t_{0,1}+t_{0,2}+t_{1,2} \nonumber \\ &-t_{0,1,2}, \textit{ and }\left \lfloor {3\kappa}/{2} \right \rfloor \leq t_0+t_1+t_2 \leq \left \lceil {3\kappa}/{2} \right \rceil. \end{align} The last constraint in (\ref{eq_const}) guarantees balanced partitioning between $\bold{H}^{bp}_{0}$ and $\bold{H}^{bp}_{1}$, and it is needed to prevent the case that a group of non-zero elements (a group of $1$'s) in either $\bold{H}^{bp}_{0}$ or $\bold{H}^{bp}_{1}$ is involved in significantly more cycles than the remaining non-zero elements (the remaining $1$'s). The solution of our optimization problem is not unique. However, since all the solutions result in the same number of OO partitioning choices and the same $F^*$, we work with one of these solutions, and call it an optimal vector, $\bold{t}^{*} = [t^*_0 \text{ } t^*_1 \text{ } t^*_2 \text{ } t^*_{0,1} \text{ } t^*_{0,2} \text{ } t^*_{1,2} \text{ } t^*_{0,1,2}]$. Lemma \ref{lem_choi} gives the total number of OO partitioning choices. \begin{lemma}\label{lem_choi} The total number of OO partitioning choices for an SC code with parameters $\gamma = 3$, $\kappa$, $p=1$, $m=1$, and $L$ (which is the binary protograph) given an optimal vector $\bold{t}^*$ is given by: \vspace{-0.4em} \begin{align}\label{eq_choi} \mathcal{N} = {\alpha} &\binom{\kappa}{t^*_0} \binom{t^*_{0}}{t^*_{0,1}} \binom{\kappa-t^*_{0}}{t^*_{1}-t^*_{0,1}} \binom{t^*_{0,1}}{ t^*_{0,1,2}} \binom{t^*_{0}-t^*_{0,1}}{t^*_{0,2}-t^*_{0,1,2}} \nonumber \\ &\binom{t^*_{1}-t^*_{0,1}}{t^*_{1,2}-t^*_{0,1,2}} \binom{\kappa-t^*_{0}-t^*_{1}+t^*_{0,1}}{t^*_{2}-t^*_{0,2}-t^*_{1,2}+t^*_{0,1,2}}, \end{align} where $\alpha$ is the number of distinct solutions (optimal vectors). \end{lemma} \begin{proof} The goal is to find the number of partitioning choices that achieve a general set of overlap parameters $\{t_0,t_1,t_2,t_{0,1}, \allowbreak t_{0,2},t_{1,2},t_{0,1,2}\}$ (not necessarily optimal). In particular, we need to find the number of different partitioning choices of an SC code with $\gamma=3$, $\kappa$, $p=1$, $m=1$, and $L$ such that: \begin{itemize} \item The number of $1$'s in row $i$, $0 \leq i \leq 2$, of $\bold{H}_0^{bp}$ is $t_i$. \item The size of the overlapping set of rows $i_1$ and $i_2$, $0 \leq i_1 \leq 2$, $0 \leq i_2 \leq 2$, and $i_2 > i_1$, of $\bold{H}^{bp}_0$ is $t_{i_1,i_2}$.
\item The size of the overlapping set of rows $0$, $1$, and $2$ ($3$-way overlap) of $\bold{H}^{bp}_0$ is $t_{0,1,2}$. \end{itemize} We factorize the number of partitioning choices, $\mathcal{N}^g$, into three more tractable factors: \begin{itemize} \item[--] Choose $t_0$ positions, in which row $0$ of $\bold{H}_0^{bp}$ has $1$'s, out of $\kappa$ positions. The number of choices is: $$\mathcal{N}^g_0={\kappa\choose{t_0}}.$$ \item[--] Choose $t_1$ positions, in which row $1$ of $\bold{H}_0^{bp}$ has $1$'s, out of $\kappa$ positions. Among these $t_1$ positions, there exist $t_{0,1}$ positions in which row $0$ simultaneously has $1$'s. The number of choices is: $$\mathcal{N}^g_1={{t_0}\choose{t_{0,1}}}{{\kappa-t_0}\choose{t_1-t_{0,1}}}.$$ \item[--] Choose $t_2$ positions, in which row $2$ of $\bold{H}_0^{bp}$ has $1$'s, out of $\kappa$ positions. Among these $t_2$ positions, there exist $t_{0,1,2}$ positions in which rows $0$ and $1$ simultaneously have $1$'s, $t_{0,2}-t_{0,1,2}$ positions in which only row $0$ simultaneously has $1$'s, and $t_{1,2}-t_{0,1,2}$ positions in which only row $1$ simultaneously has $1$'s. The number of choices is: \begin{equation*} \begin{aligned} \mathcal{N}^g_2=&{{t_{0,1}}\choose{t_{0,1,2}}}{{t_0-t_{0,1}}\choose{t_{0,2}-t_{0,1,2}}}{{t_1-t_{0,1}}\choose{t_{1,2}-t_{0,1,2}}}\\ &{{\kappa-t_0-t_1+t_{0,1}}\choose{t_2-t_{0,2}-t_{1,2}+t_{0,1,2}}}. \end{aligned} \end{equation*} \end{itemize} In conclusion, the number of partitioning choices that achieve a general set of overlap parameters is $\mathcal{N}^g_0\mathcal{N}^g_1\mathcal{N}^g_2$. The solution of the optimization problem in (\ref{eq_optp}) is not unique, and there are $\alpha$ distinct solutions (optimal vectors) that all achieve $F^*$. Because of the symmetry of these $\alpha$ optimal vectors, each of them corresponds to the same number, $\mathcal{N}_0\mathcal{N}_1\mathcal{N}_2$, of partitioning choices. The factors $\mathcal{N}_0$, $\mathcal{N}_1$, and $\mathcal{N}_2$ are obtained by replacing each $t$ with $t^*$ (from an optimal vector $\bold{t}^*$) in the equations of $\mathcal{N}^g_0$, $\mathcal{N}^g_1$, and $\mathcal{N}^g_2$, respectively. Thus, the total number of OO partitioning choices given an optimal vector $\textbf{t}^*$ is $\mathcal{N}=\alpha\mathcal{N}_0\mathcal{N}_1\mathcal{N}_2$, which proves Lemma \ref{lem_choi}. \end{proof} \vspace{-0.8em} \begin{remark} The first seven constraints of the optimization problem, which are stated in (\ref{eq_const}), can be easily verified from (\ref{eq_choi}) in Lemma \ref{lem_choi} by replacing each $t^*$ with $t$. \end{remark} \vspace{-0.9em} \section{Circulant Power Optimization}\label{sec_cpo} After picking an optimal vector $\bold{t}^*$ to partition $\bold{H}^{bp}$ and design $\bold{H}^{bp}_{SC}$, we run our heuristic CPO to further reduce the number of $(3, 3, 3, 0)$ UGASTs in the graph of $\bold{H}^{b}_{SC}$, which has $\gamma=3$. The steps of the CPO are: \begin{enumerate} \item Initially, assign circulant powers as in AB codes to all the $\gamma \kappa$ $1$'s in $\bold{H}^{bp}$ (this results in no cycles of length $4$ in $\bold{H}^{b}$ or $\bold{H}^b_{SC}$). \item Design $\bold{H}^{bp}_{SC2}$ using $\bold{H}^{bp}$ and $\bold{t}^*$ such that $\bold{H}^{bp}_{SC2}$ contains only two replicas, $\bold{R}_1$ and $\bold{R}_2$. Circulant powers of the $1$'s in $\bold{H}^{bp}_{SC2}$ are copied from the $1$'s in $\bold{H}^{bp}$. \item Locate all the cycles of lengths $4$ and $6$ in $\bold{H}^{bp}_{SC2}$.
\item Specify the cycles of length $6$ in $\bold{H}^{bp}_{SC2}$ that satisfy (\ref{eq_cycle}), and call them \textit{\textbf{active cycles}}. Let $2F^a_s$ (resp., $F^a_d$) be the number of active cycles having their VNs spanning only $\bold{R}_1$ or only $\bold{R}_2$ (resp., both $\bold{R}_1$ and $\bold{R}_2$). \item Compute the number of $(3, 3, 3, 0)$ UGASTs in $\bold{H}^{b}_{SC}$ using the following formula: \vspace{-0.2em}\begin{equation} F_{SC} = \left ( LF^a_s + (L-1)F^a_d \right ) p. \vspace{-0.2em} \end{equation} \item Count the number of active cycles each $1$ in $\bold{H}^{bp}_{SC2}$ is involved in. Give weight $1$ (resp., $2$) to the number of active cycles having their VNs spanning only $\bold{R}_1$ or only $\bold{R}_2$ (resp., both $\bold{R}_1$ and $\bold{R}_2$). \item Map the counts from step 6 to the $1$'s in $\bold{H}^{bp}$, and sort these $1$'s in a list in descending order of the counts. \item Pick a subset of $1$'s from the top of this list, and change the circulant powers associated with them. \item Using these interim new powers, perform steps 4 and 5. \item If $F_{SC}$ is reduced while maintaining no cycles of length $4$ in $\bold{H}^{b}_{SC}$, update $F_{SC}$ and the circulant powers, then go to step 6. Otherwise, return to step 8. \item Iterate until the target $F_{SC}$ is achieved. \end{enumerate} Note that step 8 is performed heuristically. \begin{figure}[H] \vspace{-1.8em} \center \includegraphics[width=3.6in]{OO_p_7.pdf}\vspace{-1.7em} \vspace{-0.4em} \text{\hspace{-0em}\footnotesize{(a) \hspace{14em} (b)}} \caption{(a) The OO partitioning of $\bold{H}^{bp}$ (or $\bold{H}^{b}$) of the SC code in Example~\ref{ex_oo}. Entries with circles (resp., squares) are assigned to $\bold{H}^{bp}_0$ (resp., $\bold{H}^{bp}_1$). (b) The circulant power arrangement for the circulants in $\bold{H}^b$.} \label{Fig_OOp7} \vspace{-0.7em} \end{figure} \begin{example}\label{ex_oo} Suppose we want to design an SC code with $\gamma = 3$, $\kappa=7$, $p=7$, $m=1$, and $L=30$ using the OO partitioning and the CPO. Solving the optimization problem in (\ref{eq_optp}) yields an optimal vector $\bold{t}^*=[3 \text{ } 4 \text{ } 3 \text{ } 0 \text{ } 1 \text{ } 2 \text{ } 0]$, which gives $F^* = 1170$ cycles of length $6$ in the graph of $\bold{H}^{bp}_{SC}$. Fig. \ref{Fig_OOp7}(a) shows how the partitioning is applied on $\bold{H}^{bp}$ (or $\bold{H}^{b}$). Next, applying the CPO results in only $203$ $(3, 3, 3, 0)$ UGASTs in the unlabeled graph of the SC code, which is the graph of $\bold{H}^{b}_{SC}$. Fig. \ref{Fig_OOp7}(b) shows the final circulant power arrangement for all circulants in $\bold{H}^b$. \end{example} The OO-CPO technique for designing $\bold{H}^b_{SC}$ is based on solving a set of equations, then applying a heuristic program on two replicas to optimize the circulant powers. Moreover, the OO partitioning has orders of magnitude fewer partitioning choices than the MO partitioning (see \cite[Lemma 3]{homa_mo}). We can even use any of the OO partitioning choices without having to compare their performances explicitly. All these reasons demonstrate that the OO-CPO technique is not only better in performance (see Section \ref{sec_sims} for details), but also much faster than the MO technique. \vspace{-0.2em} \section{WCM Framework: On The Removal of GASTs}\label{sec_wcm} After applying the OO-CPO technique to optimize the unlabeled graph of the SC code, we optimize the edge weights.
In particular, we use the WCM framework \cite{ahh_jsac, ahh_tit} to remove GASTs from the labeled graph of the NB-SC code through edge weight processing. There are multiple parameters that control the difficulty of the removal of a certain GAST from the Tanner graph of a code. The number of distinct WCMs associated with the UGAST and the minimum number of edge weight changes needed to remove the GAST, denoted by $E_{GAST,min}$, are among these parameters. A third parameter is the number of sets of edge weight changes that have cardinality $E_{GAST,min}$ and are candidates for the GAST removal process. The first two parameters are studied in \cite{ahh_tit}. We discuss the third parameter in this section. As the number of candidate sets of cardinality $E_{GAST,min}$ increases, the difficulty of the GAST removal decreases. In this section, unless otherwise stated, when we say nodes are ``connected'', we mean they are ``directly connected'' or they are ``neighbors''. The same applies conceptually when we say an edge is ``connected'' to a node or vice versa. \begin{remark} A GAST is removed by performing $E_{GAST,min}$ edge weight changes for edges connected to degree-$2$ CNs only (see also \cite{ahh_tit}). Whether a candidate set of edge weight changes indeed results in the GAST removal or not is determined by checking the null spaces of the WCMs \cite{ahh_jsac, ahh_tit}. \end{remark} To minimize the number of edge weight changes performed to remove a GAST, we need to work on the VNs that are connected to the maximum number of unsatisfied CNs. Thus, $E_{GAST,min}=g-b_{vm}+1$ (see \cite{ahh_tit}), where $g = \left \lfloor \frac{\gamma-1}{2} \right \rfloor$ and $b_{vm}$ is the maximum number of existing unsatisfied CNs per VN in the GAST. Define $E_{mu}$ as the topological upper bound on $E_{GAST,min}$ and $d_{1,vm}$ as the maximum number of existing degree-$1$ CNs per VN in the GAST. Thus, from \cite{ahh_tit}: \begin{equation}\label{eq_emu} E_{mu} = g-d_{1,vm}+1 \geq E_{GAST,min}. \end{equation} Note that (\ref{eq_emu}) follows from $d_{1,vm} \leq b_{vm}$. In this section, we study GASTs with $b=d_1$, which means the upper bound is achieved, i.e., $E_{GAST,min}=E_{mu}$. Moreover, for simplicity, we assume that all the VNs that are connected to $d_{1,vm}$ degree-$1$ CNs each are only connected to CNs of degree $\leq 2$. \begin{thm}\label{th_choices} Consider an $(a, b, d_1, d_2, d_3)$ GAST, with $b=d_1$, in an NB code defined over GF($q$) that has column weight $\gamma$ and no cycles of length $4$. The number of sets of edge weight changes with cardinality $E_{GAST,min}$ (or $E_{mu}$) that are candidates for the GAST removal process is given as follows. If $d_{1,vm} \neq g$: \vspace{-0.3em}\begin{equation}\label{eq_choices1} S_{mu} = a_{vm}\binom{\gamma-d_{1,vm}}{E_{mu}}(2(q-2))^{E_{mu}}, \end{equation} where $a_{vm}$ is the number of VNs connected to $d_{1,vm}$ degree-$1$ CNs each. If $d_{1,vm} = g$: \vspace{-0.4em}\begin{equation}\label{eq_choices2} S_{mu} = \left ( a_{vm} \left \lceil \frac{\gamma+1}{2} \right \rceil - n_{co} \right )2(q-2), \end{equation} where $n_{co}$ is the number of degree-$2$ CNs connecting any two of these $a_{vm}$ VNs. \end{thm} \begin{proof} Whether $d_{1,vm} \neq g$ or not, to minimize the number of edge weight changes, we need to target the VNs that are connected to the maximum number of unsatisfied CNs. By definition, and since $b=d_1$, the number of VNs of this type is $a_{vm}$, and each is connected to $d_{1,vm}$ unsatisfied CNs.
In the case of $d_{1,vm} \neq g$, which is the general case, for any VN of the $a_{vm}$ pertinent VNs, there are $\binom{\gamma-d_{1,vm}}{E_{mu}}$ different ways of selecting $E_{mu}$ degree-$2$ satisfied CNs connected to this VN. Each of these CNs has $2$ edges whose weights we can change (though not both simultaneously). Moreover, each edge can have $(q-2)$ different new weights (excluding $0$ and the current weight). Thus, the number of candidate sets is: \begin{equation}\label{eq_prch1} S_{mu} = a_{vm}\binom{\gamma-d_{1,vm}}{E_{mu}}2^{E_{mu}}(q-2)^{E_{mu}}, \end{equation} which is a rephrased version of (\ref{eq_choices1}). In the case of $d_{1,vm} = g$, from (\ref{eq_emu}), $E_{mu}=1$ (the GAST is removed by a single edge weight change). Moreover, \begin{equation}\label{eq_prch2} \gamma-d_{1,vm} = \gamma-g = \left \lceil \frac{\gamma+1}{2} \right \rceil. \end{equation} Substituting (\ref{eq_prch2}) and $E_{mu}=1$ into (\ref{eq_prch1}) gives that the number of candidate sets follows the inequality: \begin{equation}\label{eq_prch3} S_{mu} \leq a_{vm} \left \lceil \frac{\gamma+1}{2} \right \rceil 2(q-2). \end{equation} In (\ref{eq_prch3}), the equality is achieved only if there are no shared degree-$2$ CNs between the VNs that have $g$ unsatisfied CNs, i.e., $n_{co}=0$. Otherwise, $n_{co}$ has to be subtracted from $a_{vm} \left \lceil \frac{\gamma+1}{2} \right \rceil$, which proves (\ref{eq_choices2}). Note that the subtraction of $n_{co}$ is not needed if $d_{1,vm} \neq g$. The reason is that if $d_{1,vm} \neq g$ (or $E_{mu} \neq 1$), multiple edges connected to the same CN cannot exist in the same candidate set. Additionally, since our codes have girth at least $6$, there does not exist more than one degree-$2$ CN connecting the same two VNs in a GAST. \end{proof} \begin{figure}[H] \vspace{-2.5em} \center \includegraphics[width=3.5in]{Choices_ex.pdf}\vspace{-0.8em} \vspace{-0.5em} \text{\hspace{-0em}\footnotesize{(a) \hspace{14em} (b)}} \caption{(a) A $(7, 9, 9, 13, 0)$ GAST ($\gamma=5$). (b) An $(8, 0, 0, 16, 0)$ GAST ($\gamma=4$). Appropriate non-binary edge weights are assumed.} \label{Fig_choices} \vspace{-0.6em} \end{figure} \begin{example} Consider the $(7, 9, 9, 13, 0)$ GAST over GF($q$) in Fig. \ref{Fig_choices}(a) ($\gamma = 5$). For this GAST, $g=2$, $d_{1,vm}=2$, $a_{vm}=3$, and from (\ref{eq_emu}), $E_{GAST,min}=E_{mu}=1$. Moreover, $n_{co}=1$ (only one shared degree-$2$ CN between two VNs having two unsatisfied CNs each). Thus, from (\ref{eq_choices2}), the number of candidate sets of cardinality $1$ is: \begin{equation} S_{mu}=(3(3)-1)(2)(q-2)=16(q-2). \end{equation} In contrast, for the $(8, 0, 0, 16, 0)$ GAST over GF($q$) in Fig. \ref{Fig_choices}(b) ($\gamma = 4$), $g=1$, $d_{1,vm}=0$, $a_{vm}=8$, and from (\ref{eq_emu}), $E_{GAST,min}=E_{mu}=2$. Thus, from (\ref{eq_choices1}) (the general relation), the number of candidate sets of cardinality $2$ is: \begin{equation} S_{mu}=8 \binom{4}{2} (2)^2 (q-2)^2 = 192(q-2)^2. \end{equation} \end{example} \section{Code Design Steps and Simulation Results}\label{sec_sims} In this section, we present our $\gamma = 3$ NB-SC code design approach for Flash memories, and the experimental results demonstrating its effectiveness. The steps of our OO-CPO-WCM approach are: \begin{enumerate} \item Specify the code parameters, $\kappa$, $p$, and $L$, with $m=1$. \item Solve the optimization problem in (\ref{eq_optp}) for an optimal vector of overlap parameters, $\bold{t}^*$.
\item Using $\bold{H}^{bp}$ and $\bold{t}^*$, apply the circulant power optimizer to obtain the powers of the circulants in $\bold{H}^{b}$ and $\bold{H}^{b}_{SC}$. Now, the binary image, $\bold{H}^{b}_{SC}$, is designed. \item Assign the edge weights in $\bold{H}^{b}$ to generate $\bold{H}$. Next, partition $\bold{H}$ using $\bold{t}^*$, and couple the components to construct $\bold{H}_{SC}$. \item Using initial simulations over a practical Flash channel and combinatorial techniques, determine the set $\mathcal{G}$ of GASTs to be removed from the graph of $\bold{H}_{SC}$. \item Use the WCM framework (see \cite[Algorithm 2]{ahh_jsac}) to remove as many as possible of the GASTs in $\mathcal{G}$. \end{enumerate} In this section, the CV and MO results presented are the best that can be achieved by these two techniques \cite{homa_sc, homa_mo}. \vspace{-0.8em} \begin{table}[H] \caption{Number of $(3, 3, 3, 0)$ UGASTs in SC codes with $\gamma = 3$, $m = 1$, and $L = 30$ designed using different techniques. } \vspace{-0.5em} \centering \scalebox{0.85} { \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Design technique} & \multicolumn{4}{|c|}{\makecell{Number of $(3, 3, 3, 0)$ UGASTs}} \\ \cline{2-5} {} & $\kappa=p=7$ & \makecell{$\kappa=p=11$} & \makecell{$\kappa=p=13$} & \makecell{$\kappa=p=17$} \\ \hline Uncoupled with AB & 8820 & 36300 & 60840 & 138720 \\ \hline SC CV with AB & 3290 & 14872 & 25233 & 59024 \\ \hline SC MO with AB & 609 & 3850 & 6851 & 15997 \\ \hline SC best with AB & 609 & 3520 & -- & -- \\ \hline SC OO-CPO with CB & 203 & 2596 & 5356 & 14960 \\ \hline \end{tabular}} \label{table_1} \end{table} \vspace{-0.5em} We start our experimental results with a table comparing the number of $(3, 3, 3, 0)$ UGASTs in SC codes designed using various techniques. All the SC codes have $\gamma=3$, $m=1$, and $L=30$. AB codes are used as the underlying block codes in all the SC code design techniques we are comparing the proposed OO-CPO technique against. Table \ref{table_1} demonstrates reductions in the number of $(3, 3, 3, 0)$ UGASTs achieved by the OO-CPO technique over the MO technique (resp., the CV technique) that range between $6.5\%$ and $66.7\%$ (resp., $74.7\%$ and $93.8\%$). More intriguingly, the table shows that the OO-CPO technique provides a lower number of $(3, 3, 3, 0)$ UGASTs than the best that can be achieved if AB underlying block codes are used. Note that this ``best'' is reached using exhaustive search, and that is the reason why we could not provide its counts for $\kappa = p > 11$. Next, we provide simulation results verifying the performance gains achieved by our NB-SC code design approach for Flash memories. The Flash channel we use is the normal-Laplace mixture (NLM) Flash channel \cite{mit_nl}, which is a practical Flash channel model. Here, we use $3$ reads, and the sector size is $512$ bytes. We define RBER as the raw bit error rate \cite{ahh_jsac}, and UBER as the uncorrectable bit error rate \cite{ahh_jsac}. One formulation of UBER, which is recommended by industry, is the frame error rate (FER) divided by the sector size in bits. Simulations were done in software on a high-speed cluster of machines. All the NB-SC codes we simulated are defined over GF($4$), and have $\gamma = 3$, $\kappa = p =19$, $m = 1$, and $L = 20$ (block length $=14440$ bits and rate $\approx 0.834$). Code 1 is uncoupled (AB). Code 2 is designed using the CV technique. Code 3 is designed using the OO technique (with no CPO applied). The underlying block codes of Codes 2 and 3 are AB codes.
Code 4 is designed using the OO-CPO technique. The edge weights of Codes 1, 2, 3, and 4 are selected randomly. Code 5 (resp., Code 6) is the result of applying the WCM framework to Code 1 (resp., Code 4) to optimize the edge weights. Code 1 (resp., Code 2 and Code 4) has $129960$ (resp., $55366$ and $16340$) $(3, 3, 3, 0)$ UGASTs. Additionally, Code 1 (resp., Code 2 and Code 4) has $4873500$ (resp., $2002353$ and $1156264$) $(4, 4, 4, 0)$ UGASTs. The $(4, 4, 4, 0)$ UGAST is the second most common substructure in the dominant GASTs of NB codes with $\gamma=3$ simulated over Flash channels. \begin{figure}[H] \vspace{-1.0em} \center \includegraphics[trim={0.4in 0.2in 0.5in 0.2in},clip,width=3.5in]{Plot_RBER_UBER.pdf} \vspace{-1.7em} \caption{Simulation results over the NLM Flash channel for SC codes with $\gamma = 3$, $m = 1$, and $L = 20$ designed using different techniques.} \label{Fig_uber} \vspace{-0.5em} \end{figure} Fig. \ref{Fig_uber} demonstrates the performance gains achieved by each stage of our NB-SC code design approach. Code 3 outperforms Code 2 by about $0.6$ of an order of magnitude, which is the gain of the first stage (OO). Code 4 outperforms Code 3 by about $0.7$ of an order of magnitude, which is the gain of the second stage (CPO). Code 6 outperforms Code 4 by about $1.2$ orders of magnitude, which is the gain of the third stage (WCM). Moreover, the figure shows that the NB-SC code designed using our OO-CPO-WCM approach, which is Code 6, achieves about $200\%$ (resp., more than $500\%$) RBER gain compared to Code 2 (resp., Code 1) over a practical Flash channel. An intriguing observation we have encountered while performing these simulations is the change in the error floor properties when we go from Code 2 to Code 4. In particular, while the $(6, 0, 0, 9, 0)$ GAST was a dominant object in the case of Code 2, we have encountered very few $(6, 0, 0, 9, 0)$ GASTs in the error profile of Code 4. \section{Conclusion}\label{sec_conc} We proposed a combinatorial approach for the design of NB-SC codes optimized for practical Flash channels. The OO-CPO technique efficiently optimizes the underlying topology of the NB-SC code, then the WCM framework optimizes the edge weights. NB-SC codes designed using our approach have a reduced number of detrimental GASTs, thus outperforming existing NB-SC codes over Flash channels. The proposed approach can help increase the reliability of ultra-dense storage devices, e.g., emerging 3-D Flash devices. \section*{Acknowledgement}\label{sec_ack} The research was supported in part by a grant from ASTC-IDEMA and by NSF.
\section{Introduction} The E989 experiment at Fermilab (FNAL) has released its first measurement of the muon anomalous magnetic moment $a_\mu \equiv (g-2)_\mu/2$ with 460 ppb precision~\cite{Abi:2021gix}, which corroborates the previous E821 result at the Brookhaven National Laboratory (BNL)~\cite{Bennett:2006fi} and increases the persistent tension between the experimental data and the Standard Model (SM) prediction. The averaged result after combining the BNL and FNAL data reads: \begin{equation} a_\mu^{\rm Exp} = 116592061(41) \times 10^{-11}, \label{amu-exp} \end{equation} and it corresponds to a 4.2 $\sigma$ discrepancy from the current consensus of the SM prediction~\cite{Aoyama:2020ynm,Aoyama:2012wk,Aoyama:2019ryr,Czarnecki:2002nt,Gnendiger:2013pva,Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can, Keshavarzi:2019abf,Kurz:2014wya,Melnikov:2003xd,Masjuan:2017tvw,Colangelo:2017fiz,Hoferichter:2018kwz,Gerardin:2019vio,Bijnens:2019ghy,Colangelo:2019uex,Blum:2019ugy,Colangelo:2014qya}: \begin{align} a_\mu^{\rm SM} &= 116 591 810(43) \times 10^{-11},\\ \Delta a_\mu &= a_\mu^{\rm Exp} - a_\mu^{\rm SM} = (251 \pm 59) \times 10^{-11}, \label{delta-amu} \end{align} where the uncertainty of $\Delta a_\mu$ combines the experimental and theoretical errors in quadrature, $\sqrt{41^2+43^2} \simeq 59$, so that $251/59 \simeq 4.2$. This excess is most likely to be further substantiated by more thorough analyses at FNAL and J-PARC~\cite{Abe:2019thb} in the future. Although it has been suggested that the muon anomaly may arise from uncertainties in the experimental analysis and theoretical calculations~\cite{Cowan:2021sdy}, it is generally believed that the anomaly originates from new physics beyond the SM (BSM)~\cite{Athron:2021iuf}. This speculation has motivated a great number of BSM models for explaining the anomaly, e.g., two Higgs doublet models, leptoquark models, vector-like lepton models, and so on (for a recent comprehensive study of this subject, see, e.g.,~\cite{Athron:2021iuf}). Supersymmetry (SUSY), however, is of particular interest due to its elegant structure and natural solutions to many puzzles in the SM, such as the hierarchy problem, the unification of different forces, and the dark matter (DM) mystery~\cite{Fayet:1976cr,Haber:1984rc,Gunion:1984yn,Djouadi:2005gj,Martin:1997ns,Jungman:1995df}. In fact, many studies of the anomaly in low-energy supersymmetric models have demonstrated that the significant deviation can be totally or partially attributed to loop corrections involving supersymmetric particles (sparticles), i.e., smuon-neutralino loops and sneutrino-chargino loops~\cite{Moroi:1995yh,Hollik:1997vb,Czarnecki:2001pv,Stockinger:2006zn,Domingo:2008bb,Cao:2011sn,Athron:2015rva,Padley:2015uma,Kang:2016iok,Zhu:2016ncq,Okada:2016wlm,Yanagida:2017dao,Du:2017str,Ning:2017dng, Hagiwara:2017lse,Choudhury:2017fuu,Cox:2018qyi,Tran:2018kxv,Wang:2018vxp,Yang:2018guw,Cao:2019evo,Liu:2020nsm, Cao:2021lmj,Ke:2021kgy,Lamborn:2021snt,Li:2021xmw,Nakai:2021mha,Li:2021koa,Kim:2021suj,Li:2021pnt,Altmannshofer:2021hfu,Baer:2021aax, Chakraborti:2021bmv,Aboubrahim:2021xfi,Athron:2021iuf,Iwamoto:2021aaf,Chakraborti:2021dli,Cao:2021tuh,Yin:2021mls,Zhang:2021gun,Ibe:2021cvf,Han:2021ify, Wang:2021bcx,Zheng:2021gug,Chakraborti:2021mbr,Aboubrahim:2021myl,Ali:2021kxa,Wang:2021lwi,Chakraborti:2020vjp,Baum:2021qzx,Li:2021bbf,Forster:2021vyz,VanBeekveld:2021tgn,Zheng:2021wnu,Jeong:2021qey, Martin:2001st,Endo:2021zal,Chakraborti:2022vds,Gomez:2022qrb,Chakraborti:2021kkr,Cao:2022chy}.
In the Minimal Supersymmetric Standard Model~(MSSM)~\cite{Haber:1984rc,Gunion:1984yn,Djouadi:2005gj}, the lightest neutralino $\tilde{\chi}_1^0$ is usually taken as the lightest supersymmetric particle~(LSP) and can behave like a weakly-interacting-massive-particle~(WIMP) DM candidate under the assumption of R-parity conservation~\cite{Farrar:1978xj,Jungman:1995df}. Specifically, the major component of $\tilde{\chi}_1^0$ should be the Bino field since a Wino-like or Higgsino-like $\tilde{\chi}_1^0$ must be around $1~{\rm TeV}$ in mass to fully account for the correct DM relic density~\cite{Jungman:1995df}, and consequently the other sparticles would be so heavy that the theory could not explain the anomaly. Recent studies have discussed the MSSM explanation of the anomaly within 2$\sigma$ uncertainty by keeping the theory consistent with the measurement of the DM relic abundance, the negative results from DM direct detection~(DD) experiments, and searches for electroweakinos at the Large Hadron Collider~(LHC)~\cite{Chakraborti:2020vjp,Chakraborti:2021mbr,Chakraborti:2021squ,Chakraborti:2021dli}. Three scenarios classified by DM annihilation mechanisms were comprehensively analyzed~\cite{Chakraborti:2020vjp,Chakraborti:2021dli}. It was found that, under the assumption that $\tilde{\chi}_1^0$ makes up the full DM content of the universe, the improved $(g-2)_\mu$ data would place an upper limit of roughly 600 GeV on the LSP and next-to-LSP (NLSP) masses, which sets a clear target for future electroweakino searches at the high-luminosity LHC and high-energy $e^+e^-$ colliders. This conclusion also applies to the Higgsino- and Wino-dominated LSP cases if the measured DM relic density is regarded as an upper bound~\cite{Chakraborti:2021kkr}. Even though the MSSM can readily explain the g-2 anomaly, the combined constraints from the DM and SUSY search experiments would require massive Higgsinos~\cite{Chakraborti:2021dli}, leading to fine-tuning in predicting the Z boson mass~\cite{Baer:2012uy}. This fact motivates us to study the Next-to-Minimal Supersymmetric Standard Model with a $\mathbb{Z}_3$-symmetry ($\mathbb{Z}_3$-NMSSM), which is another economical realization of SUSY~\cite{Ellwanger:2009dp,Maniatis:2009re}, in explaining the anomaly. This model extends the MSSM with a singlet superfield $\hat{S}$, and consequently it can dynamically generate the $\mu$-parameter of the MSSM, significantly enhance the SM-like Higgs boson mass, and predict much richer phenomenology than the MSSM (see, for example, Refs.~\cite{Cao:2012fz,Cao:2018rix}).
One remarkable improvement of this study over the previous ones in~\cite{Moroi:1995yh,Hollik:1997vb,Czarnecki:2001pv,Stockinger:2006zn,Domingo:2008bb,Cao:2011sn,Athron:2015rva,Padley:2015uma,Kang:2016iok,Zhu:2016ncq,Okada:2016wlm,Yanagida:2017dao,Du:2017str,Ning:2017dng, Hagiwara:2017lse,Choudhury:2017fuu,Cox:2018qyi,Tran:2018kxv,Wang:2018vxp,Yang:2018guw,Cao:2019evo,Liu:2020nsm, Cao:2021lmj,Ke:2021kgy,Lamborn:2021snt,Li:2021xmw,Nakai:2021mha,Li:2021koa,Kim:2021suj,Li:2021pnt,Altmannshofer:2021hfu,Baer:2021aax, Chakraborti:2021bmv,Aboubrahim:2021xfi,Iwamoto:2021aaf,Chakraborti:2021dli,Cao:2021tuh,Yin:2021mls,Zhang:2021gun,Ibe:2021cvf,Han:2021ify, Wang:2021bcx,Zheng:2021gug,Chakraborti:2021mbr,Aboubrahim:2021myl,Ali:2021kxa,Wang:2021lwi,Chakraborti:2020vjp,Baum:2021qzx,Li:2021bbf,Forster:2021vyz,VanBeekveld:2021tgn,Zheng:2021wnu,Jeong:2021qey, Martin:2001st,Endo:2021zal,Chakraborti:2022vds,Gomez:2022qrb,Chakraborti:2021kkr} is that more SUSY searches at the LHC, such as ATLAS-2106-01676~\cite{ATLAS:2021moa}, CMS-SUS-16-039~\cite{CMS:2017moi}, CMS-SUS-17-004~\cite{CMS:2018szt}, and CMS-SUS-20-001~\cite{CMS:2020bfa}, are included to constrain the theory's parameter space. As a result, lower bounds on sparticle mass spectra are obtained, i.e., $|M_1| \gtrsim 275~{\rm GeV}$, $M_2 \gtrsim 300~{\rm GeV}$, $\mu \gtrsim 460~{\rm GeV}$, $m_{\tilde{\mu}_L} \gtrsim 310~{\rm GeV}$, and $m_{\tilde{\mu}_R} \gtrsim 350~{\rm GeV}$, where $M_1$ and $M_2$ denote gaugino masses, $\mu$ represents the Higgsino mass, and $m_{\tilde{\mu}_L}$ and $m_{\tilde{\mu}_R}$ are the Smuon masses, with $L$ and $R$ denoting their dominant chiral components. These bounds are far beyond the reach of the LEP experiments in searching for SUSY, and have not been noticed before. They have a significant impact on DM physics, e.g., the popular $Z$- and Higgs-funnel regions are excluded, and the neutralino DM obtains the correct density mainly by co-annihilating with the Wino-dominated electroweakinos. In addition, it is inferred that these conclusions also apply to the MSSM since the underlying physics for the bounds is the same. This work investigates the impact of the latest $(g-2)_{\mu}$ measurement and the LHC searches for SUSY on the $\mathbb{Z}_3$-NMSSM. It is organized as follows. In Section~\ref{theory-section}, we briefly introduce the SUSY contribution to the moment and the latest LHC probes of SUSY. In Section~\ref{numerical study}, we state the research strategy and analyze the numerical results. Lastly, we summarize the results of this study in Section~\ref{conclusion-section}. \vspace{-0.3cm} \section{\label{theory-section}Theoretical preliminaries} \vspace{-0.2cm} \subsection{\label{z3} $\mathbb{Z}_3$-NMSSM} Compared with the MSSM, the $\mathbb{Z}_3$-NMSSM introduces a new gauge-singlet Higgs superfield $\hat{S}$. Its superpotential is given by~\cite{Maniatis:2009re,Ellwanger:2009dp}: \begin{align} \label{eq:superpotential} W_\mathrm{\mathbb{Z}_{3}-NMSSM} = \lambda \hat{S} \hat{H_u} \cdot \hat{H_d} + \frac{1}{3} \kappa \hat{S}^3 + W_\mathrm{Yukawa}, \end{align} where $\hat{H}_u$ and $\hat{H}_d$ represent the up- and down-type doublet Higgs superfields, respectively, $\lambda$ and $\kappa$ are dimensionless Yukawa parameters, and $W_\mathrm{Yukawa}$ denotes the Yukawa couplings that are the same as those in the MSSM. All the terms in $W_\mathrm{\mathbb{Z}_3-NMSSM}$ respect the $\mathbb{Z}_3$ symmetry.
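Before introducing the soft-breaking terms, it is worth making explicit how the singlet dynamically generates the $\mu$-parameter mentioned above; this is a standard observation (see, e.g.,~\cite{Ellwanger:2009dp}). Once the scalar component of $\hat{S}$ develops a vacuum expectation value $\langle S \rangle$ (parameterized below as $v_s/\sqrt{2}$), the first two terms of Eq.(\ref{eq:superpotential}) yield
\begin{equation*}
\lambda \hat{S} \hat{H}_u \cdot \hat{H}_d \;\rightarrow\; \mu\, \hat{H}_u \cdot \hat{H}_d \;\; {\rm with} \;\; \mu = \lambda \langle S \rangle, \qquad
\frac{\kappa}{3} \hat{S}^3 \;\rightarrow\; \frac{1}{2}\, m_{\tilde{S}}\, \tilde{S} \tilde{S} \;\; {\rm with} \;\; m_{\tilde{S}} = 2 \kappa \langle S \rangle = \frac{2\kappa}{\lambda}\, \mu,
\end{equation*}
so the Singlino mass scale is tied to the effective $\mu$-parameter by the ratio $2\kappa/\lambda$, a relation that will be used in the discussion of DM physics in Section~\ref{numerical study}.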
The corresponding soft-breaking terms for Eq.(\ref{eq:superpotential}) are: \begin{align} V_{\rm {\mathbb{Z}_{3}-NMSSM}}^{\rm soft} &= m_{H_u}^2|H_u|^2 + m_{H_d}^2|H_d|^2 + m_S^2|S|^2 +( \lambda A_{\lambda} SH_u\cdot H_d +\frac{1}{3}\kappa A_{\kappa} S^3 + h.c.). \nonumber \end{align} In practice, the soft-breaking mass parameters $m_{H_u}^2,m_{H_d}^2,m_S^2$ can be fixed by solving the electroweak symmetry breaking~(EWSB) equations, where the vacuum expectation values of the Higgs scalar fields are taken as $\left\langle H_u^0 \right\rangle = v_u/\sqrt{2}$, $\left\langle H_d^0 \right\rangle = v_d/\sqrt{2}$ and $\left\langle S \right\rangle = v_s/\sqrt{2}$ with $v = \sqrt{v_u^2+v_d^2}\simeq 246~\mathrm{GeV}$ and $\tan{\beta} \equiv v_u/v_d$; the effective $\mu$ parameter is then generated as $\mu=\lambda v_s/\sqrt{2}$. Consequently, the Higgs sector of the $\mathbb{Z}_3$-NMSSM is described by the following six parameters~\cite{Ellwanger:2009dp}: \begin{align} \lambda,~ \kappa,~ A_\lambda,~ A_\kappa,~ \mu,~ \tan\beta. \label{eq:six} \end{align} The mixings of the fields $H_u, H_d$ and $S$ result in five mass eigenstates, including three CP-even Higgs bosons $h_i$ ($i=1,2,3$ with $m_{h_1} < m_{h_2} < m_{h_3}$) and two CP-odd Higgs bosons $A_j$ ($j=1,2$ with $m_{A_1} < m_{A_2}$). In this work, the lightest CP-even Higgs $h_1$ is treated as the SM-like Higgs boson since the Bayesian evidence of the $h_1$-scenario is much larger than that of the $h_2$-scenario after considering experimental constraints~\cite{Cao:2018iyk}. The mixings between the $U(1)_{Y}$ and $SU(2)_{L}$ gaugino fields (Bino $\tilde{B}$ and Wino $\tilde{W}$), the Higgsino fields ($\tilde{H_u}$ and $\tilde{H_d}$), and the Singlino field $\tilde{S}$ form five neutralinos $\tilde{\chi}^0_i$ ($i=1,2,\ldots,5$, in ascending mass order) and two charginos $\tilde{\chi}^\pm_j$ ($j=1,2$ with $m_{\tilde{\chi}_1^{\pm}}<m_{\tilde{\chi}_2^{\pm}}$). Their masses and mixings are determined by the soft-breaking gaugino masses $M_1$ and $M_2$, the Higgsino mass $\mu$, $\lambda$, $\kappa$, and $\tan\beta$. \subsection{\label{DMRD}Muon $g-2$ in the $\mathbb{Z}_3$-NMSSM } The SUSY contribution $a_{\mu}^{\rm{SUSY}}$ in the $\mathbb{Z}_3$-NMSSM comes from $\tilde{\mu}-\tilde{\chi}^0$ loops and $\tilde{\nu}_{\mu}-\tilde{\chi}^{\pm}$ loops~\cite{Domingo:2008bb,Martin:2001st}. The expression of the one-loop contribution to $a^{\rm SUSY}_{\mu}$ in the $\mathbb{Z}_3$-NMSSM is similar to that in the MSSM and is given by~\cite{Domingo:2008bb}: \begin{small}\begin{equation}\begin{split} &a_{\mu}^{\rm SUSY} = a_{\mu}^{\tilde{\chi}^0 \tilde{\mu}} + a_{\mu}^{\tilde{\chi}^{\pm} \tilde{\nu}},\\ a_{\mu}^{\tilde{\chi}^0 \tilde{\mu}} &= \frac{m_{\mu}}{16 \pi^2}\sum_{i,l}\left\{ -\frac{m_{\mu}}{12 m_{\tilde{\mu}_l}^2} \left( |n_{il}^{\rm L}|^2 + |n_{il}^{\rm R}|^2 \right) F_1^{\rm N}(x_{il}) + \frac{m_{\tilde{\chi}_i^0}}{3 m_{\tilde{\mu}_l}^2} {\rm Re}(n_{il}^{\rm L} n_{il}^{\rm R}) F_2^{\rm N}(x_{il}) \right\}, \\ a_{\mu}^{\tilde{\chi}^\pm \tilde{\nu}} &= \frac{m_{\mu}}{16 \pi^2}\sum_{k}\left\{ \frac{m_{\mu}}{12 m_{\tilde{\nu}_{\mu}}^2} \left( |c_{k}^{\rm L}|^2 + |c_{k}^{\rm R}|^2 \right) F_1^{\rm C}(x_{k}) + \frac{2 m_{\tilde{\chi}_k^\pm}}{3 m_{\tilde{\nu}_{\mu}}^2} {\rm Re}(c_{k}^{\rm L}c_{k}^{ \rm R}) F_2^{\rm C}(x_{k}) \right\}, \label{amuon} \end{split} \end{equation}\end{small} where $i=1,\cdots,5$, $k=1,2$, and $l=1,2$ denote the neutralino, chargino, and smuon indices, respectively.
\begin{equation} \begin{split} n_{il}^{\rm L} = \frac{1}{\sqrt{2}}\left( g_2 N_{i2} + g_1 N_{i1} \right)X^*_{l1} -y_{\mu} N_{i3}X^*_{l2}, \quad &n_{il}^{\rm R} = \sqrt{2} g_1 N_{i1} X_{l2} + y_{\mu} N_{i3} X_{l1},\\ c_{k}^{\rm L} = -g_2 V^{\rm c}_{k1}, \quad &c_{k}^{\rm R} = y_{\mu} U^{\rm c}_{k2}, \\ \end{split} \end{equation} where $N$ is the neutralino mass rotation matrix, $X$ the smuon mass rotation matrix, and $U^{\rm c}$ and $V^{\rm c}$ the chargino mass rotation matrices defined by ${U^{\rm c}}^* M_{C} {V^{\rm c}}^\dag = m_{\tilde{\chi}^\pm}^{\rm diag}$. The $F(x)$'s are loop functions of the kinematic variables defined as $x_{il} \equiv m_{\tilde{\chi}_i^0}^2 / m_{\tilde{\mu}_l}^2$ and $x_{k} \equiv m_{\tilde{\chi}_k^\pm}^2 / m_{\tilde{\nu}_{\mu}}^2$, and they take the forms: \begin{align} F^N_1(x) & = \frac{2}{(1-x)^4}\left[ 1-6x+3x^2+2x^3-6x^2\ln x\right] \\ F^N_2(x) & = \frac{3}{(1-x)^3}\left[ 1-x^2+2x\ln x\right] \\ F^C_1(x) & = \frac{2}{(1-x)^4}\left[ 2+ 3x - 6x^2 + x^3 +6x\ln x\right] \\ F^C_2(x) & = -\frac{3}{2(1-x)^3}\left[ 3-4x+x^2 +2\ln x\right]. \end{align} They satisfy $F^N_1(1) = F^N_2(1) = F^C_1(1) = F^C_2(1) = 1$ for the mass-degenerate sparticle case. It is helpful to understand the features of $a_\mu^{\rm SUSY}$ using the mass-insertion calculation method~\cite{Moroi:1995yh}. In the lowest-order approximation, the contributions to $a_\mu^{\rm SUSY}$ can be classified into four types: "WHL", "BHL", "BHR" and "BLR", where W, B, H, L and R denote the Wino, Bino, Higgsino, left-handed Smuon (or Sneutrino) and right-handed Smuon field, respectively. They can be expressed as~\cite{Athron:2015rva,Moroi:1995yh,Endo:2021zal}: \begin{eqnarray} a_{\mu, \rm WHL}^{\rm SUSY} &=&\frac{\alpha_2}{8 \pi} \frac{m_{\mu}^2 M_2 \mu \tan \beta}{M_{\tilde{\nu}_\mu}^4} \left \{ 2 f_C\left(\frac{M_2^2}{M_{\tilde{\nu}_{\mu}}^2}, \frac{\mu^2}{M_{\tilde{\nu}_{\mu}}^2} \right) - \frac{M_{\tilde{\nu}_\mu}^4}{M_{\tilde{\mu}_L}^4} f_N\left(\frac{M_2^2}{M_{\tilde{\mu}_L}^2}, \frac{\mu^2}{M_{\tilde{\mu}_L}^2} \right) \right \}\,, \quad \quad \label{eq:WHL} \\ a_{\mu, \rm BHL}^{\rm SUSY} &=& \frac{\alpha_Y}{8 \pi} \frac{m_\mu^2 M_1 \mu \tan \beta}{M_{\tilde{\mu}_L}^4} f_N\left(\frac{M_1^2}{M_{\tilde{\mu}_L}^2}, \frac{\mu^2}{M_{\tilde{\mu}_L}^2} \right)\,, \label{eq:BHL} \\ a_{\mu, \rm BHR}^{\rm SUSY} &=& - \frac{\alpha_Y}{4\pi} \frac{m_{\mu}^2 M_1 \mu \tan \beta}{M_{\tilde{\mu}_R}^4} f_N\left(\frac{M_1^2}{M_{\tilde{\mu}_R}^2}, \frac{\mu^2}{M_{\tilde{\mu}_R}^2} \right)\,, \label{eq:BHR} \\ a_{\mu, \rm BLR}^{\rm SUSY} &=& \frac{\alpha_Y}{4\pi} \frac{m_{\mu}^2 M_1 \mu \tan \beta}{M_1^4} f_N\left(\frac{M_{\tilde{\mu}_L}^2}{M_1^2}, \frac{M_{\tilde{\mu}_R}^2}{M_1^2} \right)\,, \label{eq:BLR} \end{eqnarray} where $M_{\tilde{\mu}_L}$ and $M_{\tilde{\mu}_R}$ are the masses of the left- and right-handed Smuon fields, respectively. The loop functions $f_C$ and $f_N$ take the following forms: \begin{eqnarray} \label{eq:loop-aprox} f_C(x,y) &=& \frac{5-3(x+y)+xy}{(x-1)^2(y-1)^2} - \frac{2\ln x}{(x-y)(x-1)^3}+\frac{2\ln y}{(x-y)(y-1)^3} \,, \\ f_N(x,y) &=& \frac{-3+x+y+xy}{(x-1)^2(y-1)^2} + \frac{2x\ln x}{(x-y)(x-1)^3}-\frac{2y\ln y}{(x-y)(y-1)^3}. \end{eqnarray} The expressions for $a_{\mu, i}^{\rm SUSY}$ ($i=$ WHL, BHL, BHR) involve a prefactor proportional to the Higgsino mass $\mu$, as well as loop functions that approach zero as $|\mu|$ increases. Consequently, they depend on $\mu$ in a complex way.
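Since Eqs.(\ref{eq:WHL})--(\ref{eq:BLR}) are fully explicit, they can be evaluated numerically in a few lines. The sketch below is our own Python illustration: the function names are ours, the input spectrum and the values $\alpha_2 \simeq 0.034$ and $\alpha_Y \simeq 0.010$ are assumptions chosen only for orientation, and the sneutrino mass is approximated by the left-handed Smuon soft mass (the two coincide up to small D-term contributions):

\begin{verbatim}
import math

ALPHA_2, ALPHA_Y = 0.034, 0.010  # assumed SU(2)_L and U(1)_Y couplings
M_MU = 0.10566                   # muon mass in GeV

def f_C(x, y):
    # Chargino loop function f_C from the text; assumes x != y and
    # x, y != 1 (degenerate arguments need the smooth limits instead).
    return ((5.0 - 3.0 * (x + y) + x * y) / ((x - 1.0)**2 * (y - 1.0)**2)
            - 2.0 * math.log(x) / ((x - y) * (x - 1.0)**3)
            + 2.0 * math.log(y) / ((x - y) * (y - 1.0)**3))

def f_N(x, y):
    # Neutralino loop function f_N from the text; same caveats as f_C.
    return ((-3.0 + x + y + x * y) / ((x - 1.0)**2 * (y - 1.0)**2)
            + 2.0 * x * math.log(x) / ((x - y) * (x - 1.0)**3)
            - 2.0 * y * math.log(y) / ((x - y) * (y - 1.0)**3))

def amu_susy(M1, M2, mu, tanb, mL, mR):
    # Return the (WHL, BHL, BHR, BLR) pieces of a_mu^SUSY; masses in GeV.
    mnu = mL  # sneutrino mass approximated by the left smuon soft mass
    c = M_MU**2 * mu * tanb
    whl = (ALPHA_2 / (8 * math.pi)) * c * M2 / mnu**4 * (
          2 * f_C(M2**2 / mnu**2, mu**2 / mnu**2)
          - (mnu**4 / mL**4) * f_N(M2**2 / mL**2, mu**2 / mL**2))
    bhl = (ALPHA_Y / (8 * math.pi)) * c * M1 / mL**4 \
          * f_N(M1**2 / mL**2, mu**2 / mL**2)
    bhr = -(ALPHA_Y / (4 * math.pi)) * c * M1 / mR**4 \
          * f_N(M1**2 / mR**2, mu**2 / mR**2)
    blr = (ALPHA_Y / (4 * math.pi)) * c * M1 / M1**4 \
          * f_N(mL**2 / M1**2, mR**2 / M1**2)
    return whl, bhl, bhr, blr

# Illustrative spectrum only, not a result of this work:
print(amu_susy(M1=300.0, M2=400.0, mu=500.0, tanb=40.0, mL=450.0, mR=480.0))
\end{verbatim}

For this illustrative point the "WHL" piece dominates and the total is of order $10^{-9}$, i.e., of the right size for Eq.(\ref{delta-amu}), which anticipates the behavior discussed next.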
For several typical patterns of sparticle mass spectra with a positive $\mu$, it is found that the "WHL" contribution decreases monotonically as $\mu$ increases, while the magnitude of the "BHL" and "BHR" contributions increases when $\mu$ is significantly smaller than the slepton mass and decreases when $\mu$ is larger than the slepton mass. In addition, it should be noted that the "WHL" contribution is usually the dominant one when $M_{\tilde{\mu}_L}$ is not significantly larger than $M_{\tilde{\mu}_R}$\footnote{We are not interested in the excessively large $|\mu|$ case, where the "BLR" contribution is the dominant one, because it needs fine-tuning of SUSY parameters in predicting the $Z$-boson mass~\cite{Baer:2012uy}. }. It should also be noted that, since the Singlino field enters the "WHL", "BHL" and "BHR" loops only through two additional insertions at the lowest order, its induced contribution to $a_{\mu}^{\rm{SUSY}}$ is less prominent. Thus, the prediction for $a_\mu^{\rm SUSY}$ in the NMSSM is almost the same as that in the MSSM. Even so, the two theories may still display different features in fitting experimental constraints due to their possibly distinct DM physics and sparticle signals at the LHC~\cite{Cao:2021tuh,Cao:2022chy}. \subsection{\label{analyses}LHC Analyses} Since the electroweakinos and sleptons involved in $a_\mu^{\rm SUSY}$ cannot be very heavy if they are to explain the anomaly, they can be copiously produced at the LHC and thus are subject to strong constraints from the analyses of the experimental data at $\sqrt{s}=13~\rm{TeV}$. Given the complexity of their production processes and decay modes, many signal topologies should be studied. It was found that the following analyses are particularly important for this work: \begin{itemize} \item \texttt{CMS-SUS-16-039 and CMS-SUS-17-004~\cite{CMS:2017moi,CMS:2018szt}}: Search for electroweakino productions in the pp collisions with two, three, or four leptons and missing transverse momentum ($\rm{E}_{\rm{T}}^{\rm{miss}}$) as the final states. Given the smallness of the production cross-sections, the analyses included all the possible final states and defined several categories by the number of leptons in the event, their flavors, and their charges to enhance the discovery potential. The results were interpreted in the context of simplified models for either Wino-like chargino-neutralino production or neutralino pair production in a gauge-mediated SUSY breaking (GMSB) scenario. The observed (expected) limit on $m_{\tilde{\chi}_1^{\pm}}$ in the chargino-neutralino production was about 650 (570) GeV for the WZ topology, 480 (455) GeV for the WH topology, and 535 (440) GeV for the mixed topology. Meanwhile, the observed and expected limits on the neutralino mass in the GMSB scenario were 650–750 GeV and 550–750 GeV, respectively. \item \texttt{CMS-SUS-20-001~\cite{CMS:2020bfa}}: Search for the $2~\rm{leptons} + \rm{jets} + \rm{E}_{\rm{T}}^{\rm{miss}}$ signal. Specifically, four scenarios were examined closely. The first one targeted strong sparticle production with at least one on-shell $Z$ boson in the decay chain. Six disjoint categories were defined by the number of jets (grouped as SRA, SRB, and SRC) and by whether b-tagged jets were present; the jets were reconstructed by requiring a distance parameter less than 0.4 and $p_{\rm{T}}^{j} \geq 35~\rm{GeV}$. The second one also required the decay chain to contain an on-shell $Z$ boson, but it scrutinized the electroweakino production.
It defined the VZ category by the decay modes $Z Z \to (\ell \bar{\ell}) (q \bar{q})$ and $Z W \to (\ell \bar{\ell}) (q \bar{q}^\prime)$, and the HZ category by $Z h \to (\ell \bar{\ell}) (b \bar{b})$, where $h$ denoted the SM Higgs boson. The third one, referred to as the "edge" scenario, investigated the strong production with an off-shell $Z$ boson or a slepton in the decay chain. It required two or more jets, $p_{\rm{T}}^{\rm{miss}} > 150$ or $200~\rm{GeV}$, and $M_{\rm{T}_2}(\ell\ell) > 80 ~\rm{GeV}$ in its signal regions. The last one studied slepton pair production by examining the signal with two leptons, $p_{\rm{T}}^{\rm{miss}} > 100~\rm{GeV}$, no b-tagged jets, and moderate jet activity. This analysis excluded sparticles up to 1870 GeV in mass for Gluinos, 1800 GeV for light-flavor Squarks, 1600 GeV for bottom Squarks, 750 GeV and 800 GeV for Wino-dominated chargino and neutralino, respectively, and 700 GeV for the first two generations of Sleptons. \item \texttt{ATLAS-2106-01676~\cite{ATLAS:2021moa}}: Search for Higgsino- and Wino-dominated chargino-neutralino production, including the cases of compressed and non-compressed mass spectra. This analysis studied on-shell $WZ$, off-shell $WZ$, and $Wh$ scenarios, and required the final states to contain exactly three leptons, possible ISR jets, and $\rm{E}_{\rm{T}}^{\rm{miss}}$. For the Higgsino model, $\tilde{\chi}_2^0$ was excluded up to $210~\rm{ GeV}$ in mass for the off-shell W/Z case, while for the Wino model, the exclusion bound on $\tilde{\chi}_2^0$ was $640~\rm{GeV}$ and $300~\rm{GeV}$ for the on-shell and off-shell W/Z case, respectively. \item \texttt{ATLAS-1908-08215~\cite{ATLAS:2019lff}}: Search for chargino pair and slepton pair productions with two leptons and missing transverse momentum as their final state. This analysis considered the following three simplified models: $p p \to \tilde{\chi}_1^\pm \tilde{\chi}_1^\mp \to (W^\pm \tilde{\chi}_1^0) (W^\mp \tilde{\chi}_1^0)$, $p p \to \tilde{\chi}_1^\pm \tilde{\chi}_1^\mp \to (\tilde{\ell}^\ast \nu_\ell) (\bar{\nu}_\ell \tilde{\ell}), (\bar{\ell} \tilde{\nu}_\ell) (\tilde{\nu}^\ast_\ell \ell)$, and $p p \to \tilde{\ell}^\ast \tilde{\ell} \to (\bar{\ell} \tilde{\chi}_1^0) (\ell \tilde{\chi}_1^0)$. For a massless $\tilde{\chi}_1^0$, $\tilde{\chi}_1^\pm$ was excluded up to 420 GeV and 1 TeV in the first and second model, respectively, and sleptons were excluded up to 700 GeV, assuming that they are mass-degenerate in flavor and chiral space. \item \texttt{ATLAS-1911-12606~\cite{ATLAS:2019lng}}: Concentrate on the case of compressed mass spectra and search for the electroweakino production with two leptons and missing transverse momentum as the final state. Four scenarios were used to interpret the analyses. The first one studied $\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}$, $\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}$ and $\tilde{\chi}_2^0\tilde{\chi}_1^0$ productions in the Higgsino model. The results were projected onto the $\Delta m$--$m_{\tilde{\chi}_2^0}$ plane, where $\Delta m \equiv m_{\tilde{\chi}_2^0} - m_{\tilde{\chi}_1^0}$. It was found that the tightest bound on $m_{\tilde{\chi}_2^0}$ was $193~{\rm GeV}$ for $\Delta m \simeq 9.3~{\rm GeV}$. The second scenario was quite similar to the first one, except that it considered $\tilde{\chi}_2^0 \tilde{\chi}_1^\pm$ production in the Wino/Bino model. The optimum bound on $m_{\tilde{\chi}_2^0}$ was $240~{\rm GeV}$ when $\Delta m \simeq 7~{\rm GeV}$.
The third one assumed that the electroweakino pair production proceeded via the vector-boson fusion (VBF) process and used the kinematic cuts on $m_{\ell\ell}$ as the primary discriminator. Correspondingly, constraints on the $\Delta m$--$m_{\tilde{\chi}_2^0}$ plane were obtained for both the Higgsino and Wino/Bino models, which were significantly weaker than the previous results. The last one targeted the slepton pair production. It exploited the relationship between the lepton momenta and the missing transverse momentum through the stransverse mass, $m_{T2}$, which exhibits a kinematic endpoint similar to that for $m_{\ell\ell}$ in the electroweakino decays. Light-flavor sleptons were found to be heavier than about 250 GeV for $\Delta m_{\tilde{\ell}} = 10~{\rm GeV}$, where $\Delta m_{\tilde{\ell}} \equiv m_{\tilde{\ell}} - m_{\tilde{\chi}_1^0}$. \end{itemize} Concerning these analyses, it should be noted that only the first one studied the data obtained with $36~{\rm fb}^{-1}$ integrated luminosity, while the others were based on $139~{\rm fb}^{-1}$ data. \begin{table}[] \caption{Experimental analyses included in the package \texttt{SModelS-2.1.1}.} \label{tab:1} \vspace{0.1cm} \resizebox{1.0 \textwidth}{!}{ \begin{tabular}{cccc} \hline\hline \texttt{Name} & \texttt{Scenario} &\texttt{Final State} &$\texttt{Luminosity} (\texttt{fb}^{\texttt{-1}})$ \\\hline \begin{tabular}[l]{@{}l@{}} CMS-SUS-17-010~\cite{CMS:2018xqw}\end{tabular} &\begin{tabular}[c]{@{}c@{}}$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}\rightarrow W^{\pm}\tilde{\chi}_1^0 W^{\mp}\tilde{\chi}_1^0$\\$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}\rightarrow \nu\tilde{\ell} \ell\tilde{\nu}$ \\ \end{tabular}&2$\ell$ + $E_{\rm T}^{\rm miss}$ & 35.9 \\ \\ \begin{tabular}[l]{@{}l@{}} CMS-SUS-17-009~\cite{CMS:2018eqb}\end{tabular} &$\tilde{\ell}\tilde{\ell}\rightarrow \ell\tilde{\chi}_1^0\ell\tilde{\chi}_1^0$ &2$\ell$ + $E_{\rm T}^{\rm miss}$ & 35.9 \\ \\ \begin{tabular}[l]{@{}l@{}} CMS-SUS-17-004~\cite{CMS:2018szt}\end{tabular} &$\tilde{\chi}_{2}^0\tilde{\chi}_1^{\pm}\rightarrow Wh(Z)\tilde{\chi}_1^0\tilde{\chi}_1^0$ & n$ \ell$(n\textgreater{}=0) + nj(n\textgreater{}=0) + $ E_{\rm T}^{\rm miss}$ & 35.9 \\ \\ \begin{tabular}[l]{@{}l@{}}CMS-SUS-16-045~\cite{CMS:2017bki}\end{tabular} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow W^{\pm}\tilde{\chi}_1^0h\tilde{\chi}_1^0$& 1$ \ell$ 2b + $ E_{\rm T}^{\rm miss}$ & 35.9 \\ \\ \begin{tabular}[l]{@{}l@{}} CMS-SUS-16-039~\cite{CMS:2017moi} \end{tabular} &\begin{tabular}[c]{@{}c@{}c@{}c@{}c@{}} $\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow \ell\tilde{\nu}\ell\tilde{\ell}$\\$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow\tilde{\tau}\nu\tilde{\ell}\ell$\\$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow\tilde{\tau}\nu\tilde{\tau}\tau$\\ $\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$\\$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WH\tilde{\chi}_1^0\tilde{\chi}_1^0$\end{tabular} & n$\ell(n\textgreater{}0)$($\tau$) + $E_{\rm T}^{\rm miss}$& 35.9\\ \\ \begin{tabular}[l]{@{}l@{}}CMS-SUS-16-034~\cite{CMS:2017kxn}\end{tabular}&$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow W\tilde{\chi}_1^0Z(h)\tilde{\chi}_1^0$ & n$\ell$(n\textgreater{}=2) + nj(n\textgreater{}=1) + $E_{\rm T}^{\rm miss}$ & 35.9 \\ \\ \begin{tabular}[l]{@{}l@{}}ATLAS-1803-02762~\cite{ATLAS:2018ojr}\end{tabular} &\begin{tabular}[c]{@{}c@{}c@{}c@{}}$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$\\$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow \nu\tilde{\ell}l\tilde{\ell}$\\$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}\rightarrow \nu\tilde{\ell}\nu\tilde{\ell}$\\ $ \tilde{\ell}\tilde{\ell}\rightarrow \ell\tilde{\chi}_1^0\ell\tilde{\chi}_1^0$\end{tabular} & n$ \ell$ (n\textgreater{}=2) + $ E_{\rm T}^{\rm miss}$ & 36.1 \\ \\ \begin{tabular}[l]{@{}l@{}}ATLAS-1812-09432~\cite{ATLAS:2018qmw}\end{tabular} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow Wh\tilde{\chi}_1^0\tilde{\chi}_1^0$ & n$ \ell$ (n\textgreater{}=0) + nj(n\textgreater{}=0) + nb(n\textgreater{}=0) + n$\gamma$(n\textgreater{}=0) + $E_{\rm T}^{\rm miss}$ & 36.1 \\ \\ \begin{tabular}[l]{@{}l@{}}ATLAS-1806-02293~\cite{ATLAS:2018eui}\end{tabular} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$ &n$\ell$(n\textgreater{}=2) + nj(n\textgreater{}=0) + $E_{\rm T}^{\rm miss}$ & 36.1 \\ \\ \begin{tabular}[l]{@{}l@{}}ATLAS-1912-08479~\cite{ATLAS:2019wgx}\end{tabular} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow W(\rightarrow l\nu)\tilde{\chi}_1^0Z(\rightarrow\ell\ell)\tilde{\chi}_1^0$& 3$\ell $ + $ E_{\rm T}^{\rm miss}$ & 139 \\ \\ \begin{tabular}[l]{@{}l@{}}ATLAS-1908-08215~\cite{ATLAS:2019lff}\end{tabular} &\begin{tabular}[c]{@{}c@{}}$\tilde{\ell}\tilde{\ell}\rightarrow \ell\tilde{\chi}_1^0\ell\tilde{\chi}_1^0$\\$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}$ \\ \end{tabular} & 2$\ell$ + $ E_{\rm T}^{\rm miss}$ & 139 \\ \\ \begin{tabular}[l]{@{}l@{}}ATLAS-1909-09226~\cite{Aad:2019vvf}\end{tabular} & $\tilde{\chi}_{2}^0\tilde{\chi}_1^{\pm}\rightarrow Wh\tilde{\chi}_1^0\tilde{\chi}_1^0$ & 1$\ell$ + h($\rightarrow$ bb) + $ E_{\rm T}^{\rm miss}$ & 139 \\ \hline\\ \end{tabular}} \end{table} \begin{table}[] \caption{Experimental analyses used in this study. All the analyses have been implemented in~\texttt{CheckMATE-2.0.29}, and some of them were implemented by us.} \label{tab:2} \vspace{0.1cm} \resizebox{1.0 \textwidth}{!}{ \begin{tabular}{cccc} \hline\hline \texttt{Name} & \texttt{Scenario} &\texttt{Final State} &$\texttt{Luminosity} (\texttt{fb}^{\texttt{-1}})$ \\\hline \texttt{ATLAS-1909-09226}~\cite{Aad:2019vvf} & $\tilde{\chi}_{2}^0\tilde{\chi}_1^{\pm}\rightarrow Wh\tilde{\chi}_1^0\tilde{\chi}_1^0$ & $1\ell + h(\rightarrow bb) + \text{E}_\text{T}^{\text{miss}}$ & 139 \\\\ \multirow{3}{*}{\texttt{ATLAS-1911-12606}~\cite{ATLAS:2019lng}} &$\tilde{\ell}\tilde{\ell}j\rightarrow \ell\tilde{\chi}_1^0 \ell\tilde{\chi}_1^0j$& \multirow{3}{*}{$2\ell + nj(n\textgreater{}=0) + \text{E}_\text{T}^{\text{miss}}$} & \multirow{3}{*}{139} \\ &$(\text{Wino})\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}j\rightarrow W^{\star}Z^{\star}\tilde{\chi}_1^0\tilde{\chi}_1^{0}j$& & \\ &$(\text{Higgsino})\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}j$ + $\tilde{\chi}_1^{+}\tilde{\chi}_1^{-}j$ + $\tilde{\chi}_2^0\tilde{\chi}_1^{0}j$& & \\\\ \texttt{CMS-SUS-20-001}~\cite{CMS:2020bfa}& $\tilde{\chi}_{2}^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$ & $2\ell + nj(n\textgreater{}0) + \text{E}_\text{T}^{\text{miss}}$ & 137 \\\\ \multirow{2}{*}{\texttt{ATLAS-1908-08215}~\cite{ATLAS:2019lff}} &$\tilde{\ell}\tilde{\ell}\rightarrow \ell\tilde{\chi}_1^0\ell\tilde{\chi}_1^0$& \multirow{2}{*}{$2\ell + \text{E}_{\text{T}}^{\text{miss}}$} & \multirow{2}{*}{139} \\ &$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}(\tilde{\chi}_1^{\pm}\rightarrow \tilde{\ell}\nu/\tilde{\nu}\ell)$& & \\\\ \texttt{ATLAS-2106-01676}~\cite{ATLAS:2021moa} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow W^{(*)}Z^{(*)}\tilde{\chi}_1^0\tilde{\chi}_1^0$,$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow
Wh\tilde{\chi}_1^0\tilde{\chi}_1^0$&$3\ell + \text{E}_\text{T}^{\text{miss}}$ & 139 \\\\ \multirow{3}{*}{\texttt{ATLAS-1803-02762}~\cite{ATLAS:2018ojr}} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$,$\nu\tilde{\ell}l\tilde{\ell}$&\multirow{3}{*}{n$ \ell$ (n\textgreater{}=2) + $E_{\rm T}^{\rm miss}$}&\multirow{3}{*}{36.1} \\&$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}\rightarrow \nu\tilde{\ell}\nu\tilde{\ell}$&\\& $ \tilde{\ell}\tilde{\ell}\rightarrow\ell\tilde{\chi}_1^0\ell\tilde{\chi}_1^0$\\\\ \multirow{5}{*}{\texttt{ATLAS-1802-03158}~\cite{ATLAS:2018nud}} &$\tilde{g}\tilde{g}\rightarrow 2q\tilde{\chi}_1^0 2q\tilde{\chi}_1^0(\rightarrow \gamma \tilde{G})$&\multirow{5}{*}{n$ \gamma$ (n\textgreater{}=1) + nj(n\textgreater{}=0) + $E_{\rm T}^{\rm miss}$}&\multirow{5}{*}{36.1} \\&$\tilde{g}\tilde{g}\rightarrow 2q\tilde{\chi}_1^0(\rightarrow \gamma \tilde{G}) 2q\tilde{\chi}_1^0(\rightarrow Z \tilde{G})$\\&$\tilde{q}\tilde{q}\rightarrow q\tilde{\chi}_1^0(\rightarrow \gamma \tilde{G}) q\tilde{\chi}_1^0(\rightarrow \gamma \tilde{G})$&\\&$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow Z/h\tilde{\chi}_1^0 W\tilde{\chi}_1^0$\\& $\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\pm}\rightarrow W\tilde{\chi}_1^0 W\tilde{\chi}_1^0$ \\\\ \multirow{3}{*}{\texttt{ATLAS-1712-08119}~\cite{ATLAS:2017vat}} &$\tilde{\ell}\tilde{\ell}\rightarrow \ell\tilde{\chi}_1^0\ell\tilde{\chi}_1^0$& \multirow{3}{*}{$2\ell + nj(n\textgreater{}=0) + \text{E}_\text{T}^{\text{miss}}$} & \multirow{3}{*}{36.1} \\ &$(\text{Wino})\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$& & \\ &$(\text{Higgsino})\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}$ + $\tilde{\chi}_1^{+}\tilde{\chi}_1^{-}$ + $\tilde{\chi}_2^0\tilde{\chi}_1^0$& & \\\\ \multirow{2}{*}{\texttt{CMS-SUS-17-004}~\cite{CMS:2018szt}} &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow WZ\tilde{\chi}_1^0\tilde{\chi}_1^0$,$ WH\tilde{\chi}_1^0\tilde{\chi}_1^0$& \multirow{2}{*}{$n\ell(n\textgreater{}0) + \text{E}_\text{T}^{\text{miss}}$}&\multirow{2}{*}{35.9} \\ &$\tilde{\chi}_1^0\tilde{\chi}_1^{0}\rightarrow ZZ\tilde{G}\tilde{G}$,$HZ\tilde{G}\tilde{G}$,$HH\tilde{G}\tilde{G}$& &\\\\ \multirow{3}{*}{\texttt{CMS-SUS-16-048}~\cite{CMS:2018kag}} &$\tilde{t}\tilde{t}\rightarrow b\tilde{\chi}_1^{\pm}b\tilde{\chi}_1^{\pm}$& \multirow{3}{*}{$n\ell(n\textgreater{}=0) + nb(n\textgreater{}=0) + nj(n\textgreater{}=0) + \text{E}_\text{T}^{\text{miss}}$} & \multirow{3}{*}{35.9} \\ &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow W^{*}Z^{*}\tilde{\chi}_1^0\tilde{\chi}_1^0$& & \\ &$(\text{Higgsino})\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}/\tilde{\chi}_1^0$& & \\\\ \multirow{3}{*}{\texttt{CMS-SUS-PAS-16-025}~\cite{CMS:2016zvj}} &$\tilde{t}\tilde{t}\rightarrow b\tilde{\chi}_1^{\pm}b\tilde{\chi}_1^{\pm}$& \multirow{3}{*}{$n\ell(n\textgreater{}=0) + nb(n\textgreater{}=0) + nj(n\textgreater{}=0) + \text{E}_\text{T}^{\text{miss}}$} & \multirow{3}{*}{12.9} \\ &$\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}\rightarrow W^{*}Z^{*}\tilde{\chi}_1^0\tilde{\chi}_1^0$& & \\\\ &$(\text{Higgsino})\tilde{\chi}_2^0\tilde{\chi}_1^{\pm}/\tilde{\chi}_1^0$& & \\\\ \multirow{2}{*}{\texttt{ATLAS-CONF-2016-096}~\cite{ATLAS:2016uwq}} &$\tilde{\chi}_1^{\pm}\tilde{\chi}_1^{\mp}(\tilde{\chi}_1^{\pm}\rightarrow \tilde{\ell}\nu/\tilde{\nu}\ell)$& \multirow{2}{*}{$n\ell(n\textgreater{}=2) + \text{E}_{\text{T}}^{\text{miss}}$} & \multirow{2}{*}{13.3} \\\\ &$\tilde{\chi}_1^{\pm}\tilde{\chi}_2^{0}(\tilde{\chi}_1^{\pm}\rightarrow \tilde{\ell}\nu/\tilde{\nu}\ell, \tilde{\chi}_2^{0}\rightarrow 
\tilde{\ell}\ell/\tilde{\nu}\nu)$& &\\ \hline \end{tabular}} \end{table} \vspace{-0.4cm} \section{\label{numerical study} Explaining $\Delta a_\mu$ in $\mathbb{Z}_3$-NMSSM} \vspace{-0.2cm} \subsection{\label{scan} Research strategies} In order to find out the features of the $\mathbb{Z}_3$-NMSSM in explaining the anomaly, a sophisticated scan was performed by the \texttt{MultiNest} algorithm~\cite{Feroz:2008xx} with $n_{\rm live}=8000$ in this study\footnote{The parameter $n_{\rm live}$ in the \texttt{MultiNest} algorithm controls the number of active points sampled from the prior distribution in each iteration of the scan.}. The explored parameter space was given by: \begin{eqnarray} \label{amuNMSSM-scan} && 0 < \lambda \leq 0.7, ~~|\kappa| \leq 0.7, ~~1 \leq \tan \beta \leq 60,~~100 ~{\rm GeV} \leq \mu \leq 1~{\rm TeV}, \nonumber \\ && |A_\kappa| \leq 1 ~{\rm TeV}, ~~|A_t| \leq 5~{\rm TeV},~~10~{\rm GeV} \leq A_\lambda \leq 5~{\rm TeV},~~|M_1| \leq 1.5~{\rm TeV}, \nonumber \\ && 100~\rm{GeV} \leq M_2 \leq 1.5~\rm{TeV}, ~~100~\rm{GeV} \leq \tilde{M}_{\tilde{\mu}_L} \leq 1~\rm{TeV}, ~~100~\rm{GeV} \leq \tilde{M}_{\tilde{\mu}_R} \leq 1~\rm{TeV}, \nonumber \end{eqnarray} where $\tilde{M}_{\tilde{\mu}_L}$ and $\tilde{M}_{\tilde{\mu}_R}$ denote the soft-breaking masses of the left- and right-handed Smuon, respectively. The Gluino mass was fixed at $M_3=3\ensuremath{\,\text{TeV}}$. The other dimensional parameters that were not crucial to this work were set to 2 TeV, including $A_\mu$ and the soft-breaking masses and soft trilinear coefficients for all squarks and the first- and third-generation sleptons. All the input parameters were defined at the renormalization scale $Q = 1~{\rm TeV}$ and followed flat prior distributions. In the numerical calculation, the $\mathbb{Z}_3$-NMSSM model file was constructed by the package \texttt{SARAH-4.14.3}~\cite{Staub:2008uz, Staub:2012pb, Staub:2013tta, Staub:2015kfa}. Particle mass spectra and low-energy observables, such as $a_\mu^{\rm SUSY}$ and B-physics observables, were calculated by the programmes \texttt{SPheno-4.0.4}~\cite{Porod:2003um, Porod:2011nf} and \texttt{FlavorKit}~\cite{Porod:2014xia}. The DM abundance and direct/indirect detection cross-sections were obtained by the package~\texttt{micrOMEGAs-5.0.4}~\cite{Belanger:2001fz, Belanger:2005kh, Belanger:2006is, Belanger:2010pz, Belanger:2013oya, Barducci:2016pcb}. The likelihood function that guided the scan process was dominated by the Gaussian distribution of $a_\mu^{\rm SUSY}$, which was expressed as: \begin{equation} \mathcal{L}_{a_\mu^{\rm SUSY}}=\exp\left[-\frac{1}{2} \left( \frac{a_{\mu}^{\rm SUSY}- 2.51\times 10^{-9}}{5.9\times 10^{-10} }\right)^2\right], \end{equation} where the central value and error were taken from the combined result in Eq.(\ref{delta-amu}). Concerning experimental constraints, the likelihood function was set to 1 if the corresponding experimental limit was satisfied; otherwise, it took $e^{-100}$ as a penalty. These constraints included: \begin{itemize} \item {\bf DM relic density.} Samples were required to predict the correct DM relic density with $0.096 \leq \Omega h^2 \leq 0.144$, which corresponds to the Planck-2018 measurement, $\Omega h^2 = 0.120$~\cite{Planck:2018vyg}, with an assumed 20\% theoretical uncertainty. \item {\bf DM direct and indirect detections.} The spin-dependent (SD) and spin-independent (SI) DM-nucleon scattering cross-sections should be lower than their upper limits from the latest XENON-1T experiments~\cite{Aprile:2018dbl,Aprile:2019dbj}.
In addition, the predicted gamma-ray spectrum from DM annihilation in dwarf spheroidal galaxies should agree with the limits placed by the Fermi-LAT observations~\cite{Fermi-LAT:2015att}. This restriction was implemented through the joint-likelihood analysis suggested in~\cite{Carpenter:2016thc}. \item {\bf Higgs physics.} The lightest CP-even Higgs boson corresponds to the SM-like Higgs boson discovered at the LHC. Its properties should coincide with the corresponding data obtained by the ATLAS and CMS collaborations at the 95\% confidence level. This requirement was examined with the programme \texttt{HiggsSignals-2.2.3}~\cite{HS2013xfa,HSConstraining2013hwa,HS2014ewa,HS2020uwn}. In addition, the direct searches for extra Higgs bosons at LEP, Tevatron and the LHC were checked with the programme \texttt{HiggsBounds-5.3.2}~\cite{HB2008jh,HB2011sb,HB2013wla,HB2020pkv}. \item {\bf B-physics.} The branching ratios of $B_s \to \mu^+ \mu^-$ and $B \to X_s \gamma$ should agree with their experimental measurements at the $2\sigma$ level~\cite{PhysRevD.98.030001}. \item {\bf Vacuum stability.} The vacuum state of the scalar potential consisting of the Higgs fields and the last two generations of slepton fields should be either stable or long-lived. This condition was checked with the programme \texttt{Vevacious}~\cite{Camargo-Molina:2013qva}. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{M1-M2.png}\hspace{-0.3cm} \includegraphics[width=0.45\textwidth]{mu-tanb.png} \caption{\label{Z3Amufig1} Samples obtained from the parameter scan, projected onto the $|M_1|-M_2$ plane~(left panel) and the $\mu-\tan\beta$ plane~(right panel). The grey points represent the samples that are consistent with the results of DM physics experiments, the blue triangles denote the ones that can further explain the anomaly at the $2\sigma$ level, and the red stars are those that satisfy all experimental constraints, in particular the limits from the LHC searches for SUSY.} \end{figure} To assess the impact of the LHC searches for SUSY on the scan results, the following processes were studied by Monte Carlo simulations: \begin{equation}\begin{split} pp \to \tilde{\chi}_i^0\tilde{\chi}_j^{\pm} &, \quad i = 2, 3, 4, 5; \quad j = 1, 2; \\ pp \to \tilde{\chi}_i^{\pm}\tilde{\chi}_j^{\mp} &, \quad i,j = 1, 2; \\ pp \to \tilde{\chi}_i^{0}\tilde{\chi}_j^{0} &, \quad i,j = 2, 3, 4, 5; \\ pp \to \tilde{\mu}_i \tilde{\mu}_j &,\quad i,j = L, R. \end{split}\end{equation} Specifically, in order to save computing time, the programme \texttt{SModelS-2.1.1}~\cite{Khosa:2020zar}, which contains the experimental analyses in Table~\ref{tab:1}, was first used to exclude the obtained samples. Given that this programme's capability of implementing the LHC constraints is limited by its database and by strict usage prerequisites, the remaining samples were further surveyed by simulating the analyses listed in Table~\ref{tab:2}. In this research, the cross-sections for each process were calculated to next-to-leading order with the programme \texttt{Prospino2}~\cite{Beenakker:1996ed}. A total of 60000 and 40000 events were generated for the electroweakino and slepton production processes, respectively, with the package \texttt{MadGraph\_aMC@NLO}~\cite{Alwall:2011uj, Conte:2012fm}, and their parton shower and hadronization were performed with the programme \texttt{PYTHIA8}~\cite{Sjostrand:2014zea}. Detector simulations were implemented with the programme \texttt{Delphes}~\cite{deFavereau:2013fsa}.
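Before turning to the event-level analysis, it is useful to summarize how the scan was guided. The sketch below is illustrative only: all names are hypothetical, and the real calculation chains the \texttt{SARAH}/\texttt{SPheno}/\texttt{micrOMEGAs} outputs. It combines the Gaussian term for $a_\mu^{\rm SUSY}$ with the hard constraint penalties described above. \begin{verbatim}
A_MU_CENTRAL, A_MU_ERROR = 2.51e-9, 5.9e-10
LOG_PENALTY = -100.0  # each violated constraint multiplies L by e^{-100}

def log_likelihood(a_mu_susy, constraints_ok):
    """Gaussian log-likelihood in a_mu^SUSY plus constraint penalties."""
    logl = -0.5 * ((a_mu_susy - A_MU_CENTRAL) / A_MU_ERROR) ** 2
    # constraints_ok: relic density, DM DD/ID, Higgs physics, B-physics,
    # vacuum stability; each entry is True if the limit is satisfied.
    logl += LOG_PENALTY * sum(1 for ok in constraints_ok if not ok)
    return logl

# Example: a point with a_mu^SUSY = 2.0e-9 that passes all constraints
print(log_likelihood(2.0e-9, [True] * 5))
\end{verbatim}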
Finally, the event files were passed to the package \texttt{CheckMATE-2.0.29}~\cite{Drees:2013wra,Dercks:2016npn, Kim:2015wza} to calculate the $R$ value, defined by $R \equiv {\rm max}\{S_i/S_{i,{\rm obs}}^{95}\}$ over all the involved analyses, where $S_i$ represents the simulated event number of the $i$-th signal region (SR), and $S_{i,{\rm obs}}^{95}$ is the corresponding $95\%$ confidence level upper limit. Evidently, $R > 1$ indicates that the sample is experimentally excluded if the involved uncertainties are neglected~\cite{Cao:2021tuh}, while $R < 1$ means that it is consistent with the experimental analyses. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{mn1-mc1.png}\hspace{-0.3cm} \includegraphics[width=0.45\textwidth]{mn1-mu.png} \caption{\label{Z3Amufig2} Similar to Fig.~\ref{Z3Amufig1}, but projected onto the $|m_{\tilde{\chi}^0_1}|-m_{\tilde{\chi}^\pm_1}$ plane~(left panel) and the $|m_{\tilde{\chi}^0_1}|-m_{\tilde{\mu}_1}$ plane~(right panel).} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{mn1-msmuL.png}\hspace{-0.3cm} \includegraphics[width=0.45\textwidth]{mn1-msmuR.png} \caption{\label{Z3Amufig3} Similar to Fig.~\ref{Z3Amufig1} and Fig.~\ref{Z3Amufig2}, but projected onto the $|m_{\tilde{\chi}^0_1}|-m_{\tilde{\mu}_{\rm{L}}}$ plane (left panel) and the $|m_{\tilde{\chi}^0_1}|-m_{\tilde{\mu}_{\rm{R}}}$ plane (right panel).} \end{figure} \vspace{-0.2cm} \subsection{\label{region} Key features of the results} \begin{table}[] \centering \caption{\label{tab:number} Sample numbers before and after implementing the LHC restrictions. These samples are marked in blue and red, respectively, in Figs.~\ref{Z3Amufig1}-\ref{Z3Amufig3}. } \vspace{0.2cm} \begin{tabular}{l|c|c} \hline Annihilation Mechanisms & Without LHC Constraints & With LHC Constraints \\ \hline \multicolumn{1}{l|}{Total Sample} & 21241 & 7280 \\ \multicolumn{1}{l|}{Bino-Wino Co-annihilation} & 18517 & 7189 \\ \multicolumn{1}{l|}{Bino-Smuon Co-annihilation} & 1886 & 87 \\ \multicolumn{1}{l|}{$Z$-funnel} & 401 & 0 \\ \multicolumn{1}{l|}{$h_1$-funnel} & 323 & 0 \\\hline \end{tabular} \end{table} All samples obtained by the scan were projected onto two-dimensional planes in Figs.~\ref{Z3Amufig1}-\ref{Z3Amufig3}, where they are classified by three different colors to distinguish the impacts of the DM experiments, the muon anomaly, and the LHC probes of SUSY on the parameter space. From these plots, the following conclusions can be inferred: \begin{itemize} \item If only the constraints from DM physics are implemented, $\tilde{\chi}_1^0$ is Bino-dominated when $|m_{\tilde{\chi}_1^0}| \lesssim 700~{\rm GeV}$. It may achieve the correct relic density via the $Z$-funnel, the $h_1$-funnel, or co-annihilation with Wino-like electroweakinos and/or Smuons. $\tilde{\chi}_1^0$ may also be Higgsino-dominated when $800~{\rm GeV} \lesssim |m_{\tilde{\chi}_1^0}| < 1~{\rm TeV}$ and $800~{\rm GeV} \lesssim |M_1| \leq 1.5~{\rm TeV}$. This is a scenario specific to the $\mathbb{Z}_3$-NMSSM~\cite{Cao:2016nix}, since the mass splittings among the Higgsino-dominated electroweakinos, i.e., $\tilde{\chi}_1^0$, $\tilde{\chi}_2^0$, and $\tilde{\chi}_1^\pm$, depend on $\lambda$ and the Singlino mass $m_{\tilde{S}} \equiv 2\kappa \mu/\lambda$ (see Eq.~(3.3) in~\cite{Cao:2021lmj}), and consequently the effective cross-section of their co-annihilation may differ sizably from that of the MSSM~\cite{Griest:1990kh,Baker:2015qna}.
In the intermediate mass range $700~{\rm GeV} < |m_{\tilde{\chi}_1^0}| < 800~{\rm GeV}$, $\tilde{\chi}_1^0$ may be either Bino-dominated or Higgsino-dominated, as reflected in the left panel of Fig.~\ref{Z3Amufig2}. In addition, the DM direct detection experiments require $\mu \gtrsim 300~{\rm GeV}$, which was explained by analytic formulae in~\cite{Cao:2019qng}. \item If the $\mathbb{Z}_3$-NMSSM is further required to explain the $(g-2)_\mu$ anomaly at the $2\sigma$ level, $\tilde{\chi}_1^0$ should be lighter than about $620~{\rm GeV}$ and thus Bino-dominated. As $|m_{\tilde{\chi}_1^0}|$ increases, $\mu$, $m_{\tilde{\mu}_L}$, and $m_{\tilde{\mu}_R}$ prefer increasingly small values. This tendency is more obvious for $\mu$ and $m_{\tilde{\mu}_L}$ than for $m_{\tilde{\mu}_R}$. The fundamental reason is that the $\mathbb{Z}_3$-NMSSM decouples in the heavy-sparticle limit and that $a_\mu^{\rm SUSY}$ is more sensitive to $\mu$ and $m_{\tilde{\mu}_L}$ than to $m_{\tilde{\mu}_R}$, as shown in Eqs.~(\ref{eq:WHL})-(\ref{eq:BLR}). In addition, $\tan \beta$ must be larger than about 10 to explain the anomaly. \item The LHC searches for SUSY have a significant impact on the explanation of the $(g-2)_\mu$ anomaly. Specifically, lower bounds are set on the masses of the involved sparticles, i.e., $|m_{\tilde{\chi}_1^0}| \gtrsim 275~{\rm GeV}$, $m_{\tilde{\chi}_1^\pm} \gtrsim 300~{\rm GeV}$, $m_{\tilde{\mu}_L} \gtrsim 310~{\rm GeV}$, $m_{\tilde{\mu}_R} \gtrsim 350~{\rm GeV}$, and $\mu \gtrsim 460~{\rm GeV}$. The basic reasons are as follows: if $\tilde{\chi}_1^0$ is lighter, more missing momentum is emitted in the sparticle production processes at the LHC, which improves the sensitivities of the experimental analyses; if the sparticles other than $\tilde{\chi}_1^0$ are lighter, they are more copiously produced at the LHC, increasing the number of events containing multiple leptons. In Appendix A of~\cite{Cao:2022chy}, the reason why $\mu \lesssim 500~{\rm GeV}$ is disfavored was also explained by analytic expressions. Since lighter sparticles are forbidden, $\tan \beta$ has to be larger than about 20 to resolve the discrepancy. \end{itemize} Table~\ref{tab:number} shows the impact of the LHC constraints on DM physics. This table indicates that the resonant annihilations have been completely excluded, and that it is the co-annihilation with Wino-like electroweakinos (in most cases) and/or Smuons (in a few cases) that is responsible for the measured relic density\footnote{For the same masses of the electroweakinos and Smuons, the cross-sections of DM co-annihilation with Smuons are much lower than those with the electroweakinos, so the Smuons must be much lighter than the electroweakinos to achieve the correct relic density.}. The LHC constraints are more efficient in excluding the Smuon co-annihilation mechanism than the electroweakino co-annihilation mechanism. The reason is that the Smuon is the next-to-lightest supersymmetric particle (NLSP) in the former mechanism, and it can increase the leptonic signal rate since heavy sparticles decay into it~\cite{Cao:2021tuh,Cao:2022chy}. It was verified that the signal regions with more than 3 leptons in CMS-SUS-16-039 and with $E_{\rm T}^{\rm miss} > 200~{\rm GeV}$ in CMS-SUS-20-001 played a crucial role in excluding the samples. Concerning the obtained results, several comments are in order: \begin{itemize} \item It is evident that the parameter distributions shown in Figs.~\ref{Z3Amufig1}-\ref{Z3Amufig3} depend on the scan strategy.
To make the conclusions of this study as robust as possible, different strategies, e.g., narrowing or broadening the parameter space in Eq.~(\ref{amuNMSSM-scan}) and/or changing the prior distribution of the inputs, were adopted and the results compared. It was found that the main conclusions were scarcely affected by the choice of strategy. In this respect, it should be noted that a lower bound on the Wino mass, $|M_2| \gtrsim 230~{\rm GeV}$, has been set for the Bino-Wino co-annihilation case by the experimental analyses in~\cite{ATLAS:2021moa}. This conclusion can be directly applied to this study. \item Analyzing the properties of the samples indicates that all the singlet-dominated particles, including the Singlino-dominated neutralino and the CP-even and -odd Higgs bosons, are heavier than $600~{\rm GeV}$. It also indicates that $\lambda < 0.4$ and that the Singlino-dominated neutralino never co-annihilates with the Bino-dominated DM to obtain the measured density~\cite{Baum:2017enm}. As a result, the $\mathbb{Z}_3$-NMSSM and the MSSM have roughly the same underlying physics in explaining the anomaly, which means that the conclusions of this work should apply to the MSSM as well. In addition, this study did not find any case in which $\tilde{\chi}_1^0$ was Singlino-dominated. The main reason is its suppressed Bayesian evidence~\cite{Zhou:2021pit}. This conclusion was also commented on in~\cite{Cao:2022chy}. \item Throughout this study, neither the theoretical uncertainties incurred by the simulations nor the experimental (systematic and statistical) uncertainties were taken into account. These effects can relax the LHC constraints. However, given the advent of the high-luminosity LHC, it is expected that much tighter constraints on the $\mathbb{Z}_3$-NMSSM will be obtained in the near future. \item In some high-energy SUSY-breaking theories, $\tilde{\tau}$ may be the NLSP. In this case, the production rate of the $e/\mu$ final states will change in comparison with the current study. As a result, both the LHC constraints and subsequently the explanation of the anomaly will show different features (see, e.g., the discussion in~\cite{Hagiwara:2017lse}). Such a possibility will be discussed in our future work. \end{itemize} \vspace{-0.2cm} \section{\label{conclusion-section}Summary} The discrepancy between $a_\mu^{\rm Exp}$ and $a_\mu^{\rm SM}$ was recently corroborated by the E989 experiment at FNAL. It hints at the existence of new physics, and supersymmetry, as one of the most compelling candidates, has attracted a lot of attention. However, most of the studies explaining the anomaly focused on the MSSM, and few works were carried out in the $\mathbb{Z}_3$-NMSSM. This fact motivates us to explore the implications of the anomaly for this extended theory. One remarkable improvement of this study over the previous ones, in particular Refs.~\cite{Chakraborti:2020vjp} and \cite{Baum:2021qzx}, is that the constraints from the LHC probes of SUSY are surveyed comprehensively to limit the parameter space of the $\mathbb{Z}_3$-NMSSM. As a result, lower bounds on the sparticle mass spectra are obtained, i.e., $|M_1| \gtrsim 275~{\rm GeV}$, $M_2 \gtrsim 300~{\rm GeV}$, $\mu \gtrsim 460~{\rm GeV}$, $m_{\tilde{\mu}_L} \gtrsim 310~{\rm GeV}$, and $m_{\tilde{\mu}_R} \gtrsim 350~{\rm GeV}$.
The basic reasons for these results are as follows: if $\tilde{\chi}_1^0$ is lighter, more missing momentum is emitted in the sparticle production processes at the LHC, which improves the sensitivities of the experimental analyses; while if the sparticles other than $\tilde{\chi}_1^0$ are lighter, they are more copiously produced at the LHC, increasing the number of events containing multiple leptons. These bounds are far beyond the reach of the LEP experiments in searching for SUSY, and have not been noticed before. They have a significant impact on DM physics, e.g., the popular $Z$- and Higgs-funnel regions have been excluded, and the Bino-dominated neutralino DM has to co-annihilate with the Wino-dominated electroweakinos (in most cases) and/or Smuons (in a few cases) to obtain the correct density. Furthermore, it is inferred that these conclusions should apply to the MSSM, since the underlying physics for the bounds is the same. This research provides useful information for future SUSY searches at colliders. \vspace{-0.3cm} \section*{Acknowledgement} The authors thank Lei Meng for helpful discussions about the posterior probability distribution function of the scan performed in this study. This work is supported by the National Natural Science Foundation of China (NNSFC) under grant No. 12075076. \bibliographystyle{CitationStyle}
\section{Conclusions and Future Work}\label{sec:contribution} In this paper, we presented a novel experience-based pipeline for \acf{SAGAT}. Our approach enhances the deployment reliability of current state-of-the-art methods on grasp affordance detection by extracting multiple grasp configuration candidates from a given grasp affordance region. The outcome of executing a task from different grasp candidates is estimated via forward simulation. These estimates are evaluated and ranked via a heuristic confidence function that relates task performance to the grasp configuration candidates. Such information is stored in a library of task affordances, which serves as a basis for one-shot transfer estimation to identify grasp affordance configurations similar to those previously experienced, with the insight that similar regions lead to similar deployments of the task. We evaluate the method's efficacy on novel task affordance problems by training on a single object and testing on multiple new ones. We observe a significant performance improvement of up to approximately $11.7\%$ in our experiments when using our proposal in comparison to state-of-the-art approaches on grasp affordance detection. Experimental evaluation on a PR2 robotic platform demonstrates highly reliable deployability of the proposed method when dealing with real-world task affordance problems. This work encourages multiple interesting directions for future work. Our follow-up work will study a unified probabilistic framework to infer the most suitable grasp affordance candidate. We envision that this will allow sets of actions and grasps to be predicted when dealing with multiple correlated objects in the scene. Another interesting extension is the assessment of the end-state comfort effect for grasping in human-robot collaboration tasks, such that the robot's grasp affordance considers the human's grasp capabilities. \section{Experimental Evaluation and Discussion}\label{sec:results} \begin{figure*}[th!] \centering\includegraphics[width= 17.7cm]{Figures/contrast} \caption{Entropy measurements on the \ac{2-D} frame for the pouring task. We consider as reference a socially acceptable pouring demonstration (green) against successful (blue) and undesired (red) task repetitions from different grasp candidates. The candidates are numbered with the corresponding observed effect. Successful tasks present low entropy, whereas undesired effects have higher entropy. Our proposal exploits this relation to discern among grasp candidates at deployment time. \label{fig:examples_demo}} \end{figure*} The proposed methodology endows a robot with the ability to determine a suitable grasp configuration to succeed at an affordance task. Importantly, such a challenge is addressed without the need for extensive prior trial and error. We demonstrate the potential of our method following the experimental setup described in Section~\ref{sec:experimental_setup} and a thorough evaluation based on the following tests: (i)~the spatial similarity between learnt and computed configurations across objects (Section~\ref{sc:reach_grasp}), (ii)~the accuracy of the task affordance deployment when transferred to new objects (Section~\ref{sc:zero}), and (iii)~the performance of our proposal when compared to other methodologies (Section~\ref{sc:reliability}).
\subsection{Experimental Setup \label{sec:experimental_setup}} The end-to-end execution framework presented in Algorithm~\ref{alg:framework_kb} is deployed on a PR2 robotic platform, in both simulated and real-world scenarios. We use a Kinect mounted on the PR2's head as our visual sensor, and the position sensors on the right arm joints to encode the end-effector state pose for learning the task policies in the library. We evaluate the proposed approach with an experimental setup that considers objects with varied affordable actions and suitable grasping configurations. In particular, the library of task affordances is built solely using the blue mug depicted in Fig.~\ref{fig:examples_demo}, but evaluated with the objects depicted in Fig.~\ref{fig:objects}. As can be observed, the training and testing sets present a challenging and significant variability in the grasp affordance relation. Our experimental setup also considers multiple affordances, namely: pouring, handover and shaking. The choice of these affordances is motivated by them being both common among the considered objects and socially acceptable according to~\cite{ardon2019learning}. \begin{figure}[b!] \centering \includegraphics[width=6.8cm]{Figures/objects} \caption{Novel objects to test the self-assessed grasp transfer.\label{fig:objects}} \end{figure} The task policy and its expected effect corresponding to each affordance are taught to the robot via kinaesthetic demonstration. The end-effector state evolution is used to learn the task policy in the form of a set of \acp{DMP}, and the state evolution of the container's action region segmented on the \ac{2-D} camera frame is used to learn the expected effect. As depicted in Fig.~\ref{fig:examples_demo} for the pouring task, the learnt policy is replicated $9$ times from different grasping candidates, including suitable grasp affordances (blue) and undesired deployments (red). The collected demonstrations are used to adjust the confidence threshold in \eqref{eq:optimal_grasp} via a binary classifier, where the confidence level computed following \eqref{eq:conf} is the support, and the label $\{``successful", ``undesired"\}$ is the target. Only successful deployments are included in the library. \begin{figure*}[ht!] \centering \subfloat[Pour ($d_{hd}=0.21$)\label{fig:pour_d_h}]{ \centering \includegraphics[width=5.85cm]{Figures/pour_d_h_l} } \hspace{-0.4cm} \subfloat[Shake ($d_{hd}=0.23$)\label{fig:shake_d_h} ]{ \centering \includegraphics[width=5.85cm]{Figures/shake_d_h_l} } \hspace{-0.45cm} \subfloat[Handover ($d_{hd}=0.20$)\label{fig:handover_d_h}]{ \centering \includegraphics[width=5.85cm]{Figures/handover_d_h_l} } \caption{Visualisation of the dissimilarity metric between an object's action region and the corresponding suitable grasp configuration, in comparison to the mean dissimilarity observed during the demonstrations ($d_{hd}$, blue horizontal line).} \label{fig:object_samples_for_zero_shot} \end{figure*} \subsection{Spatial Similarity of Grasp Configurations\label{sc:reach_grasp}} Our method allows the system to perform one-shot transfer of grasp configurations to new objects. As explained in Section~\ref{sec:one_shot_library}, we rank the grasp candidates on new objects as those that closely resemble the experiences stored in the library of task affordances. This approximation is based on the expectation that similar spatial configurations should offer similar performance when dealing with the same task.
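For intuition, this transfer step can be sketched as follows (a minimal illustration; poses are reduced to 3-D offsets relative to the affordance region, and all names are hypothetical rather than our actual implementation): \begin{verbatim}
import numpy as np

def rank_candidates(candidates, region_center, stored_transforms):
    """Order new grasp candidates by similarity to prior experiences.

    stored_transforms: offsets of experienced grasps relative to the
    affordance region, sorted by decreasing confidence score.
    """
    best = np.asarray(stored_transforms[0])  # top-ranked prior experience
    rel = [np.asarray(g) - np.asarray(region_center) for g in candidates]
    order = np.argsort([np.linalg.norm(r - best) for r in rel])
    return [candidates[i] for i in order]  # most promising first
\end{verbatim}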
In this set of experiments, we demonstrate the validity of such a hypothesis by evaluating the spatial similarity between the proposals estimated on new objects and the ones previously identified as suitable and stored in the library. For an object, we calculate the Euclidean distance between the segmented action region $S_O$ and the obtained grasp configuration $g^*_p$. Fig.~\ref{fig:object_samples_for_zero_shot} shows the obtained distances, denoted as $d_h(S_O,g^*_p)$. The blue horizontal line represents the mean distance obtained during the demonstrations. Overall, we observe similar distances from action regions to grasp configurations across objects. For dissimilar cases such as $4$ and $5$ (ashtray and bowl, respectively), the difference stems from the fact that the obtained grasping region for most of the tasks lies on the edges of the object compartment. Even though these grasping configurations are relatively close to the action region, we will see in Table~\ref{tb:grasp} that the average performance of the tasks is preserved. To further evaluate similarity across the obtained grasping configurations, we are also interested in how much the system prunes the grasping space based on the information stored in the library. As defined in~\eqref{eq:optimal_grasp}, we use a confidence threshold for the pruning process of the grasping space. Thus, based on the prior of well-performing grasp configurations, highly dissimilar proposals are not considered in the self-assessed transfer process. Fig.~\ref{fig:rejection} depicts the rejection rate of grasp configuration proposals per task affordance. From the plot, we see that the pouring task shows the highest rejection rate, especially for objects that have handles. This hints that the grasping choice is more critical for this task. \begin{figure}[b!] \centering \includegraphics[width=7cm]{Figures/rejection_plot} \caption{Rejection rate of grasp candidates with prospective unsuccessful task deployment. Grasp configurations, as extracted with DeepGrasp \cite{chu2018real}, that do not relate to the prior on successful task deployment, as stored in the library, are rejected in the one-shot transfer scheme.\label{fig:rejection}} \end{figure} \subsection{One-Shot Transfer of Task Affordances\label{sc:zero}} \begin{figure}[bh!] \centering \subfloat[Pour task affordance\label{fig:pour}]{ \centering \includegraphics[width=8.4cm]{Figures/pour_results} } \subfloat[Shake task affordance\label{fig:shake} ]{ \centering \includegraphics[width=8.4cm]{Figures/shake_results} } \subfloat[Handover task affordance\label{fig:handover}]{ \centering \includegraphics[width=8.4cm]{Figures/handover_results} } \caption{Task affordance performance when deployed on novel objects (colour-coded lines) in comparison with the multiple successful demonstrations (green scale distribution).} \label{fig:trajectories} \end{figure} The second experimental test analyses the performance of our method when addressing task affordances on new objects. The goal of this evaluation is to determine whether the chosen grasp configuration enables objects to perform the task affordance as successfully as the prior stored in the library. Fig.~\ref{fig:trajectories} depicts the mean and variance (green scale) of the prior experiences in the library for the tasks pour, shake and handover. Each task was performed with three real objects with notably different features: a travel mug (dark blue), a measurement spoon (magenta), and a glass (blue).
The resulting effect when performing the tasks from the computed grasping configuration is colour-coded on top of the prior experience distribution. Depending on the task affordance, the three objects show different grasp affordance regions. After the one-shot self-assessment procedure, the computed grasp configurations are the most spatially similar to the most successful grasp configuration in the experience dataset. Importantly, as illustrated in Fig.~\ref{fig:trajectories}, this strategy is invariant to different initial and final states of the task. This is reflected in the obtained task affordance effect, which falls inside the variance of the demonstrations. \subsection{Comparison of Task Deployment Reliability\label{sc:reliability}} \begin{figure}[t!] \centering \includegraphics[width=8.7cm]{Figures/soa_contrast_reduced} \caption{Comparison of grasp affordance detection for the task of pouring with state-of-the-art methods and \ac{SAGAT}. The resulting grasp configuration proposals obtained with \ac{SAGAT} are highlighted for better visualisation. \label{fig:soa_comparison}} \end{figure} The last experimental test demonstrates to what extent the proposed method enhances the task deployment reliability when used in conjunction with methods for grasp affordance detection~\cite{AffordanceNet18,chu2019learning,ardon2019learning}. To conduct this evaluation, we use the open-source implementations of~\cite{AffordanceNet18,chu2019learning,ardon2019learning} on all objects illustrated in Fig.~\ref{fig:objects}, on both the real and simulated robotic platforms. The obtained grasp regions are used to execute the task in two different ways: (i)~in stand-alone fashion, i.e. as originally proposed, and (ii)~as input to our \ac{SAGAT} approach to determine the most suitable grasp candidate. Fig.~\ref{fig:soa_comparison} shows some examples of the grasp affordance detected with the previously mentioned methods and our approach. We use the policies in the learnt library of task affordances to replicate the pour, shake and handover tasks on each object, for each grasp affordance, and for each method when used stand-alone and combined with \ac{SAGAT}. This results in a total of $126$ task deployments on the robotic platform\footnote{A compilation of experiments can be found in: \url{https://youtu.be/nCCc3_Rk8Ks}}. Table~\ref{tb:grasp} summarises the obtained results. As can be observed, deploying a task using state-of-the-art methods on grasp affordance detection provides an average success rate of $79.2\%$ across tasks. With our approach, the deployability success is enhanced for all the tasks, with an average rate of $85.4\%$. Interestingly, the resulting $6.2\%$ improvement is not equally distributed across tasks; more challenging tasks experience a higher success rate. This is the case for the pouring task, where deployability success is increased by $11.67\%$. \begin{table}[t!]
\renewcommand{\arraystretch}{1.25} \centering \begin{tabular}{|p{0.8cm}|c|c|c|c|c|c|} \cline{2-7} \multicolumn{1}{c|}{} & \textbf{\notsotiny{\cite{AffordanceNet18}}} & \!\!\textbf{\notsotiny{\cite{AffordanceNet18}$+$}\notsotiny{SAGAT}\!\!} & \textbf{\notsotiny{\cite{ardon2019learning}}} & \!\!\textbf{\notsotiny{\cite{ardon2019learning}$+$}\notsotiny{SAGAT}\!\!} & \textbf{\notsotiny{\cite{chu2019learning}}} & \!\!\textbf{\notsotiny{\cite{chu2019learning}$+$}\notsotiny{SAGAT}\!\!} \\ \hline \!\!\!Pour & 70\% & \textbf{82\%} & 72\% & \textbf{83\%} & 73\% & \textbf{85\%}\\ \hline \!\!\!Shake & 84\% & \textbf{87\%} & 85\% & \textbf{87\%} & 86\% & \textbf{88\%}\\ \hline \!\!\!Handover & 80\% & \textbf{85\%} & 81\% & \textbf{86\%} & 82\% & \textbf{86\%} \\ \hline \end{tabular} \caption{Comparison of success rates on task affordance deployment when using state-of-the-art grasp affordance extractors stand-alone and with our method.\label{tb:grasp}} \end{table} \section{Proposed Method \label{sec:method}} An autonomous agent must be able to perform a task affordance in different scenarios. Given a particular object and task $\mathcal{T}$ to perform, the robot must select a suitable grasp affordance configuration $g^*_p$ that allows executing the task's policy $\pi_\tau$ successfully. Only the correct choice of both $g^*_p$ and $\pi_\tau$ leads to the robot being successful at addressing the task affordance problem. Despite the strong correlation between $g^*_p$ and the $\pi_\tau$ execution performance, current approaches in the literature consider these elements to be independent. This results in grasping configurations that are not suitable for completing the task. In this section, we introduce our approach to self-assess the selection of a suitable grasp affordance configuration according to an estimate of the task performance. Fig.~\ref{fig:framework} illustrates the proposed pipeline, which (i)~detects from visual information a set of grasping candidates lying in the object's grasp affordance space (Section~\ref{sc:kb}), (ii)~exploits a learnt library of task affordance policies to forward simulate the outcome of executing the task from the grasping candidates (Section~\ref{sc:mrcnn_and_dmp}), and then (iii)~evaluates the grasp configuration candidates subject to a heuristic confidence metric (Section~\ref{sec:entropy}), which allows for one-shot transfer of the grasp proposal (Section~\ref{sec:one_shot_library}). Finally, in Section~\ref{sec:framework}, we detail how these components fit into the scheme of a robotic agent dealing with task affordance problems autonomously. \subsection{Prediction of Grasp Affordance Configurations\label{sc:kb}} The overall goal of this work is, given an object's grasp affordance region $G^*$, to find a grasp configuration ${g^*_p}$ that allows the robot to successfully employ an object for a particular task. In the grasp affordance literature, it is common to visually detect and segment the grasp affordance region $G^*$ using mapping to labels~\cite{AffordanceNet18,ardon2019learning,chu2019learning}. While these methods all predict $g^*_p$ via visual detection hypotheses, none estimate the configuration proposals based on a task performance insight. This relational gap endangers a successful task execution. Instead, an autonomous agent should be capable of discerning the most suitable grasp that benefits the execution of a task.
To bridge this gap, in our method we consider a grasp affordance region $G^*$ in a generic form, such as the bounding box provided by \cite{ardon2019learning} (see Fig.~\ref{fig:kb}). We are interested in pruning this region by finding multiple grasp proposal candidates. With this aim, we use the pre-trained DeepGrasp model~\cite{chu2018real}, a deep CNN that computes reliable grasp configurations on objects. The output grasp proposals $g_{p_i}$ from DeepGrasp, which do not account for the affordance relation, are shown in Fig.~\ref{fig:deepGrasp}. The pruned region (see Fig.~\ref{fig:both}), denoted as ${g_{p_i} \in G^*}$, provides a set of grasp configuration candidates that accounts for both reliability and affordability. \subsection{Library of Task Affordances \label{sc:mrcnn_and_dmp}} The success of an affordance task $\mathcal{T}$ lies in executing the corresponding task policy $\pi_\tau$ from a suitable grasp configuration $g^*_p$. This is a difficult problem given that $\pi_\tau$ and $g^*_p$ are codefining~\cite{Montesano2008LearningOA}. Namely, the task's requirements constrain the possibly suitable grasp configurations $g^*_p$, while at the same time the choice of $g^*_p$ conditions the outcome of executing the task's policy $\pi_\tau$. Additionally, determining whether the execution of a task is successful requires a performance indicator. To cope with this challenge, we build on our previous work~\cite{pairet2019learning} to learn a library $\mathcal{L}$ of task affordances from human demonstrations. The library aims at simultaneously guiding the robot in the search for a suitable task policy $\pi_\tau$ while informing about its expected outcome $\alpha_\tau$ when successful. All these elements serve as the basis of the method described in Section~\ref{sec:entropy} to determine $g^*_p$ via self-assessment of the candidates ${g_{p_i} \in G^*}$. In this work, we build the library of task affordances as: \begin{equation} \mathcal{L} = \bigl\{\mathcal{T}_1 \rightarrow \{\pi_{\tau_1}, A_{\tau_1}\}, \cdots, \mathcal{T}_n \rightarrow \{\pi_{\tau_n}, A_{\tau_n}\}\bigr\}, \label{eq:library} \end{equation} where $\pi_\tau$ is a policy encoding the task in a generalisable form, and ${\alpha_\tau \in A_\tau}$ is a set of possible successful outcomes when executing $\pi_\tau$. In our implementation, $\pi_\tau$ is based on \acp{DMP}~\cite{ijspeert2013dynamical,pairet2019learningb}. \acp{DMP} are differential equations encoding behaviour towards a goal attractor. We initialise the policies via imitation learning, and use them to reproduce an observed motion while generalising to different start and goal locations, as well as task durations. Regarding the set of possible successful outcomes ${\alpha_\tau \in A_\tau}$, we provide the robot with multiple experiences. We define the outcome $\alpha_\tau$ as the state evolution of the object's action region $S_O$ through the execution of the task. We employ \ac{M-RCNN} \cite{he2017mask} to train a model that detects object subparts as action regions $S_O$. As exemplified in Fig.~\ref{fig:grasp}, the action region state provides a meaningful indicator of the task. This information is used as the basis for our confidence metric, which evaluates the level of success of an affordance task for a grasping proposal. \begin{figure}[t!] \centering \subfloat[$G^*$ from \cite{ardon2019learning} \label{fig:kb}]{\includegraphics[height=2.38cm]{Figures/kb_ori.png}} \;\!\!
\subfloat[$g_{p_i}$ from \cite{chu2018real} \label{fig:deepGrasp}]{\includegraphics[height=2.38cm]{Figures/ori_dgrasp}} \;\!\! \subfloat[Combined ${g_{p_i} \in G^*}$\label{fig:both}]{\includegraphics[height=2.38cm]{Figures/combined_grasp}} \caption{Prediction of grasp affordance configurations for the pouring task. (a)~Patch affording the pouring task, (b)~reliable grasp configurations from DeepGrasp, (c)~pruned space of reliable grasp candidates that afford the pouring task.} \label{fig:examples_on_mug} \end{figure} \begin{figure}[b!] \centering \subfloat[Unsuccessful pour (grasping at $g_{p_1}\:\!\!$)]{\includegraphics[width=7.4cm]{Figures/wrong_grasp}} \\ \subfloat[Successful pour (grasping at $g_{p_2}\:\!\!$)]{\centering \includegraphics[width=7.4cm]{Figures/right_grasp}} \caption{Example of a pouring task from two different grasp configurations. Each situation illustrates the raw \ac{2-D} camera input of the object and the segmented action region that affords the pouring task. \label{fig:grasp}} \end{figure} \subsection{Search-Based Self-Assessment of Task Affordances} \label{sec:entropy} The task policies $\pi_\tau$ learnt in Section~\ref{sc:mrcnn_and_dmp} allow a previously experienced task to be performed from any candidate grasp ${g_{p_i} \in G^*}$. Nonetheless, executing $\pi_\tau$ from any grasp configuration may not always lead to suitable performance. For example, Fig.~\ref{fig:grasp} depicts the case where grasping the mug from $g_{p_1}$ prevents the robot from performing a pouring task as adequately as when grasping it from $g_{p_2}$. We propose to self-assess the outcome of executing the task's policy $\pi_\tau$ from ${g_{p_i} \in G^*}$ before deciding the most suitable grasp configuration $g^*_p$ on a new object. This is efficiently done by forward simulation of the \ac{DMP}-encoded $\pi_\tau$. From each roll-out, we look at the state of the object's action region $\overline{\alpha}_\tau$ as a suitable task performance indicator. To this aim, we consider the entropy between the demonstrated successful task outcomes $\alpha_\tau$ and the simulated outcome $\overline{\alpha}_\tau$ in the form of the Kullback-Leibler divergence \cite{perez2008kullback}: \begin{equation} \label{eq:kld} D(\alpha_\tau || \overline{\alpha}_\tau) = \sum_{i \in I} \alpha_\tau(i) \log\!\left( \frac{\alpha_\tau(i)}{\overline{\alpha}_\tau(i)} \right)\!, \end{equation} which results in a low penalisation when the forward-simulated outcome $\overline{\alpha}_\tau$ is similar to a previously experienced outcome in $A_\tau$, and a high penalisation otherwise. Then, we propose to rank the grasping candidates ${g_{p_i} \in G^*}$ according to a confidence metric which estimates the suitability of a candidate $g_{p_i}$ for a given $\mathcal{T}$ as: \begin{equation} \label{eq:conf} C(g_{p_i}) = \max_{\alpha_\tau \in A_\tau} D^{\shortminus1}(\alpha_\tau || \overline{\alpha}_\tau). \end{equation} Finally, we select the grasping configuration $g^*_p$ among all grasping candidates ${g_{p_i} \in G^*}$ as: \begin{equation} g^*_p = \arg \max_{g_{p_i} \in G^*} C(g_{p_i}) \quad {\rm s.t.} \quad C(g_{p_i}) > \delta, \label{eq:optimal_grasp} \end{equation} which returns the grasp configuration with the highest confidence of successfully completing the task. This assessment is subject to a minimum user-defined confidence level $\delta$ that rejects under-performing grasp configuration proposals. As explained in the experimental setup, such a threshold is adjusted from the demonstrations via a binary classifier.
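For clarity, the self-assessment in \eqref{eq:kld}-\eqref{eq:optimal_grasp} can be summarised in a short sketch. This is a minimal illustration only: the representation of outcomes as discrete histograms over the action-region state, and all function names, are assumptions rather than our actual implementation. \begin{verbatim}
import numpy as np

EPS = 1e-12  # guards against empty histogram bins and division by zero

def kl_divergence(alpha, alpha_sim):
    """D(alpha || alpha_sim) for discrete outcome distributions."""
    alpha, alpha_sim = np.asarray(alpha), np.asarray(alpha_sim)
    return float(np.sum(alpha * np.log((alpha + EPS) / (alpha_sim + EPS))))

def confidence(alpha_demos, alpha_sim):
    """C(g_p): max over demonstrated outcomes of the inverse divergence."""
    return max(1.0 / max(kl_divergence(a, alpha_sim), EPS)
               for a in alpha_demos)

def select_grasp(candidates, simulate, alpha_demos, delta):
    """Highest-confidence candidate above the threshold delta (or None).

    simulate(g) forward-simulates the DMP-encoded policy from grasp g
    and returns the simulated outcome histogram.
    """
    scored = [(confidence(alpha_demos, simulate(g)), g) for g in candidates]
    best_c, best_g = max(scored, key=lambda t: t[0])
    return best_g if best_c > delta else None
\end{verbatim}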
\subsection{One-Shot Self-Assessment of Task Affordances \label{sec:one_shot_library}} The search-based strategy over the grasp affordance region presented in Section~\ref{sec:entropy} can be time- and resource-consuming if performed for every single task affordance problem. Alternatively, we propose to augment the library in~\eqref{eq:library} with an approximation of the previously experienced outcomes $\alpha_\tau$ per grasp configuration $g_{p_i}$, such that it allows for one-shot assessment. Namely, we extract the spatial transform of all experienced grasps with respect to the detected grasp affordance region $G^*$. The relevance of these transforms is ranked in a list $R$ according to their confidence score computed following~\eqref{eq:conf}. Therefore, the augmented library is denoted as: \begin{equation} \mathcal{L} = \bigl\{\mathcal{T}_1 \rightarrow \{\pi_{\tau_1}, A_{\tau_1}, R_{\tau_1}\}, \cdots, \mathcal{T}_n \rightarrow \{\pi_{\tau_n}, A_{\tau_n}, R_{\tau_n}\}\bigr\}. \end{equation} At deployment time, we look for the spatial transform among the new grasping candidates that most resembles the best-ranked transform in $R$. This allows us to hierarchically self-assess the candidates in order of prospective success. \subsection{Deployment on Autonomous Agent}\label{sec:framework} Algorithm \ref{alg:framework_kb} presents the outline of \ac{SAGAT}'s end-to-end deployment, which aims at improving the success of an autonomous agent when performing a task. Given visual perception of the environment, the desired affordance, the pre-trained model to extract the grasp affordance relation (see Section~\ref{sc:kb}), the model to detect the action region, and the learnt library of task affordances (see Section~\ref{sc:mrcnn_and_dmp} to Section~\ref{sec:one_shot_library}) (lines~\ref{alg_line:input1} to \ref{alg_line:model3}), the end-to-end execution is as follows. First, the visual data is processed to extract the grasp affordance region (line~\ref{alg_line:optimal_region}) and the object's action region (line~\ref{alg_line:action_region}). The resulting grasp affordance region, along with the desired affordance, is used to estimate the grasp configuration proposals on the new object using the library of task affordances as prior experiences (line~\ref{alg_line:grasp_proposal}). The retrieved set of grasp configuration candidates is analysed in order of decreasing prospective success (line~\ref{alg_line:while} to line~\ref{alg_line:return_success}) until either exhausting all candidates or finding a suitable grasp for the affordance task. Importantly, the hierarchy of the proposed self-assessment analysis allows for one-shot transfer of the grasp configuration proposals, i.e. to find, on the first trial, a suitable grasp affordance by analysing the top-ranked grasp candidate. Nonetheless, the method also considers the case that exhaustive exploration of all candidates might be required, thus ensuring algorithmic completeness. Notably, the proposed method is not dependent on a particular grasp affordance or action region description. This modularity allows the usage of the proposed method in a wide range of setups. We demonstrate the generality of the proposed method by first using multiple state-of-the-art approaches for grasp affordance detection, and then determining the improvement in task performance and deployability when they are used together with our approach. \begin{algorithm}[t!]
\caption{Deployment of \ac{SAGAT} \label{alg:framework_kb}} \textbf{Input:} \\ $\;\;$ $\text{CVF}$: camera visual feed \\ \label{alg_line:input1} $\;\;$ \textit{affordance}: affordance choice \\ $\;\;$ $\texttt{graspAffordance}$: grasp affordance model \\ \label{alg_line:model1} $\;\;$ $\texttt{actionRegion}$: MRCNN learnt model \\ \label{alg_line:model2} $\;\;$ $\texttt{libTaskAffordances}$: task affordance library \\ \label{alg_line:model3} \vspace{0.3em} \textbf{Output:} \\ $\;\;$ $g^*_p$: most suitable grasp affordance configuration\\ \label{alg_line:output} \vspace{0.3em} \Begin{ \textit{$G^*$} $\gets \texttt{graspAffordance}$(CVF,$\;$\textit{affordance})\\ \label{alg_line:optimal_region} $S_O \gets$ \texttt{actionRegion}(CVF,$\;$\textit{affordance}) \\ \label{alg_line:action_region} $\textit{$g_{p}$} \gets$ \texttt{libTaskAffordances}($G^*$,$\;$\textit{affordance})\\ \label{alg_line:grasp_proposal} \While{\KwNot \textup{\texttt{isEmpty}($g_{p}$)}} {\label{alg_line:while} $g_{p_i} \gets$ \texttt{popHighestConfidence}($g_{p}$)\\ \label{alg_line:highest_confidence} $\overline{\alpha}_\tau \gets$ \texttt{forwardSimulateTask}($g_{p_i}$, $S_O$)\\ \label{alg_line:forward_simulate} \If{\textup{\texttt{prospectiveTaskSuccess}($\overline{\alpha}_\tau$)}}{ \label{alg_line:prospective} return $g_{p_i}$ \label{alg_line:return_success} } } return none\\ \label{alg_line:next_trial} } \end{algorithm} \section{Related Work}\label{sec:related_work} Understanding grasp affordances for objects has been an active area of research for robotic manipulation tasks. Ideally, an autonomous agent should be able to identify all the tasks that an object can afford, and infer the grasp configuration that leads to a successful completion of each task. A common approach to tackle this challenge is via visual features, e.g.~\cite{Lenz2015,chu2018real,chu2019learning,bohg2010learning}. Methods based on visual grasp affordance detection identify candidate grasps either via deep learning architectures that detect grasp areas on an object \cite{Lenz2015,chu2018real,chu2019learning}, or via supervised learning techniques that obtain grasping configurations based on an object's shape \cite{bohg2010learning}. While these techniques offer robust grasp candidates, they solely seek grasp stability. Consequently, these methods cannot guarantee any level of performance when executing a task, and in fact, not even a successful task completion. In order to move towards reliable task deployment on autonomous agents, there is a need to bridge the gap between grasp affordance detection and task-oriented grasping. \subsubsection*{\textbf{Grasp affordances}} Work on grasp affordances aims at robust interactions between objects and the autonomous agent. However, it is typically limited to a single grasp affordance detection per object, thus reducing its deployment in real-world scenarios. Some works, such as \cite{kruger2011object}, focus on relating abstractions of sensory-motor processes with object structures (e.g., \ac{OACs}) to extract the best grasp candidate given an object affordance. Others use purely visual input to learn affordances using deep learning \cite{AffordanceNet18,chu2019learning} or supervised learning techniques to relate objects and actions \cite{song2010learning,Montesano2009LearningGA,moldovan2012learning,ardon2019learning}.
Although these works are successful in detecting grasp affordance regions, they hypothesise suitable grasp configurations based on visual features, rather than on indicators that hint at the proposals' suitability to accomplish an affordance task. \subsubsection*{\textbf{Task affordances}} The end goal of grasping is to manipulate an object to fulfil a goal-directed task. When the grasping problem is contextualised into tasks, solely satisfying the grasp stability constraints is no longer sufficient. Nonetheless, codefining grasp configurations with task success is still an open problem. Along this line, some works focus entirely on learning tasks where the object category does not influence the outcome, such as pushing or pulling \cite{song2010learning,moldovan2012learning}. Hence, reliable extraction of grasp configurations is neglected. Another approach is to learn grasp quality measures for task performance via trial and error \cite{fang2019learning,mandlekar2018roboturk,kroemer2012kernel}. Based on the experiences, these studies build semantic constraints to specify which object regions to hold or avoid. Nonetheless, their dependency on great amounts of prior experience and their lack of generalisation between object instances remain the main hurdles of these methods. Our work seeks to bridge the gap between grasp affordances and task performance that exists in prior work. The proposed approach unifies grasp affordance reasoning and task deployment in a self-assessed system that, without the need for extensive prior experiences, is able to transfer grasp affordance configurations to novel object instances. \section{Introduction}\label{sec:intro} Affordances have attained new relevance in robotics over the last decade~\cite{jamone2018affordances,min2016affordance}. Affordance refers to the possibility of performing different tasks with an object~\cite{Gibson77-affordances}. As an example, grasping a pair of scissors by the tip affords the task of handing over, but not a cutting task. Analogously, not all the regions on a mug's handle comfortably afford pouring liquid from it. Current grasp affordance solutions successfully detect the parts of an object that afford different tasks~\cite{Lenz2015,chu2018real,chu2019learning,bohg2010learning,AffordanceNet18,ardon2019learning}. This allows agents to contextualise the grasp according to the objective task and also to generalise to novel object instances. Nonetheless, these approaches lack an insight into the level of suitability that the grasp offers to accomplish the task. As a consequence, the current literature on grasp affordance cannot guarantee any level of performance when executing the task and, in fact, not even a successful task completion. On the grounds of the limitations mentioned above, a system should consider the expected task performance when deciding a grasp affordance. However, this is a challenging problem, given that the grasp and the task performance are codefining and conditional on each other~\cite{Montesano2008LearningOA}. Recent research in robot affordances proposes to learn this relation via trial and error of the task~\cite{fang2019learning,mandlekar2018roboturk,kroemer2012kernel}. Nevertheless, given the extensive amount of required data, such methods can only learn a single task at a time and perform in known scenarios. In contrast, an autonomous agent is expected to be capable of dealing with multiple task affordance problems even when those involve unfamiliar objects and new scenarios. \begin{figure}[t!]
\centering \includegraphics[width= 8.5cm]{Figures/intro} \vspace{-0.2cm} \caption{PR2 self-assessing a pouring affordance task. The system first predicts the object's grasp affordances. Then, based on prior affordance task experiences and a heuristic confidence metric, it self-assesses the new object's grasp configuration that is most likely to succeed at pouring. \label{fig:intro}} \vspace{-0.35cm} \end{figure} In this paper, we present a novel experience-based pipeline for \acf{SAGAT} that seeks to overcome the lack of deployment reliability of current state-of-the-art methods of grasp affordance detection. The proposed approach, depicted in Fig.~\ref{fig:intro}, starts by extracting multiple grasp configuration candidates from a given grasp affordance region. The outcome of executing a task from the different grasp candidates is estimated via forward simulation. These estimates are employed to evaluate and rank the relation between task performance and the grasp configuration candidates via a heuristic confidence function. Such information is stored in a library of task affordances. The library serves as a basis for one-shot transfer to identify grasp affordance configurations similar to those previously experienced, with the insight that similar regions lead to similar deployments of the task. We evaluate the method's efficacy in addressing novel task affordance problems by training on a single object and testing on multiple new ones. We observe a significant performance improvement of up to $11.7\%$ in the considered tasks when using our proposal in comparison to state-of-the-art approaches on grasp affordance detection. Experimental evaluation on a PR2 robotic platform demonstrates highly reliable deployability of the proposed method in real-world task affordance problems. \begin{figure*}[!hbt] \centering \includegraphics[width=17.8cm]{Figures/schema} \caption{Proposed framework for \acl{SAGAT}. After predicting a grasp affordance region, the most suitable grasp is determined based on a library of prior task affordance experiences and a heuristic confidence metric. }\label{fig:framework} \end{figure*}
\section{THE PREHISTORIC ERA} When heated up, physical systems undergo phase transitions from ordered to less ordered phases. This deep belief, encouraged by everyday life experiences, would tell us that at high temperature spontaneously broken symmetries of high energy physics get restored. This, in fact, is what happens in the Standard Model (SM) of electroweak interactions. Whether or not true in general is an important question in its own right, but it also has a potentially dramatic impact on cosmology. Namely, most of the extensions of the SM tend to suggest the existence of so-called topological defects, and it is known that two types of such defects, {\it i.e.} domain walls and monopoles, pose a cosmological catastrophe. More precisely, they are supposed to be produced during phase transitions at high temperature $T$ \cite{kibble} and they simply carry too much energy density to be in accord with the standard big-bang cosmology. One possible way out of this problem would be to eliminate the phase transitions altogether. In fact, it has been known for a long time \cite{goran} that in theories with more than one Higgs field (and the existence of topological defects requires more than one such field in realistic theories) symmetries may remain broken at high $T$, and even unbroken ones may get broken as the temperature is increased. This offers a simple way out of the domain wall problem \cite{gia}, whereas the situation regarding the monopole problem is somewhat less clear \cite{monopole}. Unfortunately, the same mechanism seems to be inoperative in supersymmetric theories. Whereas supersymmetry (SUSY) itself gets broken at high $T$, internal symmetries on the other hand get necessarily restored. This has been proven at the level of renormalizable theories \cite{mangano}. The argument goes as follows: in supersymmetric theories at finite temperature, the leading term in the effective potential is given by $T^2 \left({\rm Tr}{\cal M}^{\dagger}_f{\cal M}_f +{\rm Str}{\cal M}^2\right)$, where ${\cal M}_f$ is the mass matrix of the fermionic degrees of freedom and ${\rm Str}{\cal M}^2$ is a field-independent quantity proportional to the soft breaking mass terms. It is easy to show that $\delta {\rm Tr}{\cal M}_f{\cal M}^{\dagger}_f/\delta\varphi_i=0$ gives $\langle \varphi_i \rangle=0$ \cite{mangano}. A recent attempt to evade this argument using higher dimensional nonrenormalizable operators \cite{tamvakis} has been shown not to work \cite{borut}. \section{THE DARK AGES} About a year ago, at the SUSY96 Conference, all these considerations led my friends Borut Bajc and G. Senjanovi\'c to whisper (or, in other words, with the same tone as when you say {\it Adelante Pedro, ma con judicio}, A. Manzoni, {\it I promessi sposi}) that ".... internal symmetries in supersymmetric theories seem to get restored at high temperature, even in the presence of non-renormalizable interactions. However, one must admit that the proof offered is valid only for a single chiral superfield. We {\it suspect} that the above is true in general...." \cite{susy96}. As we know, dark ages are necessarily followed by a delightful period of renaissance. Luckily enough, that {\it suspicion} has recently been shown to be false. \section{THE RENAISSANCE ERA} All the considerations mentioned above about SUSY and broken symmetries at high temperatures have an important assumption in common: the chemical potential is taken to be zero. In other words, any conserved charge is assumed to vanish.
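To make the restoration mechanism concrete before relaxing this assumption, consider a single chiral superfield with a cubic superpotential (the same toy model we warm up with below). Schematically, quoting only the field dependence and dropping the precise numerical coefficient of the thermal correction, \begin{displaymath} W = {\lambda\over 3}\Phi^3 \;\Rightarrow\; {\cal M}_f = 2\lambda\phi \;\Rightarrow\; T^2\,{\rm Tr}{\cal M}^{\dagger}_f{\cal M}_f = 4\lambda^2 T^2 \phi^{\dagger}\phi , \end{displaymath} i.e. a positive thermal mass term whose minimum sits at $\langle\phi\rangle=0$, in accord with the general argument above.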
What is the fate of internal symmetries at high temperatures in SUSY theories if we relax the assumption of vanishing conserved charge in the system? The answer to this question has recently been given in \cite{noi}. Actually, the answer was already known in nonsupersymmetric theories, where it has been proven that a background charge asymmetry may postpone symmetry restoration at high temperature \cite{haberweldon}, and even more remarkably that it can lead to the breaking of internal symmetries, both in the case of global \cite{scott} and local symmetries \cite{linde}, at arbitrarily high temperatures. This is simply a consequence of the fact that, if the conserved charge stored in the system is larger than a critical value, the charge cannot entirely reside in the thermally excited modes, but must flow into the vacuum. This is an indication that the expectation value of the charged field is non-zero, {\it i.e.} that the symmetry is spontaneously broken. Furthermore, from the work of Affleck and Dine \cite{affleck} we know that there is nothing unnatural about large densities in SUSY theories. To leave the medieval age and come up for air, let us warm up with the simplest supersymmetric model with a global $U(1)$ symmetry, provided by a chiral superfield $\Phi$ and a superpotential \begin{equation} W = {\lambda\over 3} \Phi^3. \label{superpot} \end{equation} It has a global $U(1)$ $R$-symmetry under which the fields transform as \begin{equation} \phi \to {\rm e}^{i\alpha} \phi\:\:\:{\rm and } \:\:\:\psi \to {\rm e}^{-i\alpha/2} \psi, \end{equation} where $\phi$ and $\psi$ are the scalar and fermionic components of the superfield $\Phi$. Thus the fermionic and bosonic charges are related by $2 Q_\psi + Q_\phi = 0$, or in other words the chemical potentials satisfy the relation \begin{equation} \mu\equiv \mu_\phi = - 2 \mu_\psi. \end{equation} The presence of a nonvanishing net $R$-charge leads to a mass term $-\mu^2 \phi^\dagger\phi$ for the scalar field with a ``wrong'' (negative) sign after the canonical momenta relative to the bosonic degrees of freedom have been integrated out in the path integral \cite{haberweldon}. The crucial new ingredient here is that such a bosonic contribution to the mass squared term already appears at the tree level (and therefore does not depend upon $\lambda$), while the $\mu^2$-dependent part of the fermionic contribution to the mass squared term only appears at the one-loop level and is therefore suppressed by $\lambda^2$. To see this explicitly, one has to compute the fermionic-loop tadpole at finite temperature and chemical potential. Performing the summation over the Matsubara modes and integrating over the momentum $p$, one may show that for a small Yukawa coupling $\lambda\ll 1$ and $\mu<T$, the fermionic contribution in the chemical potential term to the mass squared is of order $\lambda^2 \mu^2 \phi^\dagger\phi$ and therefore suppressed for small $\lambda$. We refer the reader to Ref.~\cite{harrington} for more details. As a result, the fermionic degrees of freedom cannot compensate the genuine term $-\mu^2 \phi^\dagger\phi$ generated by their bosonic partners. At finite temperature and finite chemical potential SUSY is broken. The one-loop high-temperature potential for small Yukawa coupling $\lambda\ll 1$ reads (for $\mu<T$) \begin{equation} V_T(\phi) = \left( -\mu^2 + {1\over 2} \lambda^2 T^2\right) \phi^\dagger \phi + \lambda^2 (\phi^\dagger \phi)^2.
\label{Tpotential} \end{equation} Obviously, for $\mu^2 > \lambda^2 T^2/2$ the symmetry is spontaneously broken at high temperature and the field $\phi$ gets a vacuum expectation value (VEV) \begin{equation} \langle\phi \rangle^2 = {\mu^2 - {\lambda^2\over 2}T^2\over \lambda^2}. \label{vev} \end{equation} This result is valid as long as the chemical potential $\mu$ is smaller than the scalar mass in the $\langle\phi\rangle$-background, {\it i.e.} $\mu^2 < m_{\phi}^2 = 2 \lambda^2 \langle\phi\rangle^2$ \cite{scott}. This in turn implies $\mu > \lambda T$. In short, for the perfectly reasonable range \begin{equation} \lambda T < \mu < T \label{reasonable} \end{equation} the original $U(1)_R$ global symmetry is spontaneously broken, and this remains valid at arbitrarily high temperatures (as long as the small-$\lambda$ approximation holds true). Notice that in all of the above we have assumed unbroken supersymmetry. When supersymmetry is softly broken, $U(1)_R$ also gets explicitly broken because of the presence of soft trilinear scalar couplings in the Lagrangian. Therefore, the associated net charge may relax to zero, and the reader might be worried about the validity of our result. However, the typical rate of the $U(1)_R$-symmetry breaking effects is given by $\Gamma\sim \widetilde{m}^2/T$, where we have indicated by $\widetilde{m}\sim 10^2$ GeV the typical soft SUSY breaking mass term. Since the expansion rate of the Universe is given by $H\sim 30\:T^2/M_{P\ell}$, $M_{P\ell}$ being the Planck mass, comparing $\Gamma$ with $H$ one finds that the $U(1)_R$-symmetry breaking effects are in equilibrium, so that the net charge must vanish, only at temperatures {\it smaller} than $T_{{\rm SS}}\sim\widetilde{m}^{2/3}M_{P\ell}^{1/3}\sim 10^7$ GeV. Therefore, it is perfectly legitimate to consider the presence of a nonvanishing $R$-charge at very high temperatures even in the case of softly broken SUSY. Thus, we have provided a simple and natural counterexample to the theorem of the restoration of internal symmetries in supersymmetry \cite{noi}. This is a consequence of the fact that the charge cannot be entirely stored in the thermally excited modes, but must reside in the vacuum; this is an indication that the expectation value of the charged field is non-zero, {\it i.e.} that the symmetry is spontaneously broken. What about gauge symmetries? It has been known for a long time \cite{linde} that a background charge asymmetry tends to increase symmetry breaking in the case of a local gauge symmetry in nonsupersymmetric theories. In his work, Linde has shown how a large fermion number density would prevent symmetry restoration at high temperature in both abelian \cite{linde} and nonabelian theories \cite{linde1}. The essential point is that the external charge leads to the condensation of the gauge field, which in turn implies a nonvanishing VEV of the Higgs field. This phenomenon may be easily understood if one recalls that an increase of an external fermion current ${\bf j}$ leads to symmetry restoration in the theory of superconductivity \cite{super}. In gauge theories symmetry breaking is necessarily a function of $j^2=j_0^2-{\bf j}^2$, where $j_0$ is the charge density of fermions. An increase of $j_0$ is therefore accompanied by an increase of symmetry breaking \cite{linde}. We now demonstrate that this phenomenon persists in supersymmetric theories, at least in the case of abelian symmetry. The simplest model is based on a $U(1)$ supersymmetric local gauge symmetry.
The minimal anomaly-free matter content consists of two chiral superfields $\Phi^+$ and $\Phi^-$ with opposite gauge charges, and the most general renormalizable superpotential takes the form \begin{equation} W = m \Phi^+ \Phi^-. \label{gaugeW} \end{equation} Notice that the symmetry is {\it not} spontaneously broken at zero temperature. Since there is no Yukawa interaction, there is also a global $U(1)$ $R$-symmetry, under which the bosons have, say, the same charge and the fermions are invariant. Furthermore, at very high temperature, $T> m^{2/3}M_{P\ell}^{1/3}$, the fermion mass can be neglected and we also get a chiral $U(1)$ symmetry under which the bosons are invariant. We may now suppose for simplicity that there is a net background charge density $j_0$, with vanishing current density, and that it lies entirely in the fermionic sector. In other words, we assume the background charge to be in the form of the chiral fermionic charge. Thus only the fermions have a nonvanishing chemical potential. Equally important, we assume that the gauge charge of the Universe is zero, just as in \cite{linde}. In the realistic version of this example, one would imagine the gauge charge to be the electromagnetic one and the chiral fermionic charge to be, say, the lepton charge in the MSSM. We know from observation that the electromagnetic charge of the Universe vanishes to good precision. Thus we have to minimize the action with the constraint that the electric field is zero. What will happen is that some amount of bosonic charge will get stored in the vacuum in order to compensate for the fermionic one and achieve the vanishing of the electric field. In this way, the total $U(1)$ charge density of the system, including the charge of the condensate, is equal to zero even if the symmetry is broken and the gauge forces are short-range ones. This is the principal reason behind the resulting spontaneous breaking of the local gauge symmetry, as we show below. It is thus crucial to have some nonvanishing external background charge, {\it i.e.} the model should have some extra global symmetry, as provided by our chiral symmetry. We can obviously take $A_i=0$ in the vacuum and treat $A_0$ on the same footing as the scalar fields $\phi^\pm$ (due to the net charge, $A_0$ cannot vanish in the vacuum). If we now integrate out $A_0$ using its equation of motion, assuming the electric field to be zero, we can then compute the effective potential for the scalar fields in question at high temperature and large charge density, with the following result: \begin{eqnarray} V_{\rm eff}(T) &= &{g^2 \over 2} T^2\left(|\phi^+|^2 + |\phi^-|^2\right) \nonumber\\ &+&{g^2 \over 2} \left(|\phi^+|^2 - |\phi^-|^2\right)^2 \nonumber \\ &+ & {1 \over 2} {j_0^2 \over 2 \left(|\phi^+|^2 + |\phi^-|^2\right) + T^2}, \label{eff} \end{eqnarray} where we have taken $T\gg m$ and we have included both scalar and fermionic loop contributions in the $T^2$ mass term for $A_0$. Now, except for the $D$-term, the rest of the potential depends only on the sum $\phi^2 \equiv |\phi^+|^2 + |\phi^-|^2 $, and thus the energy is minimized for the vanishing of the $D$-term potential, {\it i.e.} for $|\phi^+|^2 = |\phi^-|^2$. It is easy to see that in this case the effective potential has two extrema (the nontrivial one follows from the condition $\partial V_{\rm eff}/\partial \phi^2 = g^2 T^2/2 - j_0^2/(2\phi^2 + T^2)^2 = 0$): \begin{equation} \phi = 0 \quad {\rm and} \quad \phi^2 = {j_0 \over\sqrt{2} g T} - {T^2 \over 2}. \label{vevT} \end{equation} The second extremum obviously exists only for \begin{equation} j_0 > {g T^3\over\sqrt{2}}.
\end{equation} Moreover, in that case it is an absolute minimum, while $\phi = 0$ is a maximum. Now, we can rephrase the above condition in the language of the chemical potential (using $g_* = 4$ and the free-field estimate $j_0 \simeq g_* \mu T^2/6$, valid for $\mu < T$): \begin{equation} \mu > g T. \label{mulocal} \end{equation} For $g\ll1$, which is the case of interest, $\mu$ easily satisfies the condition $\mu < T$. As noted above, since Yukawa interactions are absent, the role of the external charge could also have been played by the $R$-charge in the scalar sector, the two scalars being equally charged under this symmetry. In such a case, the analysis requires careful handling because of issues related to gauge invariance. In a future publication \cite{future} we will explore this issue extensively, as well as generalize our results to theories containing an arbitrary number of abelian symmetries and to nonabelian theories. \section{THE FUTURE DAYS} We hope to have convinced the audience that internal symmetries in supersymmetric theories, contrary to the general belief, may be broken at high temperature, as long as the system has a nonvanishing background charge. The examples we have provided here, based on both global and local abelian symmetries, are natural and simple and should be viewed as prototypes of more realistic theories. The necessary requirement for the phenomenon to take place is that the chemical potential be bigger than a fraction of the temperature of the order of (1--10)\%. Notice that this is by no means unnatural. In the expanding universe the chemical potential is proportional to the temperature, and thus, unless it vanishes for some reason, $\mu /T$ is naturally expected to be of order one. More importantly, this chemical potential could be zero today; all that is needed is that it be nonvanishing at high temperature. We have seen how soft supersymmetry breaking may naturally provide such a scenario if there is some nonvanishing external charge. Now, it is well known that in supersymmetry the existence of flat directions may lead to large baryon and lepton number densities at very high temperature \cite{affleck}. The most natural candidate for a large charge density of the universe is the lepton number, which may reside in the form of neutrinos. It should be stressed that a large lepton number is perfectly consistent with the ideas of grand unification. It can be shown that in $SO(10)$ one can naturally arrive at a small baryon number and a large lepton number \cite{hk81}. In any case, we wish to be even more open-minded and simply allow for a charge density without worrying about its origin. It is noteworthy that a large neutrino number density may persist all the way through nucleosynthesis up to today \cite{chemicalpotential}. This has been used by Linde \cite{linde1} in order to argue that even in the SM the $SU(2)_L\otimes U(1)_Y$ symmetry may not be restored at high temperature. Since SUSY, as we have seen, does not spoil the possibility of large chemical potentials allowing symmetry breaking at high $T$, it is important to see if our results remain valid in the case of nonabelian global and local symmetries. This work requires particular attention due to the issues related to gauge invariance and is now in progress \cite{future}. We should stress that there is more than purely academic interest in the issue discussed in this paper. If symmetries remain broken at high temperature, there may be no domain wall and monopole problems at all.
Furthermore, it is well known that in SUSY grand unified theories symmetry restoration at high temperature prevents the system from leaving the false vacuum and finding itself in the broken phase at low $T$. This is a direct consequence of the vacuum degeneracy characteristic of supersymmetry, which implies that at zero temperature the SM vacuum and the one with unbroken GUT symmetry have the same (zero) energy. If the symmetry is restored at high $T$, one would start with the unbroken symmetry in the early universe and would thus get caught in this state forever. Obviously, if our ideas hold true in realistic grand unified theories, this problem would not arise in the first place. Moreover, we know that the presence of anomalous baryon number violating processes induced by sphaleron transitions poses a serious problem for the survival of the baryon asymmetry in the early Universe. However, sphalerons are active at high temperatures only if the standard model gauge symmetry is restored. The presence of a nonvanishing (lepton?) conserved charge in the minimal supersymmetric standard model might cause nonrestoration of the gauge symmetry when the system is heated up, as originally suggested by Liu and Segr\`e for the nonsupersymmetric standard model \cite{ls94}. This would make it possible to freeze the baryon number violating processes and to preserve the baryon asymmetry generated at the GUT scale. \section*{Acknowledgments} It is with immense pleasure that I thank my collaborators Borut Bajc and Goran Senjanovi\'c, with whom this journey out of the dark ages towards such an exciting period began. This work is supported by the DOE and NASA under Grant NAG5--2788.
\section{Introduction} The quantum Yang-Baxter equation (QYBE for short) \cite{bax,mc} plays an important role in the study of bialgebras. Coquasitriangular bialgebras give birth to solutions to the QYBE. On the other hand, Faddeev, Reshetikhin, and Takhtajan \cite{frt} introduced a construction of coquasitriangular bialgebras using solutions to the QYBE, called the FRT construction. This construction has been generalized with the development of the study of the QYBE and bialgebras. The quantum dynamical Yang-Baxter equation (QDYBE for short), a generalization of the QYBE, was introduced by Gervais and Neveu \cite{gn}. Dynamical \( R \)-matrices, solutions to the QDYBE, give birth to \( \mathfrak{h} \)-bialgebroids introduced by Etingof and Varchenko \cite{etva}. If a dynamical \( R \)-matrix satisfies a certain condition, called rigidity, this \( \mathfrak{h} \)-bialgebroid has an antipode and is called an \( \mathfrak{h} \)-Hopf algebroid. A set-theoretical analogue of the QDYBE is the dynamical Yang-Baxter map (DYBM for short) introduced by Shibukawa \cite{shibudy, shibuin}. The DYBM is a generalization of the Yang-Baxter map \cite{ets, ves} suggested by Drinfel'd \cite{dri}. Shibukawa and Takeuchi studied the FRT construction for the DYBM in \cite{shibufr,shiburi}. A left bialgebroid \( A_{\sigma} \) is obtained by this construction. If a solution \( \sigma \) satisfies rigidity, then \( A_{\sigma} \) becomes a Hopf algebroid with a bijective antipode. The notion of left bialgebroids (Takeuchi's \( \times_R \)-bialgebras) was introduced in \cite{take}. This is a generalization of the bialgebra using a non-commutative base algebra \( R \). Its comultiplication and counit are \( (R, R) \)-bimodule homomorphisms. Schauenburg \cite{schau} proposed a Hopf algebraic structure on the left bialgebroid without an antipode, called a \( \times_R \)-Hopf algebra. As a special case of the \( \times_R \)-Hopf algebra, B\"ohm and Szl\'achanyi \cite{bosz} introduced the Hopf algebroid, which has a bijective antipode. On the other hand, Hayashi \cite{haI} introduced the notion of face algebras. In \cite{hayas}, a coquasitriangular face algebra \( \mathfrak{A}(w) \) was constructed using a solution \( w \) to the quiver-theoretical QYBE. In addition, if a coquasitriangular face algebra satisfies a condition called closurability, this face algebra has a Hopf closure, which is a Hopf face algebra satisfying a certain universal property. Hayashi \cite{hayas} constructed this Hopf closure by using the double cross product and the localization of the face algebra. Later, (Hopf) face algebras were integrated into weak bialgebras (weak Hopf algebras) by B\"ohm, Nill, and Szl\'achanyi \cite{boszn}. Bennoun and Pfeiffer mentioned the Hopf closure (which they call the Hopf envelope) of a coquasitriangular weak bialgebra in \cite{benp}. Schauenburg \cite{schau} showed that a weak bialgebra (weak Hopf algebra) is a left bialgebroid (\( \times_R \)-Hopf algebra) whose base algebra is Frobenius-separable. Conversely, a left bialgebroid (\( \times_R \)-Hopf algebra) becomes a weak bialgebra (weak Hopf algebra) if its base algebra is Frobenius-separable. There is an interesting relation between Hayashi's generalization and Shibukawa-Takeuchi's generalization of the FRT construction.
If the parameter set \( \Lambda \) of a DYBM \( \sigma \) is finite, the left bialgebroid \( A_{\sigma} \) becomes a weak bialgebra, since the base algebra \( M_{\Lambda}(\mathbb{K}) \), consisting of maps from \( \Lambda \) to a field \( \mathbb{K} \), is a Frobenius-separable algebra over \( \mathbb{K} \). Matsumoto and Shimizu \cite{matsu} showed that a DYBM \( \sigma \) gives birth to a solution \( w_{\sigma} \) to the quiver-theoretical YBE and gave a weak bialgebra homomorphism \( \Phi \) from \( \mathfrak{A}(w_{\sigma}) \) to \( A_{\sigma} \). Shibukawa and the author \cite{oshibu} generalized the FRT construction for the DYBM to obtain, from a DYBM with a finite parameter set \( \Lambda \), a left bialgebroid (Hopf algebroid) \( A_{\sigma} \) that is not a weak bialgebra (weak Hopf algebra); in \cite{oshibu}, we generalized \( M_{\Lambda}(\mathbb{K}) \) to an arbitrary \( \mathbb{K} \)-algebra \( L \). It is natural to try to obtain a left bialgebroid \( \mathfrak{A}(w_{\sigma}) \) corresponding to the generalized left bialgebroid \( A_{\sigma} \). The purpose of this paper is to discuss relations between the two generalized FRT constructions by extending Matsumoto-Shimizu's homomorphism. Let \( R \) be an arbitrary \( \mathbb{K} \)-algebra and denote by \( M_{\Lambda}(R) \) the \( \mathbb{K} \)-algebra consisting of maps from \( \Lambda \) to \( R \). We generalize Hayashi's construction to obtain a left bialgebroid \( \mathfrak{A}(w_{\sigma}) \) corresponding to the generalized \( A_{\sigma} \) with the base algebra \( L = M_{\Lambda}(R) \) and construct a left bialgebroid homomorphism from \( \mathfrak{A}(w_{\sigma}) \) to \( A_{\sigma} \). We also show that the weak Hopf algebra \( A_{\sigma} \) with the Frobenius-separable base algebra \( M_{\Lambda}(R) \) becomes a Hopf closure of \( \mathfrak{A}(w_{\sigma}) \) through \( \Phi \). The paper is organized as follows. In Section \ref{sec:lw}, we review relations between left bialgebroids (Hopf algebroids) and weak bialgebras (weak Hopf algebras) following \cite{bosz} and \cite{schau}. In Section \ref{sec:TL}, we recall the left bialgebroid (Hopf algebroid) \( A_{\sigma} \) with the base algebra \( M_{\Lambda}( R ) \) from \cite{oshibu} and introduce a left bialgebroid \( \mathfrak{A}(w) \) as a generalization of \cite{hayas}. The weak bialgebra \( \mathfrak{A}(w) \) in \cite{hayas} has the base algebra \( M_{\Lambda}( \mathbb{K} ) \) as a left bialgebroid; we generalize \( M_{\Lambda}( \mathbb{K} ) \) to \( M_{\Lambda}( R ) \), as for the left bialgebroid \( A_{\sigma} \) above. In Section \ref{sec:Phi}, we induce a left bialgebroid \( \mathfrak{A}(w_{\sigma}) := \mathfrak{A}(w) \) by using the setting of the left bialgebroid \( A_{\sigma} \) and construct a left bialgebroid homomorphism \( \Phi \) from \( \mathfrak{A}(w_{\sigma}) \) to \( A_{\sigma} \). As a point of difference between \cite{matsu} and this paper, we do not use DYBMs to construct these \( \mathfrak{A}(w_{\sigma}) \) and \( \Phi \). We also give an example of the left bialgebroids \( A_{\sigma} \), \( \mathfrak{A}(w_{\sigma}) \), and \( \Phi \) not using the DYBM. In Section \ref{sec:pro}, we show that, if the \( \mathbb{K} \)-algebra \( R \) is a Frobenius-separable \( \mathbb{K} \)-algebra and \( A_{\sigma} \) is a weak Hopf algebra, then \( \mathfrak{A}(w_{\sigma}) \), \( A_{\sigma} \), and \( \Phi \) satisfy a certain universal property, called the Hopf closure. For this purpose, we introduce the notion of the antipode of a \( \mathbb{K} \)-algebra homomorphism whose domain is a weak bialgebra.
This is a generalization of Hayashi's antipode with respect to the face algebra in \cite{hayas}. We can characterize weak bialgebras and generalize the Hopf envelopes in \cite{benp} by using these antipodes. \section{Preliminaries} \label{sec:lw} In this section, we recall the notion of left bialgebroids (Hopf algebroids) and discuss relations with weak bialgebras (weak Hopf algebras). If the base algebra of a left bialgebroid is a Frobenius-separable algebra, the total algebra has a weak bialgebra structure. In addition, the total algebra is a weak Hopf algebra when the left bialgebroid becomes a Hopf algebroid. For more details, we refer to \cite{bosz} and \cite{schau}. Throughout this paper, we denote by \( \mathbb{K} \) a field. \begin{defi} Let \( A \) and \( L \) be \( \mathbb{K} \)-algebras. A left bialgebroid (or Takeuchi's \( \times_L \)-bialgebra) \( \mathcal{A}_{L} \) is a sextuplet \(\mathcal{A}_{L} := (A,L,s_L,t_L,\Delta_L,\pi_L)\) satisfying the following conditions: \begin{enumerate} \item The maps \( s_L \colon L \to A \) and \( t_L \colon L^{op} \to A \) are \( \mathbb{K} \)-algebra homomorphisms and satisfy \begin{equation} s_L(l)t_L(l^{\prime}) = t_L(l^{\prime})s_L(l)\;\;(\forall l, l^{\prime} \in L). \label{def:stcom} \end{equation} Here \( L^{op} \) means the opposite \( \mathbb{K} \)-algebra of \( L \). These two homomorphisms make \( A \) an \( (L, L) \)-bimodule \( _L A_L \) by the following left and right \( L \)-module structures \( _L A \) and \( A_L \): \begin{equation} _L A \colon l \cdot a = s_L(l) a; \;\; A_L \colon a \cdot l = t_L(l) a \;\; ( l \in L, a \in A ). \label{def:ac} \end{equation} \item The triple \( (_L A_L, \Delta_L, \pi_L) \) is a comonoid in the category of \( (L, L) \)-bimodules such that \begin{align} &a_{[1]}t_L(l) \otimes a_{[2]} = a_{[1]} \otimes a_{[2]}s_L(l); \label{def:st} \\ &\Delta_L(1_A) = 1_A \otimes 1_A; \label{def:Duni} \\ &\Delta_L(ab) = \Delta_L(a)\Delta_L(b); \label{def:Dmulti} \\ &\pi_L(1_A) = 1_L; \label{def:punit} \\ &\pi_L(as_L(\pi_L(b))) = \pi_L(ab) = \pi_L(at_L(\pi_L(b))) \label{def:pmulti} \end{align} for all \( l \in L \) and \( a, b \in A \). Here we write \(\Delta_L(a) = a_{[1]} \otimes a_{[2]}\) (Sweedler's notation). The right-hand side of \eqref{def:Dmulti} is well defined because of \eqref{def:st}. \end{enumerate} We write \( \mathcal{A}_{L} = (A,L,s^A_L,t^A_L,\Delta^A_L,\pi^A_L) \) if there is a possibility of confusion. For a left bialgebroid \( \mathcal{A}_L \), the \( \mathbb{K} \)-algebras \( A \) and \( L \) are called the total algebra and the base algebra, respectively. \end{defi} \begin{defi} Let \( \mathcal{A}_L = (A, L, s_L, t_L, \Delta_L, \pi_L) \) and \( \mathcal{A}^{\prime}_{L^{\prime}} = (A^{\prime}, L^{\prime}, s_{L^{\prime}}, t_{L^{\prime}}, \Delta_{L^{\prime}}, \pi_{L^{\prime}}) \) be left bialgebroids. A pair of \( \mathbb{K} \)-algebra homomorphisms \( (\Phi \colon A \to A^{\prime}, \phi \colon L \to L^{\prime}) \) is called a left bialgebroid homomorphism \( \mathcal{A}_L \to \mathcal{A}^{\prime}_{L^{\prime}} \), iff \begin{align} &s_{L^{\prime}} \circ \phi = \Phi \circ s_L; \label{def:sp} \\ &t_{L^{\prime}} \circ \phi = \Phi \circ t_L; \label{def:tp} \\ &\pi_{L^{\prime}} \circ \Phi = \phi \circ \pi_L; \label{def:piP} \\ &\Delta_{L^{\prime}} \circ \Phi = ( \Phi \otimes \Phi ) \circ \Delta_L. \label{def:Dp} \end{align} The map \( \Phi \otimes \Phi : A \otimes_L A \to A^{\prime} \otimes_{L^{\prime}} A^{\prime} \) makes sense because of \eqref{def:sp} and \eqref{def:tp}.
\end{defi} Let \( \mathcal{A}_L := (A, L, s_L, t_L, \Delta_L, \pi_L) \) be a left bialgebroid and \( N \) a \( \mathbb{K} \)-algebra isomorphic to the opposite \( \mathbb{K} \)-algebra \( L^{op} \). We suppose that \( A \) has a \( \mathbb{K} \)-algebra anti-automorphism \( S \) satisfying \begin{align} S \circ t_L &= s_L; \label{St} \\ S( a_{[1]} ) a_{[2]} &= t_L \circ \pi_L \circ S( a ) \label{Sa} \end{align} for all \( a \in A \). The left-hand side of \eqref{Sa} makes sense because of \eqref{St}. We fix a \( \mathbb{K} \)-algebra isomorphism \( \omega \colon L^{op} \to N \). Then \( A \) has left and right \( N \)-module structures \( ^NA \) and \( A^N \) through the following actions: \begin{equation} ^N A \colon n \cdot a = a s_L \circ \omega^{-1}( n ); \;\; A^N \colon a \cdot n = a S \circ s_L \circ \omega^{-1}( n ) \;\; (a \in A, n \in N). \label{NAN} \end{equation} By virtue of \eqref{St}, these two actions \eqref{NAN} make \( A \) an \( (N, N) \)-bimodule. We can also define two \( \mathbb{K} \)-linear maps \( S_{A \otimes_{L} A} \) and \( S_{A \otimes_{N} A} \) by \begin{align} S_{A \otimes_{L} A} \colon A \otimes_L A \ni a \otimes b \mapsto S(b) \otimes S(a) \in A \otimes_N A; \label{SAL} \\ S_{A \otimes_{N} A} \colon A \otimes_N A \ni a \otimes b \mapsto S(b) \otimes S(a) \in A \otimes_L A. \label{SAN} \end{align} \begin{defi} Let \( (\mathcal{A}_L, S) \) be a pair of a left bialgebroid \( \mathcal{A}_L \) and a \( \mathbb{K} \)-algebra anti-automorphism \( S \colon A \to A \) satisfying \eqref{St} and \eqref{Sa}. Suppose that \( S_{A \otimes_{N} A} \) has the inverse \( S_{A \otimes_{N} A}^{-1} \). We say that the pair \( (\mathcal{A}_L, S) \) is a Hopf algebroid, iff \begin{align} S_{A \otimes_{L} A} \circ \Delta_L \circ S^{-1} &= S_{A \otimes_{N} A}^{-1} \circ \Delta_L \circ S; \\ (\Delta_L \otimes {\rm id}_A) \circ \Delta_N &= ({\rm id}_A \otimes \Delta_N) \circ \Delta_L; \\ (\Delta_N \otimes {\rm id}_A) \circ \Delta_L &= ({\rm id}_A \otimes \Delta_L) \circ \Delta_N. \end{align} Here we define \( \Delta_N = S_{A \otimes_{L} A} \circ \Delta_L \circ S^{-1} \). The map \( S \) is called an antipode. \end{defi} We next introduce the notion of weak bialgebras and weak Hopf algebras. \begin{defi} \label{def:WBA} Let \( B \) be a \( \mathbb{K} \)-algebra endowed with a \( \mathbb{K} \)-coalgebra structure by \( \Delta \colon B \to B \otimes_{\mathbb{K}} B \) and \( \varepsilon \colon B \to \mathbb{K} \). We say that the triple \( B := (B, \Delta, \varepsilon) \) is a weak bialgebra if the following conditions are satisfied: \begin{align} \Delta(ab) =& \Delta(a)\Delta(b); \label{def:wdm} \\ (\Delta(1) \otimes 1)(1 \otimes \Delta(1)) = 1_{(1)} \otimes 1_{(2)}& \otimes 1_{(3)} = (1 \otimes \Delta(1))(\Delta(1) \otimes 1); \label{def:D13} \\ \varepsilon( a b_{(1)} ) \varepsilon( b_{(2)} c ) = \varepsilon&( abc ) = \varepsilon( a b_{(2)} ) \varepsilon( b_{(1)} c ) \label{def:cum} \end{align} for all \( a, b, c \in B \). Here we write simply \( 1 = 1_B \) and use Sweedler's notation, which is written by \begin{equation*} \Delta(a) = a_{(1)} \otimes a_{(2)} \;\;\; {\rm and} \;\;\; (\Delta \otimes {\rm id}_B) \circ \Delta(a) = a_{(1)} \otimes a_{(2)} \otimes a_{(3)} = ({\rm id}_B \otimes \Delta) \circ \Delta(a). \end{equation*} In order to avoid ambiguity, we write \( \Delta_B = \Delta \) and \( \varepsilon_B = \varepsilon \) as needed. The biopposite weak bialgebra \( B^{bop} \) of \( B \) can be defined similarly to the ordinary bialgebra case. Let \( B^{\prime} \) be a weak bialgebra.
A \( \mathbb{K} \)-linear map \( f \colon B \to B^{\prime} \) is called a weak bialgebra homomorphism if \( f \) is a \( \mathbb{K} \)-algebra and \( \mathbb{K} \)-coalgebra homomorphism. \end{defi} We introduce two maps \( \varepsilon_s \), \( \varepsilon_t \colon B \to B \) defined by \begin{align} \varepsilon_s( a ) &= 1_{(1)} \varepsilon( a 1_{(2)} ); \\ \varepsilon_t( a ) &= \varepsilon( 1_{(1)} a ) 1_{(2)} \;\; (a \in B). \end{align} These maps \( \varepsilon_s \) and \( \varepsilon_t \) are respectively called the source counital map and the target counital map. \begin{lemm} The maps \( \varepsilon_s \) and \( \varepsilon_t \) satisfy \begin{align} &\varepsilon_s \circ \varepsilon_s = \varepsilon_s, \;\;\; \varepsilon_t \circ \varepsilon_t = \varepsilon_t; \label{lem:estid} \\ &\varepsilon_s( 1_B ) = \varepsilon_t( 1_B ) = 1_B. \label{lem:estu} \end{align} \end{lemm} \begin{lemm} For an arbitrary element \( a \) in a weak bialgebra \( B \), \begin{align} &1_{(1)} \varepsilon_s( a 1_{(2)} ) = \varepsilon_s( a ), \;\;\; \varepsilon_t( 1_{(1)} a ) 1_{(2)} = \varepsilon_t( a ); \\ &\Delta( \varepsilon_s( a ) ) = 1_{(1)} \otimes \varepsilon_s( a ) 1_{(2)} = 1_{(1)} \otimes 1_{(2)} \varepsilon_s( a ); \label{lem:coes} \\ &\Delta( \varepsilon_t( a ) ) = \varepsilon_t( a ) 1_{(1)} \otimes 1_{(2)} = 1_{(1)} \varepsilon_t( a ) \otimes 1_{(2)}; \label{lem:coet} \\ &a_{(1)} \otimes \varepsilon_s( a_{(2)} ) = a 1_{(1)} \otimes \varepsilon_s( 1_{(2)} ), \;\; \varepsilon_t( a_{(1)} ) \otimes a_{(2)} = \varepsilon_t( 1_{(1)} ) \otimes 1_{(2)} a; \label{lem:estd2} \\ &\varepsilon_s( a_{(1)} ) \otimes a_{(2)} = 1_{(1)} \otimes a 1_{(2)}, \;\; a_{(1)} \otimes \varepsilon_t( a_{(2)} ) = 1_{(1)} a \otimes 1_{(2)}. \label{lem:estd1} \end{align} \end{lemm} \begin{lemm} We denote by \( B \) a weak bialgebra. For any \( a, b \in B \), \begin{align} &\varepsilon_s( a ) \varepsilon_t( b ) = \varepsilon_t( b ) \varepsilon_s( a ); \label{lem:estcomm} \\ &\varepsilon( a b ) = \varepsilon( \varepsilon_s( a ) b ), \;\;\; \varepsilon( ab ) = \varepsilon( a \varepsilon_t( b ) ); \label{lem:eest2} \\ &\varepsilon_s( a b ) = \varepsilon_s( \varepsilon_s( a ) b ), \;\;\; \varepsilon_t( ab ) = \varepsilon_t( a \varepsilon_t( b ) ); \label{lem:est2} \\ &\varepsilon_s( a ) b = b_{(1)} \varepsilon_s( a b_{(2)} ), \;\;\; a \varepsilon_t( b ) = \varepsilon_t( a_{(1)} b ) a_{(2)}; \label{lem:esco} \\ &\varepsilon_s( a ) \varepsilon_s( b ) = \varepsilon_s( a \varepsilon_s( b ) ), \;\;\; \varepsilon_t( a ) \varepsilon_t( b ) = \varepsilon_t( \varepsilon_t( a ) b ). \label{lem:estm} \end{align} \end{lemm} \begin{defi} A weak bialgebra \( H \) with a \( \mathbb{K} \)-linear map \( S \colon H \to H \) is called a weak Hopf algebra, iff \begin{align} S( h_{(1)} ) h_{(2)} = \varepsilon_s( h ); \\ h_{(1)} S( h_{(2)} ) = \varepsilon_t( h ); \\ S( h_{(1)} ) h_{(2)} S( h_{(3)} ) = S( h ) \end{align} are satisfied for all \( h \in H \). This \( S \), also called an antipode, is unique if it exists. \end{defi} If there is a possibility of confusion, we write \( S^{{\rm WHA}} \) for the antipode of a weak Hopf algebra and \( S^{{\rm HAD}} \) for an antipode of a Hopf algebroid. Let us recall the notion of Frobenius-separable \( \mathbb{K} \)-algebras to discuss relations between left bialgebroids and weak bialgebras.
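Before the formal definition, we record, as a sketch of a standard fact whose straightforward verification we omit, the example most relevant to this paper. For a non-empty finite set \( \Lambda \), the \( \mathbb{K} \)-algebra \( M_{\Lambda}(\mathbb{K}) \) of maps from \( \Lambda \) to \( \mathbb{K} \) is Frobenius-separable with
\begin{equation*}
\psi( f ) = \sum_{\lambda \in \Lambda} f( \lambda ), \qquad e^{(1)} \otimes e^{(2)} = \sum_{\lambda \in \Lambda} \delta_{\lambda} \otimes \delta_{\lambda},
\end{equation*}
where \( \delta_{\lambda} \in M_{\Lambda}(\mathbb{K}) \) is defined by \( \delta_{\lambda}( \mu ) = \delta_{\lambda, \mu} \; (\mu \in \Lambda) \). More generally, if \( R \) is a Frobenius-separable \( \mathbb{K} \)-algebra with idempotent Frobenius system \( (\psi_R, f^{(1)} \otimes f^{(2)}) \), then \( M_{\Lambda}(R) \) is Frobenius-separable with \( \psi( h ) = \sum_{\lambda \in \Lambda} \psi_R( h( \lambda ) ) \) and \( e^{(1)} \otimes e^{(2)} = \sum_{\lambda \in \Lambda} f^{(1)} \delta_{\lambda} \otimes f^{(2)} \delta_{\lambda} \), where \( r \delta_{\lambda} \in M_{\Lambda}(R) \; (r \in R) \) denotes the map taking the value \( r \) at \( \lambda \) and \( 0 \) elsewhere; these data satisfy the compatibility conditions in the definition below.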
A Frobenius-separable \( \mathbb{K} \)-algebra is a \( \mathbb{K} \)-algebra \( L \) equipped with a \( \mathbb{K} \)-linear map \( \psi \colon L \to \mathbb{K} \) and an element \( e^{(1)} \otimes e^{(2)} \in L \otimes_{\mathbb{K}} L \) such that \begin{align} l = \psi( l e^{(1)} ) e^{(2)} = e^{(1)} \psi( e^{(2)} l ), \;\; e^{(1)} e^{(2)} = 1_L \;\; ( \forall l \in L ). \end{align} This pair \( (\psi, e^{(1)} \otimes e^{(2)}) \) is called an idempotent Frobenius system. \begin{prop}(See \cite[Theorem 5.5]{schau}.) \label{prop:LWF} Let \( \mathcal{A}_L = (A, L, s_L, t_L, \Delta_L, \pi_L) \) be a left bialgebroid. If the base algebra \( L \) is a Frobenius-separable \( \mathbb{K} \)-algebra with an idempotent Frobenius system \( (\psi, e^{(1)} \otimes e^{(2)}) \), then the total algebra \( A \) has the following weak bialgebra structure \( (A, \Delta, \varepsilon) \): \begin{align} \Delta( a ) &= t_L(e^{(1)}) a_{[1]} \otimes s_L(e^{(2)}) a_{[2]}; \\ \varepsilon( a ) &= \psi \circ \pi_L( a ) \;\; (a \in A). \end{align} \end{prop} Under the conditions of Proposition \ref{prop:LWF}, we suppose that the left bialgebroid \( \mathcal{A}_L \) has an antipode \( S^{{\rm HAD}} \). Then it is important to discuss whether the weak bialgebra \( A \) becomes a weak Hopf algebra or not. Schauenburg \cite{schau} solved this problem when \( \mathcal{A}_L \) is a \( \times_L \)-Hopf algebra, which is a generalization of the Hopf algebroid. We briefly sketch a special case of Corollary 6.2 in \cite{schau}. For the total algebra \( A \) of a Hopf algebroid \( ( \mathcal{A}_L, S^{{\rm HAD}} ) \), we can define another left \( N \)-module structure \( _N A \) by \begin{equation} _N A \colon n \cdot a = S^{\mathrm{HAD}} \circ s_L \circ \omega^{-1}( n ) a \;\; (a \in A, n \in N). \end{equation} Then the tensor product \( A \otimes_N A \) has two meanings depending on left actions \( ^N A \) and \( _N A \). In order to avoid misunderstandings, we specify these actions by \( A ^N \otimes^N A \) and \( A ^N \otimes_N A \). For example, the tensor product \( A \otimes_N A \) in \eqref{SAL} and \eqref{SAN} stands for \( A ^N \otimes^N A \). \begin{prop}(See \cite[Proposition 4.2(iv)]{bosz}.) \label{prop:bija} Let \( (\mathcal{A}_L, S^{\mathrm{HAD}}) \) be a Hopf algebroid. Then the following \( \mathbb{K} \)-linear map \( \alpha \) is bijective with the inverse \( \alpha^{-1} \). \begin{align} &\alpha \colon A^N \otimes_N A \ni a \otimes b \mapsto a_{[1]} \otimes a_{[2]} b \in A \otimes_L A; \\ &\alpha^{-1} \colon A \otimes_L A \ni a \otimes b \mapsto a^{[1]} \otimes S^{{\rm HAD}}( a^{[2]} ) b \in A^N \otimes_N A. \label{alpha-1} \end{align} These maps make sense by virtue of \( \eqref{St} \). Here we write \( \Delta_N( a ) = a^{[1]} \otimes a^{[2]} \). \end{prop} \begin{prop}(See \cite[Corollary 6.2]{schau}.) \label{prop:WHD} Let \( \mathcal{A}_L = (A, L, s_L, t_L, \Delta_L, \pi_L) \) be a left bialgebroid satisfying the conditions of Proposition \ref{prop:LWF}. If \( (\mathcal{A}_L, S^{{\rm HAD}}) \) is a Hopf algebroid, then the total algebra \( A \) becomes a weak Hopf algebra whose antipode \( S^{\mathrm{WHA}} \) is defined by \begin{equation} S^{{\rm WHA}}( a ) = \varepsilon_s( a^{[1]} ) S^{{\rm HAD}}( a^{[2]} ) \;\; (a \in A). \end{equation} This \( S^{{\rm WHA}} \) makes sense because of \( \alpha^{-1} \) and the following \( \mathbb{K} \)-linear map: \begin{align} &\beta \colon A^N \otimes_N A \ni a \otimes b \mapsto \varepsilon_s( a ) b \in A. 
\label{beta} \end{align} This \( \beta \) is well defined because of \eqref{def:wdm} and \eqref{lem:esco}. \section{Two left bialgebroids \( \mathfrak{A}(w) \) and \( A_{\sigma} \)} \label{sec:TL} \subsection{Summary of the left bialgebroid \( A_{\sigma} \)} \label{sec:As} In this subsection, we recall the left bialgebroid \( A_{\sigma} \). For more details, we refer to \cite{oshibu}. This is a generalization of \cite{shiburi}. Let \( R \) be a \( \mathbb{K} \)-algebra and \( \Lambda \) a non-empty finite set. Let \(G\) denote the opposite group of the symmetric group on the set \( \Lambda \). We can define a right group action of this group \( G \) on the set \( \Lambda \): \( \lambda \alpha = \alpha(\lambda) \; (\lambda \in \Lambda, \alpha \in G) \). We denote by \( M_{\Lambda}(R) \) the \( \mathbb{K} \)-algebra consisting of maps from \( \Lambda \) to \( R \). For any \( \alpha \in G \), the map \( T_{\alpha} : M_{\Lambda}(R) \to M_{\Lambda}(R) \) is defined by \begin{equation*} T_{\alpha}(f)(\lambda) = f(\lambda \alpha) \;\; (f \in M_{\Lambda}(R), \lambda \in \Lambda). \end{equation*} The map \( T_{\alpha} \; (\alpha \in G) \) is a \( \mathbb{K} \)-algebra homomorphism such that \( T_{\alpha} \circ T_{\alpha^{-1}} = {\rm id}_{M_{\Lambda}(R)} \). Let \( \deg \) be a map from a finite set \( X \) to the group \( G \). Define \begin{align*} \Lambda X := ( M_{\Lambda}(R) \otimes_{\mathbb{K}} M_{\Lambda}(R)^{op} ) \bigsqcup \{ L_{ab} \; | \; a, b \in X \} \bigsqcup \{ (L^{-1})_{ab} \; | \; a, b \in X \}. \end{align*} Let \( \sigma^{ab}_{cd} \in M_{\Lambda}(R) \; (a, b, c, d \in X) \), and denote by \( \mathbb{K} \langle \Lambda X \rangle \) the free \( \mathbb{K} \)-algebra generated by the set \( \Lambda X \). The symbol \( I_{\sigma} \) means the two-sided ideal of \( \mathbb{K} \langle \Lambda X \rangle \) whose generators are: \begin{itemize} \item[(1)] \(\xi+\xi^{\prime} - (\xi+\xi^{\prime}),\;c\xi - (c\xi),\;\xi\xi^{\prime} - (\xi\xi^{\prime})\;\;(\forall c\in\mathbb{K},\forall \xi,\xi^{\prime}\in M_{\Lambda}(R)\otimes_{\mathbb{K}} M_{\Lambda}(R)^{op}).\) \\ Here the notation \( \xi+\xi^{\prime} \) stands for the addition in the algebra \( \mathbb{K} \langle \Lambda X \rangle \), while the notation \( ( \xi+\xi^{\prime} ) ( \in \Lambda X ) \) is that of the algebra \( M_{\Lambda}(R) \otimes_{\mathbb{K}} M_{\Lambda}(R)^{op} \). The other two generators for the scalar multiplication and multiplication are similar. \item[(2)] \(\displaystyle\sum_{c\in X} L_{ac}(L^{-1})_{cb}-\delta_{a, b} \emptyset ,\;\sum_{c\in X} (L^{-1})_{ac}L_{cb}-\delta_{a, b} \emptyset \;\; (\forall a,b\in X). \) \\ Here \( \delta_{a, b} \in \mathbb{K} \; (a, b \in X) \) means Kronecker's delta symbol and \( \emptyset \) means the empty word. \item[(3)] \((T_{\deg (a)}(f)\otimes1_{M_{\Lambda}(R)})L_{ab} - L_{ab}(f\otimes1_{M_{\Lambda}(R)}), \\ (1_{M_{\Lambda}(R)} \otimes T_{\deg (b)}(f))L_{ab} - L_{ab}(1_{M_{\Lambda}(R)}\otimes f), \\ (f\otimes1_{M_{\Lambda}(R)})(L^{-1})_{ab} - (L^{-1})_{ab}(T_{\deg (b)}(f)\otimes1_{M_{\Lambda}(R)}), \\ (1_{M_{\Lambda}(R)}\otimes f)(L^{-1})_{ab} - (L^{-1})_{ab}(1_{M_{\Lambda}(R)}\otimes T_{\deg (a)}(f))\;\;(\forall f\in M_{\Lambda}(R), \forall a,b\in X).\) \\ \item[(4)] \(\displaystyle\sum_{x,y\in X} (\sigma^{xy}_{ac}\otimes1_{M_{\Lambda}(R)})L_{yd}L_{xb} - \sum_{x,y\in X} (1_{M_{\Lambda}(R)}\otimes\sigma^{bd}_{xy})L_{cy}L_{ax}\;\;(\forall a,b,c,d\in X).\) \\ \item[(5)] \(\emptyset - 1_{M_{\Lambda}(R)}\otimes1_{M_{\Lambda}(R)}\).
\\ \end{itemize} \begin{theo}(See \cite[Theorem 2.1]{oshibu}.) If the following conditions are satisfied, then the quotient \( A_{\sigma} := \mathbb{K} \langle \Lambda X \rangle / I_{\sigma} \) is a left bialgebroid. \begin{equation} \begin{aligned} \begin{cases} \sigma^{ab}_{cd} (\lambda) \in Z(R) \;\; ( \forall \lambda \in \Lambda, \forall a, b, c, d \in X ) ; \\ \lambda \deg(d) \deg(b) \neq \lambda \deg(c) \deg(a) \Rightarrow \sigma^{bd}_{ac} (\lambda) = 0. \label{cond:invc} \end{cases} \end{aligned} \end{equation} Here \( Z(R) \) is the center of \( R \). \end{theo} The maps \( s_{M_{\Lambda}(R)} \colon M_{\Lambda}(R) \to A_{\sigma} \) and \( t_{M_{\Lambda}(R)} \colon M_{\Lambda}(R)^{op} \to A_{\sigma} \) are defined by \begin{align*} &s_{M_{\Lambda}(R)}(f) = f \otimes 1_{M_{\Lambda}(R)} + I_{\sigma}; \\ &t_{M_{\Lambda}(R)}(f) = 1_{M_{\Lambda}(R)} \otimes f + I_{\sigma} \;\; (f \in M_{\Lambda}(R)). \end{align*} These are \( \mathbb{K} \)-algebra homomorphisms and satisfy \eqref{def:stcom}. Thus \( A_{\sigma} \) is an \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodule via \eqref{def:ac}. Let \( I_2 \) denote the right ideal of \( A_{\sigma} \otimes_{\mathbb{K}} A_{\sigma} \) whose generators are \( t_{M_{\Lambda}(R)}( f ) \otimes 1_{A_{\sigma}} - 1_{A_{\sigma}} \otimes s_{M_{\Lambda}(R)}( f ) \; (\forall f \in M_{\Lambda}(R)) \). The \( \mathbb{K} \)-algebra homomorphism \( \overline{\Delta} \colon \mathbb{K} \langle \Lambda X \rangle \to A_{\sigma} \otimes_{\mathbb{K}} A_{\sigma} \) is defined by \begin{align*} &\overline{\Delta}(\xi) = s_{M_{\Lambda}(R)} \otimes t_{M_{\Lambda}(R)}(\xi) \;\; (\xi \in M_{\Lambda}(R) \otimes_{\mathbb{K}} M_{\Lambda}(R)^{op}); \\ &\overline{\Delta}(L_{ab}) = \sum_{c \in X} ( L_{ac} + I_{\sigma} ) \otimes ( L_{cb} + I_{\sigma} ) \;\; (a,b \in X); \\ &\overline{\Delta}((L^{-1})_{ab}) = \sum_{c \in X} ( (L^{-1})_{cb} + I_{\sigma} ) \otimes ( (L^{-1})_{ac} + I_{\sigma} ). \end{align*} This map \( \overline{\Delta} \) satisfies \( \overline{\Delta}( I_{\sigma} ) \subset I_2 \). Thus the \( \mathbb{K} \)-linear map \( \tilde{\Delta}( \alpha + I_{\sigma} ) = \overline{\Delta}( \alpha ) + I_2 \; (\alpha \in \mathbb{K} \langle \Lambda X \rangle) \) is well defined. Since \( A_{\sigma} \otimes_{\mathbb{K}} A_{\sigma} / I_2 \cong A_{\sigma} \otimes_{M_{\Lambda}(R)} A_{\sigma} \) as \( \mathbb{K} \)-vector spaces, we can induce the \( \mathbb{K} \)-linear map \( \Delta_{M_{\Lambda}(R)} \colon A_{\sigma} \to A_{\sigma} \otimes_{M_{\Lambda}(R)} A_{\sigma} \) from the map \( \tilde{\Delta} \). This \( \Delta_{M_{\Lambda}(R)} \) is an \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodule homomorphism. The next step is to define the map \( \pi_{M_{\Lambda}(R)} \colon A_{\sigma} \to M_{\Lambda}(R) \). The \( \mathbb{K} \)-algebra homomorphism \( \overline{\chi} \colon \mathbb{K} \langle \Lambda X \rangle \to {\rm End}_{\mathbb{K}}( M_{\Lambda}(R) ) \) is defined by \begin{align*} &\overline{\chi}(f \otimes g) = \rho_l( f ) \rho_r(g) \;\; (f, g \in M_{\Lambda}(R)); \\ &\overline{\chi} ( L_{ab} ) = \delta_{a, b} T_{\deg (a)}; \\ &\overline{\chi} ( (L^{-1})_{ab} ) = \delta_{a, b} T_{\deg (a)^{-1}} \;\; ( a, b \in X ). \end{align*} Here \( \rho_l (f) \) and \( \rho_r (f) \; (f \in M_{\Lambda}(R)) \) are the maps defined by \begin{equation*} \rho_l(f) \colon M_{\Lambda}(R) \ni g \mapsto fg \in M_{\Lambda}(R); \;\; \rho_r(f) \colon M_{\Lambda}(R) \ni g \mapsto gf \in M_{\Lambda}(R).
\end{equation*} Because \( \overline{\chi}( I_{\sigma} ) = \{ 0 \} \) is satisfied, the map \( \chi(\alpha + I_{\sigma}) = \overline{\chi}(\alpha) \; (\alpha \in \mathbb{K} \langle \Lambda X \rangle) \) makes sense and is a \( \mathbb{K} \)-algebra homomorphism. We define the map \( \pi_{M_{\Lambda}(R)} \) by \begin{equation} \pi_{M_{\Lambda}(R)} \colon A_{\sigma} \ni a \mapsto \chi(a)( 1_{M_{\Lambda}(R)} ) \in M_{\Lambda}(R). \end{equation} This \( \pi_{M_{\Lambda}(R)} \) is an \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodule homomorphism. The triplet \( (A_{\sigma}, \Delta_{M_{\Lambda}(R)}, \pi_{M_{\Lambda}(R)}) \) is a comonoid in the tensor category of \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodules. Since the maps \( \Delta_{M_{\Lambda}(R)} \) and \( \pi_{M_{\Lambda}(R)} \) satisfy the conditions \eqref{def:st} - \eqref{def:pmulti}, the sextuplet \( (A_{\sigma}, M_{\Lambda}(R), s_{M_{\Lambda}(R)}, t_{M_{\Lambda}(R)}, \Delta_{M_{\Lambda}(R)}, \pi_{M_{\Lambda}(R)}) \) is a left bialgebroid. Let \( \sigma = \{ \sigma^{ab}_{cd} \}_{a, b, c, d \in X} \). This left bialgebroid \( A_{\sigma} \) has a Hopf algebroid structure if \( \sigma \) satisfies a certain condition, called rigidity. \begin{defi}(See \cite[Definition 4.2]{oshibu}.) The family \( \sigma = \{ \sigma^{ab}_{cd} \}_{a, b, c, d \in X} \) is called rigid, iff, for any \( a, b \in X \), there exist \( x_{ab}, y_{ab} \in A_{\sigma} \) such that \begin{align*} \sum_{c \in X} ( (L^{-1})_{cb} + I_{\sigma} ) x_{ac} &= \sum_{c \in X} x_{cb} ( (L^{-1})_{ac} + I_{\sigma} ) \\ &= \sum_{c \in X} ( L_{cb} + I_{\sigma} ) y_{ac} \\ &= \sum_{c \in X} y_{cb} ( L_{ac} + I_{\sigma} ) \\ &= \delta_{a, b} 1_{A_{\sigma}}. \end{align*} \end{defi} \begin{prop}(See \cite[Proposition 4.1]{oshibu}.) The following are equivalent: \begin{enumerate} \item \( \sigma \) is rigid; \item There exists a unique \( \mathbb{K} \)-algebra anti-automorphism \( S \colon A_{\sigma} \to A_{\sigma} \) satisfying \begin{equation} \begin{cases} S(f \otimes g + I_{\sigma}) = g \otimes f + I_{\sigma} \; (f, g \in M_{\Lambda}(R)); \\ S(L_{ab} + I_{\sigma}) = (L^{-1})_{ab} + I_{\sigma}; \\ S((L^{-1})_{ab} + I_{\sigma}) = x_{ab} \;\; (a, b \in X). \end{cases} \end{equation} \end{enumerate} \end{prop} \begin{prop}(See \cite[Proposition 4.2]{oshibu}.) If \( \sigma \) is rigid, then the pair \( (A_{\sigma}, S) \) is a Hopf algebroid for \( N = M_{\Lambda}( R )^{op} \) and \( \omega = {\rm id}_{M_{\Lambda}( R )} \). \end{prop} \subsection{Left bialgebroid \( \mathfrak{A}(w) \)} \label{sec:Aw} In this subsection, we introduce a left bialgebroid \( \mathfrak{A}(w) \). This is a generalization of \cite{hayas}. \begin{defi} Let \( \Lambda \) be a non-empty set. A set \(Q\) endowed with two maps \( \mathfrak{s}, \mathfrak{t} \colon Q \to \Lambda \) is said to be a quiver over \( \Lambda \). These maps \( \mathfrak{s} \) and \( \mathfrak{t} \) are respectively called the source map and the target map. For a non-negative integer \( m \), we define the fiber product \( Q^{(m)} \) by \( Q^{(0)} := \Lambda \), \( Q^{(1)} := Q \), and \( Q^{(m)} := \{ q = (q_1, \ldots, q_m) \in Q^m \; | \; \mathfrak{t}( q_{i} ) = \mathfrak{s}( q_{i + 1} ), 1 \leq \forall i \leq m-1 \} \; (m>1) \). The set \( Q^{(m)} \; (m>0) \) is a quiver over \( \Lambda \) with \( \mathfrak{s}(q) = \mathfrak{s}(q_1) \), \( \mathfrak{t}(q) = \mathfrak{t}(q_m) \). \( Q^{(0)} \) is also a quiver over \( \Lambda \) by \( \mathfrak{s} = \mathfrak{t} = {\rm id}_{\Lambda} \). 
\end{defi} Let \( \Lambda \) be a non-empty finite set, and \( Q \) a finite quiver over \( \Lambda \). We denote by \( \mathfrak{G}(Q) \) the linear span of the symbols \( \displaystyle \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} \) \( ( p, q \in Q^{(m)}, m \in \mathbb{Z}_{\geq 0} ) \): \begin{equation} \mathfrak{G}(Q) := \bigoplus_{p, q \in Q^{(m)}, m \in \mathbb{Z}_{\geq 0}} \mathbb{K} \; \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q}. \end{equation} This \( \mathfrak{G}(Q) \) is a \( \mathbb{K} \)-algebra by the following multiplication: \begin{align*} \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} \mathbf{e}\genfrac{[}{]}{0pt}{}{p^{\prime}}{q^{\prime}} &= \delta_{\mathfrak{t}( p ), \mathfrak{s}( p^{\prime} )} \delta_{\mathfrak{t}( q ), \mathfrak{s}( q^{\prime} )} \mathbf{e}\genfrac{[}{]}{0pt}{}{p p^{\prime}}{q q^{\prime}}; \\ 1_{\mathfrak{G}(Q)} &= \sum_{\lambda, \mu \in \Lambda} \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} \end{align*} for \( p, q \in Q^{(m)}, p^{\prime}, q^{\prime} \in Q^{(n)} \), and \( m, n \in \mathbb{Z}_{\geq 0} \). Here \( \delta_{\lambda, \mu} \in \mathbb{K} \; (\lambda, \mu \in \Lambda) \) means Kronecker's delta symbol. For a \( \mathbb{K} \)-algebra \(R\), let \( \mathbf{w}\begin{sumibmatrix} & a & \\ c & & b \\ & d & \end{sumibmatrix} \in R \; ((a, b), (c, d) \in Q^{(2)}) \). We write \( \mathfrak{I}_{\mathbf{w}} \) for the two-sided ideal of the \( \mathbb{K} \)-algebra \( \mathfrak{H}(Q) := R \otimes_{\mathbb{K}} R^{op} \otimes_{\mathbb{K}} \mathfrak{G}(Q) \) whose generators are \begin{align} &\sum_{( x, y ) \in Q^{(2)}} \mathbf{w}\begin{sumibmatrix} & x & \\ a & & y \\ & b & \end{sumibmatrix} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{x}{c} \mathbf{e}\genfrac{[}{]}{0pt}{}{y}{d} \nonumber \\ &- \sum_{( x, y ) \in Q^{(2)}} 1_R \otimes \mathbf{w}\begin{sumibmatrix} & c & \\ x & & d \\ & y & \end{sumibmatrix} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{a}{x} \mathbf{e}\genfrac{[}{]}{0pt}{}{b}{y} \;\; ( \forall (a, b), (c, d) \in Q^{(2)} ). \label{gen:face} \end{align} We define \( \mathfrak{A}(w) \) by the quotient \( \mathfrak{A}(w) := \mathfrak{H}(Q) / \mathfrak{I}_{\mathbf{w}} \). \begin{theo}\label{theo:lb} If the following conditions are satisfied, then \( \mathfrak{A}(w) \) is a left bialgebroid. \begin{equation} \begin{aligned} \begin{cases} \mathbf{w}\begin{sumibmatrix} & a & \\ c & & b \\ & d & \end{sumibmatrix} \in Z(R) \;\; ( \forall (a, b) , (c, d) \in Q^{(2)} ) ; \\ \mathfrak{s}( a ) \neq \mathfrak{s}( c ) \; {\rm or} \; \mathfrak{t}( b ) \neq \mathfrak{t}( d ) \Rightarrow \mathbf{w}\begin{sumibmatrix} & a & \\ c & & b \\ & d & \end{sumibmatrix} = 0. \end{cases} \end{aligned} \label{face} \end{equation} \end{theo} The maps \( s_{M_{\Lambda}(R)} \colon M_{\Lambda}(R) \to \mathfrak{A}(w) \) and \( t_{M_{\Lambda}(R)} \colon M_{\Lambda}(R)^{op} \to \mathfrak{A}(w) \) are defined by \begin{align*} &s_{M_{\Lambda}(R)}(f) = \sum_{\lambda, \mu \in \Lambda} f( \lambda ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ; \\ &t_{M_{\Lambda}(R)}(f) = \sum_{\lambda, \mu \in \Lambda} 1_R \otimes f( \lambda ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}} \;\; (f \in M_{\Lambda}(R)). \end{align*} These maps are \( \mathbb{K} \)-algebra homomorphisms satisfying \eqref{def:stcom}. As a result, \( \mathfrak{A}(w) \) is an \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodule by the action \eqref{def:ac}. 
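As a quick illustration, the relation \eqref{def:stcom} can be checked directly in this setting; the following computation is a sketch using only the multiplication rule of \( \mathfrak{G}(Q) \) for paths of length zero. For any \( f, g \in M_{\Lambda}(R) \),
\begin{equation*}
s_{M_{\Lambda}(R)}( f )\, t_{M_{\Lambda}(R)}( g ) = \sum_{\lambda, \mu \in \Lambda} f( \lambda ) \otimes g( \mu ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} = t_{M_{\Lambda}(R)}( g )\, s_{M_{\Lambda}(R)}( f ),
\end{equation*}
because \( \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\nu} = \delta_{\lambda, \tau} \delta_{\mu, \nu} \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} \) for \( \lambda, \mu, \tau, \nu \in \Lambda = Q^{(0)} \) and the two tensor factors of \( R \otimes_{\mathbb{K}} R^{op} \) multiply independently.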
Let \( \mathfrak{I}_2 \) denote the right ideal of \( \mathfrak{A}(w) \otimes_{\mathbb{K}} \mathfrak{A}(w) \) whose generators are \( t_{M_{\Lambda}(R)}( f ) \otimes 1_{\mathfrak{A}(w)} - 1_{\mathfrak{A}(w)} \otimes s_{M_{\Lambda}(R)}( f ) \; (\forall f \in M_{\Lambda}(R)) \). In order to construct the map \( \Delta_{M_{\Lambda}(R)} \), we define the \( \mathbb{K} \)-linear map \( \overline{\nabla} \colon \mathfrak{H}(Q) \to \mathfrak{A}(w) \otimes_{\mathbb{K}} \mathfrak{A}(w) \) by \begin{align*} \overline{\nabla}( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} ) = \sum_{u \in Q^{(m)}} ( r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{u} + \mathfrak{I}_{\mathbf{w}} ) \otimes ( 1_R \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q} + \mathfrak{I}_{\mathbf{w}} ) \end{align*} for \( r, r^{\prime} \in R \), \( p, q \in Q^{(m)} \), and \( m \in \mathbb{Z}_{\geq 0} \). This map \( \overline{\nabla} \) preserves the multiplication of the \( \mathbb{K} \)-algebra \( \mathfrak{H}(Q) \). \begin{prop} \( \overline{\nabla}( \mathfrak{I}_{\mathbf{w}} ) \subset \mathfrak{I}_2 \). \end{prop} \begin{proof} It is easy to check that \( \overline{\nabla}( \alpha ) \beta \in \mathfrak{I}_2 \) for any \( \alpha \in \mathfrak{H}(Q) \) and \( \beta \in \mathfrak{I}_2 \). In order to complete the proof, we need to show that \( \overline{\nabla}( \gamma ) \in \mathfrak{I}_2 \) for an arbitrary generator \( \gamma \) in \eqref{gen:face}. For any \( (a,b), (c,d) \in Q^{(2)} \), we can induce the following equality by using the definition of \( \mathfrak{I}_{\mathbf{w}} \): \begin{align*} &\overline{\nabla}( \sum_{( x, y ) \in Q^{(2)}} \mathbf{w}\begin{sumibmatrix} & x & \\ a & & y \\ & b & \end{sumibmatrix} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{x}{c} \mathbf{e}\genfrac{[}{]}{0pt}{}{y}{d} - \sum_{( x, y ) \in Q^{(2)}} 1_R \otimes \mathbf{w}\begin{sumibmatrix} & c & \\ x & & d \\ & y & \end{sumibmatrix} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{a}{x} \mathbf{e}\genfrac{[}{]}{0pt}{}{b}{y} ) \\ =& \sum_{(x, y), (u, v) \in Q^{(2)}} ( \mathbf{w}\begin{sumibmatrix} & x & \\ a & & y \\ & b & \end{sumibmatrix} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{x}{u} \mathbf{e}\genfrac{[}{]}{0pt}{}{y}{v} + \mathfrak{I}_{\mathbf{w}} ) \otimes ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{c} \mathbf{e}\genfrac{[}{]}{0pt}{}{v}{d} + \mathfrak{I}_{\mathbf{w}} ) \\ -& \sum_{(x, y), (u, v) \in Q^{(2)}} ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{a}{u} \mathbf{e}\genfrac{[}{]}{0pt}{}{b}{v} + \mathfrak{I}_{\mathbf{w}} ) \otimes ( 1_R \otimes \mathbf{w}\begin{sumibmatrix} & c & \\ x & & d \\ & y & \end{sumibmatrix} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{x} \mathbf{e}\genfrac{[}{]}{0pt}{}{v}{y} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{(x, y), (u, v) \in Q^{(2)}} ( 1_R \otimes \mathbf{w}\begin{sumibmatrix} & u & \\ x & & v \\ & y & \end{sumibmatrix} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{a}{x} \mathbf{e}\genfrac{[}{]}{0pt}{}{b}{y} + \mathfrak{I}_{\mathbf{w}} ) \otimes ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{c} \mathbf{e}\genfrac{[}{]}{0pt}{}{v}{d} + \mathfrak{I}_{\mathbf{w}} ) \\ -& \sum_{(x, y), (u, v) \in Q^{(2)}} ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{a}{x} \mathbf{e}\genfrac{[}{]}{0pt}{}{b}{y} + \mathfrak{I}_{\mathbf{w}} ) \otimes ( \mathbf{w}\begin{sumibmatrix} & u & \\ x & & v \\ & y & \end{sumibmatrix} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{c} \mathbf{e}\genfrac{[}{]}{0pt}{}{v}{d} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{(x, y), (u, v) \in Q^{(2)}} ( t_{M_{\Lambda}(R)}( \mathbf{w} \begin{sumibmatrix} & u & \\ x & & v \\ & y & \end{sumibmatrix}_M ) \otimes 1_{\mathfrak{A}(w)} - 1_{\mathfrak{A}(w)} \otimes s_{M_{\Lambda}(R)}( \mathbf{w}\begin{sumibmatrix} & u & \\ x & & v \\ & y & \end{sumibmatrix}_M ) ) \\ &\times ( ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{a}{x} \mathbf{e}\genfrac{[}{]}{0pt}{}{b}{y} + \mathfrak{I}_{\mathbf{w}} ) \otimes ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{c} \mathbf{e}\genfrac{[}{]}{0pt}{}{v}{d} + \mathfrak{I}_{\mathbf{w}} ) ) \\ &\in \mathfrak{I}_2. \end{align*} Here \( r_M \in M_{\Lambda}(R) \; (r \in R) \) is the map defined by \( r_M( \lambda ) = r \; (\lambda \in \Lambda) \). Thus this proposition is proved. \end{proof} This proposition induces a \( \mathbb{K} \)-linear map \( \tilde{\nabla}( \alpha + \mathfrak{I}_{\mathbf{w}} ) = \overline{\nabla}( \alpha ) + \mathfrak{I}_2 \; (\alpha \in \mathfrak{H}(Q)) \). Since \( \mathfrak{A}(w) \otimes_{\mathbb{K}} \mathfrak{A}(w) / \mathfrak{I}_2 \cong \mathfrak{A}(w) \otimes_{M_{\Lambda}(R)} \mathfrak{A}(w) \) as \( \mathbb{K} \)-vector spaces, we can construct the \( \mathbb{K} \)-linear map \( \Delta_{M_{\Lambda}(R)} \colon \mathfrak{A}(w) \to \mathfrak{A}(w) \otimes_{M_{\Lambda}(R)} \mathfrak{A}(w) \) from the map \( \tilde{\nabla} \). This \( \Delta_{M_{\Lambda}(R)} \) is an \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodule homomorphism. The next task is to construct the map \( \pi_{M_{\Lambda}(R)} \colon \mathfrak{A}(w) \to M_{\Lambda}(R) \). We first define the \( \mathbb{K} \)-linear map \( \overline{\zeta} \colon \mathfrak{H}(Q) \to {\rm End}_{\mathbb{K}}( M_{\Lambda}(R) ) \) as follows: \begin{align*} \overline{\zeta}( r \otimes r^{\prime} \otimes {\bf e}\genfrac{[}{]}{0pt}{}{p}{q} ) ( f ) = \delta_{p, q} (r f( \mathfrak{t}( q ) ) r^{\prime})_M \delta_{\mathfrak{s}( q )} \;\; (f \in M_{\Lambda}(R)). \end{align*} Here \( \delta_{\lambda} \in M_{\Lambda}(R) \; (\lambda \in \Lambda) \) is the map defined by \(\delta_{\lambda}( \mu ) = \delta_{\lambda, \mu} \; (\mu \in \Lambda) \). This map \( \overline{\zeta} \) is a \( \mathbb{K} \)-algebra homomorphism. \begin{prop} \( \overline{\zeta}( \mathfrak{I}_{\mathbf{w}} ) = \{ 0 \} \). \end{prop} \begin{proof} We denote by \( f \) an arbitrary element in \( M_{\Lambda}(R) \). By using the first condition in \eqref{face}, \begin{align*} &\overline{\zeta}( \sum_{(x, y ) \in Q^{(2)}} {\bf w}\begin{sumibmatrix} & x & \\ a & & y \\ & b & \end{sumibmatrix} \otimes 1_R \otimes {\bf e}\genfrac{[}{]}{0pt}{}{x}{c} {\bf e}\genfrac{[}{]}{0pt}{}{y}{d} \\ &- \sum_{( x, y ) \in Q^{(2)}} 1_R \otimes {\bf w}\begin{sumibmatrix} & c & \\ x & & d \\ & y & \end{sumibmatrix} \otimes {\bf e}\genfrac{[}{]}{0pt}{}{a}{x} {\bf e}\genfrac{[}{]}{0pt}{}{b}{y} ) ( f ) \\ =& ({\bf w}\begin{sumibmatrix} & c & \\ a & & d \\ & b & \end{sumibmatrix} f( \mathfrak{t}( d ) ))_M \delta_{\mathfrak{s}(c)} - ( f( \mathfrak{t}( b ) ) {\bf w}\begin{sumibmatrix} & c & \\ a & & d \\ & b & \end{sumibmatrix} )_M \delta_{\mathfrak{s}(a)} \\ =& ({\bf w}\begin{sumibmatrix} & c & \\ a & & d \\ & b & \end{sumibmatrix} f( \mathfrak{t}( d ) ))_M \delta_{\mathfrak{s}(c)} - ({\bf w}\begin{sumibmatrix} & c & \\ a & & d \\ & b & \end{sumibmatrix} f( \mathfrak{t}( b ) ))_M \delta_{\mathfrak{s}(a)} \end{align*} for all \( (a, b) \) and \( (c, d) \in Q^{(2)} \).
If \( {\bf w}\begin{sumibmatrix} & c & \\ a & & d \\ & b & \end{sumibmatrix} \neq 0 \), then \( \mathfrak{s}(a) = \mathfrak{s}(c) \) and \( \mathfrak{t}(b) = \mathfrak{t}(d) \) are satisfied because of the second condition in \eqref{face}, so the two terms above coincide. This completes the proof. \end{proof} As a result of this proposition, the map \( \zeta(\alpha + \mathfrak{I}_{\mathbf{w}}) = \overline{\zeta}(\alpha) \; (\alpha \in \mathfrak{H}(Q)) \) is a well-defined \( \mathbb{K} \)-algebra homomorphism. We define the map \( \pi_{M_{\Lambda}(R)} \) by \begin{equation} \pi_{M_{\Lambda}(R)} \colon \mathfrak{A}(w) \ni a \mapsto \zeta(a)( 1_{M_{\Lambda}(R)} ) \in M_{\Lambda}(R). \end{equation} This \( \pi_{M_{\Lambda}(R)} \) is an \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodule homomorphism. \begin{prop} The triplet \( (\mathfrak{A}(w), \Delta_{M_{\Lambda}(R)}, \pi_{M_{\Lambda}(R)}) \) is a comonoid in the tensor category of \( (M_{\Lambda}(R), M_{\Lambda}(R)) \)-bimodules. \end{prop} \begin{proof} For any \( r, r^{\prime} \in R \), \( p, q \in Q^{(m)} \) and \( m \in \mathbb{Z}_{\geq 0} \), \begin{align*} &(\Delta_{M_{\Lambda}(R)} \otimes {\rm id_{\mathfrak{A}(w)}}) \circ \Delta_{M_{\Lambda}(R)}( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{u, v \in Q^{(m)}} (r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{v} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{v}{u} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q} + \mathfrak{I}_{\mathbf{w}}) \\ =& ( {\rm id_{\mathfrak{A}(w)}} \otimes \Delta_{M_{\Lambda}(R)} ) \circ \Delta_{M_{\Lambda}(R)}( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ). \end{align*} We write \( \displaystyle a = r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} \). By using Sweedler's notation \( \Delta_{M_{\Lambda}(R)}(a) = a_{[1]} \otimes a_{[2]} \), \begin{align*} \pi_{M_{\Lambda}(R)}(a_{[1]}) a_{[2]} =& \sum_{\substack{u \in Q^{(m)} \\ \lambda, \mu \in \Lambda}} \delta_{p, u} ((r_M \delta_{\mathfrak{s}(u)})(\lambda) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu}) (1_R \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q}) + \mathfrak{I}_{\mathbf{w}} \\ =& \sum_{u \in Q^{(m)}} \delta_{p,u} (r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q}) + \mathfrak{I}_{\mathbf{w}} \\ =& r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}}. \end{align*} The proof for \( a_{[1]} \pi_{M_{\Lambda}(R)}(a_{[2]}) = a \) is similar. This is the desired conclusion. \end{proof} \begin{prop} The maps \( \Delta_{M_{\Lambda}(R)} \) and \( \pi_{M_{\Lambda}(R)} \) satisfy the conditions \eqref{def:st} - \eqref{def:pmulti}. \end{prop} \begin{proof} We first show \eqref{def:st}. For any \( r, r^{\prime} \in R \), \( p, q \in Q^{(m)} \) and \( m \in \mathbb{Z}_{\geq 0} \), we write \( a = r \otimes r^{\prime} \otimes \displaystyle\mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} \). Let \( f \) be an arbitrary element in \( M_{\Lambda}(R) \).
We can evaluate that \begin{align*} &a_{[1]} t_{M_{\Lambda}(R)}( f ) \otimes a_{[2]} \\ =& \sum_{u \in Q^{(m)}} (r \otimes f(\mathfrak{t}(u)) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{u} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q} + \mathfrak{I}_{\mathbf{w}}) \\ =& \sum_{u \in Q^{(m)}} t_{M_{\Lambda}(R)}( f( \mathfrak{t}(u) )_M ) (r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{u} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q} + \mathfrak{I}_{\mathbf{w}}) \\ =& \sum_{u \in Q^{(m)}} (r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{u} + \mathfrak{I}_{\mathbf{w}}) \otimes s_{M_{\Lambda}(R)}( f( \mathfrak{t}(u) )_M ) (1_R \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q} + \mathfrak{I}_{\mathbf{w}}) \\ =& \sum_{u \in Q^{(m)}} (r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{u} + \mathfrak{I}_{\mathbf{w}}) \otimes (f( \mathfrak{t}(u) ) \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{u}{q} + \mathfrak{I}_{\mathbf{w}}) \\ =& a_{[1]} \otimes a_{[2]} s_{M_{\Lambda}(R)}(f). \end{align*} Therefore \eqref{def:st} is satisfied. We next prove \eqref{def:Duni}. For any \( \lambda \in \Lambda \), \begin{align*} &(t_{M_{\Lambda}(R)}( \delta_{\lambda} ) \otimes 1_{\mathfrak{A}(w)} - 1_{\mathfrak{A}(w)} \otimes s_{M_{\Lambda}(R)}( \delta_{\lambda} ) ) ( \sum_{\mu \in \Lambda} ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}} ) \otimes 1_{\mathfrak{A}(w)} )\\ =& \sum_{\substack{\mu, \tau, \nu \in \Lambda \\ \lambda \neq \tau}} (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\nu} + \mathfrak{I}_{\mathbf{w}}) \in \mathfrak{I}_2. \end{align*} Thus we can induce that \begin{align*} &\overline{\nabla}( 1_R \otimes 1_R \otimes 1_{\mathfrak{G}(Q)} ) + \mathfrak{I}_2 \\ =& \sum_{\lambda, \mu, \nu \in \Lambda} (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\nu} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\nu}{\mu} + \mathfrak{I}_{\mathbf{w}}) \\ & + \sum_{\substack{\lambda, \mu, \nu, \tau \in \Lambda \\ \nu \neq \tau}} (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\nu} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\mu} + \mathfrak{I}_{\mathbf{w}}) + \mathfrak{I}_2 \\ =& \sum_{\lambda, \mu, \nu, \tau \in \Lambda} (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\nu} + \mathfrak{I}_{\mathbf{w}}) \otimes (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\mu} + \mathfrak{I}_{\mathbf{w}}) + \mathfrak{I}_2. \end{align*} Since \( \mathfrak{A}(w) \otimes_{\mathbb{K}} \mathfrak{A}(w) / \mathfrak{I}_2 \cong \mathfrak{A}(w) \otimes_{M_{\Lambda}(R)} \mathfrak{A}(w) \) as \( \mathbb{K} \)-vector spaces, \eqref{def:Duni} is proved. The proof of \eqref{def:Dmulti} is similar to that of the multiplicativity of the map \( \overline{\nabla} \). Let us prove \eqref{def:punit}. Because \( 1_{\mathfrak{G}(Q)} = \sum_{\lambda, \mu \in \Lambda} \displaystyle\mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} \), \begin{align*} \pi_{M_{\Lambda}(R)}( 1_{\mathfrak{A}(w)} ) =& \sum_{\lambda, \mu \in \Lambda} \delta_{\lambda, \mu} \delta_{\mu} \\ =& \sum_{\lambda \in \Lambda} \delta_{\lambda} = 1_{M_{\Lambda}(R)}. \end{align*} Finally, we give a proof of \eqref{def:pmulti}.
Because \( \zeta \) is a \( \mathbb{K} \)-algebra homomorphism, it is sufficient to prove that \( \zeta(a)( 1_{M_{\Lambda}(R)} ) =\zeta( s_{M_{\Lambda}(R)}( \pi_{M_{\Lambda}(R)}( a ) ) )( 1_{M_{\Lambda}(R)} ) = \zeta( t_{M_{\Lambda}(R)}( \pi_{M_{\Lambda}(R)}( a ) ) )( 1_{M_{\Lambda}(R)} ) \) for all \( a \in \mathfrak{A}(w) \). Let \( r, r^{\prime} \in R \), \( p, q \in Q^{(m)} \), and \( m \in \mathbb{Z}_{\geq 0} \). We can evaluate that \begin{align*} &\zeta( s_{M_{\Lambda}(R)}( \pi_{M_{\Lambda}(R)}( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) ) )( 1_{M_{\Lambda}(R)} ) \\ =& \sum_{\lambda \in \Lambda} \delta_{p, q} \zeta( r r^{\prime} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mathfrak{s}(q)}{\lambda} + \mathfrak{I}_{\mathbf{w}} )( 1_{M_{\Lambda}(R)} ) \\ =& \delta_{p, q} (r r^{\prime})_M \delta_{\mathfrak{s}(q)} \\ =& \zeta( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} )( 1_{M_{\Lambda}(R)} ). \end{align*} The proof for \( \displaystyle \zeta( t_{M_{\Lambda}(R)}( \pi_{M_{\Lambda}(R)}( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) ) )( 1_{M_{\Lambda}(R)} ) = \zeta( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} )( 1_{M_{\Lambda}(R)} ) \) is similar. Thus we conclude \eqref{def:pmulti}. This completes the proof. \end{proof} The sextuplet \( (\mathfrak{A}(w), M_{\Lambda}(R), s_{M_{\Lambda}(R)}, t_{M_{\Lambda}(R)}, \Delta_{M_{\Lambda}(R)}, \pi_{M_{\Lambda}(R)}) \) is therefore a left bialgebroid by the above propositions. \section{Left bialgebroid homomorphism \( \Phi \)} \label{sec:Phi} In this section, we construct a left bialgebroid \( \mathfrak{A}(w_{\sigma}) \) as in Subsection \ref{sec:Aw} from the setting of the left bialgebroid \( A_{\sigma} \) in Subsection \ref{sec:As}, together with a left bialgebroid homomorphism \( \Phi \) from \( \mathfrak{A}(w_{\sigma}) \) to \( A_{\sigma} \). This is a generalization of \cite{matsu}. Let \( A_{\sigma} \) be the left bialgebroid in Subsection \ref{sec:As} and \( \sigma^{ab}_{cd} \in M_{\Lambda}(R) \; ( a, b, c, d \in X ) \) elements satisfying the condition \eqref{cond:invc}. We define a quiver \( Q \) over \( \Lambda \) by \begin{equation} Q := \Lambda \times X, \; \mathfrak{s}(\lambda, x) = \lambda, \; \mathfrak{t}(\lambda, x) = \lambda \deg(x) \;\; (\lambda \in \Lambda, x \in X) \label{quilx} \end{equation} and set \begin{equation} \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} = \delta_{\lambda, \mu} \sigma^{ba}_{dc}( \lambda ) \label{wsig} \end{equation} for all \( ( (\lambda, a), ( \lambda^{\prime}, b ) ), ( (\mu, c), (\mu^{\prime}, d) ) \in Q^{(2)} \). \begin{prop} The definition \eqref{wsig} satisfies the condition \eqref{face}. \end{prop} \begin{proof} Let \( ( (\lambda, a), ( \lambda^{\prime}, b ) ), ( (\mu, c), (\mu^{\prime}, d) ) \in Q^{(2)} \). It is clear that \( \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} \in Z(R) \), because \( \sigma^{ba}_{dc}( \lambda ) \in Z(R) \). We next prove that \( \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} = 0 \) if \( \mathfrak{s}(\lambda, a) \neq \mathfrak{s}(\mu, c) \) or \( \mathfrak{t}(\lambda^{\prime}, b) \neq \mathfrak{t}(\mu^{\prime}, d) \).
It follows from \eqref{wsig} that \( \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} = 0 \) if \( \mathfrak{s}(\lambda, a) \neq \mathfrak{s}(\mu, c) \). Suppose that \( \mathfrak{t}(\lambda^{\prime}, b) \neq \mathfrak{t}(\mu^{\prime}, d) \). By the definition of the fiber product of the quiver \( Q \), we have \begin{equation*} \mathfrak{t}(\lambda^{\prime}, b) = \lambda \deg(a) \deg(b) \;\;\; {\rm and} \;\;\; \mathfrak{t}(\mu^{\prime}, d) = \mu \deg(c) \deg(d). \end{equation*} If \( \lambda \neq \mu \), then \( \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} = 0 \) because of the factor \( \delta_{\lambda, \mu} \) in \eqref{wsig}. If \( \lambda = \mu \), then \( \lambda \deg(a) \deg(b) \neq \lambda \deg(c) \deg(d) \), and hence \( \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} = 0 \) because \( \sigma^{ba}_{dc}(\lambda) = 0 \). This completes the proof. \end{proof} Therefore we can construct the left bialgebroid \( \mathfrak{A}(w_{\sigma}) := \mathfrak{A}(w) \) as in Subsection \ref{sec:Aw}. \begin{theo} \label{theo:wh} Let \( r, r^{\prime} \in R \), \( m \in \mathbb{Z}_{\geq 0} \), \( p = ( (\lambda_1, x_1), \ldots, (\lambda_m, x_m) ) \), and \( q = ( (\mu_1, y_1), \ldots, (\mu_m, y_m) ) \in Q^{(m)} \). We define the \( \mathbb{K} \)-linear map \( \overline{\Phi} \colon \mathfrak{H}(Q) \to A_{\sigma} \) by \begin{equation*} \overline{\Phi}( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} ) = (r_M \otimes r^{\prime}_M) (\delta_{\mathfrak{s}( p )} \otimes \delta_{\mathfrak{s}( q )}) L_{x_1 y_1} \dotsb L_{x_m y_m} + I_{\sigma}. \end{equation*} Then \( \overline{\Phi} \) induces a left bialgebroid homomorphism \( ( \Phi \colon \mathfrak{A}(w_{\sigma}) \to A_{\sigma}, {\rm id}_{M_{\Lambda}(R)} ) \). \end{theo} \begin{proof} We first prove that \( \overline{\Phi}( \mathfrak{I}_{w_{\sigma}} ) = \{ 0 \} \). Since the map \( \overline{\Phi} \) is a \( \mathbb{K} \)-algebra homomorphism, we only need to prove that \( \overline{\Phi}( \alpha ) = 0 \) for every generator \( \alpha \) in \eqref{gen:face}.
For any \( (( \mu, a ), ( \mu^{\prime}, b )) \), \( (( \nu, c ), ( \nu^{\prime}, d )) \in Q^{(2)} \), \begin{align*} &\sum_{( (\lambda, x), (\lambda^{\prime}, y) ) \in Q^{(2)}} \overline{\Phi}( \mathbf{w}\begin{sumibmatrix} & (\lambda, x) & \\ (\mu, a) & & (\lambda^{\prime}, y) \\ & (\mu^{\prime}, b) & \end{sumibmatrix} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, x)}{(\nu, c)} \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda^{\prime}, y)}{(\nu^{\prime}, d)} ) \\ =& \sum_{\lambda \in \Lambda, x,y \in X} ( \delta_{\lambda, \mu} \sigma^{yx}_{ba}( \lambda )_M \otimes 1_{M_{\Lambda}(R)} ) ( \delta_{\lambda} \otimes \delta_{\nu} ) L_{xc} L_{yd} + I_{\sigma} \\ =& \sum_{x, y \in X} (\sigma^{yx}_{ba} \otimes 1_{M_{\Lambda}(R)})( \delta_{\mu} \otimes \delta_{\nu} ) L_{xc} L_{yd} + I_{\sigma} \\ =& ( \delta_{\mu} \otimes \delta_{\nu} ) \sum_{x, y \in X} (\sigma^{yx}_{ba} \otimes 1_{M_{\Lambda}(R)}) L_{xc} L_{yd} + I_{\sigma}, \displaybreak[0] \\ &\sum_{( (\lambda, x), (\lambda^{\prime}, y) ) \in Q^{(2)}} \overline{\Phi}( 1_R \otimes \mathbf{w}\begin{sumibmatrix} & (\nu, c) & \\ (\lambda, x) & & (\nu^{\prime}, d) \\ & (\lambda^{\prime}, y) & \end{sumibmatrix} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\mu, a)}{(\lambda, x)} \mathbf{e}\genfrac{[}{]}{0pt}{}{(\mu^{\prime}, b)}{(\lambda^{\prime}, y)} ) \\ =& \sum_{\lambda \in \Lambda, x, y \in X} (1_{M_{\Lambda}(R)} \otimes \delta_{\nu, \lambda} \sigma^{dc}_{yx}( \nu )_M) ( \delta_{\mu} \otimes \delta_{\lambda} ) L_{ax} L_{by} + I_{\sigma} \\ =& \sum_{x, y \in X} (1_{M_{\Lambda}(R)} \otimes \sigma^{dc}_{yx}) ( \delta_{\mu} \otimes \delta_{\nu} ) L_{ax} L_{by} + I_{\sigma} \\ =& ( \delta_{\mu} \otimes \delta_{\nu} ) \sum_{x, y \in X} (1_{M_{\Lambda}(R)} \otimes \sigma^{dc}_{yx}) L_{ax} L_{by} + I_{\sigma}. \end{align*} These two expressions coincide in \( A_{\sigma} \), because their difference is \( ( \delta_{\mu} \otimes \delta_{\nu} ) \) multiplied by a generator of type (4) of \( I_{\sigma} \). Hence \( \overline{\Phi} \) induces a \( \mathbb{K} \)-algebra homomorphism \( \Phi \colon \mathfrak{A}(w_{\sigma}) \to A_{\sigma} \). Let us prove that the pair of \( \mathbb{K} \)-algebra homomorphisms \( (\Phi, \mathrm{id}_{M_{\Lambda}(R)}) \) satisfies \eqref{def:sp}--\eqref{def:Dp}. We can prove \eqref{def:sp} as follows: \begin{align*} \Phi \circ s^{\mathfrak{A}(w_{\sigma})}_{M_{\Lambda}(R)} (f) =& \sum_{\lambda, \mu \in \Lambda} ( f(\lambda) \otimes 1_{M_{\Lambda}(R)} )( \delta_{\lambda} \otimes \delta_{\mu} ) + I_{\sigma} \\ =& \sum_{\lambda \in \Lambda} f(\lambda) \delta_{\lambda} \otimes 1_{M_{\Lambda}(R)} + I_{\sigma} \\ =& f \otimes 1_{M_{\Lambda}(R)} + I_{\sigma} = s^{A_{\sigma}}_{M_{\Lambda}(R)}(f). \end{align*} The proof of \eqref{def:tp} is similar to that of \eqref{def:sp}. We next prove \eqref{def:piP}. Since \( T_{\deg(a)} \) is a \( \mathbb{K} \)-algebra homomorphism for all \( a \in X \), the left-hand side of \eqref{def:piP} satisfies that \begin{align*} &\pi^{A_{\sigma}}_{M_{\Lambda}(R)} \circ \Phi( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \chi( (r_M \otimes r^{\prime}_M) (\delta_{\lambda_1} \otimes \delta_{\mu_1}) L_{x_1y_1} \dotsb L_{x_m y_m} + I_{\sigma} )( 1_{M_{\Lambda}(R)} ) \\ =& \delta_{x_1, y_1} \dotsb \delta_{x_m, y_m} (r r^{\prime})_M \delta_{\lambda_1} \delta_{\mu_1}. \end{align*} For any \( i \in \{ 1, \ldots , m-1 \} \), we see that \( \lambda_{i+1} = \lambda_{i} \deg(x_i) \) and \( \mu_{i+1} = \mu_i \deg( y_i ) \).
This fact implies that \begin{align*} &\pi^{\mathfrak{A}(w_{\sigma})}_{M_{\Lambda}(R)} ( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \delta_{\lambda_1, \mu_1} \delta_{x_1, y_1} \dotsb \delta_{x_m, y_m} (r r^{\prime})_M \delta_{\mu_1} \\ =& \begin{cases} (rr^{\prime})_M \delta_{\lambda_1}, & \text{\( ( p = q ); \)} \\ 0, & \text{(otherwise).} \end{cases} \end{align*} We conclude \eqref{def:piP} because of the above calculation. Finally, we give a proof of \eqref{def:Dp}. \begin{align*} &\Delta^{A_{\sigma}}_{M_{\Lambda}(R)} \circ \Phi ( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{z_1, \ldots, z_m \in X} (r_M \delta_{\lambda_1} \otimes 1_{M_{\Lambda}(R)} ) L_{x_1 z_1} \dotsb L_{x_m z_m} + I_{\sigma} \otimes (1_{M_{\Lambda}(R)} \otimes r^{\prime}_M \delta_{\mu_1}) L_{z_1 y_1} \dotsb L_{z_m y_m} + I_{\sigma}, \\ &(\Phi \otimes \Phi) \circ \Delta^{\mathfrak{A}(w_{\sigma})}_{M_{\Lambda}(R)} ( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{p}{q} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{\tau \in \Lambda \\ z_1, \ldots, z_m \in X}} (r_M \delta_{\lambda_1} \otimes \delta_{\tau} ) L_{x_1 z_1} \dotsb L_{x_m z_m} + I_{\sigma} \otimes (\delta_{\tau} \otimes r^{\prime}_M \delta_{\mu_1}) L_{z_1 y_1} \dotsb L_{z_m y_m} + I_{\sigma}. \end{align*} For any \( \lambda \in \Lambda \), \begin{align*} &(t^{A_{\sigma}}_{M_{\Lambda}(R)}( \delta_{\lambda} ) \otimes 1_{A_{\sigma}} - 1_{A_{\sigma}} \otimes s^{A_{\sigma}}_{M_{\Lambda}(R)}( \delta_{\lambda} ) ) ( (1_{M_{\Lambda}(R)} \otimes \delta_{\lambda}) + I_{\sigma} \otimes 1_{A_{\sigma}} )\\ =& \sum_{\substack{\mu \in \Lambda \\ \lambda \neq \mu}} (1_{M_{\Lambda}(R)} \otimes \delta_{\lambda}) + I_{\sigma} \otimes ( \delta_{\mu} \otimes 1_{M_{\Lambda}(R)} ) + I_{\sigma} \in I_2. \end{align*} Thus \eqref{def:Dp} is proved. \end{proof} \begin{ex} \label{ex:ndy} Let \( \Lambda := \mathbb{Z} / 2 \mathbb{Z} \) and \( X := \mathbb{Z}/2\mathbb{Z} \). For \( a \in \mathbb{Z} / 2 \mathbb{Z} \), we set \( \deg(a)(\lambda) = a + \lambda \;\; (\lambda \in \Lambda = \mathbb{Z}/2\mathbb{Z}) \). The maps \( \sigma_i \colon \Lambda \times X \times X \to X \times X \; (i = 1, 2) \) are defined by the following table. \begin{table}[h] \begin{center} \begin{tabular}{l|rr} \hline \( (\lambda, a, b) \) & \( \sigma_1(\lambda, a, b) \) & \( \sigma_2(\lambda, a, b) \) \\ \hline \( (0, 0, 0) \) & \( (0, 0) \) & \( (1, 1) \) \\ \( (0, 0, 1) \) & \( (0, 1) \) & \( (1, 0) \) \\ \( (0, 1, 0) \) & \( (0, 1) \) & \( (1, 0) \) \\ \( (0, 1, 1) \) & \( (0, 0) \) & \( (1, 1) \) \\ \hline \end{tabular} \;\; \begin{tabular}{l|rr} \hline \( (\lambda, a, b) \) & \( \sigma_1(\lambda, a, b) \) & \( \sigma_2(\lambda, a, b) \) \\ \hline \( (1, 0, 0) \) & \( (1, 1) \) & \( (0, 0) \) \\ \( (1, 0, 1) \) & \( (1, 0) \) & \( (0, 1) \) \\ \( (1, 1, 0) \) & \( (1, 0) \) & \( (0, 1) \) \\ \( (1, 1, 1) \) & \( (1, 1) \) & \( (0, 0) \) \\ \hline \end{tabular} \end{center} \caption{The definition of \( \sigma_i \)} \end{table} \noindent We denote by \( R \) a \( \mathbb{K} \)-algebra. The map \( \sigma^{ab}_{cd} \in M_{\Lambda}(R) \) is defined by \begin{equation*} \sigma^{ab}_{cd}( \lambda ) = \begin{cases} 1_R, & \text{( \( \sigma_i( \lambda, a, b ) = ( c, d ) \) for some \( i \in \{ 1, 2 \} \) )}; \\ 0, & \text{(otherwise)}. \end{cases} \end{equation*} The maps \( \deg \) and \( \sigma^{ab}_{cd} \; (a,b,c,d \in X) \) satisfy the condition \eqref{cond:invc}.
Thus we can construct the left bialgebroids \( A_{\sigma} \), \( \mathfrak{A}(w_{\sigma}) \), and the left bialgebroid homomorphism \( \Phi \). \end{ex} \begin{rem} The family \( \sigma = \{ \sigma^{ab}_{cd} \}_{a, b, c, d \in X} \) in Example \ref{ex:ndy} is rigid. Thus, this \( A_{\sigma} \) is a Hopf algebroid whose antipode \( S : A_{\sigma} \to A_{\sigma} \) satisfies \( S((L^{-1})_{ab} + I_{\sigma}) = L_{ab} + I_{\sigma} \; (a, b \in X) \). \end{rem} \section{Properties of \( \Phi \)} \label{sec:pro} In this section, we show that \( \mathfrak{A}(w) \), \( A_{\sigma} \), and \( \Phi \) satisfy a certain universal property in the case where the base algebra \( R \) is a Frobenius-separable \( \mathbb{K} \)-algebra. To this end, we characterize weak bialgebras (weak Hopf algebras) by generalizing the notion of the antipode \( S^{\mathrm{WHA}} \) and Hayashi's antipode \( f^- \) in \cite[Section 2]{hayas}. We first recall the convolution product. For an arbitrary \( \mathbb{K} \)-coalgebra \( (C, \Delta, \varepsilon) \) and \( \mathbb{K} \)-algebra \( A \), the \( \mathbb{K} \)-vector space \( {\rm Hom}_{\mathbb{K}}(C, A) \) becomes a \( \mathbb{K} \)-algebra by the following multiplication: \begin{align*} &(f \star g)( c ) = f(c_{(1)}) g(c_{(2)}) \;\; (f, g \in {\rm Hom}_{\mathbb{K}}(C, A), c \in C); \\ &1_{{\rm Hom}_{\mathbb{K}}(C, A)}(c) = \varepsilon( c ) 1_A. \end{align*} This multiplication is called the convolution product. Let \( A \) be a \( \mathbb{K} \)-algebra and \( e^+ \), \( e^- \), and \( x^+ \) elements in \( A \). An element \( x^{-} \in A \) is called an \( (e^+, e^-) \)-generalized inverse of \( x^+ \) if the following conditions are satisfied: \begin{equation*} x^{\pm} x^{\mp} = e^{\mp}, \;\; x^{\pm} x^{\mp} x^{\pm} = x^{\pm}. \end{equation*} We can easily check that the \( (e^+, e^-) \)-generalized inverse of \( x^+ \) is unique if it exists: if \( x^- \) and \( y^- \) are both such inverses, then \( x^- = x^- x^+ x^- = y^- x^+ x^- = y^- x^+ y^- = y^- \). \begin{defi} Let \( H \) be a weak bialgebra, \( A \) a \( \mathbb{K} \)-algebra, and \( f^{+} \colon H \to A \) a \( \mathbb{K} \)-algebra homomorphism. A \( \mathbb{K} \)-linear map \( f^- \colon H \to A \) is called an antipode of \( f^+ \) if \( f^- \) is the \( (f^+ \circ \varepsilon_s, f^+ \circ \varepsilon_t) \)-generalized inverse of \( f^+ \) with regard to the convolution product of \( {\rm Hom}_{\mathbb{K}}(H, A) \). \end{defi} The following lemmas are generalizations of \cite[Lemma 2.1 and 2.2]{hayas}. \begin{lemm} Let \( H \) be a weak bialgebra. \begin{enumerate} \item This \( H \) is a weak Hopf algebra with the antipode \( S \) if and only if \( S \in {\rm End}_{\mathbb{K}}( H ) \) is the antipode of \( {\rm id}_H \). \item If \( H^{\prime} \) is a weak Hopf algebra with the antipode \( S \) and \( f^+ \colon H \to H^{\prime} \) is a weak bialgebra homomorphism, then \( S \circ f^+ \) is the antipode of \( f^{+} \). \end{enumerate} \end{lemm} \begin{proof} We first prove 1. It is clear that \( H \) becomes a weak Hopf algebra whose antipode is \( S \) if \( S \in {\rm End}_{\mathbb{K}}( H ) \) is the antipode of \( {\rm id}_H \). Suppose that \( H \) is a weak Hopf algebra with the antipode \( S \). We give the proof only for \( {\rm id}_H \star S \star {\rm id}_H = {\rm id}_H \). By using \eqref{def:wdm}, \begin{align*} h_{(1)} S( h_{(2)} ) h_{(3)} &= h_{(1)} \varepsilon_s( h_{(2)} ) \\ &= h_{(1)} 1_{(1)} \varepsilon( h_{(2)} 1_{(2)} ) \\ &= h \end{align*} for any \( h \in H \). Hence the antipode of \( {\rm id}_H \) is \( S \in {\rm End}_{\mathbb{K}}( H ) \). Let us show 2.
Since \( f^+ \) is a weak bialgebra homomorphism, \begin{align*} \varepsilon_t \circ f^+( h ) &= \varepsilon_{H^{\prime}}( 1_{(1)} f^+(h) ) 1_{(2)} \\ &= \varepsilon_{H^{\prime}}( f^+( 1_{(1)} h ) ) f^+( 1_{(2)} ) \\ &= \varepsilon_H( 1_{(1)} h ) f^+( 1_{(2)} ) \\ &= f^+ \circ \varepsilon_t( h ), \\ ( f^+ \star S \circ f^+ )( h ) &= f^+(h)_{(1)} S( f^+(h)_{(2)} ) \\ &= \varepsilon_t \circ f^+( h ) \\ &= f^+ \circ \varepsilon_t( h ). \end{align*} Similarly, we can also prove that \( \varepsilon_s \circ f^+ = f^+ \circ \varepsilon_s \) and \( S \circ f^+ \star f^+ = f^+ \circ \varepsilon_s \). The identity \eqref{def:wdm} induces that \begin{align*} ( f^+ \star S \circ f^+ \star f^+ )( h ) &= f^+( h )_{(1)} \varepsilon_s ( f^+( h )_{(2)} ) \\ &= f^+( h )_{(1)} 1_{(1)} \varepsilon_{H^{\prime}}( f^+( h )_{(2)} 1_{(2)} ) \\ &= f^+( h ) \end{align*} for all \( h \in H \). The proof for \( S \circ f^+ \star f^+ \star S \circ f^+ = S \circ f^+ \) is similar. \end{proof} \begin{lemm} \label{lem:f-hom} Let \( H \) be a weak bialgebra, \( A \) a \( \mathbb{K} \)-algebra, and \( f^+ \colon H \to A \) a \( \mathbb{K} \)-algebra homomorphism. \begin{enumerate} \item If \( f^+ \) has the antipode \( f^- \), then \( f^- \colon H \to A^{op} \) is a \( \mathbb{K} \)-algebra homomorphism. \item In addition to the above situation 1, if \( A \) is a weak bialgebra and \( f^+ \) is a weak bialgebra homomorphism, then the antipode \( f^- \colon H \to A^{bop} \) is a weak bialgebra homomorphism. \end{enumerate} \end{lemm} \begin{proof} Let us first show 1. For \( g, h \in H \), \begin{align*} f^-(gh) &= f^-( g_{(1)} h_{(1)} ) f^+ \circ \varepsilon_t( g_{(2)} h_{(2)} ) \\ &\underset{\eqref{lem:est2}}= f^-( g_{(1)} h_{(1)} ) f^+ \circ \varepsilon_t( g_{(2)} \varepsilon_t( h_{(2)} ) ) \\ &\underset{\eqref{lem:esco}}= f^-( g_{(1)} h_{(1)} ) f^+ \circ \varepsilon_t( \varepsilon_t( g_{(2)} h_{(2)} ) g_{(3)} ) \\ &\underset{\eqref{lem:estm}}= f^-( g_{(1)} h_{(1)} ) f^+( \varepsilon_t( g_{(2)} h_{(2)} ) \varepsilon_t( g_{(3)} ) ) \\ &= f^-( g_{(1)} h_{(1)} ) f^+( \varepsilon_t( g_{(2)} h_{(2)} ) g_{(3)} ) f^-(g_{(4)}) \\ &\underset{\eqref{lem:esco}}= f^-( g_{(1)} h_{(1)} ) f^+( g_{(2)} \varepsilon_t( h_{(2)} ) ) f^-( g_{(3)} ) \\ &= f^-( g_{(1)} h_{(1)} ) f^+( g_{(2)} ) f^+( h_{(2)} ) f^-( h_{(3)} ) f^-( g_{(3)} ) \\ &= f^+ \circ \varepsilon_s ( g_{(1)} h_{(1)} ) f^-( h_{(2)} ) f^-( g_{(2)} ) \\ &\underset{\eqref{lem:est2}}= f^+ \circ \varepsilon_s ( \varepsilon_s( g_{(1)} ) h_{(1)} ) f^-( h_{(2)} ) f^-( g_{(2)} ) \\ &= f^-( \varepsilon_s( g_{(1)} )_{(1)} h_{(1)} ) f^+( \varepsilon_s( g_{(1)} )_{(2)} h_{(2)} ) f^-( h_{(3)} ) f^-( g_{(2)} ) \\ &\underset{\eqref{lem:coes}}= f^-( 1_{(1)} h_{(1)} ) f^+( \varepsilon_s( g_{(1)} ) 1_{(2)} h_{(2)} ) f^-( h_{(3)} ) f^-( g_{(2)} ) \\ &\underset{\eqref{def:wdm}}= f^-( h_{(1)} ) f^+( \varepsilon_s( g_{(1)} ) h_{(2)} ) f^-( h_{(3)} ) f^-( g_{(2)} ) \\ &= f^-( h_{(1)} ) f^+ \circ \varepsilon_s( g_{(1)} ) f^+ \circ \varepsilon_t( h_{(2)} ) f^-( g_{(2)} ) \\ &\underset{\eqref{lem:estcomm}}= f^-( h_{(1)} ) f^+ \circ \varepsilon_t( h_{(2)} ) f^+ \circ \varepsilon_s( g_{(1)} ) f^-( g_{(2)} ) \\ &= f^-( h ) f^-( g ). \end{align*} In addition, this \( f^- \) preserves the unit. By using \eqref{def:D13} and \eqref{lem:estu}, \begin{align*} f^-(1_H) &= f^-( 1_{(1)} ) f^+( 1_{(2)} 1^{\prime}_{(1)} ) f^-( 1^{\prime}_{(2)} ) \\ &= f^+ \circ \varepsilon_s( 1_H ) f^+ \circ \varepsilon_t( 1_H ) \\ &= 1_{A}.
\end{align*} Therefore \( f^- \colon H \to A^{op} \) is a \( \mathbb{K} \)-algebra homomorphism. We next prove 2. To this end, we assume the following lemma for the moment (cf.~\cite[Lemma B1 and B2]{nil}). \begin{lemm} \label{lem:whip} Let \( H \) and \( A \) be weak bialgebras. If a weak bialgebra homomorphism \( f^+ \colon H \to A \) has the antipode \( f^- \), the following conditions are satisfied for all \( g, h \in H \): \begin{align} &g h_{(1)} \otimes f^-( h_{(2)} ) f^+( h_{(3)} ) = g_{(1)} h_{(1)} \otimes f^-( g_{(2)} h_{(2)} ) f^+( g_{(3)} ) f^+( h_{(3)} ); \label{lem:gh-+} \\ &h_{(1)} g \otimes f^+( h_{(2)} ) f^-( h_{(3)} ) = h_{(1)} g_{(1)} \otimes f^+( h_{(2)} ) f^+( g_{(2)} ) f^-( h_{(3)} g_{(3)} ); \label{lem:hg+-} \\ &f^-( h_{(1)} ) \otimes f^-( h_{(2)} ) = f^-( h_{(1)} ) f^+( h_{(4)} ) f^-( h_{(5)} ) \otimes f^-( h_{(2)} ) f^+( h_{(3)} ) f^-( h_{(6)} ). \label{lem:145} \end{align} \end{lemm} \noindent For the comultiplicativity of \( f^- \), it is equivalent to show that \begin{equation*} f^-( h_{(2)} ) \otimes f^-( h_{(1)} ) = f^-( h )_{(1)} \otimes f^-( h )_{(2)} \end{equation*} for all \( h \in H \). We set \begin{align*} J &= ( f^-(1_{(2)}) \otimes f^-(1_{(1)}) ) \Delta( f^+ \circ \varepsilon_t( 1_{(3)} ) ), \\ \tilde{J} &= \Delta( f^+ \circ \varepsilon_s( 1_{(1)} ) )( f^-(1_{(3)}) \otimes f^-(1_{(2)}) ). \end{align*} For \( J \) and \( \tilde{J} \), see \cite[Proposition B4]{nil}. Let \( h \) be an arbitrary element in \( H \). \begin{align*} &(f^-( h_{(2)} ) \otimes f^-( h_{(1)} )) J \\ =& (f^-(1_{(2)} h_{(2)}) \otimes f^-(1_{(1)} h_{(1)})) \Delta( f^+( 1_{(3)} ) f^-( 1_{(4)} ) ) \\ \underset{\eqref{lem:hg+-}}=& (f^-(1_{(2)} h_{(2)}) \otimes f^-(1_{(1)} h_{(1)})) \Delta( f^+( 1_{(3)} h_{(3)} ) f^-( 1_{(4)} h_{(4)} ) ) \\ \underset{\eqref{def:wdm}}=& (f^-( h_{(2)} ) \otimes f^-( h_{(1)} )) \Delta( f^+( h_{(3)} ) f^-( h_{(4)} ) ) \\ =& (f^+ \circ \varepsilon_s( h_{(2)} ) \otimes f^-( h_{(1)} ) f^+( h_{(3)} )) \Delta( f^-( h_{(4)} ) ) \\ \underset{\eqref{lem:estd2}}=& (f^+ \circ \varepsilon_s( 1_{(2)} ) \otimes f^-( h_{(1)} 1_{(1)} ) f^+( h_{(2)} )) \Delta( f^-( h_{(3)} ) ) \\ =& (f^+ \circ \varepsilon_s( 1_{(2)} ) \otimes f^-( 1_{(1)} ) f^+ \circ \varepsilon_s( h_{(1)} )) \Delta( f^-( h_{(2)} ) ) \\ \underset{\eqref{lem:estd1}}=& (f^+ \circ \varepsilon_s( 1_{(2)} ) \otimes f^-( 1_{(1)} ) f^+ \circ \varepsilon_s( 1^{\prime}_{(1)} )) \Delta( f^-( h 1^{\prime}_{(2)} ) ) \\ =& ( f^-( 1_{(2)} ) f^+( 1_{(3)} ) \otimes f^-( 1_{(1)}^{\prime} 1_{(1)} ) f^+( 1_{(2)}^{\prime} ) ) \Delta( f^-( h 1^{\prime}_{(3)} ) ) \\ \underset{\eqref{lem:gh-+}}=& (f^-(1_{(2)} 1^{\prime}_{(2)}) f^+(1_{(3)} 1^{\prime}_{(3)}) \otimes f^-(1_{(1)} 1^{\prime}_{(1)}) f^+(1_{(4)})) \Delta( f^-( h 1_{(5)} ) ) \\ \underset{\eqref{def:wdm}}=& (f^-(1_{(2)}) \otimes f^-(1_{(1)})) \Delta( f^+(1_{(3)}) ) \Delta( f^-(1_{(4)}) ) \Delta( f^-( h ) ) \\ =& J ( f^-(h)_{(1)} \otimes f^-(h)_{(2)} ).
\end{align*} By using \eqref{def:D13}, \eqref{lem:estu}, and \eqref{lem:estd1}, \( J \) satisfies that \begin{align*} J &= (f^+ \circ \varepsilon_s( 1_{(2)} ) \otimes f^-( 1_{(1)} ) f^+(1_{(3)})) \Delta( f^-(1_{(4)}) ) \\ &= (f^+( 1_{(1)} ) \otimes f^-( 1^{\prime}_{(1)} ) f^+( 1^{\prime}_{(2)} 1_{(2)} )) \Delta(f^-(1^{\prime}_{(3)})) \\ &= (f^+( 1_{(1)} ) \otimes f^+ \circ \varepsilon_s( 1^{\prime}_{(1)} ) f^+( 1_{(2)} )) \Delta(f^-(1^{\prime}_{(2)})) \\ &= (f^+( 1_{(1)} ) \otimes f^+( 1^{\prime}_{(1)} 1_{(2)} )) \Delta(f^-(1^{\prime}_{(2)})) \\ &= (f^+( 1_{(1)} ) \otimes f^+( 1_{(2)} )) \Delta(f^-(1_{(3)})) \\ &= \Delta( f^+ \circ \varepsilon_t( 1_H ) ) \\ &= \Delta( 1_A ). \end{align*} Similarly, we can prove that \( \tilde{J} = \Delta( 1_A ) \). The identities \eqref{def:wdm}, \eqref{lem:hg+-}, and \eqref{lem:145} induce that \begin{align*} J \tilde{J} &= J \Delta( f^-( 1_{(1)} ) ) \Delta( f^+( 1_{(2)} ) ) ( f^-(1_{(4)}) \otimes f^-(1_{(3)}) ) \\ &= (f^-(1_{(2)}) \otimes f^-(1_{(1)})) J \Delta( f^+( 1_{(3)} ) ) ( f^-(1_{(5)}) \otimes f^-(1_{(4)}) ) \\ &= (f^-(1_{(2)} 1^{\prime}_{(2)}) \otimes f^-( 1_{(1)} 1^{\prime}_{(1)} )) \Delta( f^+(1_{(3)}) f^-(1_{(4)}) f^+(1^{\prime}_{(3)}) ) (f^-( 1^{\prime}_{(5)} ) \otimes f^-( 1^{\prime}_{(4)} )) \\ &= (f^-(1_{(2)} 1^{\prime}_{(2)}) \otimes f^-( 1_{(1)} 1^{\prime}_{(1)} )) \\ & \;\;\; \times \Delta( f^+(1_{(3)} 1^{\prime}_{(3)}) f^-(1_{(4)} 1^{\prime}_{(4)}) f^+(1^{\prime}_{(5)}) ) (f^-( 1^{\prime}_{(7)} ) \otimes f^-( 1^{\prime}_{(6)} )) \\ &= (f^-(1_{(2)}) \otimes f^-( 1_{(1)} )) \Delta( f^+(1_{(3)}) ) (f^-( 1_{(5)} ) \otimes f^-( 1_{(4)} )) \\ &= f^-( 1_{(2)} ) f^+( 1_{(3)} ) f^-( 1_{(6)} ) \otimes f^-( 1_{(1)} ) f^+( 1_{(4)} ) f^-( 1_{(5)} ) \\ &= f^-(1_{(2)}) \otimes f^-(1_{(1)}). \end{align*} We can calculate that \begin{align*} ( f^-(h)_{(1)} \otimes f^-(h)_{(2)} ) &= \Delta(1_A)( f^-(h)_{(1)} \otimes f^-(h)_{(2)} ) \\ &= J ( f^-(h)_{(1)} \otimes f^-(h)_{(2)} ) \\ &= (f^-(h_{(2)}) \otimes f^-(h_{(1)}))J \\ &= (f^-(h_{(2)}) \otimes f^-(h_{(1)})) \Delta( 1_A ) \\ &= (f^-(h_{(2)}) \otimes f^-(h_{(1)})) J \tilde{J} \\ &= (f^-(h_{(2)}) \otimes f^-(h_{(1)})) (f^-(1_{(2)}) \otimes f^-(1_{(1)})) \\ &= (f^-(h_{(2)}) \otimes f^-(h_{(1)})) \end{align*} for any \( h \in H \). Thus \( f^- \colon H \to A^{bop} \) preserves the comultiplication. By using \eqref{lem:eest2}, we can prove that \( f^- \) is counital: \begin{align*} \varepsilon_A \circ f^-( h ) &= \varepsilon_A( f^-( h_{(1)} ) f^+ \circ \varepsilon_t( h_{(2)} ) ) \\ &= \varepsilon_A( f^-( h_{(1)} ) f^+( h_{(2)} ) ) \\ &= \varepsilon_A( \varepsilon_s \circ f^+( h ) ) \\ &= \varepsilon_A \circ f^+( h ) \\ &= \varepsilon_H( h ) \end{align*} for \( h \in H \). This is the desired conclusion. \end{proof} \begin{proof}[{\bf Proof of Lemma \ref{lem:whip}}] We first prove \eqref{lem:gh-+}. For all \( g, h \in H \), \begin{align*} g h_{(1)} \otimes f^-( h_{(2)} ) f^+( h_{(3)} ) &= g h_{(1)} \otimes f^+ \circ \varepsilon_s( h_{(2)} ) \\ &= g h 1_{(1)} \otimes f^+ \circ \varepsilon_s( 1_{(2)} ) \\ &= g_{(1)} h_{(1)} \otimes f^+ \circ \varepsilon_s( g_{(2)} h_{(2)} ) \\ &= g_{(1)} h_{(1)} \otimes f^-( g_{(2)} h_{(2)} ) f^+( g_{(3)} ) f^+( h_{(3)} ). \end{align*} Here we use the identity \eqref{lem:estd2}. The proof for \eqref{lem:hg+-} is similar. Let us prove \eqref{lem:145}.
By using \eqref{lem:gh-+}, \eqref{lem:hg+-}, and Lemma \ref{lem:f-hom}-1, \begin{align*} &f^-( h_{(1)} ) \otimes f^-( h_{(2)} ) \\ =& f^-(h_{(1)}) f^-(1_{(1)}) f^+(1_{(2)}) f^-(1_{(3)}) \otimes f^-(h_{(2)}) f^+(h_{(3)}) f^-(h_{(4)}) \\ =& f^-(1_{(1)} h_{(1)}) f^+(1_{(4)}) f^-(1_{(5)}) \otimes f^-(1_{(2)} h_{(2)}) f^+(1_{(3)} h_{(3)}) f^-(h_{(4)}) \\ =& f^-(1_{(1)} h_{(1)}) f^+(1_{(4)} h_{(4)}) f^-(1_{(5)} h_{(5)}) \otimes f^-(1_{(2)} h_{(2)}) f^+(1_{(3)} h_{(3)}) f^-(h_{(6)}) \\ =& f^-( h_{(1)} ) f^+( h_{(4)} ) f^-( h_{(5)} ) \otimes f^-( h_{(2)} ) f^+( h_{(3)} ) f^-( h_{(6)} ) \end{align*} for any \( h \in H \). This completes the proof. \end{proof} The convolution product and the antipode \( f^- \) generalize the notion of the Hopf envelope in \cite{benp}. \begin{defi} \label{def:WHC} Let \( H \) be a weak bialgebra, \( \overline{H} \) a weak Hopf algebra, and \( \iota \colon H \to \overline{H} \) a weak bialgebra homomorphism. A Hopf closure of \( H \) is a pair \( (\overline{H}, \iota) \) satisfying the following universal property: \begin{itemize} \item[] For any weak bialgebra \( B \) and any weak bialgebra homomorphism \( f^+ \colon H \to B \) with the antipode \( f^- \), there exists a unique weak bialgebra homomorphism \( F \colon \overline{H} \to B \) such that the following diagram is commutative: \end{itemize} \[ \xymatrix@C=30pt@R=30pt{ H \ar[rd]_-{f^+} \ar[r]^-{\iota} & \overline{H} \ar[d]^-{F} \\ & B. \\ } \] One can check that \( \overline{H} \) is unique up to isomorphism if it exists. \end{defi} \begin{rem} \begin{enumerate} \item In \cite{benp}, the weak bialgebra \( B \) is always a weak Hopf algebra with the antipode \( S \). Thus Definition \ref{def:WHC} is a generalization of Definition 3.14 in \cite{benp} because \( S \circ f^+ \) gives an antipode of \( f^+ \in {\rm Hom}_{\mathbb{K}}(H, B) \). \item Let \( H \) be a face algebra. Hayashi \cite{hayas} considered the construction of the Hopf closure \( \overline{H} \) when \( H \) is coquasitriangular and closurable. Then this \( \overline{H} \) satisfies Definition \ref{def:WHC} with weak bialgebras replaced by face algebras; that is to say, \( f^+ \colon H \to B \) and \( F \colon \overline{H} \to B \) are face algebra homomorphisms (see \cite[Theorem 5.1, 8.2, and 8.3]{hayas}). \end{enumerate} \end{rem} Let \( \Lambda \) be a non-empty finite set and \( X \) a finite set. For a left bialgebroid \( A_{\sigma} \) in Subsection \ref{sec:As}, we suppose that the \( \mathbb{K} \)-algebra \( R \) is a Frobenius-separable \( \mathbb{K} \)-algebra with an idempotent Frobenius system \( (\psi, e^{(1)} \otimes e^{(2)}) \). This \( A_{\sigma} \) has a weak bialgebra structure by Proposition \ref{prop:LWF}. The quiver \( Q \) defined by \eqref{quilx} and the elements \( \mathbf{w}\begin{sumibmatrix} & (\lambda, a) & \\ (\mu, c) & & (\lambda^{\prime}, b) \\ & (\mu^{\prime}, d) & \end{sumibmatrix} \in R \; ( ( (\lambda, a), ( \lambda^{\prime}, b ) ), ( (\mu, c), (\mu^{\prime}, d) ) \in Q^{(2)} ) \) in \eqref{wsig} give rise to a weak bialgebra \( \mathfrak{A}(w_{\sigma}) \) and its homomorphism \( \Phi \) in Section \ref{sec:Phi}.
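Before stating the main theorem, we record an illustrative special case of the above notions (a remark added here for orientation; it is not used in the sequel). If \( H \) is an ordinary bialgebra regarded as a weak bialgebra, then \( \Delta(1_H) = 1_H \otimes 1_H \) yields \( \varepsilon_s(h) = \varepsilon_t(h) = \varepsilon(h) 1_H \), so for a \( \mathbb{K} \)-algebra homomorphism \( f^+ \colon H \to A \) the conditions on an antipode \( f^- \) read \begin{equation*} f^+ \star f^- = f^- \star f^+ = 1_{{\rm Hom}_{\mathbb{K}}(H, A)}, \qquad f^{\pm} \star f^{\mp} \star f^{\pm} = f^{\pm}, \end{equation*} where the latter conditions follow from the former; that is, \( f^- \) is precisely the two-sided convolution inverse of \( f^+ \). In this situation Definition \ref{def:WHC} reduces to the universal property of the Hopf envelope in \cite{benp}.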
\begin{theo} If \( \sigma \) is rigid, the pair \( (A_{\sigma}, \Phi) \) satisfies the following universal property: \begin{itemize} \item[] For any \( \mathbb{K} \)-algebra \( A \) and any \( \mathbb{K} \)-algebra homomorphism \( f^+ \colon \mathfrak{A}(w_{\sigma}) \to A \) with the antipode \( f^- \), there exists a unique \( \mathbb{K} \)-algebra homomorphism \( F \colon A_{\sigma} \to A \) such that the following diagram is commutative: \end{itemize} \[ \xymatrix@C=30pt@R=30pt{ \mathfrak{A}(w_{\sigma}) \ar[rd]_-{f^+} \ar[r]^-{\Phi} & A_{\sigma} \ar[d]^-{F} \\ & A. \\ } \] If this \( \mathbb{K} \)-algebra \( A \) has a weak bialgebra structure \( (A, \Delta, \varepsilon) \) and \( f^+ \) is a weak bialgebra homomorphism, then so is \( F \). \end{theo} \begin{proof} We first show the existence of the \( \mathbb{K} \)-algebra homomorphism \( F \). The \( \mathbb{K} \)-algebra homomorphism \( \overline{F} \colon \mathbb{K} \langle \Lambda X \rangle \to A \) is defined by \begin{align*} &\overline{F}(\xi) = \Upsilon(\xi) \;\; (\xi \in M_{\Lambda}(R) \otimes_{\mathbb{K}} M_{\Lambda}(R)^{op}); \\ &\overline{F}(L_{ab}) = \sum_{\lambda, \mu \in \Lambda} f^+ ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \;\; (a,b \in X); \\ &\overline{F}((L^{-1})_{ab}) = \sum_{\lambda, \mu \in \Lambda} f^- ( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ). \end{align*} Here \( \Upsilon \) is a \( \mathbb{K} \)-algebra homomorphism defined by \begin{equation*} \Upsilon \colon M_{\Lambda}(R) \otimes_{\mathbb{K}} M_{\Lambda}(R)^{op} \ni g \otimes h \mapsto \sum_{\lambda, \mu \in \Lambda} f^+( g(\lambda) \otimes h(\mu) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ) \in A. \end{equation*} We prove that \( \overline{F}( I_{\sigma} ) = \{ 0 \} \). It suffices to check that \begin{equation} \label{F0} \overline{F}( \alpha ) = 0 \end{equation} for every generator \( \alpha \) in \( I_{\sigma} \). For any generator \( \alpha \) in (1), the condition \eqref{F0} is obviously satisfied since the map \( \Upsilon \) is a \( \mathbb{K} \)-algebra homomorphism. We next prove that the generators (2) satisfy \eqref{F0}.
By using \eqref{lem:estu}, \begin{align*} &\overline{F}( \sum_{c \in X} (L^{-1})_{ac} L_{cb} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau, \nu \in \Lambda}} f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, c)} + \mathfrak{I}_{\mathbf{w}} ) f^+ \circ \varepsilon_s( 1_{\mathfrak{A}(w_{\sigma})} ) \\ & \times f^+( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, c)}{(\nu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau \in \Lambda}} f^-( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, c)} + \mathfrak{I}_{\mathbf{w}} ) f^+( e^{(2)} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\mu, c)}{(\tau, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\lambda, \mu \in \Lambda} f^+ \circ \varepsilon_s( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \delta_{a, b} \sum_{\lambda, \mu, \tau \in \Lambda} \delta_{\lambda, \tau \deg(b)^{-1}} f^+( 1_R \otimes e^{(1)} \psi(e^{(2)}) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda \deg( a )} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \delta_{a, b} \sum_{\lambda, \mu \in \Lambda} f^+( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu \deg(b)^{-1} \deg( a )} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \overline{F}( \delta_{a, b} \emptyset ) \end{align*} for all \( a, b \in X \). Therefore \( \overline{F}( \sum_{c \in X} (L^{-1})_{ac} L_{cb} - \delta_{a, b} \emptyset ) = 0 \) is satisfied for any \( a, b \in X \). We can prove that \( \overline{F}( \sum_{c \in X} L_{ac} (L^{-1})_{cb} - \delta_{a, b} \emptyset ) = 0 \) for all \( a, b \in X \) in a similar way. Let us check that every generator \( \alpha \) in (3) satisfies \eqref{F0}. For \( g \in M_{\Lambda}( R ) \), \( a \), and \( b \in X \), \begin{align*} & \overline{F}( ( T_{\deg(a)}( g ) \otimes 1_{M_{\Lambda}(R)} ) L_{ab} - L_{ab} (g \otimes 1_{M_{\Lambda}(R)}) ) \\ =& \sum_{\lambda, \mu, \tau, \nu \in \Lambda} f^+( (g( \lambda \deg(a) ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu})(1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, a)}{(\nu, b)}) + \mathfrak{I}_{\mathbf{w}} ) \\ -& \sum_{\gamma, \eta, \theta, \kappa \in \Lambda} f^+( (1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\gamma, a)}{(\eta, b)})(g( \theta ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\theta}{\kappa}) + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\lambda, \mu \in \Lambda} f^+( g( \lambda \deg(a) ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ -& \sum_{\eta, \theta \in \Lambda} f^+( g( \eta \deg(a) ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\eta, a)}{(\theta, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& 0. \end{align*} The proof of \( \overline{F}( ( 1_{M_{\Lambda}(R)} \otimes T_{\deg(b)}(g) ) L_{ab} - L_{ab} ( 1_{M_{\Lambda}(R)} \otimes g ) ) = 0 \; (\forall a, b \in X) \) is similar. To treat the other two generators in (3), we assume the lemma below for the moment.
\begin{lemm} \label{lem:estA} For any \( r \in R \) and \( \lambda \in \Lambda \), \begin{align} \sum_{\mu \in \Lambda} \varepsilon_s( 1_R \otimes r \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}} ) = \sum_{\mu \in \Lambda} 1_R \otimes r \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}}; \label{lem:es} \\ \sum_{\mu \in \Lambda} \varepsilon_t( r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ) = \sum_{\mu \in \Lambda} r \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}}. \label{lem:et} \end{align} \end{lemm} \noindent Let \( g \in M_{\Lambda}(R) \), \( a \), and \( b \in X \). By using \eqref{lem:estu}, \eqref{lem:estcomm}, and \eqref{lem:et}, \begin{align*} &\overline{F}( ( g \otimes 1_{M_{\Lambda}(R)} ) (L^{-1})_{ab} ) \\ =& \sum_{\lambda, \mu, \tau, \nu \in \Lambda} f^+( g(\lambda) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ) f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, a)}{(\nu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau, \nu, \gamma \in \Lambda}} f^+ \circ \varepsilon_t( g(\lambda) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ) f^+ \circ \varepsilon_s( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, a)}{(\gamma, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ & \times f^-( e^{(2)} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\gamma, c)}{(\nu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c, d \in X \\ \lambda, \mu, \tau, \nu \in \Lambda}} f^-( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ & \times f^+( e^{(2)} g( \mu \deg(c) ) \otimes e^{(1)^{\prime \prime}} e^{(1)^{\prime}} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\mu, c)}{(\tau, d)} + \mathfrak{I}_{\mathbf{w}} ) \\ & \times f^-( e^{(2)^{\prime}} e^{(2)^{\prime \prime}} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, d)}{(\nu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau \in \Lambda}} f^-( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ & \times f^+ \circ \varepsilon_t( e^{(2)} g( \mu \deg(c) ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\mu, c)}{(\tau, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau, \nu \in \Lambda}} \delta_{\mu, \tau} \delta_{b, c} f^-( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ & \times f^+( \psi( e^{(2)} g( \mu \deg(c) ) e^{(1)^{\prime}} ) e^{(2)^{\prime}} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\nu} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\lambda, \mu, \tau \in \Lambda} f^-( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) f^+( e^{(2)} g( \mu \deg(b) ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\tau} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\lambda, \mu, \tau, \nu \in \Lambda} f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) f^+ \circ \varepsilon_s( 1_{\mathfrak{A}(w_{\sigma})} ) \\ & \times f^+( g( \tau \deg(b) ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\nu} + 
\mathfrak{I}_{\mathbf{w}} ) \\ =& \overline{F}( (L^{-1})_{ab} ( T_{\deg(b)}( g ) \otimes 1_{M_{\Lambda}(R)} ) ). \end{align*} The identity \( \overline{F}( (1_{M_{\Lambda}(R)}\otimes g)(L^{-1})_{ab} - (L^{-1})_{ab}(1_{M_{\Lambda}(R)}\otimes T_{\deg (a)}(g)) ) = 0 \; (\forall a, b \in X) \) is also obtained by using \eqref{lem:estu}, \eqref{lem:estcomm}, and \eqref{lem:es}. We give a proof of \eqref{F0} for any generator \( \alpha \) in (4). For all \( a \), \( b \), \( c \), and \( d \in X \), \begin{align*} &\overline{F}( \sum_{x,y\in X} (\sigma^{xy}_{ac}\otimes1_{M_{\Lambda}(R)})L_{yd}L_{xb} - \sum_{x,y\in X} (1_{M_{\Lambda}(R)}\otimes\sigma^{bd}_{xy})L_{cy}L_{ax} ) \\ =& \sum_{\substack{\lambda, \mu \in \Lambda \\ x, y \in X}} f^+( \sigma^{xy}_{ac}( \lambda ) \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{((\lambda, y), (\lambda \deg(y), x))}{((\mu, d), (\mu \deg(d), b))} + \mathfrak{I}_{\mathbf{w}} ) \\ -& \sum_{\substack{\tau, \nu \in \Lambda \\ x, y \in X}} f^+( 1_R \otimes \sigma^{bd}_{xy}( \nu ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{((\tau, c), (\tau \deg(c), a))}{((\nu, y), (\nu \deg(y), x))} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{\lambda, \mu, \eta \in \Lambda \\ x, y \in X}} f^+( \mathbf{w}\begin{sumibmatrix} & (\lambda, y) & \\ (\eta, c) & & (\lambda \deg(y), x) \\ & (\eta \deg(c), a) & \end{sumibmatrix} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{((\lambda, y), (\lambda \deg(y), x))}{((\mu, d), (\mu \deg(d), b))} + \mathfrak{I}_{\mathbf{w}} ) \\ -& \sum_{\substack{\tau, \nu, \theta \in \Lambda \\ x, y \in X}} f^+( 1_R \otimes \mathbf{w}\begin{sumibmatrix} & (\theta, d) & \\ (\nu, y) & & (\theta \deg(d), b) \\ & (\nu \deg(y), x) & \end{sumibmatrix} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{((\tau, c), (\tau \deg(c), a))}{((\nu, y), (\nu \deg(y), x))} + \mathfrak{I}_{\mathbf{w}} ) \\ =& 0. \end{align*} Here we use the setting \eqref{wsig} to show the second equality. We can easily check that the generator (5) satisfies \eqref{F0} because \( 1_{\mathfrak{A}(w_{\sigma})} = \displaystyle\sum_{\lambda, \mu \in \Lambda} 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} \). Hence the \( \mathbb{K} \)-algebra homomorphism \( F( \alpha + I_{\sigma} ) = \overline{F}( \alpha ) \; ( \alpha \in \mathbb{K} \langle \Lambda X \rangle ) \) is well-defined. We next show that \( f^+ = F \circ \Phi \). Since these three maps \( f^+ \), \( F \) and \( \Phi \) are \( \mathbb{K} \)-algebra homomorphisms, it is sufficient to prove that \begin{equation*} f^+( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) = F \circ \Phi( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \end{equation*} for all \( r, r^{\prime} \in R \), \( (\lambda, a) \), and \( (\mu, b) \in Q \). We can evaluate that \begin{align*} &F \circ \Phi( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& f^+( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ) ( \sum_{\tau, \nu \in \Lambda} f^+( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, a)}{(\nu, b)} + \mathfrak{I}_{\mathbf{w}} ) ) \\ =& f^+( r \otimes r^{\prime} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ). \end{align*} We give a proof of the uniqueness of \( F \).
Let \( F^{\prime} \) be a \( \mathbb{K} \)-algebra homomorphism such that \( f^+ = F^{\prime} \circ \Phi \). We see at once that \( F^{\prime}( g \otimes h + I_{\sigma} ) = F( g \otimes h + I_{\sigma} ) \) and \( F^{\prime}( L_{ab} + I_{\sigma} ) = F( L_{ab} + I_{\sigma} ) \) for all \( g, h \in M_{\Lambda}(R) \), \( a \), and \( b \in X \). We denote by \( S^{{\rm WHA}} \) the antipode of the weak Hopf algebra \( A_{\sigma} \). According to Proposition \ref{prop:WHD}, this \( S^{{\rm WHA}} \) satisfies that \( S^{{\rm WHA}}( L_{ab} + I_{\sigma} ) = (L^{-1})_{ab} + I_{\sigma} \) for any \( a, b \in X \). Therefore we compute that \begin{align*} F^{\prime}( (L^{-1})_{ab} + I_{\sigma} ) &= F^{\prime} \circ S^{{\rm WHA}}( L_{ab} + I_{\sigma} ) \\ &= \sum_{\lambda, \mu \in \Lambda} F^{\prime} \circ S^{{\rm WHA}} \circ \Phi( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ). \end{align*} Let us prove that the map \( \tilde{f} := F^{\prime} \circ S^{{\rm WHA}} \circ \Phi \) is the antipode of \( f^+ \). For all \( \alpha \in \mathfrak{A}(w_{\sigma}) \), \begin{align*} (\tilde{f} \star f^+)( \alpha ) &= F^{\prime}( S^{{\rm WHA}} ( \Phi( \alpha_{(1)} ) ) \Phi( \alpha_{(2)} ) ) \\ &= F^{\prime}( S^{{\rm WHA}} ( \Phi( \alpha )_{(1)} ) \Phi( \alpha )_{(2)} ) \\ &= F^{\prime}( \varepsilon_s \circ \Phi( \alpha ) ) \\ &= F^{\prime} \circ \Phi \circ \varepsilon_s( \alpha ) \\ &= f^+ \circ \varepsilon_s( \alpha ). \end{align*} The proof for \( f^+ \star \tilde{f} = f^+ \circ \varepsilon_t \) and \( \tilde{f} \star f^+ \star \tilde{f} = \tilde{f} \) is similar. Thus \( \tilde{f} \) is the antipode of \( f^+ \). We deduce that \( \tilde{f} = f^- \) from the uniqueness of the antipode. Hence \( F^{\prime} = F \) is satisfied. Finally, we show that \( F \) is a weak bialgebra homomorphism if \( A \) is a weak bialgebra and \( f^{+} \) is a weak bialgebra homomorphism. Let us prove that \( F \) is comultiplicative. Since \( \Delta_{A_{\sigma}} \) and \( \Delta_{A} \) satisfy \eqref{def:wdm}, it suffices to check that \( (F \otimes F) \circ \Delta_{A_{\sigma}}( \alpha + I_{\sigma} ) = \Delta_A \circ F( \alpha + I_{\sigma} ) \). Here, \begin{equation*} \alpha = \begin{cases} g \otimes h \;\; (\forall g, h \in M_{\Lambda}(R)); \\ L_{ab}; \\ (L^{-1})_{ab} \;\; (\forall a, b \in X). \end{cases} \end{equation*} If \( \alpha = g \otimes h \; (\forall g, h \in M_{\Lambda}(R)) \), \begin{align*} &(F \otimes F) \circ \Delta_{A_{\sigma}}( g \otimes h + I_{\sigma} ) \\ =& \sum_{\lambda, \mu, \tau \in \Lambda} f^+( g( \lambda ) \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\tau} + \mathfrak{I}_{\mathbf{w}} ) \otimes f^+( e^{(2)} \otimes h( \mu ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\tau}{\mu} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\lambda, \mu \in \Lambda} \Delta_A \circ f^+ ( g( \lambda ) \otimes h( \mu ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\lambda}{\mu} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \Delta_A \circ F( g \otimes h + I_{\sigma} ). \end{align*} The proof for \( \alpha = L_{ab} \; (\forall a, b \in X) \) is similar. Let us suppose that \( \alpha = (L^{-1})_{ab} \) for \( a, b \in X \).
Since \( f^- \colon \mathfrak{A}(w_{\sigma}) \to A^{bop} \) is a weak bialgebra homomorphism, we deduce that \begin{align*} &(F \otimes F) \circ \Delta_{A_{\sigma}}( (L^{-1})_{ab} + I_{\sigma} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau, \nu \in \Lambda}} \Delta_{A}( 1_A ) \\ &\times f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, c)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \otimes f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, a)}{(\nu, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau, \nu \in \Lambda}} (f^- \otimes f^-) \circ \Delta_{\mathfrak{A}(w_{\sigma})}^{op}( 1_{\mathfrak{A}(w_{\sigma})} ) \\ & \times f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, c)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \otimes f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, a)}{(\nu, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\substack{c \in X \\ \lambda, \mu, \tau \in \Lambda}} f^-( e^{(2)} \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\tau, c)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \otimes f^-( 1_R \otimes e^{(1)} \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\tau, c)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \sum_{\lambda, \mu \in \Lambda} \Delta_A \circ f^-( 1_R \otimes 1_R \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ =& \Delta_A \circ F( (L^{-1})_{ab} + I_{\sigma} ). \end{align*} Hence the map \( F \) is comultiplicative. We prove that \( F \) preserves the counit. Since the counit satisfies \eqref{def:cum}, it suffices to show that \begin{align} \varepsilon_A \circ F( (g \otimes h) L_{ab} + I_{\sigma} ) = \varepsilon_{A_{\sigma}}( (g \otimes h) L_{ab} + I_{\sigma} ); \label{ug+} \\ \varepsilon_A \circ F( (g \otimes h) (L^{-1})_{ab} + I_{\sigma} ) = \varepsilon_{A_{\sigma}}( (g \otimes h) (L^{-1})_{ab} + I_{\sigma} ) \label{ug-} \end{align} for any \( g, h \in M_{\Lambda}(R) \), \( a \), and \( b \in X \). For (\ref{ug+}), we can evaluate that \begin{align*} \varepsilon_A \circ F( (g \otimes h) L_{ab} + I_{\sigma} ) &= \sum_{\lambda, \mu \in \Lambda} \varepsilon_{\mathfrak{A}(w_{\sigma})}( g( \lambda ) \otimes h( \mu ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{(\lambda, a)}{(\mu, b)} + \mathfrak{I}_{\mathbf{w}} ) \\ &= \sum_{\lambda, \mu \in \Lambda} \delta_{\lambda, \mu} \delta_{a, b} \phi( g( \lambda ) h( \mu ) ) \\ &= \delta_{a, b} \sum_{\lambda \in \Lambda} \phi( (gh)( \lambda ) ) \\ &= \varepsilon_{A_{\sigma}}( (g \otimes h) L_{ab} + I_{\sigma} ). \end{align*} We can prove \eqref{ug-} in a similar way. This completes the proof. \end{proof} \begin{proof}[{\bf Proof of Lemma \ref{lem:estA}}] We give the proof only for \eqref{lem:es}. For any \( r \in R \) and \( \lambda \in \Lambda \), \begin{align*} \sum_{\mu \in \Lambda} \varepsilon_s( 1_R \otimes r \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}} ) &= \sum_{\mu, \tau \in \Lambda} \delta_{\lambda, \tau} 1_R \otimes e^{(1)} \psi( e^{(2)} r ) \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\tau} + \mathfrak{I}_{\mathbf{w}} \\ &= \sum_{\mu \in \Lambda} 1_R \otimes r \otimes \mathbf{e}\genfrac{[}{]}{0pt}{}{\mu}{\lambda} + \mathfrak{I}_{\mathbf{w}}. \end{align*} This is the desired conclusion. \end{proof} \begin{cor} If \( \sigma \) is rigid, then the pair \( (A_{\sigma}, \Phi) \) is the Hopf closure of \( \mathfrak{A}(w_{\sigma}) \). \end{cor} \begin{ack*} The author is deeply grateful to Professor Youichi Shibukawa for his helpful advice. \end{ack*}
\section*{Introduction} Application of the unitarization procedure is an economical and convenient way to construct a true scattering amplitude obeying unitarity, in particular, when an input amplitude is not constrained by unitarity. There are several different unitarization mechanisms generating the required final output (cf. \cite{glushko} for a systematic comparison and references therein). The need for unitarization became evident when the rise of the total cross--sections was discovered. To fit the Regge model to the experimental data one should introduce a Pomeron pole contribution having intercept $\alpha(0)$ greater than unity. Such a contribution would eventually violate unitarity and therefore requires unitarization. The input amplitude of the Regge model with linear trajectory $\sim (s/s_0)^{\alpha(t)}$, however, includes diffraction cone shrinkage ab initio, i.e. the slope parameter $B(s)$ increases with energy logarithmically, $B(s)\sim \alpha'(0)\ln (s/s_0)$, $\alpha'(0)\neq 0$, while unitarity requires its double logarithmic asymptotic growth, $B(s)\sim \ln^2(s/s_0)$, if the total cross-section saturates the Froissart--Martin bound. To reconcile the speed of shrinkage with unitarity requirements and with asymptotic saturation of this bound, one can treat the slope of the Pomeron trajectory $\alpha'(0)$ as an energy--dependent effective function in the course of a phenomenological analysis (cf. \cite{rysk}). An increase of the slope $B(s)$ and its speeding up with energy are then due to both the unitarization procedure and the Regge parameterization of the input amplitude; since the input amplitude itself implies growth of $B(s)$ with energy, it is difficult to disentangle these two sources of the energy dependence. The importance of such disentanglement for the studies of the hadron interaction dynamics and diffraction processes in the soft region is evident. Both the $s$- and $t$- dependencies of the slope of the diffraction cone $B(s,t)$ are under active discussion nowadays in connection with the new elastic scattering measurements performed at the LHC \cite{totem}, which have posed new issues addressed in a number of papers, e.g. \cite{anal,epja,jenk}. This note is devoted to a discussion of an alternative interpretation of the origin of the $B(s)$ growth, namely as a unitarity effect alone. No doubt, this is a model-dependent result and it correlates with the form of the input amplitude used in the unitarization. However, the set of such models is rather wide and includes all the models assuming a factorized $s$- and $t$- dependence of the input amplitude. It includes the geometrical models operating with the amplitudes in the impact parameter representation (cf. \cite{heny} for the definition). In these models an input amplitude is taken as an overlap of matter distributions $D_1 \otimes D_2$ of the colliding hadrons, following the pioneering paper by Chou and Yang \cite{chy}. It should be noted that the aforementioned factorization also results from the tower diagram calculations in electrodynamics \cite{cheng}.
\section{Unitarization of a factorized input} In our qualitative consideration we suppose that the real part of the elastic scattering amplitude is vanishingly small and can be neglected\footnote{However, it should be noted that such an assumption is not quite correct in view of dispersion relations; it can be corrected by restoring the real part of the scattering amplitude along the lines described in \cite{anal}.} since the high energy experimental data are in favor of a pure imaginary amplitude. We discuss the slope of the diffraction cone \begin{equation}\label{bs0} B(s)=\frac{d}{dt}\ln \frac{d\sigma}{dt}\Big|_{t=0}. \end{equation} It is determined by the mean value of the impact parameter squared $b^2$, \begin{equation}\label{aver} \langle b^2\rangle=\frac{\int_0^\infty b^3\,db\, f(s,b)}{\int_0^\infty b\,db\, f(s,b)}. \end{equation} The studies of geometrical properties of hadron interactions are important \cite{blg} for the understanding of hadron dynamics, ultimately related to the development of QCD in its nonperturbative sector. Unitarity allows variation of the impact parameter dependent amplitude $f(s,b)$ in the region $0\leq f \leq 1$, while the assumption of the absorptive scattering mode reduces the region of elastic scattering amplitude variation to the interval $0\leq f \leq 1/2$. The value of $f=1/2$, corresponding to the complete absorption of the initial state, means that the elastic scattering matrix element is zero, $S=0$ ($S=1-2f$). Unitarization schemes can provide an output amplitude $f$ limited by unitarity itself ($U$--matrix) or by the black disc limiting value of $1/2$ (eikonal, method of continued unitarity) \cite{glushko}. The mechanisms generating the increase of the diffraction cone slope with energy are similar for these unitarization schemes. Due to this similarity, we consider a particular one of the above schemes, namely the $U$--matrix \cite{uma}. In the $U$--matrix approach (in the pure imaginary case) the relation between the scattering amplitude and the input quantity $u$ is simple: \begin{equation}\label{um} f(s,b)=u(s,b)/[1+u(s,b)], \end{equation} where $u$ is non-negative. The geometrical models assume that $u(s,b)$ has a factorized form \begin{equation}\label{usb} u(s,b)=g(s)\omega(b), \end{equation} where $g(s)\sim s^\lambda$; the power dependence guarantees asymptotic growth of the total cross--section $\sigma_{tot}\sim \ln^2 s$. This factorized form, Eq. (\ref{um}), and the particular form of the function $\omega(b)$, chosen in accordance with the analytical properties of the scattering amplitude, also lead to the behavior \begin{equation}\label{bs} B(s)\sim \ln^2 s. \end{equation} As was noted in the Introduction, a simple way to construct the function $\omega(b)$ is to represent it as a convolution of the matter distributions in the transverse plane, as proposed by Chou and Yang \cite{chy}: \begin{equation} \omega (b)\sim D_1\otimes D_2\equiv \int D_1({\bf b}_1)D_2({\bf b}-{\bf b}_1)\,d^2b_1. \end{equation} This function can also be constructed taking into account the hadron quark structure \cite{chiral}. The form of the function $\omega (b)$ consistent with analyticity is a linear exponential at large values of $b$, and the following form can be adopted for simplicity: \begin{equation}\label{omb} \omega (b)\sim \exp{(-\mu b)}. \end{equation} The energy independent parameter $\mu$ is related to the particular chosen physics model and the hadron structure; it can be assumed that $\mu=2 m_\pi$.
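To make the mechanism behind Eq. (\ref{bs}) explicit, one may sketch the following simple estimate; it assumes the forms (\ref{um})--(\ref{omb}) with $g(s)\sim s^\lambda$ and uses the standard relation $B(s)=\langle b^2\rangle/2$ for a pure imaginary amplitude. With $u(s,b)=g(s)\exp(-\mu b)$ one can write \begin{equation*} f(s,b)=\frac{1}{1+e^{\,\mu(b-R(s))}},\qquad R(s)=\frac{1}{\mu}\ln g(s)\simeq\frac{\lambda}{\mu}\ln s, \end{equation*} i.e. $f(s,b)$ has a Fermi-like profile: $f\simeq 1$ for $b\ll R(s)$ and $f$ decreases exponentially for $b>R(s)$. Then $\langle b^2\rangle\simeq R^2(s)/2$ at $s\to\infty$, so that \begin{equation*} B(s)\simeq\frac{R^2(s)}{4}=\Bigl(\frac{\lambda}{2\mu}\Bigr)^2\ln^2 s, \end{equation*} in accordance with Eq. (\ref{bs}).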
A general feature is that the diffraction cone slope $B^0$ corresponding to such a factorized input amplitude does not depend on the collision energy. It is determined by the geometrical radii of the colliding particles. The geometrical radius of a particle is determined by the minimal mass of the quanta whose exchange is responsible for the scattering \cite{yuk}. The energy dependence of the final slope $B(s)$ is generated by the unitarization itself. A simple physical interpretation based on the analogy with bremsstrahlung can be found in \cite{solo}. It should be noted that the energy dependence of $B(s)$ appears at any energy value, and it is $\sim \ln^2 s$ at asymptotics. At moderately small energies, where $g(s)$ is small, it becomes (cf. Eqs. (\ref{um}), (\ref{usb}), (\ref{omb})): \begin{equation}\label{bsmall} B(s)\sim \frac{6}{\mu^2}\left(1+\frac{3}{16}g(s)\right). \end{equation} \section*{Conclusion} The cross-sections and the slope of the diffraction cone are important global features of hadron interaction dynamics, and it is essential that their measurements can be directly performed experimentally. It is well known that unitarity imposes constraints on the scattering amplitude of on-mass-shell particles only. The factorization of the input amplitude for the fast particles can be interpreted as a manifestation of the independence of transverse and longitudinal dynamics in the first approximation. Their interrelation arises as a consequence of the unitarization. The generation of the $B(s)$ energy growth can then be attributed to unitarity alone, namely, the unitarization transforms an energy independent slope into one increasing like $\ln^2 s$ at $s\to\infty$. This procedure leads, in particular, to a slowing down of the asymptotic increase of the total cross--section: instead of violating the Froissart--Martin bound by the power--like energy dependence $s^\lambda$, $\lambda>0$, unitarization turns it into the correct $\ln^2 s$ behavior. For the off--shell particles similar unitarity constraints are not applicable without extra assumptions \cite{yndur}. Indeed, there is no Froissart--Martin bound in the case of off--shell particles, and unitarity does not rule out an asymptotic power-like behavior of the total cross-sections in this case. If unitarity generates the energy dependence of the diffraction cone slope parameter, one could expect energy independence of this parameter, for example, in the case of off--shell particle scattering. In contrast, the dominance of the Regge mechanism and a Pomeron contribution with $\alpha'(0)\neq 0$ assumes ab initio a similarity of the on--shell and off-shell scattering processes. In addition, the structure of the angular distributions of the off-mass shell particle scattering amplitude might be rather different compared to the angular structure of the amplitude for real particle scattering \cite{epjc}. The studies of off-mass shell particle scattering in sophisticated exclusive experiments with three particles in the final state would definitely be helpful for the investigation of non--perturbative QCD through soft hadron interactions.
\section{Introduction} Consider the two graphs shown in Figure \ref{fig:peterporcu}. One is the well-known Petersen graph, which we denote by $\Pi$, and the other is a graph which is not so well-known, which we sometimes refer to as Petersen's cousin, for reasons which will soon become apparent, and which we denote by $\Lambda$. What relationship could there be between them? Consider the set of neighbourhoods of the vertices in the two graphs. These are: \begin{figure}[h] \begin{centering} \includegraphics[width=9.3cm,height=4.25cm]{peterporcu.eps} \caption{The Petersen graph $\Pi$ and its less well-known cousin $\Lambda$.} \label{fig:peterporcu} \end{centering} \end{figure} \bigskip\noindent {Neighbourhoods of $\Pi$: \{2,5,6\},\ \{1,3,7\},\ \{2,4,8\},\ \{3,5,9\},\ \{1,4,10\},\ \{1,8,9\},\ \{2,9,10\},\ \{3,6,10\},\ \{4,6,7\},\ \{5,7,8\}. }\\ {Neighbourhoods of $\Lambda$: \{4,6,7\},\ \{3,5,9\},\ \{2,4,8\},\ \{1,3,7\},\ \{2,9,10\},\ \{1,8,9\},\ \{1,4,10\},\ \{3,6,10\},\ \{2,5,6\},\ \{5,7,8\}. }\\ \bigskip Up to a re-ordering, both graphs have the same family of neighbourhoods. It is therefore clear that if one were given just the family of neighbourhoods of the Petersen graph one would not be able to determine that the graph they came from was the Petersen graph---it could have been the second graph, which also has the same neighbourhoods. \\ \bigskip In the literature the following problem (the Neighbourhood Reconstruction Problem) has been proposed (for example, in \cite{Aigner1} and \cite{Aigner2}): given the neighbourhoods of the vertices of $G$, can $G$ be determined uniquely up to isomorphism? The two graphs above clearly show that the answer to this question is ``no'' in general. The Petersen graph is not reconstructible this way because the second graph shown in the figure is a reconstruction of the Petersen graph which is not isomorphic to it. Why does this happen? We shall explain this below. How many other reconstructions of the Petersen graph can be obtained this way? We shall see later on that this second graph is the only such reconstruction of the Petersen graph. \\ There are a few other problems which have been considered in the graph theory literature which, as we shall see, are closely related to the neighbourhood reconstruction problem.\\ \begin{enumerate} \item {\bf The Realisability Problem.} When is a given family of sets of vertices the neighbourhood family of a graph or a digraph? What is the computational complexity of determining whether such a given family is the neighbourhood family of a graph or a digraph?\\ \item {\bf The Matrix Symmetrization Problem.} Given a $(0,1)$-matrix $A$, is it possible to change it into a symmetric matrix using (independent) row and column permutations? Although it is not immediately obvious, we shall see that this problem is related to the Realisability Problem. This problem was first studied in the paper \cite{Scapsalvi1}, starting with a matrix $A$ which is already symmetric.\\ \item {\bf Stability.} This problem was first raised and studied in \cite{Scapsalvi1}. Given the categorical product $G \times K_2$ of a graph or digraph $G$ with the complete graph $K_2$, the graph $G$ is said to be \emph{unstable} when the automorphism group of the product is not isomorphic to Aut$(G) \times \mathbb{Z}_2$. When is a graph unstable? This question was heavily studied in \cite{Scapsalvi2,Surowski1,Surowski2,Wilson01}, and again, although it is not immediately clear why, it is strongly related to the previous questions.
\\ \end{enumerate} \smallskip The excellent survey paper \cite{Gurvich1} gives a good historical picture of work done on these problems.\\ \bigskip\noindent In this paper we shall present a new type of isomorphism between graphs and digraphs which, we believe, has independent interest but also unifies the above problems, as we shall demonstrate along the way while presenting our results.\\ \section{Notation} A \textit{mixed graph} is a pair $G=(V(G),A(G))$ where $V(G)$ is a set and $A(G)$ is a set of ordered pairs of elements of $V(G)$. The elements of $V(G)$ are called \textit{vertices} and the elements of $A(G)$ are called \textit{arcs}. When referring to an arc $(u,v)$, we say that $u$ is \textit{adjacent to} $v$ and $v$ is \textit{adjacent from} $u$. The vertex $u$ is the \textit{tail} and $v$ is the \textit{head} of a given arc $(u,v)$. An arc of the form $(u,u)$ is called a \textit{loop}. A mixed graph cannot contain multiple arcs, that is, it cannot contain the arc $(u,v)$ more than once. A set $S$ of arcs is \textit{self-paired} if, whenever $(u,v) \in S$, $(v,u)$ is also in $S$. If $S=\{(u,v), (v,u)\}$, then we identify $S$ with the unordered pair $\{u,v\}$; this unordered pair is called an \textit{edge}.\\ It is useful to consider two special cases of mixed graphs. A \textit{graph} is a mixed graph without loops whose arc-set is self-paired. The edge set of a graph is denoted by $E(G)$. A \textit{digraph} is a mixed graph with no loops in which no set of arcs is self-paired. The \textit{inverse} $G'$ of a mixed graph $G$ is obtained from $G$ by reversing all its arcs, that is, $V(G')=V(G)$ and $(v,u)$ is an arc of $G'$ if and only if $(u,v)$ is an arc of $G$. A digraph $G$ may therefore be characterised as a mixed graph for which $A(G)$ and $A(G')$ are disjoint, and a graph as one for which $A(G)=A(G')$. The underlying graph $\widehat{G}$ of a mixed graph $G$ is a graph with the vertex set $V(\widehat{G})=V(G)$ and the edge set $E(\widehat{G})$ defined by $\{x,y\} \in E(\widehat{G})$ if and only if either $(x,y)$ or $(y,x)$ is an element of $A(G)$. Two arcs are \emph{incident} in $G$ if the corresponding edges in the underlying graph $\widehat{G}$ have a common vertex. When we say that a \emph{mixed graph is connected}, we mean that the underlying graph is connected.\\ Given a mixed graph $G$ and a vertex $v \in V(G)$, we define the \textit{in-neighbourhood} $N_{in}(v)$ by $N_{in}(v) = \{x \in V(G) : (x,v) \in A(G)\}$. Similarly we define the \textit{out-neighbourhood} $N_{out}(v)$ by $N_{out}(v) = \{x \in V(G) : (v,x) \in A(G)\}$. The \textit{in-degree} $\rho_{in}(v)$ of a vertex $v$ is defined by $\rho_{in}(v) = |N_{in}(v)|$ and the \textit{out-degree} $\rho_{out}(v)$ of a vertex $v$ is defined by $\rho_{out}(v) = |N_{out}(v)|$. When $G$ is a graph, these notions reduce to the usual neighbourhood $N(v)=N_{in}(v)=N_{out}(v)$ and degree $\rho(v)=\rho_{in}(v)=\rho_{out}(v)$. A vertex $v$ is called a \textit{source} if $\rho_{in}(v)= 0$ and a \textit{sink} if $\rho_{out}(v)=0$. A vertex is said to be \textit{isolated} when it is both a source and a sink, that is, it is not adjacent to or from any vertex.\\ A mixed graph $G$ is called \textit{bipartite} if there is a partition of $V(G)$ into two sets $X$ and $Y$, which we call \textit{colour classes}, such that for each arc $(u,v)$ of $G$ the set $\{u,v\}$ intersects both $X$ and $Y$.
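These definitions translate directly into a small data structure. The following Python sketch (ours, purely illustrative and not part of the paper's formalism) stores a mixed graph as a set of arcs and recovers the in- and out-neighbourhoods and degrees defined above:
\begin{verbatim}
from collections import defaultdict

class MixedGraph:
    # A mixed graph as a set of arcs (ordered pairs of vertices); an edge
    # {u, v} is stored as the self-paired pair of arcs (u, v) and (v, u).
    def __init__(self, arcs):
        self.arcs = set(arcs)
        self.n_in, self.n_out = defaultdict(set), defaultdict(set)
        for u, v in self.arcs:
            self.n_out[u].add(v)    # v is adjacent from u
            self.n_in[v].add(u)     # u is adjacent to v

    def rho_in(self, v):            # in-degree
        return len(self.n_in[v])

    def rho_out(self, v):           # out-degree
        return len(self.n_out[v])

# The edge {1, 2} together with the single arc (2, 3):
G = MixedGraph([(1, 2), (2, 1), (2, 3)])
print(G.n_out[2], G.n_in[2])        # {1, 3} and {1}
print(G.rho_out(3), G.rho_in(3))    # 0 and 1: vertex 3 is a sink
\end{verbatim}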
We call a bipartite digraph having one colour class consisting of sources and the other colour class consisting of sinks a \emph{strongly bipartite digraph}. \\ Let $G$ be a digraph and let $(u,v)$ be an arc of $G$. If in $G-(u,v)$ the vertices $u$, $v$ are either both sources or both sinks, then we call $(u,v)$ an S-\textit{arc} of $G$.\\ A set $P$ of arcs of $G$ is called a \emph{trail} if its elements can be ordered in a sequence $a_{1},\ a_{2}, \dots,\ a_{k}$ such that each $a_{i}$ is incident with $a_{i+1}$ for all $i = 1,\ \dots,\ k-1$. If $u$ is the vertex of $a_{1}$ that is not in $a_{2}$ and $v$ is the vertex of $a_{k}$ which is not in $a_{k-1}$, then we say that $P$ \emph{joins} $u$ and $v$; $u$ is called the \emph{first vertex} of $P$ and $v$ is called the \emph{last vertex} with respect to the sequence $a_{1},\ a_{2},\ \dots,\ a_{k}$. If, whenever $a_{i}=(x,y)$, either $a_{i+1}=(x,z)$ or $a_{i+1}=(z,y)$ for some new vertex $z$, $P$ is called an \textit{alternating trail} or \textbf{A}-\textit{trail}.\nocite{Zelinka2} \\ If the first vertex $u$ and the last vertex $v$ of an \textbf{A}-trail $P$ are different, then $P$ is said to be \emph{open}. If they are equal then we have to distinguish between two cases. When the number of arcs is even then $P$ is called \emph{closed}, while when the number of arcs is odd then $P$ is called \emph{semi-closed}. Note that if $P$ is semi-closed then either (i) $a_{1}=(u,x)$ for some vertex $x$ and $a_{k} = (y,u)$ for some vertex $y$, or (ii) $a_{1}=(x,u)$ and $a_{k} = (u,y)$. If $P$ is closed then either $a_{1}=(u,x)$ and $a_{k}=(u,y)$, or $a_{1}=(x,u)$ and $a_{k} = (y,u)$. Observe also that the choice of the first (equal to the last) vertex for a closed \textbf{A}-trail is not unique but depends on the ordering of the arcs. However, this choice is unique for semi-closed \textbf{A}-trails, as the following simple argument shows. Suppose $P$ is semi-closed and the arcs of $P$ are ordered such that $u$ is the unique (in that ordering) first and last vertex, that is, it is the unique vertex such that the first and the last arcs in the ordering of $P$ do not alternate in direction at the meeting point $u$. Then it is easy to see that both $\rho_{in}(u)$ and $\rho_{out}(u)$ (degrees taken in $P$ as a subgraph induced by its arcs) are odd, whereas any other vertex $v$ in the trail has both $\rho_{in}(v)$ and $\rho_{out}(v)$ even. This is because, in the given ordering, arcs have to alternate in direction at $v$ and therefore in-arcs of the form $(x,v)$ are paired with out-arcs of the form $(v,y)$. Therefore, in no ordering of the arcs of $P$ can $u$ be anything but the only vertex at which the first and last arcs do not alternate. The same argument holds for open \textbf{A}-trails. Therefore, open and semi-closed \textbf{A}-trails are similar at least in the sense that the first and last vertices are uniquely determined regardless of the sequence of the arcs. This similarity will be strengthened by the results which we shall shortly present.\\ Let $G$ be a mixed graph. If, for every $u$, $v \in V(G)$, there exists an \textbf{A}-trail which joins them, then we say that $G$ is \textbf{A}-\textit{connected}. Clearly, a connected graph $G$ with at least two edges is always \textbf{A}-connected since we can always choose any orientation for a given edge. In the case of a mixed graph, \textbf{A}-connectedness is not guaranteed. In fact, there are easy counterexamples also among digraphs.
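The taxonomy of \textbf{A}-trails just described can be checked mechanically. The following Python sketch (ours; it assumes the input is a valid trail of distinct, loop-free arcs) tests the alternation condition and distinguishes open, closed and semi-closed \textbf{A}-trails via the first and last vertices and the parity of the number of arcs:
\begin{verbatim}
def classify_a_trail(arcs):
    # Consecutive arcs of an A-trail must share a tail or share a head.
    for (x1, y1), (x2, y2) in zip(arcs, arcs[1:]):
        if x1 != x2 and y1 != y2:
            return None                            # not an A-trail
    if len(arcs) < 2:
        return "open"
    first = (set(arcs[0]) - set(arcs[1])).pop()    # vertex of a_1 not in a_2
    last = (set(arcs[-1]) - set(arcs[-2])).pop()   # vertex of a_k not in a_{k-1}
    if first != last:
        return "open"
    return "closed" if len(arcs) % 2 == 0 else "semi-closed"

print(classify_a_trail([(1, 2), (3, 2), (3, 1)]))          # semi-closed
print(classify_a_trail([(1, 2), (3, 2), (3, 4)]))          # open
print(classify_a_trail([(1, 2), (3, 2), (3, 4), (1, 4)]))  # closed
\end{verbatim}
The first example is an instance of the semi-closed case (i) above, with $a_1=(u,x)$ and $a_k=(y,u)$ for $u=1$.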
\\ In a connected graph, the length (that is, the number of edges) of a shortest path between two given vertices $u$, $v$ is denoted by $d(u,v)$. Any other graph theoretical terms which we use are standard and can be found in textbooks such as \cite{bondy} and \cite{Harary01}. Information on automorphism groups of a graph can be found in \cite{lauri2}.\\ Let $G$ and $H$ be two mixed graphs and $\alpha$, $\beta$ be bijections from $V(G)$ to $V(H)$. The pair $(\alpha,\beta)$ is said to be a \textit{two-fold isomorphism} (or TF\textit{-isomorphism}) if the following holds: $(u,v)$ is an arc of $G$ if and only if $(\alpha(u),\beta(v))$ is an arc of $H$. We then say that $G$ and $H$ are TF-\textit{isomorphic} and write $G\cong^{\mbox{{\tiny{\textbf{TF}}}}} H$. Note that when $\alpha=\beta$ the pair $(\alpha,\beta)$ is a TF-isomorphism if and only if $\alpha$ itself is an isomorphism. If $\alpha \neq \beta$, then the given TF-isomorphism $(\alpha,\beta)$ is essentially different from a usual isomorphism and hence we call $(\alpha,\beta)$ a \textit{non}-\textit{trivial} TF-\textit{isomorphism}. In this case, we also say that $G$ and $H$ are \emph{non-trivially} TF-\emph{isomorphic}. If $(\alpha,\beta)$ is a non-trivial TF-isomorphism from a mixed graph $G$ to a mixed graph $H$, the bijections $\alpha$ and $\beta$ need not necessarily be isomorphisms from $G$ to $H$. This is illustrated by examples found in \cite{lms2}, and also others found below. \\ When $G=H$, $(\alpha,\beta)$ is said to be a TF-\textit{automorphism} and it is again called non-trivial if $\alpha \neq \beta$. The set of all TF-automorphisms of $G$ with multiplication defined by $(\alpha,\beta)(\gamma,\delta) = (\alpha \gamma, \beta \delta)$ is a subgroup of $S_{V(G)} \times S_{V(G)}$; it is called the \textit{two-fold automorphism group} of $G$ and is denoted by $\mbox{Aut}^{\mbox{{\tiny{\textbf{TF}}}}}(G)$. Note that if we identify an automorphism $\alpha$ with the TF-automorphism $(\alpha,\alpha)$, then Aut$(G) \subseteq \mbox{Aut}^{\mbox{{\tiny{\textbf{TF}}}}}(G)$. When a graph has no non-trivial TF-automorphisms, Aut$(G)=\mbox{Aut}^{\mbox{{\tiny{\textbf{TF}}}}}(G)$. It is possible for an asymmetric graph $G$, that is, a graph with $|$Aut$(G)| = 1$, to have non-trivial TF-automorphisms \cite{lms2}.\\ \section{Some double covers and invariants under TF-isomorphisms} Let $G$ be a mixed graph. The \emph{incidence double cover} of $G$, denoted by \mbox{\textbf{IDC}}$(G)$, is a bipartite graph with vertex set $V(\mbox{\textbf{IDC}}(G)) \subseteq V(G) \times \{0,\ 1\}$ and edge set $E(\mbox{\textbf{IDC}}(G)) = \{\{(u,0),(v,1)\} \mid (u,v) \in A(G)\}$. The reader may refer to \cite{klin112} for more information regarding the incidence double cover of graphs and its relevance to the study of association schemes. The \textbf{A}-\emph{cover} (or \emph{alternating double cover}) of $G$, denoted by $\mbox{\textbf{ADC}}(G)$, is a strongly bipartite digraph with vertex set $V(\mbox{\textbf{ADC}}(G)) \subseteq V(G) \times \{0,\ 1\}$ and arc set $A(\mbox{\textbf{ADC}}(G)) = \{((u,0),(v,1)) \mid (u,v) \in A(G)\}$. For a more concise notation, very often we use $u_{0}$ or $u_{1}$ to label the elements of $V(G) \times \{0,\ 1\}$ instead of $(u,0)$ or $(u,1)$. It is clear that \textbf{ADC}$(G)$ is obtained from \textbf{IDC}$(G)$ by removing isolated vertices and changing every edge $\{u_{0},v_{1}\}$ into an arc $(u_{0},v_{1})$.\\ {\Thm{Let $G$, $H$ be mixed graphs.
Then $G$ and $H$ are two-fold isomorphic if and only if \emph{\mbox{\textbf{IDC}}$(G)$} and \emph{\mbox{\textbf{IDC}}$(H)$} are isomorphic.}\label{thm:idctf}}\\ {\proof{Let $(\alpha,\beta)$ be any two-fold isomorphism from $G$ to $H$. This implies that, for any $(u,v) \in A(G)$, $(\alpha(u),\beta(v)) \in A(H)$. Consequently, given the corresponding $\{(u,0),(v,1)\} \in E(\mbox{\textbf{IDC}}(G))$, $\{(\alpha(u),0),(\beta(v),1)\} \in E(\mbox{\textbf{IDC}}(H))$. Define $\phi: V(\mbox{\textbf{IDC}}(G)) \rightarrow V(\mbox{\textbf{IDC}}(H))$ such that $\phi(x,0) = (\alpha(x),0)$ and $\phi(x,1) = (\beta(x),1)$ for any $x \in V(G)$; then $\phi$ is an isomorphism from \mbox{\textbf{IDC}}$(G)$ to \mbox{\textbf{IDC}}$(H)$.\\ Conversely, let $\phi$ be any isomorphism from \mbox{\textbf{IDC}}$(G)$ to \mbox{\textbf{IDC}}$(H)$, and suppose that $\phi\{(u,0),(v,1)\} = \{(u',0),(v',1)\}$. Clearly $(u,v)\in A(G)$ and $(u',v') \in A(H)$ by the definition of IDC. Define $\alpha : V(G) \rightarrow V(H)$ such that $\alpha(u) = u'$ if and only if $\phi(u,0) = (u',0)$. Similarly define $\beta : V(G) \rightarrow V(H)$ such that $\beta(v) = v'$ if and only if $\phi(v,1) = (v',1)$. Given any $(x,y) \in A(G)$, $(\alpha(x),\beta(y)) \in A(H)$, since $\{(x,0),(y,1)\} \in E(\mbox{\textbf{IDC}}(G))$ if and only if $\{\phi(x,0),\phi(y,1)\} = \{(x',0),(y',1)\}$ is in $E(\mbox{\textbf{IDC}}(H))$, if and only if $(x',y') \in A(H)$. } \hfill $\Box$ \bigskip} We now present Theorem 3.7 of \cite{lms1}, one of our main results there, as a corollary to Theorem \ref{thm:idctf}.\\ The \emph{canonical double cover (CDC)} of a graph or digraph $G$ (also called its \emph{duplex}, especially in the computational chemistry literature, for example, \cite{randic83}) is the graph or digraph whose vertex set is $V(G) \times \{0,1\}$ and in which there is an arc from $(u,i)$ to $(v,j)$ if and only if $i \neq j$ and there is an arc from $u$ to $v$ in $G$. The canonical double cover of $G$ is often described as the direct or categorical product $G \times K_{2}$ \cite{imrich,HB}, and is sometimes also called the \emph{bipartite double cover} of $G$. For graphs, the canonical double cover is identical to the incidence double cover.\\ {\Cor{Two graphs $G$, $H$ are TF-isomorphic if and only if \emph{\textbf{CDC}}$(G)$ and \emph{\textbf{CDC}}$(H)$ are isomorphic.}\label{cor:tfidc01}}\\ {\proof{In fact, since $G$ and $H$ are graphs, \mbox{\textbf{IDC}}$(G)$ $\cong$ \textbf{CDC}$(G)$ and \mbox{\textbf{IDC}}$(H)$ $\cong$ \textbf{CDC}$(H)$, and the result follows from Theorem \ref{thm:idctf}. } \hfill $\Box$ \bigskip} Therefore, in general the IDC of a mixed graph $G$ is a structure which is invariant under the action of a TF-isomorphism acting on $G$. In the case of mixed graphs which are not graphs, Theorem \ref{thm:idctf} is a significant improvement over Theorem 3.7 in \cite{lms1}, which only considered TF-isomorphic graphs (not mixed graphs) and the canonical double cover. \\ \bigskip \subsection{Digression} Now we can see how the neighbourhood reconstruction problem and the other problems we discussed in the first section can be described in terms of TF-isomorphisms. First, consider this alternative way of looking at TF-isomorphisms. An \emph{incidence structure} or, alternatively, a \emph{hypergraph}, is a finite set of vertices with a system of subsets (blocks), some of which can be repeated.
Number the $n$ vertices of a hypergraph in some arbitrary but fixed way, and do similarly for the $b$ blocks of the hypergraph. The \emph{incidence matrix} of the hypergraph is the $n\times b$ matrix whose $ij$ entry is 1 if the $i$th vertex is in the $j$th block, and is zero otherwise. Let $H_1, H_2$ be two such hypergraphs with incidence matrices $B_1, B_2$, respectively. Then usually $H_1$ and $H_2$ are said to be isomorphic if there is a bijection $\alpha$ from $V(H_1)$ to $V(H_2)$ (effectively, a relabelling of the vertices of $H_1$) such that, under the resulting relabelling, the blocks of $\alpha(H_1)$ are the same as the blocks of $H_2$, possibly in a different order. Similarly, an automorphism of a hypergraph $H$ is a permutation of $V(H)$ (a relabelling of the vertices) such that the new blocks are a re-ordering of the old blocks.\\ In other words, we have a permutation $\alpha$ of the rows of the incidence matrix $B_1$ such that the columns become a permutation of the columns of $B_2$. We can remove this last detail and make even the columns the same as those of $B_2$ by saying that an isomorphism from $H_1$ to $H_2$ is an independent re-ordering $\alpha$ of the rows and $\beta$ of the columns of $B_1$ such that it becomes $B_2$. Similarly, an automorphism of $H$ is an independent re-ordering of the rows and columns of $B$ which leaves $B$ unchanged. Therefore, if we consider the adjacency matrix $A$ of a graph $G$ as an incidence matrix of a hypergraph with $n$ vertices (corresponding to the rows) and $b=n$ blocks (corresponding to the columns), a TF-isomorphism (TF-automorphism) is an isomorphism (automorphism) of the hypergraph represented by $A$.\\ Looking back at the example of the Petersen graph $\Pi$ and what we have called its cousin $\Lambda$, we see that their neighbourhoods, considered as the blocks of two hypergraphs, give isomorphic hypergraphs, which means, according to the previous discussion, that $\Pi$ and $\Lambda$ are non-trivially TF-isomorphic; that is why one is a neighbourhood reconstruction of the other! What non-trivial TF-isomorphism can we write from one to the other? Looking at how the list of neighbourhoods of the vertices $\{1,2,\ldots,10\}$ of the second graph appears as a permutation of the same list of neighbourhoods of the first graph easily indicates that if $\alpha=\mbox{id}$ and $\beta=(1\ 9)(2\ 4)(5\ 7)$ then $(\alpha,\beta)$ is a TF-isomorphism from $\Pi$ to $\Lambda$ as labelled in Figure \ref{fig:peterporcu}.\\ But how do we know that $\Lambda$ is the only graph which is a neighbourhood reconstruction of (that is, TF-isomorphic to) the Petersen graph? We shall soon see this below, when we present one more result on canonical double covers. \\ The Matrix Symmetrization Problem can also be described in terms of TF-isomorphisms: given a digraph $D$, is there a graph $G$ to which $D$ is non-trivially TF-isomorphic? In the case when the matrix $A$ is already symmetric, as the problem was originally posed in \cite{Scapsalvi1}, this question becomes: given a graph $G$, is it non-trivially TF-isomorphic to some graph (possibly $G$ itself)?\\ \begin{figure} \begin{centering} \includegraphics[width=8cm,height=7.5cm]{desargues.eps} \caption{The Desargues graph.} \label{fig:desargues} \end{centering} \end{figure} Let us now return to the Petersen graph $\Pi$ and its cousin $\Lambda$ from Figure \ref{fig:peterporcu}.
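Before doing so, we note that all the claims above can be verified mechanically. The following Python sketch (ours, not from the cited papers; it uses the vertex labellings of Figure \ref{fig:peterporcu} and \texttt{networkx} for the isomorphism tests) checks that $\Pi$ and $\Lambda$ have the same neighbourhood family, that $(\mathrm{id},\beta)$ is a TF-isomorphism, and that the two graphs share the same canonical double cover:
\begin{verbatim}
import networkx as nx

# Neighbourhood lists of Pi and Lambda as given in the Introduction.
N_pi = {1: {2, 5, 6}, 2: {1, 3, 7}, 3: {2, 4, 8}, 4: {3, 5, 9},
        5: {1, 4, 10}, 6: {1, 8, 9}, 7: {2, 9, 10}, 8: {3, 6, 10},
        9: {4, 6, 7}, 10: {5, 7, 8}}
N_lam = {1: {4, 6, 7}, 2: {3, 5, 9}, 3: {2, 4, 8}, 4: {1, 3, 7},
         5: {2, 9, 10}, 6: {1, 8, 9}, 7: {1, 4, 10}, 8: {3, 6, 10},
         9: {2, 5, 6}, 10: {5, 7, 8}}

# Same family of neighbourhoods, up to a re-ordering.
family = lambda N: sorted(sorted(s) for s in N.values())
assert family(N_pi) == family(N_lam)

# (id, beta) with beta = (1 9)(2 4)(5 7) is a TF-isomorphism:
# (u,v) is an arc of Pi iff (u, beta(v)) is an arc of Lambda,
# i.e. beta(N_pi(u)) = N_lam(u) for every vertex u.
beta = {1: 9, 9: 1, 2: 4, 4: 2, 5: 7, 7: 5, 3: 3, 6: 6, 8: 8, 10: 10}
assert all({beta[v] for v in N_pi[u]} == N_lam[u] for u in N_pi)

def cdc(N):
    # Canonical double cover: an edge {(u,0),(v,1)} for every arc (u,v).
    G = nx.Graph()
    G.add_edges_from(((u, 0), (v, 1)) for u, nbrs in N.items() for v in nbrs)
    return G

assert nx.is_isomorphic(cdc(N_pi), cdc(N_lam))
assert nx.is_isomorphic(cdc(N_pi), nx.desargues_graph())
print("all checks passed")
\end{verbatim}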
Since these two graphs are TF-isomorphic, they have the same CDC, and in fact their common CDC is the well-known Desargues graph shown in Figure \ref{fig:desargues}. (We have labelled Petersen's cousin by $\Lambda$ in honour of Livio Porcu, who seems to have been the first one to observe in \cite{porcu} that $\Pi$ and $\Lambda$ have the same CDC.)\\ But now we can explain why these two graphs are the only ones with the same neighbourhood family. First we recall this result proved by Pacco and Scapellato in \cite{pacco}. For easy reference, Theorem 5.3 of \cite{pacco} may be restated as follows using our current terminology.\\ {\Thm{Given a connected bipartite mixed graph $H$, the number of non-isomorphic mixed graphs $G$ such that ${\emph{\mbox{\textbf{CDC}}(G)}} \cong H$ is equal to the number of conjugacy classes of involutions in \emph{Aut}$(H)$ that interchange the two colour classes of $H$. The number of non-isomorphic loopless mixed graphs $G$ such that ${\emph{\mbox{\textbf{CDC}}(G)}} \cong H$ is equal to the number of conjugacy classes of involutions in \emph{Aut}$(H)$ that interchange the two colour classes of $H$ \emph{and} do not take any vertex $u$ to a vertex $v$ such that $(u,v)$ is an arc.}\label{thm:conjugacyclassespeterporcu} \hfill $\Box$ \bigskip} Now, the automorphism group of the Desargues graph $D$ is isomorphic to $S_5 \times Z_2$, and has order 240. Letting $\beta$ be the automorphism of $D$ taking $(v,0)$ into $(v,1)$ and vice versa, note that $\beta$ belongs to the centre of the group. Hence, each involution of Aut$(D)$ takes the form $(\alpha,{\rm id})$ or $(\alpha,\beta)$, where $\alpha$ is an involution of $S_5$. Only the latter swaps the two colour classes; its conjugacy classes are as many as those of involutions of $S_5$. The number of conjugacy classes of involutions of $S_5$ is exactly $2$, corresponding to transpositions and double transpositions. Therefore, by Theorem \ref{thm:conjugacyclassespeterporcu}, there are exactly two non-isomorphic mixed graphs whose CDC is $D$. One of them must be the Petersen graph itself, while the other one is $\Lambda$, for which we already know that the CDC is $D$. So in this case only these proper graphs occur, not more general mixed graphs.\\ Observe that since the Petersen graph's automorphism group is isomorphic to $S_5$, this graph is stable. However, Aut($\Lambda$) is isomorphic to $S_3\times Z_2$. Thus the index of the automorphism group of $\Lambda$ in Aut$(D)$ is $20$ and so it is unstable.\\ \section{TF-isomorphism and alternating trails} We shall consider isomorphisms and TF-isomorphisms between pairs of mixed graphs, that is, we shall allow loops, directed arcs and edges. Configurations conserved by TF-isomorphisms must also be conserved by isomorphisms, since the latter are just a special case of the former. However, the converse does not necessarily hold. It is well known that loops, paths and cycles are all conserved by isomorphisms, but it is easy to see that they are not necessarily conserved by TF-isomorphisms. For example, an arc $(u,v)$ can be mapped into a loop by a TF-isomorphism $(\alpha,\beta)$ if $\alpha(u)=\beta(v)$.\\ An isomorphism conserves degrees, in-degrees and out-degrees. In the case of TF-isomorphisms, the situation is slightly more elaborate. First note that $\alpha$ must conserve the out-degree of each vertex but not the in-degree. Likewise, $\beta$ must conserve the in-degree of each vertex but not the out-degree.
Therefore, if some vertex $u \in V(G)$ is a source, $\alpha(u)$ might not be a source but it is certainly not a sink. If $u \in V(G)$ is a sink, then $\alpha(u)$ must also be a sink. An analogous argument holds for $\beta$. Hence, in the case of a digraph whose vertex set consists only of sources and sinks, $\alpha$ and $\beta$ must take sources to sources and sinks to sinks. Also, if $G$ and $H$ are graphs and $(\alpha,\beta)$ is a TF-isomorphism from $G$ to some graph $H$, then $\alpha$, $\beta$ must preserve the degree $\rho(v)$ of any vertex $v \in V(G)$, since for every vertex $v$ in a graph, $\rho(v) = \rho_{in}(v) = \rho_{out}(v)$.\\ The definitions of the term \textit{path} found in the literature tacitly imply a specific direction from one vertex to the subsequent vertex in a sequence. For example, if $u, v, w$ is a path in a graph $G$ and $\alpha$ is an isomorphism from $G$ to $H$, then the arcs $(u,v)$ and $(v,w)$ are mapped into the arcs $(\alpha(u),\alpha(v))$ and $(\alpha(v),\alpha(w))$ in $H$, with the common vertex $\alpha(v)$. But if $(\alpha,\beta)$ is a TF-isomorphism from $G$ to $H$ then the arc $(u,v)$ is mapped into $(\alpha(u),\beta(v))$, and it is the arc $(w,v)$ which is mapped into $(\alpha(w),\beta(v))$, containing the common vertex $\beta(v)$ with the previous arc. That is, to obtain a common vertex between images of successive arcs, we need to alternate the directions in the original path as $(u,v)$, $(w,v)$. This motivates our definition of \textbf{A}-trails and indicates the trend of our next results, which show what type of \textbf{A}-trails are conserved by TF-isomorphisms.\\ {\Prop{Let $G$ and $G'$ be mixed graphs and $P$ be an \emph{\textbf{A}}-trail in $G$. Let $(\alpha,\beta)$ be any \emph{TF}-isomorphism from $G$ to $G'$. Then there exists an \emph{\textbf{A}}-trail $P'$ in $G'$ such that $(\alpha,\beta)$ restricted to $P$ maps $P$ to $P'$.}\label{Prop:altertrails01}}\\ {\proof{ For an \textbf{A}-trail consisting of just one arc, the result is trivial. Let us therefore consider an \textbf{A}-trail consisting of $k$ arcs with $k\geq 2$. Let the start vertex of a given \textbf{A}-trail $P$ be $x_{0}$ and label the successive vertices by $x_{1}, \dots,\ x_{k}$. Assume without loss of generality that $x_{0}$ is the tail of $a_{1}$. The TF-isomorphism maps the arc $a_{1}= (x_{0},x_{1})$ into the arc $a_{1}'=(\alpha(x_{0}),\beta(x_{1}))$. The next arc in $P$ is $a_{2}=(x_{2},x_{1})$, which is mapped by the TF-isomorphism to the arc $a_{2}'=(\alpha(x_{2}),\beta(x_{1}))$ with $\beta(x_{1})$ as a common vertex with $a_{1}'$. By repeating the process until all arcs of $P$ have been included, we obtain an \textbf{A}-trail $P'$ of $G'$. Then, by restricting the action of the pair $(\alpha,\beta)$ to $P$ we obtain $P'$ as its image. }} \hfill $\Box$ \bigskip This proposition immediately gives the following corollary.\\ {\Cor{If $G$ is an \emph{\textbf{A}}-connected mixed graph which is TF-isomorphic to $H$, then $H$ is also \emph{\textbf{A}}-connected.} \hfill $\Box$ \bigskip} \bigskip Proposition \ref{Prop:altertrails01} implies that \textbf{A}-trails are invariant under the action of a TF-isomorphism. The following remarks are aimed at presenting a clearer picture to the reader. Recall that in an \textbf{A}-trail vertices may be repeated, so that different alternating trails such as the \textbf{A}-trails $P$ and $P'$ described in Proposition \ref{Prop:altertrails01}, when taken as digraphs in their own right, may not necessarily be TF-isomorphic.
This is illustrated in Figure \ref{fig:nontfisotrails01}.\\ \begin{figure}[h] \centering \includegraphics[width=8cm,height=8cm]{nontfisotrails01} \caption{$G$ and $G'$ are TF-isomorphic digraphs but $P$ and $P'$ are not.}\label{fig:nontfisotrails01} \end{figure} For $G$ and $G'$ as in Figure \ref{fig:nontfisotrails01}, let $\alpha$ map $5$, $7$, $3$ into $5'$, $3'$, $7'$ respectively and let it map arbitrarily the rest of the vertices of $G$ to the rest of the vertices of $G'$. Let $\beta$ map $6$, $5$, $1$, $4$, $2$ into $4'$, $2'$, $1'$, $6'$, $5'$ respectively and let it map the rest of the vertices of $G$ to the rest of the vertices of $G'$. The maps $\alpha$ and $\beta$ may be represented as shown below, where the entries labelled by $*$ may be replaced arbitrarily, but without repetitions, by any of the vertices for which there is no defined mapping. \[ {\alpha = \left( \begin{tabular}{ccccccc}1&2&3&4&5&6&7\\ \ * &*& 7$'$&*&5$'$&*&3$'$ \end{tabular} \right)}\qquad {\beta = \left( \begin{tabular}{ccccccc}1&2&3&4&5&6&7\\1$'$&5$'$&*&6$'$&2$'$&4$'$&* \end{tabular} \right)} \] \noindent The pair $(\alpha,\beta)$ is then a TF-isomorphism from $G$ to $G'$. However, the alternating trails $P$ and $P'$ in Figure \ref{fig:nontfisotrails01} are not TF-isomorphic digraphs. On the other hand, as stated in Proposition \ref{Prop:altertrails01}, any \textbf{A}-trail of a given graph, mixed graph or digraph $G$ is mapped by a TF-isomorphism to some \textbf{A}-trail of a graph $G'$ whenever $G$ and $G'$ are TF-isomorphic. This is also the case for the trails $P$ and $P'$ in Figure \ref{fig:nontfisotrails01}. In fact, it is easy to check that the open trail $P$ is mapped to the semi-closed trail $P'$ by the pair $(\alpha,\beta)$ as defined above. However, $P$ and $P'$ are not TF-isomorphic.\\ {\Prop{Let $G$ and $H$ be mixed graphs. Then a \emph{TF}-isomorphism $(\alpha,\beta)$ from $G$ to $H$ takes closed \textbf{\emph{A}}-trails of $G$ to closed \textbf{\emph{A}}-trails of $H$.} \label{prop:tftrailtypefundmap01}} {\proof{ A closed \textbf{A}-trail $P$ has an even number of arcs, while a semi-closed \textbf{A}-trail has an odd number of arcs; since a TF-isomorphism maps arcs bijectively, $P$ cannot be mapped to a semi-closed trail. Moreover, if $P$ were mapped to an open trail, then $\alpha$ or $\beta$ would have to map some vertex of $P$ to both the first vertex and the last vertex of the open \textbf{A}-trail, which is a contradiction since $\alpha$ and $\beta$ are bijections.} \hfill $\Box$ \bigskip} \bigskip Therefore, closed \textbf{A}-trails are preserved by TF-isomorphisms just as they are by isomorphisms, but the situation is different for open and semi-closed \textbf{A}-trails.\\ {\Prop{Let $G$ and $H$ be \emph{\textbf{A}}-connected mixed graphs. Then any non-trivial TF-isomorphism $(\alpha,\beta)$ from $G$ to $H$ takes at least one open \textbf{\emph{A}}-trail into a semi-closed \textbf{\emph{A}}-trail and vice-versa.}\label{prop:tftrailtypefundmap}} {\proof{ As $(\alpha,\beta)$ is non-trivial, there is at least one vertex $u \in V(G)$ such that $\alpha(u)\neq \beta(u)$. Since both $\alpha$ and $\beta$ are bijections, we get $\alpha(u)=\beta(v)$ for some $v\neq u$. Since $G$ is \textbf{A}-connected, there exists an \textbf{A}-trail $P$ joining $u$ and $v$. Clearly $P$ is open.
Its image $P'$ under $(\alpha,\beta)$ is an \textbf{A}-trail of $H$ which starts at $\alpha(u)$ and ends at $\beta(v)$; since these are equal, $P'$ is semi-closed.\\ Since $(\alpha,\beta)$ is a non-trivial TF-isomorphism from $G$ to $H$, $(\alpha^{-1},\beta^{-1})$ is a non-trivial TF-isomorphism from $H$ to $G$. Therefore, we may use the same argument to show that $(\alpha^{-1},\beta^{-1})$ must take an open \textbf{A}-trail of $H$ to a semi-closed \textbf{A}-trail of $G$. This implies that $(\alpha,\beta)$ must take some semi-closed \textbf{A}-trail in $G$ to an open \textbf{A}-trail in $H$.} \hfill $\Box$ \bigskip} Consider the following example. Let $G$ be a closed \textbf{A}-trail with $6$ vertices and let $H$ consist of a $K_{3}$ and $3$ isolated vertices. Note that $G$ is \textbf{A}-connected whereas $H$ is not. It is straightforward to check that $G$ and $H$ are TF-isomorphic. However, any TF-isomorphism from $G$ to $H$ is clearly non-trivial and maps an open \textbf{A}-trail of length $3$ in $G$ to a semi-closed \textbf{A}-trail of $H$. Therefore, the result of Proposition \ref{prop:tftrailtypefundmap} is false if the hypothesis, namely that both $G$ and $H$ are \textbf{A}-connected, is dropped.\\ As an application of Proposition \ref{prop:tftrailtypefundmap} we get the following result. {\Cor{A bipartite graph and a non-bipartite graph cannot be TF-isomorphic. Indeed, if $G$ is bipartite and $(\alpha,\beta)$ is a TF-isomorphism from $G$ to some other graph $H$, then $\alpha=\beta$.}\label{cor:tftrailtypefundmap}} {\proof{Let $G$ be a graph and let $(\alpha,\beta)$ be a non-trivial TF-isomorphism from $G$ to some graph $H$. Then, in view of Proposition \ref{prop:tftrailtypefundmap}, there is an open \textbf{A}-trail of $G$ that is taken to a semi-closed \textbf{A}-trail of $H$. Therefore $H$ has an odd cycle and is non-bipartite. Similarly, $(\alpha^{-1},\beta^{-1})$ is a non-trivial TF-isomorphism from $H$ to $G$ and therefore, by the same argument, $G$ cannot be bipartite.} \hfill $\Box$ \bigskip} The next section contains a more detailed study of how, using \textbf{A}-trails, a mixed graph $G$ can be made to correspond to a strongly bipartite digraph, extending the results of Zelinka, particularly those presented in \cite{zelinka4}. It will turn out that this digraph is a double cover of $G$ which we have already encountered.\\ {\section{Alternating double covers and an equivalence relation on arcs}\label{sec:mzdequival}} Let $G$ be any mixed graph. Consider the relation $R$ on the set $A(G)$ defined by: $xRy$ if and only if $x$ and $y$ are the first and last arcs of an \textbf{A}-trail of $G$. Clearly $xRx$, since any given arc is the first and also the last arc of an \textbf{A}-trail containing only one arc. If $xRy$ then $yRx$, since if $x$ is the first arc of an \textbf{A}-trail, then $y$ is the last arc and vice-versa. Now suppose that $xRy$ and $yRz$. If $x$ is the first arc of an \textbf{A}-trail $P$, then $y$ is the last arc of $P$. Then if $y$ is the first arc of an \textbf{A}-trail $Q$ and $z$ is the last arc of $Q$, the set-theoretical union of $P$ and $Q$ is an \textbf{A}-trail which has $x$ as first arc and $z$ as last arc.\\ {\Lem{Let $G$ be a connected graph. Then every two edges of $G$ are joined by trails of both odd and even length if and only if $G$ is not bipartite.}\label{lem:equivar01}} {\proof{Let $G$ be non-bipartite. Take any two edges $e_1$, $e_2$ and fix an odd cycle $C$ of $G$ and two vertices $v_1$, $v_2$ of $C$.
Choose two trails $P_1$, $P_2$ joining $e_1$, $e_2$ with $v_1$, $v_2$ respectively. Let $P'$ and $P''$ be the two trails that join $v_1$ and $v_2$ using the edges of $C$. Then there are two trails joining $e_1$, $e_2$, namely $P_1,\ P',\ P_2$ and $P_1,\ P'',\ P_2$. One of them has odd length and the other even, because if $P''$ has odd length then $P'$ has even length and vice versa, and therefore the inclusion of one instead of the other switches the parity. Conversely, suppose that there are trails of odd and even length between two fixed edges. In the subgraph induced by these trails, the vertices cannot be partitioned into two colour classes, hence this subgraph must be non-bipartite and must contain an odd circuit. Hence $G$ contains an odd circuit and is non-bipartite. } \hfill $\Box$ \bigskip} {\Cor{ Let $G$ be a connected graph. Then $R$ has one equivalence class if $G$ is not bipartite and two if it is bipartite.}\label{Cor:equivalr01}} {\proof{Suppose that $G$ is non-bipartite. Consider two arcs $x$ and $y$ and take the corresponding edges as $e_1$ and $e_2$. Start from $x$. On each edge of a trail joining $e_1$ and $e_2$, choose the arc so as to obtain an \textbf{A}-trail. Note that this process may continue for all edges except at most $e_2$. When $e_2$ is reached, the arc corresponding to the edge incident with $e_2$ may form an \textbf{A}-trail of order 2 or a directed path. But this depends on whether the trail concerned has odd or even length, so one of the two trails will give a whole \textbf{A}-trail containing both $x$ and $y$. On the other hand, if $G$ is bipartite, given that $x$ and $y$ form a directed path, which always happens, every trail joining $x$ and $y$ will be of even length; but an \textbf{A}-trail of even length is open and cannot allow directed paths.} \hfill $\Box$ \bigskip} \bigskip Each equivalence class of $R$ is a set of arcs, to which one can naturally associate an \textbf{A}-connected sub-digraph, whose arcs are the elements of the class and whose vertices are those incident to at least one of such arcs. In general, the relation $R$ may yield any number of classes, not just one or two as in the case of graphs, as we shall see in Theorem \ref{thm:equivalnewA4} below. \\ If $v$ is any vertex, two different arcs that have $v$ as a tail form an \textbf{A}-trail; the same can be said for two different arcs having $v$ as a head. Therefore, the arcs incident with $v$ belong to one class or two. In the latter case, we say that $v$ is a {\it frontier} vertex. Let $F(G)$ be the set of all frontier vertices of $G$. In view of Corollary \ref{Cor:equivalr01}, if $G$ is a graph then $F(G)$ is either empty (if $G$ is not bipartite) or $F(G)=V(G)$ (if $G$ is bipartite). \\ The proof of the next result is straightforward. {\Prop {Let $G$ be a connected mixed graph. The following are equivalent:\begin{description} \item{(i) All classes of $R$ are singletons.} \item{(ii) All \textbf{\emph{A}}-trails of $G$ are singletons.} \item{(iii) Each vertex of $G$ has both in-degree and out-degree less than or equal to $1$.} \item{(iv) $G$ is a directed path or a directed cycle.} \hfill $\Box$ \bigskip \end{description}}\label{Prop:equivalnewA1} } {\Prop{ Let $G$ be a connected mixed graph. Then $R$ has only one class if and only if the set $F(G)$ is empty. }\label{Prop:equivalnewA2}} {\proof{The condition is clearly necessary, for if $v$ were an element of $F(G)$ then by definition we would have at least two different classes.
On the other hand, if there is more than one class, let $x$ and $y$ be arcs that belong to different classes. Since $G$ is connected, there is a trail $P$ that joins $x$ and $y$. Somewhere in $P$ there must be arcs $x'$ and $y'$ that belong to different classes and are incident with a common vertex $v$. Thus $v \in F(G)$. } \hfill $\Box$ \bigskip} {\Prop{ Let $G$ be a connected mixed graph. Then $F(G)$ is empty, or $F(G)=V(G)$, or $F(G)$ is a disconnecting set of the underlying graph.}\label{Prop:equivalnewA3}} {\proof{We can assume that $F(G)$ is a proper subset of $V(G)$. By Proposition \ref{Prop:equivalnewA2}, there are at least two classes for $R$. Letting $x$, $y$ be elements of different classes, by the same argument as in Proposition \ref{Prop:equivalnewA2} we infer that each trail joining $x$ and $y$ must pass through a vertex $v\in F(G)$. Therefore, removing $F(G)$ leaves the arcs $x$ and $y$ in different connected components.} \hfill $\Box$ \bigskip} {\Thm{ For every pair $(m,k)$ of positive integers, there exists a mixed graph on which the equivalence relation $R$ induces $m$ classes and which has $k$ frontier vertices if and only if $m-1\leq k$.}\label{thm:equivalnewA4}} {\proof{Let us first construct a mixed graph with $m$ classes and $k$ frontier vertices whenever $m-1 \leq k$. Note that if $m-1=k$ a directed path satisfies the statement (each class consists of a single arc). The same holds for $m-2=k$ and a directed cycle. Assume then that $m-3\leq k$. Consider the $4$-set $\{a,b,c,d\}$ and the $m-2$ mixed graphs $H_i$ for $i=1,\ \dots,m-2$, where $V(H_i)=\{(a,i),(b,i),(c,i),(d,i),(a,i+1)\}$ and $A(H_i)$ contains all the arcs of the triangle $(b,i)$, $(c,i)$, $(d,i)$, plus the additional arcs $((a,i),(b,i))$ and $((d,i),(a,i+1))$. Take any connected bipartite graph $K$ with $k-m+2$ vertices and fix a vertex $u$ of $K$. Let $L$ be the digraph consisting of the single arc $(u,(a,1))$.\\ Let $G$ be the (standard graph-theoretical) union of $K$, $L$, $H_1,\ \dots, H_{m-2}$. Then $G$ is a connected mixed graph. The classes for $R$ in $G$ are: (i) the class of $K$ containing the arcs incident to $u$; (ii) the class of $K$ containing the arcs incident from $u$, together with the extra arc $(u,(a,1))$; (iii) each of the $H_i$'s for $i=1,\dots ,\ m-2$. Hence, their number is $m$. Moreover, $F(G)=V(K) \cup \{(a,1),(a,2),...,(a,m-2)\}$, so that $|F(G)|=(k-m+2)+(m-2)=k$. Therefore, for all cases where the stated inequality holds, there is a mixed graph $G$ as claimed.\\ Conversely, consider now any mixed graph $G$ and define a graph $X$ such that $V(X)=A(G)/R$ (that is, the set of classes of $R$ in $G$) and two vertices are adjacent when the associated sub-digraphs share a frontier vertex. Then $m=|V(X)|$, while the number $k'$ of edges of $X$ is less than or equal to $k=|F(G)|$ (because two classes might share more than one frontier vertex). The known inequality $m-1\leq k'$ implies $m-1 \leq k$ as claimed.} \hfill $\Box$ \bigskip} As remarked earlier, a strongly bipartite digraph can be associated with each equivalence class of $R$. Now let $D_{1}, D_{2}, \dots, D_{k}$ be the strongly bipartite digraphs corresponding to the different classes of $R$ obtained from the mixed graph $G$. Let any vertex $u$ of $V(G)$ which appears as a source in $D_{i}$ be labelled $u_{0}$ and let any vertex $v$ of $V(G)$ which appears as a sink in $D_{i}$ be labelled $v_{1}$. Thus an arc $(u,v)$ in $D_{i}$ now becomes $(u_{0},v_{1})$.
It turns out that the strongly bipartite digraph consisting of the components $D_{i}$ labelled this way is \mbox{\textbf{ADC}}$(G)$, which we have already defined earlier.\\ Figure \ref{fig:mzd04a} shows an example which may be used to illustrate the following remarks, which highlight certain properties of $\mbox{\textbf{ADC}}(G)$ in relation to the mixed graph $G$:\\ \begin{description} \item{1. We know that, for any vertex $u$ of $G$, all incoming arcs $(x,u)$ of $G$ are in the same component of \textbf{ADC}$(G)$ and similarly all outgoing arcs $(u,x)$ of $G$ are in the same component of \textbf{ADC}$(G)$. Therefore $u_0$, if present in \textbf{ADC}$(G)$, cannot appear in two different components. Similarly for $u_1$. However, as we see in the examples below, $u_0$ and $u_1$ can, in some cases, appear in the same component and they can, in other cases, appear in different components. In particular, if $G$ is a bipartite graph they appear in different components, as shown in Figure \ref{fig:mzd02a}$(ii)$, and if $G$ is a non-bipartite graph, they are in the same component, as shown in Figure \ref{fig:mzd01a}$(ii)$.} \item{2. \mbox{\textbf{ADC}}$(G)$ is a strongly bipartite digraph.} \item{3. By definition, there is no $u_0$ in \mbox{\textbf{ADC}}($G$) if $u$ is a sink in $G$, and there is no $u_1$ if $u$ is a source. } \end{description} \medskip \begin{figure}[h] \centering \includegraphics[width = 10 cm, height = 11cm]{mzd04a.eps} \caption{$\mbox{\textbf{ADC}}(G)$ obtained from a digraph $G$.}\label{fig:mzd04a} \end{figure} \clearpage \begin{figure}[h] \centering \includegraphics[width = 12.8cm, height = 5.0cm]{mzd01a.eps} \caption{$\mbox{\textbf{ADC}}(G)$ obtained from a non-bipartite graph $G$.}\label{fig:mzd01a} \end{figure} \begin{figure}[h] \centering \includegraphics[width = 12.8cm, height = 6.4cm]{mzd02a.eps} \caption{$\mbox{\textbf{ADC}}(G)$ obtained from a bipartite graph $G$.}\label{fig:mzd02a} \end{figure} \section{TF-isomorphisms and mixed graph covers} The following result can be seen as a corollary to Theorem \ref{thm:idctf}, and the proof is easy since the IDC of a mixed graph $G$ can be obtained from $\mbox{\textbf{ADC}}(G)$ simply by removing the directions of the arcs, isolated vertices being irrelevant. Here we give a direct proof because it will help us in later constructions. {\Thm{Let $G$, $H$ be mixed graphs. Then $G$ and $H$ are TF-isomorphic if and only if $\mbox{\textbf{\emph{ADC}}}(G)$ and $\mbox{\textbf{\emph{ADC}}}(H)$ are isomorphic.}\label{Thm:mzdfund01}} {\proof{Let $(\alpha,\beta)$ be a TF-isomorphism from $G$ to $H$ and let $(u,v)$ be an arc of $G$. First note that $(\alpha(u),\beta(v))$ is an arc of $H$, so that $(u_{0},v_{1})$ is an arc of $\mbox{\textbf{ADC}}(G)$ and $(\alpha(u)_{0},\beta(v)_{1})$ is an arc of $\mbox{\textbf{ADC}}(H)$. Let $f$ be a map from $V(\mbox{\textbf{ADC}}(G))$ to $V(\mbox{\textbf{ADC}}(H))$ such that $f: u_{0} \mapsto x_{0}$ if $x = \alpha(u)$ and $f: v_{1} \mapsto y_{1}$ if $y =\beta(v)$. Consider any arc $(u,v)$ of $G$ and consider the corresponding arc $(u_{0},v_{1})$ in $A(\mbox{\textbf{ADC}}(G))$. Let $(\alpha,\beta)(u,v) = (x,y)$. Then by definition $f$ takes $(u_{0},v_{1})$ to $(x_{0},y_{1})$ in $A(\mbox{\textbf{ADC}}(H))$. The function $f$ maps arcs of $\mbox{\textbf{ADC}}(G)$ to arcs of $\mbox{\textbf{ADC}}(H)$ and it is clearly bijective. Hence, $f$ is an isomorphism from $\mbox{\textbf{ADC}}(G)$ to $\mbox{\textbf{ADC}}(H)$.\\ Now suppose that $\mbox{\textbf{ADC}}(G)$ and $\mbox{\textbf{ADC}}(H)$ are isomorphic.
This implies that there exists a map $f$ such that $f(u_{0},v_{1})=(x_{0},y_{1})$. Note that the arcs must always start from a vertex whose label has $0$ as a subscript and end at a vertex whose label has $1$ as a subscript, by virtue of the construction presented above. Define $\alpha$, $\beta$ from $V(G)$ to $V(H)$ as follows. Let $\alpha(u) = x$ if $f(u_{0}) = x_{0}$, where $u \in V(G)$ and $x \in V(H)$, and let $\beta(v) = y$ if $f(v_{1}) = y_{1}$, where $v \in V(G)$ and $y \in V(H)$. Then $(\alpha ,\beta)$ takes any arc $(u,v) \in A(G)$ to some $(x,y)$ in $A(H)$. This two-fold mapping is bijective and hence $(\alpha,\beta)$ is a TF-isomorphism from $G$ to $H$.} \hfill $\Box$ \bigskip} {\Cor{Let $(\alpha,\beta)$ be a TF-isomorphism from a mixed graph $G$ to a mixed graph $H$. Then there exists an isomorphism $f_{\alpha,\beta}$ from $\mbox{\textbf{\emph{ADC}}}(G)$ to $\mbox{\textbf{\emph{ADC}}}(H)$ such that $f_{\alpha,\beta}(u_{0},v_{1}) = (x_{0},y_{1})$ if and only if $x = \alpha(u)$ and $y =\beta(v)$. }\label{cor:zassociated}} \proof{ The result follows from the proof of Theorem \ref{Thm:mzdfund01}. \hfill $\Box$ \bigskip} Refer to Figure \ref{fig:mzd04}. An isomorphism $f$ from $\mbox{\textbf{ADC}}(G)$ to $\mbox{\textbf{ADC}}(H)$ and the corresponding maps $\alpha$ and $\beta$ from $V(G)$ onto $V(H)$, which are derived from $f$ as described in the proof of Theorem \ref{Thm:mzdfund01}, are given below. \\ {\begin{eqnarray*} f: 1_{0} \mapsto 1_{0}' & \ & f: 1_{1} \mapsto 1_{1}'\\ f: 2_{0} \mapsto 2_{0}' & \ & f: 2_{1} \mapsto 3_{1}'\\ f: 3_{0} \mapsto 3_{0}' & \ & f: 3_{1} \mapsto 2_{1}'\\ f: 4_{0} \mapsto 6_{0}' &\ & f: 4_{1} \mapsto 5_{1}'\\ f: 5_{0} \mapsto 7_{0}' &\ & f: 5_{1} \mapsto 4_{1}'\\ f: 6_{0} \mapsto 5_{0}' & \ & f: 6_{1} \mapsto 6_{1}'\\ f: 7_{0} \mapsto 4_{0}' & \ & f: 7_{1} \mapsto 7_{1}' \end{eqnarray*}}\qquad {\begin{eqnarray*} \alpha: 1 \mapsto 1' & \ & \beta: 1 \mapsto 1'\\ \alpha: 2 \mapsto 2' & \ & \beta: 2 \mapsto 3'\\ \alpha: 3 \mapsto 3' & \ & \beta: 3 \mapsto 2'\\ \alpha: 4 \mapsto 6' &\ & \beta: 4 \mapsto 5'\\ \alpha: 5 \mapsto 7' &\ & \beta: 5 \mapsto 4'\\ \alpha: 6 \mapsto 5' & \ & \beta: 6 \mapsto 6'\\ \alpha: 7 \mapsto 4' & \ & \beta: 7 \mapsto 7' \end{eqnarray*}} Figure \ref{fig:tod01a} shows a digraph $G$ and its alternating double cover $\mbox{\textbf{ADC}}(G)$, which in this case has three components, namely $D_{1}$, $D_{2}$ and $D_{3}$. Figure \ref{fig:tod01a} also shows how the components of $\mbox{\textbf{ADC}}(G)$ can be combined, by associating vertices of the form $u_{0}$ with vertices of the form $v_{1}$, irrespective of whether $u=v$ or $u \not = v$, to form $G$ or other digraphs such as $G_{1}$, $G_{2}$ and $G_{3}$ having the same number of vertices as $G$. It is easy to check that $G$, $G_{1}$, $G_{2}$ and $G_{3}$ are pairwise two-fold isomorphic, as expected from the result of Theorem \ref{Thm:mzdfund01}, since each of these digraphs has the same number of vertices and they have isomorphic \textbf{ADC}s.\\ {\Prop{{(i)} A digraph $H$ is isomorphic to $\mbox{\emph{\textbf{ADC}}}(G)$ for some $G$ if and only if $H$ is strongly bipartite. {(ii)} For every digraph $G$, ${\mbox{\emph{\textbf{ADC}}}}(\mbox{\emph{\textbf{ADC}}}(G))$ is isomorphic to $\mbox{\emph{\textbf{ADC}}}(G)$.}\label{Prop:gzking01}} {\proof{We already know that the condition stated in $(i)$ is necessary in order to have $H$ isomorphic to some $\mbox{\textbf{ADC}}(G)$.
Conversely, if $H$ has this property, define a map $f: V(H) \rightarrow V(\mbox{\textbf{ADC}}(H))$ as follows: $f(u) = u_{0}$ if $u$ is a source, $f(u) = u_{1}$ if $u$ is a sink. Clearly $f$ is a bijection. If $(u,v)$ is an arc of $H$ then by our assumption $u$ is a source and $v$ is a sink of $H$. Then $(u_{0},v_{1})=(f(u),f(v))$ is an arc of $\mbox{\textbf{ADC}}(H)$. Likewise, each arc of $\mbox{\textbf{ADC}}(H)$ takes the form $(u_{0},v_{1})$, with $u$ a source and $v$ a sink of $H$, and hence $(u_{0},v_{1})$ is the image of $(u,v)$ under $f$. This proves that $f$ is an isomorphism from $H$ to $\mbox{\textbf{ADC}}(H)$, so $(i)$ is satisfied with $G=H$.\\ \noindent Now $(ii)$ is a straightforward consequence of $(i)$, taking $H = \mbox{\textbf{ADC}}(G)$.} \hfill $\Box$ \bigskip} \begin{figure}[h] \centering \includegraphics[width = 14cm, height = 18cm]{mzd04.eps} \caption{$G$ and $H$ are TF-isomorphic graphs and have isomorphic \textbf{ADC}s.}\label{fig:mzd04} \end{figure} \clearpage \begin{figure}[h] \centering \includegraphics[width = 14cm, height = 17.5cm]{tod01a.eps} \caption{A digraph $G$, the components $D_{1}$, $D_{2}$, $D_{3}$ of $\mbox{\textbf{ADC}}(G)$, and the digraphs $G_{1}$, $G_{2}$, $G_{3}$ obtained by recombining them.}\label{fig:tod01a} \end{figure} \clearpage \bigskip \section{Two-fold orbitals} Let $\mathbf{\Gamma} \leq \mathcal{S} = S_{\mbox{\scriptsize{$|V|$}}} \times S_{\mbox{\scriptsize{$|V|$}}}$. For a fixed element $(u,v)$ of $V\times V$ let \[ \mathbf{\Gamma}(u,v) = \{(\alpha(u),\beta(v))\ |\ (\alpha,\beta) \in \mathbf{\Gamma}\}. \] The set $\mathbf{\Gamma}(u,v)$ is called a \emph{two-fold orbital} or TF-\emph{orbital}. A two-fold orbital is the set of arcs of a digraph $G$ having vertex set $V$, which we call a \emph{two-fold orbital digraph} or TF-\emph{orbital digraph}. If for every arc $(x,y)$ in $\mathbf{\Gamma}(u,v)$ the oppositely directed arc $(y,x)$ is also contained in $\mathbf{\Gamma}(u,v)$, then $G$ is a \emph{two-fold orbital graph} or TF-\emph{orbital graph}. This generalisation of the well-known concept of orbital (di)graph has been discussed in \cite{lms1}.\\ {\Prop{Let $G$ be a strongly bipartite digraph. Then \begin{description} \item{(i) There is a homomorphism $\psi$ of \emph{Aut}$^{\mbox{\tiny\textbf{{TF}}}}(G)$ onto \emph{Aut}$(G)$.} \item{(ii) If $G$ is a TF-orbital digraph, then it is also an orbital digraph.} \end{description}}\label{Prop:gzking02}} {\proof{If $(\alpha,\beta)$ is a TF-automorphism of $G$, define $\psi(\alpha,\beta)=f: V(G) \rightarrow V(G)$ as follows: $f(u) = \alpha(u)$ if $u$ is a source and $f(u)=\beta(u)$ if $u$ is a sink. Since $\alpha$ preserves sources, $f$ takes sources to sources. Similarly, since $\beta$ preserves sinks, $f$ takes sinks to sinks. Since both $\alpha$ and $\beta$ are permutations, the restrictions of $f$ to the set of sources and to the set of sinks are also permutations. Hence $f$ is a permutation of $V(G)$. Given any arc $(u,v)$ of $G$, note that $(\alpha,\beta)$ takes $(u,v)$ to $(\alpha(u),\beta(v))$, which is equal to $(f(u),f(v))$ because $u$ is a source and $v$ is a sink. Hence $f$ is an automorphism of $G$, so $\psi$ maps $\mbox{Aut}^{\mbox{{\tiny{\textbf{TF}}}}}(G)$ to Aut$(G)$. A direct computation proves that $\psi$ is a group homomorphism, hence $(i)$ holds.\\ Assume now that $G = \mathbf{\Gamma}(u,v)$ for some $\mathbf{\Gamma}$. Then $\mathbf{\Gamma}$ is a subgroup of $\mbox{Aut}^{\mbox{{\tiny{\textbf{TF}}}}}(G)$ and $\psi (\mathbf{\Gamma})$ is a subgroup of Aut$(G)$.
Each arc of $G$ takes the form $(\alpha(u),\beta(v))$, where $(\alpha,\beta) \in \mathbf{\Gamma}$ and $u$, $v$ are a source and a sink respectively. Letting $f = \psi(\alpha,\beta)$, this arc is $(f(u),f(v))$, so it belongs to the orbital digraph $\psi(\mathbf{\Gamma})(u,v)$. This proves that $G$ is contained in this orbital digraph. The opposite inclusion can be shown in the same way, so that $G = \psi(\mathbf{\Gamma})(u,v)$ and $(ii)$ follows. \hfill $\Box$ \bigskip}}\\ {\Cor{Let $G$ be a strongly bipartite digraph. Then $G$ is a two-fold orbital digraph if and only if ${\emph{\mbox{\textbf{CDC}}(G)}}$ is an orbital digraph.}\label{cor:gzking03}} {\proof{By Proposition \ref{Prop:gzking01}, $G$ and $\mbox{\textbf{ADC}}(G)$ are isomorphic. If either of them is a TF-orbital, then of course the same holds for the other one, but by Proposition \ref{Prop:gzking02} in this case these TF-orbitals are both orbitals. } \hfill $\Box$ \bigskip} \section{Conclusion} We believe that the TF-isomorphism is a relatively new concept. The only other author who considered it was Zelinka, in short papers motivated by the concept of isotopy in semigroups \cite{zelinka1, Zelinka2}. Our papers (\cite{lms1} and \cite{lms2}) are the first attempts at a systematic study of TF-isomorphisms. \\ In this paper we have shown close links between TF-isomorphisms and double covers, and how the decomposition of a particular double cover can be used to obtain TF-isomorphic graphs.\\ We have also seen that TF-isomorphisms give a new angle for looking at some older problems in graph theory. But does the notion of TF-isomorphism add anything new to these older questions? We believe that it does. For example, in \cite{lms3} we prove the following result, which explains instability of graphs in terms of TF-automorphisms. \\ {\Thm{Let \emph{Aut}$^{\mbox{\tiny\textbf{{TF}}}}(G)$ be the group of TF-automorphisms of a mixed graph $G$. Then \emph{Aut}$({\emph{\mbox{\textbf{CDC}}(G)}})$ is isomorphic to the semi-direct product \emph{Aut}$^{\mbox{\tiny\textbf{{TF}}}}(G)\rtimes \mathbb{Z}_2$. Therefore $G$ is unstable if and only if it has a non-trivial TF-automorphism. \hfill $\Box$ \bigskip}} Also, it is not very likely that looking at these questions without the notion of TF-isomorphism would lead one to the notion of \textbf{A}-trails, a technique which we feel is very useful, or to the construction of asymmetric graphs with a non-trivial TF-automorphism, an interesting notion which would not be so natural to formulate using only matrix methods, say. Some results and proofs are clearer in the TF-isomorphism setting. For example, in some of the papers cited we find the following result about graph reconstruction from neighbourhoods.\\ \begin{Thm}[\cite{Aigner1}] \label{thm:nhoodrecbipartite} If $G$ is a connected bipartite graph, then any nonisomorphic graph $H$ with the same neighbourhood family as $G$ must be a disconnected graph with two components which themselves have identical neighbourhood hypergraphs. \hfill $\Box$ \bigskip \end{Thm} From the TF-isomorphism point of view, this result follows from three very basic facts: (i) two graphs have the same neighbourhood family (equivalent to being TF-isomorphic) if and only if they have the same canonical double cover; (ii) the canonical double cover of a graph $G$ is disconnected if and only if $G$ is bipartite; and (iii) when $G$ is bipartite, the canonical double cover of $G$ is simply two disjoint copies of $G$.
Therefore, for $H$ to have the same canonical double cover as $G$, it must consist of two components isomorphic to $K$, where $G$ is the canonical double cover of $K$. This gives Theorem \ref{thm:nhoodrecbipartite}. Moreover, from these remarks we also see that the only bipartite graphs for which there are non-isomorphic graphs with the same neighbourhood hypergraph are those which are canonical double covers. The Realisability Problem restricted to bipartite graphs therefore becomes: given a bipartite graph $G$, is there a graph $K$ such that $G$ is the canonical double cover of $K$? A result in this direction was proved in \cite{Scapsalvi2}, where graphs whose canonical double covers are Cayley graphs are characterised.\medskip So it seems that the TF-isomorphism point of view can give a new handle on some of these problems. We intend to pursue this line of research in a forthcoming work.\\ \section*{Acknowledgement} We are grateful to M.~Muzychuk for first pointing out to us the usefulness of considering TF-isomorphisms as isomorphisms between incidence structures. \nocite{porcu} \nocite{Scapsalvi2} \clearpage
\section{Introduction} Stellar population synthesis is a powerful technique to study the stellar contents of galaxies and star clusters (see, e.g., \citealt{Yungelson:1997}, \citealt{Tout:1997}, \citealt{Pols:1998}, \citealt{Hurley:2007}). It is also an important method to study the formation and evolution of galaxies. Simple stellar population (SSP) models that do not take binary interactions into account (hereafter ssSSPs) are usually used for spectral stellar population studies, as most available models are of this kind (e.g., \citealt{Bruzual:2003}, \citealt{Vazdekis:1999}, \citealt{Fioc:1997}, \citealt{Worthey:1994}). However, as pointed out by, e.g., \cite{Duquennoy:1991}, \cite{Pinfield:2003}, and \cite{Lodieu:2007}, about 50\% of stars are in binaries, and binaries evolve differently from single stars. We can see this when comparing the isochrone of an ssSSP to that of a bsSSP, i.e., an SSP that takes binary interactions into account (Fig. 1). In fact, bsSSPs fit the colour-magnitude diagrams (CMDs) of star clusters better than ssSSPs \citep{Li:2007database}. This suggests that binary interactions can affect stellar population synthesis studies significantly, and it is therefore necessary to take them into account. This is also supported by some observational results, e.g., the far-UV excess of elliptical galaxies \citep{Han:2007} and blue stragglers in star clusters (e.g., \citealt{Davies:2004}; \citealt{Tian:2006}; \citealt{Xin:2007}). These phenomena can be naturally explained via stellar populations with binary interactions, without any special assumptions. A few works have tried to model populations via binary stars and have presented some results on the effects of binary interactions on spectral stellar population synthesis. For example, \citet{Zhang:2004, Zhang:2005} showed that binary interactions can make bsSSPs bluer than ssSSPs. However, there has been no more detailed investigation of how binary interactions affect the Lick indices and colours of stellar populations, or the determination of stellar ages and metallicities. One of our previous works, \citet{Li:2007database}, compared bsSSPs to ssSSPs, but various stellar population models were used. This makes it difficult to isolate the changes in the Lick indices and colours of populations that result only from binary interactions. Furthermore, it did not show how binary interactions affect the determination of stellar ages and metallicities. Consequently, we have no clear picture of the differences between the predictions of bsSSPs and ssSSPs, and we do not know well the differences between the luminosity-weighted stellar-population parameters (age and metallicity) determined by bsSSPs and by ssSSPs. Because all galaxies contain some binaries, detailed studies of the effects of binary interactions on the Lick indices and colours of populations are important, as are their effects on the determination of stellar-population parameters. In this work, we perform a detailed study of the effects of binary interactions on spectral stellar population synthesis, via the rapid spectral population synthesis ($RPS$) model of \citet{Li:2007database}. The paper is organized as follows. In Sect. 2, we briefly introduce the $RPS$ model. In Sect. 3, we study the effects of binary interactions on the isochrones, spectral energy distributions (SEDs), Lick Observatory Image Dissector Scanner absorption line indices (Lick indices), and colours of stellar populations. In Sect. 4, we investigate the differences between stellar ages and metallicities fitted by ssSSPs and bsSSPs. Finally, in Sect.
\section{The rapid spectral population synthesis model} We take the results of the $RPS$ model of \citet{Li:2007database} for this work, as no other such model is available. The $RPS$ model calculated the evolution of binaries and single stars via the rapid stellar evolution code of \citet{Hurley:2002} (hereafter the Hurley code) and took the spectral libraries of \citet{Martins:2005} and \citet{Westera:2002} (BaSeL 3.1) for spectral synthesis. The model calculated the high-resolution (0.3 $\rm \AA$) SEDs, Lick indices and colours for both bsSSPs and ssSSPs with the initial mass functions (IMFs) of \citet{Salpeter:1955} and \citet{Chabrier:2003}. Note that the $RPS$ model used a statistical isochrone database for modeling stellar populations \citep{Li:2007database}. Each bsSSP contains about 50\% stars that are in binaries with orbital periods less than 100 yr (the typical value of the Galaxy), and binary interactions such as mass transfer, mass accretion, common-envelope evolution, collisions, supernova kicks, angular momentum loss mechanisms, and tidal interactions are considered when evolving binaries via the Hurley code. Thus the $RPS$ model is suitable for studying the effects of binary interactions on stellar population synthesis studies. However, some parameters, such as those describing the common-envelope prescription, mass-loss rates, and supernova kicks, are free parameters. We take the default values of the Hurley code, i.e., 0.5, 1.5, 1.0, 0.0, 0.001, 3.0, 190.0, 0.5, and 0.5, for the wind velocity factor ($\beta_{\rm w}$), Bondi-Hoyle wind accretion fraction ($\alpha_{\rm w}$), wind accretion efficiency factor ($\mu_{\rm w}$), binary enhanced mass loss parameter ($B_{\rm w}$), fraction of accreted material retained in a supernova eruption ($\epsilon$), common-envelope efficiency ($\alpha_{\rm CE}$), dispersion in the Maxwellian distribution of supernova kick speeds ($\sigma_{\rm k}$), Reimers coefficient for mass loss ($\eta$), and binding energy factor ($\lambda$), respectively. These default values are taken because they have been tested by the developers of the Hurley code and appear reliable. One can refer to the paper of \citet{Hurley:2002} for more details. In fact, many of these free parameters remain uncertain, and their uncertainties can possibly have a great effect on our results. When we test the uncertainties caused by varying $\alpha_{\rm CE}$ and $\lambda$, we find that the number of blue stragglers can change by as much as 40\% compared to the default case. However, it is extremely difficult to quantify the resulting uncertainties in spectral stellar population synthesis, as we lack constraints on these free parameters (see \citealt{Hurley:2002}). Therefore, when we estimate the synthetic uncertainties of our $RPS$ model, the uncertainties due to the variation of the free parameters are not taken into account. Because the fitted formulae used by the Hurley code to evolve stars lead to uncertainties of less than about 5\% \citep{Hurley:2002}, we take 5\% as the uncertainty in the evolution of stars throughout the paper. The correctness of the results of the $RPS$ model therefore depends on how appropriate the default parameters of the Hurley code are. In addition, the uncertainties in the final generated spectra caused by the spectral library and by the method used for spectral synthesis are about 3\% and 0.81\%, respectively.
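For orientation, these individual error sources can be combined into a rough overall synthetic uncertainty. The short sketch below assumes that the error sources are independent and add in quadrature; this combination rule is our assumption, not part of the $RPS$ model description, and it yields a value consistent with the overall systematic error of about 6\% quoted in Sect. 5.

\begin{verbatim}
import math

# Error sources quoted above (per cent): stellar evolution fits
# (Hurley code), spectral library, and spectral synthesis method.
sources = {"evolution": 5.0, "library": 3.0, "synthesis": 0.81}

# Assumption: independent errors combined in quadrature.
total = math.sqrt(sum(v ** 2 for v in sources.values()))
print("combined synthetic uncertainty: %.1f%%" % total)  # ~5.9%
\end{verbatim}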
Because a Monte Carlo technique is used by the $RPS$ model to generate the star sample of stellar populations (2\,000\,000 binaries or 4\,000\,000 single stars in our work, twice that of the model of \citealt{Zhang:2004}), the number of stars can result in statistical errors in the Lick indices and colours of populations. According to our tests, 1\,000\,000 binaries are enough to get reliable Lick indices (see also \citealt{Zhang:2005}), but the near-infrared colours such as $(I-K)$, $(R-K)$, and $(r-K)$ of old populations are affected by the Monte Carlo method. However, the errors caused by the Monte Carlo method are small, about 2\% for a sample of 4\,000\,000 stars. Note that in this work, a uniform distribution is used to generate the ratio ($q$, 0--1) of the mass of the secondary to that of the primary (\citealt{Mazeh:1992}; \citealt{Goldberg:1994}), and then the mass of the secondary is calculated from that of the primary and $q$. The separation ($a$) of the two components of a binary is generated following the assumption that the fraction of binaries in an interval of log($a$) is constant when $a$ is large (10$R_\odot$ $< a <$ 5.75 $\times$ 10$^{\rm 6}$$R_\odot$) and falls off smoothly when $a$ is small ($\leq$ 10$R_\odot$) \citep{Han:1995}. The distribution of $a$ is written as \begin{equation} a \cdot p(a) = \left\{ \begin{array}{ll} a_{\rm sep}(a/a_{\rm 0})^{\psi}, &~a \leq a_{\rm 0}\\ a_{\rm sep}, &~a_{\rm 0} < a < a_{\rm 1}\\ \end{array} \right. \end{equation} where $a_{\rm sep} \approx 0.070, a_{\rm 0} = 10R_{\odot}, a_{\rm 1} = 5.75 \times 10^{\rm 6}R_\odot$ and $\psi \approx 1.2$. The eccentricity ($e$) of each binary system is generated using a uniform distribution in the range 0--1; $e$ affects the results only slightly \citep{Hurley:2002}. In addition, the $RPS$ model uses some methods different from those used in the work of \citet{Zhang:2004} to calculate the SEDs, Lick indices, and colours of populations. Besides using a statistical isochrone database, the $RPS$ model calculated the Lick indices directly from SEDs, while the work of \citet{Zhang:2004} used fitting formulae to compute the same indices. The $RPS$ model calculated the colours of populations from SEDs, but the work of \citet{Zhang:2004} computed colours by interpolating the photometry library of BaSeL 2.2 \citep{Lejeune:1998}. Furthermore, the $RPS$ model used the more advanced version 3.1 \citep{Westera:2002} of the BaSeL library rather than version 2.2 to give the colours of the populations. The BaSeL 3.1 library overcomes the weakness of the BaSeL 2.2 library at low metallicity, because it has been colour-calibrated independently at all levels of metallicity. This makes the predictions of our model more reliable. Another important point is that the model of Zhang et al. did not present the near-infrared colours of stellar populations, but such colours are very important for disentangling the well-known age--metallicity degeneracy. \section{Effects of binary interactions on stellar population synthesis} \subsection{The effects on isochrones of stellar populations} The direct effect of binary interactions on stellar population synthesis is to change the isochrones of stellar populations, e.g., the distribution of stars in the surface gravity [log($g$)] versus effective temperature ($T_{\rm eff}$) grid, hereafter the $gT$-grid. We investigate the differences between the isochrones of bsSSPs and ssSSPs. Stellar populations with the IMF of \citet{Salpeter:1955} are taken as our standard models for this work.
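For reproducibility, the Monte Carlo generation of the binary sample underlying these models (Sect. 2) can be sketched as follows. The sketch draws primary masses from the Salpeter IMF (with limits of 0.1 and 100 $M_{\odot}$), mass ratios and eccentricities from uniform distributions, and separations from Eq. (1) by inverse-transform sampling; the variable names and the inverse-transform derivation are ours, and the code only illustrates the input distributions, not the actual $RPS$ implementation.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def salpeter_mass(n, m_lo=0.1, m_hi=100.0, alpha=2.35):
    # Inverse-transform sampling of p(m) ~ m**(-alpha).
    u = rng.random(n)
    k = 1.0 - alpha
    return (m_lo**k + u * (m_hi**k - m_lo**k)) ** (1.0 / k)

def separation(n, a_sep=0.070, a0=10.0, a1=5.75e6, psi=1.2):
    # Inverse-transform sampling of Eq. (1); a in solar radii.
    low = a_sep / psi                      # weight of the a <= a0 piece
    total = low + a_sep * np.log(a1 / a0)  # ~1 by construction
    t = rng.random(n) * total
    return np.where(t < low,
                    a0 * (t * psi / a_sep) ** (1.0 / psi),
                    a0 * np.exp((t - low) / a_sep))

n = 2_000_000          # binaries per bsSSP sample
m1 = salpeter_mass(n)  # primary masses
q = rng.random(n)      # mass ratios, uniform on (0, 1)
m2 = q * m1            # secondary masses
a = separation(n)      # orbital separations
e = rng.random(n)      # eccentricities, uniform on (0, 1)
\end{verbatim}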
The Salpeter IMF is actually not the best one for stellar population studies, although it is widely used, because it is not valid for low masses. However, this IMF is reliable for stellar population synthesis, because low-mass stars contribute much less to the light of populations than high-mass stars. Further studies will be made using more realistic IMFs, e.g., that of \citet{Kroupa:1993}. Because the isochrone database used in this work divides the $gT$-grid into 1\,089\,701 sub-grids, with intervals of 0.01 and 40\,K in log($g$) and $T_{\rm eff}$, respectively, it is possible to compare the isochrones of bsSSPs and ssSSPs. The differences between the isochrones of the two kinds of populations are calculated by subtracting the fraction of stars of the bsSSP from that of its corresponding ssSSP, sub-grid by sub-grid. An ssSSP and its corresponding bsSSP have the same star sample, metallicity, and age, and all their integrated properties (SEDs, colours, and Lick indices) are calculated via the same method. Therefore, the differences between the isochrones of a bsSSP and its corresponding ssSSP result only from binary interactions. For convenience, we call the difference a ``discrepancy isochrone''. Here we show the discrepancy isochrones for a few stellar populations in Figs. 2 and 3, for metal-poor ($Z$ = 0.004) and solar-metallicity ($Z$ = 0.02) populations, respectively. Because the discrepancy isochrones of metal-rich ($Z$ = 0.03) populations are found to be similar to those of solar-metallicity populations, we do not show the results for metal-rich populations. Note that results for populations with metallicities poorer than 0.004 were also computed, but we do not show them as examples for metal-poor populations, because the $RPS$ model did not give the SEDs and Lick indices for populations with metallicities poorer than 0.004. This is limited by the spectral library used by the $RPS$ model, which only supplies spectra for stars more metal-rich than 0.002. As we can see, some special stars, e.g., blue stragglers, are generated by binary interactions (see also Fig. 1). We also find that the differences between the isochrones of old bsSSPs and ssSSPs are smaller than those of young populations, because the isochrones of old populations are dominated by low-mass stars, in which binary interactions are much weaker. \begin{figure} \includegraphics[angle=-90,width=160mm]{f1.ps} \caption{Comparison of the shapes of the isochrones of a pair of solar-metallicity ($Z$ = 0.02) bsSSP and ssSSP. The figure is plotted by putting the isochrones of the bsSSP and ssSSP together. Black points show the isochrone of the ssSSP, and gray points the isochrone of the bsSSP.} \end{figure} \begin{figure} \includegraphics[angle=-90,width=160mm]{f2.ps} \caption{Differences between the number distributions of stars in the isochrones of metal-poor ($Z$ = 0.004) bsSSPs and ssSSPs. The darker the colour, the bigger the difference between the star numbers of the bsSSP and ssSSP, (N$_{\rm b}$ - N$_{\rm s}$). For each sub-grid, N$_{\rm b}$ is the number of stars of the bsSSP, while N$_{\rm s}$ is the number of stars of the ssSSP, located in the sub-grid. Note that the grid of log($g$) versus $T_{\rm eff}$ with a log($g$) range of $-1.5$ -- 6 and a $T_{\rm eff}$ range of 2\,000 -- 60\,000 K is divided into 1\,089\,701 sub-grids, with intervals of 0.01 and 40 K for log($g$) and $T_{\rm eff}$, respectively.
A point in the figures corresponds to a sub-grid.} \end{figure} \begin{figure} \includegraphics[angle=-90,width=160mm]{f3.ps} \caption{Similar to Fig. 2, but for solar-metallicity ($Z$ = 0.02) stellar populations.} \end{figure} \subsection{The effects on integrated features of populations} The most widely used integrated features of stellar populations are SEDs, Lick indices, and colours. We investigate the effects of binary interactions on them in this section. \subsubsection{Spectral energy distributions} To investigate how binary interactions affect the SEDs of stellar populations, we compare the SEDs of a bsSSP and an ssSSP that have the same age and metallicity. The differences between the SEDs are simply called discrepancy SEDs. The absolute discrepancy SED for a pair of bsSSP and ssSSP is derived by subtracting the flux of the ssSSP from that of the bsSSP, as a function of wavelength. The discrepancy SEDs are mainly caused by blue stragglers and hot sub-dwarfs, as such stars are very hot and luminous. The changes of the surface abundances of stars caused by binary interactions can also contribute to the discrepancy SEDs. The absolute discrepancy SEDs for metal-poor ($Z$ = 0.004) and solar-metallicity ($Z$ = 0.02) stellar populations are shown in Figs. 4 and 5, respectively. Absolute discrepancy SEDs such as those shown in the two figures can easily be used to add binary interactions to ssSSP models, but fractional discrepancy SEDs are more useful for understanding the effects of binary interactions. We show the fractional discrepancy SEDs of a few solar-metallicity populations in Fig. 6. \begin{figure} \includegraphics[angle=-90,width=160mm]{f4.ps} \caption{Absolute differences between the SEDs of metal-poor ($Z$ = 0.004) bsSSPs and ssSSPs, i.e., absolute discrepancy SEDs. The discrepancy SED of a pair of bsSSP and ssSSP is calculated by subtracting the flux of the ssSSP, F$_{\rm s}$, from that of the bsSSP, F$_{\rm b}$, as a function of wavelength. Each population contains 2\,000\,000 binaries or 4\,000\,000 single stars, whose initial masses follow the Salpeter IMF with lower and upper limits of 0.1 and 100 $M_{\odot}$.} \end{figure} \begin{figure} \includegraphics[angle=-90,width=160mm]{f5.ps} \caption{Similar to Fig. 4, but for solar-metallicity ($Z$ = 0.02) stellar populations.} \end{figure} \begin{figure} \includegraphics[angle=-90,width=160mm]{f6.ps} \caption{Fractional discrepancy SEDs of solar-metallicity stellar populations. Symbols F$_{\rm s}$ and F$_{\rm b}$ have the same meanings as in Fig. 4.} \end{figure} As we can see, binary interactions make stellar populations less luminous, but the flux in short-wavelength bands is changed more weakly by binary interactions than that in long-wavelength bands. This mainly results from the special stars generated by binary interactions, which contribute differently to the flux in different bands. The differences between the SEDs of a bsSSP and its corresponding ssSSP decrease with increasing age or decreasing metallicity. In addition, this suggests that binary interactions can affect most Lick indices and colours of populations, because the flux changes caused by binary interactions are non-zero in the bands where the widely used Lick indices and magnitudes are defined. Consequently, bsSSP and ssSSP models usually give different results in stellar population studies.
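The discrepancy SEDs used here are straightforward to compute once the SEDs of a bsSSP and its corresponding ssSSP are available; a minimal sketch is given below (the interpolation onto a common wavelength grid is our addition for generality, since in the $RPS$ model both SEDs share the same grid).

\begin{verbatim}
import numpy as np

def discrepancy_sed(wave_b, flux_b, wave_s, flux_s):
    # Absolute and fractional discrepancy SEDs,
    # F_b - F_s and (F_b - F_s) / F_s, versus wavelength.
    f_s = np.interp(wave_b, wave_s, flux_s)
    d_abs = flux_b - f_s
    return d_abs, d_abs / f_s
\end{verbatim}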
Furthermore, because the effects of binary interactions on the SED flux of populations are about 11\% on average, they are detectable in observations with a spectral signal-to-noise ratio (SNR) greater than 10. In other words, the effects can be detected in most observations, as most reliable observations have SNRs greater than 10. \subsubsection{Lick indices} Lick indices are the most widely used indices in stellar population studies, because they can disentangle the well-known stellar age--metallicity degeneracy (see, e.g., \citealt{Worthey:1994}). If binary interactions are taken into account, results different from those determined via ssSSP models will be obtained, as suggested by the differences between the SEDs of bsSSPs and ssSSPs shown above. Here we test how binary interactions change the Lick indices of stellar populations compared to those of ssSSPs. In Fig. 7, we show the differences between four widely used indices of bsSSPs and those of ssSSPs. \begin{figure} \includegraphics[angle=-90,width=160mm]{f7.ps} \caption{Differences between four widely used Lick indices of bsSSPs and ssSSPs. The difference in a Lick index is calculated by subtracting the value of an ssSSP from that of its corresponding bsSSP, which has the same age and metallicity as the ssSSP. Star, circle, cross, and square are for populations with metallicities of 0.004, 0.01, 0.02, and 0.03, respectively. Note that the differences are averaged in each bin.} \end{figure} The indices are calculated directly from the SEDs on the Lick system \citep{Worthey:1994lickdefinition}. As we can see, Fig. 7 shows that binary interactions make the H$\beta$ index of a population larger by about 0.15 ${\rm \AA}$, while making the Mgb index smaller by about 0.06 ${\rm \AA}$ and the Fe indices smaller by more than about 0.1 ${\rm \AA}$, compared to ssSSPs. Therefore, the changes in Lick indices are usually larger than typical observational uncertainties (about 0.07 ${\rm \AA}$ for the H$\beta$ index and 0.04 ${\rm \AA}$ for metal-line indices according to the data of \citealt{Thomas:2005}). For fixed metallicity, the effects of binary interactions on both age- and metallicity-sensitive indices become stronger with increasing age when the stellar age is less than about 1.5 $\sim$ 2\,Gyr, and then the effects become weaker with increasing age. The reason is that binary interactions change the isochrones of populations most strongly near 1.5 $\sim$ 2\,Gyr, as the first mass transfer between the two components of binaries peaks and many blue stragglers are generated near 1.5 $\sim$ 2\,Gyr according to the star sample of bsSSPs, while the light of old populations is dominated by low-mass binaries. The interactions between the two components of low-mass binaries are usually weaker than those of high-mass binaries. The effects of binary interactions on the isochrones are tested quantitatively using the numbers of stars with log($g$) $<$ 4.0 and log($T_{\rm eff}$) $>$ 3.75, because these stars are very luminous and contribute a lot to the light of their populations. Our result shows that binary interactions change the number distribution of stars in the above log($g$) and log($T_{\rm eff}$) ranges most significantly when the stellar age is from 1.5 to 2\,Gyr. In addition, from Fig. 7, we find that for a fixed age, binary interactions affect the H$\beta$ and Fe5270 indices of metal-poor populations more strongly than those of metal-rich populations, while they affect the Mgb and Fe5335 indices of metal-poor populations more weakly.
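For reference, an atomic Lick index is an equivalent width measured against a pseudo-continuum defined by a blue and a red side band. A generic sketch of such a computation from an SED is given below; the band limits are inputs to be taken from the Lick system definition \citep{Worthey:1994lickdefinition}, and the function is our illustration rather than the actual $RPS$ implementation.

\begin{verbatim}
import numpy as np

def band_mean(wave, flux, band):
    # Mean flux in a wavelength band (Angstrom).
    lo, hi = band
    m = (wave >= lo) & (wave <= hi)
    return np.trapz(flux[m], wave[m]) / (hi - lo)

def lick_index_ew(wave, flux, blue, central, red):
    # Equivalent width (Angstrom) of an atomic Lick index;
    # blue, central, red are (lo, hi) band limits.
    fb = band_mean(wave, flux, blue)
    fr = band_mean(wave, flux, red)
    wb, wr = sum(blue) / 2.0, sum(red) / 2.0
    # Linear pseudo-continuum through the two side-band points.
    cont = fb + (fr - fb) * (wave - wb) / (wr - wb)
    lo, hi = central
    m = (wave >= lo) & (wave <= hi)
    return np.trapz(1.0 - flux[m] / cont[m], wave[m])
\end{verbatim}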
As a whole, ssSSPs and bsSSPs will yield different ages and metallicities for the same stellar population via popular Lick-index methods such as the H$\beta$ \& [MgFe] method, which determines the ages and metallicities of populations by comparing the observational and theoretical H$\beta$ and [MgFe] indices \citep{Thomas:2003}. Note that the evolution of the differences between the Lick indices of ssSSPs and bsSSPs was not shown before. \subsubsection{Colour indices} Because colours are useful for estimating the ages and metallicities of distant galaxies (see, e.g., \citealt{Li:2008colourpairs}), we investigate the effects of binary interactions on them. We use a method similar to that used for studying the Lick indices of stellar populations. Our detailed results are shown in Fig. 8. \begin{figure} \includegraphics[angle=-90,width=160mm]{f8.ps} \caption{Differences between four colours of bsSSPs and ssSSPs. The differences in a colour are calculated by subtracting the colour of an ssSSP from that of its corresponding bsSSP (with the same age and metallicity). Symbols have the same meanings as in Fig. 7. The colour $(u-r)$ is on the SDSS-$ugriz$ system and $(r-K)$ is a composite colour that consists of a Johnson magnitude ($K$) and an SDSS-$ugriz$ magnitude ($r$). The differences are averaged in each bin.} \end{figure} In the figure, the differences between two $UBVRIJHK$ colours, $(B-V)$ and $(B-K)$, a $ugriz$ colour on the photometric system used by the Sloan Digital Sky Survey (hereafter SDSS system), $(u-r)$, and a composite colour, $(r-K)$, of bsSSPs and those of ssSSPs are shown. Note that the $(r-K)$ colour consists of a Johnson system magnitude, $K$, and an SDSS system magnitude, $r$. As we can see, for a fixed age and metallicity, binary interactions make the colours of most stellar populations bluer than those predicted by ssSSPs. This mainly results from the blue stragglers generated by binary interactions, because such stars are very luminous and blue. It suggests that we will get different stellar metallicities and ages for galaxies via bsSSP and ssSSP models when using a photometric method. Compared to the typical colour uncertainties (0.12, 0.06, 0.13, 0.10, and 0.01 mag for the $B$, $V$, $K$, $u$, and $r$ magnitudes, respectively), the changes [e.g., about $-0.04$, $-0.15$, and $-0.08$ mag for $(B-V)$, $(B-K)$, and $(u-r)$, respectively] of colours caused by binary interactions are similar to, but somewhat smaller than, typical observational errors. Note that the photometric uncertainties are estimated using the data of some publications, SDSS, and the Two Micron All Sky Survey (2MASS). The uncertainties actually depend on the survey. The observational uncertainties of the $u$ and $K$ magnitudes may be smaller when taking the data of other surveys instead of those of SDSS and 2MASS. In addition, similar to the Lick indices, the differences between the colours of the two kinds of stellar populations peak near 2\,Gyr. \section{The effects of binary interactions on the determination of stellar-population parameters} Two stellar-population parameters, i.e., stellar age and metallicity, are crucial in investigations of the formation and evolution of galaxies. We investigate the effects of binary interactions on the estimates of these two parameters. We try to fit the stellar-population parameters of bsSSPs with various ages and metallicities using ssSSPs, via both Lick-index and photometric methods.
Because observations show that about 50\% of stars in the Galaxy are in binaries, bsSSPs should be more similar to the real stellar populations of galaxies and star clusters. Therefore, the stellar-population parameters fitted via ssSSPs (hereafter ss-fitted results, represented by $t_{\rm s}$ and $Z_{\rm s}$) should be different from the results obtained via bsSSPs (bs-fitted results, $t_{\rm b}$ and $Z_{\rm b}$). The detailed differences are shown in this section. \subsection{Lick-index method} In the widely used Lick-index method, we fit the stellar ages and metallicities of populations by two indices, i.e., H$\beta$ and [MgFe] = $\rm {\sqrt{Mgb \cdot (0.72Fe5270 + 0.28Fe5335)}}$, after \citet{Thomas:2003}. Thus the results are only slightly affected by $\alpha$-enhancement, and stellar population models (e.g., our $RPS$ model) without any $\alpha$-enhancement relative to the Sun can be used to measure stellar-population parameters. The differences between the bs- and ss-fitted stellar-population parameters of populations with four metallicities (0.004, 0.01, 0.02, and 0.03) and 150 ages (from 0.1 to 15\,Gyr) are tested. In the test, we try to fit the stellar ages and metallicities of testing bsSSPs via an H$\beta$ versus [MgFe] grid of ssSSPs. Because ssSSPs predict different Lick indices for populations compared to bsSSPs, when we use ssSSPs to fit the ages and metallicities of our testing bsSSPs, the results obtained differ from the real parameters of the bsSSPs, i.e., the bs-fitted parameters. From an H$\beta$ versus [MgFe] grid of ssSSPs (Fig. 9), we can see this clearly. \begin{figure} \includegraphics[angle=-90,width=160mm]{f9.ps} \caption{Some bsSSPs on the H$\beta$ versus [MgFe] grid of ssSSPs. The position ``b'' shows the bs-fitted age and metallicity of a bsSSP, and ``s'' shows the ss-fitted values for it. The composite index [MgFe] is calculated by [MgFe] = $\rm {\sqrt{Mgb \cdot (0.72Fe5270 + 0.28Fe5335)}}$ \citep{Thomas:2003}. Dashed and solid lines are for constant metallicity and constant age, respectively. Error bars show the typical observational uncertainties of the indices.} \end{figure} In detail, the ss-fitted stellar ages are less than the bs-fitted ones, by 0 $\sim$ 5\,Gyr. The maximal difference is larger than the typical uncertainty ($<$ 2\,Gyr) in stellar population studies (see Fig. 9). The older the populations, the bigger the difference between the ages fitted via bsSSPs and ssSSPs, although the differences between the Lick indices of old bsSSPs and ssSSPs are smaller (see Section 3). The reason is that the differences among the Lick indices of populations with different ages are much smaller for old populations than for young populations (see Fig. 9 for comparison). Therefore, the ss-fitted ages of galaxies can be much less than the bs-fitted ages, because most galaxies, especially early-type ones, have old (7$\sim$8\,Gyr) populations and their metallicities are not high (peaking near 0.002) \citep{Gallazzi:2005}. However, the ss-fitted metallicities of populations are similar to the bs-fitted values, compared to the typical uncertainties ($\sim$ 0.002). Therefore, if some stars are binaries, smaller ages will be measured by comparing the observational H$\beta$ and [MgFe] indices of galaxies with those of theoretical ssSSPs. This is more significant for metal-poor stellar populations.
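The ss-fitting procedure itself is a straightforward grid search (the grid intervals and least-squares criterion are detailed in the next paragraph). A minimal sketch follows, assuming a precomputed ssSSP grid of (age, metallicity, H$\beta$, [MgFe]) rows; equal weighting of the two index residuals is our assumption, as the text does not specify a weighting.

\begin{verbatim}
import numpy as np

def mgfe(mgb, fe5270, fe5335):
    # Composite index of Thomas et al. (2003):
    # [MgFe] = sqrt(Mgb * (0.72 Fe5270 + 0.28 Fe5335)).
    return np.sqrt(mgb * (0.72 * fe5270 + 0.28 * fe5335))

def ss_fit(hbeta_obs, mgfe_obs, grid):
    # grid: array of shape (n, 4), columns (age, Z, Hbeta, [MgFe]);
    # in the text the grid steps are 0.1 Gyr in age and 0.0001 in Z.
    d2 = (grid[:, 2] - hbeta_obs) ** 2 + (grid[:, 3] - mgfe_obs) ** 2
    best = np.argmin(d2)
    return grid[best, 0], grid[best, 1]  # ss-fitted age and Z
\end{verbatim}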
In our testing populations, on average, the ss-fitted metallicities are 0.0010 poorer than the bs-fitted values, while the ss-fitted ages are younger than the bs-fitted values, by 0.3\,Gyr for all and by 1.8\,Gyr for old ($\geq$ 7\,Gyr) testing populations. In this work, the ss-fitted stellar ages and metallicities of the testing bsSSPs are obtained by finding the best-fitting populations in a grid of theoretical populations with intervals of stellar age and metallicity of 0.1\,Gyr and 0.0001, respectively. A least-squares method is used in the fit. In addition, it is found that the bs-fitted ages of populations can be calculated from the ss-fitted ages and metallicities (with an RMS of 1.45\,Gyr) via the equation \begin{equation} t_{\rm b} = (0.17 + 8.27Z_{\rm s}) + (1.38 - 14.45Z_{\rm s})t_{\rm s}, \end{equation} where $t_{\rm b}$, $Z_{\rm s}$ and $t_{\rm s}$ are the bs-fitted age, ss-fitted metallicity and age, respectively. The relation between bs-fitted ages and ss-fitted stellar-population parameters can be seen in Fig. 10. \begin{figure} \includegraphics[angle=-90,width=160mm]{f10.ps} \caption{The relation between the bs-fitted ages of bsSSPs and their ss-fitted stellar-population parameters, when using H$\beta$ and [MgFe] \citep{Thomas:2003} to measure the ages and metallicities of testing bsSSPs. Star, circle, cross, and square points represent stellar populations with real metallicities of 0.004, 0.01, 0.02, and 0.03, respectively. Solid, dash-dotted, dotted, and dashed lines show the fitted relations between the bs-fitted ages and ss-fitted results of populations, for ss-fitted metallicities of 0.003, 0.009, 0.019, and 0.029, respectively.} \end{figure} We find that ss-fitted ages are usually less than the bs-fitted ages of populations, and the poorer the metallicity, the larger the difference between the bs- and ss-fitted ages. Note that Eq. (2) is not very accurate for metal-poor ($Z$ = 0.004) and old (age $>$ 11\,Gyr) populations. The reason is that the H$\beta$ index increases with age for metal-poor and old populations, while it decreases with age for other populations. \subsection{Photometric method} In the photometric method, we fit the stellar-population parameters via two pairs of colours, i.e., [$(u-R)$, $(R-K)$] and [$(u-r)$, $(r-K)$]. The two pairs have been shown to have the ability to constrain the ages and metallicities of populations and can be used to study the stellar populations of some distant galaxies (see \citealt{Li:2008colourpairs}). The test shows that ss-fitted metallicities are poorer than the bs-fitted metallicities of populations. When taking [$(u-R)$, $(R-K)$] for this work, on average, the ss-fitted metallicities are 0.003 smaller than the bs-fitted values. The difference is 0.0035 when taking the pair [$(u-r)$, $(r-K)$]. The distribution of a few testing bsSSPs in the $(u-R)$ versus $(R-K)$ grid of ssSSPs is shown in Fig. 11. \begin{figure} \includegraphics[angle=-90,width=160mm]{f11.ps} \caption{Some bsSSPs on the $(u-R)$ versus $(R-K)$ grid of ssSSPs. The position ``b'' shows the bs-fitted age and metallicity of a bsSSP, and ``s'' shows the ss-fitted values. Dashed and solid lines are for constant metallicity and constant age, respectively. Error bars give typical observational uncertainties taken from the NED, SDSS and 2MASS surveys.} \end{figure} In particular, it is found that the ss-fitted ages are correlated with the bs-fitted ages of populations, and this correlation is nearly independent of metallicity.
The relation (with an RMS of 0.72\,Gyr) between the ss- and bs-fitted ages of populations can be written as \begin{equation} t_{\rm b} = 0.24 + 0.93t_{\rm s}, \end{equation} where $t_{\rm b}$ and $t_{\rm s}$ are the bs- and ss-fitted ages, respectively. It shows that the bs- and ss-fitted ages are similar. The equation is clearly different from Eq. (2), because colours are usually less sensitive to metallicity than metal-line Lick indices. The relation between the bs- and ss-fitted ages of populations is shown in Fig. 12. \begin{figure} \includegraphics[angle=-90,width=160mm]{f12.ps} \caption{The relation between the bs- and ss-fitted ages and metallicities of populations, when using the $(u-R)$ and $(R-K)$ colours to measure the parameters. Points are for populations with the Salpeter IMF and have the same meanings as in Fig. 10. Solid and dashed lines show the fitted relations between bs- and ss-fitted ages, for populations with Salpeter and Chabrier IMFs, respectively.} \end{figure} The figure shows the approximate relation between the bs- and ss-fitted ages of populations, which is nearly independent of metallicity. The equation is possibly useful for estimating the absolute ages of distant galaxies and star clusters. Note that the relation is presented for populations younger than 14\,Gyr, because the age of the universe is shown to be smaller than about 14\,Gyr \citep{WMAP:2003}. When we use the $(u-r)$ and $(r-K)$ colours to estimate the stellar-population parameters of populations, we find that ss-fitted metallicities are about 0.0035 smaller than the bs-fitted values. The bs- and ss-fitted ages of populations can be approximately transformed by \begin{equation} t_{\rm b} = 0.28 + 0.91t_{\rm s}, \end{equation} where $t_{\rm b}$ and $t_{\rm s}$ are the bs- and ss-fitted ages, respectively. The RMS of the fitted relation is 1.00\,Gyr. The equation is similar to Eq. (3) but with larger scatter. This results from the different metallicity and age sensitivities of the colours used. One can see Fig. 13 for more details about the relation. \begin{figure} \includegraphics[angle=-90,width=160mm]{f13.ps} \caption{Similar to Fig. 12, but for the results obtained via the $(u-r)$ and $(r-K)$ colours.} \end{figure} As a whole, the results obtained by both the Lick-index and photometric methods show that bs-fitted stellar-population parameters increase with the ss-fitted ones. Therefore, using bsSSP models instead of ssSSP models, similar results will be obtained for relative studies of the stellar-population parameters of galaxies. However, if one wants to get the absolute stellar-population parameters of galaxies and star clusters, the effects of binary interactions should be taken into account, especially for metal-poor populations. This can be conveniently done by applying the average metallicity deviations and the relations between the bs-fitted ages and ss-fitted results of populations, which were shown above. \subsection{Results for populations with Chabrier initial mass function} Stellar populations with the Salpeter IMF \citep{Salpeter:1955} were taken as the standard models for this work, but similar results are obtained if populations with other IMFs are taken instead. We test this using populations with the Chabrier IMF \citep{Chabrier:2003}. The result shows that ss-fitted metallicities are on average 0.0011 less than the bs-fitted results, when taking H$\beta$ and [MgFe] for measuring stellar-population parameters.
The bs-fitted ages and ss-fitted stellar-population parameters have a relation of $t_{\rm b} = (-0.06 + 20.63Z_{\rm s}) + (1.46 - 18.76Z_{\rm s})t_{\rm s}$, where $Z_{\rm s}$ is the ss-fitted metallicity, while $t_{\rm b}$ and $t_{\rm s}$ are the bs- and ss-fitted ages, respectively. When we used $(u-R)$ and $(R-K)$ to estimate the stellar-population parameters of populations, ss-fitted metallicities were shown to be 0.0031 smaller than bs-fitted values, and bs-fitted ages can be calculated from ss-fitted results via $t_{\rm b} = 0.46 + 0.89t_{\rm s}$. A similar relation for the results fitted by $(u-r)$ and $(r-K)$ is $t_{\rm b} = 0.41 + 0.88t_{\rm s}$, with a deviation of 0.0039 in metallicity. As a whole, the relations between bs- and ss-fitted results obtained via populations with Salpeter and Chabrier IMFs are similar, compared to the typical uncertainties of stellar-population parameter studies. The comparisons of the results obtained via the two IMFs can be seen in Figs. 12 and 13. \section{Discussions and Conclusions} We investigated the effects of binary interactions on the isochrones, SEDs, Lick indices and colours of simple stellar populations, and then on the determination of light-weighted stellar ages and metallicities. The results show that binary interactions can affect stellar population synthesis studies significantly. In detail, binary interactions make stellar populations less luminous and bluer, while making the H$\beta$ index larger and the metal-line indices smaller compared to ssSSPs. The colour changes (2$\sim$5\%) caused by binary interactions are smaller than the systematic errors (about 6\%) of the $RPS$ model while similar to observational errors (4$\sim$7\%). Note that the systematic error of 6\% does not take the uncertainties due to the free parameters of the stellar evolution code into account (see Sect. 2). The changes (3$\sim$6\%) of Lick indices caused by binary interactions are somewhat smaller than the systematic errors (about 6\%) of the stellar population synthesis model but larger than observational errors (1$\sim$4\%). Therefore, if we measure luminosity-weighted stellar-population parameters (metallicity and age) via bsSSPs instead of ssSSPs, higher (0.0010 on average) metallicities and significantly larger ages will be obtained via a Lick-index method, and significantly higher (about 0.0030) metallicities and similar ages will be obtained via a photometric method. Because simple stellar population models are usually used for studying the populations of early-type galaxies or globular clusters, which possibly have old ($>$ 7\,Gyr) and relatively metal-poor populations, the changes ($\sim$ 1.8\,Gyr in age and 0.0030 in metallicity) caused by binary interactions in stellar ages and metallicities are larger than the typical uncertainties. In particular, we found that the relative results of stellar population studies obtained by ssSSPs and bsSSPs are similar. The bs-fitted stellar-population parameters can be calculated from the ss-fitted ones via the equations presented in this paper. The relations between bs-fitted ages and ss-fitted stellar-population parameters are useful for some special investigations. For example, when studying the age of the universe via the stellar ages of some distant globular clusters, we can estimate the absolute ages of the star clusters using ss-fitted results.
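For convenience, the corrections from ss-fitted to bs-fitted parameters described above can be packaged as follows. This is a sketch using the Salpeter-IMF relations, Eqs. (2)--(4), and the average metallicity offsets quoted in the text, with the Chabrier-IMF variants noted in the comments; the function names are ours.

\begin{verbatim}
def bs_from_ss_lick(t_s, z_s):
    # Lick-index method, Eq. (2); for the Chabrier IMF the text gives
    # t_b = (-0.06 + 20.63 z_s) + (1.46 - 18.76 z_s) * t_s, offset 0.0011.
    t_b = (0.17 + 8.27 * z_s) + (1.38 - 14.45 * z_s) * t_s
    return t_b, z_s + 0.0010

def bs_from_ss_uR_RK(t_s, z_s):
    # Photometric method with (u-R), (R-K), Eq. (3);
    # Chabrier IMF: t_b = 0.46 + 0.89 * t_s, offset 0.0031.
    return 0.24 + 0.93 * t_s, z_s + 0.003

def bs_from_ss_ur_rK(t_s, z_s):
    # Photometric method with (u-r), (r-K), Eq. (4);
    # Chabrier IMF: t_b = 0.41 + 0.88 * t_s, offset 0.0039.
    return 0.28 + 0.91 * t_s, z_s + 0.0035
\end{verbatim}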
Although the results shown in this paper can help us to give some estimates of the absolute stellar ages and metallicities of galaxies, we are still far from obtaining accurate values because of the large uncertainties in stellar population models (see, e.g., \citealt{Yi:2003}). In addition, different stellar population models usually give different absolute results for stellar population studies. Note that the results obtained by the Lick-index method are only slightly affected by $\alpha$-enhancement, according to the work of \citet{Thomas:2003}, but this is not the case for the results obtained by photometric methods. In this work, all bsSSPs contain about 50\% binaries with orbital periods less than 100 yr (the typical value of the Galaxy). If the fraction of binaries (with orbital periods less than 100 yr) in galaxies differs from 50\%, the results shown in the paper will change. The higher the fraction of binaries, the larger the difference between ss- and bs-fitted stellar-population parameters. Thus the results obtained in this paper may not be appropriate for investigating galaxies or star clusters with binary fractions obviously different from 50\%. Furthermore, when building bsSSPs, we assumed that the masses of the two components of a binary are correlated \citep{Li:2007database}, according to previous works. We did not try other distributions for the secondary mass and binary period in this work, because we are limited by our present computing ability. We will address this in future studies. The differences between the Lick indices and colours of bsSSPs and ssSSPs do not evolve smoothly with age. This possibly relates to the method used to calculate the integrated features of stellar populations. In fact, the Monte Carlo method usually leads to some scatter (about 2\%) in the integrated features of populations. The analytic fits and the binary algorithm used by the Hurley code can also lead to some scatter (about 5\%). We investigated the effects of binary interactions via only some simple stellar populations, but the real populations of galaxies and star clusters are usually not so simple. In other words, the populations of galaxies and star clusters seem to be composite stellar populations, including populations with various ages and metallicities (e.g., \citealt{Yi:2005}). It seems that the effects of binary interactions and population-mixing are degenerate. This is a complicated subject, which requires further study. \acknowledgments We greatly acknowledge the anonymous referee for two constructive reports, Profs. Licai Deng, Tinggui Wan, Xu Kong, and Xuefei Chen for useful discussions, and Dr. Richard Simon Pokorny for greatly improving the English. This work is supported by the Chinese National Science Foundation (Grant Nos 10433030, 10521001, 2007CB815406) and the Youth Foundation of Knowledge Innovation Project of The Chinese Academy of Sciences (07ACX51001). \clearpage
\section{Introduction} With the introduction of ever more powerful architectures, neural machine translation (NMT) has become the most promising machine translation method \cite{kalchbrenner2013recurrent,sutskever2014sequence,bahdanau2014neural}. For word representation, different architectures---including, but not limited to, recurrence-based \cite{Chen:2018vf}, convolution-based \cite{DBLP:journals/corr/GehringAGYD17} and transformation-based \cite{DBLP:journals/corr/VaswaniSPUJGKP17} NMT models---have been taking advantage of distributed word embeddings to capture the syntactic and semantic properties of words \cite{Turian:2010vi}. NMT usually utilizes three matrices to represent source embeddings, target input embeddings, and target output embeddings (also known as the pre-softmax weights), respectively. These embeddings occupy most of the model parameters, which constrains further improvements of NMT because recent methods become increasingly memory-hungry \cite{DBLP:journals/corr/VaswaniSPUJGKP17,Chen:2018vf}.\footnote{For the purpose of smoothing gradients, a very large batch size is needed during training.} Even when converting words into sub-word units \cite{DBLP:journals/corr/SennrichHB15}, nearly 55\% of the model parameters are used for word representation in the Transformer model \cite{DBLP:journals/corr/VaswaniSPUJGKP17}. \begin{figure}[t] \centering \subfigure[Standard]{ \scalebox{0.5}{\input{embeddings/standard}} \label{fig:standard}} \hfill \subfigure[Shared-private]{ \scalebox{0.5}{\input{embeddings/lexical}} \label{fig:lexical1}} \caption{Comparison between (a) standard word embeddings and (b) shared-private word embeddings. In (a), the English word ``Long'' and the German word ``Lange'', which have similar lexical meanings, are represented by two private $d$-dimension vectors. In (b), by contrast, the two word embeddings are made up of two parts, indicating the shared (lined nodes) and the private (unlined nodes) features. This enables the two words to make use of common representation units, leading to a closer relationship between them.} \label{fig:shareEmb} \end{figure} \begin{figure*}[t] \centering \subfigure[Similar lexical meaning]{ \scalebox{0.7}{\input{embeddings/lexical}} \label{fig:lexical2}} \hfill \subfigure[Same word form]{ \scalebox{0.7}{\input{embeddings/form}} \label{fig:form}} \hfill \subfigure[Unrelated]{ \scalebox{0.7}{\input{embeddings/unrelated}} \label{fig:unrelated}} \hfill \caption{Shared-private bilingual word embeddings for source and target words or sub-words (a) with similar lexical meaning, (b) with the same word form, and (c) without any relationship. Different sharing mechanisms are adopted for the different relationship categories. This strikes the right balance between capturing monolingual and bilingual characteristics. The closeness of the relationship decides the portion of features to be shared. Words with similar lexical meaning tend to share more features, followed by words with the same word form, and then unrelated words, as illustrated by the lined nodes.} \label{fig:shareEmbAll} \end{figure*} To overcome this difficulty, several methods have been proposed to reduce the parameters used for word representation in NMT. \myshortcite{Press:2017ug} propose two weight tying (WT) methods, called decoder WT and three-way WT, to substantially reduce the parameters of the word embeddings.
Decoder WT ties the target input embedding and target output embedding, and has become the new \emph{de facto} standard in practical NMT \cite{Sennrich:2017uo}. Three-way WT uses only one matrix to represent the three word embeddings, so that source and target words with the same word form share a single word vector. This method can also be adapted to sub-word NMT with a shared source-target sub-word vocabulary, and it performs well on language pairs that share many characters, such as English-German and English-French \cite{DBLP:journals/corr/VaswaniSPUJGKP17}. Unfortunately, this method is not applicable to languages that are written in different alphabets, such as Chinese-English \cite{Hassan:2018vv}. Another challenge facing the source and target word embeddings of NMT is the lack of interaction between them. This degrades the attention performance, leading to unaligned translations that hurt the translation quality. Hence, \myshortcite{Kuang:2017vk} propose to bridge the source and target embeddings, which brings better attention to related source and target words. Their method is applicable to any language pair, providing a tight interaction between source and target word pairs. However, their method requires additional components and model parameters. In this work, we aim to enhance the word representations and the interactions between the source and target words, while using even fewer parameters. To this end, we present a language-independent method, called shared-private bilingual word embeddings, which shares a part of the embeddings of a pair of source and target words that have some common characteristics (i.e., similar words should have similar vectors). Figure~\ref{fig:shareEmb} illustrates the difference between the standard word embeddings and the shared-private word embeddings of NMT. In the proposed method, each source (or target) word is represented by a word embedding that consists of shared features and private features. The shared features can also be regarded as prior alignments connecting the source and target words. The private features allow the words to better learn their monolingual characteristics. Meanwhile, the features shared by the source and target embeddings result in a significant reduction of the number of parameters used for word representation. The experimental results on 6 translation datasets of different scales show that our model, with fewer parameters, yields consistent improvements over the strong Transformer baselines. \section{Approach} In a monolingual vector space, similar words tend to have commonalities in the same dimensions of their word vectors \cite{mikolov2013efficient}. These commonalities include: (1) a similar degree (value) of the same dimension and (2) a similar positive or negative correlation of the same dimension. Many previous works have noticed this phenomenon and have proposed to use shared vectors to represent similar words in a monolingual vector space for model compression \cite{DBLP:journals/corr/LiQYL16,Zhang:2017wt,Li:2017td}. Motivated by these works, in NMT, we assume that source and target words that have similar characteristics should also have similar vectors. Hence, we propose to perform this sharing technique in the bilingual vector space. More precisely, we share the features (dimensions) between the paired source and target embeddings (vectors).
However, in contrast to the previous studies, we also model the private features of the word embeddings, to preserve the language-specific characteristics of words in the source and target languages. Meanwhile, we also propose to adopt different sharing mechanisms among the word pairs, which will be described in the following sections. In the Transformer architecture, the shared features between the source and target embeddings always contribute to the calculation of the attention weight.\footnote{Based on the dot-product attention mechanism, the attention weight between the source and target embeddings is the sum of the dot-products of their features.} This results in stronger attention between pairs of related words. With the help of residual connections, the high-level representations can also benefit from the shared features of the topmost embedding layers. Both qualitative and quantitative analyses show the effectiveness of the method on the translation tasks. \subsection{Shared-Private Bilingual Word Embeddings} Standard NMT jointly learns to translate and align, which has achieved remarkable results \cite{bahdanau2014neural}. In NMT, the intention is to identify the translation relationships between the source and target words. To simplify the model, we propose to divide the relationships between a pair of source and target words into three main categories: (1) words with similar lexical meaning (abbreviated as $\mathrm{lm}$), (2) words with the same word form (abbreviated as $\mathrm{wf}$), and (3) unrelated words (abbreviated as $\mathrm{ur}$). Figure~\ref{fig:shareEmbAll} shows some examples of these different relationship categories. The number of shared features of the word embeddings is decided by their relationship. Before presenting the pairing process in detail, we first introduce the constraints of the proposed method for convenience: \begin{itemize} \item Each source word is only allowed to share features with a single target word, and vice versa.\footnote{We investigate the effect of synonyms in the experiment section.} \item Each source word preferentially shares features with the target word that has similar lexical meaning, followed by the word with the same word form, and then unrelated words. \end{itemize} \subsubsection{Words with Similar Lexical Meaning} As shown in Figure~\ref{fig:lexical2}, the English word ``Long'' and the German word ``Lange'', which have similar meanings, tend to share more common features of their embeddings. In our model, source and target words with alignment links are regarded as parallel words that are translations of each other. In order of word frequency, each source word $x$ is paired with the aligned target word $\hat{y}$ that has the highest alignment probability among the candidates, computed as follows: \begin{eqnarray} \hat{y} = \mathop{\arg\max}_{y \in a(x)}\mathrm{log}A(y|x) \label{equ:align} \end{eqnarray} where $a(\cdot)$ denotes the set of aligned candidates. It is worth noting that target words that have already been paired with source words cannot be used as candidates. $A(\cdot|\cdot)$ denotes the alignment probability. These probabilities can be obtained by either the intrinsic attention mechanism \cite{bahdanau2014neural} or an unsupervised word aligner \cite{Dyer:2013wv}. \subsubsection{Words with Same Word Form} As shown in Figure~\ref{fig:form}, the sub-word ``Ju@@'' exists simultaneously in English and German sentences.
This kind of word tends to share a medium number of features of the word embeddings. Most of the time, source and target words with the same word form also share similar lexical meanings. This category of words generally includes Arabic numbers, punctuation, named entities, cognates, and loanwords. However, there are some bilingual homographs, where the words in the source and target languages look the same but have completely different meanings. For example, the German word ``Gift'' means ``Poison'' in English. That is the reason we propose to first pair the words with similar lexical meaning, rather than those with the same word form. This may be a limitation of the three-way WT method \cite{Press:2017ug}, in which words with the same word form indiscriminately share the same word embedding. \begin{figure*}[t] \centering \scalebox{0.65}{\input{visual}} \caption{An example of assembling the source word embedding matrix. The words in parentheses denote the paired words sharing features with them.} \label{fig:preemb} \end{figure*} \subsubsection{Unrelated Words} We regard source and target words that cannot be paired with each other as unrelated words. Figure~\ref{fig:unrelated} shows an example of a pair of unrelated words. This category is mainly composed of low-frequency words, such as misspelled words, special characters, and foreign words. In standard NMT, the embeddings of low-frequency words are usually inadequately trained, resulting in poor word representations. These words are often treated as noise and are generally ignored by NMT systems \cite{Feng:vt}. Motivated by the frequency clustering method proposed by \citet{chen2015strategies}, who cluster words with similar frequencies for training a hierarchical language model, in this work we propose to use a small vector to model the possible features that might be shared between source and target words which are unrelated but have similar word frequencies. In addition, this can be regarded as a way to improve the robustness of learning the embeddings of low-frequency words because of the noisy dimensions \cite{Wang:2018vb}. \begin{table*}[ht] \small \centering \scalebox{1.0}{ \begin{tabularx}{\textwidth}{l|lrrr|l|lllll} \hline Architecture &\textcolor{white}{ }Zh$\Rightarrow$En &Params &Emb.& Red. &Dev. & MT02 & MT03 & MT04 & MT08 &All\\ \hline \hline SMT* &- &- &- &- &34.00 &35.81 &34.70 &37.15 &25.28 &33.39 \\ \hline \multirow{4}{*}{RNNsearch*} &Vanilla &74.8M &55.8M &0\% &35.92 &37.88 &36.21 &38.83 &26.30 &34.81 \\ &Source bridging &78.5M &55.8M &0\% &36.79 &38.71 &37.24 &40.28 &27.40 &35.91 \\ &Target bridging &76.6M &55.8M &0\% &36.69 &39.04 &37.63 &40.41 &27.98 &36.27 \\ &Direct bridging &78.9M &55.8M &0\% &36.97 &39.77 &38.02 &40.83 &27.85 &36.62 \\ \hline \multirow{4}{*}{Transformer} &Vanilla &90.2M &46.1M &0\% &41.37 &42.53 &40.25 &43.58 &32.89 &40.33 \\ &Direct bridging &90.5M &46.1M &0\% &41.67 &42.89 &41.34 &43.56 &32.69 &40.54 \\ &Decoder WT &74.9M &30.7M &33.4\% &41.90 &43.02 &41.89 &43.87 &32.62 &40.82 \\ &\emph{Shared-private} &62.8M &18.7M &59.4\% &42.57$^\uparrow$ &43.73$^\uparrow$ &41.99$^\uparrow$ &44.53$^\uparrow$ &33.81$^\Uparrow$ &41.61$^\Uparrow$ \\ \hline \end{tabularx}} \caption{Results on the NIST Chinese-English translation task. ``Params'' denotes the number of model parameters. ``Emb.'' represents the number of parameters used for word representation. ``Red.'' represents the reduction rate relative to the standard size.
The results of SMT* and RNNsearch* are reported by \myshortcite{Kuang:2017vk} with the same datasets and vocabulary settings. ``$\uparrow$'' indicates the result is significantly better than that of the vanilla Transformer ($p < 0.01$), while ``$\Uparrow$'' indicates the result is significantly better than that of all other Transformer models ($p < 0.01$). All significance tests are measured by paired bootstrap resampling~\cite{koehn2004statistical}.} \label{tab:zhenresults} \end{table*} \begin{table}[ht] \small \centering \scalebox{1.0}{ \begin{tabularx}{0.47\textwidth}{@{\extracolsep{\fill}}l|l|lr|l} \hline En$\Rightarrow$De &Params &Emb.& Red. &BLEU\\ \hline \hline Vanilla &98.7M &54.5M &0\% &27.62 \\ Direct bridging &98.9M &54.5M &0\% &27.79 \\ Decoder WT &80.4M &36.2M &33.6\% &27.51 \\ Three-way WT &63.1M &18.9M &65.3\% &27.39 \\ \hline \hline \emph{Shared-private} &65.0M &20.9M &63.1\% &28.06$^\ddagger$ \\ \hline \end{tabularx}} \caption{Results on the WMT English-German translation task. ``$\ddagger$'' indicates the result is significantly better than the vanilla Transformer model ($p < 0.05$).} \label{tab:enderesults} \end{table} \subsection{Implementation} Before the embedding lookup at each training step, the source and target embedding matrices are assembled from the sub-embedding matrices. As shown in Figure~\ref{fig:preemb}, the source embedding matrix $\mathbf{E}^x \in {\mathbb{R}^{|V| \times d}}$ is computed as follows: \begin{eqnarray} \mathbf{E}^x = \mathbf{E}^x_{\mathrm{\mathrm{lm}}} \oplus \mathbf{E}^x_{\mathrm{\mathrm{wf}}} \oplus \mathbf{E}^x_{\mathrm{\mathrm{ur}}} \end{eqnarray} where $\oplus$ is the row concatenation operator. $\mathbf{E}^x_{(\cdot)} \in {\mathbb{R}^{|V_{(\cdot)}| \times d}}$ represents the word embeddings of the source words belonging to the different categories, e.g., $\mathrm{lm}$ represents the words with similar lexical meaning. $|V_{(\cdot)}|$ denotes the vocabulary size of the corresponding category. The process of feature sharing is also implemented by matrix concatenation. For example, the embedding matrix of the source words with similar lexical meaning is computed as follows: \begin{eqnarray} \mathbf{E}^x_{\mathrm{lm}} = \mathbf{S}_{\mathrm{lm}} \tilde{\oplus} \mathbf{P}^x_{\mathrm{lm}} \end{eqnarray} where $\tilde{\oplus}$ is the column concatenation operator. $\mathbf{S}_{\mathrm{lm}} \in {\mathbb{R}^ { |V_{\mathrm{lm}}| \times \lambda_{\mathrm{lm}}d}}$ represents the word embeddings of the shared features, where $\lambda_{\mathrm{lm}}$ denotes the proportion of features shared in this relationship category. $\mathbf{P}^{x}_{\mathrm{lm}} \in {\mathbb{R}^{|V_{\mathrm{lm}}| \times (1-\lambda_{\mathrm{lm}})d }}$ represents the word embeddings of the private features. The target embedding matrix is assembled similarly. These matrix concatenation operations have low computational complexity and add negligible cost to the whole NMT computation. We also empirically find that neither the training speed nor the decoding speed is influenced by the introduction of the proposed method. \section{Experiments} We carry out our experiments on the small-scale IWSLT'17 \{Arabic (Ar), Japanese (Ja), Korean (Ko), Chinese (Zh)\}-to-English (En) translation tasks, the medium-scale NIST Chinese-English (Zh-En) translation task, and the large-scale WMT'14 English-German (En-De) translation task.
For the IWSLT \{Ar, Ja, Ko, Zh\}-to-En translation tasks, there are respectively 236K, 234K, 227K, and 235K sentence pairs in each training set.\footnote{\url{https://wit3.fbk.eu/mt.php?release=2017-01-trnted}} The validation set is IWSLT17.TED.tst2014 and the test set is IWSLT17.TED.tst2015. For each language, we learn a BPE model with 16K merge operations \cite{DBLP:journals/corr/SennrichHB15}. For the NIST Zh-En translation task, the training corpus consists of 1.25M sentence pairs with 27.9M Chinese words and 34.5M English words. We use the NIST MT06 dataset as the validation set, and the test sets are the NIST MT02, MT03, MT04, MT05, and MT08 datasets. To compare with recent works, the vocabulary size is limited to 30K for both languages, covering 97.7\% of Chinese words and 99.3\% of English words, respectively. For the WMT En-De translation task, the training set contains 4.5M sentence pairs with 107M English words and 113M German words. We use newstest13 and newstest14 as the validation set and test set, respectively. The joint BPE model uses 32K merge operations. \subsection{Setup} We implement all of the methods based on the Transformer \cite{DBLP:journals/corr/VaswaniSPUJGKP17} using the \emph{base} setting with the open-source toolkit \emph{thumt}\footnote{\url{https://github.com/thumt/THUMT}} \cite{Zhang:2017vy}. There are six encoder and decoder layers in our models, and each layer employs eight parallel attention heads. The dimension of the word embedding and the high-level representation $d_{\mathrm{model}}$ is 512, while that of the inner-FFN layer $d_{\mathrm{ff}}$ is 2048. \begin{table}[t] \small \centering \scalebox{1.0}{ \begin{tabularx}{0.47\textwidth}{@{\extracolsep{\fill}}l|l|lr|l} \hline & Model &Emb.& Red. &BLEU\\ \hline \hline \multirow{2}{*}{Ar$\Rightarrow$ En} &Vanilla &23.6M &0\% &28.36\\ &\emph{Shared-private} &11.8M &50\% &29.71$^\uparrow$\\ \hline \multirow{2}{*}{Ja$\Rightarrow$ En} &Vanilla &25.6M &0\% &10.94\\ &\emph{Shared-private} &13.3M &48.0\% &12.35$^\uparrow$\\ \hline \multirow{2}{*}{Ko$\Rightarrow$ En} &Vanilla &25.1M &0\% &16.48\\ &\emph{Shared-private} &13.2M &47.4\% &17.84$^\uparrow$\\ \hline \multirow{2}{*}{Zh$\Rightarrow$ En} &Vanilla &27.4M &0\% &19.36\\ &\emph{Shared-private} &13.8M &49.6\% &21.00$^\uparrow$\\ \hline \end{tabularx}} \caption{Results on the IWSLT \{Ar, Ja, Ko, Zh\}-to-En translation tasks. These distant language pairs belong to 5 different language families and are written in 5 different alphabets. ``$\uparrow$'' indicates the result is significantly better than that of the vanilla Transformer ($p < 0.01$).} \label{tab:iwsltresults} \end{table} The Adam \cite{kingma2014adam} optimizer is used to update the model parameters with hyper-parameters $\beta_{1}$= 0.9, $\beta_{2}$ = 0.98, $\varepsilon$ = $10^{-8}$, and a warm-up strategy with $warmup\_steps = 4000$ is adopted for the variable learning rate \cite{DBLP:journals/corr/VaswaniSPUJGKP17}. The dropout used in the residual connections, attention mechanism, and feed-forward layers is set to 0.1. We employ uniform label smoothing with 0.1 uncertainty. During training, each training batch contains nearly 25K source and target tokens. We evaluate the models every 2000 batches via the tokenized BLEU \cite{papineni2002bleu} for early stopping. During testing, we use the best single model for decoding with a beam size of 4. The length penalty is tuned on the validation set: 0.6 for the English-German translation task and 1.0 for the others.
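The word pairing used in all of these experiments follows Sect. 2; for clarity, it can be sketched as a frequency-ordered greedy match over alignment probabilities (the aligner and the probability threshold of 0.05 are given below). Function and variable names in this sketch are ours, and the words left unpaired here fall through to the word-form and unrelated stages.

\begin{verbatim}
def pair_words(src_vocab, align_prob, threshold=0.05):
    # src_vocab: source words sorted by descending frequency.
    # align_prob: {source word: {target word: probability}},
    # e.g. estimated with fast-align. Each target word is used once.
    used_targets = set()
    pairs = {}
    for x in src_vocab:
        best, best_p = None, threshold
        for y, p in align_prob.get(x, {}).items():
            if y not in used_targets and p >= best_p:
                best, best_p = y, p
        if best is not None:
            pairs[x] = best  # "similar lexical meaning" pair
            used_targets.add(best)
    return pairs
\end{verbatim}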
We compare our proposed methods with the following related works: \begin{itemize} \item \textbf{Direct bridging} \cite{Kuang:2017vk}: this method minimizes the word embedding loss between the transformations of the target words and their aligned source words by adding an auxiliary objective function. \item \textbf{Decoder WT} \cite{Press:2017ug}: this method uses a single embedding matrix to represent both the target input embedding and the target output embedding. \item \textbf{Three-way WT} \cite{Press:2017ug}: this method is an extension of the decoder WT method in which the source embedding and the two target embeddings are represented by one embedding matrix. This method cannot be applied to language pairs with different alphabets, e.g. Zh-En. \end{itemize} \begin{table}[t] \small \centering \scalebox{1.0}{ \begin{tabular}{l|rrr|l|l} \hline Zh-En &$\lambda_\mathrm{lm}$ &$\lambda_\mathrm{wf}$ &$\lambda_\mathrm{ur}$ &Emb. &BLEU\\ \hline \hline Vanilla &- &- &- &46.1M &41.37 \\ Decoder WT &0 &0 &0 &30.7M &41.90 \\ \hline\hline \multirow{5}{*}{\emph{Shared-private}} &0.5 &0.7 &0.9 &21.2M &41.98 \\ &0.5 &0.5 &0.5 &23.0M &42.26 \\ &0.9 &0.7 &0 &21.0M &42.27 \\ &1 &1 &1 &15.3M &42.36 \\ &0.9 &0.7 &0.5 &18.7M &42.57 \\ \hline \end{tabular}} \caption{Performance of models using different sharing coefficients on the validation set of the NIST Chinese-English translation task.} \label{tab:sharecoeff} \end{table} For the proposed model, we use an unsupervised word aligner, \emph{fast-align}\footnote{\url{https://github.com/clab/fast_align}} \cite{Dyer:2013wv}, to pair source and target words that have similar lexical meaning. We set the threshold of the alignment probability to 0.05, i.e. only words with an alignment probability over 0.05 are paired as words having similar lexical meaning. The sharing coefficient $\bm{\lambda} = (\lambda_{\mathrm{lm}},\lambda_{\mathrm{wf}},\lambda_{\mathrm{ur}})$ is set to (0.9,0.7,0.5), which is tuned on both the NIST Chinese-English task and the WMT English-German task. \subsection{Main Results} Table~\ref{tab:zhenresults} reports the results on the NIST Chinese-English test sets. It is observed that the Transformer models significantly outperform the SMT and RNNsearch models; we therefore implement all of our experiments on the Transformer architecture. The direct bridging model further improves the translation quality of the Transformer baseline. The decoder WT model improves the translation quality while reducing the number of parameters for the word representation. This improvement happens because there are fewer model parameters, which prevents over-fitting \cite{Press:2017ug}. Finally, the performance is further improved by the proposed method while using even fewer parameters than the other models. Similar observations are made on the English-German translation task, as shown in Table~\ref{tab:enderesults}. The improvement of the direct bridging model is reduced with the introduction of sub-word units, since the attention distribution over the high-level representations becomes more diffuse. Although the two WT methods use fewer parameters, their translation quality degrades. We believe that sub-word NMT needs well-trained embeddings to distinguish the homographs of sub-words. In the proposed method, both the source and target embeddings benefit from the shared features, which leads to better word representations. Hence, it improves the quality of translation and also reduces the number of parameters.
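As a sanity check on the ``Emb.'' column of Table~\ref{tab:sharecoeff}, the embedding sizes can be recomputed from the category sizes reported in Table~\ref{tab:aligneffect}: assuming each paired word stores its $\lambda d$ shared features once plus $(1-\lambda)d$ private features on each side, with tied target input and output embeddings, each word costs $(2-\lambda)d$ parameters. The sketch below reproduces the reported 18.7M:

\begin{verbatim}
d = 512
sizes = {"lm": 21172, "wf": 11, "ur": 8817}  # category sizes at threshold 0.05
lam   = {"lm": 0.9,   "wf": 0.7, "ur": 0.5}  # sharing coefficients

params = sum(sizes[c] * (2 - lam[c]) * d for c in sizes)
print(f"{params / 1e6:.1f}M")                # -> 18.7M
\end{verbatim}

The same accounting recovers the other rows of Table~\ref{tab:sharecoeff}, e.g. 15.3M for $\bm{\lambda}=(1,1,1)$ and 23.0M for $\bm{\lambda}=(0.5,0.5,0.5)$.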
Table~\ref{tab:iwsltresults} shows the results on the small-scale IWSLT translation tasks. We observe that the proposed method is consistently better than the vanilla model on these distant language pairs. Although the Three-way WT method has been extensively validated on similar language pairs in low-resource settings \citep{DBLP:journals/corr/SennrichHB16}, it is not applicable to these distant language pairs. In contrast, the proposed method is language-independent, making the WT methods more widely applicable. \begin{table}[t] \small \centering \scalebox{1.0}{ \begin{tabular}{l|rrr|l|l} \hline $A(\cdot|\cdot)$ &Lexical &Form &Unrelated &Emb. &BLEU \\ \hline \hline 0.5 &4,869 &309 &24,822 &22.0M &42.35 \\ 0.1 &15,103 &23 &14,874 &20.0M &42.53 \\ 0.05 &21,172 &11 &8,817 &18.7M &42.57 \\ \hline \end{tabular}} \caption{Effect of different alignment thresholds used for pairing words with similar lexical meaning, on the validation set of the NIST Chinese-English translation task.} \label{tab:aligneffect} \end{table} \begin{table*}[t] \small \centering \begin{tabular}{p{0.3cm}|p{2cm}|p{12.2cm}} \hline \multirow{6}{*}{\emph{1}}& Source & mengmai xingzheng zhangguan bazhake biaoshi , dan shi gaishi jiu you shisan \textcolor{red}{\textbf{sangsheng}} .\\ &Reference & mumbai municipal commissioner phatak claimed that 13 people were \textcolor{red}{\textbf{killed}} in the city alone .\\\cline{2-3} &Vanilla & bombay chief executive said that there were only 13 deaths in the city alone . \\ &Direct bridging & bombay 's chief executive , said there were 13 dead in the city alone . \\ &Decoder WT & chief executive of bombay , said that thirteen people had died in the city alone . \\ &\emph{Shared-private} & mumbai 's chief executive said 13 people were \textcolor{red}{\textbf{killed}} in the city alone . \\ \hline\hline \multirow{6}{*}{\emph{2}} &Source & suoyi wo \textcolor{red}{\textbf{ye}} you liyou qu xiangxin ta de rensheng \textcolor{blue}{\textbf{ye}} hen jingcai .\\ &Reference & thus , i \textcolor{red}{\textbf{also}} have reason to believe that her life is \textcolor{blue}{\textbf{also}} very wonderful . \\\cline{2-3} &Vanilla & so i have reason to believe her life is \textcolor{blue}{\textbf{also}} very fantastic .\\ &Direct bridging & so i had reason to believe her life was \textcolor{blue}{\textbf{also}} brilliant . \\ &Decoder WT & so , i have reasons to believe that she has a wonderful life . \\ &\emph{Shared-private} & so i \textcolor{red}{\textbf{also}} have reason to believe that her life is \textcolor{blue}{\textbf{also}} wonderful . \\ \hline \end{tabular} \caption{Translation examples on the MT08 test set. The first and second examples show the accuracy and adequacy of the proposed method, respectively. The \textbf{bold} words in each example are paired and will be discussed in the text.} \label{tab:translation} \end{table*} \subsection{Effect of Sharing Coefficients} The coefficient $\bm{\lambda}=(\lambda_{\mathrm{lm}},\lambda_{\mathrm{wf}},\lambda_{\mathrm{ur}})$ controls the proportion of the shared features. As shown in Table~\ref{tab:sharecoeff}, the decoder WT model can be seen as a kind of shared-private method where \emph{zero} features are shared between the source and target word embeddings. For the proposed method, $\bm{\lambda}=(0.5,0.5,0.5)$ and $\bm{\lambda}=(1,1,1)$ are, respectively, used for sharing half and all of the features between the embeddings of all categories of words.
This allows the model to significantly reduce the number of parameters and also to improve the translation quality. For comparison purposes, we also consider sharing a large part of the features among the unrelated words by setting $\lambda_{\mathrm{ur}}$ to $0.9$, i.e. $\bm{\lambda}=(0.5,0.7,0.9)$. We argue that it is hard for the model to learn an appropriate bilingual vector space in such a sharing setting. Finally, we propose to share more features between the more similar words by using $\lambda_{\mathrm{lm}}=0.9$ and to reduce the sharing among the unrelated words, which gives $\bm{\lambda}=(0.9,0.7,0.5)$. This strikes the right balance between the translation quality and the number of model parameters. To investigate whether features should be shared between unrelated words at all, we further conduct an experiment with the setting $\bm{\lambda}=(0.9,0.7,0)$. The result confirms our assumption that sharing a small number of features between unrelated words with similar word frequency achieves better model performance. \begin{figure}[t]% \centering \subfigure[Vanilla]{% \label{fig:ldrfirst}% \includegraphics[height=1.5in]{attentions/14notied}}% \hfill \subfigure[Shared-private]{% \label{fig:ldrthird}% \includegraphics[height=1.5in]{attentions/14difftied}}% \caption{Long-distance reordering illustrated by the attention maps. The attention weights learned by the proposed shared-private model are more concentrated than those of the vanilla model.} \label{fig:ldr}% \end{figure} \subsection{Effect of Alignment Quality} Table~\ref{tab:aligneffect} shows the performance for different word alignment thresholds. In the first row, we only pair words whose alignment probability $A(y|x)$ is above the threshold of 0.5 (see Equation~\ref{equ:align} for more details). Under this circumstance, 4,869 words are categorized as parallel words that have similar lexical meaning. Based on these observations, we find that the alignment quality is not a key factor affecting the model performance. In contrast, pairing as many similar words as possible helps the model to better learn the bilingual vector space, which improves the translation performance. The following qualitative analyses also support these observations. \begin{figure}[t]% \centering \subfigure[Vanilla]{% \label{fig:wofirst}% \includegraphics[height=1.4in]{attentions/1082notied}}% \hfill \subfigure[Shared-private]{% \label{fig:wothird}% \includegraphics[height=1.4in]{attentions/1082difftied}}% \caption{Word omission problem illustrated by the attention maps. In the vanilla model, the third source word ``ye'' is not translated, while our shared-private model adequately translates it to give a better translation result.} \label{fig:wo}% \end{figure} \begin{figure*}[t] \centering \subfigure[Vanilla]{ \scalebox{0.52}{\input{PCA/vanillaVS}} \label{fig:vanillaWB}} \hfill \subfigure[Shared-private (global)]{ \scalebox{0.52}{\input{PCA/ourVS}} \label{fig:ourWB}} \hfill \subfigure[Shared-private (local)]{ \scalebox{0.52}{\input{PCA/ourVSsamll}} \label{fig:ourWBsmall}} \caption{Visualization of the 2-dimensional PCA projection of the bilingual word embeddings of the two models. The \emph{\textcolor{blue}{blue}} words represent the Chinese embeddings while the \textcolor{red}{red} words represent the English embeddings. In (a), only the similar monolingual words are clustered together,
while in (b) and (c), both the monolingual and bilingual words with similar meanings are gathered together.} \label{fig:WBall} \end{figure*} \subsection{Analysis of the Translation Results} Table~\ref{tab:translation} shows two translation examples from the NIST Chinese-English translation task. To better understand the translations produced by the models, we use layer-wise relevance propagation (LRP) \cite{Ding:2017ep} to produce the attention maps of the selected translations, as shown in Figures~\ref{fig:ldr} and~\ref{fig:wo}. In the first example, the Chinese word ``sangsheng'' is a low-frequency word and its ground truth is ``killed''. It is observed that the inadequate representation of ``sangsheng'' leads to a decline in the translation quality of the vanilla, direct bridging, and decoder WT methods. In our proposed method, a part of the embedding of ``sangsheng'' is shared with that of ``killed''. These improved source representations help the model to generate better translations. Furthermore, as shown in Figure~\ref{fig:ldr}, we observe that the proposed method has better long-distance reordering ability than the vanilla model. We attribute this improvement to the shared features, which provide alignment guidance for the attention mechanism. The second example shows that our proposed model is able to improve the adequacy of translation, as illustrated in Figure~\ref{fig:wo}. The Chinese word ``ye'' (also) appears twice in the source sentence, while only the proposed method adequately translates both of them to the target word ``also''. This once again indicates that the shared embeddings of the paired words ``ye'' and ``also'' provide the attention model with a strong interaction between the words, leading to a more concentrated attention distribution and effectively alleviating the word omission problem. \subsection{Analysis of the Learned Embeddings} The proposed method has a limitation in that each word can only be paired with one corresponding word. However, synonymy is quite a common phenomenon in natural language processing tasks. Qualitatively, we use principal component analysis (PCA) to visualize the learned embeddings of the vanilla model and the proposed method, as shown in Figure~\ref{fig:WBall}. In the vanilla model, as shown in Figure~\ref{fig:vanillaWB}, only the similar monolingual embeddings are clustered, such as the English words ``died'' and ``killed'', and the Chinese words ``zhuxi'' (president) and ``zongtong'' (president). However, in the proposed method, the similar source and target words tend to cluster together no matter whether they are paired or not, as shown in Figures~\ref{fig:ourWB} and~\ref{fig:ourWBsmall}. In other words, the proposed method is able to handle the challenge of synonyms. For example, both the Chinese words ``ye'' (paired with ``also'') and ``bing'' can be correctly translated to ``also'', and these three words tend to gather together in the vector space. The same holds for the Chinese word ``sangsheng'' (paired with ``killed'') and the English words ``died'' and ``killed''. Figure~\ref{fig:ourWBsmall} shows that the representations of the Chinese and English words related to ``president'' are very close.
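To make the pairing step concrete, the following sketch keeps, for each source word, the single best-scoring target word from a lexical translation table $A(y|x)$ (such as the one learned by \emph{fast-align}); the table entries here are invented for illustration, and the one-best restriction is exactly the limitation discussed above.

\begin{verbatim}
def pair_words(align_prob, threshold=0.05):
    # align_prob: dict mapping (src, tgt) -> A(tgt | src)
    best = {}
    for (src, tgt), p in align_prob.items():
        if p >= threshold and p > best.get(src, (None, 0.0))[1]:
            best[src] = (tgt, p)
    return {src: tgt for src, (tgt, _) in best.items()}

table = {("ye", "also"): 0.62, ("ye", "too"): 0.21,
         ("sangsheng", "killed"): 0.07, ("bing", "also"): 0.30}
print(pair_words(table))  # each source word keeps at most one target word
\end{verbatim}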
\section{Related Work} Many previous works focus on improving the word representations of NMT by capturing fine-grained (character) or coarse-grained (sub-word) \emph{monolingual} characteristics, such as character-based NMT \cite{DBLP:journals/corr/Costa-JussaF16,ling2015character,DBLP:journals/corr/ChoMGBSB14,chen2015strategies}, sub-word NMT \cite{DBLP:journals/corr/SennrichHB15,johnson2017google,Ataman:2018wl}, and hybrid NMT \cite{luong2016achieving}. They effectively consider and utilize morphological information to enhance the word representations. Our work aims to enhance the word representations through \emph{bilingual} features that are cooperatively learned by the source and target words. Recently, \myshortcite{Gu:2018vd} propose to use pre-trained target (English) embeddings as a universal representation to improve the representation learning of the source (low-resource) languages. In our work, both the source and target embeddings can make use of the common representation unit, i.e. the source and target embeddings help each other to learn better representations. Previously proposed methods have shown the effectiveness of integrating prior word alignments into the attention mechanism \cite{Mi:2016wd,Liu:2016ta,Cheng:2016uy,Feng:vt}, leading to more accurate and adequate translation results with the assistance of prior guidance. We provide an alternative that integrates the prior alignments through the sharing of features, which can also lead to a reduction of model parameters. \myshortcite{Kuang:2017vk} propose to shorten the path length between the related source and target embeddings to enhance the embedding layer. We believe that the shared features can be seen as a \emph{\textbf{zero}} distance between the paired word embeddings. Our proposed method also borrows several ideas from the three-way WT method~\citep{Press:2017ug}. Both methods are easy to implement and transparent to different NMT architectures. The main differences are: 1) we share a part of the features instead of all features; 2) words of different relationship categories are allowed to share differently sized sets of features; and 3) our method is applicable to any language pair, making the WT methods more widely usable. \section{Conclusion} In this work, we propose a novel sharing technique to improve the learning of word embeddings for NMT. Each word embedding is composed of shared and private features. The shared features act as a prior alignment guidance for the attention model to improve the quality of attention, while the private features enable the words to better capture the monolingual characteristics, resulting in an improvement of the overall translation quality. According to the degree of relevance within a parallel word pair, the word pairs are categorized into three different groups, each sharing a different number of features. Our experimental results show that the proposed method outperforms the strong Transformer baselines while using fewer model parameters. \section*{Acknowledgements} This work is supported in part by the National Natural Science Foundation of China (Nos. 61672555, 61876035, 61732005), the Joint Project of the Macao Science and Technology Development Fund and the National Natural Science Foundation of China (No. 045/2017/AFJ), and the Multi-Year Research Grant from the University of Macau (No. MYRG2017-00087-FST). Yang Liu is supported by the National Key R\&D Program of China (No. 2017YFB0202204), the National Natural Science Foundation of China (No. 61761166008, No.
61432013), Beijing Advanced Innovation Center for Language Resources (No. TYR17002).
\section{Introduction} The purpose of this article is to present a self-contained survey of several optimal isosystolic inequalities for the two-dimensional torus, and to prove a new one. We will consider length metric structures arising from infinitesimal convex structures, namely Finsler (reversible or not) metrics. This class includes Riemannian metrics as a special case. For such length metric structures, there exist various notions of area, but we choose to focus on the following two central notions of Finsler area: the Busemann-Hausdorff area and the Holmes-Thompson area. These two notions are particularly relevant for isosystolic inequalities: the Busemann-Hausdorff area generalizes the notion of Hausdorff measure, and is natural from the metric point of view, while the Holmes-Thompson area is a dynamical invariant of the geodesic flow, and is natural from the dynamical point of view. \subsection{Finsler metrics} A {\it (continuous) Finsler metric} on the $2$-torus $\mathbb{T}^2$ is a continuous function $F:T\mathbb{T}^2\to \mathbb{R}_+$ such that the restriction $F(x,\cdot)$ to each tangent space $T_x\mathbb{T}^2$ is a norm that we denote by $\|\cdot\|^F_x$. Let us emphasize that we do not require the norm to be symmetric, but only to be positive outside the origin, convex and positively homogeneous. In particular, the subset of vectors $v$ in $T_x\mathbb{T}^2$ satisfying $\|v\|^F_x\leq 1$ is a convex body $K_x\subset T_x\mathbb{T}^2$ containing the origin in its interior. Therefore, a Finsler metric amounts to a collection $\{K_x\}_{x \in \mathbb{T}^2}$ of convex bodies that depends continuously on the point $x$. If each one of these convex bodies is symmetric, the metric is said to be {\it reversible}. If each one is an ellipsoid that smoothly depends on the point, the metric is said to be {\it Riemannian}, and in particular is reversible. Denote by $\pi : \mathbb{R}^2 \to \mathbb{T}^2$ the universal covering map obtained by identifying $\mathbb{T}^2$ with the quotient space $\mathbb{R}^2/\mathbb{Z}^2$. A Finsler metric $F$ on $\mathbb{T}^2$ induces a $\mathbb{Z}^2$-periodic Finsler metric $\tilde{F}$ on $\mathbb{R}^2$ through the formula $\tilde{F}(\tilde{x},\tilde{v})=F(\pi(\tilde{x}),T\pi_{\tilde{x}}(\tilde{v}))$. Using the canonical identification $T_{\tilde{x}}\mathbb{R}^2\simeq \mathbb{R}^2$, the associated collection of convex bodies $\widetilde{K}_{\tilde{x}}\subset T_{\tilde{x}}\mathbb{R}^2$ thus defines a continuous $\mathbb{Z}^2$-periodic map \begin{eqnarray*} \widetilde{K}: \mathbb{R}^2 &\to &\mathcal{K}_0(\mathbb{R}^2)\\ \tilde{x}&\mapsto&\widetilde{K}_{\tilde{x}}:=T\pi^{-1}_{\tilde{x}}(K_{\pi(\tilde{x})}) \end{eqnarray*} where $\mathcal{K}_0(\mathbb{R}^2)$ denotes the space of convex bodies in $\mathbb{R}^2$ containing the origin in their interior, endowed with the Hausdorff topology. If the above map is constant, we will say that the Finsler metric on $\mathbb{T}^2$ is {\it flat}. For flat metrics, we will denote both $\widetilde{K}_{\tilde{x}}$ and $K_x$ simply by $K$. \subsection{Systole and Finsler areas} Given a Finsler metric $F$ on a manifold $M$ (in our case $M=\mathbb{T}^2$ or $\mathbb{R}^2$), the length of a piecewise smooth curve $\gamma : [a,b] \to M$ is defined using the formula $$ \ell_F(\gamma)=\int_a^b \|\dot{\gamma}(t)\|^F_{\gamma(t)}dt. $$ This length functional gives rise to a Finsler distance $d_F$ on $M$ obtained by minimizing the length of curves connecting two given points. This Finsler distance may not be symmetric if the metric is not reversible.
However, when $(M,d_F)$ is complete, $(M,F)$ is geodesically complete. We now present the first ingredient for isosystolic inequalities, namely the systole. \begin{definition} Given a Finsler metric $F$ on $\mathbb{T}^2$, the {\it systole} is defined as the quantity $$ \sys(\mathbb{T}^2,F)=\inf \{\ell_F(\gamma) \mid \gamma \, \, \text{non-contractible closed curve in} \, \, \mathbb{T}^2\}. $$ \end{definition} It is easy to see that the systole can be read off on the universal cover of the two-torus using the formula $ \sys(\mathbb{T}^2,F)=\min \{d_{\tilde{F}}(\tilde{x},\tilde{x}+z) \mid \tilde{x} \in [0,1]^2 \, \, \text{and} \, \, z \in \mathbb{Z}^2\setminus\{0\}\}. $ In particular, the value $\sys(\mathbb{T}^2,F)$ is always positive and the infimum is actually a minimum, realized by the length of a shortest non-contractible closed geodesic. The second ingredient for isosystolic inequalities is the $2$-dimensional volume, or area. For Finsler manifolds, there exist many notions of volume, but in this article we will be interested in the following two central notions. First recall that given a convex body $\widetilde{K}\subset \mathbb{R}^2$ containing the origin in its interior, its polar body is the convex body defined by $\widetilde{K}^\circ:=\{\tilde{x} \in \mathbb{R}^2 \mid \langle \tilde{x},\tilde{y}\rangle\leq 1 \, \, \text{for all} \, \, \tilde{y} \in \widetilde{K}\}$ where $\langle \cdot,\cdot \rangle$ denotes the Euclidean scalar product of $\mathbb{R}^2$. Denote by $|\cdot|$ the standard Lebesgue measure on $\mathbb{R}^2$. \begin{definition}\label{def:area} The {\it Busemann-Hausdorff area} of a Finsler $2$-torus $(\mathbb{T}^2,F)$ is defined as the quantity $$ \area_{BH}(\mathbb{T}^2,F)=\int_{[0,1]^2} {\pi\over |\widetilde{K}_{\tilde{x}}|}\, d\tilde{x}_1 \wedge d\tilde{x}_2, $$ while its {\it Holmes-Thompson area} is defined as $$ \area_{HT}(\mathbb{T}^2,F)=\int_{[0,1]^2} {|\widetilde{K}^\circ_{\tilde{x}}|\over \pi}\, d\tilde{x}_1 \wedge d\tilde{x}_2. $$ \end{definition} So the Busemann-Hausdorff notion of area corresponds to integrating over a fundamental domain the unique multiple of the Lebesgue measure for which the area of $\widetilde{K}_{\tilde{x}}$ equals the area of the Euclidean unit disk, while the Holmes-Thompson notion of area corresponds to integrating the unique multiple of the Lebesgue measure for which the dual measure of the polar body $\widetilde{K}^\circ_{\tilde{x}}$ equals the area of the Euclidean unit disk. When the convex body $K$ is symmetric, Blaschke's inequality \cite{Blaschke} asserts that $|K|\cdot|K^\circ|\leq \pi^2$, with equality if and only if $K$ is an ellipsoid. Therefore the inequality $\area_{HT}(\mathbb{T}^2,F)\leq \area_{BH}(\mathbb{T}^2,F)$ holds true for Finsler reversible metrics, with equality if and only if $F$ is a continuous Riemannian metric. Further observe that for Riemannian metrics both notions of area coincide with the standard notion of Riemannian area. \subsection{Isosystolic inequalities} Given a choice, denoted by $\area_\ast$ (with $\ast=BH$ or $HT$), of one of these two notions of area, the {\it systolic $\ast$-area} of a Finsler metric $F$ is defined as the quotient $$ \frac{\area_\ast(\mathbb{T}^2,F)}{\sys(\mathbb{T}^2,F)^2}. $$ Observe that this functional is invariant under rescaling the metric $F$ into $\lambda F$ for any positive constant $\lambda$. An {\it isosystolic inequality} is then a positive lower bound on the systolic $\ast$-area holding for a large class of metrics.
Equivalently, it amounts to an inequality of the type $$ \area_\ast(\mathbb{T}^2,F)\geq C \cdot \sys(\mathbb{T}^2,F)^2 $$ for some positive constant $C$. If the constant $C$ cannot be improved, the isosystolic inequality is said to be {\it optimal}. In the absence of an isosystolic inequality, that is, when the infimum of the systolic $\ast$-area function over some class of metrics is zero, we say that {\it systolic freedom} holds. In this paper we will be concerned with the following classes of metrics: flat Riemannian metrics, Riemannian metrics, flat Finsler reversible metrics, flat Finsler metrics, Finsler reversible metrics, and finally Finsler metrics. Here is a table summarizing the currently known optimal isosystolic inequalities on $\mathbb{T}^2$ for these classes of metrics and the two notions of area described above. \bigskip \begin{table}[h!] \caption{Optimal constants for several classes of Finsler metrics} \begin{tabular}{c@{\hspace{0.45cm}}c@{\hspace{0.45cm}}c} \rowcolor{primarycream2} \cellcolor{white} & Reversible & Non-reversible \\ \arrayrulecolor{white}\specialrule{0.3mm}{0.3mm}{0.3mm} \rowcolor{secondarycream2} \cellcolor{primarycream2} \begin{tabular}{@{}c@{}} \cellcolor{primarycream2} Flat Riemannian \\ \cellcolor{primarycream2} metrics\end{tabular} & \begin{tabular}{@{}c@{}}Folklore \\ $ \sqrt{3}/2$ \end{tabular} & $\times$ \\ \arrayrulecolor{white}\specialrule{0.3mm}{0.3mm}{0.3mm} \rowcolor{secondarycream2} \cellcolor{primarycream2} \begin{tabular}{@{}c@{}} \cellcolor{primarycream2} Riemannian \\ \cellcolor{primarycream2} metrics \end{tabular} & \begin{tabular}{@{}c@{}}Loewner 1949 \\ $\sqrt{3}/2$ \end{tabular} & $\times$ \\ \arrayrulecolor{white}\specialrule{0.3mm}{0.3mm}{0.3mm} \rowcolor{secondarycream2} \cellcolor{primarycream2} \begin{tabular}{@{}c@{}} \cellcolor{primarycream2} Flat Finsler metrics \\ \cellcolor{primarycream2} BH-area \end{tabular} & \begin{tabular}{@{}c@{}}Minkowski 1896 \\ $\pi/4$ \end{tabular} & \begin{tabular}{@{}c@{}} Systolic freedom \\ $0$ \end{tabular} \\ \arrayrulecolor{white}\specialrule{0.3mm}{0.3mm}{0.3mm} \rowcolor{secondarycream2} \cellcolor{primarycream2} \begin{tabular}{@{}c@{}} \cellcolor{primarycream2} Finsler metrics \\ \cellcolor{primarycream2} BH-area \end{tabular} & \begin{tabular}{@{}c@{}}Open \\ ?
\end{tabular} & \begin{tabular}{@{}c@{}} Systolic freedom \\ $0$ \end{tabular} \\ \arrayrulecolor{white}\specialrule{0.3mm}{0.3mm}{0.3mm} \rowcolor{secondarycream2} \cellcolor{primarycream2} \begin{tabular}{@{}c@{}} \cellcolor{primarycream2} Flat Finsler metrics \\ \cellcolor{primarycream2} HT-area \end{tabular} & \begin{tabular}{@{}c@{}}Minkowski + Mahler \\ $2/\pi$ \end{tabular} & \begin{tabular}{@{}c@{}} \' Alvarez-B-Tzanev 2016 \\ $3/(2\pi)$ \end{tabular} \\ \arrayrulecolor{white}\specialrule{0.3mm}{0.3mm}{0.3mm} \rowcolor{secondarycream2} \cellcolor{primarycream2} \begin{tabular}{@{}c@{}} \cellcolor{primarycream2} Finsler metrics \\ \cellcolor{primarycream2} HT-area \end{tabular} & \begin{tabular}{@{}c@{}} Sabourau 2010 \\ $2/\pi$ \end{tabular} & \begin{tabular}{@{}c@{}} \' Alvarez-B-Tzanev 2016 \\ $3/(2\pi)$ \end{tabular} \end{tabular} \bigskip \end{table} It is important to observe that all the flat isosystolic inequalities here are implied by the corresponding non-flat ones. This is not a coincidence, as the proof for a non-flat class of Finsler metrics always proceeds with the same strategy: first find a flat metric in this class with lower systolic area, and then study the problem for flat metrics. However, we have decided to state the flat cases of these isosystolic inequalities independently, to underline their own importance and their connection with several fundamental results in the geometry of numbers and convex geometry, as we shall explain later. From the fact that $\area_{BH}\geq \area_{HT}$ combined with Sabourau's isosystolic inequality, we see that for Finsler reversible metrics and the Busemann-Hausdorff area the lower bound $C_{opt}\geq 2/\pi$ holds. Nevertheless this lower bound is not sharp, and in the present paper we complete the above table by proving the following optimal isosystolic inequality. \begin{theorem}\label{th:opt} Let $F$ be a Finsler reversible metric on $\mathbb{T}^2$. Then the following inequality holds true: \[ \area_{BH}(\mathbb{T}^2,F)\geq \frac{\pi}{4} \sys^2(\mathbb{T}^2,F). \] \end{theorem} This statement was conjectured by J.C. \'Alvarez Paiva in a private conversation with the first author. The main step in our proof is related to the asymptotic geometry of universal covers of Finsler tori. Namely, according to \cite{burago} the universal cover of a Finsler torus admits a unique norm---called the {\it stable norm}---which asymptotically approximates the pullback Finsler metric, see formula (\ref{eq:st}). This norm gives rise to a flat Finsler metric on $\mathbb{T}^2$ abusively called the stable norm, see section \ref{sec:stable_norm}. We prove that {\it for a $2$-torus, passing from a Finsler metric to its stable norm decreases the Busemann-Hausdorff area}. This result was conjectured by Burago \& Ivanov in \cite[Conjecture A]{burago_ivanov}, where they connect it to two other conjectures for tori of arbitrary dimension and prove the analogous statement for the Holmes-Thompson notion of volume in dimension $2$. A direct corollary of our statement is the above systolic inequality. \section{Isosystolic inequalities for flat and non-flat Riemannian metrics} We first explain how the optimal isosystolic inequality for flat Riemannian metrics on $\mathbb{T}^2$ reduces to computing the Hermite constant in dimension $2$. Then we prove Loewner's theorem, the optimal isosystolic inequality for Riemannian metrics on $\mathbb{T}^2$.
Recall that for Riemannian metrics, both the Busemann-Hausdorff and Holmes-Thompson notions of area coincide with the standard Riemannian area, simply denoted by $\area$ in this section. \subsection{The flat Riemannian case: Hermite constant in dimension $2$} We start by recalling the definition of the Hermite constant in an arbitrary dimension $n$. Given a full rank lattice $L$ in $\mathbb{R}^n$ endowed with the standard Euclidean structure $\langle \cdot,\cdot \rangle$, its determinant $\det(L)$ is defined as the absolute value of the determinant of any of its bases, while its norm $N(L)$ is defined as the minimum value of $\langle \lambda,\lambda \rangle$ over all elements $\lambda \in L\setminus \{0\}$. The Hermite invariant of $L$ is then defined as the quantity $$ \mu(L)=\frac{N(L)}{\det(L)^{2\over n}}, $$ and the Hermite constant $\gamma_n$ as the supremum of the Hermite invariant $\mu(L)$ over all full rank lattices $L$ of $\mathbb{R}^n$. The Hermite constant is finite in every dimension, and its exact value is known only for dimensions $n=1,\ldots,8$ and $24$. For large $n$, we know that $\gamma_n$ behaves asymptotically like ${n\over 2\pi e}$ up to a factor $2$. We are interested in the following statement. \begin{theorem}[Folklore] $\gamma_2={2 \over \sqrt{3}}$. \end{theorem} \begin{proof} Because the Hermite invariant $\mu(\cdot)$ is invariant under scaling and rotating the lattice, we can suppose that $L=\mathbb{Z}(1,0)\oplus \mathbb{Z} v$ where $v=(v_1,v_2)$ has norm at least $1$. By possibly changing $v=(v_1,v_2)$ into $v=(v_1,-v_2)$ in the preceding expression of $L$ (which corresponds to a reflection of the lattice along the $x$-axis that does not change the value of $\mu(L)$), we can also suppose that $v_2>0$. Finally, by possibly replacing $v$ by $v+n(1,0)$ for some $n \in \mathbb{Z}$ (which does not change $L$ at all), we can suppose that $v$ belongs to the domain $|v_1|\leq 1/2$, $v_2>0$ and $v_1^2+v_2^2\geq 1$. We have $N(L)=1$, as a shortest vector is $u=(1,0)$. It is then straightforward to check that $\det(L)=\det(u,v)=v_2$ is minimal when the second coordinate of $v$ is minimal, that is for $v=(\pm1/2,\sqrt{3}/2)$. Hence $\mu(L)=N(L)/\det(L)=1/v_2\leq 2/\sqrt{3}$, with equality precisely for these two values of $v$. \end{proof}
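As a quick numerical sanity check (not part of the proof), the Hermite invariant of the hexagonal lattice can be evaluated directly; in dimension $2$ one simply has $\mu(L)=N(L)/\det(L)$:

\begin{verbatim}
import itertools, math

u, v = (1.0, 0.0), (0.5, math.sqrt(3) / 2)   # basis of the hexagonal lattice
# N(L): minimum of <w,w> over nonzero integer combinations (a small range
# suffices, since far-away lattice vectors are long).
N = min((a*u[0] + b*v[0])**2 + (a*u[1] + b*v[1])**2
        for a, b in itertools.product(range(-5, 6), repeat=2)
        if (a, b) != (0, 0))
det = abs(u[0]*v[1] - u[1]*v[0])             # covolume of the lattice
print(N / det, 2 / math.sqrt(3))             # both equal 1.1547...
\end{verbatim}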
Note that this supremum is reached if and only if $L$ is a hexagonal lattice (that is, a lattice generated by two vectors of equal lengths forming an angle of $2\pi/3$). Hermite proved that the previous result generalizes to the following upper bound (see \cite[pp. 30-31]{Cassels}): $$ \gamma_n\leq \left({4 \over 3}\right)^{n-1 \over 2}. $$ Now observe that any flat two-dimensional Riemannian torus is isometric to the quotient of the Euclidean plane by some lattice $L$ (just choose a linear map $T:\mathbb{R}^2 \to \mathbb{R}^2$ which sends the ellipse formed by unit vectors at some (and hence any) point to a circle, and set $L=T(\mathbb{Z}^2)$). For such a flat $2$-torus $\mathbb{T}^2_L:=(\mathbb{R}^2/L,\langle\cdot,\cdot\rangle)$ we easily find that $\sys(\mathbb{T}^2_L)=\sqrt{N(L)}$ while $\area(\mathbb{T}^2_L)=\det(L)$. So the previous result amounts to the following optimal isosystolic inequality for flat Riemannian $2$-tori. \begin{theorem}[Hermite constant in dimension $2$, systolic formulation]\label{th:flatRIemannian} Let $g$ be a flat Riemannian metric on $\mathbb{T}^2$. Then the following holds true: \[ \area(\mathbb{T}^2,g)\geq \frac{\sqrt{3}}{2}\sys^2(\mathbb{T}^2,g). \] Furthermore equality holds if and only if the metric $g$ is isometric to some flat hexagonal metric. \end{theorem} A flat hexagonal metric on $\mathbb{T}^2$ is obtained by taking the quotient of the Euclidean plane by some hexagonal lattice. \subsection{The Riemannian case: Loewner's isosystolic inequality} We now prove Loewner's theorem (unpublished, see \cite{Pu52}), which describes the optimal isosystolic inequality for two-dimensional Riemannian tori. \begin{theorem}[Loewner's isosystolic inequality, 1949] Let $g$ be a Riemannian metric on $\mathbb{T}^2$. Then the following holds true: \begin{equation}\label{eq:Loewner} \area(\mathbb{T}^2,g)\geq \frac{\sqrt{3}}{2}\sys^2(\mathbb{T}^2,g). \end{equation} Furthermore equality holds if and only if the metric $g$ is isometric to some flat hexagonal metric. \end{theorem} \begin{proof} The uniformization theorem (see \cite{MT02} for instance) ensures that $g$ is isometric to a metric of the form $f g_0$, where $f$ is a positive smooth function on $\mathbb{T}^2$ and $g_0$ a flat metric obtained as the quotient of the plane, with its Euclidean structure, by some full rank lattice. We observe in particular that $g_0$ always admits a transitive compact subgroup ${\mathcal I}$ of isometries, corresponding to Euclidean translations. This compact Lie group carries a unique Haar probability measure $\mu$. Denote by \[ \bar{f}:=\int_{\mathcal I} (f \circ I) d\mu \] the averaged conformal factor, to which we associate the new Riemannian metric $\bar{g}=\bar{f} g_0$. First observe that \begin{eqnarray*} \area(\mathbb{T}^2,\bar{g})&=&\int_{\mathbb{T}^2}\, dv_{\bar{g}}\\ &=&\int_{\mathbb{T}^2}\int_{\mathcal I} (f \circ I) \, d\mu \, dv_{g_0}\\ &=&\int_{\mathcal I}\int_{\mathbb{T}^2} (f \circ I) \, dv_{g_0} \, d\mu \qquad \text{(by Fubini)} \\ &=&\int_{\mathcal I}\int_{\mathbb{T}^2} f \, dv_{g_0} \, d\mu \qquad \text{($I$ being an isometry of $g_0$)}\\ &=&\int_{\mathcal I} \area(\mathbb{T}^2,g) \, d\mu\\ &=& \area(\mathbb{T}^2,g). \end{eqnarray*} Besides, for any non-contractible closed curve $\gamma:[a,b]\to \mathbb{T}^2$, we have \begin{eqnarray*} \ell_{\bar{g}}(\gamma)&=&\int_a^b \|\dot{\gamma}(t)\|^{\bar{g}}_{\gamma(t)} dt=\int_a^b \sqrt{\bar{f}}\cdot\|\dot{\gamma}(t)\|^{g_0}_{\gamma(t)} dt\\ &=&\int_a^b \sqrt{\int_{\mathcal I} (f \circ I) \, d\mu} \cdot \|\dot{\gamma}(t)\|^{g_0}_{\gamma(t)} dt\\ &\geq&\int_a^b \int_{\mathcal I} \sqrt{f \circ I} \, d\mu \cdot \|\dot{\gamma}(t)\|^{g_0}_{\gamma(t)} dt \qquad \text{(by Jensen's inequality)}\\ &=&\int_{\mathcal I} \int_a^b \sqrt{f \circ I}\cdot \|\dot{\gamma}(t)\|^{g_0}_{\gamma(t)} dt \, d\mu \qquad \text{(by Fubini again)} \\ &=&\int_{\mathcal I} \ell_g(I^{-1}\circ \gamma) \, d\mu \\ &\geq&\int_{\mathcal I} \sys(\mathbb{T}^2,g) \, d\mu \qquad \text{($I^{-1}\circ \gamma$ being non-contractible)}\\ &= &\sys(\mathbb{T}^2,g). \end{eqnarray*} Therefore we get that $\sys(\mathbb{T}^2,\bar{g})\geq \sys(\mathbb{T}^2,g)$, and consequently \begin{eqnarray}\label{eq:average} \frac{\area(\mathbb{T}^2,g)}{\sys^2(\mathbb{T}^2,g)}\geq \frac{\area(\mathbb{T}^2,\bar{g})}{\sys^2(\mathbb{T}^2,\bar{g})}. \end{eqnarray} Now, using the transitivity of $\mathcal I$, we get that the function $\bar{f}$ is constant, as for any $I \in {\mathcal I}$ we have $\bar{f}\circ I=\bar{f}$ by construction. By homogeneity of the systolic area, we deduce that $$ \frac{\area(\mathbb{T}^2,g)}{\sys^2(\mathbb{T}^2,g)}\geq \frac{\area(\mathbb{T}^2,g_0)}{\sys^2(\mathbb{T}^2,g_0)}. $$ Therefore we derive from Theorem \ref{th:flatRIemannian} the desired inequality.
The equality case in (\ref{eq:average}) occurs if and only if $f \circ I$ is constant for all $I \in {\mathcal I}$, that is, when $f$ itself is constant and the metric $g$ is isometric to a flat one. Therefore the equality case in (\ref{eq:Loewner}) occurs if and only if the metric is isometric to a flat hexagonal metric. \end{proof} \section{Isosystolic inequalities for flat Finsler metrics}\label{sec:flatFinsler} In this section, we survey optimal isosystolic inequalities for flat Finsler metrics on the two-torus for the two notions of area we are interested in. \subsection{Busemann-Hausdorff area in the flat reversible case: Minkowski's first theorem} First recall the celebrated founding result of the geometry of numbers in the $2$-dimensional case, see \cite{Minkowski}. \begin{theorem}[Minkowski's first theorem, 1896]\label{th:firstMin} Let $K \subset \mathbb{R}^2$ be a symmetric convex body such that $\text{int}(K) \cap \mathbb{Z}^2 = \set{0}$. Then its Lebesgue measure satisfies $\abs{K} \leq 4$. \end{theorem} It is straightforward to observe that equality holds when $K$ is the square $[-1,1]^2$. In fact, it is even true that equality holds if and only if $K$ is the image of this square under some element of $SL_2(\mathbb{Z})$. But we will not need this fact. \begin{proof} We argue by contradiction as follows. Consider $\mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2$ endowed with the Riemannian metric induced by the Euclidean scalar product. If $\abs{K} > 4$, fix $0<\lambda<1$ such that the symmetric convex body $K'=\lambda \cdot K\subset \text{int}(K)$ still satisfies $\abs{K'} > 4$. Then the homothetic symmetric convex body $K'/2$ would have Lebesgue measure strictly greater than $1$. Thus the universal covering map $\pi : \mathbb{R}^2 \to \mathbb{T}^2$ restricted to $K'/2$ cannot be injective, as otherwise it would imply that $\abs{K'/2}=\vol(\pi(K'/2))\leq\vol(\mathbb{T}^2)=1$. Therefore there exist two points $\tilde{x}$ and $\tilde{x}+z$ with $z \in \mathbb{Z}^2\setminus\{0\}$ both belonging to $K'/2$. As $K'/2$ is symmetric, we get that $-\tilde{x}$ also belongs to $K'/2$, which ensures by convexity that $\frac{z}{2}=\frac{(\tilde{x}+z)+(-\tilde{x})}{2} \in K'/2$, that is, $z \in K'\subset\text{int}(K)$: a contradiction. \end{proof} Observe that the same proof actually works in arbitrary dimension $n\geq 2$, and that we obtain the following version of Minkowski's first theorem: {\it Let $K \subset \mathbb{R}^n$ be a symmetric convex body such that $\text{int}(K) \cap \mathbb{Z}^n = \set{0}$. Then its Lebesgue measure satisfies $\abs{K} \leq 2^n$.} Let us now explain how this theorem translates into an optimal isosystolic inequality. A symmetric convex body $K \subset \mathbb{R}^2$ corresponds to a unique symmetric norm $\|\cdot \|_{K}$ on $\mathbb{R}^2$, which induces a unique $\mathbb{Z}^2$-periodic flat Finsler reversible metric $\tilde{F}_{K}$ on $\mathbb{R}^2$ by setting $\tilde{F}_{K}(\tilde{x},\tilde{v})=\|\tilde{v}\|_{K}$. In this way it corresponds to a unique flat Finsler reversible metric $F_K$ on the $2$-torus $\mathbb{T}^2$. Observe that given two points $\tilde{x}, \tilde{y} \in \mathbb{R}^2$, the length of any curve $\gamma: [a,b] \rightarrow \mathbb{R}^2$ from $\tilde{x}$ to $\tilde{y}$ satisfies \begin{equation*} \ell_{\tilde{F}_{K}}(\gamma) = \int_a^b \|\dot{\gamma}(t)\|_{K} dt \geq \left\|\int_a^b \dot{\gamma}(t) dt\right\|_{K} = \|\gamma(b) - \gamma(a)\|_{K} = \|\tilde{y}-\tilde{x}\|_{K}.
\end{equation*} Equality occurs when the velocity is a constant vector, that is, when $\gamma$ parametrizes a segment. Therefore the geodesics of $(\mathbb{R}^2,\tilde{F}_{K})$ are precisely the segments of straight lines. Now, any non-contractible closed curve of $\mathbb{T}^2$ lifts to $\mathbb{R}^2$ to a curve between two points $\tilde{x}$ and $\tilde{x}+z$ for some $z \in \mathbb{Z}^2\setminus\{0\}$. The length of such a curve is at least $\norm{z}_K$, from which we deduce that \begin{equation*} \sys{(\mathbb{T}^2,F_K)} = \min_{z \in \mathbb{Z}^2 \backslash \set{0}} \norm{z}_K. \end{equation*} Therefore \begin{align*} \text{int}(K) \cap \mathbb{Z}^2 = \set{0} &\Longleftrightarrow \sys{(\mathbb{T}^2,F_K)} \geq 1, \end{align*} while using Definition \ref{def:area} we get that \begin{align*} \abs{K} \leq 4 &\Longleftrightarrow \area_{BH}{(\mathbb{T}^2, F_K)} = \int_{[0,1]^2} \frac{\pi}{\abs{K}} d\tilde{x}_1 \wedge d\tilde{x}_2 = \frac{\pi}{\abs{K}} \geq \frac{\pi}{4}. \end{align*} As the systolic area remains invariant under rescaling the metric by any positive factor $\lambda$, and since $\sys{(\mathbb{T}^2,\lambda F_K)} =\lambda \sys{(\mathbb{T}^2,F_K)}$, we can reformulate Theorem \ref{th:firstMin} as the following optimal isosystolic inequality for the Busemann-Hausdorff area and flat Finsler reversible metrics on the $2$-torus. \begin{theorem}[Minkowski's first theorem, systolic formulation]\label{th:Mink} Any flat Finsler reversible torus $(\mathbb{T}^2,F_K)$ satisfies the following optimal isosystolic inequality: \begin{equation*} \area_{BH}{(\mathbb{T}^2,F_K)} \geq \frac{\pi}{4} \sys^2{(\mathbb{T}^2,F_K)}. \end{equation*} \end{theorem} \subsection{Busemann-Hausdorff area in the flat non-reversible case: systolic freedom} It is well known that Minkowski's first theorem no longer holds if we relax the symmetry assumption on the convex body. Equivalently, this means that there is no isosystolic inequality on the $2$-torus for the Busemann-Hausdorff area and flat Finsler (possibly non-reversible) metrics. More specifically, consider for every $\varepsilon \in (0,1)$ the convex body $K_\varepsilon\subset \mathbb{R}^2$ in Figure \ref{fig:rhombus} defined as the convex hull of the four vertices $(0,1), (\tfrac{1+\varepsilon}{2\varepsilon},\tfrac{1-\varepsilon}{2}), (0,-\varepsilon)$ and $(-\tfrac{1+\varepsilon}{2\varepsilon}, \tfrac{1-\varepsilon}{2})$. \begin{figure}[h!] \includegraphics[scale=1]{Rhombus.pdf} \caption{The convex body $K_\varepsilon$.} \label{fig:rhombus} \end{figure} This convex body defines a flat Finsler metric $F_{K_\varepsilon}$ on $\mathbb{T}^2$ which is not reversible. We easily check that $\sys(\mathbb{T}^2,F_{K_\varepsilon})=1$ and $\abs{K_\varepsilon} = (1+\varepsilon)^2/(2\varepsilon).$ Therefore its Busemann-Hausdorff systolic area is not bounded from below: $$ \frac{\area_{BH}(\mathbb{T}^2,F_{K_\varepsilon})}{\sys^2(\mathbb{T}^2,F_{K_\varepsilon})} = \frac{2\pi\varepsilon}{(1+\varepsilon)^2} \xrightarrow[]{\varepsilon\to 0} 0. $$
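These values can be checked numerically; here is a short sketch based on the shoelace formula, confirming that the area of $K_\varepsilon$ matches the closed form and that the systolic ratio tends to $0$:

\begin{verbatim}
import math

def shoelace(pts):
    # Area of a polygon given by its vertices in cyclic order.
    return 0.5 * abs(sum(x1*y2 - x2*y1 for (x1, y1), (x2, y2)
                         in zip(pts, pts[1:] + pts[:1])))

for eps in (0.5, 0.1, 0.01):
    w = (1 + eps) / (2 * eps)
    K = [(0, 1), (w, (1 - eps) / 2), (0, -eps), (-w, (1 - eps) / 2)]
    ratio = 2 * math.pi * eps / (1 + eps) ** 2     # BH systolic area
    print(eps, shoelace(K), (1 + eps) ** 2 / (2 * eps), ratio)
\end{verbatim}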
\subsection{Mahler's theorem combined with Minkowski's first theorem: Holmes-Thompson area in the flat reversible case} We now focus on the Holmes-Thompson notion of area in the reversible case. First recall the following optimal inequality. \begin{theorem}\cite{Mahler2}\label{th:Mahler} Given a symmetric convex body $K\subset \mathbb{R}^2$, the following inequality holds true: $$ |K|\cdot |K^\circ|\geq 8. $$ \end{theorem} \begin{proof}[Sketch of proof.] We briefly describe here the strategy behind the proof of Mahler's inequality, and refer to \cite{henze} for all the technical details. Any symmetric convex body can be approximated in the Hausdorff topology by a sequence of symmetric polygons. Since the volume product $K \mapsto |K|\cdot |K^\circ|$ is continuous on $\mathcal{K}_0(\mathbb{R}^2)$, it suffices to prove the inequality for symmetric polygons. It can be proved that given a symmetric polygon $P$ with $m \geq 3$ pairs of opposite vertices, one can construct a symmetric polygon $Q$ with $m-1$ such pairs satisfying $\abs{Q} \cdot \abs{Q^\circ} \leq \abs{P} \cdot \abs{P^\circ}$. Namely, one can remove a certain pair of opposite vertices by pushing them simultaneously and in opposite directions along a line segment parallel to the line passing through their adjacent vertices, until they become aligned with one of their two adjacent edges. This process maintains the convexity, the symmetry and the volume of $P$, and the pair of opposite points can be chosen so as to ensure that the volume of the polar $P^\circ$ decreases. At the end of the deformation the number of vertices has decreased by $2$. Applying this result recursively, one can reduce any symmetric polygon to a parallelogram, for which equality is attained. \end{proof} Mahler also conjectured in \cite{mahler1} that in arbitrary dimension $n\geq 2$ the volume product of a symmetric convex body $K\subset \mathbb{R}^n$ satisfies the following lower bound: $$ |K|\cdot |K^\circ|\geq \frac{4^n}{n!}. $$ Mahler's conjecture has recently been proved in dimension $3$ in \cite{IS20}. In arbitrary dimension, the best known lower bound is due to \cite{kuperberg}, who proved that $$ |K|\cdot |K^\circ|\geq {\pi \over 4} \cdot \frac{4^n}{n!}. $$
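As a numerical illustration (a sketch, not used in the proofs), one can evaluate the planar volume product of symmetric polygons using the identity $|K^\circ|=\tfrac12\int_0^{2\pi} h_K(\theta)^{-2}\,d\theta$, where $h_K(\theta)=\max_{v\in K}\langle v,(\cos\theta,\sin\theta)\rangle$ is the support function of $K$: the square attains Mahler's bound $8$, while the regular octagon lies strictly between $8$ and Blaschke's bound $\pi^2$.

\begin{verbatim}
import math

def volume_product(vertices, n=100000):
    # Polygon area via the shoelace formula.
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    area = 0.5 * abs(sum(x1*y2 - x2*y1 for (x1, y1), (x2, y2) in edges))
    # |K°| = (1/2) * integral of h_K(theta)^(-2), since the radial
    # function of the polar body equals 1/h_K.
    polar = 0.0
    for i in range(n):
        c, s = math.cos(2*math.pi*i/n), math.sin(2*math.pi*i/n)
        h = max(x*c + y*s for x, y in vertices)   # support function h_K
        polar += 1.0 / (h*h)
    polar *= math.pi / n
    return area * polar

square  = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
octagon = [(math.cos(k*math.pi/4), math.sin(k*math.pi/4)) for k in range(8)]
print(volume_product(square), volume_product(octagon))   # ~8.00, ~9.37
\end{verbatim}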
By combining Minkowski's first theorem with Mahler's theorem, we obtain the following optimal isosystolic inequality for the Holmes-Thompson area and flat Finsler reversible metrics on the $2$-torus. \begin{theorem} Any flat Finsler reversible torus $(\mathbb{T}^2,F_K)$ satisfies the following optimal isosystolic inequality: \begin{equation*} \area_{HT}{(\mathbb{T}^2,F_K)} \geq \frac{2}{\pi} \sys^2{(\mathbb{T}^2,F_K)}. \end{equation*} \label{th:HT_reversible} \end{theorem} \begin{proof} Rescaling the metric if necessary, we can suppose that $\sys{(\mathbb{T}^2,F_K)}=1$. Now Minkowski's first theorem (Theorem \ref{th:firstMin}) ensures that $|K|\leq 4$, which together with Mahler's Theorem \ref{th:Mahler} implies that $$ |K^\circ|\geq 2 \Leftrightarrow \area_{HT}{(\mathbb{T}^2,F_K)}\geq \frac{2}{\pi}. $$ \end{proof} \subsection{Holmes-Thompson area in the flat non-reversible case} We now present the optimal isosystolic inequality for the Holmes-Thompson area and flat Finsler metrics on the two-torus obtained in \cite{ABT}. \begin{theorem} Any flat Finsler torus $(\mathbb{T}^2,F_K)$ satisfies the following optimal isosystolic inequality: \begin{equation*} \area_{HT}{(\mathbb{T}^2,F_K)} \geq \frac{3}{2\pi} \sys^2{(\mathbb{T}^2,F_K)}. \end{equation*} \label{th:ABT} \end{theorem} \begin{proof}[Sketch of proof.] We present here a short version of the proof, focusing on the main geometric ideas and avoiding several technical considerations. First we bring the above isosystolic inequality into the world of the geometry of numbers as follows. It is enough to show that if $\sys{(\mathbb{T}^2,F_K)}\geq 1$ then $\area_{HT}{(\mathbb{T}^2,F_K)} \geq \frac{3}{2\pi}$. Equivalently, we have to prove that for a convex body $K\subset \mathbb{R}^2$ the condition $\text{int}(K) \cap \mathbb{Z}^2 = \set{0}$ ensures that $\abs{K^\circ}\geq 3/2$. Now observe that $\text{int}(K) \cap \mathbb{Z}^2 = \set{0}$ if and only if every integer line $m_1\tilde{x}_1+m_2\tilde{x}_2=1$ with $(m_1,m_2)\in \mathbb{Z}^2\setminus\{0\}$ intersects $K^\circ$, as duality interchanges points and lines. \begin{figure}[h!] \includegraphics[scale=0.6]{DiophantineLinesEdit.pdf} \caption{Set of integer lines for $m_1^2+m_2^2\leq 50$ and a triangle with minimal area.} \label{integerlines} \end{figure} So we are left to prove the following assertion: {\it if a convex body $K\subset \mathbb{R}^2$ intersects every integer line, then its Lebesgue measure satisfies $\abs{K}\geq 3/2$}. To see what this set of integer lines looks like, see Figure \ref{integerlines}. The red triangle is the boundary of a convex body of area $3/2$ that meets every integer line, showing that the isosystolic inequality is optimal. Now we argue as follows. By approximation of convex bodies by polygons in the Hausdorff topology, it is enough to prove the assertion for a convex polygon $K$, whose vertices will be denoted by $v_1,\ldots,v_n$. Remark that necessarily $n\geq 3$. We will argue by induction on the number $n$ of vertices. If some vertex $v_i$ does not lie on any integer line, we can push this vertex along the segment joining it to its orthogonal projection on the segment $[v_{i-1},v_{i+1}]$ (with the convention that $v_{n+1}=v_1$), see Figure \ref{fig:first}. We stop the process when either the vertex meets an integer line, or the vertex meets the segment $[v_{i-1},v_{i+1}]$ and thus disappears as a vertex of $K$. This deformation decreases the area of the polygon $K$ while preserving convexity and the property of intersecting every integer line. Applying this process to each vertex that is not contained in an integer line, we deform our original convex polygon into a new one with strictly smaller area, which still meets every integer line, and such that every vertex is contained in at least one integer line. The number of vertices has possibly decreased during the process. \begin{figure}[h!] \includegraphics[scale=1]{primermoviment.pdf} \caption{First type of deformation of the convex polygon.} \label{fig:first} \end{figure} Now suppose that a vertex $v_i$ is contained in exactly one integer line. We can move the vertex $v_i$ along this integer line in at least one direction so that the area does not increase. Suppose that the correct direction corresponds to the adjacent vertex $v_{i-1}$. We stop either when the vertex $v_i$ meets another integer line, or when the vertex $v_i$ meets the line $(v_{i-2},v_{i-1})$ and thus $v_{i-1}$ disappears as a vertex of $K$, see Figure \ref{fig:second}. \begin{figure}[h!] \includegraphics[scale=1]{segonmoviment.pdf} \caption{Second type of deformation of the convex polygon.} \label{fig:second} \end{figure} Applying this process to each vertex that is contained in only one integer line, we deform the convex polygon obtained in the last step into a new one with smaller area, which still meets every integer line, and such that every vertex is contained in at least two distinct integer lines. The number of vertices has possibly decreased during the process. We now show that this implies that every vertex of our new convex polygon belongs to $\mathbb{Z}^2$. For this, fix such a vertex $v$ contained in at least two distinct integer lines.
Observe that in the dual space $(\mathbb{R}^2)^\ast$ a vertex $v$ corresponds to a line $v^\ast$, and an integer line $L:=\{ m_1x_1+m_2x_2=1\}$ to a point $L^\ast=(m_1,m_2)\in \mathbb{Z}^2$. We have $v \in L\Leftrightarrow L^\ast \in v^\ast$. So fix $L_1$ and $L_2$ two integer lines containing $v$ such that the dual segment $]L_1^\ast,L_2^\ast[$ does not intersect $\mathbb{Z}^2$. This ensures that $(L_1^\ast,L_2^\ast)$ forms a basis of $\mathbb{Z}^2$, and thus the associated matrix $A$ with row vectors $L_1^\ast$ and $L_2^\ast$ belongs to $SL_2(\mathbb{Z})$. Next observe that $v=(x_1,x_2)$ is the unique solution of the equation $Av^t=(1,1)^t$. Thus $v=A^{-1}(1,1)^t\in \mathbb{Z}^2$. To finish the proof, we apply Pick's formula \cite{Pick}, which asserts that for a convex polygon $K$ whose vertices belong to $\mathbb{Z}^2$, if we denote by $i$ the number of integer points contained in its interior and by $b$ the number of integer points contained in its boundary, then the area of the polygon satisfies $|K|=i+b/2-1$. In our case, as the origin of the plane belongs to the interior of our polygon, which has at least $3$ integer vertices, we have $i\geq 1$ and $b\geq 3$, which ensures that $|K|\geq 3/2$. The triangle with vertices $(-1,-1)$, $(0,1)$ and $(1,0)$ pictured in Figure \ref{integerlines} has area $3/2$. It also intersects every integer line, as its polar is the triangle with vertices $(1,1)$, $(1,-2)$ and $(-2,1)$, whose interior contains no integer points other than the origin. This ensures the optimality of our assertion, and therefore of the corresponding isosystolic inequality. \end{proof}
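The extremal triangle can be verified mechanically; the following sketch computes its area via the shoelace formula and its lattice-point counts (the number of boundary points on an edge is the gcd of the coordinate differences), recovering Pick's count $i+b/2-1=3/2$:

\begin{verbatim}
from math import gcd

T = [(-1, -1), (0, 1), (1, 0)]                 # the extremal triangle
pairs = list(zip(T, T[1:] + T[:1]))
area = 0.5 * abs(sum(x1*y2 - x2*y1 for (x1, y1), (x2, y2) in pairs))
b = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in pairs)
i = round(area - b / 2 + 1)                    # Pick's formula solved for i
print(area, b, i)                              # 1.5  3  1 (origin inside)
\end{verbatim}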
\section{Isosystolic inequalities for Finsler metrics and Holmes-Thompson area} In this section, we prove two optimal isosystolic inequalities on the $2$-torus for the Holmes-Thompson notion of area: one for reversible Finsler metrics, and another for possibly non-reversible Finsler ones. The strategy of the proofs is similar to the Riemannian case: find a flat metric whose systolic area is smaller, and then apply the corresponding optimal flat isosystolic inequality already proved in section \ref{sec:flatFinsler}. The flat metric here is defined using the asymptotic geometry of the universal covering space of our Finsler torus, and is known as the {\it stable norm}. The first subsection briefly presents this notion. \subsection{Stable norm on the universal cover of a Finsler $2$-torus}\label{sec:stable_norm} Given a Finsler $2$-torus $(\mathbb{T}^2,F)$, or equivalently a $\mathbb{Z}^2$-periodic Finsler metric $\tilde{F}$ on $\mathbb{R}^2$, set for any $z \in \mathbb{Z}^2$ $$ \|z\|^F_{st}:=\inf_{\tilde{x} \in \mathbb{R}^2} d_{\tilde{F}}(\tilde{x},\tilde{x}+z). $$ \begin{proposition} The function $\|\cdot\|^F_{st}:\mathbb{Z}^2\to \mathbb{R}_+$ satisfies the following properties: \begin{enumerate} \item positivity: $$ \|z\|^F_{st}>0 $$ for every non-zero $z\in \mathbb{Z}^2$. \item positive homogeneity over $\mathbb{Z}$: $$ \|k z\|^F_{st}=k\| z\|^F_{st} $$ for any positive integer $k$ and any $z \in \mathbb{Z}^2$. \item strict convexity: $$ \| z_1+z_2\|^F_{st}<\| z_1\|^F_{st}+\|z_2\|^F_{st} $$ for any pair of linearly independent vectors $z_1,z_2 \in \mathbb{Z}^2$. \end{enumerate} \end{proposition} \begin{proof} (1) Because of the $\mathbb{Z}^2$-periodicity of the metric $\tilde{F}$, the infimum above turns out to be a minimum, and in particular we have $\|z\|^F_{st}=\min_{\tilde{x} \in [0,1]^2} d_{\tilde{F}}(\tilde{x},\tilde{x}+z)>0 $ for every non-zero $z\in \mathbb{Z}^2$. (2) According to the triangle inequality we have $d_{\tilde{F}}(\tilde{x},\tilde{x}+kz)\leq kd_{\tilde{F}}(\tilde{x},\tilde{x}+z)$ for any $\tilde{x}\in [0,1]^2$, and thus $\|kz\|^F_{st}\leq k\| z\|^F_{st}$ for any positive integer $k$. In order to prove the reverse inequality, we argue by contradiction. Suppose this inequality is strict for some $k\geq 2$, and that $k$ is the smallest integer with this property. Fix $\tilde{x}_0 \in [0,1]^2$ such that $d_{\tilde{F}}(\tilde{x}_0,\tilde{x}_0+kz)=\min_{\tilde{x} \in [0,1]^2} d_{\tilde{F}}(\tilde{x},\tilde{x}+kz)$. Therefore there exists a geodesic path $\eta$ from $\tilde{x}_0$ to $\tilde{x}_0+kz$ with length $\ell_{\tilde{F}}(\eta)=d_{\tilde{F}}(\tilde{x}_0,\tilde{x}_0+kz)=\|kz\|_{st}^F$. Now if $\gamma$ denotes a geodesic path from $\tilde{x}_0$ to $\tilde{x}_0+z$ such that $\ell_{\tilde{F}}(\gamma)=d_{\tilde{F}}(\tilde{x}_0,\tilde{x}_0+z)$, we denote by $\gamma^k$ the path from $\tilde{x}_0$ to $\tilde{x}_0+k z$ obtained as the concatenation of the $k$ translated paths $\gamma+iz$ for $i=0,\ldots,k-1$. The strict inequality $\|kz\|^F_{st}< k\| z\|^F_{st}$ implies that $\ell_{\tilde{F}}(\eta)<\ell_{\tilde{F}}(\gamma^k)$, as $\ell_{\tilde{F}}(\gamma^k)=k\cdot \ell_{\tilde{F}}(\gamma)\geq k \|z\|_{st}^F $. Therefore $\eta$ does not coincide with $\gamma^k$. Furthermore $\eta$ and $\gamma$ intersect only at $\tilde{x}_0$ by minimality. Now denote by $\tilde{x}$ the first point of $\eta$ after $\tilde{x}_0$ that lies on $\gamma^k$. (Possibly $\tilde{x}=\tilde{x}_0+kz$, in which case the two curves intersect only at their extremal points.) Both $\eta$ and $\gamma^k$ being simple, the two subarcs going from $\tilde{x}_0$ to $\tilde{x}$ bound a topological $2$-disk $D$. Now the translated curve $\eta+z$ is a path starting at $\tilde{x}_0+z \in \partial D$ with an initial vector pointing inside $D$ by construction. Because its final point $\tilde{x}_0+(k+1)z$ lies outside the disk, the Jordan curve theorem ensures that $\eta+z$ intersects the boundary of $D$, and therefore $\eta$ by construction. Denote by $\tilde{x}_1+z$ this intersection point, where $\tilde{x}_1$ belongs to $\eta$. Now we consider the arc $\eta'$ going from $\tilde{x}_0+z$ to $\tilde{x}_0+kz$ made from the concatenation of the subarc of $\eta+z$ going from $\tilde{x}_0+z$ to $\tilde{x}_1+z$ with the subarc of $\eta$ going from $\tilde{x}_1+z$ to $\tilde{x}_0+kz$. The arc $\eta'$ has length bounded as follows: $\ell_{\tilde{F}}(\eta')=\ell_{\tilde{F}}(\eta)-d_{\tilde{F}}(\tilde{x}_1,\tilde{x}_1+z)<(k-1)\|z\|_{st}^F$. In particular $\|(k-1)z\|_{st}^F<(k-1)\|z\|_{st}^F$, which proves that $k$ was not minimal for this property: a contradiction. \begin{figure}[h!] \includegraphics[scale=1]{triangle_inequality.pdf} \caption{Scheme of the proof of strict convexity of the stable norm.} \label{fig:triangle_inequality} \end{figure} (3) Let $\eta_i$ be a curve from some point $\tilde{x}_i$ to $\tilde{x}_i+z_i$ such that $\ell_{\tilde{F}}(\eta_i)=\|z_i\|^F_{st}$ for $i=1,2$. If we project $\eta_1$ and $\eta_2$ to $\mathbb{T}^2$ we obtain two intersecting closed geodesics $\gamma_1$ and $\gamma_2$. Denote by $x$ any intersection point. We first lift $\gamma_1$ to a curve starting at some lifted point $\tilde{x}'_1$ such that $\pi(\tilde{x}'_1)=x$, which we denote by $\eta'_1$, and which still has length $\|z_1\|^F_{st}$.
Its endpoint $\tilde{x}'_1+z_1$ therefore also projects to $x$, and we can lift the second closed geodesic $\gamma_2$ into a geodesic path $\eta'_2$ starting at $\tilde{x}'_1+z_1$. By construction $\eta'_2$ still has length $\|z_2\|^F_{st}$ and ends at $\tilde{x}'_1+z_1+z_2$. Finally, the concatenation of $\eta'_1$ with $\eta'_2$ is a path from $\tilde{x}'_1$ to $\tilde{x}'_1+z_1+z_2$ which is geodesic except at the corner $\tilde{x}'_1+z_1$, where $\eta'_1\ast\eta'_2$ can be shortened. Hence $\|z_1+z_2\|_{st}^F< \ell_{\tilde{F}}(\eta'_1\ast\eta'_2)=\|z_1\|^F_{st}+\|z_2\|^F_{st}$. \end{proof} Now we extend the function $\|\cdot\|^F_{st}$ from $\mathbb{Z}^2$ to $\mathbb{Q}^2$ by setting $$ \|(q_1,q_2)\|^F_{st}=\frac{1}{m}\|(mq_1,mq_2)\|^F_{st} $$ where $m$ is any positive integer such that $(mq_1,mq_2)\in \mathbb{Z}^2$. Property (2) of the last proposition ensures that this definition does not depend on the choice of $m$, and we easily see that the extended function remains positive, positively homogeneous over $\mathbb{Q}$, and strictly convex. \begin{proposition} The function $\|\cdot\|^F_{st}:\mathbb{Q}^2\to \mathbb{R}_+$ uniquely extends to a (possibly non-symmetric) norm $\|\cdot\|^F_{st}:\mathbb{R}^2\to \mathbb{R}_+$ called the stable norm. \end{proposition} \begin{proof} Convexity implies that for any $q,q'\in \mathbb{Q}^2$ $$ |\|q\|_{st}^F-\|q'\|_{st}^F|\leq \max \{\|q-q'\|_{st}^F,\|q'-q\|_{st}^F\}. $$ Therefore the function $\|\cdot\|^F_{st}:\mathbb{Q}^2\to \mathbb{R}_+$ uniquely extends to $\mathbb{R}^2$ provided every sequence of points $q_n$ in $\mathbb{Q}^2$ converging to $0$ satisfies $\|q_n\|_{st}^F\to 0$. This is indeed the case, as for such a sequence we have $$ \|q_n\|_{st}^F=\|(q_{1n},q_{2n})\|_{st}^F\leq |q_{1n}|\| (\pm 1,0)\|_{st}^F+|q_{2n}|\|(0, \pm 1)\|_{st}^F\to 0. $$ The extended function $\|\cdot\|^F_{st}:\mathbb{R}^2\to \mathbb{R}_+$ is positively homogeneous over $\mathbb{R}$ and convex. So we are left to check positivity. For this, fix a scalar product $\langle\cdot,\cdot\rangle$ on $\mathbb{R}^2$ such that $\tilde{F}(p,v)\geq \sqrt{\langle v,v\rangle}$ for any $p\in [0,1]^2$ and $v\in \mathbb{R}^2$. By construction $\|\tilde{x}\|_{st}^F\geq \sqrt{\langle \tilde{x},\tilde{x}\rangle}$ for any $\tilde{x} \in \mathbb{R}^2$, as this inequality holds on $\mathbb{Z}^2$. This ensures strict positivity. \end{proof} The norm $\|\cdot\|^F_{st}$ is called the {\it stable} norm because it satisfies the following property: \begin{equation}\label{eq:st2} \|z\|^F_{st}=\lim_{k\to \infty} \frac{d_{\tilde{F}}(\tilde{x}_0,\tilde{x}_0+k\cdot z)}{k} \end{equation} for every $z \in \mathbb{Z}^2$ and any base point $\tilde{x}_0 \in \mathbb{R}^2$. By a result of \cite{burago} known as the bounded distance theorem, the stable norm of $F$ turns out to be the unique norm on $\mathbb{R}^2$ such that there exists a constant $C$ for which \begin{equation}\label{eq:st} \|z\|^F_{st}\leq d_{\tilde{F}}(\tilde{x}_0,\tilde{x}_0+z)\leq \|z\|^F_{st}+C \end{equation} for every $z\in \mathbb{Z}^2$ and any $\tilde{x}_0 \in \mathbb{R}^2$. With our definition we easily check that the double inequality holds for $C=2\diam(\mathbb{T}^2,F)$. By passing to the quotient, the stable norm induces a flat Finsler metric on $\mathbb{T}^2$ still denoted by $\|\cdot\|_{st}^F$. The following proposition is of fundamental importance for us: \begin{proposition}\label{prop:sys_stab} $\sys(\mathbb{T}^2,F)=\sys(\mathbb{T}^2,\|\cdot\|^F_{st})$.
\end{proposition} \begin{proof} A shortest closed geodesic in a homotopy class lifts to a shortest geodesic path between two points differing by some $z\in \mathbb{Z}^2$. So the assertion is a direct consequence of the fact that $\|z\|^F_{st}=\min_{\tilde{x} \in [0,1]^2} d_{\tilde{F}}(\tilde{x},\tilde{x}+z)$ for every non-zero $z\in \mathbb{Z}^2$. \end{proof} \subsection{Decreasing the Holmes-Thompson area} While passing from a Finsler metric on a $2$-torus to its stable norm does not change the systole, it can only decrease the Holmes-Thompson area, according to the following result. \begin{theorem} \cite{burago_ivanov} \label{th:decreasing_area} Let $(\mathbb{T}^2,F)$ be a Finsler torus. Then: $$ \area_{HT}{(\mathbb{T}^2,F)} \geq \area_{HT}(\mathbb{T}^2,\|\cdot\|_{st}^F). $$ \end{theorem} \begin{proof} The proof proposed by Burago \& Ivanov relies on the notion of calibrating functions. From now on, we fix the Finsler metric $F$ and an arbitrary point $\tilde{x}_0 \in \mathbb{R}^2$. Denote by $\|\cdot\|_{st}$ its stable norm and by $\|\cdot\|_{\tilde{x}}$ the norm defined on each tangent space $T_{\tilde{x}}\mathbb{R}^2\simeq \mathbb{R}^2$ by the Finsler metric $\tilde{F}$. For a linear form $h:\mathbb{R}^2\to\mathbb{R}$, we define its dual stable norm by $\|h\|_{st}^\ast:=\max\{h(v)\mid \|v\|_{st}\leq 1\}$ and its dual Finsler norm at $\tilde{x}$ by $\|h\|_{\tilde{x}}^\ast:=\max\{h(v)\mid \|v\|_{\tilde{x}}\leq 1\}$. We set $$ S^\ast_{\tilde{x}}:=\{h \in (\mathbb{R}^2)^\ast \mid \|h\|^\ast_{\tilde{x}}=1\} \, \, \text{and} \, \, S_{st}^\ast:=\{h \in (\mathbb{R}^2)^\ast \mid \|h\|^\ast_{st}=1\}. $$ \begin{lemma}\label{lem:cal} Let $h\in S^\ast_{st}$. The function \begin{equation*} f(\tilde{x}): = \limsup_{\mathbb{Z}^2 \ni z\to\infty} \left[h(z) - d_{\tilde{F}}(\tilde{x},\tilde{x}_0 +z)\right] \end{equation*} is well defined and satisfies the following properties: \begin{enumerate} \item $f(\tilde{x}+z) = f(\tilde{x}) + h(z)$ for all $\tilde{x} \in \mathbb{R}^2$ and $z \in \mathbb{Z}^2$. \item $d_{\tilde{x}} f$ is defined for almost every point $\tilde{x} \in \mathbb{R}^2$ and satisfies $\|d_{\tilde{x}} f\|_{\tilde{x}}^\ast=1$. \end{enumerate} \end{lemma} Such a function is an example of a calibrating function for $h$ (see \cite{burago_ivanov} for a precise definition) and is defined in analogy with Busemann functions. \begin{proof}[Proof of Lemma \ref{lem:cal}] Using the fact that $h(z)\leq \|z\|_{st}$ and (\ref{eq:st}), we see that $f(\tilde{x})\leq d_{\tilde{F}}(\tilde{x}_0,\tilde{x})<+\infty$. Besides, we can always find a sequence $z_i\to \infty$ of points in $\mathbb{Z}^2$ such that $h(z_i)\geq \|z_i\|_{st}-c$ for some constant $c$. For this, remark that the convex body $C_t$ obtained by intersecting the stable ball $\{\|\cdot\|_{st}\leq t\}$ with the half plane $\{h\geq t-c\}$ looks, for large values of $t$, like a rectangle with one side of almost constant length (depending on $c$) and the other side very long. In particular, we can select $c$ and an unbounded increasing sequence $\{t_i\}$ such that the convex bodies $\{C_{t_i}\}$ are pairwise disjoint and each contains a square of side length $2$. This implies that each convex body $C_{t_i}$ contains some integer point $z_i$, since any square of side length $2$ contains an axis-parallel closed unit square, and hence a point of $\mathbb{Z}^2$. We thus find an unbounded sequence $\{z_i\}$ of integer points such that $h(z_i)\geq \|z_i\|_{st}-c$ as desired. Now, using (\ref{eq:st}) again, we see that $f(\tilde{x})\geq -d_{\tilde{F}}(\tilde{x},\tilde{x}_0)-C-c>-\infty$. So $f$ always takes finite values and is therefore well defined.
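(For orientation, here is a worked special case, not needed for the argument. Suppose $\tilde{F}$ is the flat Euclidean metric, so that $d_{\tilde{F}}(\tilde{x},\tilde{y})=|\tilde{y}-\tilde{x}|$ and $\|\cdot\|_{st}=|\cdot|$, and take $h=dx_1$. Writing $w=\tilde{x}_0+z-\tilde{x}$, so that $h(z)=h(w)+h(\tilde{x})-h(\tilde{x}_0)$, we get $$ f(\tilde{x})=h(\tilde{x})-h(\tilde{x}_0)+\limsup_{w}\left[h(w)-|w|\right]=\tilde{x}_1-(\tilde{x}_0)_1, $$ since $h(w)-|w|\leq 0$ always, while for $w=(w_1,w_2)$ ranging over the shifted lattice $\tilde{x}_0-\tilde{x}+\mathbb{Z}^2$ with $w_2$ bounded and $w_1\to+\infty$ we have $w_1-\sqrt{w_1^2+w_2^2}\to 0$. In particular $f(\tilde{x}+z)=f(\tilde{x})+h(z)$ and $d_{\tilde{x}}f=dx_1$ has dual norm $1$, in accordance with properties (1) and (2).)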
We easily check that \begin{eqnarray*} f(\tilde{x}+z)& =& \limsup_{\mathbb{Z}^2 \ni z'\to\infty} \left[h(z') - d_{\tilde{F}}(\tilde{x}+z,\tilde{x}_0 +z')\right]\\ &=& \limsup_{\mathbb{Z}^2 \ni z'\to\infty} \left[h(z'+z) - d_{\tilde{F}}(\tilde{x}+z,\tilde{x}_0 +z'+z)\right]\\ &=& f(\tilde{x})+h(z), \end{eqnarray*} so property (1) holds. Next observe that $f$ is $1$-Lipschitz with respect to the Finsler metric $\tilde{F}$ as an upper limit of $1$-Lipschitz functions: $$\abs{f(\tilde{x})-f(\tilde{y})} \leq d_{\tilde{F}}(\tilde{x},\tilde{y})$$ for all $\tilde{x},\tilde{y} \in \mathbb{R}^2$. So by Rademacher's theorem its differential $d_{\tilde{x}} f$ is defined for almost every point $\tilde{x} \in \mathbb{R}^2$ and satisfies $\|d_{\tilde{x}} f\|_{\tilde{x}}^\ast\leq1$. In order to prove the reverse inequality, we will prove that for every $\tilde{x} \in \mathbb{R}^2$ there is a geodesic ray $\eta: [0,\infty) \rightarrow (\mathbb{R}^2,\tilde{F})$ with origin $\eta(0) = \tilde{x}$ satisfying $f(\eta(t)) = f(\tilde{x}) + t$. Indeed fix the point $\tilde{x}\in \mathbb{R}^2$ and choose a sequence $z_i\to \infty$ of points in $\mathbb{Z}^2$ such that $$ f(\tilde{x}) = \lim_{i\to\infty} \left[h(z_i) - d_{\tilde{F}}(\tilde{x},\tilde{x}_0 +z_i)\right]. $$ Denote by $\eta_i$ a minimal geodesic path going from $\tilde{x}$ to $\tilde{x}_0 +z_i$ parametrized by arc length. By compactness of the unit sphere $S_{\tilde{x}}:=\{v\in\mathbb{R}^2 \mid \|v\|_{\tilde{x}}=1\}$, we can extract a subsequence such that the initial vectors $\eta_i'(0)\in S_{\tilde{x}}$ converge to some vector $v\in S_{\tilde{x}}$. This defines a geodesic ray $\eta$ starting at $\tilde{x}$ by setting $\eta'(0)=v$. Observe that $$ f(\eta(t))-f(\tilde{x})=f(\eta(t))-f(\eta(0))\leq t $$ as $f$ is $1$-Lipschitz and $\eta$ is a geodesic parametrized by arc length. Besides \begin{eqnarray*} f(\eta(t))&=&\limsup_{\mathbb{Z}^2 \ni z\to\infty} \left[h(z) - d_{\tilde{F}}(\eta(t),\tilde{x}_0 +z)\right]\\ &\geq&\limsup_{i\to\infty} \left[h(z_i) - d_{\tilde{F}}(\eta(t),\tilde{x}_0 +z_i)\right]\\ \text{(by pointwise convergence)}&=&\limsup_{i\to\infty} \left[h(z_i) - d_{\tilde{F}}(\eta_i(t),\tilde{x}_0 +z_i)\right]\\ (\eta_i \, \text{minimal geodesic})&=&\limsup_{i\to\infty} \left[h(z_i) - (d_{\tilde{F}}(\eta_i(0),\tilde{x}_0 +z_i)-d_{\tilde{F}}(\eta_i(0),\eta_i(t)))\right]\\ &=&\limsup_{i\to\infty} \left[h(z_i) - d_{\tilde{F}}(\tilde{x},\tilde{x}_0 +z_i)\right]+t\\ &=&f(\tilde{x})+t. \end{eqnarray*} Therefore $f(\eta(t)) = f(\tilde{x}) + t$, which implies the reverse inequality $\|d_{\tilde{x}} f\|_{\tilde{x}}^\ast\geq1$ wherever the differential exists. So $\|d_{\tilde{x}} f\|_{\tilde{x}}^\ast=1$ and property (2) is proved. \end{proof} Next we state the following intuitive result, a proof of which can be found in \cite[Lemma 5.1]{burago_ivanov}. \begin{lemma} \label{le:cyclic_order} Let $h_1, h_2, h_3 \in S^\ast_{st}$ be three linear forms, and $f_1 , f_2 , f_3$ the three associated functions defined using Lemma \ref{lem:cal}. Let $\tilde{x} \in \mathbb{R}^2$ be a point where $d_{\tilde{x}} f_i$ is defined for $i=1,2,3$. Then $\{d_{\tilde{x}} f_1 , d_{\tilde{x}} f_2 , d_{\tilde{x}} f_3\} \subset S^\ast_{\tilde{x}}$ have the same cyclic order as $\{h_1 , h_2 , h_3\} \subset S^\ast_{st}$. \end{lemma} Loosely speaking, this lemma holds true because two minimal geodesics (such as the $\eta_i$'s in the previous proof of Lemma \ref{lem:cal}) intersect at most once. This fact is purely $2$-dimensional, and no longer holds in higher dimensions. We are now ready to undertake the proof of Theorem \ref{th:decreasing_area}.
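Both halves of the computation below hinge on the same elementary identity: for cyclically ordered vectors $h_1,\ldots,h_N$ lying on the boundary of a convex body, $\frac{1}{2}\sum_{i=1}^N \det(h_i,h_{i+1})$ (cyclic index) equals the area of their convex hull; this is just the shoelace formula. The following minimal numerical check is an illustration only, not part of the proof; it assumes NumPy and SciPy are available.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

# Cyclically ordered points on the boundary of a convex body (an ellipse
# here), mimicking the h_i in S*_st; convex position is then automatic.
N = 7
theta = 2.0 * np.pi * (np.arange(N) + np.random.uniform(0, 1, N)) / N
h = np.stack([1.5 * np.cos(theta), 0.8 * np.sin(theta)], axis=1)

# (1/2) * sum_i det(h_i, h_{i+1}) over cyclically consecutive pairs.
shoelace = 0.5 * sum(np.linalg.det(np.stack([h[i], h[(i + 1) % N]]))
                     for i in range(N))

# Independent computation of the hull area ('volume' is area in 2-d).
assert np.isclose(shoelace, ConvexHull(h).volume)
\end{verbatim}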
Let $\{h_1, \dots, h_N\} \subset S^\ast_{st}$ be a collection of cyclically ordered and pairwise linearly independent linear forms, and $\{f_1,\ldots,f_N\}$ the associated functions defined using Lemma \ref{lem:cal}. Consider for $i=1,\ldots,N$ the linear maps \begin{eqnarray*} L_i : \mathbb{R}^2 & \to & \mathbb{R}^2\\ \tilde{x} & \mapsto & (h_i(\tilde{x}), h_{i+1}(\tilde{x})) \end{eqnarray*} and the maps \begin{eqnarray*} G_i : \mathbb{R}^2 & \to & \mathbb{R}^2\\ \tilde{x} & \mapsto & (f_i(\tilde{x}), f_{i+1}(\tilde{x})) \end{eqnarray*} using a cyclic index. Because each $L_i$ is linear, it induces a map $$ \bar{L}_i : \mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2 \to \mathbb{R}^2/L_i(\mathbb{Z}^2) $$ of degree one. Besides, for any $\tilde{x}\in \mathbb{R}^2$ and $z \in \mathbb{Z}^2$, we have that $G_i(\tilde{x}+z)= G_i(\tilde{x})+L_i(z)$. Therefore each $G_i$ also induces a well defined map $$ \bar{G}_i : \mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2 \to \mathbb{R}^2/L_i(\mathbb{Z}^2) $$ of degree one. The linear forms $d\tilde{x}_1$ and $d\tilde{x}_2$ on $\mathbb{R}^2$, being constant, induce $1$-forms on $\mathbb{R}^2/L_i(\mathbb{Z}^2)$ denoted by $dx_1$ and $dx_2$. Observe first that \begin{equation*} \abs{\mathbb{R}^2 / L_i(\mathbb{Z}^2)}= \int_{\mathbb{R}^2/L_i(\mathbb{Z}^2)} dx_1 \wedge dx_2=\int_{\mathbb{T}^2} \bar{L}_i^\ast (dx_1\wedge dx_2)= \int_{\mathbb{T}^2} h_i\wedge h_{i+1} \end{equation*} where the $h_i$'s are now thought of as $1$-forms on $\mathbb{T}^2$. Identifying $(\mathbb{R}^2)^\ast\simeq \mathbb{R}^2$ using the standard Euclidean structure, we see that $\frac{1}{2} h_i \wedge h_{i+1} = \abs{\Delta_i} dx_1 \wedge dx_2$, where $|\Delta_i|$ is the Lebesgue measure of the triangle $\Delta_i$ with vertices $0$, $h_i$ and $h_{i+1}$. Since the collection $\{h_1,\ldots,h_N\}$ of linear forms is cyclically ordered, we find that $ \frac{1}{2}\sum_{i = 1}^N h_i \wedge h_{i+1} = \abs{\conv{\set{h_i}_{i=1}^N}} dx_1 \wedge dx_2$. Thus \begin{equation*} \frac{1}{2} \sum_{i=1}^N \abs{\mathbb{R}^2/L_i(\mathbb{Z}^2)} = \frac{\abs{\conv{\set{h_i}_{i=1}^N}}}{\abs{B_{st}^\circ}} \int_{\mathbb{T}^2} \abs{B_{st}^\circ} dx_1 \wedge dx_2 = \pi\frac{\abs{\conv{\set{h_i}_{i=1}^N}}}{\abs{B_{st}^\circ}} \area_{HT}(\mathbb{T}^2,\|\cdot\|_{st}). \end{equation*} Observe that by choosing $\set{h_i}_{i=1}^N$ adequately we can make the ratio ${\abs{\conv{\set{h_i}_{i=1}^N}}}/{\abs{B_{st}^\circ}}$ arbitrarily close to $1$. In the other direction, \begin{equation*} \abs{\mathbb{R}^2 / L_i(\mathbb{Z}^2)}=\int_{\mathbb{T}^2} \bar{G}_i^\ast (dx_1\wedge dx_2)= \int_{\mathbb{T}^2} df_i\wedge df_{i+1} \end{equation*} where the $df_i$'s, being $\mathbb{Z}^2$-periodic, are also thought of as $1$-forms on $\mathbb{T}^2$. As above $\frac{1}{2} d_xf_i \wedge d_xf_{i+1} = \abs{\Delta_i(x)} dx_1 \wedge dx_2$, where $\Delta_i(x)$ is the triangle with vertices $0$, $d_x f_i$ and $d_x f_{i+1}$. Since the collection of linear forms $\{d_{\tilde{x}} f_1,\ldots,d_{\tilde{x}}f_{N}\}$ is cyclically ordered according to Lemma \ref{le:cyclic_order}, we find that $\sum_{i = 1}^N \frac{1}{2} d_x f_i \wedge d_x f_{i+1} = \abs{\conv{\set{d_x f_i}_{i=1}^N}} dx_1 \wedge dx_2$.
By convexity $\conv{\set{d_x f_i}_{i=1}^N} \subset K_x^\circ$, where $K_x$ denotes the unit ball of the Finsler metric at $x$, which gives \begin{equation*} \frac{1}{2} \sum_{i=1}^N \abs{\mathbb{R}^2/L_i(\mathbb{Z}^2)} = \int_{\mathbb{T}^2} \abs{\conv{\set{d_x f_i}_{i=1}^N}} dx_1 \wedge dx_2 \leq \int_{\mathbb{T}^2} \abs{K_x^\circ} dx_1 \wedge dx_2 = \pi \area_{HT}(\mathbb{T}^2,F). \end{equation*} Combining the two estimates, we get \begin{equation*} \area_{HT}(\mathbb{T}^2,F) \geq \frac{\abs{\conv{\set{h_i}_{i=1}^N}}}{\abs{B_{st}^\circ}} \area_{HT}(\mathbb{T}^2,\|\cdot\|_{st}) \end{equation*} and we conclude using the fact that the ratio ${\abs{\conv{\set{h_i}_{i=1}^N}}}/{\abs{B_{st}^\circ}}$ can be made arbitrarily close to $1$. \end{proof} \subsection{Holmes-Thompson area in the non-flat reversible case} Here is the optimal isosystolic inequality for Holmes-Thompson area and reversible Finsler metrics on the two-torus, first observed in \cite{sabourau}. \begin{theorem}[Sabourau's isosystolic inequality] Any reversible Finsler torus $(\mathbb{T}^2,F)$ satisfies the following optimal isosystolic inequality: \begin{equation*} \area_{HT}(\mathbb{T}^2,F) \geq \frac{2}{\pi} \sys{(\mathbb{T}^2,F)}^2. \end{equation*} \end{theorem} \begin{proof} Given a reversible Finsler metric on $\mathbb{T}^2$, we have that \begin{eqnarray*} \area_{HT}(\mathbb{T}^2,F) & \geq & \area_{HT}(\mathbb{T}^2,\|\cdot\|_{st}^F) \, \, \text{(by Theorem \ref{th:decreasing_area})}\\ &\geq &{2\over \pi} \sys(\mathbb{T}^2,\|\cdot\|_{st}^F)^2 \, \, \text{(by Theorem \ref{th:HT_reversible})}\\ &= &{2\over \pi} \sys(\mathbb{T}^2,F)^2 \, \, \text{(by Proposition \ref{prop:sys_stab})} \end{eqnarray*} and the proof is complete. \end{proof} \subsection{Holmes-Thompson area in the non-flat and non-reversible case} The optimal isosystolic inequality for Holmes-Thompson area and possibly non-reversible Finsler metrics on the two-torus appears in \cite{ABT}. \begin{theorem} Any Finsler torus $(\mathbb{T}^2,F)$ satisfies the following optimal isosystolic inequality: \begin{equation*} \area_{HT}(\mathbb{T}^2,F) \geq \frac{3}{2\pi} \sys(\mathbb{T}^2,F)^2. \end{equation*} \end{theorem} \begin{proof} Given a Finsler metric $F$ on $\mathbb{T}^2$, we have that \begin{eqnarray*} \area_{HT}(\mathbb{T}^2,F) & \geq & \area_{HT}(\mathbb{T}^2,\|\cdot\|_{st}^F) \, \, \text{(by Theorem \ref{th:decreasing_area})}\\ &\geq &{3\over 2 \pi} \sys(\mathbb{T}^2,\|\cdot\|_{st}^F)^2 \, \, \text{(by Theorem \ref{th:ABT})}\\ &= &{3\over 2 \pi} \sys(\mathbb{T}^2,F)^2 \, \, \text{(by Proposition \ref{prop:sys_stab})} \end{eqnarray*} and the proof is complete. \end{proof} \section{Optimal isosystolic inequality for Busemann-Hausdorff area} In this last section, we prove the optimal isosystolic inequality on the $2$-torus for the Busemann-Hausdorff notion of area and for reversible Finsler metrics. The main step is to prove the following analog of Theorem \ref{th:decreasing_area}, which was conjectured in \cite[Conjecture A]{burago_ivanov}. \begin{theorem} \label{th:decreasing_area_BH} Let $(\mathbb{T}^2,F)$ be a Finsler torus. Then: $$ \area_{BH}{(\mathbb{T}^2,F)} \geq \area_{BH}(\mathbb{T}^2,\|\cdot\|_{st}^F). $$ \end{theorem} Indeed, using this result, we can easily prove the optimal isosystolic inequality presented in the introduction.
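Before turning to the proofs, here is a quick numerical sanity check of the flat ingredient used below, namely Minkowski's bound $\area_{BH}\geq \frac{\pi}{4}\sys^2$ for flat reversible tori. For a flat torus $\mathbb{R}^2/\mathbb{Z}^2$ with unit ball $B$, the Busemann-Hausdorff area equals $\pi/|B|$ and the systole is the minimum of the norm over non-zero integer vectors. The sketch below (an illustration only, with a hypothetical choice of norm; it assumes NumPy is available) verifies the inequality for the $\ell^4$ norm.
\begin{verbatim}
import numpy as np

def norm(v):  # example: the l^4 norm, a reversible Minkowski norm
    v = np.asarray(v, dtype=float)
    return (np.abs(v) ** 4).sum(axis=-1) ** 0.25

# Monte Carlo estimate of |B| inside the bounding box [-1, 1]^2.
pts = np.random.uniform(-1.0, 1.0, size=(200000, 2))
ball_area = 4.0 * np.mean(norm(pts) <= 1.0)

# Systole of the flat torus R^2/Z^2: minimum over non-zero integer
# vectors (a small search window suffices for a convex unit ball).
lattice = [(i, j) for i in range(-5, 6) for j in range(-5, 6)
           if (i, j) != (0, 0)]
sys_flat = min(norm(z) for z in lattice)

area_BH = np.pi / ball_area      # fundamental domain has unit area
print(area_BH >= (np.pi / 4.0) * sys_flat ** 2)   # Minkowski: True
\end{verbatim}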
\begin{proof}[Proof of Theorem \ref{th:opt}] Given a reversible Finsler metric $F$ on $\mathbb{T}^2$, we have that \begin{eqnarray*} \area_{BH}(\mathbb{T}^2,F) & \geq & \area_{BH}(\mathbb{T}^2,\|\cdot\|_{st}^F) \, \, \text{(by Theorem \ref{th:decreasing_area_BH})}\\ &\geq &{\pi \over 4} \sys(\mathbb{T}^2,\|\cdot\|_{st}^F)^2 \, \, \text{(by Minkowski's first Theorem \ref{th:Mink})}\\ &= &{\pi\over 4} \sys(\mathbb{T}^2,F)^2 \, \, \text{(by Proposition \ref{prop:sys_stab})} \end{eqnarray*} and the proof is complete. \end{proof} So we are left to prove the above theorem. \begin{proof}[Proof of Theorem \ref{th:decreasing_area_BH}] Let $(\mathbb{T}^2,F)$ be a Finsler torus. Note that the metric here is not assumed to be reversible. Choose a collection $\{h_1, \dots, h_N\} \subset S^\ast_{st}$ of cyclically ordered and pairwise linearly independent linear forms. For $i=1,\ldots,N$ define $w_i\in \mathbb{R}^2$ by the equations \[ \left\{ \begin{array}{ccc} h_i(w_i) & = & 1 \\ h_{i+1}(w_i) &= &1 \\ \end{array} \right. \] using a cyclic index. Then define the linear form $\Omega_i$ as the contraction of the standard symplectic form by the vector $w_i$. In coordinates, if $w_i =(w_{i1},w_{i2})$ then $\Omega_i=-w_{i2}dx_1+w_{i1}dx_2$. Finally, in analogy with the previous section, define for $i=1,\ldots,N$ the linear maps \begin{eqnarray*} {\mathcal L}_i : \mathbb{R}^2 & \to & \mathbb{R}^2\\ \tilde{x} & \mapsto & (\Omega_i(\tilde{x}), \Omega_{i+1}(\tilde{x})) \end{eqnarray*} which induce maps $\bar{\mathcal L}_i : \mathbb{T}^2=\mathbb{R}^2/\mathbb{Z}^2 \to \mathbb{R}^2/{\mathcal L}_i(\mathbb{Z}^2)$ of degree one. Observe that \begin{equation*} \abs{\mathbb{R}^2 / {\mathcal L}_i(\mathbb{Z}^2)}= \int_{\mathbb{R}^2/{\mathcal L}_i(\mathbb{Z}^2)} dx_1 \wedge dx_2=\int_{\mathbb{T}^2} \bar{{\mathcal L}}_i^\ast (dx_1\wedge dx_2)=\int_{\mathbb{T}^2} \Omega_i\wedge \Omega_{i+1} \end{equation*} where the $\Omega_i$'s are now thought of as $1$-forms on $\mathbb{T}^2$. Now we easily check that $\Omega_i \wedge \Omega_{i+1} = \det (w_i,w_{i+1}) \, dx_1\wedge dx_2=2 \, \abs{{\Delta}_i} \, dx_1 \wedge dx_2$, where $|\Delta_i|$ is the Lebesgue measure of the triangle $\Delta_i$ with vertices $0$, $w_i$ and $w_{i+1}$. Since the collection $\{h_1,\ldots,h_N\}$ of linear forms is cyclically ordered, the collection $\{w_1,\ldots,w_N\}$ of vectors is also cyclically ordered (see Figure \ref{fig:cyclicorder}), so we have that $\sum_{i=1}^N \abs{\mathbb{R}^2/{\mathcal{L}}_i(\mathbb{Z}^2)}=2 \abs{\conv{\set{w_i}_{i=1}^N}}$. By choosing $\set{h_i}_{i=1}^N$ adequately we can make the ratio ${\abs{\conv{\set{w_i}_{i=1}^N}}}/{\abs{B_{st}}}$ arbitrarily close to $1$. \begin{figure}[h!] \includegraphics[scale=1]{cyclicorder.pdf} \caption{Cyclic order of the vectors $\{w_1,\ldots,w_N\}$ induced by the cyclic order of the linear forms $\{h_1,\ldots,h_N\}$.} \label{fig:cyclicorder} \end{figure} Now let $\{f_1,\ldots,f_N\}$ be the functions associated to the collection $\{h_1, \dots, h_N\} \subset S^\ast_{st}$ and defined using Lemma \ref{lem:cal}. If the functions of the collection $\{f_1,\ldots,f_N\}$ are not all smooth, we first smooth them as follows. Remember that these functions are $1$-Lipschitz with respect to the Finsler metric $\tilde{F}$, see the proof of Lemma \ref{lem:cal}.
According to \cite[Appendix]{Matv13}, for any positive integer $n$ there exist smooth functions $f_{1,n},\ldots,f_{N,n}:\mathbb{R}^2 \to \mathbb{R}$ which are $(1+1/n)$-Lipschitz with respect to the Finsler metric $\tilde{F}$ and satisfy $\|f_i-f_{i,n}\|_\infty \leq 1/n$. Using the $\mathbb{Z}^2$-periodicity of the metric and the fact that the $f_i$'s satisfy Property $(1)$ of Lemma \ref{lem:cal}, we easily see that we can further assume that the $f_{i,n}$'s also share the same property, namely: $$ f_{i,n}(\tilde{x}+z) = f_{i,n}(\tilde{x}) + h_i(z) $$ for all $\tilde{x} \in \mathbb{R}^2$ and $z \in \mathbb{Z}^2$. In fact, we only need to choose the locally finite cover in the proof of \cite[Appendix]{Matv13} to be $\mathbb{Z}^2$-periodic. Next, by Lemma \ref{le:cyclic_order}, for almost every $\tilde{x} \in \mathbb{R}^2$ the linear forms $\{d_{\tilde{x}}f_1,\ldots,d_{\tilde{x}} f_N\}$ belong to $S^\ast_{\tilde{x}}$ and are cyclically ordered. As the approximating functions are constructed by convolution, by possibly passing to a subsequence we can also assume the linear forms $\{d_{\tilde{x}}f_{1,n},\ldots,d_{\tilde{x}} f_{N,n}\}$ to be cyclically ordered for every $\tilde{x} \in \mathbb{R}^2$. Indeed we have for $j=1,2$ that $$ {\partial f_{i,n} \over \partial \tilde{x}_j} \overset{L^1}{\to} {\partial f_i \over \partial \tilde{x}_j}, $$ see for instance \cite[Theorem 1 p.264]{EvansPDE}. This implies, by possibly passing to a subsequence, that $$ {\partial f_{i,n} \over \partial \tilde{x}_j} \overset{a.e.}{\to} {\partial f_i \over \partial \tilde{x}_j} $$ which ensures this additional property provided $n$ is large enough. For $i=1,\ldots,N$ and for $\tilde{x} \in \mathbb{R}^2$ we define $\hat{w}_{i,n}(\tilde{x})\in T_{\tilde{x}}\mathbb{R}^2\simeq \mathbb{R}^2$ through the equations \[ \left\{ \begin{array}{ccc} d_{\tilde{x}} f_{i,n}(\hat{w}_{i,n}(\tilde{x})) & = & 1 \\ d_{\tilde{x}} f_{i+1,n}(\hat{w}_{i,n}(\tilde{x})) &= &1 \\ \end{array} \right. \] using a cyclic index. The collection $\{\hat{w}_{1,n}(\tilde{x}),\ldots,\hat{w}_{N,n}(\tilde{x})\}$ of vectors depends smoothly on the point $\tilde{x}$ and is also cyclically ordered. Define a smooth $1$-form by setting $\hat{\Omega}_{i,n}(\tilde{x})=-\hat{w}_{i2,n}(\tilde{x}) dx_1+\hat{w}_{i1,n}(\tilde{x}) dx_2 \in (\mathbb{R}^2)^\ast$. \begin{lemma}\label{lem:exact} The following holds true: $d\hat{\Omega}_{i,n}=0$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:exact}] Observe that for any $\tilde{x} \in \mathbb{R}^2$ $$ \hat{w}_{i,n}(\tilde{x})= \left( \begin{array}{c} d_{\tilde{x}} f_{i,n} \\ d_{\tilde{x}} f_{i+1,n} \end{array} \right)^{-1} \left( \begin{array}{c} 1 \\ 1 \end{array} \right) $$ which implies that $d_{\tilde{x}} \hat{w}_{i,n}=0$. Therefore $d_{\tilde{x}}\hat{\Omega}_{i,n}=d_{\tilde{x}} \hat{w}_{i,n} (1,1) \, dx_1\wedge dx_2=0.$ \end{proof} By the Poincar\'e Lemma there exists a unique smooth function $W_{i,n}:\mathbb{R}^2\to \mathbb{R}$ such that $W_{i,n}(0)=0$ and $dW_{i,n}=\hat{\Omega}_{i,n}$. \begin{lemma}\label{lem:periodic} For any $\tilde{x} \in \mathbb{R}^2$ and $z \in \mathbb{Z}^2$ the following holds true: $$ W_{i,n}(\tilde{x}+z)=W_{i,n}(\tilde{x})+\Omega_i(z). $$ \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:periodic}] Observe that $$ W_{i,n}(\tilde{x}+z)=W_{i,n}(\tilde{x})+\Omega_i(z) \Leftrightarrow \int_{[\tilde{x},\tilde{x}+z]} \hat{\Omega}_{i,n}=\int_{[\tilde{x},\tilde{x}+z]} \Omega_i.
$$ By Property $(1)$ in Lemma \ref{lem:cal}, we have that $\hat{\Omega}_{i,n}(\tilde{x}+z)=\hat{\Omega}_{i,n}(\tilde{x})$, so we can think of the $\hat{\Omega}_{i,n}$'s as $1$-forms on $\mathbb{T}^2$. Therefore what we shall prove is that $[\hat{\Omega}_{i,n}]=[\Omega_i]$ as de Rham cohomology classes in $H^1_{dR}(\mathbb{T}^2;\mathbb{R})$. Now remember that the pairing \begin{eqnarray*} H^1_{dR}(\mathbb{T}^2;\mathbb{R}) \times H^1_{dR}(\mathbb{T}^2;\mathbb{R}) & \to & \mathbb{R}\\ (\alpha,\beta) & \mapsto & \langle \alpha,\beta \rangle:= \int_{\mathbb{T}^2} \alpha \wedge \beta \end{eqnarray*} is perfect. On the one hand we have that $$ \langle [h_i],[\Omega_i]\rangle=\int_{\mathbb{T}^2} h_i \wedge \Omega_i=\int_{\mathbb{T}^2} h_i(w_i) \, dx_1\wedge dx_2 =1, $$ and in the same way $\langle [h_{i+1}],[\Omega_i]\rangle=1$. On the other hand, Property $(1)$ in Lemma \ref{lem:cal} implies that $[h_i]=[df_{i,n}]$ in $H^1(\mathbb{T}^2,\mathbb{R})$. Therefore $$ \langle [h_i],[\hat{\Omega}_{i,n}]\rangle=\int_{\mathbb{T}^2} df_{i,n} \wedge \hat{\Omega}_{i,n}=\int_{\mathbb{T}^2} df_{i,n} (\hat{w}_{i,n}) \, dx_1\wedge dx_2=1, $$ and $\langle [h_{i+1}],[\hat{\Omega}_{i,n}]\rangle=1$ as well. Since the pairing is perfect and the classes $[h_i]$ and $[h_{i+1}]$ are linearly independent, we conclude that $[\Omega_i]=[\hat{\Omega}_{i,n}]$ as claimed, and the Lemma is proved. \end{proof} Therefore the map $(W_{i,n},W_{i+1,n}):\mathbb{R}^2\to\mathbb{R}^2$ induces a map $\bar{W}_{i,n}: \mathbb{T}^2 \to \mathbb{R}^2/{\mathcal L}_i(\mathbb{Z}^2)$ of degree one, and we get that \begin{equation*} \abs{\mathbb{R}^2 / {\mathcal L}_i(\mathbb{Z}^2)}=\int_{\mathbb{T}^2} \bar{W}_{i,n}^\ast (dx_1\wedge dx_2)= \int_{\mathbb{T}^2} \hat{\Omega}_{i,n}\wedge \hat{\Omega}_{i+1,n}. \end{equation*} As above, for every $\tilde{x}\in \mathbb{R}^2$ we have that $\hat{\Omega}_{i,n}(\tilde{x}) \wedge \hat{\Omega}_{i+1,n}(\tilde{x}) = \det (\hat{w}_{i,n}(\tilde{x}),\hat{w}_{i+1,n}(\tilde{x})) \, dx_1\wedge dx_2=2 \, \abs{{\hat{\Delta}}_{i,n}(\tilde{x})} \, dx_1 \wedge dx_2$, where $\abs{{\hat{\Delta}}_{i,n}(\tilde{x})}$ is the Lebesgue measure of the triangle $\hat{\Delta}_{i,n}(\tilde{x})$ with vertices $0$, $\hat{w}_{i,n}(\tilde{x})$ and $\hat{w}_{i+1,n}(\tilde{x})$. Since the collection $\{\hat{w}_{1,n}(\tilde{x}),\ldots,\hat{w}_{N,n}(\tilde{x})\}$ is cyclically ordered, we have that $\sum_{i=1}^N \abs{\mathbb{R}^2/{\mathcal{L}}_i(\mathbb{Z}^2)}=2 \int_{\mathbb{T}^2}\abs{\conv{\set{\hat{w}_{i,n}(\tilde{x})}_{i=1}^N}} dx_1\wedge dx_2$. Observe that since $\|d_{\tilde{x}}f_{i,n}\|_{\tilde{x}}^\ast\leq 1+1/n$, convexity gives $K_x\subset\conv{\set{\left(1+{1\over n}\right)\hat{w}_{i,n}(\tilde{x})}_{i=1}^N}$. Now we deduce that \begin{eqnarray*} \area_{BH}(\mathbb{T}^2,F)=\int_{\mathbb{T}^2} {\pi \over \abs{K_x}} \, dx_1\wedge dx_2 & \geq & {1 \over \left(1+{1\over n}\right)^2}\int_{\mathbb{T}^2} {\pi \over \abs{\conv{\set{\hat{w}_{i,n}(\tilde{x})}_{i=1}^N}}} \, dx_1\wedge dx_2 \\ \, \, \text{(by the Cauchy-Schwarz inequality)} & \geq & {1 \over \left(1+{1 \over n}\right)^2}{\pi \over \int_{\mathbb{T}^2} \abs{\conv{\set{\hat{w}_{i,n}(\tilde{x})}_{i=1}^N}} \, dx_1\wedge dx_2} \\ &= & {1 \over \left(1+{1 \over n}\right)^2}{2\pi \over\sum_{i=1}^N \abs{\mathbb{R}^2/{\mathcal{L}}_i(\mathbb{Z}^2)}}\\ &= & {1 \over \left(1+{1 \over n}\right)^2}\,{\abs{B_{st}}\over \abs{\conv{\set{w_i}_{i=1}^N}}} \cdot \area_{BH}(\mathbb{T}^2,\|\cdot\|_{st}) \end{eqnarray*} from which, letting $n\to\infty$, we derive that $$ \area_{BH}(\mathbb{T}^2,F)\geq {\abs{B_{st}}\over \abs{\conv{\set{w_i}_{i=1}^N}}} \cdot \area_{BH}(\mathbb{T}^2,\|\cdot\|_{st}).
$$ We conclude the proof by observing that the ratio ${\abs{B_{st}}\over \abs{\conv{\set{w_i}_{i=1}^N}}}$ can be made arbitrarily close to $1$. \end{proof} \noindent {\bf Acknowledgements.} We would like to thank K. Tzanev for providing Figure \ref{integerlines}, and D. Azagra for pointing out a useful reference on the approximation of Lipschitz functions by smooth ones.\\
\section{Introduction} \label{sec:intro} Expanding interest in cool objects and in high redshifts has driven continual progress in infrared astronomy, as enabled by tremendous improvements in detector sensitivity \citep{low2007}. Within the past decade, the Wide-field Infrared Survey Explorer \citep[WISE;][]{wright2010} has provided an orders of magnitude leap forward relative to its predecessor IRAS \citep{wheelock1994}, mapping the entire sky at 3$-$22 microns with unprecedented sensitivity. As next-generation infrared missions like JWST, Euclid and WFIRST move forward, maximizing the value of existing infrared surveys like WISE will be critical. By now, over 80\% of archival WISE data have been acquired via an ongoing asteroid-characterization mission called NEOWISE \citep{neowiser}. We are leading a wide-ranging effort to repurpose NEOWISE observations for astrophysics, starting by building deep full-sky coadds from tens of millions of 3.4 micron (W1) and 4.6 micron (W2) exposures. Through our resulting ``unWISE'' line of data products, we have already created the deepest ever full-sky maps at 3$-$5 microns \citep{fulldepth_neo1, fulldepth_neo2, fulldepth_neo3}, generated a new class of time-domain WISE coadds \citep{tr_neo2, tr_neo3}, and performed forced photometry on these custom WISE coadds at the locations of more than a billion optically detected sources \citep{unwise_sdss_forcedphot, schlegel2015, dey2018}. However, until now there has never been a WISE-selected catalog leveraging the enhanced depths achieved by incorporating NEOWISE data. Here we create and release such a catalog. Although WISE delivers exceptionally uniform and high quality imaging, its analysis requires careful application of appropriate computational techniques. At the WISE resolution of $\sim$6$\arcsec$, many sources substantially overlap others in the images, even at high Galactic latitudes where the fewest sources are detected. This blending together of nearby sources renders many standard photometry codes unusable. The \texttt{crowdsource} crowded field point source photometry code \citep{Schlafly:2018}, recently developed for the DECam Galactic Plane Survey (DECaPS)\footnote{\url{http://decaps.skymaps.info}}, is well-suited to the task of modeling unWISE images, where nearly all objects are unresolved and blending is pervasive. We have applied the \verb|crowdsource| photometry pipeline to deep unWISE coadds built from five years of publicly available WISE and NEOWISE imaging. The result is a catalog of $\sim$2 billion unique objects detected in the W1 and/or W2 channels, reaching depths $\sim$0.7 magnitudes fainter than those achieved by AllWISE \citep{Cutri:2013}. Our ``unWISE Catalog'' can therefore be considered a deeper W1/W2 successor to AllWISE with more than twice as many securely detected objects. This new catalog will have far-reaching implications, from discovering previously overlooked brown dwarfs in the solar neighborhood \citep[e.g.,][]{kirkpatrick2011} to revealing quasars in the epoch of reionization \citep[e.g.,][]{banados2018}. In $\S$\ref{sec:wise} we recap the relevant history of the WISE and NEOWISE missions. In $\S$\ref{sec:unwise} we briefly highlight salient features of the unWISE coadds which form the basis of our unWISE Catalog. In $\S$\ref{sec:crowdsource} we describe our photometry pipeline. In $\S$\ref{sec:catalog} we provide an overview and evaluation of our resulting catalog. 
In $\S$\ref{sec:limitations} we discuss limitations of our current catalog processing and related avenues for future improvements. In $\S$\ref{sec:release} we describe the data release contents. We conclude in $\S$\ref{sec:conclusion}. \section{WISE Overview} \label{sec:wise} Launched in late 2009, the WISE satellite resides in a $\sim$95 minute period low-Earth orbit. During the first half of 2010, WISE completed its primary mission by mapping the entire sky once in all four of its available channels, labeled W1 (3.4 microns), W2 (4.6 microns), W3 (12 microns) and W4 (22 microns), with point spread function (PSF) full widths at half-maximum (FWHM) of 6.1, 6.4, 6.5, and 12\arcsec, respectively. Over the following months, WISE ceased observations in W3 and W4 due to cryogen depletion, but nevertheless continued observing in W1 and W2 through early 2011 thanks to an asteroid-characterizing extension called NEOWISE \citep{neowise}. In 2011 February, WISE was placed into hibernation for nearly three years. In late 2013, however, it was reactivated, and resumed W1/W2 observations as the NEOWISE-Reactivation mission \citep[NEOWISER;][]{neowiser}. The ongoing NEOWISE mission has now obtained nearly five full years (10 full-sky mappings) of W1 and W2 imaging, and has publicly released single-exposure images and catalogs corresponding to the first four of those years \citep[observations acquired between 2013 December and 2017 December;][]{neowise_supplement}. Because NEOWISE is an asteroid characterization and discovery project, the mission itself does not publish any coadded data products of the sort that would maximize the raw NEOWISE data's value for Galactic and extragalactic astrophysics. AllWISE \citep{Cutri:2013} represents the most recent such set of coadded data products published by the WISE/NEOWISE teams, but was released at a time when only one fifth of the presently available W1/W2 data had been acquired. Because AllWISE already incorporates all available W3 and W4 imaging, we only construct catalogs in W1 and W2 in this work. \section{unWISE Coadd Images} \label{sec:unwise} Ideally, our WISE cataloging would proceed by directly and jointly modeling all available W1/W2 exposures. However, doing so would be computationally intensive because these inputs represent $\sim$175 terabytes of data spread across $\sim$25 million single-exposure (``L1b'') images. As a computational convenience, our cataloging operates on a full-sky set of 36,480 coadded images totaling less than 1 terabyte in size. Specifically, we model deep unWISE coadds built from five years of single-exposure images in each of W1 and W2. These unWISE coadds uniformly incorporate all publicly available single-exposure images ever acquired in these bands, spanning the WISE and NEOWISE mission phases. The unWISE coaddition procedure is described in \cite{lang_unwise_coadds}, \cite{fulldepth_neo1, fulldepth_neo2} and \cite{fulldepth_neo3}. The five-year unWISE coadds used in this work are yet to be publicly released (Meisner et al. 2019, in prep.). The unWISE coadds attain a 5$\times$ increase in total exposure time relative to the AllWISE coadds, so that we expect to achieve depths substantially beyond those attained by the AllWISE Source Catalog. Furthermore, the added redundancy of NEOWISE imaging allows our catalog to be relatively free of time-dependent systematics that were present in AllWISE, especially contamination from scattered moonlight.
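The scalings quoted in this section are elementary to reproduce. The following sketch (an illustration only, assuming NumPy is available) records the naive depth gain expected from a $5\times$ increase in exposure time for background-limited imaging, a projection revisited in \textsection\ref{subsec:completeness}, together with the approximate data volume of the coadd set assuming float32 intensity images.
\begin{verbatim}
import numpy as np

# Naive depth gain from 5x the AllWISE exposure time, assuming
# background-limited imaging and pre/post-hibernation data of
# identical quality: limiting flux improves as sqrt(t).
gain_mag = 2.5 * np.log10(np.sqrt(5.0))       # ~0.87 mag

# Approximate coadd data volume (float32 intensity images).
n_images, side, bytes_per_pix = 36480, 2048, 4
volume_tb = n_images * side ** 2 * bytes_per_pix / 1e12   # ~0.6 TB
print(f"{gain_mag:.2f} mag deeper, {volume_tb:.2f} TB of coadds")
\end{verbatim}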
The 36,480 unWISE coadd images are each $2048\times2048$ pixels in size with a pixel scale of 2.75\arcsec, covering about 2.5 square degrees. Each image is identified by one of 18,240 \verb|coadd_id| values giving the location of the coadd on the sky, and a band (W1 or W2). The unWISE tile centers and footprints match those of the AllWISE Atlas stacks. \section{Crowded Field Photometry Pipeline} \label{sec:crowdsource} Cursory inspection of even the highest-latitude, least crowded images reveals that many sources in the unWISE coadds are significantly blended with their neighbors. Accordingly, fully taking advantage of the unWISE coadds requires a crowded-field photometry pipeline that simultaneously models the many blended sources in each field. The WISE images are in many ways ideal for crowded-field analysis techniques. A substantial challenge in modeling crowded fields is accurate determination of the PSF and its wings. Fortunately, the WISE satellite has a very stable PSF owing to its location above the atmosphere. Accordingly, we can adopt a nearly constant model for the shape of the PSF, and tweak only the very central region as necessary to improve the fit. The WISE images are also ideal for crowded-field analysis techniques because of their relatively large $\sim 6\arcsec$ PSF FWHM. This large PSF means that most distant galaxies are not resolved and can be adequately modeled as point sources, introducing only a small bias. The unWISE Catalog analysis simply assumes that \emph{all} sources are point sources. This tremendously simplifies the modeling relative to typical optical extragalactic surveys with $\sim 1\arcsec$ PSFs, where detailed modeling of the shapes of galaxies is required to match the observed images. We use the \texttt{crowdsource} analysis pipeline to model the unWISE images \citep{Schlafly:2018}. This pipeline simultaneously models all of the sources in each $512\times512$ pixel region of an unWISE tile, optimizing the positions and fluxes of the sources as well as the background sky to minimize the difference between the observed image and the model. This pipeline was designed for ground-based optical images, but application to WISE images poses few additional problems. Figure~\ref{fig:crowdsourceexample} shows examples of the \texttt{crowdsource} modeling in three fields of very different source densities. The first column shows a portion of the COSMOS field, at high Galactic latitude; the second shows a portion of the Galactic anticenter; and the third shows the Galactic bulge. At high latitudes, and even directly in the Galactic plane, the \texttt{crowdsource} model (second row) is an excellent description of the unWISE images (first row). The differences between the two (third row) stem largely from shot noise in the images, at least outside the cores of bright stars. Inspection of the residual image at the locations of the detected sources (fourth row) often shows no coherent residual signatures. \begin{figure}[htb] \begin{center}\includegraphics[width=\columnwidth]{crowdsourceexample}\end{center} \caption{ \label{fig:crowdsourceexample} \texttt{crowdsource} modeling results in three fields with very different source densities: the high-latitude COSMOS field (left), the Galactic anticenter (middle), and the Galactic bulge (right). From top to bottom, the rows show the unWISE coadded images, the \texttt{crowdsource} model, the residuals, and the residuals with the locations of cataloged sources overplotted.
Except in the bulge field, shot noise accounts for a substantial fraction of the residuals. On the other hand, in the densest regions, like the bulge, the residuals are completely dominated by unresolved sources and challenges in sky subtraction. The bulge field is stretched $10\times$ less hard than the other two fields. In all cases, the model images account for most of the flux in the real images, though clearly in the bulge case significant residuals remain. All images are W1. } \end{figure} Meanwhile, in the Galactic bulge (third column), the story is very different. The large WISE PSF coupled with the tremendous number of sources and insensitivity to dust extinction makes this field very confused. While the bulge image (top row, right) and model (second row, right) are qualitatively in good agreement, the residuals (third row, right) are entirely dominated by coherent structures stemming from the incomplete identification of significant sources in the field. It may be possible to do better here by allowing \texttt{crowdsource} to more aggressively identify unmodeled stars overlapping brighter neighbors, but even very slight errors in the PSF model will have substantial effects on the residuals in fields as dense as this one, rendering the results of a more aggressive source identification uncertain at best. A modest number of modifications and improvements were made to the \texttt{crowdsource} pipeline to allow it to model WISE images. These included changes to the PSF modeling, the mosaicing strategy, the sky modeling, and the treatment of nebulosity and large galaxies. Additionally, a new diagnostic field \texttt{spread\_model} was added to the pipeline outputs, which can help determine the size of detected objects. \subsection{PSF modeling} \label{subsec:psf} The most significant addition to \texttt{crowdsource} was a PSF modeling module specifically designed for WISE. The unWISE PSF module\footnote{\url{https://github.com/legacysurvey/unwise_psf}} provides a $325\times325$ pixel PSF, developed using bright isolated stars \citep{wise_dust_map}. This PSF extends far into the wings of the WISE PSF, which has a full-width at half-maximum of only about 2.5 pixels. It includes details of the PSF like diffraction spikes, optical ghosts, and the optical halo. This model was originally designed for application to the unWISE single-exposure images. The unWISE coadd images sum many single-exposure images together, necessitating changes to the original unWISE PSF model. Near the ecliptic poles, the single-exposure images contributing to a given coadd image span a wide range of different detector orientations relative to the sky, leading the final coadd image PSF to be blurred over a range of azimuth. This is modeled by transformation of the PSF to polar coordinates and convolution with a boxcar kernel in azimuth, with the width and position of the boxcar kernel dependent on ecliptic latitude and longitude. The PSF is then projected back to Cartesian coordinates. The resulting model gives an excellent description of the observed PSF at any particular point in the survey. The convolution process is somewhat expensive, however. To save time, a grid of these PSF models is generated for each unWISE tile. At low ecliptic latitudes, the coadd PSF is essentially constant over each unWISE tile, and a relatively coarse grid is adequate.
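The azimuthal smearing just described can be sketched compactly. The following is a minimal illustration under simplifying assumptions (a square PSF stamp, a single smearing width, and crude nearest-neighbor handling of the polar seam and stamp corners), using SciPy; the function name and interface are hypothetical and do not correspond to the actual \texttt{unwise\_psf} module.
\begin{verbatim}
import numpy as np
from scipy.ndimage import map_coordinates, uniform_filter1d

def blur_psf_azimuthally(psf, width_deg):
    """Smear a centered, square PSF stamp over a range of position
    angles: resample to polar coordinates, boxcar-filter in azimuth
    (periodic), and resample back. Hypothetical illustration only."""
    n = psf.shape[0]
    c = (n - 1) / 2.0                       # stamp center
    n_theta, n_r = 720, n // 2
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.arange(n_r), theta, indexing='ij')
    # Cartesian -> polar resampling (rows are radius, columns azimuth).
    polar = map_coordinates(psf, [c + rr * np.sin(tt), c + rr * np.cos(tt)],
                            order=1)
    # Boxcar convolution in azimuth with periodic boundary conditions.
    width_pix = max(1, int(round(width_deg / 360.0 * n_theta)))
    polar = uniform_filter1d(polar, size=width_pix, axis=1, mode='wrap')
    # Polar -> Cartesian resampling; corners beyond n_r and the theta
    # seam are handled crudely here by nearest-neighbor clamping.
    y, x = np.mgrid[0:n, 0:n] - c
    rad = np.hypot(y, x)
    ang = np.mod(np.arctan2(y, x), 2.0 * np.pi) / (2.0 * np.pi) * n_theta
    out = map_coordinates(polar, [rad, ang], order=1, mode='nearest')
    return out * psf.sum() / out.sum()      # preserve total flux
\end{verbatim}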
At high ecliptic latitudes (within about 20\ensuremath{^\circ}\ of the ecliptic poles), the mean spacecraft orientation and the width of the range of orientations vary more significantly across each coadd tile. Modeling this requires a denser grid of PSFs. Within 5\ensuremath{^\circ}\ of the north ecliptic pole, one PSF per $128\times128$ pixel region is generated. The \texttt{crowdsource} pipeline linearly interpolates between these PSFs when generating a model for a source on a tile. These PSFs are reasonably accurate within a hundred pixels of the PSF center. Beyond this region, subtleties with the world-coordinate system of the coadds versus the PSF become important. Moreover, some coherent residuals in the PSF cores are also apparent. The \texttt{crowdsource} PSF module addresses these by, at each \texttt{crowdsource} modeling iteration, finding the median residual over all bright, unsaturated stars in the inner $9\times9$ pixel PSF core, and adding it to the PSF model. Typical remaining residuals stem from imperfect subpixel interpolation of the PSF, as indicated by ringing in the residuals around bright stars, as seen in the anticenter residual image in Figure~\ref{fig:crowdsourceexample}. Further effort could eliminate these artifacts; the huge number of stars in each image clearly provides more than enough information to constrain the subpixel shape of the PSF. \subsection{Mosaicing Strategy} Modeling crowded images can require substantial amounts of computer memory. Individual unWISE coadd images in the inner Galaxy contain more than 200,000 detected sources. \texttt{crowdsource} constructs a sparse matrix containing the PSF for each of these sources. Bright sources can require PSFs extending out to $299\times299$ pixels, for $\sim$90,000 values per source, so a naive approach could require $\sim200$ GB of memory for the sparse matrix alone in this extreme case (200,000 sources $\times$ $\sim$90,000 double-precision values is already $\sim$144 GB before indexing overhead). To make this more manageable, \texttt{crowdsource} splits each unWISE tile into $512\times512$ pixel subimages, with an additional 150 pixel border on each side. In the original incarnation of \texttt{crowdsource} for the DECam Plane Survey, the final catalogs for each subimage were created in turn. In order to prevent duplicate source detection in overlap regions, the analysis of later subimages included fixed sources at the locations of sources detected in earlier subimages. In unWISE, however, source detection occurs for the entire coadd simultaneously. The parameters of these sources are then optimized on each image subregion. Finally, again over the entire coadd, the PSF is refined and source detection is repeated. This new approach preserves the computer-memory advantages of the former approach while allowing PSF modeling to be performed on the entire coadd and more gracefully handling sources in the overlap regions between subimages. \subsection{Sky Modeling} A slightly different approach for sky modeling was taken for WISE than for the DECam Plane Survey. In the DECam Plane Survey, the sparse linear algebra solver was allowed to adjust the overall sky level simultaneously with the fluxes of the stars. In dense regions, this allows an initial sky overestimate (due to the presence of sources in the image) to be improved by simultaneously decreasing the sky and increasing the fluxes of the sources. This global approach has the disadvantage that isolated, very poorly fitting regions of the image can significantly drive the sky estimate over the entire image.
The WISE coadds feature a larger dynamic range than DECam images, making it easier for small residuals in the cores of bright stars to make outsize contributions to the likelihood in the fitting. This problem was addressed by eliminating the overall sky parameter from the sparse linear algebra fit. The sky, however, is still improved at each iteration by median filtering the residual image, as in \citet{Schlafly:2018}. This area of the pipeline offers room for improvement; see \textsection\ref{subsec:skysubtractionlimitation}. \subsection{Nebulosity and large galaxies} \label{subsec:nebulosity} Reflected starlight and thermally emitted light from dust grains can add a diffuse component with rich small-scale structure to observed infrared images. The \texttt{crowdsource} modeling assumes that all flux not explained by a smooth sky model must be attributable to point sources. Similarly, large galaxies present in the WISE imaging will be split into many point sources unless preventative measures are taken. To address both of these cases, we identify nebulosity and large galaxies ahead of time and mark these regions. During source detection, candidate sources found in regions marked as containing nebulosity are then required to be sharp; significantly blended objects in regions with nebulosity will not be modeled. In \citet{Schlafly:2018}, these sources were additionally required to not overlap any neighbors substantially, but for WISE we removed this constraint. In the optical, regions with nebulosity are usually sparsely populated with stars; the extinction associated with the gas and dust limits the depth of the survey in nebulous areas. In contrast, the infrared light observed by WISE is hardly extinguished by dust, so applying a strong blending criterion during source detection would eliminate many real sources. Sources found in regions containing large galaxies, on the other hand, are not required to be sharp, but \emph{are} required to not overlap any neighboring source substantially. This is appropriate since peaks corresponding to large galaxies will naturally be extended and fail a sharpness cut, but only a single source should be associated with these galaxies. Requiring that new sources not significantly overlap existing ones discourages \texttt{crowdsource} from splitting these galaxies into many point-source components. The delivered fluxes, however, will still be inaccurate, as these large galaxies are modeled as single point sources. Regions of nebulosity are identified with the same machine learning approach as in \citet{Schlafly:2018}. A few minor changes were made, however. First, the convolutional neural network was trained with $256\times256$ pixel images, instead of the $512\times512$ images used in \citet{Schlafly:2018}. Second, a slightly shallower convolutional neural network (see Appendix \ref{app:neural-network-structure}) was adopted. Finally, the neural network was retrained using a set of WISE images rather than optical images. The neural network nebulosity classifications are available as part of the mask images in the Data Release (\textsection\ref{sec:release}). The HyperLeda catalog \citep{Makarov:2014} of large galaxies was used to mark regions containing galaxies resolved in WISE ($D_{25} > 10\arcsec$). These regions are marked in the unWISE mask images; see Meisner et al. (2019, in prep.) for details of their implementation in unWISE.
HyperLeda is also used to select large galaxies for further analysis in the Legacy Survey Large Galaxy Atlas\footnote{\url{https://github.com/moustakas/LSLGA}}. \subsection{\texttt{spread\_model}} \label{subsec:spreadmodel} The unWISE catalog modeling assumes that all sources are point sources; their shapes are completely described by the modeled PSF. Nevertheless, roughly half of all sources in the catalog are galaxies, which are not point sources. The size of a source can be a useful diagnostic for identifying galaxies or problems with the modeling; for example, a single catalog source that corresponds to two overlapping point sources should be slightly larger than the PSF. Additionally, instrumental effects, like trends in PSF shape with magnitude (for example, as caused by the brighter-fatter effect, \citealt{Downing:2006, Antilogus:2014}), or with color (e.g., PSF chromaticity, \citealt{Cypriano:2010}) can be identified through measurements of source size. The \texttt{crowdsource} pipeline now follows the lead of \texttt{SExtractor}, computing \texttt{spread\_model} as a measure of a source's size \citep{Bertin:1996, Desai:2012}\footnote{The \texttt{crowdsource} \texttt{spread\_model} is corrected from the formulation in \citet{Desai:2012} following the \texttt{SExtractor} documentation so that in the absence of noise point sources have \texttt{spread\_model} equal to 0.}. In combination with its uncertainty \texttt{dspread\_model}, this term roughly indicates how significantly increasing the size of the PSF would improve the fit. \section{unWISE Catalog} \label{sec:catalog} The unWISE catalog uses the deep unWISE coadds to detect sources a factor of two fainter than was possible in AllWISE, cataloging a total of about 2 billion objects. It couples this with the \texttt{crowdsource} source detection and characterization pipeline to model significantly blended sources. Figure~\ref{fig:auscompare} illustrates the improvement realized by this combination, showing the AllWISE and unWISE images with the corresponding catalogs, and finally the much deeper, higher-resolution S-COSMOS imaging from Spitzer \citep{Sanders:2007}. \begin{figure*}[htb] \begin{center}\includegraphics[width=\textwidth]{auscompare}\end{center} \caption{ \label{fig:auscompare} AllWISE and unWISE compared with much deeper, higher-resolution imaging from Spitzer-COSMOS, for a small portion of the COSMOS field. The three rows show 3.4\ensuremath{\mu}m\ imaging of the same small patch of high-latitude sky from AllWISE (top), unWISE (middle), and Spitzer-COSMOS (bottom, 3.6\ensuremath{\mu}m). The left column shows only the images, while the right column overplots the $5\sigma$ catalog entries from AllWISE (top) and unWISE (middle, bottom). The deeper unWISE stacks clearly allow many more sources to be detected, and the \texttt{crowdsource} catalog well describes these. Nevertheless, comparison with the Spitzer imaging reveals clear examples of unidentified sources (for instance, near (503, 1518)) and resolved sources that are split into multiple point sources (for instance, near (530, 1526)). Axis units are WISE pixels on \texttt{coadd\_id} \texttt{1497p015}. } \end{figure*} The dramatic improvement in depth realized by the four additional years of NEOWISE-Reactivation imaging is immediately apparent comparing the upper left and middle left panels of Figure~\ref{fig:auscompare}. Additionally, the importance of modeling blended sources is clear.
Many sources within the unWISE coadd significantly overlap one another, even in this high-latitude field. Comparison of the unWISE coadd image with the corresponding catalog (middle right) suggests that \texttt{crowdsource} has done a good job identifying sources in the field. The much deeper Spitzer imaging (bottom row) generally confirms this impression, though there are clear cases where \texttt{crowdsource} has split galaxies into multiple sources (for example, the large galaxy near (530, 1526)), as well as cases where it has failed to split overlapping, unresolved, faint sources into their components (for example, the pair of sources near (503, 1518)). The former case is not particularly troubling; there are not that many galaxies resolved by WISE, and the ones that exist are usually securely identified in higher-resolution imaging. Unsurprisingly, the unWISE catalog contains many more objects than the AllWISE catalog. Figure~\ref{fig:numdenscompare} shows the number densities of $5\sigma$ sources per square degree detected in unWISE (top) as compared with AllWISE (middle), and the ratio of the two number density maps (bottom). In typical high-latitude locations, unWISE includes about $2\times$ as many objects in W1 and $2.5\times$ as many objects in W2. In the Galactic plane, the difference is more dramatic, with unWISE cataloging 3--3.5$\times$ more sources than AllWISE, owing to the aggressive identification of blended stars in \texttt{crowdsource}. \begin{figure*}[htb] \begin{center}\includegraphics[width=\textwidth]{numdenscompare}\end{center} \caption{ \label{fig:numdenscompare} Number density of sources per square degree cataloged by unWISE and AllWISE. The top row shows the unWISE catalog densities; the middle row shows AllWISE; and the bottom row shows the ratio of the two. unWISE detects more than $2\times$ as many sources as AllWISE at high latitudes, and more than $3\times$ as many sources as AllWISE at low latitudes. Scattered light from the Moon leads to significant variations in the AllWISE number densities in W2 at high latitudes; these are mostly eliminated in unWISE owing to the greater number of observations, which allows all parts of the sky to be observed in conditions free from significant scattered moonlight. Isolated regions of elevated unWISE/AllWISE source count ratio usually correspond to the $\sim2000$ brightest stars in the sky, where unWISE tends to detect too many sources (incorrectly identifying features in the wings of the PSF of very bright stars as separate sources) and AllWISE tends to detect too few sources (missing many objects in the wings of bright stars). Regions with significant nebulosity show a deficit of objects in both unWISE and AllWISE---for instance, in Orion near $(-150\ensuremath{^\circ}, -20\ensuremath{^\circ})$. } \end{figure*} The ratio maps additionally show some isolated regions of elevated unWISE/AllWISE detections at high latitudes. These tend to surround the brightest $\sim2000$ stars in the sky. The unWISE Catalog detects too many sources in these regions, incorrectly interpreting errors in the wings of the PSF model as faint sources. AllWISE, on the other hand, detects too few sources in these regions, because it overestimates the sky in the vicinity of bright objects and because it relegates sources too near bright objects to the ``reject'' table.
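Number density maps like those in Figure~\ref{fig:numdenscompare} amount to binning catalog positions on a sphere-tessellating grid. A minimal sketch follows (an illustration only, assuming \texttt{healpy} is available; this is not the code used to produce the figure).
\begin{verbatim}
import numpy as np
import healpy as hp

def source_density_map(lon_deg, lat_deg, nside=64):
    """Per-square-degree source density map from catalog positions
    (e.g., Galactic longitude/latitude arrays). Illustration only."""
    pix = hp.ang2pix(nside, lon_deg, lat_deg, lonlat=True)
    counts = np.bincount(pix, minlength=hp.nside2npix(nside))
    return counts / hp.nside2pixarea(nside, degrees=True)
\end{verbatim}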
Regions of substantial nebulosity also are evident in Figure~\ref{fig:numdenscompare} as having low source densities in unWISE and AllWISE; the clearest example is in the vicinity of the Orion Molecular Cloud near $(l, b) = (-150\ensuremath{^\circ}, -20\ensuremath{^\circ})$. The unWISE Catalog imposes an additional criterion on candidate sources in these regions to limit the number of spurious objects detected (see \textsection\ref{subsec:nebulosity}). However, significant numbers of real sources are also excluded by this criterion. Clear striping along lines of constant ecliptic longitude is especially evident in the number count ratios (Figure~\ref{fig:numdenscompare}, bottom). These stripes usually correspond to periods of the survey when the Moon was near the part of the sky being observed, scattering light into the images and reducing their depth. These features are absent from the unWISE catalog because the additional years of NEOWISE observations provide moon-free imaging of these regions. The particularly prominent stripe in the W1 ratio map lies at ecliptic longitude $\lambda \approx 240\ensuremath{^\circ}$, which was observed at the beginning of the 3-band Cryo phase. Sources in AllWISE in this region may be reported as having zero uncertainty, leading them to be excluded from Figure~\ref{fig:numdenscompare}; see \textsection II.2.c.ii of \citet{Cutri:2013} for details. \subsection{Completeness and Reliability} \label{subsec:completeness} To assess the completeness and reliability of our catalog in a representative sky location, we compare against Spitzer data in the COSMOS region. COSMOS is at high Galactic latitude, and so serves as a typical extragalactic field. Moreover, it is at low ecliptic latitude, so the WISE imaging is typical in depth. For our purposes, S-COSMOS is the preferred Spitzer data set in the COSMOS region, as it is much deeper than our unWISE catalog and covers the entire 2 square degree COSMOS footprint. Our analyses compare WISE W1 and W2 against Spitzer ch1 and ch2, respectively, since these pairs of bandpasses are quite similar, although not identical. In both our completeness and reliability analyses, we use a Spitzer-WISE cross-match radius of 2$''$. Portions of four unWISE \verb|coadd_id| footprints contribute to the analysis: \verb|1497p015|, \verb|1497p030|, \verb|1512p015|, \verb|1512p030|. All completeness and reliability values presented are differential. To measure the unWISE catalog's completeness, we wish to compare against a highly reliable Spitzer catalog. For this reason, we take as a sample of ``true'' Spitzer sources the subset of S-COSMOS catalog entries with \verb|fl_c?| = 0 and \verb|flux_c?_4| $>$ 0 in the relevant channel. One consequence of our decision to cut on the S-COSMOS quality flag \verb|fl_c?| is that sources brighter than $\sim$13 mag in ch1 and ch2 are rejected, preventing our completeness analysis from reaching bright magnitudes. We also remove Spitzer sources with locations marked by \verb|flags_info| bit 1 as being associated with a WISE-resolved galaxy -- this effectively restricts our analysis to pointlike sources. The top two panels of Figure \ref{fig:completeness_reliability} show the completeness as a function of Spitzer magnitude, for both our unWISE Catalog and the AllWISE Source Catalog. Because AllWISE performs forced photometry in every WISE band for sources detected in any band, we have restricted to sources with signal-to-noise ratio greater than 5 in the corresponding band.
Restricting to such sources provides a fair comparison against the unWISE Catalog, which does not perform forced photometry. In W1, we find that the unWISE Catalog is 0.76 mags deeper than AllWISE, with the former (latter) reaching 50\% completeness at ch1 = 17.93 (17.17) mag. In W2, we find that the unWISE Catalog is 0.67 mags deeper than AllWISE, with the former (latter) reaching 50\% completeness at ch2 = 16.72 (16.05) mag. In AB, these unWISE catalog depths are 20.72 (19.97) mag in W1 (W2). Given that our unWISE catalog benefits from $\sim$5$\times$ enhanced integer coverage relative to AllWISE, one might naively expect that we should reach $2.5\log_{10}\sqrt{5} \approx 0.87$ mags deeper in both W1 and W2, assuming that the pre- and post-hibernation data are of identical quality. In detail, there are two main factors that result in a lesser depth enhancement. First, the WISE PSF is a few percent broader in both W1 and W2 in post-hibernation imaging relative to pre-hibernation imaging\footnote{\url{http://wise2.ipac.caltech.edu/docs/release/neowise/expsup/sec4_2bi.html}}. This increases the number of effective noise pixels ($n_\mathrm{eff}$) for point sources, which scales like FWHM$^2$, while the limiting flux then scales as $\sqrt{n_\mathrm{eff}} \propto$ FWHM, so that the post-reactivation sensitivity is correspondingly decreased by a few hundredths of a mag. Furthermore, the W1 and W2 sensitivities have decreased slightly post-reactivation\footnote{\url{http://wise2.ipac.caltech.edu/docs/release/neowise/expsup/sec2_1e.html}, see Figure 1.}. The post-reactivation sensitivity decrease ranges from 0.05$-$0.12 mag in W1 and 0.15$-$0.26 mag in W2 when measured in yearly intervals during the NEOWISE mission. Combined, the increased PSF size and decreased sensitivity of post-reactivation data can account for the discrepancy between our achieved depths and the simplistic projection of 0.87 mag improvement. Confusion noise is an additional factor that would tend to reduce the depth improvement achieved relative to estimates based purely on reduced statistical pixel noise. \begin{figure*} \begin{center}\includegraphics[width=\textwidth]{completeness_reliability}\end{center} \caption{ \label{fig:completeness_reliability} Summary of our completeness and reliability analysis based on a comparison against the Spitzer S-COSMOS data set over $\sim$2 square degrees of extragalactic, low ecliptic latitude sky. Top: differential completeness as a function of Spitzer magnitude for AllWISE (green squares) and the unWISE Catalog (black plus marks). The unWISE Catalog reaches 50\% completeness $\sim$0.7 magnitudes fainter than AllWISE in both bands. Bottom: differential reliability as a function of WISE magnitude, with the same marker symbols/colors as above. Vertical dashed green lines indicate magnitude bins with $< 10$ AllWISE \texttt{w?snr} $\ge 5$ sources in the relevant band. Vertical dashed black lines indicate magnitude bins containing $< 10$ sources with signal-to-noise $\ge 5$ in either AllWISE or unWISE. Note that although the numerical values along the horizontal axes of the upper and lower panels are aligned, their units are different (Spitzer magnitudes in the upper panels and WISE magnitudes in the lower panels).} \end{figure*}
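The simple scalings invoked above are easy to spell out numerically; a short sketch follows, with an illustrative 3\% PSF broadening standing in for the ``few percent'' quoted.
\begin{verbatim}
import numpy as np

# Naive gain from ~5x more integer coverage, for equal-quality epochs:
dm_naive = 2.5 * np.log10(np.sqrt(5.0))      # ~0.87 mag

# Broader PSF: n_eff ~ FWHM^2, limiting flux ~ sqrt(n_eff) ~ FWHM.
dm_psf = 2.5 * np.log10(1.03)                # ~0.03 mag penalty

# Adding the 0.05-0.26 mag per-band post-reactivation sensitivity
# losses brings the expectation down toward the achieved ~0.7 mag.
\end{verbatim}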
To measure the unWISE Catalog's reliability, we wish to compare against a highly complete Spitzer catalog. Therefore, in our reliability analysis, we compare against the entire S-COSMOS catalog without making any quality or flux cuts. We again require AllWISE \verb|w?snr| $\ge$ 5 and unWISE \verb|flux|/\verb|dflux| $\ge 5$ in the band under consideration. For unWISE we require \verb|flags_unwise| = 0, and that none of \verb|flags_info| bits 1, 6, 7 be set (see Table~\ref{tab:flags}). For AllWISE we require \verb|w?cc_map| = 0. We find that the unWISE Catalog has roughly comparable reliability to AllWISE until reaching sufficiently faint magnitudes that AllWISE no longer contains any sources in the sky region analyzed. In both W1 and W2, AllWISE appears to have very high reliability up until the point that it contains no more sources. We interpret this behavior as arising from the fact that AllWISE catalog construction and particularly its artifact flagging were engineered for high reliability, with the AllWISE Reject Table being used as a repository for lower confidence sources. The unWISE Catalog has no ``reject table'', and so its reliability rolls off toward faint magnitudes until reaching 0.62 (0.70) in W1 (W2), at which point there are no fainter $\ge 5\sigma$ sources in this sky region. The vast majority of AllWISE sources are detected at $\ge 5\sigma$ in W1, so that the W1 AllWISE sample reflected in the bottom left panel of Figure \ref{fig:completeness_reliability} can be considered roughly W1-selected. The unWISE sample in that same subplot is strictly W1-selected, making this a fair comparison. The W1 unWISE reliability is superior to the AllWISE reliability at the faintest AllWISE magnitudes, which makes sense given that the unWISE Catalog has benefited from a larger amount of W1 imaging. On the other hand, in W2, the AllWISE reliability appears to be slightly higher than the unWISE reliability at the faintest AllWISE magnitudes. We attribute this to the fact that AllWISE W2 detections benefit from W1 information (e.g., during source identification and centroiding) as a result of being jointly fit across all bands, while the unWISE Catalog fits the two bands entirely independently. The relatively high AllWISE reliability around W2 $\sim$ 16 mag also coincides with AllWISE completeness which is much lower than that of the unWISE Catalog at the same magnitudes. The depth of the unWISE Catalog varies across the sky, generally becoming deeper at higher ecliptic latitude and shallower at lower Galactic latitude. Especially in W2, zodiacal light further reduces the depth at low ecliptic latitude. The COSMOS field is at an ecliptic latitude of $\sim 10\ensuremath{^\circ}$, so the COSMOS field corresponds to a relatively shallow region in unWISE. \subsection{Astrometry} \label{subsec:astrometry} The simplest test of the unWISE Catalog astrometry is to compare it with Gaia \citep{Gaia:2016}. The comparison shows systematic offsets of about $-1$ and 0 mas in right ascension ($\alpha$) and 28 and 36 mas in declination ($\delta$), in W1 and W2, respectively. These offsets vary spatially over the sky, dominated by signals related to Galactic rotation; the rms scatter in the offset over the sky is 51 and 51 mas in $\alpha$ and 38 and 39 mas in $\delta$ in W1 and W2, respectively. After removing these offsets over the sky, the rms scatter in $\alpha$ and $\delta$ for individual bright stars ($G < 15$) between Gaia and unWISE is 39 mas.
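A sketch of the offset computation underlying these numbers is given below, assuming cross-matched unWISE and Gaia positions in degrees; the $\cos\delta$ factor converts the right-ascension difference into a true angular offset.
\begin{verbatim}
import numpy as np

def astrometric_offsets(ra_unwise, dec_unwise, ra_gaia, dec_gaia):
    # Mean and rms offsets (unWISE - Gaia) in milliarcseconds.
    dra = (ra_unwise - ra_gaia) * np.cos(np.radians(dec_gaia)) * 3.6e6
    ddec = (dec_unwise - dec_gaia) * 3.6e6
    return dra.mean(), ddec.mean(), dra.std(), ddec.std()
\end{verbatim}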
That said, a number of features appear in the comparison of the astrometry to Gaia unrelated to Galactic rotation. To inspect these in more detail, we compare the unWISE astrometry to AllWISE. In analyzing the unWISE coadds, we attempt to directly propagate the underlying WISE single-exposure astrometry forward without adjustment. The unWISE Catalog astrometry should therefore agree quite well with the AllWISE astrometry. Figure~\ref{fig:astromaw} shows the mean and root-mean-square (rms) difference between the measured unWISE and AllWISE right ascension (top) and declination (bottom) across the sky in W1 (left) and W2 (right) for bright stars ($W1 < 14$ or $W2 < 14$). Overall, agreement is excellent. There are spatially correlated differences of 25 mas in size, but this is only a hundredth of a WISE pixel. Individual bright stars agree in position in AllWISE and unWISE with an rms difference of 70 mas. We note that since AllWISE simultaneously fits the W1 and W2 bands, but W1 provides substantially more signal-to-noise for typical sources, Figure~\ref{fig:astromaw} primarily compares unWISE W1 and W2 to AllWISE W1. In particular, differences between the W1 and W2 comparisons stem from differences in the unWISE W1 and W2 astrometry; the AllWISE astrometry is identical. \begin{figure*}[htb] \begin{center}\includegraphics[width=\textwidth]{astromcompaw}\end{center} \caption{ \label{fig:astromaw} Astrometry comparison between unWISE and AllWISE for bright, relatively isolated stars. Each panel is a map of the sky; the rows show the average difference in right ascension $\alpha$ and declination $\delta$, as well as the rms difference in $\alpha$ and $\delta$ over the sky. All units are in milliarcseconds (mas). The left column is W1, while the right column is W2. The mean $\mu$ and standard deviation $\sigma$ of each map above $|b| = 15\ensuremath{^\circ}$ is shown. Agreement is excellent; typical correlated residuals are only 25 mas, less than a hundredth of a WISE pixel. Individual bright stars agree in position to an rms difference of 70 mas. Nevertheless, many WISE survey and processing related structures are evident; see text for details. The color scale covers 200 mas linearly in all panels. } \end{figure*} Despite this excellent overall agreement, Figure~\ref{fig:astromaw} shows interesting structures in the differences between the unWISE and AllWISE astrometry. Most prominently, the differences in $\alpha$ and $\delta$ show stripes along lines of constant ecliptic longitude. This is due to a difference between the definitions of the ``center'' of the PSF in unWISE and AllWISE. AllWISE adopted a PSF with a flux-weighted centroid slightly offset from the origin, while unWISE uses a PSF with a centroid at the origin \citep{tr_neo2}. However, unWISE adopts the published WISE WCS solutions without modification, leading unWISE to report slightly different world coordinates for sources than AllWISE. This difference is most pronounced in locations where, due to Moon avoidance maneuvers, more observations have been made in one WISE scan direction than in the other, leading to structures at constant ecliptic longitude. The larger amplitude of the striping in W1 than W2 is due to the greater PSF asymmetry in W1 than W2 \citep{tr_neo2}. Another source of low-level disagreement between unWISE Catalog positions and AllWISE positions is the ``MFPRex'' astrometric corrections \citep[][$\S$V.2]{Cutri:2013} implemented in the AllWISE pipeline but not reflected in the single-exposure WCS used by unWISE. These corrections presumably are the source of the coherent small scale features in Figure~\ref{fig:astromaw}, most prominent near the Galactic Center in the $\delta$ offset map (second row).
These corrections additionally account for the proper motions of the stars in the astrometric reference catalog, leading to some small differences between AllWISE and unWISE on large angular scales. Comparison to Gaia confirms that the AllWISE astrometry is superior to unWISE, owing to these corrections, so users seeking the best possible astrometry are encouraged to use AllWISE until the unWISE astrometry can be improved (\textsection\ref{subsec:astrometrylimits}). The $\delta$ offset maps (second row of Figure~\ref{fig:astromaw}) show significantly smaller residuals in the celestial south than in the north; we do not understand the cause of this north-south discrepancy. The rms differences in $\alpha$ and $\delta$ are very uniform over the high-latitude sky. In the inner Galaxy, however, they increase dramatically, presumably owing to the more aggressive unWISE identification of stars in the wings of neighboring stars and the corresponding impact on the positions of the stars and their neighbors. Similarly, the Magellanic Clouds and M31 show enhanced rms. Two small regions of slightly enhanced residuals and scatter in $\delta$ fall near (113\ensuremath{^\circ}, 17\ensuremath{^\circ}) and (148\ensuremath{^\circ}, -18\ensuremath{^\circ}) and are not understood, though the modeling in these regions looks accurate, leading us to suspect problems with the WCS. \subsection{Photometry} \label{subsec:photometry} To assess the accuracy of the unWISE photometry, we compare it with AllWISE photometry in Figure~\ref{fig:photomaw}. Agreement is excellent, with overall offsets of 7 and 35 mmag in W1 and W2 and sub-percent level fluctuations over the high-latitude sky. The rms difference between unWISE and AllWISE photometry of bright stars is 15 mmag in W1 and 17 mmag in W2. \begin{figure*}[htb] \begin{center}\includegraphics[width=\textwidth]{photomcompaw}\end{center} \caption{ \label{fig:photomaw} Photometry comparison between unWISE and AllWISE for bright (W1 or W2 brighter than 14th mag), relatively isolated stars. Each panel is a map of the sky; the first row shows the average difference in magnitude between unWISE and AllWISE, while the second row shows the rms scatter in the magnitude differences. The left column shows W1, while the right column shows W2. The mean $\mu$ and standard deviation $\sigma$ of each map above $|b| = 15\ensuremath{^\circ}$ is shown, with units of mmag. Agreement is excellent, with percent-level offsets and uniformity, and scatter of $1\%$, though many WISE survey and processing related structures are evident; see text for details. The color scale covers 100 mmag linearly in the top two panels and 50 mmag linearly in the bottom two panels. } \end{figure*} However, a number of structures are evident in the comparison. Most prominently, the comparison features a number of streaks running between the ecliptic poles at constant ecliptic longitude. These appear to stem from the AllWISE photometry, as they correlate with the moon-avoidance maneuvers made during the pre-hibernation phase of the WISE mission; the post-reactivation WISE imaging has now covered the sky more uniformly. Structures are apparent at $(l, b) = (145\ensuremath{^\circ}, 45\ensuremath{^\circ})$ and $(-40\ensuremath{^\circ}, -40\ensuremath{^\circ})$, which stem from the WISE spacecraft's dumping angular momentum using its magnetic torque rods \citep[][IV.2.c.i.]{Cutri:2013}. 
A dark band of higher-than-average scatter is apparent in the W2 rms map at $\delta \approx -30\ensuremath{^\circ}$, due to the South Atlantic Anomaly. The improved cosmic-ray rejection made possible by the many NEOWISE epochs \citep{fulldepth_neo2} allows the unWISE photometry to improve on the AllWISE photometry in this region. Very crowded regions and regions with significant nebulosity appear as areas of poor AllWISE-unWISE agreement: for example, the Galactic plane, the Large and Small Magellanic Clouds, large globular clusters, and the Orion nebula. The W1 offset map additionally shows some peculiar regions around the Galactic plane and inner bulge where stars are preferentially a couple hundredths brighter in unWISE than in AllWISE. We do not understand these features, but comparison with 2MASS indicates that they are present in AllWISE and absent from unWISE. Figure~\ref{fig:photomaw} addresses the spatial uniformity of the photometry of bright stars, but is insensitive to the catalog accuracy of faint stars. Figure~\ref{fig:dphotomvmag} shows the differences between unWISE and AllWISE photometry as a function of magnitude for point sources identified by Gaia in a 25 square degree region around the COSMOS field. Agreement is again good; for bright stars the rms difference is a few hundredths, as anticipated in Figure~\ref{fig:photomaw}, and near the AllWISE faint limit the uncertainties increase to the expected $\approx 0.2$ magnitudes. In W2, the unWISE and AllWISE fluxes have a tight linear relationship from the saturation limit at about 8th magnitude to the faint limit at about 16th magnitude. However, in W1 there is a clear trend in the magnitude difference with magnitude; unWISE sources are 0.03 mag fainter than AllWISE sources at 8th mag, while they are 0.01 mag brighter than AllWISE sources at 17th mag. It is unclear where this nonlinearity comes from; one hint is that the measured sizes of point sources likewise depend on magnitude, albeit in both W1 \emph{and W2}; see \textsection\ref{subsec:nonlinearity} for further discussion. At about 8th mag in both W1 and W2, the unWISE magnitudes depart sharply from the AllWISE magnitudes. This is due to the onset of saturation. Due to the way unWISE patches the saturated cores of stars in the construction of the deep unWISE coadds \citep{lang_unwise_coadds}, the inner $7\times7$ pixel regions of saturated stars are unreliable. Flux estimates of saturated stars are then made entirely on the basis of the wings of the PSF in the unWISE Catalog, outside the $7\times7$ pixel region where roughly 90\% of the flux resides. The onset of saturation is tracked in the unWISE Catalog columns \verb|flags_unwise| and \verb|qf|, allowing easy identification of saturated sources (\textsection\ref{sec:release}). \begin{figure}[htb] \begin{center}\includegraphics[width=\columnwidth]{dphotomvmag}\end{center} \caption{ \label{fig:dphotomvmag} Photometry comparison between unWISE and AllWISE as a function of magnitude. The grayscale shows the number of stars at each unWISE-AllWISE magnitude difference as a function of the magnitude of the stars. The solid lines show the 16th, 50th, and 84th percentiles of the distribution at each magnitude. There is good agreement between AllWISE and unWISE at bright magnitudes fainter than the WISE saturation limit of 8, with about 15 mmag scatter. W2 shows excellent linearity, but in W1 there is a trend of 40 mmag from 8th to 16th mag, with unWISE being fainter than AllWISE for bright stars and brighter than AllWISE for faint stars. } \end{figure}
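The running percentiles plotted in Figure~\ref{fig:dphotomvmag} amount to a few lines of \texttt{numpy}; a minimal sketch follows, assuming matched unWISE and AllWISE magnitudes for point sources.
\begin{verbatim}
import numpy as np

def binned_percentiles(mag, dmag, bins, q=(16, 50, 84)):
    # Percentiles of the magnitude difference in each magnitude bin.
    which = np.digitize(mag, bins)
    out = np.full((len(bins) - 1, len(q)), np.nan)
    for i in range(1, len(bins)):
        in_bin = dmag[which == i]
        if in_bin.size >= 10:        # skip sparsely populated bins
            out[i - 1] = np.percentile(in_bin, q)
    return out

bins = np.arange(8.0, 17.25, 0.25)   # saturation to the faint limit
\end{verbatim}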
The unWISE Catalog absolute photometric calibration derives from the photometric calibration of the unWISE coadds \citep{fulldepth_neo1}, which is tied to the original WISE zero points through aperture fluxes in a 27.5\arcsec\ radius. The unWISE Catalog fluxes are defined in the context of a PSF that is normalized to unity in a $19\times19$ pixel box, corresponding to 52.25\arcsec. Meanwhile the AllWISE PSFs are normalized to unity in a 220\arcsec\ box. Because the PSF is extremely stable, these different conventions lead one to expect slight offsets ($\sim$30 mmag) in the absolute calibration of unWISE and AllWISE, consistent with Figure~\ref{fig:photomaw}. We recommend subtracting 4 mmag and 32 mmag in W1 and W2 from the unWISE magnitudes to better match the AllWISE absolute flux calibration, with the caveat that the W1 correction is particularly uncertain because of the W1 nonlinearity (Figure~\ref{fig:dphotomvmag}). \subsection{Example Uses} \label{subsection:examples} The unWISE Catalog should prove a valuable resource for a range of astronomical applications. This section presents simple examples of potential uses in Galactic, extragalactic, and high-redshift science. \subsubsection{Galactic} \label{subsec:galacticcmd} A wide variety of Galactic science relies on WISE infrared fluxes, ranging from nearby stars \citep{kirkpatrick2011} to dust-extinguished stars throughout the Galaxy \citep{Schlafly:2016, Schlafly:2017}. Figure~\ref{fig:cmds} illustrates the improvement of the unWISE fluxes relative to AllWISE in three fields of different source densities: the high-latitude COSMOS field (top row), the Galactic anticenter (middle row), and the Galactic bulge (bottom row). The figure also shows high-resolution data from Spitzer, taken from the ultra deep S-COSMOS program on the COSMOS field \citep{Sanders:2007}, from GLIMPSE360 in the Galactic anticenter, and from GLIMPSE3D in the Galactic bulge \citep{Benjamin:2003, Churchwell:2009}. In each case, the greater depth of the unWISE imaging (left) than the AllWISE imaging (center) is apparent; the sources in the color-magnitude diagrams extend roughly 0.7~mag fainter in the unWISE diagrams than the AllWISE diagrams. \begin{figure*}[htb] \begin{center}\includegraphics[width=\textwidth]{cmds}\end{center} \caption{ \label{fig:cmds} Color-magnitude diagram of $5\sigma$ sources from unWISE, AllWISE, and Spitzer in three different fields: the high-latitude COSMOS field, the Galactic anticenter, and the Galactic bulge. The greater depth of the unWISE catalog relative to AllWISE is immediately apparent. In relatively uncrowded high-latitude fields, unWISE extends roughly 0.7~mag fainter than AllWISE, as expected from \textsection\ref{subsec:completeness}. This continues to be the case in the Galactic anticenter. In the Galactic bulge, both the unWISE and AllWISE color-magnitude diagrams clearly suffer from crowding, with broad color distributions even at bright magnitudes. Comparison with much higher resolution Spitzer observations from S-COSMOS, GLIMPSE360, and GLIMPSE3D shows good agreement with the unWISE magnitudes. The deep Spitzer observations on the COSMOS field are naturally much deeper than unWISE, but unWISE is competitive with GLIMPSE in depth in the outer Galaxy, while providing a tighter stellar locus.
In the inner Galaxy, the higher-resolution GLIMPSE observations allow for deeper catalogs.} \end{figure*} In the COSMOS field and toward the Galactic anticenter, the unWISE measurements are very similar to the AllWISE measurements and extend them naturally to fainter magnitudes. In the bulge, however, there is a pronounced difference between the unWISE and AllWISE color-magnitude diagrams; the typical star in AllWISE becomes bluer at fainter magnitudes, while it becomes redder in unWISE. The higher resolution Spitzer observations better match the unWISE measurements than the AllWISE measurements, giving us confidence that unWISE is improving on AllWISE in dense regions like the bulge. That said, in these regions the Spitzer data are substantially superior to the unWISE data---unsurprisingly, given the extreme crowding and $\sim4\times$ better resolution of Spitzer. \subsubsection{Extragalactic} \label{subsec:extragalacticnz} At $z=0$, the WISE W1 and W2 bands sample a steeply falling portion of the typical early-type galaxy's spectral energy distribution. At higher redshifts, more and more of a galaxy's light redshifts into the WISE bands. This makes WISE an effective tool for detecting galaxies at redshifts $0 < z < 2$. To illustrate this, Figure~\ref{fig:cosmosnz} shows the redshift distribution of galaxies detected by AllWISE and unWISE in the COSMOS field \citep{scosmos}, with redshifts taken from the photometric redshift catalog of \citet{Laigle:2016}. WISE sources were matched to COSMOS sources with a match radius of 2.75\arcsec, considering only COSMOS sources with Spitzer 3.6\ensuremath{\mu}m\ or 4.5\ensuremath{\mu}m\ fluxes bright enough that the sources could conceivably be detected in WISE. The resulting redshift distribution peaks at $z\approx 1$. The bulk of the distribution falls between $0 < z < 2$, with a tail to higher redshifts. \begin{figure}[htb] \begin{center}\includegraphics[width=\columnwidth]{cosmosnz}\end{center} \caption{ \label{fig:cosmosnz} Redshift distribution of galaxies in the COSMOS field detected in AllWISE (blue), unWISE (orange), and the difference (green). The unWISE catalog roughly doubles the number of galaxies detected with $z<1$, while tripling it at $z>1$. Extrapolating to the entire sky, the unWISE catalog should contain $>500$ million galaxies broadly distributed over $0 < z < 2$. } \end{figure} Figure~\ref{fig:cosmosnz} further indicates that unWISE roughly doubles the number of $0 < z < 1$ galaxies detected, while increasing the number of galaxies with $1 < z < 2$ by a factor of three or more. Extrapolated over the whole sky, the unWISE catalog should contain $>500\times10^6$ galaxies with $0 < z < 2$. Table~\ref{tab:nz} summarizes the number densities of sources of different types and redshifts in the COSMOS field for AllWISE and unWISE. \begin{deluxetable}{c|cc|ccccc} \tablecaption{Number density of objects at different redshifts} \tablehead{ \colhead{} & \multicolumn{2}{c}{stars} & \multicolumn{5}{c}{$z$ range} \\ \colhead{Catalog} & \colhead{Gaia} & \colhead{$\neg$Gaia} & \colhead{0, 0.5} & \colhead{0.5, 1} & \colhead{1, 1.5} & \colhead{1.5, 2} & \colhead{$>2$} } \startdata \label{tab:nz} AllWISE & 2977 & 411 & 2151 & 3760 & 1501 & 411 & 126 \\ unWISE & 4562 & 1297 & 3941 & 8457 & 4598 & 1876 & 773 \\ \enddata \tablecomments{Number of objects per square degree for stars and galaxies of different redshifts, based on comparison to objects with photometric redshifts in COSMOS \citep{Laigle:2016}. 
unWISE increases the number of galaxies detected by a factor of 2--4. Stars are marked as having been identified by Gaia (Gaia), or not ($\neg$Gaia). Note that counts are given only for objects matching objects in \citet{Laigle:2016}, but roughly 6\% of unWISE objects have no match, primarily due to masked regions near bright stars in COSMOS and large galaxies split into multiple PSF components in unWISE.} \end{deluxetable} For extragalactic purposes, the presence of stars in the catalog can be a nuisance. Often, however, these stars can be identified by their pointlike morphology in Gaia imaging, which, like WISE, is available for the entire sky. Table~\ref{tab:nz} indicates the number density of stars detected by Gaia, and the number density not detected by Gaia, usually due to faint magnitudes and red colors. From the unWISE Catalog alone, the only information available about a typical galaxy is its flux in the W1 and W2 bands. This makes efforts to estimate a galaxy's redshift from its unWISE Catalog entry challenging. Nevertheless, there is a good correlation between the WISE color of a galaxy and its redshift. Figure~\ref{fig:cosmosnz2} shows the redshift distribution of unWISE Catalog galaxies in the COSMOS field satisfying four simple color cuts. The galaxies passing these cuts have mean redshifts steadily increasing from $z=0.4$ to $z=1.5$, with a typical rms of 0.4, as detailed in Table~\ref{tab:nz2}. \begin{figure}[htb] \begin{center}\includegraphics[width=\columnwidth]{cosmosnz2}\end{center} \caption{ \label{fig:cosmosnz2} Redshift distribution of galaxies in the COSMOS field detected in unWISE, satisfying four different color and magnitude selections. Judicious cuts on galaxies' WISE colors can produce samples with mean redshifts ranging from 0.4 to 1.5. Vertical lines give the mean redshifts of the different selections. } \end{figure} \begin{deluxetable}{ccccc} \tablecaption{Color cuts for different WISE galaxy samples} \tablehead{ \colhead{W2} & \colhead{W1$-$W2$ > x$} & \colhead{W1$-$W2$ < x$} & \colhead{$\bar{z}$} & \colhead{$\delta z$} } \startdata \label{tab:nz2} $<15.5$ & & & 0.4 & 0.3 \\ $> 15.5$ & & $(17-\mathrm{W2})/4+0.3$ & 0.6 & 0.3 \\ $> 15.5$ & $(17-\mathrm{W2})/4 + 0.3$ & $(17-\mathrm{W2})/4 + 0.8$ & 1.0 & 0.4 \\ $> 15.5$ & $(17 - \mathrm{W2})/4 + 0.8$ & & 1.5 & 0.4 \\ \enddata \tablecomments{Color and magnitude cuts for selecting galaxies of different redshifts, together with the mean redshift $\bar{z}$ and the width of the redshift distribution $\delta z$ for the selections, as measured by matching to objects with photometric redshifts on the COSMOS field \citep{Laigle:2016}.} \end{deluxetable} \subsubsection{High-redshift} \label{subsec:highredshiftqso} Mid-infrared colors provide an efficient means of selecting quasars, making them effective for detecting objects at high redshifts \citep{Wang:2016}. By providing deep mid-infrared photometry, the unWISE Catalog should prove valuable in searches for the highest redshift quasars. Consistent with this expectation, the unWISE catalog contains detections of more $z > 5$ quasars than AllWISE. Among the 453 quasars currently known (Ross \& Cross, in prep.), 268 are detected in W1 and 183 are detected in W2 in AllWISE, where ``detection'' means that the catalog contains a source within 2.75\arcsec\ of the QSO with $>5\sigma$ significance. Meanwhile, 355 and 307 are detected in unWISE in W1 and W2, respectively. Roughly half of all high-redshift quasars formerly undetected in WISE now have secure detections.
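The selections of Table~\ref{tab:nz2} translate directly into cuts on catalog photometry. A minimal sketch assigning each source the approximate mean redshift of its sample (with W1 and W2 as Vega magnitudes):
\begin{verbatim}
import numpy as np

def mean_redshift_label(w1, w2):
    # Approximate mean redshift of the Table tab:nz2 sample that
    # each source falls into, based on W2 and the W1-W2 color.
    color = w1 - w2
    b = (17.0 - w2) / 4.0
    return np.where(w2 < 15.5, 0.4,
           np.where(color < b + 0.3, 0.6,
           np.where(color < b + 0.8, 1.0, 1.5)))
\end{verbatim}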
\section{Limitations and Future Directions} \label{sec:limitations} The unWISE Catalog is a first attempt to measure the fluxes and positions of all of the sources in the unWISE coadds. During the construction of the catalog, a number of issues were identified that could be addressed in future unWISE catalogs. We list some limitations of the catalog here, and in many cases describe improvements that could eliminate these limitations in future releases. \subsection{Non-linearity} \label{subsec:nonlinearity} Comparison of AllWISE and unWISE magnitudes reveals a slight non-linearity in W1, as evident in Figure~\ref{fig:dphotomvmag}. It is not clear what the source of this trend is. Variation in the non-linearity of the WISE detector pixels between the mission phases could account for some of the effect. Inspection of the shape of the PSF as a function of magnitude (as measured by \texttt{spread\_model}) shows a significant trend in both W1 and W2, large enough to account for the trend in W1. However, because the shape-dependent trend is present in both W1 and W2, the obvious avenues for eliminating it introduce a significant trend between AllWISE and unWISE in W2. No correction for non-linearity is made in the unWISE catalog, incurring systematic uncertainties of $\approx 0.02$ mag. More work is needed to identify the source of the non-linearity and to develop effective mitigation strategies. \subsection{Saturation} \label{subsec:saturation} The performance of the unWISE Catalog rapidly degrades at the onset of saturation, as indicated in Figure~\ref{fig:dphotomvmag}. The unWISE coadd images (\textsection{\ref{sec:unwise}}) use a $7\times7$ Lanczos kernel to resample the native WISE images onto the unWISE coadds. Before resampling, any masked pixels are first ``patched'' by replacing the masked value with the mean of the surrounding pixels. This procedure tends to make the peaks of stars flatter than they would otherwise have been, changing the shape of the PSF. This changed shape is then propagated out to the full $7\times7$ pixel surrounding neighborhood, albeit with decreasing influence toward the edges. Consistent with this, saturated stars show significantly worse residuals than unsaturated stars in the unWISE imaging, even outside the saturated center. To mitigate this effect, all pixels within a three pixel radius of a potentially saturated pixel in unWISE are masked in the analysis. Because roughly $90\%$ of the flux of a star lands within 3 pixels of a star's center, the unWISE catalog fluxes of these bright stars are highly uncertain and dependent on the amount of flux in the wings of the PSF. Significant improvement here could come from improving the ``patching'' process for saturated pixels in the unWISE coadds by incorporating knowledge of the WISE PSF. \subsection{Sky Subtraction} \label{subsec:skysubtractionlimitation} The \texttt{crowdsource} sky subtraction analysis removes the median residual in each $19\times19$ pixel region of the image during each iteration of its fitting process. In very dense regions like the Galactic bulge this process tends to be biased high, and many potential sources are not discovered and analyzed. The sky could instead be fit as an additional set of linear parameters in the main \texttt{crowdsource} optimization. 
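For concreteness, the per-block median removal described above might look like the following sketch, operating on the residual image left after subtracting the current source model; edge blocks are simply smaller.
\begin{verbatim}
import numpy as np

def subtract_block_median_sky(residual, block=19):
    # Remove the median of each (block x block) region of the image.
    sky = np.zeros_like(residual)
    ny, nx = residual.shape
    for y0 in range(0, ny, block):
        for x0 in range(0, nx, block):
            sky[y0:y0+block, x0:x0+block] = \
                np.median(residual[y0:y0+block, x0:x0+block])
    return residual - sky, sky
\end{verbatim}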
In the DECam Plane Survey \citep{Schlafly:2018}, \texttt{crowdsource} was run in a mode in which a single global sky was simultaneously optimized, but in unWISE we found that this tended to propagate small residuals around bright stars into global sky errors that interacted with the median sky filtering to slow convergence. The locality of the sky fit could be preserved with simultaneous fitting of the sky by adopting a small-scale cardinal basis spline approach to the sky modeling. This approach would require somewhat more memory than the existing analysis, but should improve the speed of convergence and the accuracy of the sky fits. \subsection{Inconsistency of W1 \& W2 Modeling} \label{subsec:w1w2modeling} Sources in the unWISE Catalog are modeled completely independently in different bands, in contrast to AllWISE. This means that images that are modeled with three stars in W1 may be modeled with two stars in W2, confusing the linking of W1 and W2 catalogs into multiband object lists. Similarly, a single, isolated object will have slightly different positions in its W1 and W2 analysis in the unWISE Catalog, again in contrast to AllWISE. The AllWISE approach has many advantages over the approach taken for unWISE; the primary motivation for the unWISE approach was computational and algorithmic convenience. Future releases could straightforwardly adopt an AllWISE-like simultaneous modeling scheme. Nevertheless, color-magnitude diagrams like Figure~\ref{fig:cmds} indicate that the negative effects of the inconsistent modeling are not large for typical stars. \subsection{No source motions or variability} \label{subsec:staticsky} The unWISE Catalog is strictly a static-sky catalog; it does not fit for the motions of sources or for their variability. Time-resolved unWISE coadds \citep{tr_neo2} preserve almost all of the information present in the WISE data about the motion of objects outside of the Solar System. In principle, analysis of these images could recover proper motions for many stars. As with the simultaneous multiband modeling of \textsection\ref{subsec:w1w2modeling}, this would require simultaneously modeling several images, but beyond this, the generally very small, subpixel motions would be naturally accommodated in the sparse linear algebra analysis at the core of \texttt{crowdsource}. Similarly, the unWISE Catalog contains no variability information; this has likewise been lost in the construction of the deep static-sky unWISE coadds we have analyzed. Variability on time scales of 0.5--8 years, however, would be accessible through analysis of the time-resolved unWISE coadds. \subsection{Astrometry} \label{subsec:astrometrylimits} The unWISE coadds adopt the world-coordinate system of the underlying WISE single-exposure images without modification. The WISE single-exposure world-coordinate system is correct in the context of a slightly asymmetric PSF model\footnote{See \url{http://wise2.ipac.caltech.edu/docs/release/neowise/expsup/sec3\_2.html\#astrom} and $\S$4.1 of \cite{tr_neo2} for in-depth discussion of this single-exposure PSF model asymmetry.} that is different from the PSF model adopted for the unWISE Catalog. The resulting inconsistency leads to the $\approx 25$ milliarcsecond residuals throughout the upper two rows of Figure~\ref{fig:astromaw}. The unWISE coadds were made from single-epoch images that did not include MFPRex WCS improvements.
Inclusion of these improvements and correction for the different AllWISE and unWISE PSF conventions would remove the dominant sources of coherent astrometric residuals in unWISE. Another option for improving the unWISE Catalog's astrometric accuracy would be to recalibrate either the single-exposure WISE image astrometry or the unWISE coadd astrometry, using a procedure similar to that described in \cite{tr_neo2}. The release of unprecedentedly accurate astrometry and proper motions from Gaia should enable substantial improvements to the native WISE single-exposure astrometry (currently tied to 2MASS, in some cases using UCAC4 proper motions). \subsection{Bright Stars} \label{subsec:brightstars} The WISE PSF extends very far from the source; flux is readily detected 2\ensuremath{^\circ}\ from the centers of the brightest stars. The PSF used for the unWISE analysis extends ``only'' $\sim 150$ pixels ($\sim 0.1\ensuremath{^\circ}$) from the centers of stars. Moreover, the wings of the unWISE PSF are very uncertain. Accordingly, there are significant residuals $\approx 0.1\ensuremath{^\circ}$ from very bright stars that may present as spurious sources in the unWISE Catalog. Closer to the centers of extremely bright stars, large saturated regions likewise occasionally lead to the generation of spurious sources in the catalog. Diffraction spikes around very bright stars pose an additional challenge. The interpolation process from the original images onto new tile centers makes the detailed path a diffraction spike should follow in the coadd images difficult to predict. Far from the center of a star, the unWISE PSF diffraction spikes tend to fall slightly off the true diffraction spikes, leading to significant residuals and affecting the flux of stars in the vicinity. Spurious sources may also be detected in these locations. The unWISE images contain a number of flags indicating pixels affected by bright stars; these are included in the unWISE catalog in the \texttt{flags\_unwise} column (\textsection\ref{sec:release}). Improved modeling of the wings of very bright stars is also possible, but would require significant effort while improving the analysis in only a tiny fraction of the sky. \subsection{Extended Sources} \label{subsec:extended} All sources in the unWISE analysis are modeled as if they were point sources. Fluxes and locations of extended sources will be correspondingly biased and sub-optimal. However, the broad WISE PSF of 6\arcsec\ FWHM means that most extended sources can be treated as point sources. For example, a 1\arcsec\ FWHM galaxy would only broaden the PSF by 1.4\%, corresponding to a flux estimate 1.4\% too small in the unWISE catalog. Some galaxies, however, are much larger, and are readily resolved by WISE. Absent intervention, \texttt{crowdsource} will split these galaxies into several point-source components. Many of these large galaxies have already been identified in other surveys, however, making it easy to modify the behavior of \texttt{crowdsource} in these regions. The unWISE mask images include a bit indicating that a pixel is contained in a large galaxy, as tabulated in the HyperLeda catalog \citep{Makarov:2014}. In such pixels, \texttt{crowdsource} rejects candidate new sources if they significantly overlap with a neighboring source. The effectiveness of this mitigation strategy depends on the underlying catalog of large galaxies.
The HyperLeda catalog is rather heterogeneous; over the Sloan Digital Sky Survey \citep{York:2000} footprint it contains many more large galaxies than in the southern sky, for instance. The unWISE catalog will correspondingly spuriously split varying numbers of large galaxies into multiple point sources depending on sky location. This strategy discourages \texttt{crowdsource} from splitting large galaxies, but significant residuals are still left on the images around large galaxies because the point source model is a poor fit. As part of the optimization process, \texttt{crowdsource} may still decide that the image is best modeled by slowly moving nearby stars into the large galaxy to reduce these residuals, leading to occasional splitting even of large galaxies identified in HyperLeda. Explicitly modeling all of the sources as potentially resolved galaxies or unresolved stars would be the best approach, but presents substantial conceptual, algorithmic, and computational challenges. Alternatively, full galaxy fitting could be enabled exclusively for objects marked in external catalogs as being resolved galaxies; this would improve the fits of these galaxies and their neighbors. Galaxy models tend to be less well approximated by linear models, however, so their optimization does not fit neatly into the \texttt{crowdsource} framework. \subsection{Ecliptic Poles} \label{subsec:eclipticpoles} The unWISE coadd images are qualitatively different at high ecliptic latitudes than at low ecliptic latitudes. At low ecliptic latitudes, the typical number of WISE observations contributing to a given part of the sky in the unWISE coadds is $\sim 120$ per band. At the ecliptic poles, however, the number is greater than 23000, $\sim 200\times$ larger. Moreover, due to the WISE scan strategy, at low ecliptic latitudes, the position angle of the WISE focal plane on the sky is nearly constant, while the ecliptic poles were observed at all position angles. The huge number of WISE observations at the ecliptic pole potentially makes for images $\sim 3$ magnitudes deeper than typical at low latitudes. However, at this depth, large-scale residuals in the unWISE coadd sky become significant. Absent intervention, the \texttt{crowdsource} analysis would incorrectly attempt to explain what remains of these residuals after sky subtraction with many sources. To mitigate this, a flux uncertainty floor was added to the unWISE uncertainty images such that the $5\sigma$ sources correspond to at most $\sim 19.8$ mag in W1 and $\sim 18.2$ mag in W2. These are roughly 1.5 mag fainter than the typical depths of the unWISE coadd images (\textsection\ref{subsec:completeness}), but much brighter than the nominal depth achievable at the ecliptic poles. The varying position angle of the WISE images taken near the ecliptic pole leads the PSF there to be different from the PSF at lower ecliptic latitudes. As discussed in \textsection\ref{subsec:psf}, the \texttt{crowdsource} analysis uses a dense grid of rotated and azimuthally smoothed PSF models to capture much of the PSF variation in the unWISE coadds in these areas. However, our flagging of diffraction spikes and ghosts in the unWISE coadd masks was not modified to account for the azimuthal smoothing of the PSF near the ecliptic poles. Accordingly, these features in the unWISE masks near the ecliptic pole are narrower than they should be. This problem primarily affects regions within about 5\ensuremath{^\circ}\ of the ecliptic poles, $\sim0.4$\% of the footprint.
Moreover, because sharp features like diffraction spikes spread over a wider area due to this effect, their amplitude is lower and their masking is less important. Directly on the pole, for instance, it is not clear if a diffraction spike mask would be desired; the diffraction spikes have been fully blurred out into the wings of the PSF. \subsection{Nebulosity} \label{subsec:nebulositylimitations} The \texttt{crowdsource} analysis identifies regions believed to contain significant nebulosity and requires that new sources detected in these regions be relatively sharp and PSF-like; sources significantly broadened by neighboring sources will not be modeled. This process occasionally incorrectly identifies features in the wings of bright stars as nebulous, leading to fewer detected sources in these regions. Only roughly 0.7\% of the footprint is identified as being affected by nebulosity, however, so this is a small effect. Likewise, some areas containing significant nebulosity are not marked and may contain many spurious sources. \subsection{Uncertainty estimates} \label{subsec:uncertaintylimitations} The unWISE Catalog analysis adopts the unWISE coadd uncertainty images with little alteration, except near the ecliptic poles (\textsection\ref{subsec:eclipticpoles}). Analysis of the coadd residual images after subtraction of the best fit models suggests that the coadd uncertainty images overestimate the actual uncertainty by 15\% in W1 and 20\% in W2. On the other hand, the residual images show significant correlated uncertainties, which are neglected in the analysis. If the correlated errors in the residuals could be eliminated or modeled out, and the uncertainty images modified to better describe the true dispersion in the values, WISE catalogs could reliably detect objects $\sim 0.2$ mag fainter than at present. The unWISE Catalog reports formal statistical uncertainties that would be obtained for isolated point sources, without accounting for the covariance between blended sources. The covariance between nearby sources will dominate the uncertainty of most sources in the bulge, for example, as well as faint sources near bright sources at all Galactic latitudes. The importance of blending can be assessed via the catalog column \verb|fracflux|; values near 1 indicate that the source is isolated, while values near 0 indicate that most of the flux in the vicinity of this source comes from another object (Table~\ref{tab:detectioncatalog}). The catalog also neglects systematic uncertainties in the measurement, which dominate the uncertainties of bright sources, even when isolated. It is challenging to assess the size of the systematic uncertainty. Were we to analyze the time-resolved unWISE coadd images, we could empirically measure the repeatability of the photometry for non-variable stars, but that is beyond the scope of this work. Nevertheless, we can establish a floor on the systematic uncertainty by considering sources in the overlap regions between adjacent unWISE coadds, for which we have multiple unWISE catalog entries. For sources brighter than 14th mag, the rms difference in the magnitudes of these sources is 3 mmag, possibly stemming from imperfect subpixel PSF interpolation, which dominates the residuals of bright stars (for example, Figure~\ref{fig:crowdsourceexample}). Because the different unWISE coadds are drawn from identical underlying WISE individual-epoch images, this approach will strictly underestimate the true systematic error floor. 
Additionally, this approach is not sensitive to systematic trends with magnitude (for example, \textsection\ref{subsec:nonlinearity}) or color. \section{Data Release} \label{sec:release} The unWISE coadds comprise 18240 tiles in two bands. The corresponding 36480 catalog FITS files \citep{Pence:2010} composing the unWISE Catalog are available at the catalog web site, \url{http://catalog.unwise.me}. The Catalog consists of 2.03 billion detections of objects in the ``primary'' regions of their unWISE coadds that have at least $5\sigma$ significance in W1 or W2. Some basic numbers describing the catalog are given in Table~\ref{tab:numbers}. \begin{deluxetable*}{crrrr} \tablecaption{Number of Sources in the unWISE Catalog} \tablehead{ \colhead{type} & \colhead{W1 \& W2} & \colhead{W1 only} & \colhead{W2 only} & \colhead{total}} \startdata \label{tab:numbers} all & 1,063,569,639 & 1,032,419,887 & 118,744,698 & 2,214,734,224 \\ primary & 979,399,857 & 949,169,093 & 108,858,251 & 2,037,427,201 \\ $>5\sigma$ & 1,062,021,630 & 1,027,533,192 & 116,683,978 & 2,206,238,800 \\ primary \& $>5\sigma$ & 978,015,749 & 944,811,706 & 106,974,415 & 2,029,801,870 \\ \enddata \tablecomments{Number of objects in the unWISE Catalog satisfying various criteria. ``primary'' sources discard duplicate sources in sky regions included in multiple coadds. $>5\sigma$ sources are detected with at least $5\sigma$ significance.} \end{deluxetable*} The content of the FITS files is essentially identical to the files of the DECam Plane Survey \citep{Schlafly:2018}, with the addition of a few metadata columns and \texttt{spread\_model} (\textsection\ref{subsec:spreadmodel}). The catalog columns are listed in Table~\ref{tab:detectioncatalog}. \begin{deluxetable}{ll} \tablewidth{\columnwidth} \tablecaption{Catalog Columns} \tablehead{ \colhead{Name} & \colhead{Description}} \startdata \label{tab:detectioncatalog} \texttt{ra} & right ascension (deg) \\ \texttt{dec} & declination (deg) \\ \texttt{x} & $x$ coordinate (pix) \\ \texttt{y} & $y$ coordinate (pix) \\ \texttt{flux} & Vega flux (nMgy) \\ \texttt{dx} & $x$ uncertainty (pix) \\ \texttt{dy} & $y$ uncertainty (pix) \\ \texttt{dflux} & formal \texttt{flux} uncertainty (nMgy) \\ \texttt{fluxlbs} & local-background-subtracted flux (nMgy) \\ \texttt{dfluxlbs} & formal \texttt{fluxlbs} uncertainty (nMgy) \\ \hline \texttt{qf} & PSF-weighted fraction of good pixels \\ \texttt{rchi2} & PSF-weighted average $\chi^2$ \\ \texttt{fracflux} & PSF-weighted fraction of flux from this source \\ \texttt{spread\_model} & SExtractor-like source size parameter \\ \texttt{dspread\_model} & uncertainty in \texttt{spread\_model} \\ \texttt{fwhm} & FWHM of PSF at source location (pix) \\ \texttt{sky} & residual sky at source location (nMgy) \\ \hline \texttt{nm} & number of images in coadd at source \\ \texttt{primary} & source located in primary region of coadd \\ \texttt{flags\_unwise} & unWISE flags at source location \\ \texttt{flags\_info} & additional flags at source location \\ \hline \texttt{coadd\_id} & unWISE/AllWISE \texttt{coadd\_id} of source \\ \texttt{band} & 1 for W1, 2 for W2 \\ \texttt{unwise\_detid} & detection ID, unique in catalog \\ \enddata \tablecomments{ Columns in the unWISE catalogs. A more complete description is available at the survey web site. } \end{deluxetable} Fluxes and corresponding uncertainties are given in linear flux units, specifically, in Vega nanomaggies (nMgy) \citep{Finkbeiner:2004}. 
The corresponding Vega magnitudes are given by $m_\mathrm{Vega} = 22.5-2.5\log_{10} \texttt{flux}$. The following equations give the corresponding AB magnitudes: \begin{align*} m_\mathrm{W1,\, AB} &= m_\mathrm{W1,\, Vega} + 2.699 \\ m_\mathrm{W2,\, AB} &= m_\mathrm{W2,\, Vega} + 3.339 \, . \end{align*} As noted in \textsection\ref{subsec:photometry}, the agreement between unWISE and AllWISE magnitudes can be improved by subtracting 4 mmag and 32 mmag from W1 and W2. Additional files give the \texttt{crowdsource} model image and sky image for each unWISE tile. The PSF flux inverse variance image, mask image, and PSF model are also available for each tile. The column \texttt{fwhm} is intended to give a sense of the size of the model PSF for a particular detection. Given a PSF model, it is computed as the FWHM a Gaussian PSF with equal $n_\mathrm{eff}$ would have. Typical values of 7.2\arcsec\ in W1 and 7.8\arcsec\ in W2 are larger than the true WISE FWHMs because the WISE PSF has more flux in its wings than a Gaussian, increasing $n_\mathrm{eff}$. The column \texttt{primary} marks whether a particular source is located in the ``primary'' region of its coadd. The unWISE coadds overlap one another by roughly 60 pixels, so that sources residing on the edges of an unWISE coadd will be detected in multiple coadds. By selecting only ``primary'' sources, duplicate sources can be eliminated. Determining whether a source is primary in a given tile is purely a geometric operation, and does not involve any cross-matching of detections on neighboring tiles. For each source in a given tile's catalog, we compute the source's minimum distance from any edge of that tile's footprint. Using the source's (RA, Dec) coordinates, we perform the same minimum edge distance computation for all neighboring tiles. The source is labeled primary in the tile under consideration if that tile's footprint provides a larger minimum distance to any edge than do all other neighboring tile footprints. Finally, merged catalogs linking W1 detections and W2 detections into multiband objects are also available. Each W1 source is matched to the nearest W2 source within 2.4\arcsec; W2 sources within 2.4\arcsec\ that are not the closest source to a W1 source are considered unmatched. The merged catalogs include the same columns as in the individual catalogs (Table~\ref{tab:detectioncatalog}), but each column now contains a two-element vector for the W1 and W2 quantities. The canonical right ascension and declination are taken to be the W1 quantities, when available, and otherwise the W2 quantities. Likewise an object is considered ``primary'' when its W1 detection is considered ``primary.'' Finally, a unique \texttt{unwise\_objid} is assigned to each entry in the merged catalog. Unmatched detections contain zeros in all columns corresponding to the missing band; these are easily identified, for example, by the empty \texttt{unwise\_detid}. In addition to the catalogs themselves, the unWISE Catalog release contains a few items intended to facilitate the catalog's use. These are \begin{itemize} \item model images, \item model sky images, \item PSF depth images, \item mask images, and \item PSF images. \end{itemize} A sense for the plausibility of the modeling of a particular object in the unWISE Catalog can be obtained by comparing the unWISE coadds with the model images.
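Pulling these conventions together, a minimal sketch of reading one per-tile catalog and applying standard selections is given below; the file name is illustrative, and the assumed column semantics (\texttt{primary} as 0/1, the flag bits of Table~\ref{tab:flags}) should be checked against the release documentation.
\begin{verbatim}
import numpy as np
from astropy.table import Table

cat = Table.read('1497p015.1.cat.fits')   # illustrative file name

# Keep unique, >= 5 sigma detections.
good = (cat['primary'] == 1) & (cat['flux'] > 5 * cat['dflux'])
cat = cat[good]

mag_vega = 22.5 - 2.5 * np.log10(cat['flux'])          # nMgy -> Vega
mag_ab = mag_vega + np.where(cat['band'] == 1, 2.699, 3.339)

# Optionally reject sources in flagged regions, e.g. nebulosity (bit 5).
clean = (cat['flags_info'] & 2**5) == 0
\end{verbatim}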
The unWISE model sky images and images of the PSF are potentially useful for users seeking to do their own photometry on the unWISE coadds; estimating the sky and PSF in crowded fields can be challenging. Users wondering about spatial variations in the depth of the survey may find the PSF depth images valuable. We also provide mask images that replicate some of the elements in the unWISE coadd mask images, but also add a few elements specific to the unWISE Catalog processing. The values of the mask image are given in Table~\ref{tab:flags}. In particular, the mask images indicate sky regions in which candidate sources significantly overlapping other sources are not modeled, due to the presence of a nearby large galaxy (\textsection\ref{subsec:nebulosity}). They also indicate which parts of the sky the convolutional neural network indicates are affected by significant nebulosity (\textsection\ref{subsec:nebulosity}). \begin{deluxetable}{lll} \tablewidth{\columnwidth} \tablecaption{unWISE Catalog Info Flags} \tablehead{ \colhead{Name} & \colhead{Bit} & \colhead{Description}} \startdata \label{tab:flags} \texttt{bright\_off\_edge} & $2^0$ & bright source off coadd edge \\ \texttt{resolved\_galaxy} & $2^1$ & in large galaxy in HyperLeda \\ \texttt{big\_object} & $2^2$ & in M31 or Magellanic Cloud \\ \texttt{bright\_star\_cen} & $2^3$ & may contain bright star center \\ \texttt{crowdsat} & $2^4$ & may be affected by saturation \\ \texttt{nebulosity} & $2^5$ & nebulosity may be present \\ \texttt{nodeblend} & $2^6$ & deblending discouraged here \\ \texttt{sharp} & $2^7$ & only ``sharp'' sources here \\ \enddata \tablecomments{ Informational flags in the unWISE Catalogs. A more complete description is available at the survey web site. } \end{deluxetable} \subsection{Source Designations} We prescribe that unWISE Catalog source designations contain the prefix ``WISEU''. For example, WISEU J112234.53+122954.3 refers to the object with \verb|UNWISE_OBJID| = 1699p121o0017067. \section{Conclusion} \label{sec:conclusion} The continuing NEOWISE-Reactivation mission has provided more than four years of imaging beyond the initial year of WISE data that was available for AllWISE processing. The unWISE project has combined the $\sim25$ million single-frame WISE images into coadds reaching $2\times$ deeper than AllWISE. The unWISE Catalog is the result of the analysis of these coadds. It contains $\sim 2$ billion sources, roughly $3\times$ as many as cataloged in AllWISE. Because of the broad WISE PSF, only a small fraction of extragalactic sources are resolved in WISE, making the analysis ideally suited to crowded-field pipelines that aggressively model images as sums of many overlapping point sources. Application of the \texttt{crowdsource} crowded-field image analysis to the unWISE coadds provides accurate measurements for the unWISE Catalog, both in extragalactic fields where the improved depth of unWISE is critical, and also in the Galactic bulge where the analysis is limited by crowding. Comparison between bright sources in the unWISE and AllWISE catalogs shows good agreement between the two catalogs. For faint sources, comparison with deeper Spitzer imaging confirms that the catalog reaches $2\times$ deeper than AllWISE. The unWISE Catalog reaches stars in the Milky Way at greater distances, detects hundreds of millions of new galaxies over $0 < z < 2$, and detects half of the high-redshift quasars previously undetected in AllWISE.
We have outlined possible ways in which future versions of the unWISE Catalog may be able to augment or improve upon the present data products. Incorporating future NEOWISE data releases would enable the unWISE Catalog to push yet deeper. In combination with the complementary WISE-based proper motions that will be supplied by CatWISE (PI: Eisenhardt), the unWISE Catalog realizes much of the NEOWISE data set's tremendous potential for Galactic and extragalactic astrophysics. unWISE coadd images, the derived unWISE catalog, the corresponding model PSF, sky images, and depth maps are publicly available at the unWISE web site, \url{http://unwise.me}, and the catalog web site, \url{http://catalog.unwise.me}. \vspace{5mm} It is a pleasure to thank Roc Cutri for detailed feedback and John Moustakas for help with the HyperLeda catalog and the Legacy Survey Large Galaxy Atlas. David Schlegel provided invaluable guidance and motivation, and Dustin Lang built much of the framework on which this catalog stands. We thank the anonymous referee for valuable comments that improved the manuscript. This work has been supported in part by NASA ADAP grant NNH17AE75I. ES and AMM acknowledge support for this work provided by NASA through Hubble Fellowship grants HST-HF2-51367.001-A and HST-HF2-51415.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. ES and AMM acknowledge additional support by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. We acknowledge the usage of the HyperLeda database (\url{http://leda.univ-lyon1.fr}). This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration. The unWISE Catalog analysis was run on the Odyssey cluster supported by the FAS Division of Science, Research Computing Group at Harvard University, and on the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The unWISE nebulosity CNN was trained on the XStream computational resource, supported by the National Science Foundation Major Research Instrumentation program (ACI-1429830).
\section{Introduction and Related Work} \label{sec:intro} Quality data~\cite{Everingham10,imagenet_cvpr09} has always played a pivotal role in the advancement of pattern recognition problems. Some of the key properties of any dataset are: (i) a sample distribution that mimics real-world unseen examples, (ii) quality of annotation, and (iii) scale. With the success of deep learning based methods~\cite{KrizhevskySH12,SimonyanZ14a,SzegedyLJSRAEVR15,JaderbergSVZ14}, there has been a surge in newer supervised learning architectures which are ever more data hungry. These architectures have millions of parameters to learn, and therefore need large amounts of training data to avoid over-fitting and to generalize well. In general, data creation is a time-consuming and expensive process which requires huge human effort, from the data collection phase through annotation and validation. More recently, an alternative form of data generation with minimal supervision has become popular~\cite{JaderbergSVZ14,RozantsevLF15,ros2016synthia}, which uses synthetic mechanisms to render and annotate images in the appropriate form. The simple idea of generating data synthetically makes it possible to overcome challenges in problems where the data is inherently difficult to obtain, and in problems which require huge amounts of data to train, such as object detection~\cite{RozantsevLF15,RematasRFT14} and semantic segmentation in 3D models~\cite{ros2016synthia}. One of the first successful uses of synthetic data for document image processing is shown in~\cite{SankarJM10}, which uses rendered images for the task of annotating large collections of document images. A similar scheme has been used in~\cite{Rodriguez-SerranoPLS09} for querying in handwritten collections. In~\cite{JawaharBMN09}, a scheme for synthesis from online handwritten samples using strokes is demonstrated. In this work, we address the need for large scale annotated datasets for handwritten images by generating synthetic word images with natural variations. Recognition of handwritten (HW) data is one of the oldest challenges in the realm of artificial intelligence, with great success in the classification of handwritten characters and digits. However, there are not enough works in the space of offline recognition or retrieval of handwritten word images that could enable practical applications. We believe that with the availability of large scale handwritten data which captures the inherent challenges, in terms of numerous styles and distortions, one could address this issue to a large extent. Some of the popular datasets in the HW domain are the IAM handwriting dataset~\cite{Marti02}, the George Washington dataset~\cite{Fischer12,ManmathaHR96}, the Bentham manuscripts~\cite{causer2012building}, the Parzival database~\cite{Fischer12}, etc. Except for IAM, these datasets are parts of historical collections which were created by one or very few writers. In historical documents, the major challenges involve predicting rare usages of ligatures and handling degradation of the document, while the style is more or less consistent. IAM is a relatively modern dataset, published at ICDAR 1999, which consists of unconstrained text written in forms by 657 writers. The vocabulary of IAM is limited to nearly 11K words, whereas any normal dictionary of the English language contains more than 100K words. Fig.~\ref{fig:iamWordsDist} shows the distribution of words in the IAM vocabulary, which follows the typical Zipf law.
As one can notice, out of these 11K words, nearly 10.5K word classes contain fewer than 20 samples or instances. Moreover, the majority of the remaining words are stop words, which are shorter in length and less informative. The actual number of samples in the training data is smaller still, which limits the building of efficient machine learning models such as deep networks~\cite{KrizhevskySH12}. \begin{figure}[t] \centering \includegraphics[height=4.5cm]{Images/iam_zipf.pdf} \caption{Distribution of words in the IAM dataset.} \label{fig:iamWordsDist} \end{figure} \section{Synthetic Word Image Rendering} \label{sec:synthImage} Creation of synthetic data for word images can be approached in two ways: (i) rendering words using available font classes~\cite{JaderbergSVZ14,CamposBV09,WangWCN12}, and (ii) learning latent model parameters to separate style and content, and then modifying the style aspects alone for different variations. Though the latter approach seems promising, with success in generating characters, synthesizing word images is still a challenging task. In this work, we extend the former approach, owing to the recent availability of handwritten fonts, and manipulate the rendered images to simulate the natural variations present in the handwritten domain. \begin{figure}[t] \centering \includegraphics[height=2.8cm,width=8.5cm]{Images/HW-Words-Synth_cropped.pdf} \caption{Sample synthetic handwritten word images rendered in this work. Notice the variability of each word image, which mimics the natural writing process.} \label{fig:varImages} \end{figure} \subsection{Handwritten Font Rendering} \label{subsec:fonts} We use publicly available handwritten fonts for our task. The vocabulary of words is chosen from a dictionary. For each word in the vocabulary, we randomly sample a font and render\footnote{We use ImageMagick for the rendering of word images. URL: \url{http://www.imagemagick.org/script/index.php}} its corresponding image. During this process, we vary the following parameters, drawn from a defined distribution: (i) the kerning level (inter-character space), and (ii) the stroke width. In order to make the pixel distributions of both the foreground ($F_g$) and background ($B_g$) regions more natural, we sample the pixels of both regions from Gaussian distributions whose parameters (mean and standard deviation) are learned from the $F_g$ and $B_g$ regions of the IAM dataset. Finally, Gaussian filtering is applied to smooth the rendered image. It is well known that natural handwriting is also affected by many factors other than the inherent writing style of the writer, such as: (i) the angle at which the writing medium is placed, (ii) the speed of writing, and (iii) the friction between the pen and the writing medium, along with other motor functions associated with hand movement. In the current work, we limit ourselves to affine transformations to capture a few of these variations. We apply a random amount of rotation ($\pm 5$ degrees) and shear ($\pm 0.5$ degrees along the horizontal direction), and perform translation in the form of padding on all four sides to simulate the incorrect segmentation of words.
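A minimal sketch of the above pipeline is shown below, written with the Pillow and NumPy libraries purely for illustration (our implementation uses ImageMagick, as noted above; the parameter values and the \texttt{render\_word} interface are assumptions of the sketch, not the released code).
\begin{verbatim}
import random
import numpy as np
from PIL import Image, ImageDraw, ImageFilter, ImageFont

def render_word(word, font_paths, fg_stats=(60, 15), bg_stats=(215, 12)):
    """Render one synthetic word image (kerning and stroke-width
    variation omitted for brevity)."""
    font = ImageFont.truetype(random.choice(font_paths), size=64)
    img = Image.new("L", (400, 96), color=255)
    ImageDraw.Draw(img).text((10, 10), word, font=font, fill=0)
    # Resample foreground/background pixels from Gaussians whose mean and
    # standard deviation would be estimated from the IAM dataset.
    a = np.array(img, dtype=float)
    fg = a < 128
    a[fg] = np.random.normal(*fg_stats, size=fg.sum())
    a[~fg] = np.random.normal(*bg_stats, size=(~fg).sum())
    img = Image.fromarray(np.clip(a, 0, 255).astype(np.uint8))
    # Smooth, then apply small random rotation and horizontal shear.
    img = img.filter(ImageFilter.GaussianBlur(radius=1))
    img = img.rotate(random.uniform(-5, 5), expand=True, fillcolor=215)
    shear = np.tan(np.radians(random.uniform(-0.5, 0.5)))
    return img.transform(img.size, Image.AFFINE,
                         (1, shear, 0, 0, 1, 0), fillcolor=215)
\end{verbatim}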
\subsection{IIIT-HWS dataset} \label{subsec:hwsynth} Inspired by~\cite{JaderbergSVZ14}, to address the lack of data for handwritten images, we release the IIIT-HWS dataset, comprising nearly 9M synthetic word images rendered from 750 publicly available handwritten fonts. We use 90K unique words as the vocabulary, picked from the popular open source English dictionary Hunspell. For each word in the vocabulary, we randomly sample 100 fonts and render the corresponding images, following the post-processing steps described in the previous section. Figure~\ref{fig:varImages} shows some sample rendered word images using handwritten fonts, which simulate actual handwritten words. \section{Conclusion} \label{sec:conc} In this work, we propose a framework to render large scale synthetic data for handwritten images. We also release the IIIT-HWS dataset, a 9M word image corpus, for the purpose of training deep neural networks, which would enable learning better models for handwritten word spotting and recognition tasks. In the current work, we have not addressed how to include the cursive property present in word images. As future work, we plan to address this issue along with additional augmentation schemes such as elastic distortion. \bibliographystyle{splncs}
\section{Introduction: Causal cognition and its vulnerabilities}\label{Sec:Intro} \input{1-cogsci-problem} \section{Background: Causality theories and models}\label{Sec:Causality} \input{2-cogsci-background} \section{Approach: Causes in boxes, tied with strings}\label{Sec:Category} \input{3-cogsci-foreground} \section{Modeling: Causal models as causal factors}\label{Sec:Expl} \input{4-cogsci-insight} \section{Construction: Self-confirming causal models}\label{Sec:Factors} \input{5-cogsci-self} \section{Summary: Towards artificial causal cognition?}\label{Sec:Outro} \input{8-cogsci-outro} \addcontentsline{toc}{part}{References} \subsection{Causal cognition in life and science}\label{Sec:IntroOne} Causal cognition drives the interactions of organisms and organizations with their environments at many levels. At the level of perception, spatial and temporal correlations are often interpreted as causations, and directly used for predictions \cite{Michotte}. At the level of science, testing causal hypotheses is an important part of the practice of experiment design \cite{fisher1935design}, although the general concept of causation seldom occurs explicitly in scientific theories \cite{RussellB:cause}. It does occur, of course, quite explicitly in metaphysics and in the theory of science \cite{bunge2012causality}, and most recently in AI\footnote{\emph{AI}\/ is usually interpreted as the acronym of \emph{Artificial Intelligence}. But the discipline given that name in 1956 by John McCarthy has evolved in the meantime in many directions, some of which are hard to relate to any of the usual meanings of the word "intelligence". We are thus forced either to keep updating the concept of "intelligence" to match the artificial versions, or to unlink the term "AI" from its etymology, and to use it as a word rather than as an acronym, as happened with the words gif, captcha, gulag, or snafu. The latter choice seems preferable, at least in the present context.}\cite{pearl2009causality,spirtes2000causation}. \subsection{Causal cognition and launching effects} Causal theories may or may not permeate science (depending on how you look at it), but they certainly permeate life. Why did the ball bounce? Why did my program crash? Why did the boat explode? Who caused the accident? Why did the chicken cross the street? We need to know. We seek to debug programs, to prevent accidents, and to understand chicken behaviors. We often refuse to move on without an explanation. Lions move on, mice run away, but humans want to understand, to predict, to control. In some cases, the path to a causal hypothesis and the method to test it are straightforward. When a program crashes, the programmer follows its execution path. The cause of the crash must be on it. The causal hypothesis is tested by eliminating the cause and verifying that the effect has been eliminated as well. \begin{figure}[ht!] \begin{center} \includegraphics[height=3.8cm]{newton.epsf} \hspace{3em}\includegraphics[height=3.8cm]{action-movie.eps} \caption{Launching effects} \label{Fig:boats} \end{center} \end{figure} If a ball bounces like in Newton's cradle, in Fig.~\ref{Fig:boats} on the left, we see that the ball was launched by another ball. If a boat explodes like in Fig.~\ref{Fig:boats} on the right, we see that the explosion was caused by another boat crashing. Action movies are chains of causes and effects, packed between a problem and a solution, which are usually presented as a big cause and a big effect.
In other cases, establishing a causal hypothesis may be hard. It is obvious that the collision caused the explosion; but who caused the collision? It is obvious that the bouncing ball extends the trajectory of the hitting ball; but how is the force transferred from the hitting ball to the bouncing ball through all the balls in-between, which do not budge at all? Such questions drive our quest for causes, and our fascination with illusions. They drive us into science, and into magic. In a physics lab, a physics teacher would explain the physical law behind Newton's cradle. But in a magic lab, a magic teacher would present a magic cradle, looking exactly like Newton's cradle, but behaving differently. One time the hitting ball hits, and the bouncing ball bounces. Another time the hitting ball hits, but the bouncing ball does not bounce. But later, unexpectedly, the bouncing ball bounces all on its own, without the hitting ball hitting. Still later, the bouncing ball bounces again, and the hitting ball hits \emph{after}\/ that. Magic! Gradually you understand that the magic cradle is controlled by the magic teacher: he makes the balls bounce or stick at will. Michotte studied such \emph{"launching effects"}\/ in his seminal book \cite{Michotte}. The action movie industry is built upon a wide range of techniques for producing such effects. While the example in Fig.~\ref{Fig:boats} on the right requires pyrotechnic engineering, the example in Fig.~\ref{Fig:ali} on the right is close to Michotte's lab setups. \begin{figure}[h!t] \begin{center} \includegraphics[height=4cm]{muhammad-london.eps}\qquad \qquad \includegraphics[height=4cm]{john-huston-errol-flynn-fake.eps} \caption{The cause of an effect may appear obvious, but the appearance may deceive} \label{Fig:ali} \end{center} \end{figure} Observing subtle details, we can usually tell a real effect apart from an illusion. But illusions are often more exciting, or less costly, and we accept them. We enjoy movies, magic, superstition. We abhor randomness and complexity, and seek the simplest causal explanations. We like to be deceived, and we deceive ourselves. \subsection{Causal cognition as a security problem}\label{Sec:security} Cognitive bias and belief perseverance are familiar and well-studied properties of human cognition, promoted in evolution because they strengthen group cohesion \cite{CogBias-handbook}. As human interactions increasingly spread through media, human beliefs are hijacked, their biases are manipulated, and social cohesion is amplified and abused at ever larger scales. This is also well-known and well-studied in psychology \cite[Part~VI]{buss2015handbook-2} and in the social sciences \cite{zuboff2019age}. However, on the information technology side, cognitive bias manipulations on the web market and in cyberspace have been so profitable in practice that there has been no time for theory. We live in times of advertising and publicity campaigns. Brands and crowds are built and steered using the same tools, and from the same materials: human cognition and sensitivities. The AI tools, recently added to the social engineering toolkit, appear to be among the most effective ones, and among the least understood. The practice is far ahead of theory.
The research reported in this paper is a part of a broader effort \cite{PavlovicD:CathyFest,PavlovicD:NSPW11,PavlovicD:HoTSoS15,PavlovicD:ICDCIT12,PavlovicD:AMAI17} towards developing models, methods, and tools for this area, where human and computational behaviors are not just inseparable, but determine each other; and they do not just determine each other, but seek to control each other. The specific motivating question that we pursue is: \emph{Is artificial causal cognition suitable as a tool to defend human causal cognition from the rapid strategic advances in cognitive bias manipulation, influencing, and deceit?} There are, of course, many aspects of this question, and many approaches to it. The combined backgrounds of this paper's authors provide equipment only for scaling the south-west ridge: where the west cliff of \emph{computation}\/ meets the south cliff of \emph{communication}. Neither of us is equipped to place or interpret our model and results in the context of the extant research in psychology or the social sciences, where they should be empirically validated. We are keen to present it to a mixed audience, hoping that there are gates between the silos. Rather than attempt to shed light on the problem from a direction that is unfamiliar to us, we are hoping to shed light on the problem from a direction that is unfamiliar to the reader, while doing our honest best to avoid throwing any avoidable shadows. If the reader sheds some light from their direction, this multi-dimensional problem may emerge in its multi-disciplinary space. \subsection{A very brief history of causality}\label{Sec:history} Causal relations impose order on the events in the world: an event $a$ causes an event $b$, which causes an event $c$, whereas the event $d$ is unrelated to $a$ or $b$, but may contribute to $c$. An event may have multiple causes, and multiple effects, and they may all be distant from one another in time and in space. We impose order on the world by thinking in terms of causes and effects. But connecting causes and effects also causes difficulties. In Fig.~\ref{Fig:boats}, the force on one end of Newton's cradle causes the effect on the other end without affecting anything in-between; whereas the boats collide and are subsequently engulfed in the explosion, but the explosion is not caused by the collision: it is staged. Tellingly, such decouplings are called \emph{special effects}. Our eye distrusts the physical effect on the left, and accepts the special effect on the right. Recognizing such remote causations and adjacent non-causations requires learning about the unobservable causes of the observable effects that fill the world, about the unobservable effects of observable causes, and even about the unobservable causes of unobservable effects. Imposing the causal order on the world forces us to think about the unobservable, whether black holes or spirits of our ancestors. While acting upon some causes to prevent some effects is a matter of individual adaptation and survival for humans and androids, plants and animals, understanding the general process of causation has been a matter of civilization and cognition. While the efforts in the latter direction surely go back to the dawn of mankind, the early accounts go back to pre-Socratic philosophy, and have traditionally been subsumed under \emph{Metaphysics}, which was the (post-Socratic) title of Aristotle's manuscript that came \emph{after} (i.e. it was the \emph{"meta-"} to) his Physics \cite{Aristotle:Physics}.
We mention just three paradigmatic steps towards the concept of causation: \begin{enumerate}[label=\roman*), labelindent=\parindent] \item {Parmenides:}\/ "Nothing comes from nothing." \item {Heraclitus:}\/ "Everything has a cause, nothing is its own cause." \item {Aristotle:}\/ "Everything comes from an {\em Uncaused Cause}." \end{enumerate} Step (i) thus introduces the principle of causation; step (ii) precludes causal cycles, and thus introduces the problem of \emph{infinite regression}\/ through causes of causes; and step (iii) resolves this problem by the argument\footnote{"It is clear, then, that though there may be countless instances of the perishing of unmoved movers, and though many things that move themselves perish and are succeeded by others that come into being, and though one thing that is unmoved moves one thing while another moves another, nevertheless there is something that comprehends them all, and that as something apart from each one of them, and this it is that is the cause of the fact that some things are and others are not and of the continuous process of change; and this causes the motion of the other movers, while they are the causes of the motion of other things. Motion, then, being eternal, the First Mover, if there is but one, will be eternal also; if there are more than one, there will be a plurality of such eternal movers." \cite[258b--259a]{Aristotle:Physics}} that in Christian theology came to be interpreted as the \emph{cosmological proof} of existence and uniqueness of God. Just like the quest for causes of particular phenomena leads to magic and superstition, the quest for a general Uncaused Cause leads to monotheistic religion. This logical pattern persists in modern cosmology, where the Uncaused Cause arises as the initial gravitational singularity, known as the Big Bang. Although it is now described mathematically, it still spawns untestable theories. The singularity can be avoided using mathematical devices, inflating and smoothening the Uncaused Cause into a field; but preventing its untestable implications requires logical devices. While modern views of a \emph{global}\/ cause of the world may still appear somewhat antique, the modern theories of \emph{local}\/ causations evolved with science. The seed of the modern idea that causes can be separated and made testable through \emph{intervention}\/ \cite{GopnikA:learning,SlomanS:choice,woodward2005making} was sown by Galileo: "That and no other is to be called cause, at the presence of which the effect always follows, and at whose removal the effect disappears" (cited in \cite{bunge2012causality}). As science progressed, philosophers echoed the same idea in their own words, e.g., "It is necessary to our using the word cause that we should believe not only that the antecedent always has been followed by the consequent, but that as long as the present constitution of things endures, it always will be so" \cite{mill1904system}. But David Hume remained unconvinced: "When I see, for instance, a billiard-ball moving in a straight line towards another; even suppose motion in the second ball should by accident be suggested to me, as the result of their contact or impulse; may I not conceive, that a hundred different events might as well follow from the cause? [\ldots] The mind can never possibly find the effect in the supposed cause, by the most accurate scrutiny and examination. For the effect is different from the cause, and can never be discovered in it." 
\cite[4.10]{Hume:enquiry} Hume's objections famously shook Immanuel Kant from his metaphysical "dogmatic slumber", and they led him on the path of \emph{"transcendental deduction"}\/ of effects from causes, based on \emph{synthetic a priori}\/ judgements, which are to be found mostly in mathematics \cite{KantI:critique}. Scientists, however, never managed to prove any of their causal hypotheses by pure reason, but took over the world by disproving their hypotheses in lab experiments, whereupon they would always proceed to formulate better causal hypotheses, which they would then try to disprove again, and so on. The imperative of falsifiable hypotheses, and the idea of progress by disproving them empirically, suggested the end of metaphysics, and of causality, and it prompted Bertrand Russell to announce in 1912: "We shall have to give up hope of finding causal laws such as Mill contemplated, as any causal sequence which we have observed may at any moment be falsified without a falsification of any laws of the kind that the actual sciences aim at establishing. [\ldots] All philosophers of every school imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced science such as [general relativity], the word 'cause' never occurs. [\ldots] The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm." \cite{RussellB:cause} \subsection{A very high-level view of causal models}\label{Sec:models} In spite of the logical difficulties, the concept of causation persisted. General relativity resolved the problem of action at a distance, which hampered the Newtonian theory of gravitation, by replacing causal interactions with fields; but the causal cones continued to mushroom through the spacetime of special relativity, even along the closed timelike curves. In quantum mechanics, causality was one of the main threads in the discussions between Einstein and Bohr \cite{Bohr51}. It was also the stepping stone into Bohm's quantum dialectical materialism \cite{BohmD:causality,BohmD:letters}. Last but not least, causality remains central in the modern axiomatizations of quantum mechanics \cite{Ciribella,CoeckeB:CQM-caus}, albeit in a weak form. Nevertheless, even the weakest concept of causality implies that the observables must be objective, in the sense that the outcomes of measurements should not depend on subjective choices, which precludes covert signaling. The most influential modern theory of causation arose from Judea Pearl's critique of subjective probability, which led him to \emph{Bayesian networks} \cite{PearlJ:85,pearl2014probabilistic}. As a mathematical structure, a Bayesian net can be thought of as an extension of a Markov chain, where each state is assigned a random variable, and each transition represents a stochastic dependency. Extending Markov chains in this way, of course, induces a completely different interpretation, which is stable only if there are no cycles of state transitions. Hence the causal ordering. Originally construed as an AI model, Bayesian networks turned out to be a useful analysis tool in many sciences, from psychology \cite{Gopnik:bayes} to genetics and oncology \cite{KollerD-Segal}. Broad overviews of the theory and of the immense range of its applications can be found in \cite{koller2009probabilistic,pearl2009causality,spirtes2000causation}.
Very readable popular accounts are \cite{PearlJ:CACM,pearl2018book}, and \cite{SlomanS:Book} is written from the standpoint of psychology. The causal models imposing the acyclicity requirement on the dependencies between random variables opened an alley towards an algorithmic approach to causal reasoning, based on dependency discovery: given some input random variables and some output random variables, specify a network of unobservable random variables whereby the given inputs can cause the given outputs. A good introduction to such discovery algorithms is \cite{spirtes2000causation}. The applications have had a tremendous impact. A skeptic could, of course, argue that inserting a causal diagram between a cause and an effect is just as justified as inserting Kant's synthetic \emph{a priori}\/ judgements. A control theorist could also argue that the canonical language for specifying dependencies between random variables was provided by stochastic calculus a long time ago. They were not presented as elegantly as in Bayesian nets, but they were calculated and graphed. On the other hand, the dependencies modeled in control theory are not required to be acyclic. This is, of course, essential, since the acyclicity requirement precludes feedback. Is feedback acausal? A simple physical system like a centrifugal governor\footnote{A centrifugal governor consists of a pair of rotating weights, mounted so as to open a valve proportionally to their rotation speed. Such governors were used to control pressure in steam engines, and before that on windmill stones. Bayesian nets require an added dimension of time to model such things, basically unfolding the feedback!} obviously obeys the same laws of causation as its feedback-free cousins: an increase in angular momentum causes the valve lever to rise; a wider opening lets more steam out and decreases the pressure; a lower pressure causes a decrease in angular momentum. The only quirk is thus that some physical effects cause physical effects on their causes, in continuous time. Whether that is causation or not depends on the modeling framework. Be that as it may, our very short story about the concepts and the models of causality ends here. The upshot is that \textbf{\emph{there are effective languages for explaining causation}}. The availability and effectiveness of such languages is the central \textbf{assumption} on which the rest of the paper is based. Such languages, effective enough to explain everything, have been provided by metaphysics, and by transcendental deduction, and by Bayesian networks, and by stochastic processes. There are other such languages, and there will be more. The rest of our story proceeds the same for any of them.
\subsection{String diagrams}
\begin{figure}[ht]
\begin{center}
% [Figure: two string diagrams --- the box match process on the left
%  (PICS/process.tex) and the course example on the right (PICS/process-1.tex),
%  with labeled input strings (Course Hardness, Student Work, Class Activity)
%  and output strings (Grade Report, Reference Letter)]
\caption{String diagrams: causation flows upwards}
\label{Fig:strings}
\end{center}
\end{figure}
Henceforth, we zoom out, hide the "innards" of causal models, and study how they are composed and decomposed. Towards this goal, {{causal processes}} are presented as \emph{string diagrams}, like in Fig.~\ref{Fig:strings}. String diagrams consist of just two graphic components: \begin{itemize} \item \textbf{\emph{strings}} --- representing \emph{types}, and \item \textbf{\emph{boxes}} --- representing causal \emph{processes}. \end{itemize} Formally, types and processes are the basic building blocks of a causal modeling language: e.g., they may correspond to random variables and stochastic processes. Informally, a type can be thought of as a collection of events. In a causal process, enclosed in a box, the events of the input type, corresponding to the string that enters the box at the bottom, cause the events of the output type, corresponding to the string that exits the box at the top. Time just flows upwards in our string diagrams. We call the types consumed and produced by a causal process inputs and outputs (and not causes and effects) to avoid the ambiguities arising from the fact that the events produced by one process as effects may be consumed by another process as causes. The input and the output type may be the same. The diagram in Fig.~\ref{Fig:strings} on the left is meant to convey an idea of what is what.
Presenting {{causal processes}} as boxes allows us to abstract away the details when we need a high-level view, and also to refine the view as needed, by opening some boxes and displaying more details. This is similar to the mechanism of virtual functions in programming: we specify the input and the output types, but postpone the implementation details. A refined view of a {{causal process}} in an open box may be composed of other boxes connected by strings. An example\footnote{This has been a running example in causal modeling textbooks and lectures at least since \cite{koller2009probabilistic}.} is in Fig.~\ref{Fig:strings} on the right. The types of the strings coming in and out show that \emph{grades} and \emph{reference letters}, as events produced as effects of the {{causal process}} \textbf{course}, are causally determined by \emph{students' work} and \emph{class activities}, as well as by the \emph{hardness}\/ of the course itself. All these causal factors are viewed as events of suitable types. When we open the \textbf{course} box, we see that this {{causal process}} is composed from two simpler {{causal processes}}: one is \textbf{exam and homework}, the other \textbf{reference request}. Each of them inputs two causal factors at the bottom; \textbf{exam and homework} outputs two effects, whereas \textbf{reference request} outputs one. Each \emph{grade}, as an event of type Grade Report, is caused both by the \emph{course hardness} and by the \emph{student work}; whereas the \emph{reference letters} are caused (influenced, determined\ldots) by the \emph{class activity} and by the course \emph{record}, which is again an effect of the \emph{student work}\/ and of the \emph{course hardness} in the process of \textbf{exam and homework}. The causal dependencies between the random variables corresponding to the types Grade Report and Reference Letter, and their dependencies on the random variables corresponding to Course Hardness, Student Work, and Class Activity, are thus displayed in the string diagram inside the \textbf{course} box. For a still more detailed view of the causal relations, we could zoom in further, and open the boxes \textbf{exam and homework} and \textbf{reference request}, to display the dependencies through some other random variables, influenced by some other {{causal processes}}. The causes corresponding to the input types are assumed to be independent of each other. More precisely, the random variables corresponding to the types Course Hardness, Student Work, and Class Activity are assumed to be statistically independent. On the other hand, the causal dependencies of the random variables corresponding to Grade Report, Record, and Reference Letter are displayed in the diagram. E.g., the content of a Reference Letter is not directly caused by Course Hardness, but it indirectly depends on it, since the student performance in the Record depends on it, and the Record is taken into account in the Reference Letter.
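To make the wiring of the \textbf{course} box concrete, here is a minimal Python sketch of it as a composite of its two inner boxes. The particular scoring rules and noise terms are invented for illustration only; what matters is that Reference Letter depends on Course Hardness only through Record.
\begin{verbatim}
import random

def exam_and_homework(hardness, work):
    """Inner box: (Course Hardness, Student Work) -> (Grade Report, Record)."""
    record = work - hardness + random.gauss(0, 0.1)      # noisy effect
    grade = "A" if record > 0.5 else "B" if record > 0 else "C"
    return grade, record

def reference_request(activity, record):
    """Inner box: (Class Activity, Record) -> Reference Letter."""
    tone = activity + record + random.gauss(0, 0.1)
    return "strong letter" if tone > 1 else "lukewarm letter"

def course(hardness, work, activity):
    """Outer box: the composite course process of the string diagram."""
    grade, record = exam_and_homework(hardness, work)
    return grade, reference_request(activity, record)

# The three causes are drawn independently; repeated runs show uncertainty.
print(course(hardness=0.7, work=0.9, activity=0.8))
\end{verbatim}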
The abstract view of the {{causal process}} \textbf{box match} on the left could be refined in a similar way. The causes are boxers' actions, the effects are boxers' states. The images display a particular cause, and a particular effect, of these types. The direct cause of the particular effect displayed on top is a particular blow, which is only visible inside the box. The cause at the bottom is the boxer's decision to deliver that particular blow. The {{causal process}} transforming this causal decision into its effect can be refined all the way to distinguishing good boxing from bad boxing, and to analyzing the causes of winning or losing. Spelling out a process corresponding to the \textbf{box match} on the right in Fig.~\ref{Fig:ali} might be even more amusing. While the strings outside the box would be the same, the causal dependencies inside the box would be different, as boxers' states are not caused by their blows, but feigned; and the blows are not caused by boxers' decisions, but by the movie director's requests. \subsection{A category of {{causal processes}}}\label{Sec:category} The string diagrams described in the previous section provide a graphic interface for any causal modeling language, irrespective of its details, displaying only how {{causal processes}} are composed. It is often convenient to arrange compositional structures into \emph{categories} \cite{MacLaneS:CWM}. The categorical structures capturing causation turn out to lend themselves to string diagrams \cite{Coecke-Spekkens:Bayes,CoeckeB:book,JacobsB:causal}. For the most popular models, as mentioned above, the strings correspond to random variables, and the boxes to parametrized stochastic processes. The strings are thus the objects, and the boxes the morphisms, of a monoidal category. When no further constraints are imposed, this category can be viewed as the coslice, taken from the one-point space, of the category of free algebras for a suitable stochastic monad. For a categorical account of full stochastic calculus, the monad can be taken over measurable spaces \cite{Culbertson-Sturtz,GiryM:monad}. For a simplified account of the essential structures, the convexity monad over sets may suffice \cite{JacobsB:causal}. For our purpose of capturing causal explanations \emph{and}\/ taking them into account as causal factors, we must deviate from the extant frameworks \cite{Coecke-Spekkens:Bayes,Culbertson-Sturtz,GiryM:monad,JacobsB:causal}\footnote{There is an entire research area of categorical probability theory, steadily built up since at least the 1960s, with hundreds of references. The cited papers are a tiny random sample, which an interested reader could use as a springboard into further references, in many different flavors.} in two ways. One is minor, and well tried out: the randomness is captured by \emph{sub}\/probability measures, like in \cite{PanangadenP:book}, to account for nontermination. The other is that {{causal processes}} must be taken up to \emph{stochastic indistinguishability}, for the reasons explained in \cite[Ch.~4]{spirtes2000causation}. \paragraph{Notation.} In the rest of the paper, we fix an abstract universe ${\cal C}$ of {{causal processes}}, which happens to be a strict monoidal category \cite[Sec.~VII.1]{MacLaneS:CWM}. This structure is precisely what the string diagrams display. The categorical notation collects all strings together, in the family of event \emph{types} $|{\cal C}|$; and for any pair of types $A,B \in |{\cal C}|$ it collects all boxes together, in the family ${\cal C}(A,B)$ of causal \emph{processes}.
In summary, \begin{itemize} \item \textbf{\emph{strings}} form the family of \emph{types} \begin{eqnarray*} |{\cal C}| & = & \{A, B, C\ldots, \mbox{Student Work}, \mbox{Grade Report}\ldots\} \end{eqnarray*} \item \textbf{\emph{boxes}} form, for each pair $A,B\in |{\cal C}|$, the family of \emph{processes} \begin{eqnarray*} {\cal C}(A,B) & = & \{f, g, t, u,\ldots, \mbox{\bf box match}, \mbox{\bf course}\ldots\} \end{eqnarray*} \end{itemize} The compositional structure is briefly described in the Appendix. \subsection{Modeling causal models} The experimenter has been recognized as a causal factor in experiments since the onset of science. Already Galileo's concept of causality, mentioned in Sec.~\ref{Sec:Causality}, requires that the experimenter manipulates a cause in order to establish how the effect depends on it. In modern theories of causation, this idea is refined to the concept of \emph{intervention}\/ \cite{HagmayerY:intervention}. In quantum mechanics, the effects of measurements depend not only on the causal freedom of the measured particles, but also on the experimenters' own causal freedom \cite{BohrN:37,conway2009strong}. As mentioned in Sec.~\ref{Sec:Intro}, similar questions arise even in magic: if the magician manipulates the effects, what causes the magician's manipulations? But if the magician and the experimenter are {{causal processes}} themselves, then they also occur, as boxes among boxes, in the universe ${\cal C}$ of {{causal processes}}. And if the experimenter's causal hypotheses are caused by some causal factors themselves, then the universe ${\cal C}$ contains, among other types, a distinguished type of causal models $\Omega$, where the experimenter outputs his hypotheses. For instance, $\Omega$ can be the type of our string diagram models, like in Fig.~\ref{Fig:strings}. Or $\Omega$ can be the type of (suitably restricted) stochastic processes; or of Bayesian nets, including the running example \cite[Fig.~3.3]{koller2009probabilistic}, on which Fig.~\ref{Fig:strings} is based.\footnote{One of the coauthors of this paper thinks of $\Omega$ as typing the well-formed expressions in any of the suitable modeling languages, discussed in Sec.~\ref{Sec:models}. The other coauthor is inclined to include a wider gamut of causal theories, including those touched upon in Sec.~\ref{Sec:history}. Different theories may not only describe different models, but also prescribe different interpretations.} There are many different languages for describing different causations, and thus many different ways to think of the type $\Omega$. Psychologists' descriptions of causal cognition differ from physicists' view of the {{causal process}} of experimentation. What is their common denominator? \emph{What distinguishes $\Omega$ from other types?} In Sec.~\ref{Sec:Axioms}, we attempt to answer this question by specifying a structure, using three simple axioms, which should be carried by the type $\Omega$ in all frameworks for causal modeling. The other way around, we take this structure to mean that a framework where it occurs describes causal modeling; that the blind men talking about different parts of an elephant are talking about the same animal. This is where the \textbf{assumption} stated at the end of Sec.~\ref{Sec:Causality} is used. But stating and using the axioms requires two basic syntactic features, which are explained next.
\subsection{Parametrizing and steering models and processes}\label{Sec:Params} Suppose that we want to explore how learning environments influence the {{causal process}} \textbf{course} from Fig.~\ref{Fig:strings}. How does the causal impact of Course Hardness change depending on the \emph{school}\/ where a \textbf{course} takes place? Abbreviating for convenience the cause and the effect types of the {{causal process}} \textbf{course} to \begin{eqnarray*} {\rm Causes} & = & {\rm Course\ Hardness\otimes Student\ Work \otimes Class\ Activity}\\ {\rm Effects} & = & {\rm Grade\ Report \otimes Reference\ Letter} \end{eqnarray*} we now have \begin{itemize} \item a family of {causal processes}\ ${\rm Causes}\ \tto{\ {\rm \bf course}\, (school)\ } \ {\rm Effects}$, indexed over\\ $school \in$ Schools, or equivalently \item a parametric {{causal process}} $ {\rm Schools}\otimes {\rm Causes} \ \tto{{\rm \bf course}} \ {\rm Effects}$. \end{itemize} In the first case, each {causal process}\ \textbf{course}\emph{(school)} consumes the causal inputs in the form \textbf{course}\emph{(school)(hardness, work, activity)}; whereas in the second case, all inputs are consumed together, in the form \textbf{course}\emph{(school, hardness, work, activity)}. The difference between the parameter \emph{school}\/ and the general causal factors $hardness, work, activity$ is that the parameter is \emph{deterministic}, whereas the general causal factors influence processes with some \emph{uncertainty}. This means that entering the same parameter $school$ always produces the same {{causal process}}, whereas the same causal factors $hardness, work, activity$ may have different effects in different samples. The convenience of internalizing the indices and writing indexed families as parametric processes is that steering processes can then be internalized as reparametrizing. E.g., a function $\varsigma: {\rm Teachers} \to {\rm Schools}$, mapping each \emph{teacher}\/ to the \emph{school}\/ where they work, induces the reindexing of families \[\prooftree {\rm Causes}\ \tto{\ {\rm \bf course}\, (school)\ } \ {\rm Effects}\hspace{3em} school \in {\rm Schools} \justifies {\rm Causes}\ \tto{\ {\rm \bf course}\, \left(\varsigma\, (teacher)\right)\ } \ {\rm Effects}\hspace{2em} teacher \in {\rm Teachers} \endprooftree \] which can however be captured as a single causal process in the universe ${\cal C}$ \[ {\rm Teachers}\otimes {\rm Causes} \tto{\ \varsigma\ \otimes \ {\rm Causes}\ } {\rm Schools}\otimes {\rm Causes} \ \tto{\ {\rm \bf course}\ } \ {\rm Effects} \] where the causal factor $\varsigma$ happens to be deterministic. In general, an arbitrary {{causal process}} $Y \otimes A \tto{\ {\rm \bf process}\ } B$ can be steered along an arbitrary function ${\tt steer}:X \to Y$, viewed as a deterministic process: \[ X \otimes A \tto{\ \tt steer\, \otimes\, A\ } Y \otimes A \tto{\ {\rm \bf process}\ } B\] Deterministic functional dependencies are characterized in string diagrams in Appendix~\ref{Sec:functions}.
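In programming terms, steering is just precomposition with a deterministic function. A minimal Python sketch, with invented school and teacher data, is given below; the deterministic parameter is an ordinary argument, while the uncertainty of the causal factors is represented by the random draw inside the process.
\begin{verbatim}
import random

def course(school, hardness, work, activity):
    """Parametric process Schools (x) Causes -> Effects: the school
    parameter is deterministic; the causal factors act with uncertainty."""
    bonus = {"Springfield High": 0.1, "Shelbyville High": -0.1}[school]
    score = work + activity - hardness + bonus + random.gauss(0, 0.1)
    return "A" if score > 1 else "B"

# A deterministic function sigma : Teachers -> Schools ...
SIGMA = {"Ms. K": "Springfield High", "Mr. L": "Shelbyville High"}

# ... induces the steered process  course o (sigma (x) Causes).
def steered_course(teacher, hardness, work, activity):
    return course(SIGMA[teacher], hardness, work, activity)

print(steered_course("Ms. K", hardness=0.6, work=0.8, activity=0.7))
\end{verbatim}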
\subsection{Axioms of causal cognition}\label{Sec:Axioms} Every framework for causal modeling must satisfy the following axioms: \begin{enumerate}[label = {\bf \Roman* :}] \item Every causal model models a unique {{causal process}} (over the same parameters). \item Every {{causal process}} has a model (not necessarily unique). \item Models are preserved under steering. \end{enumerate} \subsubsection{Axioms formally} We view a causal model as a family $P(y) \in \Omega$, indexed by $y\in Y$, i.e. as a parametrized model $Y \tto{\ P\ } \Omega$. The notation introduced in Sec.~\ref{Sec:category} collects all $Y$-parametrized causal models in the set of processes ${\cal C}(Y, \Omega)$. The parameter type $Y$ is arbitrary. On the other hand, all $Y$-parametrized causal processes $Y \otimes A \tto p B$, where the events of type $A$ cause events of type $B$, are collected in the set ${\cal C}(Y \otimes A, B)$. \paragraph{Axiom I} postulates that every parametrized causal model $Y\tto{P} \Omega$ induces a unique causal process $Y\otimes A\tto{\Semantics{P}} B$ with the same parameters $Y$. A causal modeling framework is thus given by a family of \emph{prediction}\/ maps \begin{eqnarray}\label{eq:Semantics} {\cal C}(Y, \Omega) & \tto{\Semantics{-}} & {\cal C}(Y \otimes A, B) \end{eqnarray} indexed over all types $Y, A$ and $B$. \paragraph{Axiom II} says that the prediction maps $\Semantics{-}$ are surjective: for every causal process $Y\otimes A\tto p B$ there is a causal model $Y\tto P \Omega$ that predicts its behavior, in the sense that the process $\Semantics{P}$ is \emph{indistinguishable} from $p$, which we write $\Semantics{P}\approx p$. The indistinguishability relation $\approx$ is explained in the next section. \paragraph{Axiom III} says that for any function $\varsigma: X\to Y$ and any causal model $Y\tto{P} \Omega$, reparametrizing $P$ along $\varsigma$ models the steering of the modeled process $\Semantics{P}$ along it, in the sense that \begin{eqnarray}\label{eq:III} \Semantics{X\tto{\varsigma} Y\tto{P} \Omega} & = & X\otimes A \tto{\varsigma \otimes A} Y\otimes A\tto{\Semantics{P}} B \end{eqnarray} \subsubsection{Axioms informally} \paragraph{Axiom I} is a soundness requirement: it says that every causal model models some causal process. If causal processes are viewed as observations, the axiom thus says that for any causal model, we will recognize the modeled process if we observe it. \paragraph{Axiom II} is a completeness requirement: it says that for every {{causal process}} that we may observe, there is a causal model that models it. Can we really model everything that we observe? Yes, but our models are valid, i.e. their predictions are consistent with the behavior of the modeled processes, \emph{only in so far as our current observations go}. Given a process $p$, we can always find a model $P$ whose predictions $\Semantics{P}$ summarize our observations of $p$, so that $\Semantics{P}$ and $p$ are for us \emph{indistinguishable}, i.e. $\Semantics{P}\approx p$. The less we observe, the less we distinguish, and the easier we model. \paragraph{Indistinguishability relations}, be they statistical, computational, or observational, are central in experimental design, in the theory of computation, and in modern theories of causation \cite[Ch.~4]{spirtes2000causation}. The problem of distinguishing between observed processes has been tackled since the early days of statistics by significance testing, and since the early days of computation by various semantical and testing equivalences. Axiom II says that, up to an indistinguishability relation, any observed causal process has a model. It does not say anything about the hardness of modeling. This problem is tackled in research towards \emph{cause discovery algorithms} \cite{spirtes2000causation}.
\paragraph{Axiom III} is a coherence requirement: it says that steering a process $Y\otimes A\tto p B$ along a deterministic function $\varsigma:X\to Y$ does not change the causation, in the sense of \eqref{eq:III}, or \begin{eqnarray}\label{eq:steering} \Semantics P \approx p &\implies & \Semantics{P\circ \varsigma} \approx \big(p\circ (\varsigma\otimes A)\big) \end{eqnarray} \subsection{Universal testing}\label{Sec:universal} Since any causal modeling language can thus be viewed as a type $\Omega$, living in the process universe ${\cal C}$, the family of all causal models $\omega \in \Omega$, trivially indexed over itself, can be represented by the identity function $\Omega\tto{Id} \Omega$, viewed as an $\Omega$-parametrized model. Instantiating $Id$ for $P$ in \eqref{eq:III} (and thus $\Omega$ for $Y$) yields $\Semantics{\varsigma} = \Semantics{Id}\circ(\varsigma \otimes A)$, for any $X\tto\varsigma \Omega$.
\begin{figure}[ht]
\begin{center}
% [Figure: a causal process $p$, with causes of type $A$ and effects of type $B$,
%  is indistinguishable from the universal testing process $\Semantics{Id}$
%  precomposed with a model $P : X \to \Omega$]
\caption{There is an explanation for every causation}
\label{Fig:interpreter}
\end{center}
\end{figure}
\emph{Mutatis mutandis}, for an arbitrary process $p$ and a model $P$ of it, assured by axioms I and II, axiom III thus implies $p \approx \Semantics{Id}\circ(P \otimes A)$, as displayed in Fig.~\ref{Fig:interpreter}. Any causal universe ${\cal C}$, as soon as it satisfies axioms I--III, thus contains, for any pair $A,B$, a \textbf{\emph{universal testing}}\/ process $\Omega\otimes A\tto{\Semantics{Id}} B$, which inputs causal models and tests their predictions, in the sense that $X\otimes A \tto{P\otimes A}\Omega\otimes A \tto{\Semantics{Id}} B$ is indistinguishable from $X \otimes A\tto p B$ whenever $\Semantics{P}$ is indistinguishable from $p$. Universal testing is thus a causal process where the predictions of causal models are derived as their effects. It can thus be construed as a very high-level view of scientific practice; perhaps also as an aspect of cognition.
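Universal testing is essentially an evaluator: it consumes a model together with a cause, and outputs the predicted effect. A minimal Python sketch of this idea (with the type $\Omega$ represented, purely for illustration, by Python callables) is:
\begin{verbatim}
import random

# For this illustration only, causal models are Python callables; a real
# framework would use model descriptions in one of the modeling languages.
def model_P(a):
    """A causal model predicting a noisy effect of the cause a."""
    return a + random.gauss(0, 0.05)

def universal_testing(model, a):
    """Semantics[Id] : Omega (x) A -> B, deriving a model's prediction
    as an effect, by running the model on the cause."""
    return model(a)

# The predictions should be indistinguishable from observations of the
# modeled process p, up to the chosen indistinguishability relation.
print([universal_testing(model_P, 0.3) for _ in range(5)])
\end{verbatim}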
\subsection{Partial modeling}\label{Sec:partial} If a causal process has multiple causal factors, then they can be modeled separately, by treating some of them as model parameters. E.g., a process in the form $Y\otimes X \otimes A \tto r B$ can be viewed as a $Y$-parametrized process with causal factors of type $X\otimes A$, or as a $Y\otimes X$-parametrized process with causal factors of type $A$. The two different instances of \eqref{eq:Semantics}, both surjections by axiom II, then lead to models $Y \tto{R'} \Omega$ and $Y\otimes X \tto{R''} \Omega$, with $r\approx \Semantics{R'}_{Y} \approx \Semantics{R''}_{Y\otimes X}$ by \eqref{eq:Semantics}, and thus $r\approx \Semantics{Id}_{X\otimes A}\circ (R'\otimes X\otimes A) \approx \Semantics{Id}_{A}\circ (R'' \otimes A)$ by Fig.~\ref{Fig:interpreter}.
\begin{figure}[ht]
\begin{center}
% [Figure: the universal testing process $\Semantics{Id}_{X\otimes A}$ is
%  indistinguishable from $\Semantics{Id}_{A}$ precomposed with the partial
%  model $\Xi$, which absorbs the causes of type $X$ as parameters]
\caption{Causes of type $X$ are treated as parameters}
\label{Fig:specializer}
\end{center}
\end{figure}
In particular, taking $r$ to be the universal process $\Omega \otimes X \otimes A \tto{\Semantics{Id}} B$, and interpreting it as an $\Omega\otimes X$-parametrized process, leads to an $\Omega\otimes X$-parametrized model $\Xi$, such that $\Semantics{Id}_{X\otimes A} \approx \Semantics{Id}_{A}\circ (\Xi \otimes A)$, as displayed in Fig.~\ref{Fig:specializer}. \subsection{Slicing models} Using universal testing processes and partial modeling, causal processes can be modeled incrementally, factor by factor, like in Fig.~\ref{Fig:slicing}.
\begin{figure}[htbp]
\begin{center}
% [Figure: a causal process with independent causes is modeled in stages,
%  each stage testing a steered partial model against one further causal factor]
\caption{Separating independent causes allows incremental modeling}
\label{Fig:slicing}
\end{center}
\end{figure}
\subsection{Extensional explanations} If only causes and effects are taken into account, then nothing about explanations is decidable.
\section{Composing and decomposing causal processes} The salient feature of the string diagram presentation of processes is that the two dimensions of string diagrams correspond to two kinds of function composition. This is displayed in Fig.~\ref{Fig:godement}. \begin{figure}[ht] \begin{center} \def0.8{.5} \input{PICS/godement.tex} \vspace{.5\baselineskip} \caption{Sequential composition $g\circ f$ and parallel composition $f\otimes t$} \label{Fig:godement} \end{center} \end{figure} \begin{itemize} \item \emph{\textbf{Sequential composition}} of causal processes corresponds to linking the corresponding string diagrams \emph{vertically}: the effects of the causal process $A\tto f B$ are passed to the causal process $B\tto g C$ to produce the causal process $A\tto{g\circ f} C$; \item \emph{\textbf{parallel composition}}\/ lays causal processes next to each other \emph{horizontally}: the processes $A\tto f B$ and $U\tto t V$ are kept independent, and their composite $A\otimes U\tto{f\otimes t} B\otimes V$ takes each process's causal factors separately, and produces their effects without any interactions between the two. \end{itemize} As categorical structures, these operations are captured as the mappings \begin{eqnarray*} {\cal C}(A,B) \times {\cal C}(B,C) & \tto\circ & {\cal C}(A,C)\\ {\cal C}(A,B) \times {\cal C}(U,V) & \tto\otimes & {\cal C}(A\otimes U,B\otimes V) \end{eqnarray*} \paragraph{Meaning of the sequential composition.} The composite $A\tto{g\circ f}C$ inputs the cause $a\in A$ and outputs the effect $g(f(a))\in C$ of the cause $f(a)\in B$, which is itself the effect of the cause $a\in A$. In summary, we have \begin{gather}\label{eq:circ} \prooftree \prooftree a\in A\qquad f:A\to B \justifies f(a) \in B \endprooftree \quad g: B\to C \justifies g\circ f(a)\ =\ g(f(a))\in C \endprooftree \end{gather} \paragraph{Meaning of the parallel composition and product types.} Since the strings in a string diagram correspond to types, drawing parallel strings leads to product types, like $A\otimes U$, which is the name of the type corresponding to the strings $A$ and $U$ running in parallel. The events of this type are the pairs $<a,u>\in A\otimes U$, where $a\in A$ and $u\in U$. The parallel composite $A\otimes U\tto{f\otimes t} B\otimes V$ can thus be defined as the causal process transforming independent pairs of causes into independent pairs of effects, without any interference between the components: \[ \prooftree \prooftree \prooftree <a,u>\in A\otimes U \justifies a\in A \endprooftree \qquad f:A\to B \justifies f(a) \in B \endprooftree \qquad \prooftree \prooftree <a,u>\in A\otimes U \justifies u\in U \endprooftree \qquad t: U\to V \justifies t(u) \in V \endprooftree \justifies (f\otimes t)<a,u> = \big<f(a), t(u)\big> \in B\otimes V \endprooftree \] \section{Units} \paragraph{Vectors, scalars, covectors.} There are processes where events occur with no visible causes; and there are processes where events have no visible effects.
Such processes correspond, respectively, to string diagrams $c$ and $e$ in Fig.~\ref{Fig:vector-scalar}. There are even processes with no observable causes or effects, like the one represented by the diamond $s$ in the middle of Fig.~\ref{Fig:vector-scalar}. When there are no strings at the bottom, or at the top of a box in a string diagram, we usually contract the bottom side, or the top side, into a point, and the box becomes a triangle. When there are no strings either at the bottom or at the top, then contracting both of them results in a diamond. A diamond with no inputs and outputs may still contain a lot of information. E.g., if causal processes are timed, then reading the time without observing anything else would be a process with no observable causes or effects. If processes are viewed as linear operators, then those that do not consume any input vectors and do not produce any output vectors are scalars. The processes that do not consume anything but produce output are just vectors, since they boil down to their output. The processes that do not produce vectors, but only consume them, are just covectors, or linear functionals. \paragraph{Invisible string: The unit type $I$.} Since every process must have an input type and an output type for formal reasons, in order to fit into a category of processes, a process that does not have any actual causes is presented in the form $I\tto e A$, where $I$ is the \emph{unit type}, satisfying \begin{equation}\label{eq:I} I\otimes A \ =\ A \ =\ A\otimes I\end{equation} for every type $A$. The unit type is thus degenerate, in the sense that any number of its copies can be added to any type, without changing its elements. It is easy to see that it is unique, as the units in algebra tend to be. A process that does not output any effects is then in the form $A\tto c I$. The unit type is the unit with respect to the type product and to the parallel composition of processes, just like 0 is the unit with respect to the addition of numbers. It can be thought of as the type of a single event that never interferes with any other events. Since it is introduced only to make sure that everything has a type, but otherwise has no visible causes or effects, the unit type is usually not drawn in string diagrams, but thought of as an \emph{"invisible string"}. In Fig.~\ref{Fig:vector-scalar}, the invisible string is coming in below $e$, and out above $c$, and on both sides of $s$. \begin{figure}[htbp] \begin{center} \def0.8{.5} \input{PICS/vector-scalar.tex} \caption{String diagrams with invisible strings} \label{Fig:vector-scalar} \end{center} \end{figure} \paragraph{Invisible boxes: The unit processes ${\rm id}_A$.} For every type $A$ there is a unit process $A\tto{{\rm id}_A} A$, called the \emph{identity}\/ of $A$, such that \begin{equation}\label{eq:id} {\rm id}_B\circ f\ =\ f\ =\ f\circ {\rm id}_A\end{equation} holds for every process $A\tto f B$. This property is clearly analogous to \eqref{eq:I}, but has a different meaning: the causal process ${\rm id}_A$ inputs causes and outputs effects, both of type $A$; but it does not modify anything: it just outputs every cause $a\in A$ as its own effect. Since the box corresponding to ${\rm id}_A$ thus just passes to the output string whatever comes at the input string, and there is nothing else in it, we do not draw such a box.
In string diagrams, the boxes corresponding to the identity processes ${\rm id}_A$ can thus be thought of as \emph{"invisible boxes"}. In a sense, the strings themselves play the role of the identities. Because of \eqref{eq:I}, any number of invisible strings can be added to any string diagram without changing its meaning. Because of \eqref{eq:id}, any number of invisible boxes can be added on any string, visible or invisible, without changing its meaning. This justifies eliding the units not only from string diagrams, but also from algebraic expressions, and writing things like \[ f\otimes U\ =\ f\otimes {\rm id}_U \qquad \mbox{ and } \qquad A\otimes t\ =\ {\rm id}_A \otimes t \] With this notation, the algebra of tensor products, studied in serious math, boils down to a single law, from which everything else follows: \section{The middle-two-interchange law} The main reason why string diagrams are convenient is that their geometry captures the \emph{middle-two-interchange}\/ law: \begin{eqnarray}\label{eq:godementLine} (f;g)\otimes(t;s) & = & (f\otimes t);(g\otimes s) \end{eqnarray} Note that both sides of this equation correspond to the same string diagram, displayed in Fig.~\ref{Fig:godement}. The left-hand side of \eqref{eq:godementLine} is obtained by first reading the sequential compositions vertically, and then composing them in parallel horizontally; whereas the right-hand side is obtained by first reading the parallel compositions horizontally, and then composing them in sequence vertically. On events, both readings send the pair $<a,u>$ to $\big<g(f(a)), s(t(u))\big>$, so the law asserts that the order of the two readings is immaterial. Sliding boxes along strings provides an easy way to derive further equations from the middle-two-interchange law of \eqref{eq:godementLine}, such as \[ (f\otimes t);(g\otimes V) \qquad = \qquad (f\otimes U);(B\otimes t);(g\otimes V) \qquad = \qquad (f\otimes U);(g\otimes t) \] In string diagrams, this is just \[ \def0.8{.35} \input{PICS/seq.tex} \qquad\quad=\quad\qquad \input{PICS/seq-1.tex} \qquad\quad=\quad\qquad \input{PICS/seq-2.tex} \] \vspace{3\baselineskip} \section{Functions}\label{Sec:functions} Causations are usually uncertain to some extent: a cause induces an effect with a certain probability. Those causations that are certain, so that the input always produces the output, and the same input always produces the same output, are called \emph{functions}. If causations are presented as stochastic processes, then functions are the subfamily of deterministic processes. An intrinsic characterization of functions in a monoidal framework is provided by the following definitions. A \textbf{data type} is a type $A\in {\cal C}$, equipped with \emph{data services}, i.e.
the operations \begin{equation}\label{eq:comon} I \oot\top A \tto\Delta A\otimes A \end{equation} respectively called \emph{deleting} and \emph{copying}, which satisfy the following equations: \begin{alignat*}{7} \comp{\Delta}{(\Delta \otimes A)} &\ \ =\ \ \comp{\Delta}{(A\otimes \Delta)} &&\qquad&\qquad&\quad& \comp{\Delta}{(\top \otimes A)} &\ \ =\ \ &\ {\rm id}_A &\ \ =\ \ &\ \ \comp{\Delta}{(A\otimes \top)} \\[1ex] \def0.8{.85} \input{PICS/cmn-left.tex}\ &\ \ =\ \ \def0.8{.85} \input{PICS/cmn-right.tex} &&&&& \def0.8{.85} \input{PICS/cun-left.tex}&\ \ =\ \ & \def0.8{.85} \input{PICS/cun-id.tex}\ \ &\ \ =\ \ & \def0.8{.85} \input{PICS/cun-right.tex} \end{alignat*} \begin{eqnarray*} \Delta\ \ & = & \varsigma \circ \Delta\\[1.5ex] \def0.8{.85} \input{PICS/cmn.tex} & = & \def0.8{.85} \input{PICS/cmn-sym.tex} \end{eqnarray*} \vspace{1\baselineskip} We define \emph{data} as the elements preserved by data services. Such elements are precisely those that can be manipulated using variables \cite{PavlovicD:MSCS97,PavlovicD:Qabs12}. \paragraph{Remark.} If we dualize data services, i.e. reverse the arrows in \eqref{eq:comon}, we get a binary operation and a constant. Transferring the equations from deleting and copying makes the binary operation associative and commutative, and it makes the constant into the unit. The dual of data services is thus the structure of a \emph{commutative monoid}; the structure of data services itself is a commutative \emph{co}\/monoid. \paragraph{Functions} are causations that map data to data. A causation $A\tto f B$ is a function if it is \emph{total}\/ and \emph{single-valued}, which respectively correspond to the two equations in Fig.~\ref{fig-comonoid}. These equations also make $f$ into a comonoid homomorphism from the data service comonoid on $A$ to the data service comonoid on $B$. \begin{figure}[htbp] \begin{center} \def0.8{1.1} \input{PICS/homomorphism} \caption{$f$ is a function if and only if it is a comonoid homomorphism} \label{fig-comonoid} \end{center} \end{figure}
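\paragraph{Example.} As an illustration (a standard instance, added here for concreteness and not needed in the sequel), let ${\cal C}$ have sets as types, with $\otimes$ the cartesian product and $I$ a singleton. Every set $A$ then carries the data services \[ \top(a)\ =\ \ast \qquad \mbox{ and } \qquad \Delta(a)\ =\ <a,a> \] and the two equations of Fig.~\ref{fig-comonoid} spell out to \[ \comp{f}{\top_{B}}\ =\ \top_{A} \qquad \mbox{ and } \qquad \comp{f}{\Delta_{B}}\ =\ \comp{\Delta_{A}}{(f\otimes f)}, \] saying that $f$ outputs exactly one effect for every cause. A fair coin $I\tto{c} \{0,1\}$, viewed as a stochastic process, satisfies the first equation but not the second: copying its output yields two equal bits, whereas running it twice yields two independent bits. So $c$ is not a function, as expected.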
\section{Introduction} The non-archimedean symmetric spaces $\Omega = \Omega^{r}$ introduced by Drinfeld \cite{Drinfeld74} have proven to be of great importance in the theories of modular and automorphic forms and of Shimura varieties, in the analytic uniformization of algebraic varieties, in the representation theory of $\mathrm{GL}(r,K)$, in the local Langlands correspondence, and in several other topics of the arithmetic of non-archimedean local fields $K$. An incomplete list of a few references is \cite{ManinDrinfeld73}, \cite{Mustafin78}, \cite{GerritzenVanDerPut80}, \cite{SchneiderStuhler91}, \cite{Laumon96}, \cite{DeShalit01}. For a complete non-archimedean local field $K$ with finite residue class field $\mathds{F}$ and completed algebraic closure $C$, the space $\Omega$ is defined as the complement of the $K$-rational hyperplanes in $\mathds{P}^{r-1}(C)$. It carries a natural structure as a rigid-analytic space defined over $K$, and is supplied with an action of the group $\mathrm{PGL}(r,K)$. In contrast with the case of real symmetric spaces, it fails to be simply connected (in the étale topology), but has a rich cohomological structure. Its cohomology (for cohomology theories satisfying the usual axioms) has been calculated by Schneider and Stuhler \cite{SchneiderStuhler91}, see also \cite{DeShalit01} and \cite{IovitaSpiess01}. Suppose for the moment that $\boldsymbol{r=2}$. In this case, $\Omega = \Omega^{2}$ has dimension 1, and a coarse combinatorial picture is provided by the Bruhat-Tits tree $\mathcal{T}$ of $\mathrm{PGL}(2,K)$, a $(q+1)$-regular tree, where $q = \#(\mathds{F})$ is the residue class cardinality of $K$. A map $\varphi$ from the set $\mathbf{A}(\mathcal{T})$ of oriented 1-simplices (\enquote{arrows}) of $\mathcal{T}$ to $\mathds{Z}$ that satisfies \begin{enumerate}[label=(\Alph*)] \item $\varphi(e) + \varphi(\overline{e}) = 0$ for each $e \in \mathbf{A}(\mathcal{T})$ with inverse $\overline{e}$, and \item $\sum \varphi(e) = 0$ for each vertex $v$ of $\mathcal{T}$, where $e$ runs through the arrows emanating from $v$, \end{enumerate} is called a ($\mathds{Z}$-valued) \textbf{harmonic cochain} on $\mathcal{T}$. The group $\mathbf{H}(\mathcal{T}, \mathds{Z})$ of all such, upon tensoring with $\mathds{Z}_{\ell}$ ($\ell$ a prime coprime to $q$), yields the first étale cohomology group $H_{\acute{e}t}^{1}(\Omega^{2}, \mathds{Z}_{\ell})$ of $\Omega^{2}$ (\cite{Drinfeld74} Proposition 10.2). In 1981 Marius van der Put (\cite{VanDerPut8182}, see also \cite{FresnelVanDerPut81} I.8.9) established a short exact sequence \begin{equation} \label{Eq.van-der-Put-Short-exact-sequence-of-PGL-2-K-modules} \begin{tikzcd} 1 \arrow[r] & C^{*} \arrow[r] & \mathcal{O}(\Omega^{2})^{*} \ar[r, "P"] & \mathbf{H}(\mathcal{T}, \mathds{Z}) \ar[r] & 0 \tag{0.1} \end{tikzcd} \end{equation} of $\mathrm{PGL}(2,K)$-modules, where $\mathcal{O}(\Omega^{2})$ is the $C$-algebra of holomorphic functions on $\Omega^{2}$ with multiplicative group $\mathcal{O}(\Omega^{2})^{*}$. The van der Put transform $P(u)$ of an invertible function $u$ is a substitute for the logarithmic derivative $u'/u$, and \eqref{Eq.van-der-Put-Short-exact-sequence-of-PGL-2-K-modules} provides the starting point for a study of the \enquote{Riemann surface} $\Gamma \setminus \Omega^{2}$, where $\Gamma \subset \mathrm{PGL}(2,K)$ is a discrete subgroup (\cite{GerritzenVanDerPut80}, \cite{Gekeler96}).
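To have a concrete example of a harmonic cochain at hand (an illustration added here, not used in what follows), fix two distinct ends $\xi \neq \eta$ of $\mathcal{T}$, and say that an end \emph{passes through} the arrow $e = (v,w)$ if the unique ray from $v$ to that end has $w$ as its second vertex. Then \[ \varphi(e) \vcentcolon= \begin{cases} +1, & \text{$\xi$, but not $\eta$, passes through $e$,} \\ -1, & \text{$\eta$, but not $\xi$, passes through $e$,} \\ \hphantom{+}0, & \text{otherwise} \end{cases} \] is a harmonic cochain: (A) holds since an end passes through exactly one of $e$, $\overline{e}$, and (B) holds since, among the $q+1$ arrows emanating from a vertex $v$, exactly one is passed through by $\xi$ and exactly one by $\eta$ (their contributions cancel, also if the two arrows coincide). In the light of the boundary description given later (Theorem \ref{Theorem.Abstraction-to-arbitrary-abelian-groups}), $\varphi$ corresponds to the distribution $\delta_{\xi} - \delta_{\eta}$ of total mass $0$.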
It is the aim of the present paper to develop a higher-rank (i.e., $r>2$) analogue of \eqref{Eq.van-der-Put-Short-exact-sequence-of-PGL-2-K-modules}. In \cite{GekelerTA} it was shown that the absolute value $\lvert u \rvert$ of $u \in \mathcal{O}(\Omega^{r})^{*}$ factors over the building map \[ \lambda \colon \Omega^{r} \longrightarrow \mathcal{BT}^{r} \] and that its logarithm $\log_{q} \lvert u \rvert$ defines an affine map on $\mathcal{BT}^{r}(\mathds{Q})$. Here $\mathcal{BT}^{r}$ is the Bruhat-Tits building of $\mathrm{PGL}(r,K)$ (the higher-dimensional analogue of $\mathcal{BT}^{2} = \mathcal{T}$) and $\mathcal{BT}^{r}(\mathds{Q})$ is the set of $\mathds{Q}$-points of its realization $\mathcal{BT}^{r}(\mathds{R})$. This makes it plausible that $u \mapsto \log_{q} \lvert u \rvert$ gives rise to a construction of $P$ generalizing van der Put's in the case $r=2$. The transform $P(u)$ of $u$ will be a $\mathds{Z}$-valued function on the set of arrows $\mathbf{A}(\mathcal{BT}^{r})$ of $\mathcal{BT}^{r}$ subject to (obvious generalizations of) the conditions (A) and (B) above. Our first result, Proposition \ref{Proposition.Property-C-holds-for-invertible-holomorphic-functions-on-Omega}, is that $P(u)$ satisfies one more relation (condition (C) in Corollary \ref{Corollary.Condition-C}) not visible if $r=2$. We then define $\mathbf{H}(\mathcal{BT}^{r}, \mathds{Z})$ as the group of those $\varphi \colon \mathbf{A}(\mathcal{BT}^{r}) \to \mathds{Z}$ which satisfy (A), (B) and (C). The principal result of the present paper is the fact that the set of these relations is complete: \begin{manualtheorem}{3.10} The map $P \colon \mathcal{O}(\Omega^{r})^{*} \to \mathbf{H}(\mathcal{BT}^{r}, \mathds{Z})$ is surjective, and the van der Put sequence \begin{equation} \label{Eq.van-der-Put-Short-exact-sequence-for-general-r} \begin{tikzcd} 1 \arrow[r] & C^{*} \arrow[r] & \mathcal{O}(\Omega^{r})^{*} \ar[r] & \mathbf{H}(\mathcal{BT}^{r}, \mathds{Z}) \ar[r] & 0 \tag{0.2} \end{tikzcd} \end{equation} is an exact sequence of $\mathrm{PGL}(r,K)$-modules. \end{manualtheorem} The proof requires the construction of certain functions $u = f_{H,H',n}$ whose transforms $P(u)$ have a prescribed behavior on the finite subcomplex $\mathcal{BT}^{r}(n)$ of $\mathcal{BT}^{r}$, and a crucial technical result (Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem}), which solely refers to the geometry of $\mathcal{BT}^{r}$. Still, $\mathbf{H}(\mathcal{BT}^{r}, \mathds{Z})$ is a torsion-free abelian group of complicated appearance. However, as a further consequence of Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem}, we are able to describe it in Theorem \ref{Theorem.Abstraction-to-arbitrary-abelian-groups}: \begin{itemize} \item either as $\mathbf{H}(\mathcal{T}_{v_{0}}, \mathds{Z})$, where $\mathcal{T}_{v_{0}}$ is a subcomplex of dimension 1 of $\mathcal{BT}^{r}$ (in fact, a tree, which for $r=2$ agrees with the Bruhat-Tits tree $\mathcal{T} = \mathcal{BT}^{2}$), and where only conditions (A) and (B) are involved, \item or as the group $\mathbf{D}^{0}(\mathds{P}(V^{\wedge}), \mathds{Z})$ of $\mathds{Z}$-valued distributions of total mass 0 on the compact space $\mathds{P}(V^{\wedge})$ of hyperplanes of the $K$-vector space $V=K^{r}$.
\end{itemize} As the corresponding group $\mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A)$ with coefficients in some ring $A$ depending on the cohomology theory used (e.g., $A = \mathds{Z}_{\ell}$ for étale cohomology, or $A=K$ for de Rham cohomology) has been shown to agree with the first cohomology $H^{1}(\Omega^{r}, A)$ (\cite{SchneiderStuhler91}, Section 3, Theorem 1), we get in particular a natural integral structure on $H^{1}(\Omega^{r}, A)$ along with a concrete arithmetic interpretation. \section{Background} \subsection{} Throughout, $K$ denotes a non-archimedean local field with ring $O$ of integers, a fixed uniformizer $\pi$, and finite residue class field $O/(\pi) = \mathds{F} = \mathds{F}_{q}$ of cardinality $q$. Hence $K$ is a finite extension of either a $p$-adic field $\mathds{Q}_{p}$ or a Laurent series field $\mathds{F}_{p} ((X))$. We normalize its absolute value $\lvert \cdot \rvert$ by $\lvert \pi \rvert = q^{-1}$, and let $C = \widehat{\overline{K}}$ be its completed algebraic closure with respect to the unique extension of $\lvert \cdot \rvert$ to $\overline{K}$. Further, $\log \colon C^{*} \to \mathds{Q}$ is the map $z \mapsto \log_{q} \lvert z \rvert$. \subsection{} Given a natural number $r \geq 2$, the Drinfeld symmetric space $\Omega = \Omega^{r}$ of dimension $r-1$ is the complement $\Omega = \mathds{P}^{r-1} \setminus \bigcup H$ of the $K$-rational hyperplanes $H$ in projective space $\mathds{P}^{r-1}$. Hence the set of $C$-valued points of $\Omega$ (for which we briefly write $\Omega$) is \begin{align*} \Omega = \{ (\omega_{1} : \ldots : \omega_{r}) \in \mathds{P}^{r-1}(C) \mid \text{the $\omega_{i}$ are $K$-linearly independent}\}. \end{align*} If not indicated otherwise, we always suppose that projective coordinates \mbox{$(\omega_{1} : \ldots : \omega_{r})$} are \textbf{unimodular}, that is, $\max_{i} \lvert \omega_{i} \rvert = 1$. The set $\Omega$ carries a natural structure as a rigid-analytic space defined over $K$ (see \cite{Drinfeld74}, \cite{DeligneHusemoller87}, \cite{SchneiderStuhler91}); in fact, it is an admissible open subspace of $\mathds{P}^{r-1}$, and even a Stein domain (\cite{SchneiderStuhler91}, Section 1, Proposition 14; see \cite{Kiehl67} for the notion of non-archimedean Stein domain). \subsection{} Let $G$ be the group scheme $\mathrm{GL}(r)$ with center $Z$; hence $\mathrm{G}(K) = \mathrm{GL}(r,K)$, $Z(K) \cong K^{*}$, etc. The Bruhat-Tits building \cite{BruhatTits72} $\mathcal{BT} = \mathcal{BT}^{r}$ of $\mathrm{G}(K)/Z(K) = \mathrm{PGL}(r,K)$ is a contractible simplicial complex with set of vertices \begin{equation} \mathbf{V}(\mathcal{BT}) = \{ [L] \mid L \text{ an $O$-lattice in $V$}\}, \end{equation} where $L$ runs through the set of $O$-lattices in the $K$-vector space $V = K^{r}$ and $[L]$ is the similarity class of $L$. (An \textbf{$\boldsymbol{O}$-lattice} is a free $O$-submodule of rank $r$ of $V$; two such, $L$ and $L'$, are \textbf{similar} if there exists $0 \neq c \in K$ such that $L' = cL$.) The classes $[L_{0}], \dots, [L_{s}]$ form an $s$-simplex if and only if they are represented by lattices $L_{i}$ such that \begin{equation} L_{0} \supsetneq L_{1} \supsetneq \dots \supsetneq L_{s} \supsetneq \pi L_{0}. \end{equation} The \textbf{combinatorial distance} $d(v,v')$ of two vertices $v,v' \in \mathbf{V}(\mathcal{BT})$ is the length of a shortest path connecting them in the 1-skeleton of $\mathcal{BT}$.
It is easily verified that \begin{equation} d(v,v') = \min \left\{ n \, \middle\vert \, \begin{array}{l} \text{$\exists$ representatives $L,L'$ for $v,v'$} \\ \text{such that $L \supset L' \supset \pi^{n} L$} \end{array} \right\}. \end{equation} The \textbf{star} $\mathrm{st}(v)$ of $v \in \mathbf{V}(\mathcal{BT})$ will always denote the full subcomplex of $\mathcal{BT}$ with set of vertices \begin{equation} \label{Eq.Ster-of-vertices} \mathbf{V}(\mathrm{st}(v)) = \{ w \in \mathbf{V}(\mathcal{BT}) \mid d(v,w) \leq 1 \}. \end{equation} We regard $V$ as a space of row vectors, on which $\mathrm{G}(K)$ acts as a matrix group from the right. Hence $\mathrm{G}(K)$ acts also from the right on $\mathcal{BT}$. If the syntax requires a left action, we shift this action to the left by the usual formula $\gamma x \vcentcolon= x\gamma^{-1}$. \subsection{} The relationship between $\Omega$ and $\mathcal{BT}$ is as follows: By the Goldman-Iwahori theorem \cite{GoldmanIwahori63}, the realization $\mathcal{BT}(\mathds{R})$ of $\mathcal{BT}$ is in a natural one-to-one correspondence with the set of similarity classes of real-valued non-archimedean norms on $V$, where a vertex $v = [L] \in \mathbf{V}(\mathcal{BT}) = \mathcal{BT}(\mathds{Z})$ corresponds to the class of a norm with unit ball $L \subset V$. Now the \textbf{building map} \begin{equation} \begin{split} \lambda \colon \Omega &\longrightarrow \mathcal{BT}(\mathds{R}) \\ \boldsymbol{\omega} = (\omega_{1}:\ldots:\omega_{r}) &\longmapsto [\nu_{\boldsymbol{\omega}}] \end{split} \end{equation} is well-defined, where the norm $\nu_{\boldsymbol{\omega}}$ maps $\mathbf{x} = (x_{1}, \dots, x_{r}) \in V$ to \[ \nu_{\boldsymbol{\omega}}(\mathbf{x}) = \bigg\lvert \sum_{1 \leq i \leq r} x_{i} \omega_{i} \bigg\rvert, \] and $[\nu_{\boldsymbol{\omega}}]$ is its similarity class. Since the value group is $\lvert C^{*} \rvert = q^{\mathds{Q}}$, $\lambda$ maps to $\mathcal{BT}(\mathds{Q})$, and is in fact onto $\mathcal{BT}(\mathds{Q})$, the set of points of $\mathcal{BT}(\mathds{R})$ with rational barycentric coordinates. $\mathrm{G}(K)$ acts from the left on the set of norms via \begin{equation} \gamma \nu(\mathbf{x}) \vcentcolon= \nu(\mathbf{x}\gamma) \end{equation} for $\mathbf{x} \in V$, a norm $\nu$, and $\gamma \in \mathrm{G}(K)$; the reader may verify that $\lambda$ is $\mathrm{G}(K)$-equivariant, where the action on $\Omega$ is the standard one through left matrix multiplication. The pre-images under $\lambda$ of simplices of $\mathcal{BT}$ yield an admissible covering of $\Omega$, see e.g. \cite{DeShalit01} (6.2) and (6.3). We therefore consider $\mathcal{BT}$ as a combinatorial picture of $\Omega$. We cite the following results from \cite{Gekeler17} and \cite{GekelerTA}. \begin{Theorem}[\cite{GekelerTA} Theorem 2.4] \label{Theorem.Image-of-invertible-holomorphic-function} Let $u$ be an invertible holomorphic function on $\Omega$. Then $\lvert u(\boldsymbol{\omega}) \rvert$ depends only on the image $\lambda(\boldsymbol{\omega})$ of $\boldsymbol{\omega} \in \Omega$ in $\mathcal{BT}(\mathds{Q})$. \end{Theorem} \subsubsection{} \label{subsubsec-Spectral-norm} We thus define the \textbf{spectral norm} $\lVert u \rVert_{x}$ as the common absolute value $\lvert u(\boldsymbol{\omega}) \rvert$ for all $\boldsymbol{\omega} \in \lambda^{-1}(x)$, where $x \in \mathcal{BT}(\mathds{Q})$. \begin{Theorem}[\cite{GekelerTA} Theorem 2.6] \label{Theorem.Invertible-holomorphic-functions-are-affine-on-certain-sets} Let $u$ be an invertible holomorphic function on $\Omega$.
Then $\log u = \log_{q} \lvert u \rvert$ regarded as a function on $\mathcal{BT}(\mathds{Q})$ is affine, that is, interpolates linearly in simplices. \end{Theorem} \subsection{} Let $\mathbf{A}(\mathcal{BT})$ be the set of \textbf{arrows}, i.e., of oriented 1-simplices of $\mathcal{BT}$. For each arrow $e = (v,v') = ([L], [L'])$ we write \begin{multline} o(e) = \text{origin of $e$} \vcentcolon = v, \quad t(e) = \text{terminus of $e$} \vcentcolon = v', \\ \text{and } \mathrm{type}(e) \vcentcolon= \dim_{\mathds{F}}(L/L'), \end{multline} where $L,L'$ are representatives with $L \supset L' \supset \pi L$. Then $1 \leq \mathrm{type}(e) \leq r-1$ and $\mathrm{type}(e) + \mathrm{type}(\overline{e}) = r$, where $\overline{e} = (v',v)$ is $e$ with reverse orientation. We let \begin{equation} \mathbf{A}_{v} = \charfusion[\mathop]{\bigcup}{\cdot}_{1 \leq t \leq r-1} \mathbf{A}_{v,t} \end{equation} be the arrows $e$ with $o(e) = v$, grouped according to their types $t$. For an invertible function $u$ on $\Omega$ and an arrow $e = (v,w)$, define the \textbf{van der Put value} $P(u)(e)$ of $u$ on $e$ as \begin{equation} P(u)(e) = \log_{q} \lVert u \rVert_{w} - \log_{q} \lVert u \rVert_{v} \end{equation} with the spectral norm of \ref{subsubsec-Spectral-norm}. \begin{Proposition}[\cite{Gekeler17}, Proposition 2.9] \label{Proposition.Properties-of-the-van-der-Put-transform} The \textbf{van der Put transform} \begin{align*} P(u) \colon \mathbf{A}(\mathcal{BT}) &\longrightarrow \mathds{Q} \\ e &\longmapsto P(u)(e) \end{align*} of $u$ has in fact values in $\mathds{Z}$ and satisfies \begin{equation} \label{Eq.Property-van-der-Put-transform} \sum_{e \in \mathbf{A}_{v,1}} P(u)(e) = 0 \end{equation} for all $v \in \mathbf{V}(\mathcal{BT})$. Here the sum is over the arrows $e$ with $o(e) = v$ and $\mathrm{type}(e) = 1$. \end{Proposition} \stepcounter{subsubsection} \subsubsection{}\label{subsubsection.Intuition-for-property-of-van-der-Put-transform} For later use, we describe how \eqref{Eq.Property-van-der-Put-transform} comes out. The canonical reduction $\overline{\lambda^{-1}(v)}$ of the affinoid $\lambda^{-1}(v)$ is a variety over the residue class field $\mathds{F} = O/(\pi)$ isomorphic with \[ \Omega_{\mathds{F}} \vcentcolon = \mathds{P}^{r-1}/\mathds{F} \setminus \bigcup \overline{H}, \] where $\overline{H}$ runs through the hyperplanes defined over $\mathds{F}$. Assume $u$ is scaled such that $\lVert u \rVert_{v} = 1$. Its reduction $\overline{u}$ is a rational function on $\overline{\lambda^{-1}(v)}$ without zeroes or poles. The boundary hyperplanes $\overline{H}$ of $\overline{\lambda^{-1}(v)}$ correspond canonically to the elements of $\mathbf{A}_{v,1}$ by $e \mapsto \overline{H}_{e}$, say. Let $m_{e}$ be the vanishing order of $\overline{u}$ along $\overline{H}_{e}$ (negative, if $\overline{u}$ has a pole along $\overline{H}_{e}$) and let $\ell_{e}$ be a linear form on $\mathds{P}^{r-1}/\mathds{F}$ with vanishing set $\overline{H}_{e}$. Then $P(u)(e) = -m_{e}$ and, since $\overline{u}$ up to a multiplicative constant equals $\prod_{e \in \mathbf{A}_{v,1}} \ell_{e}^{m_{e}}$, which must be homogeneous of degree $0$ to yield a rational function on $\mathds{P}^{r-1}/\mathds{F}$, we find \[ \sum_{e \in \mathbf{A}_{v,1}} P(u)(e) = -\sum_{e \in \mathbf{A}_{v,1}} m_{e} = 0.
\] \begin{Remark} \begin{enumerate}[wide=15pt, label=(\roman*)] \item In the case $r=2$, the results \ref{Theorem.Image-of-invertible-holomorphic-function}, \ref{Theorem.Invertible-holomorphic-functions-are-affine-on-certain-sets}, \ref{Proposition.Properties-of-the-van-der-Put-transform} have been known for quite some time: see \cite{VanDerPut8182} and e.g. \cite{FresnelVanDerPut81} I.8.9. For general $r$, they are shown in \cite{Gekeler17} and \cite{GekelerTA} in the framework of these papers, where $\mathrm{char}(K) = \mathrm{char}(\mathds{F}) = p$. However, the proofs make no use of this assumption, and are therefore valid for $\mathrm{char}(K) = 0$, too. \item The three cited results are local in the sense that they do not require $u$ to be a global unit. If, e.g., $u$ is a holomorphic function without zeroes on the affinoid $\lambda^{-1}(x)$ with $x \in \mathcal{BT}(\mathds{Q})$, then $\lvert u(\boldsymbol{\omega}) \rvert$ is constant on $\lambda^{-1}(x)$; if $u$ is invertible on $\lambda^{-1}(\sigma)$ with a closed simplex $\sigma$ of $\mathcal{BT}$, then $\log u$ is affine there, and if $u$ is invertible on $\lambda^{-1}(\mathrm{st}(v))$, where $\mathrm{st}(v)$ is the star of $v \in \mathbf{V}(\mathcal{BT})$ (see \eqref{Eq.Ster-of-vertices}), then $P(u)(e)$ is defined for all $e \in \mathbf{A}_{v}$ and satisfies \eqref{Eq.Property-van-der-Put-transform}. \item It is immediate from definitions that for invertible functions $u$, $u'$ and arrows $e$, \begin{equation} \label{Eq.van-der-put-transform-easy-property-1} P(u)(e) + P(u)(\overline{e}) = 0, \end{equation} and more generally \begin{equation} \label{Eq.van-der-put-transform-easy-property-2} \sum P(u)(e) = 0, \quad \text{if $e$ runs through the arrows of a closed path in $\mathcal{BT}$}, \end{equation} as well as \begin{equation} \label{Eq.van-der-put-transform-easy-property-3} P(uu') = P(u) + P(u'). \end{equation} Hence the van der Put transform $P \colon u \mapsto P(u)$ is a homomorphism from the multiplicative group $\mathcal{O}(\Omega)^{*}$ of invertible holomorphic functions on $\Omega$ to the additive group of maps $\varphi \colon \mathbf{A}(\mathcal{BT}) \to \mathds{Z}$ that satisfy \eqref{Eq.van-der-put-transform-easy-property-1}, \eqref{Eq.van-der-put-transform-easy-property-2} and \eqref{Eq.Property-van-der-Put-transform}. Moreover, for $\gamma \in \mathrm{G}(K)$, \begin{equation} \label{Eq.van-der-put-transform-group-action-property} P(u)(e\gamma) = P(u \circ \gamma^{-1})(e), \end{equation} i.e., $\gamma(P(u)) = P(\gamma u) \vcentcolon = P(u \circ \gamma^{-1})$ holds; whence $P$ is $\mathrm{G}(K)$-equivariant. \end{enumerate} \end{Remark} In Theorem 3.10 we will find exact conditions that characterize the image of $P$. This will yield the exact sequence \eqref{Eq.van-der-Put-Short-exact-sequence-for-general-r} of $\mathrm{G}(K)$-modules that generalizes \eqref{Eq.van-der-Put-Short-exact-sequence-of-PGL-2-K-modules}. \section{Evaluation of \emph{P} on elementary rational functions} \subsection{} Let $U$ be a subspace of $V = K^{r}$ of codimension $t$, where $1 \leq t \leq r-1$. We define the shift toward $U$ on $\mathbf{V}(\mathcal{BT})$ by \begin{equation} \begin{split} \tau_{U} \colon \mathbf{V}(\mathcal{BT}) &\longrightarrow \mathbf{V}(\mathcal{BT}), \\ v = [L] &\longmapsto [L'] \end{split} \end{equation} where $L' = (L \cap U) + \pi L$. Obviously, $e = (v, \tau_{U}(v))$ is a well-defined arrow of type $\mathrm{type}(e) = \mathrm{codim}_{V}(U) = t$. We say that \textbf{$e$ points to $U$}. 
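As a concrete illustration (added only to fix ideas, and not used below): let $r = 2$, $v_{0} = [L_{0}]$ with $L_{0} = O^{2}$, and $U = K \times \{0\}$, of codimension $t = 1$. Then \[ \tau_{U}(v_{0}) = [(L_{0} \cap U) + \pi L_{0}] = [O \oplus \pi O], \] and iterating gives $\tau_{U}^{n}(v_{0}) = [O \oplus \pi^{n} O]$. All the arrows $(\tau_{U}^{n}(v_{0}), \tau_{U}^{n+1}(v_{0}))$ have type 1, and together they form the half-line in the tree $\mathcal{T} = \mathcal{BT}^{2}$ that points to $U$.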
\stepcounter{subsubsection} \stepcounter{equation} \subsubsection{} For a local ring $R$ (in practice: $R=K$, or $O$, or a finite quotient $O_{n} \vcentcolon = O/(\pi^{n})$) and a free $R$-module $F$ of finite rank, let $\mathrm{Gr}_{R,t}(F)$ be the Grassmannian of direct summands $F'$ such that $\mathrm{rank}_{R}(F/F') = t$. Fixing $v = [L] \in \mathbf{V}(\mathcal{BT})$, there is a natural surjective map \begin{equation} \begin{split} \mathrm{Gr}_{K,t}(V) &\longrightarrow \mathbf{A}_{v,t} \\ U &\longmapsto (v, \tau_{U}(v)) \end{split} \end{equation} and a canonical bijection \begin{equation} \label{Eq.Canonical-bijection-of-Avt-and-Gr-FtLpiL} \begin{tikzcd} \mathbf{A}_{v,t} \ar[r, "\cong"] & \mathrm{Gr}_{\mathds{F}, t}(L/\pi L) \end{tikzcd} \end{equation} given by $e = (v,w) = ([L],[M]) \mapsto \overline{M} \vcentcolon = M/\pi L$, where $L \supset M \supset \pi L$. We denote the image of $e$ by $\overline{M}_{e}$ and the pre-image of $\overline{M}$ in $\mathbf{A}_{v,t}$ by $e_{\overline{M}}$. \stepcounter{subsubsection} \stepcounter{subsubsection} \subsubsection{} For two arrows $e=e_{\overline{M}}$ and $e' = e_{\overline{M}'}$ with the same origin, we write $e \prec e'$ ($e'$ \textbf{dominates} $e$) if and only if $\overline{M} \subset \overline{M}'$. \stepcounter{equation} \stepcounter{equation} \subsubsection{} \label{subsubsection.Identification-of-arrows-with-special-Grassmannians} Fix $n \in \mathds{N}$ and let $O_{n}$ be the ring $O/(\pi^{n})$. Then, as a generalization of the above, $U \mapsto (v, \tau_{U}(v), \dots, \tau_{U}^{n}(v))$ is surjective from $\mathrm{Gr}_{K,t}(V)$ onto the set $\mathbf{A}_{v,t,n}$ of paths of length $n$ in $\mathcal{BT}$ which emanate from $v$, are composed of arrows of type $t$, and whose endpoints $w$ have distance $d(v,w) = n$ (e.g., $\mathbf{A}_{v,t,1} = \mathbf{A}_{v,t}$). The set $\mathbf{A}_{v,t,n}$ corresponds one-to-one to $\mathrm{Gr}_{O_{n}, t}(L/\pi^{n}L)$, where the composite map from $\mathrm{Gr}_{K,t}(V)$ to $\mathrm{Gr}_{O_{n}, t}(L/\pi^{n}L)$ is given by $U \mapsto ( (L\cap U) + \pi^{n} L)/\pi^{n} L$. This yields in the limit the canonical bijections \begin{equation} \begin{tikzcd} \mathrm{Gr}_{K,t}(V) \ar[r, "\cong"] & \varprojlim\limits_{n} \mathbf{A}_{v,t,n} = \varprojlim\limits_{n} \mathrm{Gr}_{O_{n},t} (L/\pi^{n} L) = \mathrm{Gr}_{O,t}(L), \end{tikzcd} \end{equation} whose composition is simply $U \mapsto U \cap L$. Let $e$ be an arrow of type $t$. Then \begin{equation} \label{Eq.Special-grassmannian-is-compact-and-open-in-Gr-KtV} \mathrm{Gr}_{K,t}(e) \vcentcolon= \{ U \in \mathrm{Gr}_{K,t}(V) \mid e \text{ points to } U \} \end{equation} is compact and open in the compact space $\mathrm{Gr}_{K,t}(V)$, and it follows from the considerations above that the set of all $\mathrm{Gr}_{K,t}(e)$, where $v,t$ are fixed and $e$ belongs to $\mathbf{A}_{v,t,n}$ for some $n \in \mathds{N}$, forms a basis for the topology on $\mathrm{Gr}_{K,t}(V)$. \subsection{} Given a hyperplane $H$ in $V$, we let $\ell_{H} \colon V \to K$ be a linear form with kernel $H$. We denote by the same symbol its extension $\ell_{H} \colon V \otimes_{K} C = C^{r} \to C$. The quotients \begin{equation} \ell_{H,H'} \vcentcolon = \ell_{H}/\ell_{H'} \end{equation} of two such are rational functions on $\mathds{P}^{r-1}(C)$ without zeroes or poles on $\Omega \hookrightarrow \mathds{P}^{r-1}(C)$.
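For instance (a small illustration, not needed later), take $r = 2$, $H = \mathrm{ker}(\mathbf{x} \mapsto x_{2})$ and $H' = \mathrm{ker}(\mathbf{x} \mapsto x_{1})$. Then \[ \ell_{H,H'}(\omega_{1} : \omega_{2}) = \omega_{2}/\omega_{1}, \] the standard affine coordinate on $\mathds{P}^{1}$; its zero $(1:0)$ and its pole $(0:1)$ are $K$-rational and hence lie outside $\Omega^{2}$.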
Note that $\ell_{H}$ is determined up to multiplication by a non-zero scalar in $K$; hence $P(\ell_{H,H'})$ depends only on $H$ and $H'$, but not on the scaling of $\ell_{H}$ and $\ell_{H'}$. Our first task will be to describe $P(\ell_{H,H'})$. \subsection{} We start with some local considerations around the vertex $v_{0} = [L_{0}]$, where $L_{0}$ is the standard lattice $O^{r}$ in $V$. Let us first recall the easily verified fact (where the unimodularity normalization of $\boldsymbol{\omega} \in \Omega$ is used): \begin{align} \lambda^{-1}(v_{0}) &= \{ \boldsymbol{\omega} \in \Omega \mid \nu_{\boldsymbol{\omega}} \text{ has unit ball } L_{0} \} \\ &= \{ \boldsymbol{\omega} \in \Omega \mid \text{the $\omega_{i}$ are orthogonal and $\lvert \omega_{i} \rvert = 1$ for $1 \leq i \leq r$} \}. \nonumber \end{align} ($z_{1}, \dots, z_{r} \in C$ are \textbf{orthogonal} if and only if $\lvert \sum_{1 \leq i \leq r} a_{i} z_{i} \rvert = \max_{i} \lvert a_{i} z_{i} \rvert$ for arbitrary coefficients $a_{i} \in K$.) Hence the canonical reduction of $\lambda^{-1}(v_{0})$ equals \begin{equation} \label{Eq.Canonical-reduction-of-preimage-of-v0-under-lambda} \overline{\lambda^{-1}(v_{0})} = \mathds{P}^{r-1}/\mathds{F} \setminus \bigcup \overline{H}, \end{equation} where $\overline{H}$ runs through the hyperplanes defined over $O/(\pi) = \mathds{F}$. \subsection{} Write $\langle \cdot , \cdot \rangle$ for the standard bilinear form on $V$ given by \[ \langle \mathbf{x}', \mathbf{x} \rangle = \sum_{1 \leq i \leq r} x_{i}'x_{i}, \] which we extend to a form $\langle \cdot, \cdot \rangle$ on $C^{r}$. Each hyperplane $H$ of $V$ is given as the kernel of a linear form \begin{equation} \ell_{H} = \ell_{\mathbf{y}} \colon \mathbf{x} \longmapsto \langle \mathbf{y}, \mathbf{x} \rangle \end{equation} with some $\mathbf{y} \in L_{0} - \pi L_{0}$. The arrow $(v_{0}, \tau_{H}(v_{0})) \in \mathbf{A}_{v_{0},1}$ equals $e_{\overline{H}}$ with \[ \overline{H} = ( (L_{0} \cap H) + \pi L_{0})/\pi L_{0} = ( (L_{0} \cap \mathrm{ker}(\ell_{\mathbf{y}})) + \pi L_{0})/\pi L_{0}. \] Two such vectors $\mathbf{y}, \mathbf{y}'$ give rise to the same $e_{\overline{H}}$ if and only if $\mathbf{y}' \equiv c \cdot \mathbf{y} \pmod{\pi}$ with some unit $c \in O^{*}$. More generally, $\mathbf{y}$ and $\mathbf{y}'$ give rise to the same path $(v_{0}, \tau_{H}(v_{0}), \dots, \tau_{H}^{n}(v_{0})) \in \mathbf{A}_{v_{0},1,n}$ if and only if \begin{equation} \label{Eq.n-equivalence-of-vectors} \mathbf{y}' \equiv c \cdot \mathbf{y} \pmod{\pi^{n}} \end{equation} with $c \in O^{*}$. In this case we call $\mathbf{y}$ and $\mathbf{y}'$ \textbf{$\boldsymbol{n}$-equivalent}; the respective equivalence classes are called the \textbf{$\boldsymbol{n}$-classes} of $\mathbf{y}, \mathbf{y}'$ for short. \subsection{} \label{subsection.Rational-function-corresponding-to-two-hyperplanes} Let now hyperplanes $H, H'$ of $V$ be given by $\mathbf{y}, \mathbf{y}'$ as above. The function $\ell_{H,H'} = \ell_{\mathbf{y}}/ \ell_{\mathbf{y}'}$ has constant absolute value 1 on $\lambda^{-1}(v_{0})$ and therefore, by reduction, gives a rational function $\overline{\ell}_{H,H'}$ on $\overline{\lambda^{-1}(v_{0})} \hookrightarrow \mathds{P}^{r-1}/\mathds{F}$. Put \[ \overline{H} = ( (L_{0} \cap H) + \pi L_{0}) / \pi L_{0}, \] and ditto $\overline{H}'$. By definition, each of them is an $\mathds{F}$-subspace of $L_{0}/\pi L_{0} \overset{\cong}{\rightarrow} \mathds{F}^{r}$.
Abusing language, we denote by the same symbol the corresponding $\mathds{F}$-rational linear subvariety of $\mathds{P}^{r-1}/\mathds{F}$ that appears e.g. in \eqref{Eq.Canonical-reduction-of-preimage-of-v0-under-lambda}. Suppose that $\overline{H}$ differs from $\overline{H}'$. Then $\overline{\ell}_{H,H'}$ has vanishing order 1 along $\overline{H}$, vanishing order $-1$ along $\overline{H}'$, and vanishing order 0 along the other hyperplanes in the boundary of $\overline{\lambda^{-1}(v_{0})}$ (see \eqref{Eq.Canonical-reduction-of-preimage-of-v0-under-lambda}). If, however, $\overline{H} = \overline{H}'$, then $\overline{\ell}_{H,H'}$ has neither zeroes nor poles along the boundary (and is therefore constant). According to the recipe discussed in \ref{subsubsection.Intuition-for-property-of-van-der-Put-transform}, we find the following description. \begin{Proposition} \label{Proposition.van-der-Put-transform-on-special-rational-function-evaluated-on-special-arrow-of-type-1} Let $e$ be an arrow in $\mathbf{A}_{v_{0},1}$. Then \[ P(\ell_{H,H'})(e) = \begin{cases} -1, &e=(v_{0}, \tau_{H}(v_{0})) \neq (v_{0}, \tau_{H'}(v_{0})), \\ +1, &e=(v_{0}, \tau_{H'}(v_{0})) \neq (v_{0}, \tau_{H}(v_{0})), \\ \hphantom{+}0, &\text{otherwise.} \end{cases} \] ~\hfill{$\square$} \end{Proposition} Formula \eqref{Eq.van-der-put-transform-group-action-property} implies \begin{equation} P(\ell_{H,H'})(e\gamma) = P(\ell_{H\gamma^{-1}, H'\gamma^{-1}})(e) \end{equation} for arrows $e$ and $\gamma \in \mathrm{G}(K)$. As $\mathrm{G}(K)$ acts transitively on $\mathbf{V}(\mathcal{BT})$, we may transfer \ref{Proposition.van-der-Put-transform-on-special-rational-function-evaluated-on-special-arrow-of-type-1} to arbitrary arrows of type 1, and thus get: \begin{Corollary} \label{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1} Let $e \in \mathbf{A}_{v,1}$ be an arrow of type 1 with arbitrary origin $v \in \mathbf{V}(\mathcal{BT})$. Write $e_{H}$ (resp. $e_{H'}$) for the arrow $(v, \tau_{H}(v))$ (resp. $(v, \tau_{H'}(v))$). Then \[ P(\ell_{H,H'})(e) = \begin{cases} -1, &e=e_{H} \neq e_{H'}, \\ +1, &e=e_{H'} \neq e_{H}, \\ \hphantom{+}0, &\text{otherwise.}\end{cases} \] ~\hfill $\square$ \end{Corollary} Next, we deal with arrows of arbitrary type. \begin{Proposition} \label{Proposition.van-der-Put-transform-of-special-rational-function-evaluated-on-abitrary-arrow} Given hyperplanes $H,H'$ of $V$ and an arrow $e$ of $\mathcal{BT}$ with origin $v \in \mathbf{V}(\mathcal{BT})$, let $e_{H}$ (resp. $e_{H'}$) be the arrow with origin $v$ pointing to $H$ (resp. to $H'$). The transform $P(\ell_{H,H'})$ evaluates on $e$ as follows: \[ P(\ell_{H,H'})(e) = \begin{cases} -1, &e \prec e_{H}, e \nprec e_{H'}, \\ +1, &e \prec e_{H'}, e \nprec e_{H}, \\ \hphantom{+}0, &\text{otherwise.} \end{cases} \] \end{Proposition} \begin{proof} Let $L$ be a lattice with $[L] = v$ and $e = e_{\overline{M}}$, where $\overline{M}$ is a subspace of $L/\pi L$ of codimension $t = \mathrm{type}(e)$. Without restriction, $t \geq 2$ (the case $t=1$ being Corollary \ref{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1}). Suppose that $e \prec e_{H}$, i.e., \[ \overline{M} \subset \overline{H} = ( (L\cap H) + \pi L)/\pi L \subset L/ \pi L. \] Let $\overline{M}_{0} = L/ \pi L \supsetneq \overline{M}_{1} = \overline{H} \supsetneq \dots \supsetneq \overline{M}_{t} = \overline{M}$ be a complete flag connecting $L/\pi L$ to $\overline{M}$, where $\mathrm{codim}_{L/ \pi L}(\overline{M}_{i}) = i$ for $0 \leq i \leq t$.
It corresponds to a path $(v_{0}, v_{1}, \dots, v_{t})$ in $\mathcal{BT}$, where $v_{0} = v = [L]$, $v_{t} = t(e_{\overline{M}})$, and all the arrows $e_{1} = (v_{0}, v_{1}), \dots, e_{t} = (v_{t-1}, v_{t})$ are of type 1. As $\{v_{0}, \dots, v_{t}\}$ is a $t$-simplex, $d(v_{0}, v_{i}) = 1$ for $1 \leq i \leq t$, and therefore no $e_{i}$ different from $e_{1} = e_{H}$ points to $H$. Suppose that moreover $e \nprec e_{H'}$, that is, \[ \overline{M} \not\subset ( (L\cap H') + \pi L) / \pi L. \] Then none of the $e_{i}$ ($1 \leq i \leq t$) points to $H'$, so \[ P(\ell_{H,H'})(e) = \sum_{1 \leq i \leq t} P(\ell_{H,H'})(e_{i}) = P(\ell_{H,H'})(e_{1}) = -1 \] by \eqref{Eq.van-der-put-transform-easy-property-2} and Corollary \ref{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1}. If $e \prec e_{H'} \neq e_{H}$, then we can arrange the flag $\overline{M}_{0} \supsetneq \dots \supsetneq \overline{M}_{t}$ such that as before $e_{1}$ points to $H$, $e_{2}$ points to $H'$, and no $e_{i}$ ($3 \leq i \leq t$) points to $H$ or $H'$. In this case \[ P(\ell_{H,H'})(e) = P(\ell_{H,H'})(e_{1}) + P(\ell_{H,H'})(e_{2}) = -1 + 1 = 0. \] If $e \prec e_{H} = e_{H'}$, then \[ P(\ell_{H,H'})(e) = P(\ell_{H,H'})(e_{1}) = 0 \quad \text{by \ref{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1}.} \] If neither $e \prec e_{H}$ nor $e \prec e_{H'}$, none of the arrows $e_{i}$ ($1 \leq i \leq t$) corresponding to a flag $\overline{M}_{0} = L/\pi L \supsetneq \dots \supsetneq \overline{M}_{t} = \overline{M}$ points to $H$ or to $H'$, and so $P(\ell_{H,H'})(e) = 0$ results. The case $e \prec e_{H'}$, $e \nprec e_{H}$ comes out by symmetry. \end{proof} \begin{Corollary} \label{Corollary.Condition-C} Let $H_{1}, \dots, H_{n}$ be finitely many hyperplanes of $V$ with corresponding linear forms $\ell_{i} = \ell_{H_{i}}$, $\mathrm{ker}(\ell_{i}) = H_{i}$, and multiplicities $m_{i} \in \mathds{Z}$ such that $\sum_{1 \leq i \leq n} m_{i} = 0$. The function \[ u \vcentcolon = \prod_{1 \leq i \leq n} \ell_{i}^{m_{i}} \] is a unit on $\Omega$, whose van der Put transform $P(u)$ satisfies the condition: \begin{enumerate} \item[$\mathrm{(C)}$] For each arrow $e \in \mathbf{A}(\mathcal{BT})$ with $o(e) = v \in \mathbf{V}(\mathcal{BT})$, \[ P(u)(e) = \sum_{\substack{e' \in \mathbf{A}_{v,1} \\ e \prec e'}} P(u)(e'). \] \end{enumerate} \end{Corollary} \begin{proof} (C) is satisfied for $u = \ell_{H,H'} = \ell_{H}/\ell_{H'}$ by \ref{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1} and \ref{Proposition.van-der-Put-transform-of-special-rational-function-evaluated-on-abitrary-arrow}. The general case follows as condition (C) is linear (it holds for $u \cdot u'$ if it holds for $u$ and $u'$) and $\prod \ell_{i}^{m_{i}}$ is a product of functions of type $\ell_{H,H'}$. \end{proof} \section{The van der Put sequence} \begin{Proposition} \label{Proposition.Property-C-holds-for-invertible-holomorphic-functions-on-Omega} Let $u$ be an invertible holomorphic function on $\Omega$. Then its van der Put transform $P(u)$ satisfies condition $\mathrm{(C)}$ from Corollary \ref{Corollary.Condition-C}. \end{Proposition} \begin{proof} Again by \eqref{Eq.van-der-put-transform-group-action-property} we may suppose that the origin $o(e)$ of the arrow in question equals $v_{0} = [L_{0}]$. So $e = e_{\overline{M}}$ with some non-trivial $\mathds{F}$-subspace $\overline{M}$ of $L_{0}/\pi L_{0}$.
As in \ref{subsection.Rational-function-corresponding-to-two-hyperplanes} we use the same letter $\overline{M}$ for the corresponding linear subvariety of $\mathds{P}^{r-1}/\mathds{F}$ of codimension $t = \mathrm{type}(e) = \mathrm{codim}_{L_{0}/\pi L_{0}}(\overline{M})$. Multiplying $u$ by suitable functions of type $\ell_{H,H'}$ (which does not alter the (non-)validity of (C) for $u$), we may assume that $P(u)(e') = 0$ for all $e' \in \mathbf{A}_{v_{0},1}$ dominating $e$. Then we must show that $P(u)(e) = 0$, too. Let $u$ be normalized such that $\lVert u \rVert_{v_{0}} = 1$, and let $\overline{u}$ be its reduction as a rational function on $\mathds{P}^{r-1}/\mathds{F}$, see \eqref{Eq.Canonical-reduction-of-preimage-of-v0-under-lambda}. If $P(u)(e) < 0$ then $\lvert u \rvert$ decays along $e = e_{\overline{M}}$ and $\overline{u}$ vanishes along $\overline{M}$. Correspondingly, if $P(u)(e) > 0$ then $(\overline{u})^{-1} = (\overline{u^{-1}})$ vanishes along $\overline{M}$. Hence it suffices to show that, under our assumptions, $\overline{u}$ restricts to a well-defined rational function on $\overline{M}$, i.e., $\overline{M}$ is neither contained in the vanishing locus $V(\overline{u})$ nor in $V(\overline{u}^{-1})$. But the latter is obvious: With a suitable constant $c \neq 0$ we have \[ \overline{u} = c \cdot \prod \ell_{\overline{H}}^{m(\overline{H})}, \] where $\overline{H}$ runs through the boundary components of $\overline{\lambda^{-1}(v_{0})}$ as in \eqref{Eq.Canonical-reduction-of-preimage-of-v0-under-lambda}, $\ell_{\overline{H}}$ is a linear form vanishing on $\overline{H}$, $\sum m(\overline{H}) = 0$, and $m(\overline{H}) = -P(u)(e_{\overline{H}}) = 0$ if $\overline{M} \subset \overline{H}$. Hence neither the rational function $\overline{u}$ nor its reciprocal vanishes identically on $\overline{M}$. \end{proof} \subsection{}\label{subsection.Conditions-for-varphi-in-A(BT)} The proposition motivates the following definition. Let $A$ be any additively written abelian group. The group of \textbf{$\boldsymbol{A}$-valued harmonic 1-cochains} $\mathbf{H}(\mathcal{BT}, A)$ is the group of maps $\varphi \colon \mathbf{A}(\mathcal{BT}) \to A$ that satisfy \begin{enumerate}[label=(\Alph*)] \item $\sum \varphi(e) = 0$, whenever $e$ ranges through the arrows of a closed path in $\mathcal{BT}$; \item for each type $t$, $1 \leq t \leq r-1$, and each $v \in \mathbf{V}(\mathcal{BT})$, the condition \par\noindent\begin{minipage}{\linewidth} \begin{equation} \sum_{e \in \mathbf{A}_{v,t}} \varphi(e) = 0 \quad \text{holds}; \tag{$\mathrm{B}_{t}$} \end{equation} \end{minipage} \item for each $v \in \mathbf{V}(\mathcal{BT})$ and each $e \in \mathbf{A}_{v}$, \[ \sum_{e' \in \mathbf{A}_{v,1}, ~e \prec e'} \varphi(e') = \varphi(e). \] \end{enumerate} \begin{Remark} \begin{enumerate}[wide=15pt, label=(\roman*)] \item In the case where the coefficient group $A$ equals $\mathds{Z}$, condition (A) is \eqref{Eq.van-der-put-transform-easy-property-2}, $(\mathrm{B}_{1})$ is \eqref{Eq.Property-van-der-Put-transform}, and (C) is the condition dealt with in \ref{Corollary.Condition-C} and \ref{Proposition.Property-C-holds-for-invertible-holomorphic-functions-on-Omega}. (A) in particular implies that $\varphi$ is alternating, i.e., $\varphi(\overline{e}) = -\varphi(e)$.
Further, $(\mathrm{B}_{1})$ together with (C) implies $(\mathrm{B}_{t})$ for all types $t$, as \[ \sum_{e \in \mathbf{A}_{v,t}} \varphi(e) = \sum_{e' \in \mathbf{A}_{v,1}} \varphi(e') \# \{ e \in \mathbf{A}_{v,t} \mid e \prec e' \}, \] where $\# \{\, \cdots \}$, the cardinality of some finite Grassmannian, is independent of $e'$. \item Note that the current $\mathbf{H}(\mathcal{BT}, \mathds{Z})$ differs from the group defined in \cite{Gekeler17}, as condition (C) is absent there. \item Proposition \ref{Proposition.Property-C-holds-for-invertible-holomorphic-functions-on-Omega} together with the preceding considerations shows that \begin{align*} P \colon \mathcal{O}(\Omega)^{*} &\longrightarrow \mathbf{H}(\mathcal{BT}, \mathds{Z}) \\ u &\longmapsto P(u) \end{align*} is well-defined. Its kernel consists of the invertible holomorphic functions on $\Omega$ with constant absolute value, which are exactly the constants $C^{*}$, as $\Omega$ is a Stein domain. Hence, by \eqref{Eq.van-der-put-transform-group-action-property}, we have the exact sequence of $\mathrm{G}(K)$-modules \begin{equation*} \begin{tikzcd} 1 \ar[r] &C^{*} \ar[r] &\mathcal{O}(\Omega)^{*} \ar[r, "P"] &\mathbf{H}(\mathcal{BT}, \mathds{Z}). \end{tikzcd} \end{equation*} In fact, we will show that $P$ is also surjective. \item Beyond the natural coefficient domains $A=\mathds{Z}$ or $\mathds{Q}$ for $\mathbf{H}(\mathcal{BT}, A)$, at least the torsion groups $A = \mathds{Z}/(N)$ deserve interest. For example, in the case $r=2$ and $\mathrm{char}(C) = \mathrm{char}(\mathds{F}) = p$, the invariants $\mathbf{H}(\mathcal{BT}, \mathds{F}_{p})^{\Gamma}$ under an arithmetic subgroup $\Gamma \subset \mathrm{G}(K)$ differ in general from $\mathbf{H}(\mathcal{BT}, \mathds{Z})^{\Gamma} \otimes \mathds{F}_{p}$, see \cite{Gekeler96} Section 6. The coefficient rings $A = \mathds{Z}_{\ell}$ ($\ell$ a prime number) and $A=K$ come into play by relating $\mathbf{H}(\mathcal{BT}, \mathds{Z})$ with the first cohomology of $\Omega$, see Remark 5.5. \end{enumerate} \end{Remark} \subsection{} The strategy of proof of the surjectivity of $P$ will be to approximate a given $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ by elements $P(u)$, where $u$ is a function $\ell_{H,H'}$, or a relative of it. Given two hyperplanes $H \neq H'$ of $V$ and $n \in \mathds{N}_{0} = \{0,1,2,\dots\}$, define \begin{equation} \label{Eq.Definition-of-relative-to-ell-H-H'} f_{H,H',n} \vcentcolon = 1 + \pi^{n} \ell_{H,H'}. \end{equation} Here $\ell_{H,H'} = \ell_{H}/\ell_{H'} = \ell_{\mathbf{y}}/\ell_{\mathbf{y}'}$, where $\mathbf{y}, \mathbf{y}' \in L_{0} - \pi L_{0}$, $H = \mathrm{ker}(\ell_{\mathbf{y}})$, $H' = \mathrm{ker}(\ell_{\mathbf{y}'})$. Like $\ell_{H,H'}$, $f_{H,H',n}$ is a unit on $\Omega$; indeed, $f_{H,H',n} = \ell_{\mathbf{y}' + \pi^{n}\mathbf{y}}/\ell_{\mathbf{y}'}$ is itself a quotient of $K$-rational linear forms. We denote by \begin{equation} \mathcal{BT}(n) \subset \mathcal{BT} \end{equation} the full subcomplex with vertices $\mathbf{V}(\mathcal{BT}(n)) = \{ v \in \mathbf{V}(\mathcal{BT}) \mid d(v_{0}, v) \leq n\}$. Hence $\mathcal{BT}(0) = \{v_{0}\}$, $\mathcal{BT}(1) = \mathrm{st}(v_{0})$, etc. Further, \begin{equation} \Omega(n) \vcentcolon= \lambda^{-1}(\mathcal{BT}(n)). \end{equation} Then $\Omega(n)$ is an admissible affinoid subspace of $\Omega$ and $\Omega = \bigcup_{n \geq 0} \Omega(n)$. (In \cite{SchneiderStuhler91} Section 1, Proposition 4, $\Omega(n)$ is called $\overline{\Omega}_{n}$, and a system of affinoid generators is constructed.)
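Before proceeding, a small worked case of \eqref{Eq.Definition-of-relative-to-ell-H-H'} (an illustration only): for $r=2$, $\mathbf{y} = (0,1)$ and $\mathbf{y}' = (1,0)$, we get \[ f_{H,H',n}(\omega_{1} : \omega_{2}) = 1 + \pi^{n}\,\omega_{2}/\omega_{1} = \frac{\omega_{1} + \pi^{n}\omega_{2}}{\omega_{1}} = \ell_{\mathbf{y}''}/\ell_{\mathbf{y}'} \quad \text{with } \mathbf{y}'' = \mathbf{y}' + \pi^{n}\mathbf{y} = (1, \pi^{n}), \] so $f_{H,H',n}$ is again of the shape $\ell_{H'',H'}$, where $\mathbf{y}''$ is $n$-equivalent, but not $(n+1)$-equivalent, with $\mathbf{y}'$. This is the mechanism exploited in \ref{subsection.Description-of-P(f_H_H'_n)-on-n+1-special-arrows} below.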
\begin{Lemma} \label{Lemma.Properties-of-f-H-H'-n-on-Omega-n} For $n \in \mathds{N}_{0}$, the following hold on $\Omega(n)$: \begin{enumerate}[label=$\mathrm{(\roman*)}$] \item $\log \ell_{H,H'} \leq n$; \item $\lvert f_{H,H',n} \rvert = 1$ if $n > 0$. \end{enumerate} \end{Lemma} \begin{proof} \let\qed\relax \begin{enumerate}[wide=15pt, label=(\roman*)] \item By our normalization, $\lvert \ell_{H,H'}(\boldsymbol{\omega}) \rvert = 1$ for $\boldsymbol{\omega} \in \lambda^{-1}(v_{0})$. Then by \ref{Proposition.van-der-Put-transform-of-special-rational-function-evaluated-on-abitrary-arrow}, $\lVert \ell_{H,H'} \rVert_{v} \leq q^{n}$ for $v \in \mathbf{V}(\mathcal{BT})$ whenever $d(v_{0}, v) \leq n$, which gives the assertion. \item $\lvert f_{H,H',n}(\boldsymbol{\omega}) \rvert = \lvert 1 + \pi^{n} \ell_{H,H'}(\boldsymbol{\omega}) \rvert \leq 1$ on $\Omega(n)$ by (i), with equality at least if $\boldsymbol{\omega}$ does not belong to $\lambda^{-1}(v)$, where $v$ is a vertex with $d(v_{0}, v) = n$, since in this case $\log \ell_{H,H'}(\boldsymbol{\omega}) < n$. But the equality must also hold for $\boldsymbol{\omega}$ with $\lambda(\boldsymbol{\omega})$ equal to such a $v$, due to the linear interpolation property \ref{Theorem.Invertible-holomorphic-functions-are-affine-on-certain-sets} of $\log_{q} \lVert f_{H,H',n} \rVert_{x}$ for $x$ belonging to an arrow $e = (v',v)$ with $d(v_{0}, v') = n-1$. \hfill \mbox{$\square$} \end{enumerate} \end{proof} \begin{Definition} A vertex $v \in \mathbf{V}(\mathcal{BT})$ is called \textbf{$\boldsymbol{n}$-special} ($n \in \mathds{N}_{0}$) if there exists a (necessarily uniquely determined) path $(v_{0}, v_{1}, \dots, v_{n} = v) \in \mathbf{A}_{v_{0},1,n}$, i.e., the arrows $e_{i} = (v_{i-1}, v_{i})$, $i = 1,2, \dots, n$ all have type 1, and $d(v_{0}, v) = n$. (By definition, $v_{0}$ is $0$-special.) An arrow $e \in \mathbf{A}(\mathcal{BT})$ is \textbf{$\boldsymbol{n}$-special} $(n \in \mathds{N})$ if $o(e)$ is $(n-1)$-special and $t(e)$ is $n$-special, that is, if it appears as some $e_{n}$ as above. Also, the path $(v_{0}, \dots, v_{n}) = (e_{1}, \dots, e_{n})$ is called \textbf{$\boldsymbol{n}$-special}. An arrow $e$ with $d(v_{0}, o(e)) = n$ is \textbf{inbound} (of level $n$) if it belongs to $\mathcal{BT}(n)$, and \textbf{outbound} otherwise. That is, $e$ is inbound $\Leftrightarrow d(v_{0}, t(e)) \leq n$. \end{Definition} \subsection{} \label{subsection.Description-of-P(f_H_H'_n)-on-n+1-special-arrows} Next, we describe the restriction of $P(f_{H,H',n})$ to $(n+1)$-special arrows $e$. Let $n \in \mathds{N}$, and choose hyperplanes $H,H'$ of $V$, given as $H = \mathrm{ker}(\ell_{\mathbf{y}})$, $H' = \mathrm{ker}(\ell_{\mathbf{y}'})$ as in \eqref{Eq.Definition-of-relative-to-ell-H-H'}. Assume that $\mathbf{y}$ and $\mathbf{y}'$ are not 1-equivalent \eqref{Eq.n-equivalence-of-vectors}, that is, $\tau_{H}(v_{0}) \neq \tau_{H'}(v_{0})$. \begin{enumerate}[wide=15pt, label=(\roman*)] \item According to Corollary \ref{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1}, $\ell_{H,H'} = \ell_{\mathbf{y}}/\ell_{\mathbf{y}'}$ has the property that $\log \ell_{H,H'}$ grows by 1 in each step of the $(n+1)$-special path \begin{equation} \label{Eq.special-n+1-special-path} (v_{0}, v_{1}, \dots, v_{n}, v_{n+1}) = (e_{1}, e_{2}, \dots, e_{n+1}) \end{equation} from $v_{0}$ toward $H'$. Together with \ref{Lemma.Properties-of-f-H-H'-n-on-Omega-n} (ii), this implies that $P(f_{H,H',n})(e_{n+1}) = 1$.
\item On the other hand, again by Corollary \ref{Corollary.van-der-Put-transform-of-special-rational-function-evaluated-on-arbitrary-arrow-of-type-1}, $\log \ell_{H,H'} < n$ on $\lambda^{-1}(v)$ for each $n$-special $v$ different from $v_{n}$. By a variation of the linear interpolation argument in the proof of \ref{Lemma.Properties-of-f-H-H'-n-on-Omega-n} (ii), $P(f_{H,H',n})(e) = 0$ for each $(n+1)$-special arrow $e$ with $o(e) \neq v_{n}$. \item The function $u \vcentcolon= f_{H,H',n} = (\ell_{\mathbf{y}'} + \pi^{n}\ell_{\mathbf{y}})/\ell_{\mathbf{y}'}$ satisfies $\lVert u \rVert_{v_{n}} = 1$. Its reduction $\overline{u}$ as a rational function on the reduction \begin{equation} \label{Eq.Preimage-of-special-vector-under-affinoid-lambda} \overline{\lambda^{-1}(v_{n})} \cong \mathds{P}^{r-1}/\mathds{F} \setminus \bigcup \overline{H} \qquad \text{(see \ref{Eq.Canonical-reduction-of-preimage-of-v0-under-lambda})} \end{equation} of $\lambda^{-1}(v_{n})$ has a simple pole along the hyperplane $\overline{H}_{e_{n+1}}$ of $\mathds{P}^{r-1}/\mathds{F}$ corresponding to the arrow $e_{n+1}$, a simple zero along a unique $\overline{H}_{e}$, where $e = (v_{n}, w)$, and neither zeroes nor poles along the other hyperplanes that appear in \eqref{Eq.Preimage-of-special-vector-under-affinoid-lambda}. The hyperplane $\overline{H}_{e}$ is the vanishing locus in $\mathds{P}^{r-1}/\mathds{F}$ of the reduction of the form $\ell_{\mathbf{y}'} + \pi^{n} \ell_{\mathbf{y}} = \ell_{\mathbf{y}''}$; accordingly, $w = \tau_{H''}(v_{n})$, where $H'' = \mathrm{ker}(\ell_{\mathbf{y}''})$ and \begin{equation} \label{Eq.Description-for-y''} \mathbf{y}'' = \mathbf{y}' + \pi^{n} \mathbf{y}. \end{equation} \item If $\mathbf{y}'$ is fixed and $\mathbf{y}$ runs through the elements of $L_{0} \setminus \pi L_{0}$ not 1-equivalent with $\mathbf{y}'$, then the corresponding $\mathbf{y}''$ are $n$-equivalent but not $(n+1)$-equivalent with $\mathbf{y}'$ (cf. \eqref{Eq.n-equivalence-of-vectors}). In this way we get all the $(n+1)$-classes with this property, that is, all the $(n+1)$-special paths $(e_{1}, e_{2}, \dots, e_{n}, e)$ which agree with the path $(e_{1}, \dots, e_{n}, e_{n+1})$ of \eqref{Eq.special-n+1-special-path} except for the last arrow. We collect what has been shown. \end{enumerate} \begin{Proposition} \begin{enumerate}[label=$\mathrm{(\roman*)}$] \item Let $H, H'$ be two hyperplanes in $V$ with $\tau_{H}(v_{0}) \neq \tau_{H'}(v_{0})$ and $n \in \mathds{N}$. Put $v_{i} \vcentcolon = (\tau_{H'})^{i}(v_{0})$. If $e$ is an $(n+1)$-special arrow then \par\noindent\begin{minipage}{\linewidth} \begin{equation} P(f_{H,H',n})(e) = \begin{cases} +1, &\text{if } e=(v_{n}, v_{n+1}), \\ -1, &\text{if }e =(v_{n}, w), \\ \hphantom{+}0, &\text{otherwise.} \end{cases} \end{equation} \vspace*{2pt} \end{minipage} Here $w = \tau_{H''}(v_{n}) \neq v_{n+1}$, where $H''$ is the hyperplane $\mathrm{ker}(\ell_{\mathbf{y}''})$ with \par\noindent\begin{minipage}{\linewidth} \[ \mathbf{y}'' = \mathbf{y}' + \pi^{n} \mathbf{y} \] \vspace*{-3pt} \end{minipage} as described in \ref{subsection.Description-of-P(f_H_H'_n)-on-n+1-special-arrows}, notably in \eqref{Eq.Description-for-y''}. \item If $H'$ is fixed, each $(n+1)$-special arrow $e \neq (v_{n}, v_{n+1})$ with $o(e) = v_{n}$ occurs, for a suitable choice of $H$, as the arrow $e = (v_{n},w)$ on which $P(f_{H,H',n})$ evaluates to $-1$. \hfill \mbox{$\square$} \end{enumerate} \end{Proposition} The next result, technical in nature, is crucial for the proof of Theorem \ref{Theorem.van-der-Put-sequence-short-exact-sequence-of-GK-modules}.
Its proof is postponed to the next section. \begin{Proposition} \label{Proposition.Crucial-technical-tool-for-main-theorem} Let $n \in \mathds{N}_{0}$ and $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ be such that $\varphi(e) = 0$ for arrows $e$ that either belong to $\mathcal{BT}(n)$ or are $(n+1)$-special. Then $\varphi(e) = 0$ for all arrows $e$ of $\mathcal{BT}(n+1)$. \end{Proposition} Now we are able to show (modulo \ref{Proposition.Crucial-technical-tool-for-main-theorem}) the principal result. \begin{Theorem} \label{Theorem.van-der-Put-sequence-short-exact-sequence-of-GK-modules} The van der Put map $P \colon \mathcal{O}(\Omega)^{*} \to \mathbf{H}(\mathcal{BT}, \mathds{Z})$ is surjective, and so the sequence \begin{equation} \begin{tikzcd} 1 \ar[r] & C^{*} \ar[r] &\mathcal{O}(\Omega)^{*} \ar[r] & \mathbf{H}(\mathcal{BT}, \mathds{Z}) \ar[r] &0 \tag{0.2} \end{tikzcd} \end{equation} is a short exact sequence of $\mathrm{G}(K)$-modules. \end{Theorem} \begin{proof}\let\qed\relax \begin{enumerate}[wide=15pt, label=(\roman*)] \item Let $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ be given. By successively subtracting $P(u_{n})$ from $\varphi$, where $(u_{n})_{n \in \mathds{N}}$ is a suitable sequence of functions in $\mathcal{O}(\Omega)^{*}$ with $u_{n} \to 1$ locally uniformly, we will achieve that \[ \varphi - P\Big( \prod_{1 \leq i \leq n} u_{i} \Big) \equiv 0 \qquad \text{on $\mathcal{BT}(n)$}. \] Then $\varphi = P(u)$, where $u = \lim_{n \to \infty} \prod_{1 \leq i \leq n} u_{i}$ is the limit function. \item From condition $(\mathrm{B}_{1})$ for $\varphi$ and Proposition \ref{Proposition.van-der-Put-transform-on-special-rational-function-evaluated-on-special-arrow-of-type-1} we find a function $u_{1}$, namely a suitable finite product of functions of type $\ell_{H,H'}$, such that $(\varphi - P(u_{1}))(e) = 0$ for each $e \in \mathbf{A}_{v_{0}, 1}$. By condition (C), $\varphi - P(u_{1})$ vanishes on all $e \in \mathbf{A}_{v_{0}}$, and thus by (A) on all $e$ that belong to $\mathcal{BT}(1) = \mathrm{st}(v_{0})$. \item Suppose that $u_{1}, \dots, u_{n} \in \mathcal{O}(\Omega)^{*}$ are constructed $(n \in \mathds{N})$ such that for $1 \leq i \leq n$ \begin{enumerate}[label=(\alph*), itemindent=15pt] \item $P(u_{i}) \equiv 0$ on $\mathcal{BT}(i-1)$, \item $u_{i} \equiv 1 \pmod{\pi^{[(i-1)/2]}}$ on $\mathcal{BT}([(i-1)/2])$; here $[\cdot]$ is the Gauß bracket; \item $\varphi - P(\prod_{1 \leq i \leq n} u_{i}) \equiv 0$ on $\mathcal{BT}(n)$ \end{enumerate} hold. (Condition (a) is empty for $i=1$ and therefore trivially fulfilled.) We are going to construct $u_{n+1}$ such that $u_{1}, \dots, u_{n+1}$ fulfill the conditions on level $n+1$. \item From (c) and $(\mathrm{B}_{1})$ we have for $n$-special vertices $v$ and $\psi \vcentcolon = \varphi - P(\prod_{1 \leq i \leq n} u_{i}) \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$: \[ \sum_{e \in \mathbf{A}_{v,1} \text{ outbound}} \psi(e) = \sum_{e \in \mathbf{A}_{v,1}} \psi(e) = 0. \] \item According to Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem}, we find $u_{n+1}$, viz.\ a suitable product of functions $f_{H,H',n}$, such that \[ \big(\psi - P(u_{n+1})\big)(e) = \bigg( \varphi - P\Big(\prod_{1 \leq i \leq n+1} u_{i} \Big)\bigg)(e) = 0 \] on all $(n+1)$-special arrows $e$.
Furthermore, such a $u_{n+1}$ (like the functions $f_{H,H',n}$, see Lemma \ref{Lemma.Properties-of-f-H-H'-n-on-Omega-n} (ii)) satisfies $P(u_{n+1}) \equiv 0$ on $\mathcal{BT}(n)$, i.e., condition (a), as well as condition (b): $u_{n+1} \equiv 1 \pmod{\pi^{[n/2]}}$ on $\mathcal{BT}([n/2])$. Hence $\varphi - P(\prod_{1 \leq i \leq n+1} u_{i})$ vanishes on arrows which belong to $\mathcal{BT}(n)$ or are $(n+1)$-special. Using Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem}, $\varphi - P(\prod_{1 \leq i \leq n+1} u_{i})$ vanishes on $\mathcal{BT}(n+1)$. That is, conditions (a), (b), (c) hold for $u_{1}, \dots, u_{n+1}$, and we have inductively constructed an infinite sequence $u_{1}, u_{2}, \dots$ with (a), (b) and (c) for all $n$. \item It follows from (b) that the infinite product \[ u = \prod_{i \in \mathds{N}} u_{i} \] is normally convergent on each $\Omega(n)$ and thus defines a holomorphic invertible function $u$ on $\Omega$. Its van der Put transform $P(u)$ restricted to $\mathcal{BT}(n)$ depends only on $u_{1}, \dots, u_{n}$ (by (a)), and thus, by (c), agrees with $\varphi$ restricted to $\mathcal{BT}(n)$. Therefore, $\varphi = P(u)$, and the result is shown. \hfill \mbox{$\square$} \end{enumerate} \end{proof} \section{The group $\mathbf{H}(\mathcal{BT}, \mathds{Z})$} \subsection{} We start with the \begin{proof}[Proof of Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem}] \begin{enumerate}[wide=15pt, label=(\roman*)] \item The requirements of Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem} for $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ on level $n \in \mathds{N}_{0}$ will be labelled by $\mathrm{R}(n)$. \item Suppose that $\mathrm{R}(n)$ holds for $\varphi$. Then $\varphi$ vanishes on all arrows $e \in \mathbf{A}_{v,1}$ whenever $v$ is $n$-special, since such an $e$ is either $(n+1)$-special or belongs to $\mathcal{BT}(n)$. Hence by conditions (C) and (A) of \ref{subsection.Conditions-for-varphi-in-A(BT)}, $\varphi(e) = 0$ whenever $e$ is contiguous with $v$, i.e., if $e$ belongs to $\mathrm{st}(v)$. This shows, in particular, that Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem} holds for $n=0$. \item Let $v \in \mathbf{V}(\mathcal{BT})$ have distance $d(v_{0},v) = n$, but not necessarily be $n$-special. For the same reason as in (ii), $\varphi$ vanishes identically on $\mathrm{st}(v)$ if it vanishes on all outbound arrows $e \in \mathbf{A}_{v,1}$. Hence it suffices to show \begin{equation} \label{Eq.Assertion-for-varphi-in-the-proof-of-Proposition-3-9} \varphi(e) = 0 \quad \text{for outbound arrows $e$ of type 1 and level $n$}. \tag{O} \end{equation} \item For a vertex $v$ with $d(v_{0}, v) = n$, we let $s(v)$ be the distance to the nearest $w \in \mathbf{V}(\mathcal{BT})$ which is $n$-special. We are going to show assertion \eqref{Eq.Assertion-for-varphi-in-the-proof-of-Proposition-3-9} by induction on $s(o(e))$. \item By $\mathrm{R}(n)$, \eqref{Eq.Assertion-for-varphi-in-the-proof-of-Proposition-3-9} holds if $s = s(o(e)) = 0$, i.e., if $o(e)$ is $n$-special. Therefore, suppose that $s > 0$.
By the preceding we are reduced to showing \begin{equation} \label{Eq.Reduced-assertion-for-varphi-in-the-proof-of-Proposition-3-9} \begin{array}{l} \text{Let $e$ be an outbound arrow of type 1, level $n$, and with $s = s(o(e)) > 0$.} \\ \text{Then $e$ belongs to $\mathrm{st}(\tilde{v})$, where $d(v_{0}, \tilde{v}) = n$ and $s(\tilde{v}) < s$.} \end{array} \tag{P} \end{equation} \item We reformulate \eqref{Eq.Reduced-assertion-for-varphi-in-the-proof-of-Proposition-3-9} in lattice terms. Representing $v_{0} = [L_{0}]$ by $L_{0} = O^{r}$, the vertices $v \in \mathbf{V}(\mathcal{BT})$ correspond one-to-one to sublattices $L$ of full rank $r$ which satisfy $L \subset L_{0}$, $L \not\subset \pi L_{0}$. For such a vertex $v$ or its lattice $L$, we let $(n_{1}, n_{2}, \dots, n_{r})$ with $n_{1} \geq n_{2} \geq \dots \geq n_{r} = 0$ be the sequence of elementary divisors (\textbf{sed}) of $L_{0}/L$ ($n_{r} = 0$ as $L \not\subset \pi L_{0}$). That is, \[ L_{0}/L \cong O/(\pi^{n_{1}}) \times \cdots \times O/(\pi^{n_{r}}). \] Then $n_{1} = d(v_{0}, v)$, and $v$ is $n$-special if and only if its sed is $(n,0, \dots, 0)$. \item Let $e = (v,v')$ be given as required for \eqref{Eq.Reduced-assertion-for-varphi-in-the-proof-of-Proposition-3-9}, $v=[L]$, $v' = [L']$, where $\pi^{n+1}L_{0} \subset L' \subset L \subset L_{0}$. Let $(n_{1} = n, n_{2}, \dots, n_{r})$ be the sed of $L_{0}/L$. Then, as $\dim_{\mathds{F}}(L/L') = 1$ and $d(v_{0}, v') = n+1$, $(n_{1}' = n+1, n_{2}, \dots, n_{r})$ is the sed of $L_{0}/L'$. This means that $L_{0}$ has an ordered $O$-basis $\{x_{1}, \dots, x_{r}\}$ such that $\{ \pi^{n+1}x_{1}, \pi^{n_{2}}x_{2}, \dots, \pi^{n_{r}}x_{r}\}$ is a basis of $L'$ and $\{ \pi^{n} x_{1}, \pi^{n_{2}} x_{2}, \dots, \pi^{n_{r}} x_{r}\}$ is a basis of $L$. Let $k$ with $2 \leq k \leq r$ be maximal with $n_{2} = n_{k}$. Let $M$ be the sublattice of $L_{0}$ with basis $\{ \pi^{n}x_{1}, x_{2}, \dots, x_{r}\}$. Then $w = [M]$ is $n$-special and $s(v) = d(v,w) = n_{2}$, which by assumption is positive. Let $\tilde{L}$ be the lattice with basis $\{ \pi^{n}x_{1}, \pi^{n_{2}-1}x_{2}, \dots, \pi^{n_{k}-1}x_{k}, \pi^{n_{k+1}}x_{k+1}, \dots, \pi^{n_{r}}x_{r}\}$. The vertex $\tilde{v} \vcentcolon = [\tilde{L}]$ satisfies \begin{multline} d(v_{0}, \tilde{v}) = n, ~d(v,\tilde{v}) = 1 = d(v', \tilde{v}) \\ \text{and} \quad s(\tilde{v}) = d(w,\tilde{v}) = n_{2} -1 = s(v)-1. \end{multline} Hence $e = (v,v')$ belongs to $\mathrm{st}(\tilde{v})$, where $\tilde{v}$ is as required for assertion \eqref{Eq.Reduced-assertion-for-varphi-in-the-proof-of-Proposition-3-9}. \end{enumerate} This finishes the proof of Proposition \ref{Proposition.Crucial-technical-tool-for-main-theorem}. \end{proof} \begin{Corollary} \label{Corollary.Statement-for-varphi-who-is-0-on-i-special-arrows} Let $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ be such that $\varphi(e) = 0$ for all $i$-special arrows $e$, where $1 \leq i \leq n$. Then $\varphi \equiv 0$ on $\mathcal{BT}(n)$. \end{Corollary} \begin{proof} This follows by induction from \ref{Proposition.Crucial-technical-tool-for-main-theorem}. \end{proof} \subsection{} Let $v$ be an $n$-special vertex $(n \geq 1)$, $v^{*}$ its predecessor on the uniquely determined $n$-special path $(v_{0}, v_{1}, \dots, v_{n-1} = v^{*}, v)$ from $v_{0}$ to $v$, and $e^{*}$ the $n$-special arrow $(v^{*}, v)$. Its inverse $\overline{e^{*}} = (v,v^{*})$ belongs to $\mathbf{A}_{v,r-1}$.
\begin{Lemma} \label{Lemma.If-and-only-if-condition-for-inboundness} In the given situation, $e \in \mathbf{A}_{v,1}$ is inbound if and only if $\overline{e^{*}} \prec e$. \end{Lemma} \begin{proof} As the stabilizer $\mathrm{GL}(r,O)$ of $L_{0} = O^{r}$ acts transitively on $n$-special vertices or arrows, we may suppose that $v = [L_{n}]$, where $L_{n}$ is the $O$-lattice with basis $\{\pi^{n}x_{1}, \dots, x_{r} \}$, and thus $v^{*} = [L_{n-1}]$. (Here $\{x_{1}, \dots, x_{r}\}$ is the standard basis of $L_{0}$.) Under \eqref{Eq.Canonical-bijection-of-Avt-and-Gr-FtLpiL}, $\overline{e^{*}}$ corresponds to the one-dimensional subspace $\pi L_{n-1}/\pi L_{n}$ of the $r$-dimensional $\mathds{F}$-space $L_{n}/\pi L_{n}$, which has $(\overline{\pi^{n}x_{1}}) = \pi^{n}x_{1} \pmod{\pi L_{n}}$ as a basis vector. Let $\overline{H}$ be a hyperplane in $L_{n}/\pi L_{n}$ with pre-image $H$ in $L_{n}$, and let $e_{\overline{H}} = (v, v_{\overline{H}})$ be the arrow of type 1 determined by $\overline{H}$. Then $v_{\overline{H}} = [H]$ and \begin{align*} \overline{e^{*}} \prec e_{\overline{H}} \Leftrightarrow (\overline{\pi^{n}x_{1}}) \in \overline{H} &\Leftrightarrow \pi^{n}x_{1} \in H \\ &\Leftrightarrow \pi^{n}L_{0} \subset H \Leftrightarrow d(v_{0}, v_{\overline{H}}) \leq n \Leftrightarrow e_{\overline{H}} \text{ is inbound}. \end{align*} \end{proof} \subsection{} \label{subsection.Reformulation-of-B-1-at-n-special-v} We may now reformulate condition $(\mathrm{B}_{1})$ for $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ at the $n$-special vertex $v$ of level $n \geq 1$ as follows: Splitting \begin{equation} \mathbf{A}_{v,1} = \mathbf{A}_{v,1,\mathrm{in}} \charfusion[\mathbin]{\cup}{\cdot} \mathbf{A}_{v,1,\mathrm{out}} \end{equation} into the subsets of inbound / outbound arrows (note that $e \in \mathbf{A}_{v,1}$ is outbound if and only if it is $(n+1)$-special), $(\mathrm{B}_{1})$ reads \begin{equation*} 0 = \sum_{e \in \mathbf{A}_{v,1}} \varphi(e) = \sum_{e \in \mathbf{A}_{v,1,\mathrm{in}}} \varphi(e) + \sum_{e \in \mathbf{A}_{v,1,\mathrm{out}}} \varphi(e) = \varphi(\overline{e^{*}}) + \sum_{e \in \mathbf{A}_{v,1,\mathrm{out}}} \varphi(e) \end{equation*} (where we used \ref{Lemma.If-and-only-if-condition-for-inboundness} and condition (C) for $\varphi(\overline{e^{*}})$), i.e., as the flow condition \begin{equation} \label{Eq.Flow-condition} \varphi(e^{*}) = \sum_{e \in \mathbf{A}_{v,1,\mathrm{out}}} \varphi(e). \end{equation} The number of terms in the sum is \begin{equation} \label{Eq.Cardinality-of-outbound-arrows-of-type-1} \# \mathbf{A}_{v,1,\mathrm{out}} = \# \mathbf{A}_{v,1} - \# \mathbf{A}_{v,1,\mathrm{in}} = \# \mathds{P}^{r-1}(\mathds{F}) - \# \mathds{P}^{r-2}(\mathds{F}) = q^{r-1}. \end{equation} \subsection{} Let $\mathcal{T}_{v_{0}}$ be the full subcomplex of $\mathcal{BT}$ composed of the $n$-special vertices ($n \in \mathds{N}_{0}$) along with the 1-simplices connecting them. In other words, $\mathcal{T}_{v_{0}}$ is the union of the paths $\mathbf{A}_{v_{0},1,n}$, where $n \in \mathds{N}$, see \ref{subsubsection.Identification-of-arrows-with-special-Grassmannians}. It is connected, one-dimensional and cycle-free, hence a tree. The valence (= number of neighbors) of $v_{0}$ is $\# \mathds{P}^{r-1}(\mathds{F}) = (q^{r}-1)/(q-1)$, the valence of each other vertex $v \neq v_{0}$ is $q^{r-1}+1$, as we read off from \eqref{Eq.Cardinality-of-outbound-arrows-of-type-1}. Let further $\mathcal{T}_{v_{0}}(n) \vcentcolon = \mathcal{T}_{v_{0}} \cap \mathcal{BT}(n)$.
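For orientation, the count \eqref{Eq.Cardinality-of-outbound-arrows-of-type-1} is the elementary telescoping \[ \# \mathbf{A}_{v,1,\mathrm{out}} = \frac{q^{r}-1}{q-1} - \frac{q^{r-1}-1}{q-1} = \frac{q^{r}-q^{r-1}}{q-1} = q^{r-1}, \] and the case $r=2$ provides a consistency check: there every vertex is special (the sed of $L_{0}/L$ is necessarily of the form $(n,0)$), so $\mathcal{T}_{v_{0}}$ is all of $\mathcal{BT}$, the $(q+1)$-regular Bruhat--Tits tree, and indeed both valence formulas agree, $(q^{2}-1)/(q-1) = q+1 = q^{2-1}+1$.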
\subsubsection{} We define $\mathbf{H}(n)$ as the image of $\mathbf{H}(\mathcal{BT}, \mathds{Z})$ in $\{ \varphi \colon \mathbf{A}(\mathcal{BT}(n)) \to \mathds{Z}\}$ obtained by restriction. Hence \stepcounter{equation} \begin{equation} \label{Eq.H(BT,Z)-as-projective-limit} \mathbf{H}(\mathcal{BT},\mathds{Z}) = \varprojlim_{n \in \mathds{N}} \mathbf{H}(n). \end{equation} Put further \begin{equation} \mathbf{H}'(n) \vcentcolon = \left\{ \varphi \colon \mathbf{A}(\mathcal{T}_{v_{0}}(n)) \longrightarrow \mathds{Z} \, \middle\vert \begin{array}{l} \varphi \text{ is subject to \eqref{Eq.Condition-1-H'n} and \ref{Eq.Condition-2-H'n}} \\ \text{for each $i$-special $v$, $0 \leq i < n$} \end{array} \!\!\right\}. \end{equation} Here $\mathbf{A}(\mathcal{S})$ is the set of arrows (oriented 1-simplices) of the simplicial complex $\mathcal{S}$, and the conditions are \begin{equation} \label{Eq.Condition-1-H'n} \varphi(e) + \varphi(\overline{e}) = 0 \quad \text{for each arrow $e$ with inverse $\overline{e}$}; \end{equation} \begin{equation} \label{Eq.Condition-2-H'n} \sum_{\substack{ e \in \mathbf{A}(\mathcal{T}_{v_{0}}) \\ o(e) = v }} \varphi(e) = 0. \tag*{(4.6.5)($v$)} \end{equation} \subsection{} Equality \eqref{Eq.Flow-condition} together with the condition $(\mathrm{B}_{1})$ at $v_{0}$ states that the restriction of $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ to $\mathcal{T}_{v_{0}}(n)$ is an element of $\mathbf{H}'(n)$. Therefore, restriction defines homomorphisms $r_{n} \colon \mathbf{H}(n) \to \mathbf{H}'(n)$, which make the diagram (with natural maps $q_{n}$, $q_{n}'$) \begin{equation} \begin{tikzcd} \mathbf{H}(n+1) \ar[r, "r_{n+1}"] \ar[d, "q_{n}"] & \mathbf{H}'(n+1) \ar[d, "q_{n}'"] \\ \mathbf{H}(n) \ar[r, "r_{n}"] & \mathbf{H}'(n) \end{tikzcd} \end{equation} commutative. Note that both $q_{n}$ and $q_{n}'$ are surjective. Corollary \ref{Corollary.Statement-for-varphi-who-is-0-on-i-special-arrows} may be rephrased as \begin{Proposition} \label{Proposition.rn-is-injective} $r_{n}$ is injective for $n \in \mathds{N}$. \hfill \mbox{$\square$} \end{Proposition} \begin{Lemma} \label{Lemma.rn-is-surjective} $r_{n}$ is also surjective. \end{Lemma} \begin{proof} For $n=1$, this is implicit in the proof of Theorem \ref{Theorem.van-der-Put-sequence-short-exact-sequence-of-GK-modules} (i.e., one may arbitrarily prescribe the value of $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$ on $e \in \mathbf{A}_{v_{0},1}$, subject only to $(\mathrm{B}_{1})$ at $v_{0}$). For $n \geq 1$, let $Q_{n+1}$ (respectively $Q_{n+1}'$) be the kernel of $q_{n}$ (resp. $q_{n}'$). Then $r_{n+1}(Q_{n+1}) \subset Q_{n+1}'$, and we have the commutative diagram with exact rows \[ \begin{tikzcd} 0 \ar[r] &Q_{n+1} \ar[r] \ar[d] &\mathbf{H}(n+1) \ar[r] \ar[d, "r_{n+1}"] & \mathbf{H}(n) \ar[r] \ar[d, "r_{n}"] & 0 \\ 0 \ar[r] & Q_{n+1}' \ar[r] &\mathbf{H}'(n+1) \ar[r] & \mathbf{H}'(n) \ar[r] & 0. 
\end{tikzcd} \] By the induction hypothesis, $r_{n}$ is surjective, so the surjectivity of $r_{n+1}$ is implied by \begin{equation} \label{Eq.Reduced-assertion-by-induction} r_{n+1}(Q_{n+1}) = Q_{n+1}' \tag{$\ast$} \end{equation} But \begin{align*} Q_{n+1} &= \{ \varphi \in \mathbf{H}(n+1) \mid \varphi \equiv 0 \text{ on } \mathcal{BT}(n) \} \\ Q_{n+1}' &= \{ \varphi \in \mathbf{H}'(n+1) \mid \varphi \equiv 0 \text{ on } \mathcal{T}_{v_{0}}(n) \}, \end{align*} so \eqref{Eq.Reduced-assertion-by-induction} follows from the existence of sufficiently many elements of $Q_{n+1}$ (e.g., the classes in $\mathbf{H}(n+1)$ of the $P(f_{H,H',n})$) which have sufficiently independent values on the arrows in $\mathcal{T}_{v_{0}}(n+1)$ not in $\mathcal{T}_{v_{0}}(n)$. See also the proof of Theorem \ref{Theorem.van-der-Put-sequence-short-exact-sequence-of-GK-modules}, steps (iv) and (v). \end{proof} \subsection{} Let $\mathbf{H}(\mathcal{T}_{v_{0}}, \mathds{Z}) = \varprojlim_{n \in \mathds{N}} \mathbf{H}'(n)$ be the group of functions $\varphi \colon \mathbf{A}(\mathcal{T}_{v_{0}}) \to \mathds{Z}$ which satisfy \eqref{Eq.Condition-1-H'n} and \ref{Eq.Condition-2-H'n} for all vertices $v$ of $\mathcal{T}_{v_{0}}$. Similarly, we define $\mathbf{H}(\mathcal{T}_{v_{0}}, A)$ for an arbitrary abelian group $A$ instead of $\mathds{Z}$. That is, elements of $\mathbf{H}(\mathcal{T}_{v_{0}}, A)$ are characterized by conditions analogous to (A) and (B) of \ref{subsection.Conditions-for-varphi-in-A(BT)}, while (C) is not applicable. Putting together the considerations of \ref{subsection.Reformulation-of-B-1-at-n-special-v} with \ref{Proposition.rn-is-injective} and \ref{Lemma.rn-is-surjective}, we find \begin{equation} \label{Eq.Canonical-isomorphism-between-H(BT,Z)-and-H(TV0,Z)} \begin{tikzcd} \mathbf{H}(\mathcal{BT}, \mathds{Z}) \ar[r, "\cong"] &\mathbf{H}(\mathcal{T}_{v_{0}}, \mathds{Z}), \tag{4.11} \end{tikzcd} \end{equation} where the canonical isomorphism is given by restricting $\varphi \in \mathbf{H}(\mathcal{BT}, \mathds{Z})$, $\varphi \colon \mathbf{A}(\mathcal{BT}) \to \mathds{Z}$ to the subset $\mathbf{A}(\mathcal{T}_{v_{0}})$ of $\mathbf{A}(\mathcal{BT})$. In what follows, $A$ is an arbitrary abelian group. The next result is a consequence of the above. \stepcounter{subsection} \begin{Proposition} The canonical maps \begin{align*} \mathbf{H}(\mathcal{BT}, \mathds{Z}) \otimes A &\longrightarrow \mathbf{H}(\mathcal{BT}, A) \\ \text{and} \quad \mathbf{H}(\mathcal{T}_{v_{0}}, \mathds{Z}) \otimes A &\longrightarrow \mathbf{H}(\mathcal{T}_{v_{0}}, A) \end{align*} are bijective, and \eqref{Eq.Canonical-isomorphism-between-H(BT,Z)-and-H(TV0,Z)} yields \begin{equation} \label{Eq.Canonical-isomorphism-between-H(BT,A)-and-H(TV0,A)} \begin{tikzcd} \mathbf{H}(\mathcal{BT}, A) \ar[r, "\cong"] & \mathbf{H}(\mathcal{T}_{v_{0}}, A). \end{tikzcd} \end{equation} \end{Proposition} \begin{proof} As $\mathbf{H}(n)$ and $\mathbf{H}'(n)$ are finitely generated free $\mathds{Z}$-modules, their tensor products with $A$ are isomorphic to the similarly defined groups of $A$-valued maps. Then \eqref{Eq.Canonical-isomorphism-between-H(BT,A)-and-H(TV0,A)} follows from \eqref{Eq.H(BT,Z)-as-projective-limit} and \eqref{Eq.Canonical-isomorphism-between-H(BT,Z)-and-H(TV0,Z)}.
\end{proof} \subsection{} Recall that an $A$-valued distribution on a compact totally disconnected topological space $X$ is a map $\delta \colon U \to \delta(U) \in A$ from the set of compact-open subspaces $U$ of $X$ to $A$ which is additive under finite disjoint unions. We call $\delta(U)$ the \textbf{volume} of $U$ \textbf{with respect to} $\delta$. The \textbf{total mass} (or volume) of $\delta$ is $\delta(X)$. We apply this to the situation (see \ref{subsubsection.Identification-of-arrows-with-special-Grassmannians}--\eqref{Eq.Special-grassmannian-is-compact-and-open-in-Gr-KtV}) where \begin{equation} X = \mathrm{Gr}_{K,1}(V) = \left\{ \begin{array}{l} \text{hyperplanes $H$ of} \\ \text{the $K$-space $V$} \end{array} \right\} = \mathds{P}(V^{\wedge}) \end{equation} where $V^{\wedge}$ is the vector space dual to $V = K^{r}$. \stepcounter{subsubsection} \subsubsection{} Let $\mathbf{D}(\mathds{P}(V^{\wedge}), A)$ be the group of $A$-valued distributions on $\mathds{P}(V^{\wedge})$ with subgroup $\mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A)$ of distributions with total mass 0. By \eqref{Eq.Special-grassmannian-is-compact-and-open-in-Gr-KtV}, the sets $\mathds{P}(V^{\wedge})(e) = \mathrm{Gr}_{K,1}(e)$, where $e$ runs through the outbound arrows of $\mathbf{A}_{v_{0},1,n}$ ($n \in \mathds{N}$), i.e., through the set \stepcounter{equation} \begin{equation} \mathbf{A}^{+}(\mathcal{T}_{v_{0}}) = \{e \in \mathbf{A}(\mathcal{T}_{v_{0}}) \mid e \text{ oriented away from $v_{0}$}\}, \end{equation} form a basis for the topology on $\mathds{P}(V^{\wedge})$. Therefore, an element $\delta$ of $\mathbf{D}(\mathds{P}(V^{\wedge}), A)$ is an assignment \[ \delta \colon \mathbf{A}^{+}(\mathcal{T}_{v_{0}}) \longrightarrow A \] (where we interpret $\delta(e)$ as the volume of $\mathds{P}(V^{\wedge})(e)$ with respect to $\delta$) subject to the requirement \begin{equation} \delta(e^{*}) = \sum_{\substack{e \in \mathbf{A}^{+}(\mathcal{T}_{v_{0}}) \\ o(e) = t(e^{*})}} \delta(e) \end{equation} for each $e^{*} \in \mathbf{A}^{+}(\mathcal{T}_{v_{0}})$. The total mass of $\delta$ is \begin{equation} \delta(\mathds{P}(V^{\wedge})) = \sum_{\substack{e \in \mathbf{A}^{+}(\mathcal{T}_{v_{0}}) \\ o(e) = v_{0}}} \delta(e) = \sum_{e \in \mathbf{A}_{v_{0},1}} \delta(e). \end{equation} In view of \eqref{Eq.Flow-condition} and (4.6.5)($v_{0}$), we find that \begin{equation} \label{Eq.Isomorphism-of-D0(PVDach,A)-and-H(Tv0,A)} \begin{tikzcd} \mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A) \ar[r, "\cong"] &\mathbf{H}(\mathcal{T}_{v_{0}}, A), \tag{4.14} \end{tikzcd} \end{equation} where some $\delta \colon \mathbf{A}^{+}(\mathcal{T}_{v_{0}}) \to A$ on the left-hand side is completed to a map on $\mathbf{A}(\mathcal{T}_{v_{0}})$ by \eqref{Eq.Condition-1-H'n}, i.e., by $\varphi(\overline{e}) = -\varphi(e)$. \stepcounter{subsection} While both isomorphisms in \eqref{Eq.Canonical-isomorphism-between-H(BT,Z)-and-H(TV0,Z)} (or \eqref{Eq.Canonical-isomorphism-between-H(BT,A)-and-H(TV0,A)}) and \eqref{Eq.Isomorphism-of-D0(PVDach,A)-and-H(Tv0,A)} fail to be $\mathrm{G}(K)$-e\-qui\-vari\-ant (as $\mathrm{G}(K)$ fixes neither $v_{0}$ nor $\mathcal{T}_{v_{0}}$), the resulting isomorphism \begin{align} \mathbf{H}(\mathcal{BT}, A) &\overset{\cong}{\longrightarrow} \mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A) \tag{4.15} \\ \varphi &\longmapsto \tilde{\varphi} \nonumber \end{align} is.
Here the distribution $\tilde{\varphi}$ evaluates on $\mathds{P}(V^{\wedge})(e)$ as $\varphi(e)$ whenever $e$ is an arrow of $\mathcal{BT}$ of type 1 and $\mathds{P}(V^{\wedge})(e)$ is the compact-open subset of hyperplanes $H$ of $V$ such that $e$ points to $H$. \stepcounter{subsection} We summarize what has been shown. \begin{Theorem} \label{Theorem.Abstraction-to-arbitrary-abelian-groups} Let $A$ be an arbitrary abelian group. Restricting the evaluation of $\varphi \in \mathbf{H}(\mathcal{BT}, A)$ to arrows of $\mathcal{T}_{v_{0}}$ (resp. arrows of type 1 of $\mathcal{BT}$) yields canonical isomorphisms \begin{align*} \mathbf{H}(\mathcal{BT}, A) &\overset{\cong}{\longrightarrow} \mathbf{H}(\mathcal{T}_{v_{0}}, A) \intertext{resp.} \mathbf{H}(\mathcal{BT}, A) &\overset{\cong}{\longrightarrow} \mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A). \end{align*} The second of these is equivariant for the natural actions of $\mathrm{G}(K) = \mathrm{GL}(r,K)$ on both sides, while the first isomorphism is equivariant for the actions of the stabilizer $\mathrm{G}(O)Z(K)$ of $v_{0}$ in $\mathrm{G}(K)$. Each of the three modules $\mathbf{H}(\mathcal{BT},A)$, $\mathbf{H}(\mathcal{T}_{v_{0}}, A)$, $\mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A)$ is the tensor product with $A$ of the same module with coefficients in $\mathds{Z}$. \end{Theorem} As a direct consequence of the first isomorphism, i.e., of \eqref{Eq.Canonical-isomorphism-between-H(BT,A)-and-H(TV0,A)} we find the following Corollary, which is in keeping with the fact that bounded holomorphic functions on $\Omega$ are constant. \begin{Corollary} If $\varphi \in \mathbf{H}(\mathcal{BT}, A)$ has finite support, it vanishes identically. \end{Corollary} \begin{proof} Suppose that $\varphi$ has support in $\mathcal{BT}(n)$ with $n \in \mathds{N}$. Then its restriction to $\mathcal{T}_{v_{0}}(n+1)$ satisfies \eqref{Eq.Condition-1-H'n} and (4.6.5) at all vertices $v$ of $\mathcal{T}_{v_{0}}(n+1)$. As $\mathcal{T}_{v_{0}}(n+1)$ is a finite tree, this forces $\varphi$ to vanish identically on $\mathcal{T}_{v_{0}}(n+1)$, thus on $\mathcal{BT}$. \end{proof} \section{Concluding remarks} \subsection{} Ehud de Shalit in \cite{DeShalit01} Section 3.1 postulated four conditions $\mathfrak{A}, \mathfrak{B}, \mathfrak{C}, \mathfrak{D}$ for what he calls harmonic $k$-cochains on $\mathcal{BT}$. These conditions specialized to $k=1$ are essentially our conditions (A), (B), (C) from \ref{subsection.Conditions-for-varphi-in-A(BT)}. Grosso modo, de Shalit's $\mathfrak{B}$ corresponds to (B), $\mathfrak{C}$ to (C) and $\mathfrak{D}$ to (A), while $\mathfrak{A}$ is a special case of (A). \subsection{} In fact, the relationship with de Shalit's work is as follows. Suppose that $\mathrm{char}(K) = 0$, and consider the diagram \begin{equation} \label{Eq.Relation-with-de-Shalits-work-and-this-article-for-char-K-0} \begin{tikzcd} u \ar[d, mapsto] & \mathcal{O}(\Omega)^{*} \ar[r, "P"] \ar[d, mapsto] & \mathbf{H}(\mathcal{BT}, \mathds{Z}) \ar[d, hook] \\ d\log u = u^{-1}du & \left\{ \begin{array}{l} \text{closed $1$-forms} \\ \text{on $\Omega$}\end{array} \right\} \ar[r, "\mathrm{res}"] & \mathbf{H}(\mathcal{BT}, K) ~(= C_{\mathrm{har}}^{1} \text{ of \cite{DeShalit01}}), \end{tikzcd} \end{equation} where \enquote{$\mathrm{res}$} is de Shalit's residue mapping. Its commutativity follows for $u = \ell_{H,H'}$ from Corollary 7.6 and Theorem 8.2 of \cite{DeShalit01} (along with the explanations given there, and our description of $P(u)$), and may be verified for general $u$ by approximation.
Hence the van der Put transform $P$ yields a concrete description of the residue mapping on logarithmic 1-forms. \subsection{} Now suppose that $\mathrm{char}(K) = p > 0$, and that moreover $r=2$. Then $\mathcal{BT}$ is the Bruhat-Tits tree $\mathcal{T}$, and the residue mapping \[ \mathrm{res} \colon \{ \text{1-forms on $\Omega = \Omega^{2}$}\} \longrightarrow \mathbf{H}(\mathcal{T}, C) \] (see \cite{Gekeler96} 1.8) is such that the diagram analogous with \eqref{Eq.Relation-with-de-Shalits-work-and-this-article-for-char-K-0} \begin{equation} \begin{tikzcd} u \ar[d, mapsto] & \mathcal{O}(\Omega)^{*} \ar[r, "P"] \ar[d] & \mathbf{H}(\mathcal{T}, \mathds{Z}) \ar[d] \\ d\log u & \{\text{1-forms on $\Omega$}\} \ar[r, "\mathrm{res}"] & \mathbf{H}(\mathcal{T}, C) \end{tikzcd} \end{equation} commutes, with remarkable arithmetic consequences (loc. cit., Sections 6 and 7). A similar residue map for $r>2$ is unfortunately still lacking. In any case, we should regard $P$ as a substitute for the logarithmic derivative operator \[ u \longmapsto d\log u = u^{-1} du \] in characteristic 0. \subsection{} In \cite{SchneiderStuhler91}, Peter Schneider and Ulrich Stuhler described the cohomology $H^{*}(\Omega, A)$ of $\Omega = \Omega^{r}$ with respect to an abstract cohomology theory, where $A = H^{0}(\mathrm{Sp}(K))$. That theory is required to satisfy four natural axioms, loc. cit., Section 2. As they explain, these axioms are fulfilled at least \begin{itemize} \item for the étale $\ell$-adic cohomology of rigid-analytic spaces over $K$, where $\ell$ is a prime different from $p = \mathrm{char}(\mathds{F})$, and $A = \mathds{Z}_{\ell}$, and \item for the de Rham cohomology (where one must moreover assume that $\mathrm{char}(K) = 0$); here $A = K$. \end{itemize} Their result is stated in loc. cit., Section 3, Theorem 1, which in dimension 1 is (in our notation) \begin{equation} \begin{tikzcd} H^{1}(\Omega^{r}, A) \ar[r, "\cong"] &\mathbf{D}^{0}(\mathds{P}(V^{\wedge}), A). \end{tikzcd} \end{equation} Theorem 8.2 in \cite{DeShalit01} gives that (in the case where $\mathrm{char}(K) = 0$ and $H^{*} = H_{\mathrm{dR}}^{*}$ is the de Rham cohomology) \begin{equation} \begin{tikzcd} H_{\mathrm{dR}}^{k}(\Omega^{r}) \ar[r, "\cong"] & C_{\mathrm{har}}^{k}, \end{tikzcd} \end{equation} where $C_{\mathrm{har}}^{1}$ is our $\mathbf{H}(\mathcal{BT}, K)$. Hence our Theorems \ref{Theorem.van-der-Put-sequence-short-exact-sequence-of-GK-modules} and \ref{Theorem.Abstraction-to-arbitrary-abelian-groups} refine the above in the case $k=1$ and moreover provide natural $\mathds{Z}$-structures on the $H^{1}$-groups. \subsection{} Let now $\Gamma$ be a discrete subgroup of $\mathrm{G}(K)$. The most interesting cases are those where the image of $\Gamma$ in $\mathrm{G}(K)/Z(K) = \mathrm{PGL}(r,K)$ has finite covolume with respect to Haar measure, or is even cocompact. Examples are given as Schottky groups in $\mathrm{PGL}(2,K)$ \cite{GerritzenVanDerPut80} or as arithmetic subgroups of $\mathrm{G}(K)$ of different types, when $K$ is the completion $k_{\infty}$ of a global field $k$ at a non-archimedean place $\infty$ \cite{Drinfeld74}, \cite{Reiner75}. Then often the quotient analytic space $\Gamma \setminus \Omega$ is the set of $C$-points of an algebraic variety \cite{GoldmanIwahori63}, \cite{Drinfeld74}, \cite{Mustafin78}, which may be studied via a spectral sequence relating the cohomologies of $\Omega$ and $\Gamma$ with that of $\Gamma \setminus \Omega$ (\cite{SchneiderStuhler91} Section 5).
For $r=2$, this essentially boils down to a study of the $\Gamma$-cohomology sequence of \eqref{Eq.van-der-Put-Short-exact-sequence-for-general-r} (\cite{Gekeler96} Section 5). But \eqref{Eq.van-der-Put-Short-exact-sequence-for-general-r} with its $\Gamma$-action will also be useful for $r>2$; this is the topic of ongoing work. \clearpage \begin{bibdiv} \begin{biblist} \bib{BruhatTits72}{article}{title={Groupes r\'eductifs sur un corps local}, author={Fran\c cois Bruhat and Jacques Tits}, date={1972}, publisher={Institut des Hautes \'Etudes Scientifiques}, journal={Publications Math\'ematiques de l'IH\'ES}, volume={42}, pages={5--251}} \bib{DeShalit01}{article}{title={Residues on buildings and de Rham cohomology of p-adic symmetric domains}, author={Ehud De Shalit}, date={2001}, volume={106}, publisher={Duke University Press}, journal={Duke Mathematical Journal}, pages={123--191}} \bib{DeligneHusemoller87}{article}{title={Survey of Drinfel'd modules}, author={Pierre Deligne and Dale Husemoller}, date={1987}, publisher={AMS}, journal={Contemporary Mathematics}, volume={67}, pages={25--91}} \bib{Drinfeld74}{article}{title={Elliptic modules (Russian)}, author={Vladimir Gershonovich Drinfel\cprime d}, date={1974}, journal={Mat. Sb. (N.S.)}, volume={94(136)}, pages={594--627}} \bib{FresnelVanDerPut81}{book}{title={G{\'e}om{\'e}trie analytique rigide et applications}, author={Jean Fresnel and Marius van der Put}, date={1981}, publisher={Birkh{\"a}user, Basel}} \bib{Gekeler96}{article}{title={Jacobians of Drinfeld modular curves}, author={Ernst{-}Ulrich Gekeler and Marc Reversat}, date={1996}, journal={Journal für die reine und angewandte Mathematik}, volume={476}, pages={27--93}} \bib{Gekeler17}{article}{title={On Drinfeld modular forms of higher rank}, author={Ernst{-}Ulrich Gekeler}, date={2017}, journal={Journal de Th\'eorie des Nombres de Bordeaux}, volume={29}, pages={875--902}} \bib{GekelerTA}{article}{title={On Drinfeld modular forms of higher rank II}, author={Ernst{-}Ulrich Gekeler}, journal={Journal of Number Theory}, note={to appear}} \bib{GerritzenVanDerPut80}{book}{title={Schottky groups and Mumford curves}, author={Lothar Gerritzen and Marius van der Put}, date={1980}, publisher={Springer, Berlin}, series={Lecture Notes in Mathematics}} \bib{GoldmanIwahori63}{article}{title={The space of p-adic norms}, author={O. Goldman and N. Iwahori}, date={1963}, publisher={Institut Mittag-Leffler}, journal={Acta Mathematica}, volume={109}, pages={137--177}} \bib{IovitaSpiess01}{article}{title={Logarithmic differential forms on p-adic symmetric spaces}, author={Adrian Iovita and Michael Spiess}, date={2001}, publisher={Duke University Press}, journal={Duke Mathematical Journal}, volume={110}, pages={253--278}} \bib{Kiehl67}{article}{title={Theorem A und Theorem B in der nichtarchimedischen Funktionentheorie}, author={Reinhardt Kiehl}, date={1967}, journal={Inventiones Mathematicae}, volume={2}, pages={256--273}} \bib{Laumon96}{book}{title={Cohomology of Drinfeld Modular Varieties, Part 1, Geometry, Counting of Points and Local Harmonic Analysis}, author={Gérard Laumon}, date={1996}, publisher={Cambridge University Press}, series={Cambridge Studies in Advanced Mathematics}} \bib{ManinDrinfeld73}{article}{title={Periods of p-adic Schottky groups}, author={Yuri Manin and Vladimir Gershonovich Drinfeld}, date={1973}, journal={Journal für die reine und angewandte Mathematik}, volume={262/263}, pages={239--247}} \bib{Mustafin78}{article}{title={Non-Archimedean uniformization (Russian)}, author={G.
A. Mustafin}, date={1978}, journal={Mat. Sb. (N.S.)}, volume={105 (147)}, pages={207--237}} \bib{Reiner75}{book}{title={Maximal orders}, author={Irving Reiner}, date={1975}, publisher={Academic Press, London-New York}, series={London Mathematical Society monographs, No. 5}} \bib{SchneiderStuhler91}{article}{title={The cohomology of p-adic symmetric spaces}, author={Peter Schneider and Ulrich Stuhler}, date={1991}, journal={Inventiones mathematicae}, volume={105}, pages={47--122}} \bib{VanDerPut8182}{article}{title={Les fonctions thêta d'une courbe de Mumford}, author={Marius van der Put}, date={1981/82}, publisher={Secr\'etariat math\'ematique}, journal={Groupe de travail d'analyse ultram\'etrique}, volume={9}, pages={1--12}, note={Institut Henri Poincaré, Paris, 1983}} \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The entanglement properties of quantum chains in their ground state are by now rather well known. The most common measure is the entanglement entropy calculated with the reduced density matrix for a subsystem, usually a segment of the chain. This quantity was first studied within field theory \cite{Wilczek} and more recently also for various lattice models \cite{Vidal,Korepin,Keating,CC04,PeschelXY,Eisler}. For non-critical systems it is finite and of order one, but for critical systems it diverges logarithmically with the length of the subsystem. Moreover, the prefactor of the logarithm is given by the central charge $c$ in the conformal classification, see \cite{CC04}. In systems with defects or disorder it may be modified \cite{Peschel05,Refael,Laflo}. \par A more complex situation arises if the quantum state evolves in time. The simplest way to achieve this is a quench, where one changes a parameter of the system instantaneously. An eigenstate of the initial Hamiltonian then becomes a superposition of the eigenstates of the final one and a complicated dynamics results. The concomitant evolution of the entanglement entropy has been the topic of several recent studies \cite{CC05,Dechiara06, Eisert/Osborne06,Bravyi06}. If one is dealing with critical models, this can be discussed within conformal field theory \cite{CC05}. \par So far, most quenches considered were global, i.e. the system was modified everywhere in the same way. Such a quench can actually be realized for atoms in optical lattices \cite{Greiner02}. In the theoretical studies, it was found that the entanglement entropy first increases linearly in time and then saturates at a value proportional to the size of the subsystem. Thus it becomes an extensive quantity, in contrast to the equilibrium case. These features can be understood in a simple picture where pairs of particles transmit the entanglement from the initial state to later times \cite{CC05,Dechiara06}. \par In the present paper we study a different situation, namely a local quench. We consider free electrons hopping on a chain which initially contains a defect in the form of a weakened bond. This defect is suddenly removed and the electronic system has to readjust. The set-up is similar to the X-ray absorption problem in metals where the creation of a deep hole leads to a local scattering potential. When a conduction electron fills the hole, this potential is switched off again \cite{Mahan}. \par In contrast to a related study of the transverse Ising model \cite{Skrov06}, we start from the ground state. We consider a section of the chain containing the defect in the interior or at its boundary and calculate the time evolution of its entanglement entropy with the rest. For the equilibrium case, this problem has been studied before \cite{Peschel05}. We find a time dependence which is very different from that for a global quench. The entanglement entropy only changes after a certain waiting time, then increases to a maximum or plateau, depending on the location of the defect, and finally decreases very slowly and in a universal way towards its equilibrium value. In particular, it always remains non-extensive. \par The calculations are numerical and based on the determination of the reduced density matrix $\rho$ from the one-particle correlation functions. The necessary formulae are given in section \ref{sec:model}.
The case of a central defect, typical single-particle eigenvalue spectra and the long-time behaviour of S are discussed in section \ref{sec:center}. The evolution of the entropy at intermediate times and for various defect positions is presented in section \ref{sec:defpos}. The case of a boundary defect is studied in section \ref{sec:boundary} and described by simple formulae based on fronts moving through the system. In section \ref{sec:summary} we sum up our findings. In the two appendices we discuss the initial entanglement and present a simple example of a global quench for comparison. \section{Model} \label{sec:model} We consider a system of free spinless fermions hopping between neighbouring sites of an infinite chain (XX model). The system is half filled and initially prepared in the ground state of the inhomogeneous Hamiltonian \eq{H_0 =- \frac 1 2 \sum_{n=-\infty}^{\infty} t_n (c_n^{\dagger} c_{n+1} + c_{n+1}^{\dagger} c_{n}) \, ,} where the hopping amplitudes are $t_0=t'$ and $t_n=1$ for $n\ne 0$. Thus one has a single bond defect between sites $0$ and $1$. At time $t=0$ this defect is removed and the time evolution is governed by the homogeneous Hamiltonian \eq{H_1 =- \frac 1 2 \sum_{n=-\infty}^{\infty} (c_n^{\dagger} c_{n+1} + c_{n+1}^{\dagger} c_{n}) \, .} \par Both $H_0$ and $H_1$ describe critical systems. Our aim is to study how the entanglement between a subsystem of length $L$ and the rest of the chain evolves after the quench. The subsystem is chosen in such a way that the initial defect is inside it or at its boundary. \begin{figure}[thb] \center \includegraphics[scale=0.6]{figs/defect.eps} \end{figure} \par The entanglement properties are determined by the reduced density matrix which for free fermions has the following diagonal form \cite{Cheong/Henley04,Peschel03}: \eq{ \rho_L = \frac{1}{\tilde Z} e^{-\tilde H} \; , \quad \tilde H = \sum_{k=1}^{L} \varepsilon_k(t) f_k^{\dagger} f_k \, .} Here $\tilde Z$ is a normalization constant ensuring $\mathrm{tr}\,\rho_L = 1$ and the fermionic operators $f_k$ follow from the $c_n$ by an orthogonal transformation. The eigenvalues $\varepsilon_k(t)$ are given by \eq{ \varepsilon_k(t) = \ln \frac{1-\zeta_k(t)}{\zeta_k(t)},} where $\zeta_k(t)$ are the eigenvalues of the time-dependent ($L \times L$) correlation matrix \eq{C_{jl}(t)=\langle 0| \, c_j^{\dagger}(t) \, c_l(t) \, |0 \rangle \, .} Here $|0 \rangle$ is the ground state of $H_0$ and the indices $j$ and $l$ run over the sites of the subsystem. The $\zeta_k(t)$ also determine the entanglement entropy $S= -\mathrm{tr}(\rho_L \ln \rho_L)$ via \eq{ S(t)=-\sum_k \zeta_k(t) \ln \zeta_k(t) - \sum_k (1-\zeta_k(t)) \ln (1-\zeta_k(t)) \, . \label{eq:entropy} } \par To obtain the matrix ${\bf{C}}(t)$ one diagonalizes the operator $H_1$ by a Fourier transform \eq{H_1 = -\sum_q \cos q\; c^\dagger_q c_q \, .} The time dependence of the $c_q$ is then $c_q(t) = \exp (it\cos q)\, c_q$ and Fourier transforming back gives, for a ring of $N$ sites, \eq{c_j(t)=\sum_m U_{jm}(t) c_m \, , \quad U_{jm}(t)=\frac 1 N \sum_q \mathrm{e}^{-iq(j-m)}\mathrm{e}^{it \cos q} \, .} In the thermodynamic limit, the matrix elements $U_{jm}(t)$ of the unitary evolution operator can be written as Bessel functions and the correlation matrix becomes \eq{C_{jl}(t)=i^{l-j} \sum_{m,n} i^{m-n} J_{j-m}(t) J_{l-n}(t) C_{mn}(0) \, . \label{corellt}} This expresses ${\bf{C}}(t)$ in terms of the matrix ${\bf{C}}(0)$ calculated with the initial state $|0 \rangle$.
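\par The relations above translate directly into a short numerical procedure. The following minimal sketch (Python with NumPy and SciPy) illustrates Eqs. (\ref{eq:entropy}) and (\ref{corellt}); it is a reading aid, not the code used for the computations reported below. It uses the closed form $U_{jm}(t)=i^{j-m}J_{j-m}(t)$ of the propagator, which is equivalent to (\ref{corellt}), and, for concreteness, the $t'=0$ initial correlations made explicit in the next paragraphs (two decoupled half-chains with vanishing cross correlations):
\begin{verbatim}
# Minimal sketch (illustrative only): entropy evolution after the quench.
# Conventions as in the text: defect bond between sites 0 and 1, half filling.
import numpy as np
from scipy.special import jv          # Bessel function J_nu(t)

def s(x):
    """Homogeneous correlator C^0(x) = sin(pi x/2)/(pi x), with C^0(0) = 1/2."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)
    return np.where(x == 0.0, 0.5, np.sin(0.5 * np.pi * x) / (np.pi * safe))

def C0_cut(sites):
    """C(0) for t' = 0: two decoupled half-chains (image-sum form, see text)."""
    m, n = np.meshgrid(sites, sites, indexing="ij")
    right = (m >= 1) & (n >= 1)       # both sites to the right of the cut
    left = (m <= 0) & (n <= 0)        # both sites to the left of the cut
    C = np.where(right, s(m - n) - s(m + n), 0.0)
    C = C + np.where(left, s(m - n) - s(m + n - 2), 0.0)
    return C                          # cross terms vanish for t' = 0

def C_t(subsystem, t, pad=40):
    """Evolve and restrict: C(t) = A C(0) A^dagger, A_jm = (-i)^(j-m) J_{j-m}(t).
    The window size follows the remark in the text that the sums can be
    truncated once the Bessel indices exceed the time argument."""
    window = np.arange(subsystem.min() - int(t) - pad,
                       subsystem.max() + int(t) + pad + 1)
    d = subsystem[:, None] - window[None, :]
    A = (-1j) ** d * jv(d, t)
    return A @ C0_cut(window) @ A.conj().T

def entropy(C):
    """S = -sum_k [z ln z + (1-z) ln(1-z)] over the eigenvalues z of C."""
    z = np.linalg.eigvalsh(C).clip(1e-12, 1.0 - 1e-12)
    return float(-np.sum(z * np.log(z) + (1.0 - z) * np.log(1.0 - z)))

L = 40
subsystem = np.arange(-L // 2 + 1, L // 2 + 1)   # central defect, cf. below
for t in (0.0, 10.0, 20.0, 30.0):
    print(t, entropy(C_t(subsystem, t)))
\end{verbatim}
Evaluated for different subsystems and times, these few lines should reproduce the waiting time and the subsequent jump of $S$ discussed in the following sections.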
The information is transmitted via the Bessel functions, which have a maximum when the spatial separation is equal to the elapsed time. The factors in front of the Bessel functions can lead to imaginary correlations and thus to local currents in the system. This is what one also expects on physical grounds. \par The initial matrix elements $C_{mn}(0)$ were already obtained in \cite{Peschel05} for the region to the right of the defect. There they have the form \eq{C_{mn}(0)=C^0_{mn}-C^1_{mn} \, , \quad m,n > 0} where $C^0_{mn}$ denotes the correlation matrix of the homogeneous system \eq{C^0_{mn}=\frac{\sin \left[ \frac \pi 2 (m-n)\right]}{\pi (m-n)}} and $C^1_{mn}$ is the contribution of the defect, given explicitly in Eq. (8) of \cite{Peschel05}. It depends only on $m+n$ and vanishes for $t'=1$. If the defect cuts the chain, $C^1$ reduces to $C^0$ with argument $m+n$. When both sites are to the left of the defect, one simply has to replace $m+n$ with $m+n-2$, and by a straightforward generalization one can obtain corresponding formulae for $m$ and $n$ on opposite sides of the defect. \par With these initial values, the correlation matrix ${\bf{C}}(t)$ was calculated numerically and then diagonalized to obtain the eigenvalue spectrum and the entanglement entropy. Since the Bessel functions decay rapidly for indices much larger than the argument $t$, one can confine each of the sums in (\ref{corellt}) to about $(2t+4L)$ terms. Most calculations were done for times of the order of 100--200. \section{Central defect and spectra} \label{sec:center} We start by analyzing a simple symmetric situation, namely a defect with $t'=0$ in the center of the subsystem. Thus one starts with two uncoupled infinite half-chains which are then connected. \par In Figure \ref{fig:epsilon_t} the time evolution of the single-particle spectrum is shown in two different ways. On the left hand side, the eight lowest eigenvalues $\varepsilon_k(t)$ are plotted in ascending order for several times. Only the positive ones are shown, since due to the half-filling the spectrum is symmetric with respect to zero. These snapshots show that the initial step structure resulting from the degeneracies of the uncoupled chains becomes smoother with time and the dispersion curve seems to approach that of the homogeneous system. However, two steps remain, and as a consequence the curves lie below the asymptotic one. Roughly speaking, this corresponds to a stronger entanglement. \begin{figure}[thb] \center \includegraphics[scale=0.3,angle=270]{figs/epst_l40_full.eps} \includegraphics[scale=0.3,angle=270]{figs/epst_l40.eps} \caption{Time evolution of the low-lying single-particle eigenvalues $\varepsilon_k(t)$ for a subsystem of $L=40$ sites with a central defect $t'=0$. Left: snapshots of the positive eigenvalues for different times, compared with the spectrum in equilibrium. Right: time evolution of the four lowest eigenvalues.} \label{fig:epsilon_t} \end{figure} \par On the right, the time evolution of the four lowest eigenvalues is displayed. It shows two important features. Firstly, the eigenvalues remain unchanged and the two-fold degeneracy survives up to a time $T\approx L/2$. This can be explained in terms of a front propagating with velocity $v=1$ \cite{Antal99}, which is the Fermi velocity in the system and also the speed of the maxima in the Bessel functions. Before the front, starting at the defect, reaches the boundary, the subsystem to the right (left) of the defect cannot become entangled with the environment to the left (right).
\par Secondly, for $t>T$ three of the eigenvalues quickly relax towards the values $\varepsilon_k^{h}$ of the homogeneous system, but one, on the contrary, starts to evolve rather slowly. These ``anomalous'' eigenvalues (another one is found among the higher levels) lead to the kinks in the dispersion curves on the left of Fig. \ref{fig:epsilon_t} when they are close to another level. The smallest one is the most important for the entanglement, and its time dependence will determine that of $S$, since the other $\varepsilon_k(t)$ are basically constant. \par Therefore it is important to have a general picture of its behaviour. Figure \ref{fig:avoidcross} shows what happens for times $t$ up to 1600. In contrast to Figure \ref{fig:epsilon_t}, where the anomalous eigenvalue simply crossed over, one sees an avoided crossing with the next one, $\varepsilon_5$, which already looked relaxed at smaller times. This can be attributed to the fact that the two eigenstates in this case have the same reflection symmetry. \begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/avoidcross.eps} \caption{Large-time behaviour of the eigenvalues $\varepsilon_4(t)$ and $\varepsilon_5(t)$ already seen in Figure \ref{fig:epsilon_t}. The avoided level crossing can be fitted by $\ln (t /\tau)$ with $\tau \approx 1$.} \label{fig:avoidcross} \end{figure} \par At the avoided crossing, the two eigenvalues exchange roles and the anomalous parts of the curves can be well described by a single logarithm of the form $\ln (t /\tau)$, as shown in the Figure. Therefore the spectrum roughly looks as in equilibrium with at least one additional eigenvalue $\varepsilon_{an}$. One expects that for very large times $\varepsilon_{an}$ finally converges to the maximal eigenvalue of the homogeneous system. Therefore the logarithmic behaviour cannot persist indefinitely. This could be confirmed for a rather small subsystem with $L=6$ sites, but for larger $L$ the necessary times are numerically inaccessible. Moreover, the $\varepsilon_{k}$ can only be calculated reliably for values up to about 25. \par One can also look at the single-particle eigenfunctions connected with the $\varepsilon_{k}$. Then one finds that they develop imaginary parts which vanish again as the eigenvalue approaches the equilibrium limit. In the anomalous eigenvector they persist longer, and generally speaking this eigenvector looks more extended than the others when $\varepsilon_{an}$ is not close to a crossing. \par The entanglement entropy, finally, is shown in Figure \ref{fig:entropyevol}. The basic features are consequences of the eigenvalue spectrum discussed above. One has a sudden jump at $t= T$, followed by a very slow relaxation towards $S_{h}$, the value in the homogeneous system. For a large subsystem, the initial and final values of $S$ are also the same, as discussed in Appendix A. \begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/entropyevol.eps} \caption{Time evolution of the entanglement entropy for a subsystem of $L=40$ sites with a central defect $t'=0$. A sudden jump is followed by a slow relaxation towards the homogeneous value $S_{h}$. The inset shows the logarithmic correction to the $1/t$ decay.} \label{fig:entropyevol} \end{figure} \par As mentioned above, the slow decay can be understood in terms of the anomalous eigenvalue $\varepsilon_{an}$. Its observed logarithmic time dependence implies $\zeta_{an}(t) \approx \tau / t$. This yields for the entropy \eq{ S(t)=S_h+ \frac{\alpha \ln (t) + \beta}{t}.
\label{eq:entdecay}} The inset in Figure \ref{fig:entropyevol} shows that the logarithmic corrections to the $1/t$ time dependence can indeed be observed and fitted very well. \par The height of the maximum in $S$ depends on the size of the subsystem and one finds the behaviour \eq{ S_m -S_h = \frac{c_m }{3}\ln L + k_m \, , \label{eq:smax} } where $c_m \approx 0.23$ and $k_m \approx 0.06$. \par All these results were obtained for a defect with $t' = 0$. However, they also hold for the general case $0 < t' < 1$. Then the spectra and the entropy show a similar behaviour; only the amplitude $c_m$ and the constant $k_m$ decrease with increasing $t'$, since the effect must vanish for $t'=1$. \section{Entropy and defect position} \label{sec:defpos} We now ask how the picture changes if one varies the position of the defect. For simplicity we discuss the case $t'=0$, in which the defect cuts the subsystem into two parts with $L_1$ sites to the left and $L_2=L-L_1$ to the right of it. \par Figure \ref{fig:asymmblock} shows the entropy for intermediate times and several defect positions, specified by the numbers $L_1$ and $L_2$. The extreme cases are a defect at the boundary, $L_1=0$, and in the center, $L_1=L/2$, as in the previous section. The main feature is the development of a plateau-like region, which can be understood in terms of fronts propagating from the defect site. The entropy increases rapidly at $T_1 \approx L_1$ when one of the fronts reaches the closest boundary, and starts to decay at $T_2 \approx L_2$ when both fronts have left the subsystem. Thus the plateau is related to the asymmetry of the set-up. \begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/asymmblock_l40.eps} \caption{Time evolution of the entropy for a subsystem of $L=40$ sites for several defect positions $L_1/L_2$, indicating the number of sites to the left/right of the defect with $t'=0$. } \label{fig:asymmblock} \end{figure} \par One also finds that at time $t=L_2$, where the plateau region ends, the entropy always has the same value, which moreover coincides with the maximum value (\ref{eq:smax}) found for the central defect in the previous section. Beyond this point, the long-time region begins and shows the same relaxation behaviour as in (\ref{eq:entdecay}) for all defect positions, although the curves do not coincide completely when shifted appropriately. The plateau itself will be studied in more detail in the following section. \section{Defect at the boundary} \label{sec:boundary} To obtain a better understanding of the development of the entropy plateau we now investigate the situation where it is most pronounced. This is the case for a defect located at the boundary of the subsystem. The behaviour for these intermediate times should be connected with the propagation of a wavefront in the subsystem. That such a front actually exists can be seen from the single-particle eigenfunctions. \par In Figure \ref{fig:eigvec} we show snapshots of the lowest eigenvector for several intermediate times in a subsystem of $L=100$ sites. One can clearly recognize the front, either from the real part or from the imaginary part of the eigenvector. The latter, in particular, only develops behind the front, while it is approximately zero otherwise.
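\par Such eigenvectors can be extracted directly from the correlation matrix. As a schematic continuation of the sketch in section \ref{sec:model} (again an illustration; it reuses the function C_t defined there and identifies the lowest single-particle eigenvalue with the eigenvalue $\zeta_k$ of ${\bf C}(t)$ closest to $1/2$, i.e. one member of the pair with smallest $|\varepsilon_k|$, via $\varepsilon_k = \ln[(1-\zeta_k)/\zeta_k]$):
\begin{verbatim}
# Schematic continuation of the earlier sketch (illustrative only):
# eigenvector belonging to the lowest single-particle eigenvalue.
L, t = 100, 40.0
subsystem = np.arange(1, L + 1)          # t'=0 defect at the left boundary
zeta, vecs = np.linalg.eigh(C_t(subsystem, t))
k = int(np.argmin(np.abs(zeta - 0.5)))   # smallest |eps|, eps = ln((1-z)/z)
phi = vecs[:, k]
# Plotting phi.real and phi.imag against the site index displays the front.
\end{verbatim}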
\begin{figure}[thb] \center \includegraphics[scale=0.2,angle=270]{figs/eigvec_1_t0.eps} \includegraphics[scale=0.2,angle=270]{figs/eigvec_1_t20.eps}\\ \includegraphics[scale=0.2,angle=270]{figs/eigvec_1_t40.eps} \includegraphics[scale=0.2,angle=270]{figs/eigvec_1_t80.eps} \caption{Snapshots from the time evolution of the eigenvector corresponding to the lowest lying single-particle eigenvalue of a subsystem with $L=100$ sites and with a defect $t'=0$ at the left boundary.} \label{fig:eigvec} \end{figure} \par Furthermore, the form of the eigenvectors suggests interpreting the front as an effective defect with a time-dependent position, which divides the subsystem into two parts of size $t$ and $L-t$. This leads to an analogy with the equilibrium problem where the defect is located inside a subsystem. Some details for this situation, which was not covered in \cite{Peschel05}, are given in Appendix A. In that case, the entropy can be written as a sum of logarithmic terms, which can be combined into a scaling function depending only on $x/L$ and $1-x/L$, where $x$ is the location of the defect. \par Starting from these equilibrium formulae, one is led to make the following generalized ansatz for the nonequilibrium case \eq{ f(t,L)=\frac{c_1}{3}\ln (1+t) + \frac{c_2}{3}\ln (1+L-t) + k, \label{fitplateau}} where each of the coefficients depends on the strength of the defect. Note that one must have $c_1 \ne c_2$ in order to account for the nonsymmetric plateaus. The arguments of the logarithms have been shifted to avoid divergences in the interval $\left[ 0,L \right]$. \par In the simplest case of zero defect strength one can also motivate the ansatz (\ref{fitplateau}) with the help of a simple physical picture. Initially one has a subsystem which is entangled only with one of the half-chains, with a corresponding entropy $S \sim 1/6 \, \ln L$. The propagating front carries information about the other half-chain, and therefore the $t$ sites of the subsystem which have already been visited by the front acquire an entropy $S \sim 1/3 \, \ln t$, while the other $L-t$ sites still remain entangled only with one half-chain. This gives the values $c_1=1$ and $c_2=1/2$ for the constants. \par Figure \ref{fig:plateau} shows the entropy plateau curves with $t'=0$ for different system sizes, together with the fit functions (\ref{fitplateau}). Our fit gave the values $c_1=0.96$, $c_2=0.55$ and $k=1.03$. We emphasize that these coefficients have been obtained via a single fit to the $L=200$ data, and the other two curves differ only in the parameter $L$. One can see a good agreement with the data, except for the boundaries of the time intervals. Indeed, for $t=L$ the fit formula (\ref{fitplateau}) scales as $S \approx 1/3 \, \ln L$, that is, the asymptotics of the equilibrium entropy $S_h$. However, as was mentioned in the previous section, the value $S(t=L)$ is described by (\ref{eq:smax}), and for later times one enters the regime of slow relaxation towards $S_h$. \begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/ent_plateau.eps} \caption{Entropy plateaus in the case of a $t'=0$ boundary defect for different subsystem sizes. The fitting function is defined in Eq. (\ref{fitplateau}) and the coefficients were determined by a single fit to the $L=200$ data.} \label{fig:plateau} \end{figure} \par The situation becomes slightly more complicated for arbitrary defect strengths in the range $0 \le t' \le 1$.
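\par Before turning to general $t'$, we note for reference that a fit of the form (\ref{fitplateau}) can be set up in a few lines (schematic; it reuses C_t and entropy from the sketch in section \ref{sec:model}, and the sampling grid and initial guesses are our choices, not those behind Figure \ref{fig:plateau}):
\begin{verbatim}
# Schematic fit of the plateau ansatz (fitplateau) to an entropy time series.
from scipy.optimize import curve_fit

def plateau(t, c1, c2, k, L=200):
    return c1 / 3 * np.log(1 + t) + c2 / 3 * np.log(1 + L - t) + k

subsystem = np.arange(1, 201)                  # boundary defect, L = 200
ts = np.arange(10.0, 191.0, 10.0)              # stay inside the plateau region
S = np.array([entropy(C_t(subsystem, t)) for t in ts])
(c1, c2, k), _ = curve_fit(plateau, ts, S, p0=(1.0, 0.5, 1.0))
print(c1, c2, k)   # the text quotes c1 = 0.96, c2 = 0.55, k = 1.03
\end{verbatim}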
In the limiting case $t' = 1$ there is no time dependence at all, thus $c_1=c_2=0$; however, the entropy in this homogeneous case must scale as $S \sim 1/3 \, \ln L$. Therefore the constant in (\ref{fitplateau}) must be rewritten to contain a term proportional to $\ln L$ with a new coefficient. Finally, one can change to the scaling variable $t/L$ and write the entropy in the form \eq{ S(t,L)=\frac{c_0}{3} \ln L + \frac{c_1}{3}\ln (t/L) + \frac{c_2}{3}\ln (1-t/L) + k', \label{entscale}} where all the parameters $c_0,c_1,c_2$ and $k'$ are functions of the defect strength and have to be determined by fitting to the data. Note that the above form of the entropy is expected to hold only for large $L$ and away from the boundaries of the time interval.
\par In practice, we have always used the regularised ansatz (\ref{fitplateau}) to determine the $t'$-dependence of $c_1,c_2$ and $k$ by fitting to time series with $L=100$ fixed. The additional logarithmic term was then extracted by fitting to the $k$ values obtained from time series with different $L$, at fixed $t'$. One finds that $c_0 \approx 1+c_2$, so that only $c_1,c_2$ and $k'$ are needed to describe the plateaus. The $t'$-dependence of the former two is depicted in Figure \ref{fig:effc}. One can see that the coefficients vary smoothly with $t'$, both of them going to zero as $t' \to 1$.
\begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/effc.eps} \caption{Scaling function coefficients $c_1$ and $c_2$ as functions of the defect strength.} \label{fig:effc} \end{figure}
\par In conclusion, the time dependence of the entropy in the plateau region can be written in the scaling form (\ref{entscale}) for all values of the defect strength. The effective height of the plateau scales as $c_0/3 \, \ln L$ with $c_0 > 1$, which indicates a logarithmic increase of the entropy at these intermediate times.
\section{Summary and conclusions} \label{sec:summary} We have studied a particular kind of quench, where the system remained critical and was only modified locally. Starting from the ground state with a defect, we calculated the evolution of the reduced density matrix and the entanglement following from it. The main effect is an increase of the entropy $S$ on a time scale $t \approx L$, which can be related to the excitations near the Fermi energy with velocity $v_F=1$ in our units. Thus the entanglement always increases at the early stages of the rearrangement process. The criticality of the system is reflected in the logarithmic dependence of various quantities, in particular the height of the maximum of $S$, on the size of the subsystem. For the case of a boundary defect, where the mechanism at work leads to a plateau, we could also find the scaling function describing its shape. The defect strength only changes the parameters, which are closely related to those in equilibrium, but not the qualitative features. The long-time behaviour is characterized by a slow approach to the homogeneous ground state. For the times accessible, one finds a universal law independent of the defect strength. Thus no anomalous exponents as in the X-ray problem enter. These appear for time-delayed correlation functions or in the overlap between initial and time-dependent state, whereas here one is following the evolving state directly.
\par A particular feature of our situation is that the initial state is asymptotically degenerate with the final ground state.
This distinguishes it from the case of global quenches, where the energy difference between both is extensive, and also from the study in \cite{Skrov06}, where the evolution started from a randomly chosen high-energy state and the main interest was in the thermalization. On the level of the reduced density matrix, one can define an effective temperature if $\rho$ has thermal form, which is always the case for free fermions, and if the $\varepsilon_k$, or at least the important ones, are proportional to the single-particle excitations of the Hamiltonian. This is the case for critical systems where both have linear dispersion \cite{EisLegRa06} and also for the global quench considered in Appendix B. In this sense, our subsystem thermalizes with an effective temperature $T_{eff}=\ln L/(\pi L)$, which vanishes if one considers the whole system, while the quench in Appendix B leads to a finite value $T_{eff}= 1/2$.
\par Altogether we have obtained a good overall picture of the phenomena, although some aspects like the oscillations in $S$ and the influence of the filling have not been addressed. However, an analytical derivation of the asymptotic time dependence would still be desirable.
\ack We would like to thank M. Cramer, J. Eisert, E. Jeckelmann, C. Kollath, A. L\"auchli and K.D. Schotte for discussions.
\section*{Appendix A: Initial entanglement} For a defect at the boundary of the subsystem, the entanglement entropy has already been studied in \cite{Peschel05}. For a general location, an expression can be given immediately if $t'=0$. Then one has two half-chains where segments at the ends are part of the subsystem. If their respective lengths $l$ and $L-l$ are large, the conformal formulae \cite{CC04} lead to \eq{ S= \frac {c}{6} \bigl[\,\ln(l)+ \ln(L-l) \, \bigr]+ 2 k_1 \label{eqn:defect1} } where $c=1$ and $k_1 \approx 0.479$ for the hopping model. Thus $S$ is maximal when the defect is in the center of the subsystem, $l =L/2$.
\par Numerical calculations show that this law also holds for arbitrary defect strengths $t' \leq 1$, but with a constant $c'$ depending on $t'$. Measuring the position of the defect from the center of the subsystem, $l=L/2+x$, one can write it in the form \eq{ S= \bigl[\, \frac {1}{3} \ln L + k \,\bigr] + \frac {c'}{3} \ln \,\bigl[\,1 - (\frac {2x}{L})^2\; \bigr] \, . \label{eqn:defect2} } The term in the bracket with $k \approx 0.726$ represents the entropy of the homogeneous system, which is reduced by the scaling function except for a central defect. The coefficient $c'$ is related to the quantity $c_{eff}$ which appears for a boundary defect \cite{Peschel05} via $c'=1-c_{eff}$. If the defect is outside the subsystem, the same formula holds with $2x/L$ replaced by $L/2x$. Then $S$ is unaffected if the defect is far away from the subsystem, as one expects. The situation is illustrated in Figure \ref{fig:ent_defectpos} for the case $t'=0.5$ and served as guidance for the analysis of the non-equilibrium results in Section \ref{sec:boundary}.
\begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/ent_defectpos.eps} \caption{Equilibrium entanglement entropy for a subsystem of size $L=200$ with a defect of $t'=0.5$ as a function of the defect position $x$ measured from the center of the subsystem.} \label{fig:ent_defectpos} \end{figure}
\section*{Appendix B: Homogeneous quench} It is instructive to compare local and global quenches at the level of the density-matrix spectra.
A simple global quench in the hopping model is obtained if one starts with a fully dimerized chain in its ground state which is then made homogeneous. Thus initially $t_{2n}= 1$ and $t_{2n+1}= 0$ and only pairs of sites are coupled. For half filling, this leads to a correlation matrix ${\bf{C}}(0)$ which consists of ($2 \times 2$) blocks along the diagonal with all elements equal to one-half. Explicitly, \eq{ C_{mn}(0) = \frac {1}{2} \bigl[\delta_{m,n} + \frac {1}{2}(\delta_{n,m+1}+ \delta_{n,m-1}) + \frac {(-1)^m}{2} (\delta_{n,m+1}-\delta_{n,m-1})\bigr] \label{eqn:global1} } Due to the translational invariance (up to the alternating factors) this leads to the simple result \eq{ C_{mn}(t) = \frac {1}{2} \bigl[\delta_{m,n} + \frac {1}{2}(\delta_{n,m+1}+ \delta_{n,m-1}) + e^{-i \frac {\pi}{2}(m+n)} \frac {i(m-n)}{2t}J_{m-n}(2t) \bigr] \label{eqn:global2} } which involves only single Bessel functions. The resulting $\varepsilon_k$ are shown in Figure \ref{fig:globalquench} for $L=100$ and several times.
\begin{figure}[thb] \center \includegraphics[scale=0.4,angle=270]{figs/epsilon_hom.eps} \caption{Time evolution of the single-particle spectrum for a subsystem of $L=100$ sites in case of a global quench starting with a fully dimerized initial state.} \label{fig:globalquench} \end{figure}
\par One sees that the dispersion is linear near zero with a slope which decreases in time. This makes the number of low-lying levels larger and is responsible for the initial increase of the entanglement entropy. For times exceeding $L/2$, however, an asymptotic curve is approached and $S$ saturates. The asymptotic form of the $\varepsilon_k$ follows from the first two terms in (\ref{eqn:global2}) which describe a homogeneous tridiagonal matrix with eigenvalues \eq{ \zeta_k(\infty)= \frac {1}{2} (1+\cos(p_k)), \;\;\; p_k =\frac {\pi\,k}{L+1}, \;\;\; k=1,2,\ldots,L \label{eqn:global3} } where the $p_k$ are the allowed momenta for an open chain. This gives \eq{ \varepsilon_k(\infty)= 2 \ln \tan(p_k/2) \label{eqn:global4} } The spacing of the $p_k$, proportional to $1/L$, then leads to an asymptotic value $S \sim L$ if one converts the sums for $S$ into integrals. The explicit result is $S = L(2\ln2-1)$ and was also found in \cite{CC05} for a similar quench in the transverse Ising model. These results illustrate the strong influence of a global quench on the form of the spectrum and the level spacing. For a local quench, Figure \ref{fig:epsilon_t} shows that the effects are much smaller and largely confined to intermediate times.
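\par The asymptotic value is easily checked numerically from (\ref{eqn:global3}): summing the binary entropies of the eigenvalues $\zeta_k(\infty)$ reproduces $S/L \rightarrow 2\ln 2 - 1 \approx 0.386$. The following short Python sketch (added purely as an illustration of this check, with an arbitrarily chosen subsystem size) performs the sum:
\begin{verbatim}
import numpy as np

L = 1000                                     # subsystem size (illustrative)
p = np.pi * np.arange(1, L + 1) / (L + 1)    # allowed momenta p_k
zeta = 0.5 * (1.0 + np.cos(p))               # zeta_k at t = infinity
zeta = np.clip(zeta, 1e-15, 1.0 - 1e-15)     # guard the logarithms
S = -np.sum(zeta * np.log(zeta) + (1 - zeta) * np.log(1 - zeta))
print(S / L, 2.0 * np.log(2.0) - 1.0)        # both close to 0.386
\end{verbatim}
\section*{References}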
\section{Notation} We briefly summarise the notation used in the main manuscript and in this supplement: \begin{itemize} \item Discrete numbers of the two types of protein are denoted by $N_{\rm X}$ and $N_{\rm Y}$. We write $M_{\rm X}$ and $M_{\rm Y}$ for the number of mRNA molecules of the two types. \item Variables such as $x(t)$ denote continuous particle densities (or concentrations). Specifically $x(t)$ and $y(t)$ are protein densities, i.e., $x=N_{\rm X}/K$ and $y=N_{\rm Y}/K$ in the limit $K\gg 1$. \item We denote the probabilities in the master equations by capital $P$ (discrete particle numbers). \item The lower-case notation $p$ is used for probability density functions in the diffusion approximation (continuous particle densities/concentrations). \end{itemize}
\section{Master equations of the individual-based models} The different individual-based models in the main manuscript are uniquely defined by their master equations. Here we briefly summarise the master equations of the FM and of the GB, CB and NB models.
\subsection{Master equation of full model (FM)} We write $P_{a,b,c,d}$ for the probability that the system is in state $M_{\rm X}=a,M_{\rm Y}=b,N_{\rm X}=c,N_{\rm Y}=d$ at time $t$. The master equation of the FM is then \begin{align} \frac{d}{dt} P_{a,b,c,d} ={}& - \l\{H_{}\l(c\r)+H_{}\l(d\r) + a\gamma \l[ 1+B \r] + b\gamma\l[1+B\r] + \gamma_0 c + \gamma_0 d\r\} P_{a,b,c,d} \nonumber \\ {}&+ H_{}\l(c\r) P_{a, b-1, c, d} + H_{}\l(d\r) P_{a-1, b, c, d} + \gamma \l(a+1\r) P_{a+1, b, c, d}+ \gamma \l(b+1\r) P_{a, b+1, c, d}\nonumber \\ {}&+ B\gamma a P_{a,b,c-1,d} + B\gamma bP_{a, b,c,d-1} + \gamma_0 \l(c+1\r) P_{a,b,c+1,d}+ \gamma_0 \l(d+1\r) P_{a,b,c,d+1}. \label{eq:masterTSIB} \end{align} The probability of a state is zero if any of the variables $a,b,c$ or $d$ are negative.
\subsection{Infinitely fast-degrading mRNA limit: the GB model} In the kinetic scheme of the full model the mRNA decays with a rate $\gamma$, and synthesizes a protein with a rate $\gamma B$. Both rates are constant. Once an mRNA is created, the next event involving this mRNA particle is either the production of a protein or the decay of the mRNA molecule. The probability that the next event is the synthesis of a protein is $B/(B+1)$, and the probability that a decay occurs next (before production of a protein) is $1/(B+1)$. The random number, $\ell$, of protein molecules generated by one particular mRNA molecule during its lifetime then follows the geometric distribution \eq{ g(\ell)= \l(\frac{B}{1+B}\r)^\ell \l(\frac{1}{1+B}\r). }{eq:geo} The lifetime of an mRNA molecule is of order $\mathcal{O}\l(1/\gamma\r)$. As a consequence, we can think of the protein-generating process as follows in the infinitely-fast decaying mRNA limit ($\gamma \rightarrow \infty$): As soon as an mRNA is transcribed, it immediately releases a random number of proteins $\ell$ drawn from the distribution \eqref{eq:geo} and then decays.
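The moments $\langle \ell\rangle_\ell=B$ and $\langle \ell^2\rangle_\ell=B(2B+1)$ of this distribution will be used in the diffusion approximation below; they are easily verified numerically. The following short Python sketch is an illustration added here for convenience (note that NumPy's geometric sampler has support $\{1,2,\dots\}$, so one unit is subtracted to obtain $g(\ell)$ on $\{0,1,2,\dots\}$):
\begin{verbatim}
import numpy as np

B = 30.0                      # mean burst size
p = 1.0 / (1.0 + B)           # then P(l) = (1 - p)^l p,  l = 0, 1, 2, ...
rng = np.random.default_rng(0)

# rng.geometric counts trials up to the first success (support 1, 2, ...);
# subtracting 1 gives the distribution g(l) used in the text
ell = rng.geometric(p, size=10**6) - 1

print(ell.mean())             # close to B
print(np.mean(ell**2.0))      # close to B*(2B + 1)
\end{verbatim}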
\subsection{Master equation of the GB model} In the limit $\gamma \rightarrow \infty$, the dynamics of the full model can effectively be coarse-grained into a protein-only model \subeq{ \text{Gene X} \xrightarrow{H_{}\l(N_{\rm Y}\r)} {}& \ell \times \text{Protein X} \quad \text{(transcription and translation of protein X)} , \\ \text{Gene Y} \xrightarrow{H_{}\l(N_{\rm X}\r)} {}& \ell \times \text{Protein Y} \quad \text{(transcription and translation of protein Y)},\\ \text{Protein X} \xrightarrow{\gamma_0} {}& \emptyset \quad \quad \text{(degradation of protein X)}, \\ \text{Protein Y} \xrightarrow{\gamma_0} {}& \emptyset \quad \quad \text{(degradation of protein Y)}, }{eq:geo IB} where $\ell$ is drawn afresh from the above geometric distribution every time one of the first two reactions fires. The master equation of this process is \al{ \frac{d}{dt} P_{c,d} ={}& -\l[H_{}\l(c\r) + H_{}\l(d\r) + \gamma_0 c + \gamma_0 d \r] P_{c,d} + \gamma_0 \l(c+1\r) P_{c+1,d} + \gamma_0 \l(d+1\r) P_{c,d+1}\nonumber \\ {}&+ \sum_{\ell=0}^{c} H_{}\l(d\r) \l(\frac{B}{1+B}\r)^\ell \l(\frac{1}{1+B}\r) P_{c-\ell,d} + \sum_{\ell=0}^{d} H_{}\l(c\r) \l(\frac{B}{1+B}\r)^\ell \l(\frac{1}{1+B}\r) P_{c,d-\ell}. }{eq:master geo} We have written $P_{c,d}(t)$ for the probability that the system is in state $N_{\rm X}=c,N_{\rm Y}=d$ at time $t$. Again, the probability of a state is zero if $c$ or $d$ are negative.
\subsection{Master equation of CB model} The master equation for the CB model is obtained by replacing $g(\ell)\rightarrow \delta_{\ell,B}$, i.e., $\ell$ takes the value $\ell=B$ with probability one. We find \begin{align} \frac{d}{dt} P_{c,d} ={}& -\l[H_{}\l(c\r) + H_{}\l(d\r) + \gamma_0 c + \gamma_0 d \r] P_{c,d} \nonumber \\ {}&+ H_{}\l(d\r) P_{c-B,d} + H_{}\l(c\r) P_{c, d-B} + \gamma_0 \l(c+1\r) P_{c+1,d} + \gamma_0 \l(d+1\r) P_{c,d+1}. \end{align}
\subsection{Master equation of the model without bursts (NB)} In this case we have \begin{align} \frac{d}{dt} P_{c,d} ={}& -\l[B\times H_{}\l(c\r) + B\times H_{}\l(d\r) + \gamma_0 c + \gamma_0 d \r] P_{c,d} \nonumber \\ {}&+ B\times H_{}\l(d\r) P_{c-1,d} + B\times H_{}\l(c\r) P_{c, d-1} + \gamma_0 \l(c+1\r) P_{c+1,d} + \gamma_0 \l(d+1\r) P_{c,d+1}. \end{align}
\section{Deriving the diffusion approximations} \subsection{Diffusion approximation of the GB model} Simulations of the full model (FM) show that the number of mRNA molecules present at any one time is typically very small ($M_{\rm X},M_{\rm Y}<10$) when biologically relevant parameters are used. The conventional diffusion approximation relies on large particle numbers, and so it is not adequate for the full model. Instead, we perform the diffusion approximation on the master equation of the GB model, equation \eqref{eq:master geo}. This is a standard method, and we proceed along the lines of \cite{vanKampen}. The only complication is the presence of the geometrically distributed random numbers (denoted by $\ell$) in the protein-generation reactions. As we discuss below, this requires only modest modifications to the standard Kramers--Moyal expansion. We assume that the scale of the population size, $K$, is large but finite, i.e., $K\gg 1$; we write $x =N_{\rm X}/K$ and $y = N_{\rm Y}/K$, and replace $P_{c,d}(t)$ by $p(x,y,t)$.
The master equation \eqref{eq:master geo} then becomes \al{\partial_t p(x,y,t) = {}& -\l[ H_{}\l(Kx\r) + H_{}\l(Ky\r) + \gamma_0 Kx + \gamma_0 Ky \r] p\l(x,y,t\r) \nonumber \\ {}&+ \gamma_0 K \l(x+\frac{1}{K}\r) p\l(x+\frac{1}{K},y,t\r) + \gamma_0 K \l(y+\frac{1}{K}\r) p\l(x, y+\frac{1}{K},t\r) \nonumber \\ {}& + \sum_{\ell=0}^{\infty} H_{}\l(Ky\r) \l(\frac{B}{1+B}\r)^\ell \l(\frac{1}{1+B}\r) p\l(x-\frac{\ell}{K},y,t\r) \nonumber \\ {}& + \sum_{\ell=0}^{\infty} H_{}\l(Kx\r) \l(\frac{B}{1+B}\r)^\ell \l(\frac{1}{1+B}\r) p\l(x,y-\frac{\ell}{K},t\r). }{} In the last two terms we have extended the summation over $\ell$ to infinity; terms in which $x-\ell/K$ or $y-\ell/K$ become negative are automatically suppressed, as the corresponding probabilities $p(x-\ell/K,y,t)$ and $p(x,y-\ell/K,t)$ vanish. The above expression can then be written as \al{ \partial_t p(x,y,t) = {}& -\l[ H_{}\l(Kx\r) + H_{}\l(Ky\r) + \gamma_0 Kx + \gamma_0 Ky \r] p\l(x,y,t\r) \nonumber \\ {}&+ \gamma_0 K \l(x+\frac{1}{K}\r) p\l(x+\frac{1}{K},y,t\r) + \gamma_0 K \l(y+\frac{1}{K}\r) p\l(x, y+\frac{1}{K},t\r) \nonumber \\ {}& +H_{}\l(Ky\r)\l\langle p\l(x-\frac{\ell}{K},y,t\r) \r\rangle_\ell + H_{}\l(Kx\r) \l\langle p\l(x,y-\frac{\ell}{K},t\r) \r\rangle_\ell, }{} where $\langle \cdots \rangle_\ell$ denotes an average with respect to a geometrically distributed random number $\ell$, i.e., $\langle f_\ell\rangle_\ell =(1+B)^{-1} \sum_{\ell} \l(\frac{B}{1+B}\r)^\ell f_\ell$. We next expand the above equation in powers of $1/K$, keeping only the leading and sub-leading order terms \cite{vanKampen}. We also use the explicit expressions $\langle \ell\rangle_\ell=B$ and $\langle \ell^2\rangle_\ell = B(2B+1)$ for the first two moments of the geometric distribution. We arrive at the Fokker--Planck equation \al{ \partial_t p(x,y,t) = {}& -\partial_x \l\{ \l[ \frac{B}{K} H_{}\l(Ky\r) - \gamma_0 x\r] p\l(x,y,t\r) \r\} - \partial_y \l\{ \l[ \frac{B}{K} H_{}\l(Kx\r) - \gamma_0 y\r] p\l(x,y,t\r) \r\}\nonumber \\ {}&+ \frac{1}{2K} \partial_x^2 \l\{ \l[{ \frac{B\l(2B+1\r)}{K} H_{}\l(Ky\r) + \gamma_0 x }\r] p\l(x,y,t\r) \r\} \nonumber\\ {}&+ \frac{1}{2K} \partial_y^2 \l\{ \l[{ \frac{B\l(2B+1\r)}{K} H_{}\l(Kx\r) + \gamma_0 y }\r] p\l(x,y,t\r) \r\}, }{eq:FPGB} where we have written $\partial_x=\frac{\partial}{\partial x}$ and similarly for $\partial_y$. Realisations of the random process described by the Fokker--Planck equation \eqref{eq:FPGB} can be obtained as the solutions of the coupled It\=o stochastic differential equations \subeq{ dx_t ={}& v(x_t,y_t) dt + \sqrt{D(x_t,y_t)} dW_t^{(x)}, \\ dy_t ={}& v(y_t,x_t) dt + \sqrt{D(y_t,x_t)} dW_t^{(y)}, }{eq:diffapprox} with drift $v$ and diffusion $D$ given by \subeq{ v(w,z):={}&B \l(r_0+ \frac{r}{1+z^n}\r)- \gamma_0 w, \\ D(w,z):={}& \frac{B}{K} \l[{\l(2B+1\r)} \l(r_0+ \frac{r}{1+z^n}\r) + \frac{1}{B} \gamma_0 w\r]. }{eq:vD} The quantities $W_t^{(x)}$ and $W_t^{(y)}$ are independent Wiener processes.
\subsection{Diffusion approximation of the CB and NB models} The same procedure can be applied to the master equations of the CB and NB models, and for completeness we report the resulting Fokker--Planck equations.
For the CB model one finds \al{ \partial_t p(x,y,t) = {}& -\partial_x \l\{ \l[ \frac{B}{K} H_{} \l(Ky\r) - \gamma_0 x\r] p\l(x,y,t\r) \r\} - \partial_y \l\{ \l[ \frac{B}{K} H_{}\l(Kx\r) - \gamma_0 y\r] p\l(x,y,t\r) \r\}\nonumber \\ {}&+ \frac{B}{2K} \partial_{x}^2 \l\{ \l[{ \frac{B}{K} H_{}\l(Ky\r) + \frac{1}{B} \gamma_0 x }\r] p\l(x,y,t\r) \r\} \nonumber\\ {}&+ \frac{B}{2K} \partial_{y}^2 \l\{ \l[{ \frac{B}{K} H_{}\l(Kx\r) + \frac{1}{B} \gamma_0 y }\r] p\l(x,y,t\r) \r\}, }{eq:FPCB} and for the model without bursts (NB) one has \al{ \partial_t p(x,y,t) = {}& -\partial_x \l\{ \l[ \frac{1}{K} B\times H_{} \l(Ky\r) - \gamma_0 x\r] p\l(x,y,t\r) \r\} - \partial_y \l\{ \l[ \frac{1}{K} B\times H_{}\l(Kx\r) - \gamma_0 y\r] p\l(x,y,t\r) \r\}\nonumber \\ {}&+ \frac{1}{2K} \partial_{x}^2 \l\{ \l[{ \frac{1}{K} B\times H_{}\l(Ky\r) + \gamma_0 x }\r] p\l(x,y,t\r) \r\} \nonumber\\ {}&+ \frac{1}{2K} \partial_{y}^2 \l\{ \l[{ \frac{1}{K} B\times H_{}\l(Kx\r) + \gamma_0 y }\r] p\l(x,y,t\r) \r\}. }{eq:FPNB}
\section{The piecewise deterministic Markov process (PDMP)} \subsection{Construction of the PDMP model} In this section we outline the construction of the PDMP approximation, starting from the full model. For this purpose it is useful to introduce the notation \eq{ P^{\l(a,b\r)}\l(c,d,t\r) = P(M_{\rm X}=a,M_{\rm Y}=b,N_{\rm X}=c,N_{\rm Y}=d,t). }{} Thus the upper indices $(a,b)$ denote the numbers of mRNA molecules of the two types in the system, and the arguments $c,d$ stand for protein numbers. The master equation \eqref{eq:masterTSIB} can then be written in matrix form \al{ \frac{d}{dt} \l[ \begin{array}{c} P^{\l(0,0\r)}\l(c,d,t\r)\\ P^{\l(1,0\r)}\l(c,d,t\r)\\ P^{\l(0,1\r)}\l(c,d,t\r)\\ \ldots \end{array} \r] = \mathcal{L}^\dagger \l[ \begin{array}{c} P^{\l(0,0\r)}\l(c,d,t\r) \\ P^{\l(1,0\r)}\l(c,d,t\r) \\ P^{\l(0,1\r)}\l(c,d,t\r) \\ \ldots \end{array} \r]. }{eq:Yu} We have introduced \eq{ \mathcal{L}^{\dagger}:=\l[ \begin{array}{cccc} \mathcal{L}^{\dagger\l(0,0\r)} - H_{}(c) - H_{}(d) & \gamma & \gamma & \ldots \\ H_{}(d) & \mathcal{L}^{\dagger\l(1,0\r)} - H_{}(c) - H_{}(d)-\gamma & 0 & \ldots \\ H_{}(c) & 0 & \mathcal{L}^{\dagger\l(0,1\r)} - H_{}(c) - H_{}(d) -\gamma & \ldots \\ \ldots &&& \end{array} \r] , }{} where the operator $\mathcal{L}^{\dagger\l(m,n\r)}$ describes the forward evolution of protein numbers when there are $M_{\rm X}=m$ and $M_{\rm Y}=n$ mRNA molecules in the system. Specifically, \eq{ \mathcal{L}^{\dagger\l(m,n\r)} = m \gamma B\l(\mathcal{E}^{-1,0}-1\r) + n\gamma B\l(\mathcal{E}^{0,-1}-1\r) + \gamma_0 \l[\l(\mathcal{E}^{1,0}-1\r)c + \l(\mathcal{E}^{0,1}-1\r)d\r], }{} where $\mathcal{E}^{i,j}$ are the shift operators \cite{vanKampen} acting on functions of the protein numbers. They are defined through \eq{ \mathcal{E}^{i,j} f(c,d) \equiv f(c+i,d+j). }{} Next we consider the limit of fast mRNA decay, that is, large values of $\gamma$. More specifically, mRNA molecules of either type are generated with rates $H(N_{\rm Y})$ and $H(N_{\rm X})$ respectively, and we assume that $\gamma$ is much larger than either of these two rates ($\gamma\gg H(N_{\rm Y}), \gamma \gg H(N_{\rm X})$, for any values of $N_{\rm X}$ and $N_{\rm Y}$). In this limit, the system is almost always in the state without mRNA molecules (i.e., $M_{\rm X}=0,M_{\rm Y}=0$), except for short spells during which there is either one molecule of mRNA of type X, or one of type Y. The duration of the episodes spent in these $(1,0)$ and $(0,1)$ states is of order $\gamma^{-1}$, after which a switch back to the $(0,0)$ state occurs.
The probability of finding the system in states with $M_X>1$ or $M_Y>1$ is even smaller, specifically of order $(H/\gamma)^2$, and we neglect contributions from these states. Equation \eqref{eq:Yu} can then be simplified into a forward equation of a three-state model: \al{ \frac{d}{dt} \l[ \begin{array}{c} P^{\l(0,0\r)}\l(c,d,t\r) \\ P^{\l(1,0\r)}\l(c,d,t\r) \\ P^{\l(0,1\r)}\l(c,d,t\r) \\ \end{array} \r] = \mathcal{L^{\dagger}_{\rm{approx}}} \l[ \begin{array}{c} P^{\l(0,0\r)} \l(c,d,t\r)\\ P^{\l(1,0\r)} \l(c,d,t\r)\\ P^{\l(0,1\r)} \l(c,d,t\r)\\ \end{array} \r] }{eq:Yu2} with \eq{ \mathcal{L}^{\dagger}_{\rm{approx}}:=\l[ \begin{array}{ccc} \mathcal{L}^{\dagger\l(0,0\r)} - H_{}(c) - H_{}(d) & \gamma & \gamma \\ H_{}(d) & \mathcal{L}^{\dagger\l(1,0\r)} -\gamma & 0 \\ H_{}(c) & 0 & \mathcal{L}^{\dagger\l(0,1\r)}-\gamma \\ \end{array} \r]. }{} In the next step we consider the limit of large values of $K$; formally we take the limit $K\to\infty$. The system can then be described by the protein concentrations $x=N_{\rm X}/K$ and $y=N_{\rm Y}/K$. The corresponding probability distributions in the continuum limit are \subeq{ p_\text{0}(x,y):=P^{(0,0)}\l(Kx, Ky\r) K^2, \\ p_\text{X}(x,y):=P^{(1,0)}\l(Kx, Ky\r) K^2, \\ p_\text{Y}(x,y):=P^{(0,1)}\l(Kx, Ky\r) K^2. }{} On the left-hand side we have introduced the notation $0, X$ and $Y$ to describe the states in which there are no mRNA molecules ($M_X=M_Y=0$), one mRNA molecule of type $X$ ($M_X=1, M_Y=0$) and one mRNA molecule of type $Y$ respectively ($M_X=0, M_Y=1$). This is in line with the notation in the main manuscript. Between the random switching events among these three states, the time evolution of the protein concentrations is taken to be deterministic. Mathematically this corresponds to expanding the discrete operators $\mathcal{L}^{\dagger\l(a,b\r)}$ in powers of $K^{-1}$, and keeping only the lowest-order advection terms. This generates so-called Liouville operators, and leads to \eq{ \frac{\partial}{\partial t} \l[ \begin{array}{c} p_\text{0}\\p_\text{X}\\p_\text{Y} \end{array} \r] = \l(L^{\dagger}_{d} + L^{\dagger}_\text{s}\r) \l[ \begin{array}{c} p_\text{0}\\p_\text{X}\\p_\text{Y} \end{array} \r], }{eq:Yu3} where $L^{\dagger}_d$ and $L^{\dagger}_s$ are the forward operators driving the deterministic flow and the random switching between states, respectively. They are given by \subeq{ L^{\dagger}_d :={}& \l[ \begin{array}{ccc} \l(L^\dagger_d\r)_{11} & 0 & 0\\ 0 &\l(L^\dagger_d\r)_{22} & 0 \\ 0&0 & \l(L^\dagger_d\r)_{33} \end{array} \r], \\ L^{\dagger}_s :={}& \l[ \begin{array}{ccc} - H_\text{}(Kx)-H_\text{}(Ky) & \gamma & \gamma\\ H_\text{}(Ky) &- \gamma & 0 \\ H_\text{}(Kx)&0 & - \gamma \end{array} \r], }{eq:PDMP operators} and \subeq{ \l(L^\dagger_d\r)_{11} :={}&\gamma_0\partial_x \l(x\r) + \gamma_0\partial_y \l(y\r),\\ \l(L^\dagger_d\r)_{22} :={}& \partial_x \l(-\gamma b + \gamma_0 x\r) + \gamma_0 \partial_y \l(y\r) ,\\ \l(L^\dagger_d\r)_{33} :={}&\gamma_0 \partial_x \l(x\r) + \partial_y \l(-\gamma b + \gamma_0y\r). }{}
\noindent\underline{Nature of the approximation}\\ In deriving Eq. (\ref{eq:Yu3}) we have made several assumptions and approximations: \begin{enumerate} \item[(i)] First, we have assumed that $\gamma/H\gg 1$, where $H$ stands for the maximum value $H(N_{\rm X})$ and $H(N_{\rm Y})$ can attain. We recall that $H(N)=K\left[r_0+\frac{r}{1+(N/K)^n}\right]$. The function $r_0+r/(1+x^n)$ does not involve $K$ or $\gamma$, and its maximum value is $r_0+r$.
In dimensionless units, the assumption $\gamma/H\gg 1$ is thus fulfilled if $\gamma\gg (r_0+r)K$. \item[(ii)] We have replaced the discrete operators $\mathcal{L}^{\dagger\l(a,b\r)}$ by deterministic Liouville operators, i.e., we neglected \emph{demographic stochasticity} of the protein degradation. The purpose of this is to isolate the contribution of the bursting noise, originating from the random switching of the mRNA state ($0, X$ and $Y$). Making the deterministic approximation for the protein concentrations is formally valid only in the limit of very large protein populations, $K\gg 1$ ($K$ sets the scale of the numbers of protein molecules). \end{enumerate} In summary we assume $\gamma \gg (r_0+r)K$ and $K\gg 1$. We expect our approximations to be accurate when both of these are fulfilled; in particular, the typical value of $\gamma$ above which our theory can be expected to be accurate will depend on the choice of $K$, which in turn must be chosen large enough to justify the deterministic approximation of the protein dynamics. The data in the main manuscript reveal that the mathematical approximation agrees well with simulations for $\gamma=30$ and $K=200$. In our simulations we use $r_0\approx 0.007$ and $r=0.06$.
\subsection{Forward equation in the limit $\gamma \rightarrow \infty$} We start from \subeq{ \partial_t p_\text{0} ={}& \l[\gamma_0 \partial_x x+ \gamma_0 \partial_y y- H_\text{}(Kx) -H_\text{}(Ky)\r]p_\text{0} + \gamma p_\text{X} + \gamma p_\text{Y}, \label{eq:alice}\\ \partial_t p_\text{X} ={}& \l[\partial_x \l(\gamma_0 x - \gamma b\r) + \gamma_0 \partial_y y - \gamma \r]p_\text{X} + H_\text{}\l(Ky\r) p_\text{0},\\ \partial_t p_\text{Y} ={}& \l[\partial_y \l(\gamma_0 y - \gamma b\r) + \gamma_0 \partial_x x - \gamma \r]p_\text{Y} + H_\text{}\l(Kx\r) p_\text{0}. }{} Applying the operator $\l[\partial_y \l(\frac{\gamma_0}{\gamma} y - b\r) + \frac{\gamma_0}{\gamma} \partial_x x - 1 \r]\l[\partial_x \l(\frac{\gamma_0}{\gamma} x - b\r) +\frac{\gamma_0}{\gamma}\partial_y y - 1 \r]$ to both sides of equation \eqref{eq:alice} results in \al{ \partial_t {}&\l[\partial_y \l(\frac{\gamma_0}{\gamma} y - b\r) + \frac{\gamma_0}{\gamma} \partial_x x - 1 \r]\l[\partial_x \l(\frac{\gamma_0}{\gamma} x - b\r) +\frac{\gamma_0}{\gamma}\partial_y y - 1 \r] p_\text{0} \nonumber\\ ={}& \l[\partial_y \l(\frac{\gamma_0}{\gamma} y - b\r) + \frac{\gamma_0}{\gamma} \partial_x x - 1 \r] \l[\partial_t p_\text{X} - H_\text{}\l(Ky\r) p_\text{0} \r] + \l[\partial_x \l(\frac{\gamma_0}{\gamma} x - b\r) +\frac{\gamma_0}{\gamma}\partial_y y - 1 \r] \l[\partial_t p_\text{Y} - H_\text{}\l(Kx\r) p_\text{0} \r]\nonumber\\ {}&+\l[\partial_y \l(\frac{\gamma_0}{\gamma} y - b\r) + \frac{\gamma_0}{\gamma} \partial_x x - 1 \r]\l[\partial_x \l(\frac{\gamma_0}{\gamma} x - b\r) +\frac{\gamma_0}{\gamma}\partial_y y - 1 \r] \l[\gamma_0 \partial_x x+ \gamma_0 \partial_y y- H_\text{}(Kx) -H_\text{}(Ky)\r]p_\text{0}. }{eq:forward finite gamma} We note that this equation is not closed in $p_\text{0}$. Next, we take the $\gamma \rightarrow \infty$ limit, keeping in mind that $H$ and $\gamma_0$ are finite. The system then almost surely stays in the 0-state, and consequently $p_\text{X}, p_\text{Y} \rightarrow 0$. Equation \eqref{eq:forward finite gamma} then reduces to \al{ \partial_t {}&\l(-b\partial_y - 1 \r)\l(-b\partial_x -1 \r) p_\text{0} =\l(b\partial_y +1 \r) \l[H_\text{}\l(Ky\r) p_\text{0} \r] +\l(b\partial_x +1 \r) \l[H_\text{}\l(Kx\r) p_\text{0} \r] \nonumber\\ {}&+\l(b\partial_y +1 \r)\l(b\partial_x +1 \r)\l[\gamma_0 \partial_x x+ \gamma_0 \partial_y y- H_\text{}(Kx) -H_\text{}(Ky)\r]p_\text{0}.
}{eq:forward inf} The inverse operator of $1+b\partial_z$ is \eq{ (1+b\partial_z)^{-1} f(z) = \int^z \frac{e^{-\frac{z-z'}{b}}}{b} f(z') dz', }{} and so equation \eqref{eq:forward inf} turns into the `forward equation' presented in the main text: \al{ \partial_t p_0={}& \partial_x \l( \gamma_0 x p_0 \r) + \partial_y \l( \gamma_0 y p_0\r) - \l[ H_\text{}(Kx) + H_\text{}(Ky) \r] p_0 \nonumber\\ {}&+ H_\text{}(Ky) \int_0^{x} \frac{1}{b} e^{-\frac{x-x'}{b}} p_0\l(x',y,t\r)dx' + H_\text{}(Kx) \int_0^{y} \frac{1}{b} e^{-\frac{y-y'}{b}} p_0\l(x,y',t\r) dy'. }{eq:PDMP ID}
\subsection{Equations for the mean first switching time} Here we give a detailed derivation of the adjoint equation in the main text. We focus on initial conditions $y>x$, and our goal is to calculate the mean time it takes the dynamics to reach states with $x=y$. We write $T_Z(x,y)$ for the time it takes the dynamics to reach a state in which $x=y$ if started from initial condition $x,y$, and in mRNA state $\text{Z} \in\{0,\text{X},\text{Y}\}$. The $T_\text{Z}(x,y)$ then satisfy the following adjoint equation \cite{vanKampen} \eq{ -\l[ \begin{array}{c} 1\\ 1\\ 1 \end{array} \r] = \l(L_{d} + L_s\r) \l[ \begin{array}{c} T_0(x,y) \\T_\text{X}(x,y)\\T_\text{Y}(x,y) \end{array} \r], }{eq:backward eq} where $L_d$ and $L_s$ are the adjoint operators of $L^\dagger_d$ and $L^\dagger_s$. They are given by \al{ L_d :={}& \l[ \begin{array}{ccc} \l(L_d\r)_{11} & 0 & 0\\ 0 &\l(L_d\r)_{22} & 0 \\ 0&0 & \l(L_d\r)_{33} \end{array} \r] \text{ and } L_s = \l[ \begin{array}{ccc} - H_\text{}(Kx)-H_\text{}(Ky) & H_\text{}(Ky) & H_\text{}(Kx)\\ \gamma &- \gamma & 0 \\ \gamma &0 & - \gamma \end{array} \r], }{eq:b-operators} with \subeq{ \l(L_d\r)_{11} ={}&-\gamma_0 x\partial_x - \gamma_0 y\partial_y ,\\ \l(L_d\r)_{22} ={}&\l(\gamma b - \gamma_0 x\r) \partial_x - \gamma_0 y \partial_y,\\ \l(L_d\r)_{33} ={}&-\gamma_0 x \partial_x + \l(\gamma b - \gamma_0y\r) \partial_y . }{} In the infinitely fast degrading mRNA limit, $\gamma\rightarrow \infty$, equation \eqref{eq:backward eq} can be seen to converge to \al{ -\l[ \begin{array}{c} 1\\ 0\\ 0 \end{array} \r] ={}& \l[ \begin{array}{ccc} \begin{array}{c} -\gamma_0 x \partial_x -\gamma_0 y\partial_y - H_\text{}(Kx)-H_\text{}(Ky) \end{array} & H_\text{}(Ky) & H_\text{}(Kx)\\ 1 & b\partial_x -1 & 0 \\ 1&0 & b\partial_y -1 \end{array} \r] \l[ \begin{array}{c} T_0(x,y) \\T_\text{X}(x,y)\\T_\text{Y}(x,y) \end{array} \r]. }{eq:backward eq2} The boundary conditions for the mean first exit times are determined by $T_\text{Z}\l(x_b,y_b\r)=0$ for all locations $(x_b, y_b)\in \partial \Omega$ at which the deterministic flow driven by $L_d^\dagger$ leaves the domain $\Omega$ in state Z. Next, we specify a bounded domain $\Omega_C := \l\{\l(x,y\r): 0<x<y, y<C \r\}$. The boundary conditions of equation \eqref{eq:backward eq2} are then $T_{\rm X}(z, z) =0$ and $T_{\rm Y}(z,C)=0 \ \forall z<C$. We now use these boundary conditions, and integrate the second and the third components of the expression in equation \eqref{eq:backward eq2}. Subsequently we send $C\rightarrow \infty$ and arrive at \al{ -1 ={}& \l[-\gamma_0 x \partial_x -\gamma_0 y\partial_y - H(Kx)-H(Ky)\r] T_0\l(x,y\r) \nonumber\\ {}&+ H(Ky) \int_x^y \frac{e^{-\frac{x'-x}{b}}}{b}T_0\l(x',y\r) dx'+ H(Kx) \int_y^\infty \frac{e^{-\frac{y'-y}{b}}}{b} T_0\l(x,y'\r) dy'.
}{eq:integro-diff}
\section{WKB analysis} \subsection{WKB ansatz} In order to find the quasi-stationary distribution of the PDMP model and of the diffusion approximations of the GB and CB models, one uses the ansatz \eq{ p_{\rm stat} = \exp \l\{-\frac{1}{\epsilon} \l[S_0\l(x,y\r) + \mathcal{O}\l(\frac{B}{K}\r)\r]\r\} }{} where $\epsilon\propto K^{-1}$ is the magnitude of the intrinsic noise in the protein dynamics. For the purposes of the WKB analysis the noise is assumed to be weak, i.e., $\epsilon \ll 1$.
\subsection{DA of the GB model} In the context of the diffusion approximation of the GB model we use $\epsilon=B/K$. To leading order ($\mathcal{O}\l(K^0\r)$) one finds a Hamilton--Jacobi equation of the form \eq{ 0 = \frac{1}{2} \l(\nabla S_0\r)^{\mathbf T} \mathbf{D} \l(\nabla S_0\r) + \mathbf{v}^\mathbf{T} \cdot \nabla S_0. }{eq:GaussianHJ} The vector $\mathbf v$ denotes the deterministic flow \eq{ \mathbf{v}\l(x,y\r) := \l[\begin{array}{c} H_{\rm MF}\l(Ky\r) - \gamma_0 x\\H_{\rm MF}\l(Kx\r) - \gamma_0 y \end{array}\r], }{eq:vvv} and the (scaled) diffusion matrix $\mathbf D$ is given by \eq{ \mathbf{D}\l(x,y\r) := \l[\begin{array}{cc} D_{11}\l(x,y\r) & 0 \\ 0 & D_{22}\l(x,y\r) \end{array}\r], }{} with entries \subeq{ D_{11}\l(x,y\r) ={}& \frac{2B+1}{K} H(Ky)+\frac{1}{B}\gamma_0 x,\\ D_{22}\l(x,y\r) ={}&\frac{2B+1}{K}H(Kx)+\frac{1}{B}\gamma_0y. }{}
\subsection{DA of the CB model} As before we use $\epsilon=B/K$. A similar leading-order calculation delivers the Hamilton--Jacobi equation, which is again of the form described in equation \eqref{eq:GaussianHJ}. The only differences are minor modifications in the diffusion matrix, which now has entries \subeq{ D_{11}\l(x,y\r) ={}& H_{\rm MF}(Ky)+ \frac{1}{B}\gamma_0x, \\ D_{22}\l(x,y\r) ={}& H_{\rm MF}(Kx)+ \frac{1}{B}\gamma_0y. }{}
\subsection{DA of the NB model} It is now convenient to use $\epsilon=1/K$. Again one finds a Hamilton--Jacobi equation of the same form as above. The diffusion matrix now has entries \subeq{ D_{11}\l(x,y\r) ={}& H_{\rm MF}(Ky)+\gamma_0x, \\ D_{22}\l(x,y\r) ={}& H_{\rm MF}(Kx)+\gamma_0y. }{}
\subsection{PDMP model} For the PDMP model, similar calculations deliver the Hamilton--Jacobi equation \al{ 0 ={}& \l[\gamma_0 x - H_{\rm MF}\l(Ky\r)\r] \partial_x S_0 + \l[\gamma_0 y - H_{\rm MF}\l(Kx\r)\r] \partial_y S_0 \nonumber\\ {}&+ \l[\gamma_0 x+\gamma_0 y -H_{\rm MF}\l(Kx\r) - H_{\rm MF}\l(Ky\r)\r] \l(\partial_x S_0\r)\l(\partial_y S_0\r) \nonumber\\ {}&+\gamma_0 x \l(\partial_x S_0\r)^2 + \gamma_0 y \l(\partial_y S_0\r)^2 +\gamma_0 x \l(\partial_x S_0\r)^2\l(\partial_y S_0\r) + \gamma_0 y \l(\partial_x S_0\r)\l(\partial_y S_0\r)^2. }{eq:HJPDMP}
\section{Numerical methods} Sample paths of the individual-based processes (FM, CB, NB, and GB) are generated by the standard kinetic Monte Carlo algorithm \cite{Schwartz,Gillespie} implemented in C++. The PDMP process is simulated using the algorithm proposed by Bokes et al.~\cite{Bokes}. Simulations of the diffusion approximations are performed using the standard Euler--Maruyama algorithm with a constant time step $\delta t = 10^{-4}$. In all cases $10^6$ sample paths are simulated for a sufficiently long time to measure stationary distributions. For the mean first switching times, we sample $105$ initial states on a lattice in the domain $0\le N_{\rm X}(0)<N_{\rm Y}(0)\le 700$. For each initial state, we simulate $10^4$ sample paths, each until it crosses the boundary $N_{\rm X}=N_{\rm Y}$, to measure the mean first switching times.
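To make the Euler--Maruyama scheme explicit, we include a minimal Python sketch of the integration of the diffusion approximation \eqref{eq:diffapprox} with drift and diffusion \eqref{eq:vD}. This is an illustration under the parameter values of the main manuscript, not the production code used for the figures; in particular, the reflection of the concentrations at zero is our own regularisation of the scheme.
\begin{verbatim}
import numpy as np

# parameter values as in the main manuscript
B, K, gamma0, r, r0, n = 30.0, 200.0, 1.0, 0.06, 1.0 / 150.0, 3.0
dt = 1e-4
rng = np.random.default_rng(1)

def v(w, z):   # drift of Eq. (eq:vD)
    return B * (r0 + r / (1.0 + z**n)) - gamma0 * w

def D(w, z):   # diffusion of Eq. (eq:vD); positive for w, z >= 0
    return (B / K) * ((2*B + 1) * (r0 + r / (1.0 + z**n))
                      + gamma0 * w / B)

x, y = 3.0, 0.1                          # initial concentrations N/K
for _ in range(int(10.0 / dt)):          # integrate for 10 cell cycles
    dWx, dWy = rng.normal(0.0, np.sqrt(dt), size=2)
    x_new = x + v(x, y) * dt + np.sqrt(D(x, y)) * dWx
    y_new = y + v(y, x) * dt + np.sqrt(D(y, x)) * dWy
    x, y = max(x_new, 0.0), max(y_new, 0.0)   # keep densities >= 0
print(x, y)
\end{verbatim}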
The geometric minimum action method proposed by Heymann and Vanden-Eijnden \cite{Heymann} is implemented using MATLAB R2010a, and is used to find the rate function $S_0$ of the WKB method. For each model, we sample at least $150$ end points and solve for the least-action paths, discretized into $257$ equidistant points, connecting one of the fixed points and the end point. The final landscapes are generated by linear interpolation of the rate functions so obtained. The finite-difference scheme to solve the adjoint equation was implemented in MATLAB R2010a, discretizing the domain $0\leq x,y\leq C=2000$ into $150\times150$ grid points. The adjoint equation is then transformed into a set of $22500$ linear equations, which is solved using a built-in numerical solver in MATLAB R2010a.
\newpage \section{Sample paths of the different models} \subsection{Full model} \begin{figure}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary1.pdf} \caption{One sample path of the full model (FM). Left panel: short time scale. Right panel: the protein expression switches on a longer time scale, driven by intrinsic noise.} \label{fig:S1} \end{center} \end{figure}
\newpage \subsection{Geometrically distributed burst model (GB)} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary20.pdf} \caption{One sample path of the model with geometrically distributed burst size (GB). Left panel: short time scale. Right panel: long time scale.} \label{fig:S20} \end{center} \end{figure*}
\subsection{Constant burst model (CB)} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary2.pdf} \caption{One sample path of the model with constant bursts (CB). Left panel: short time scale. Right panel: long time scale.} \label{fig:S2} \end{center} \end{figure*}
\newpage \subsection{No-burst model (NB)} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary3.pdf} \caption{One sample path of the model without bursts. Left panel: short time scale. Right panel: long time scale. In $1000$ cell cycles, we observe no switching event in this sample path.} \label{fig:S3} \end{center} \end{figure*}
\subsection{PDMP model} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary4.pdf} \caption{One sample path of the PDMP. Left panel: short time scale. Right panel: long time scale.} \label{fig:S4} \end{center} \end{figure*}
\newpage \subsection{Diffusion approximation of the GB model} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary5.pdf} \caption{One sample path of the diffusion approximation of the GB model. Left panel: short time scale. Right panel: long time scale.} \label{fig:S5} \end{center} \end{figure*}
\subsection{Diffusion approximation of the CB model} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary6.pdf} \caption{One sample path of the diffusion approximation of the CB model. Left panel: short time scale. Right panel: long time scale.} \label{fig:S6} \end{center} \end{figure*}
\newpage \subsection{Diffusion approximation of the NB model} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.85\textwidth]{./Supplementary7.pdf} \caption{One sample path of the diffusion approximation of the single-stage model without bursts. Left panel: short time scale. Right panel: long time scale.
Similar to the NB model, no switching event occurs in $1000$ cell cycles in this sample path. } \label{fig:S7} \end{center} \end{figure*}
\newpage \section{Comparison of stationary distributions} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.6\textwidth]{./Supplementary_SD.pdf} \caption{Stationary distribution measured in simulations. All axes show $0\leq N_X,N_Y\leq 700$ on a linear scale. Insets show the distribution as viewed from the point $N_{\rm X}=N_{\rm Y}=700$ facing towards the origin. ({\bf a}) Full model; ({\bf b}) PDMP; ({\bf c}) GB model; ({\bf d}) Diffusion approximation (DA) of GB; ({\bf e}) CB model; ({\bf f}) DA of CB; ({\bf g}) NB model; ({\bf h}) DA of NB. The same colour scale is used in all panels, except for panels e and f.} \end{center} \end{figure*}
\newpage \section{Comparison of the mean first switching times} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.7\textwidth]{./Supplementary_MFST.pdf} \caption{Mean first switching times. All graphs show $0\leq N_X,N_Y\leq 700$ on linear scales. ({\bf a}) Full model; ({\bf b}) PDMP; ({\bf c}) GB model; ({\bf d}) Diffusion approximation (DA) of GB; ({\bf e}) CB model; ({\bf f}) Numerical solution of the adjoint equation of the PDMP. Data are plotted on the same colour scale in all panels to allow comparison.} \end{center} \end{figure*}
\newpage \section{Comparing WKB results} \begin{figure*}[h!!!] \begin{center} \includegraphics[width=0.6\textwidth]{./Supplementary_WKB.jpg} \caption{Results from the WKB analysis. Panels a, c, e, and g show the WKB rate functions $S_0(N_X,N_Y)$, and panels b, d, f, and h the corresponding approximation for the stationary probability distribution $\mathcal{N} \exp \l[-S_0\l(N_{\rm X},N_{\rm Y}\r)/\epsilon\r]$ where $\mathcal{N}$ is the normalisation factor. All panels show $0\leq N_X,N_Y\leq 700$ on a linear scale. The insets show the stationary distributions viewed from $\l(N_{\rm X},N_{\rm Y}\r)=\l(700,700\r)$. ({\bf a, b}) PDMP; ({\bf c, d}) DA of GB ; ({\bf e, f}) DA of CB; ({\bf g, h}) DA of NB.} \end{center} \end{figure*}
\newpage \section{Results} \noindent{\bf Different scales of individual-based models of a toggle switch network.} We compare four individual-based models and investigate the effect of bursting noise in a toggle switch network. The first model we consider describes both the mRNA and the protein population dynamics\cite{Strasser}. Fig.~\ref{fig:1}a illustrates the Markovian model of the regulatory network. Genes X and Y are transcribed into mRNA X and mRNA Y, respectively, which in turn are translated to produce proteins X and Y. The transcription of each of the two genes is suppressed by proteins of the respective other type via a Hill function\cite{Thattai,Walczak} $H(N)=K\left[r_0+r/[1+(N/K)^n]\right]$, where $N$ stands for the number of suppressing proteins. The model parameter $K$ represents a typical population scale of the proteins, and the parameters $r$ and $r_0$ set the minimal ($r_0K$) and maximal transcription rates ($(r_0+r)K$). The parameter $n>0$ is the so-called Hill coefficient which models the cooperative binding of the repressors \cite{Walczak}. More details of the reaction scheme can be found in the Supplementary Information. Proteins of either type and the mRNA molecules degrade with constant rates $\gamma_0$ and $\gamma$, respectively. Biologically, mRNA molecules degrade much faster than the proteins do ($\gamma \gg \gamma_0$) \cite{Thattai,Friedman,Cai}.
The translation rate of the mRNA is parametrised by $\gamma B$, where the parameter $B$ is the relative frequency of protein production to mRNA degradation. In this parametrisation, the number of proteins one single mRNA molecule produces during its lifetime is a geometrically distributed random variable with mean $B$ (see Supplementary Information). Biologically the parameter $B$ varies depending on the type of protein produced \cite{Swain}. We assume $B\gtrsim 10$ in this work \cite{Thattai,WalczakSasai} to investigate the effect of translational bursting. Together with the relatively short lifetime of mRNA molecules, this constitutes the origin of `translational bursting' in the model \cite{Friedman,Raj}: a relatively large number of protein molecules is synthesized in a relatively short period of time.
\begin{table*}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Parameter & Description & Value & Unit & Reference \\ \hline \hline $B$ & Average number of proteins each mRNA produces & 30 & molecule & \cite{Thattai}\\ $\gamma$ & mRNA degradation rate & 30 & 1/(cell cycle) & \cite{Thattai}\\ $\gamma_0$ & Protein degradation rate & 1.0 & 1/(cell cycle) & \cite{Thattai,Roma,Taniguchi}\\ $r$ & Maximum suppressed transcription rate & $ 6/100 $ & 1/(cell cycle) & \cite{Lu,Kobayashi}\footnote{In \cite{Lu} $r=1.8$ and the time unit is defined as the inverse of the protein degradation rate. In our full model we use this value, normalized by the mean burst size $B=30$ molecules ($r=1.8/30=0.06$).} \\ $r_0$ & Basal transcription rate & $1/150$ & 1/(cell cycle) & \cite{Lu,Kobayashi}\footnote{In \cite{Lu} $r_0=0.2$. After normalising with respect to the burst size $30$, we obtain $1/150$. In \cite{Kobayashi} $r_0=0.05\,r$, which is of the same order as \cite{Lu}.} \\ $K$ & A typical population scale of the proteins & 200 & molecule & \cite{Lu,Kobayashi}\footnote{In \cite{Lu} $K$ is set to be 200 molecules. In \cite{Kobayashi} only the deterministic dynamics are provided and $r+r_0=4.0$. To match the protein population scale $\approx 400$ in \cite{Lu,Taniguchi}, we impose $rK=400$, resulting in a typical population scale of the proteins of $K \sim 100$ molecules, which is of the same order as that of \cite{Lu}.} \\ $n$ & Hill coefficient & 3.0 & Dimensionless & \cite{Roma,Gardner,Lu,Kobayashi}\\ \hline \end{tabular} \caption{Parameter set.}\label{table:1} \end{center} \end{table*}
For simplicity, the process in Fig.~\ref{fig:1}a is assumed to be symmetric with respect to X and Y, but the analysis is easily generalised to asymmetric circuits. In Table \ref{table:1} we list a set of estimated values of the parameters for the model organism \emph{E. coli}, along with relevant references. In the context of this work the model just described constitutes the most detailed model we will investigate and compare against. It serves as a starting point for the derivation of more coarse-grained models, and for these purposes we will refer to it as the `full model' ({\bf FM}) in the following. The FM describes both the mRNA and the protein populations, hence it constitutes a relatively high-dimensional system which complicates the mathematical analysis. Notably, the only role of mRNA in the FM is to generate proteins, and so mRNA can be left out, so long as the correct statistics of protein production is retained. The timescale separation between the mRNA and protein lifetimes leads to the following reduced model describing only the protein dynamics.
In the limit of infinitely-fast mRNA degradation ($\gamma\gg\gamma_0$), proteins are generated instantaneously in bursts of geometrically distributed sizes with mean $B$, and in between bursting events protein populations decay with rate $\gamma_0$. We will refer to the reduced model as the {\bf GB} model (geometrically distributed bursts), see Fig.~\ref{fig:1}b \cite{Swain,Assaf}. In the GB model, the transcription rates are regulated via the Hill function exactly as before in the FM. A further reduction of the GB model involves replacing the geometrically distributed burst sizes by a constant size $B$. We will call this the {\bf CB} model (constant bursts) \cite{WalczakSasai}. While the reduction of the full model to the model with geometrically distributed bursts is well controlled and exact in the limit $\gamma\gg\gamma_0$, the effects of introducing constant burst sizes are unclear at this stage, and require a detailed analysis (see below). An even more reduced model is a model with no bursts\cite{Warren,Roma,WalczakSasai}; we will refer to this as the {\bf NB} model. The reaction scheme is illustrated in Fig.~\ref{fig:1}c. In this model, only one single protein is synthesized when a transcription event occurs. We assume a $B$-fold increased transcription rate so that the average number of proteins synthesized per unit time is consistent with the FM, GB, and CB models. \\
\noindent{\bf Only the GB model approximates the stationary distribution of the FM.} Numerical simulations of each of the models are carried out using standard methods \cite{Schwartz,Gillespie}. In the following we present statistical properties of the models, leaving typical sample paths to the Supplementary Information. Fig.~\ref{fig:2} displays the numerically computed stationary distributions for the FM, GB, CB and NB models. They illustrate that the profiles of protein expression in the different model settings are quite distinct. This is due to the different representation of the underlying intrinsic noise. While the stationary distributions of the FM and the GB model are in good agreement with each other, substantial discrepancies from the full model are found in the CB and NB models. In the CB model the stationary distribution of protein numbers is very localised compared to the FM and the GB model. In the NB model the probability distribution is even more sharply concentrated. This is because the NB model misses out on two pertinent sources of noise. Bursting production in the CB model amplifies the stochasticity of transcription events and leads to a broadening of the protein distribution. Adding randomly distributed burst sizes (GB model) introduces further stochasticity, and broadens the distribution of protein numbers even further. Based on these results, we conclude that the bursting noise introduced by the mRNA populations significantly broadens the stationary distribution. In addition, the GB model approximates the FM model significantly better than the CB and NB models do. We can effectively discard the CB and NB models as faithful representations of the FM, and our subsequent discussion hence focuses mostly on the GB model. \\
\noindent{\bf The GB model approximates the mean first switching time of the FM.} The toggle switch has two dynamic attractors, one in which protein X is highly expressed and where protein Y has a low concentration, and the other with inverted roles by symmetry. Starting from one attractor the switch can be driven to the other attractor by fluctuations.
The timescale of such a transition quantifies the dynamical stability: the longer the timescale, the more stable the system is at the initial position. As we will study next, the way in which the bursting production of protein is implemented significantly affects the timescale of these switching processes. Starting from initial condition $N_{\rm X}(0)=n_{x,0}$ and $N_{\rm Y}(0)=n_{y,0}$, we define the first switching time as the time it takes a sample path to reach the symmetric boundary $N_{\rm X}=N_{\rm Y}$. Mathematically, the first switching time is a random variable. The mean first switching time (MFST) is then its average value. The MFST depends on the initial condition $\l(n_{x,0},n_{y,0}\r)$. Sweeping across the space of possible initial configurations, the MFSTs of the FM and of the GB model are measured in simulations and presented in Fig.~\ref{fig:3}. We show the MFST of the CB model in the Supplementary Information. As with the stationary distributions, the data in Fig.~\ref{fig:3} indicate that the GB model approximates the switching times of the full model to a good accuracy. We remark that the MFST of the CB model is almost twice as long as that of the GB and FM models, and the switching time in the NB model is longer than $1000$ cell cycles (Supplementary Information). \\
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./SD.pdf} \caption{Stationary distribution of protein numbers, shown in the range $0\leq N_{\rm X},N_{\rm Y}\leq 700$ on a linear scale on both axes. ({\bf a}) FM: Full model describing the mRNA and protein populations; ({\bf b}) GB: protein-only model with geometrically distributed bursts; ({\bf c}) CB: protein-only model with constant bursts; and ({\bf d}) NB: protein-only model without bursts. \label{fig:2}} \end{center} \end{figure}
\noindent{\bf Diffusion approximation of the GB model.} The evolution of the protein population in the GB model is described by a master equation (Supplementary Information). Solving master equations mathematically is however difficult and mostly limited to linear dynamics \cite{Kumar,Swain}. The only realistic way forward for a theoretical analysis is often the so-called diffusion approximation. In the diffusion approximation, the discrete-molecule process is approximated by a Gaussian process for continuous concentrations---numbers of the different types of molecules normalized by a typical population scale. The Gaussian process satisfies a diffusion equation (the Fokker--Planck equation) \cite{vanKampen,Gardiner}. Based on these methods, it is often possible to calculate or approximate the stationary behaviour and switching times of model gene networks. For existing studies in the context of toggle switches see \cite{Wang,Lu,WangHuang}.
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./MFST.pdf} \caption{Mean first switching time as a function of the initial protein numbers ($0\leq N_{\rm X},N_{\rm Y}\leq 700$, shown on a linear scale). ({\bf a}) FM: Full model; ({\bf b}) GB model. \label{fig:3}} \end{center} \end{figure}
Deriving the diffusion approximation of the GB model requires modest modifications to the standard Kramers--Moyal expansion \cite{vanKampen,Gardiner}. These modifications are necessary to account for the randomness induced by the geometrically distributed burst sizes. Details of the derivation can be found in the Supplementary Information; here we only report the final outcome.
The expansion results in two coupled It\=o stochastic differential equations for the concentrations $x_t=N_{\rm X}(t)/K$ and $y_t=N_{\rm Y}(t)/K$. These are valid in the limit of large but finite populations \cite{KurtzFP} and are of the form \subeq{ dx_t ={}& v(x_t,y_t) dt + \sqrt{D(x_t,y_t)} dW_t^{(x)}, \\ dy_t ={}& v(y_t,x_t) dt + \sqrt{D(y_t,x_t)} dW_t^{(y)}, }{eq:diffapprox} with drift $v$ and diffusion $D$ given by \subeq{ v(w,z):={}&B \l(r_0+ \frac{r}{1+z^n}\r)- {\gamma_0} w, \\ D(w,z):={}& \frac{B}{K}\l[ \l(2B+1\r) \l(r_0+ \frac{r}{1+z^n}\r) + \frac{\gamma_0}{B} w\r]. }{eq:vD} The quantities $dW_t^{(x)}$ and $dW_t^{(y)}$ represent independent Wiener processes. The diffusion approximation can only be expected to be accurate when molecule numbers are large, so that the concentrations $x_t$ and $y_t$ are effectively continuous. In principle, a similar analysis can also be applied to the master equation of the full model. In the FM mRNA numbers are rather small though (typically $<5$, see Supplementary Information), so the Gaussian approximation does not capture the statistics of the intrinsic noise faithfully. Similarly, further analysis of the CB and NB models can be carried out based on the diffusion approximation. Given that the CB and NB models fail to reproduce the behaviour of the FM, these results are relegated to the Supplementary Information. Results from simulating the Gaussian process of equations \eqref{eq:diffapprox} are shown in Fig.~\ref{fig:4}. While the data for the stationary distribution (Fig.~\ref{fig:4}a) looks similar to that of the full model (Fig.~\ref{fig:2}a), noticeable discrepancies are manifest in the mean first switching times (compare Fig.~\ref{fig:4}b and Fig.~\ref{fig:3}a). In Fig.~\ref{fig:4}c and d, we show the differences between simulation outcomes of the full model and those of the diffusion approximation of the GB model. Although the GB model itself approximates the full model well (Figs.~\ref{fig:2} and \ref{fig:3}), we conclude that the diffusion approximation fails to capture the relevant model outcomes. \\
\begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./Error_DA.pdf} \caption{Diffusion approximation of the protein-only model with geometrically distributed random bursts (GB); ({\bf a}) Stationary distribution as a function of the protein numbers; ({\bf b}) Mean first switching time (MFST) as a function of the initial protein numbers, in units of cell cycles; ({\bf c}) Net deviation of the stationary distribution from the full model; ({\bf d}) Net deviation of the MFST from the FM. All axes show the range $0\leq N_{\rm X},N_{\rm Y}\leq 700$ on linear scales. \label{fig:4}} \end{center} \end{figure}
\noindent{\bf Constructing a mesoscopic piecewise deterministic Markov process.} We have seen that the diffusion approximation of the GB model fails to reproduce the statistics of the full model. This underlines the need to construct coarse-grained models {\em directly from the full model} and without the intermediate step of a protein-only dynamics. We now proceed to introduce such a model. As before we describe protein concentrations by continuous variables, $x$ and $y$. The mRNA dynamics are captured by introducing three `states': The $0$-state describes phases in which no mRNA is present. In the X-state there is one mRNA of type X and protein X is generated with rate $\gamma b$. The quantity $b=B/K$ is the mean burst size in units of protein concentration. No proteins of type Y are produced in the X-state.
Similarly, in the Y-state protein Y is generated with rate $\gamma b$. Both types of protein are subject to natural degradation with rate $\gamma_0$ in any of the three states. This is described by the following \emph{deterministic} differential equations: \subeq{ \text{0-state:} {}& \quad \dot{x} = -\gamma_0 x \quad \text{and} \quad \dot{y}=-\gamma_0 y,\\ \text{X-state:} {}& \quad \dot{x} = \gamma b -\gamma_0 x \quad \text{and} \quad \dot{y}=-\gamma_0 y,\\ \text{Y-state:} {}& \quad \dot{x} = -\gamma_0 x \quad \text{and} \quad \dot{y}= \gamma b-\gamma_0 y. }{} The rates with which the system transits between the states are based on the dynamics of the FM: \al{ \text{0-state} \xrightarrow{H_\text{}\l(K y\r)} {}&\text{X-state, }~~~~~\text{X-state} \xrightarrow{\gamma}\text{0-state},\nonumber\\ \text{0-state} \xrightarrow{H_\text{}\l(K x\r)} {}&\text{Y-state, }~~~~~\text{Y-state} \xrightarrow{\gamma}\text{0-state}. }{eq:PDMP transition} No transitions occur directly between the X and Y states. The kinetic scheme is illustrated in Fig.~\ref{fig:1}d. The stochasticity and discreteness of the mRNA populations are reflected in the random transitioning between the 0-, X- and Y-states. Between these Markovian events the protein concentrations evolve deterministically. We will refer to this model as the piecewise deterministic Markov process ({\bf PDMP}). Notably, at most one mRNA molecule of either type can be present in the PDMP at any time. Although the model can be generalised to allow more than one mRNA molecule, the analysis below shows that the lowest-order approximation is sufficient to capture the relevant fluctuations of the mRNA dynamics. \\ \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./Error_PDMP.pdf} \caption{PDMP approximation. ({\bf a}) Stationary distribution; ({\bf b}) Mean first switching time in the unit of cell cycles as a function of initial protein numbers. ({\bf c}) Net deviation of the stationary distribution from the full model; ({\bf d}) Net deviation of the MFST of the PDMP model from the FM. All axes are on linear scales and show the range $0\leq N_{\rm X},N_{\rm Y}\leq 700$. \label{fig:5}} \end{center} \end{figure} \noindent{\bf The PDMP approximation outperforms the diffusion approximation of the GB model.} As in the GB model, we work in the limit of infinitely fast degrading mRNA ($\gamma \rightarrow \infty$). Simulations of the PDMP model in this limit can be carried out using a minor modification of a previously proposed algorithm\cite{Bokes}. We measure the stationary distribution of the PDMP model and the mean first switching times for different initial protein numbers. Results are shown in Fig.~\ref{fig:5}a and b, and we compare the outcome against that of the full model in Fig.~\ref{fig:5}c and d. The simulation data indicate that the PDMP approximation outperforms the diffusion approximation of the GB model, and it provides a more faithful approximation to the FM. This is because the diffusion approximation introduces Gaussian noise. It retains some information about the variance of protein production and degradation, but it does not capture the geometrically distributed burst sizes in the GB model well enough. The PDMP approximation, on the other hand, models exponentially distributed bursts in protein concentration. The exponential distribution in the PDMP model is the analogue of the geometric distribution in the discrete-molecule GB model.
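To make the mechanics of the PDMP concrete, the following minimal Python sketch simulates the process in the $\gamma\rightarrow\infty$ limit by naive fixed-step thinning: deterministic degradation punctuated by exponentially distributed bursts fired at Hill-type rates. The parameter values and the form assumed for the transcription rate $H$ are illustrative only and are not those used in our simulations, which follow the exact algorithm of reference \cite{Bokes}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed for this sketch)
gamma0, B, K = 1.0, 30.0, 200.0
b = B / K                           # mean burst size (concentration units)

def H(n_repressor):
    # Assumed Hill-type burst-initiation rate (not the paper's parameters)
    return 0.05 + 1.0 / (1.0 + (n_repressor / K) ** 4)

def simulate_pdmp(x, y, T, dt=1e-3):
    """Fixed-step thinning: bursts occur with rates H(K y) and H(K x)."""
    t = 0.0
    while t < T:
        if rng.random() < H(K * y) * dt:
            x += rng.exponential(b)     # exponential burst of X, mean b
        if rng.random() < H(K * x) * dt:
            y += rng.exponential(b)     # exponential burst of Y
        x -= gamma0 * x * dt            # deterministic degradation
        y -= gamma0 * y * dt
        t += dt
    return x, y

print(simulate_pdmp(2.0, 0.1, T=50.0))
\end{verbatim}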
While the PDMP model is an approximation as well, it retains the typical characteristics of the stationary distribution and switching times of the original model. At the same time the PDMP model is suitable for further mathematical analysis (see below). \\ \noindent{\bf When does the PDMP outperform the diffusion approximation?} We now investigate the robustness of these findings. In Fig.~\ref{fig:6} we vary two essential parameters, the mean burst size $B$, and the population scale $K$, while keeping the other parameters fixed. We measure the Jensen--Shannon distance \cite{Lin,Endres} between the resulting stationary distributions of the PDMP and that of the full model. Data is shown in Fig.~\ref{fig:6}a and c. We also compare the mean first switching times starting from one of the stable modes, see Fig.~\ref{fig:6}b and d. The figure also shows results from the diffusion approximation of the GB model. Results indicate that the PDMP model outperforms the diffusion approximation of the GB model for mean burst sizes of $B\gtrsim 5$. We conclude that the bursting noise has to be considered in this biologically relevant regime \cite{Swain}. The PDMP model incorporates only the bursting noise and neglects the demographic noise from random degradation of the proteins. The strength of this demographic noise is proportional to $1/\sqrt{K}$. The results in Fig.~\ref{fig:6} c and d indicate that the difference in describing intrinsic noise propagates to physical observables even when the noise is weak ($K\approx 1000$ for fixed $B=30$). \\ \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./changingBK.pdf} \caption{Performance of the PDMP model and the diffusion approximation of the GB model (DA-GB). ({\bf a}) Jensen--Shannon distance between the stationary distribution of the PDMP (and DA-GB) and the stationary distribution of the FM; ({\bf b}) Mean first switching time for varying values of $B$ at fixed $K=200$; ({\bf c}-{\bf d}) Similar to ({\bf a}-{\bf b}) but now varying $K$ at fixed $B=30$. \label{fig:6}} \end{center} \end{figure} \noindent{\bf Analytic investigation of the PDMP process.} The simplicity of the PDMP approach allows us to proceed with a mathematical analysis. We here only outline the main steps; further details are reported in the Supplementary Information. We denote the probability density that the system is in the $0$-state and with protein densities $x,y$ at time $t$ by $p_\text{0}(x,y,t)$. Similarly we write $p_\text{X}(x,y,t)$ and $p_\text{Y}(x,y,t)$ when the system is in the X- or Y-states. The evolution of these distributions then follows the forward equation \eq{ \frac{\partial}{\partial t} \l[ \begin{array}{c} p_\text{0}\\p_\text{X}\\p_\text{Y} \end{array} \r] = \l(L^{\dagger}_{d} + L^{\dagger}_\text{s}\r) \l[ \begin{array}{c} p_\text{0}\\p_\text{X}\\p_\text{Y} \end{array} \r], }{eq:PDMP} where $L^{\dagger}_d$ and $L^{\dagger}_s$ drive the deterministic flow and the random switching between states respectively.
These operators are of the form \subeq{ L^{\dagger}_d :={}& \l[ \begin{array}{ccc} \l(L^\dagger_d\r)_{11} & 0 & 0\\ 0 &\l(L^\dagger_d\r)_{22} & 0 \\ 0&0 & \l(L^\dagger_d\r)_{33} \end{array} \r], \\ L^{\dagger}_s :={}& \l[ \begin{array}{ccc} - H_\text{}(Kx)-H_\text{}(Ky) & \gamma & \gamma\\ H_\text{}(Ky) &- \gamma & 0 \\ H_\text{}(Kx)&0 & - \gamma \end{array} \r], }{eq:PDMP operators} with \subeq{ \l(L^\dagger_d\r)_{11} :={}&\gamma_0\partial_x \l(x\r) + \gamma_0\partial_y \l(y\r),\\ \l(L^\dagger_d\r)_{22} :={}& \partial_x \l(-\gamma b + \gamma_0 x\r) + \gamma_0 \partial_y \l(y\r) ,\\ \l(L^\dagger_d\r)_{33} :={}&\gamma_0 \partial_x \l(x\r) + \partial_y \l(-\gamma b + \gamma_0y\r). }{} The differential operators $\partial_x$ and $\partial_y$ act on all that follows to their right, including the probability densities $p_0, p_\text{X}$ and $p_\text{Y}$ outside the matrix notation in equations \eqref{eq:PDMP} and \eqref{eq:PDMP operators}. The PDMP approximation applies in the limit $\gamma\rightarrow \infty$, i.e., fast return into the 0-state. The residence time in the X- and Y-states is exponentially distributed and scales as $\gamma^{-1}$. It formally tends to zero as $\gamma\to\infty$. On the other hand the translation rate $\gamma B$ tends to infinity in this limit. Combining the limiting behaviours of residence time and translation rate results in an \emph{exponentially distributed} increment of protein concentration in each cycle of switching from the 0-state to the X- or Y-state, and then returning to the 0-state. As a consequence the PDMP converges to previously proposed continuous-state bursting models \cite{Friedman,Bokes,Cai} in the limit $\gamma \rightarrow \infty$, and $p_\text{0}(x,y,t)$ satisfies \al{ \partial_t p_0={}& \partial_x \l( \gamma_0 x p_0 \r) + \partial_y \l( \gamma_0 y p_0\r) - \l[ H_\text{}(Kx) + H_\text{}(Ky) \r] p_0 \nonumber\\ {}&+ H_\text{}(Ky) \int_0^{x} \frac{1}{b} e^{-\frac{x-x'}{b}} p_0\l(x',y,t\r)dx' \nonumber\\ {}&+ H_\text{}(Kx) \int_0^{y} \frac{1}{b} e^{-\frac{y-y'}{b}} p_0\l(x,y',t\r) dy', }{eq:PDMP ID} as detailed in the Supplementary Information. \\ \noindent{\bf Analytic investigation of the mean first switching time.} One of the strengths of the PDMP formulation (equations \eqref{eq:PDMP} and \eqref{eq:PDMP operators}) is the relative ease with which mean first switching times can be obtained. We first proceed by computing the mean escape time from an arbitrary open domain $\Omega$. The mean first switching time can be calculated by setting $\Omega=\l\{(x,y):x<y\r\}$, recognising that the process can only exit this domain by crossing the boundary $x=y$. Suppose the system is initially at $(x,y)\in \Omega$, and in state $\text{Z}\in \l\{0, \text{X},\text{Y}\r\}$. We write $T_\text{Z}\l(x,y\r)$ for the mean first time at which the process exits the domain $\Omega$. The quantities $T_\text{Z}$ then satisfy the following backward equation \cite{vanKampen,Gardiner} \eq{ -\l[ \begin{array}{c} 1\\ 1\\ 1 \end{array} \r] = \l(L_{d} + L_\text{s}\r) \l[ \begin{array}{c} T_0(x,y) \\T_\text{X}(x,y)\\T_\text{Y}(x,y) \end{array} \r], }{eq:backward eq} where $L_d$ and $L_s$ are adjoint to the operators in equations \eqref{eq:PDMP operators}.
They are given by \subeq{ L_d :={}& \l[ \begin{array}{ccc} \l(L_d\r)_{11} & 0 & 0\\ 0 &\l(L_d\r)_{22} & 0 \\ 0&0 & \l(L_d\r)_{33} \end{array} \r], \\ L_s :={}& \l[ \begin{array}{ccc} - H_\text{}(Kx)-H_\text{}(Ky) & H_\text{}(Ky) & H_\text{}(Kx)\\ \gamma &- \gamma & 0 \\ \gamma &0 & - \gamma \end{array} \r] }{eq:b-operators} with \subeq{ \l(L_d\r)_{11} ={}&-\gamma_0 x\partial_x - \gamma_0 y\partial_y ,\\ \l(L_d\r)_{22} ={}&\l(\gamma b - \gamma_0 x\r) \partial_x - \gamma_0 \l(y\r) \partial_y,\\ \l(L_d\r)_{33} ={}&-\gamma_0 x \partial_x +\l(\gamma b - \gamma_0y\r) \partial_y . }{} In the infinitely-fast degrading mRNA limit ($\gamma\rightarrow \infty$), and using appropriate boundary conditions (Supplementary Information) we arrive at \al{ -1 ={}& \l[-\gamma_0 x \partial_x -\gamma_0 y\partial_y - H(Kx)-H(Ky)\r] T_0\l(x,y\r) \nonumber\\ {}&+ H(Ky) \int_x^y \frac{e^{-\frac{x'-x}{b}}}{b}T_0\l(x',y\r) dx' \nonumber\\ {}&+ H(Kx) \int_y^\infty \frac{e^{-\frac{y'-y}{b}}}{b} T_0\l(x,y'\r) dy'. }{eq:integro-diff} This is the adjoint equation\cite{vanKampen} of the expression in equation \eqref{eq:PDMP ID} on the open domain $\Omega$. Equation \eqref{eq:integro-diff} is solved by a finite difference method, noting that it is self-consistent and no boundary condition needs to be specified. The solution is shown in Fig.~\ref{fig:7}, and reproduces the simulation outcome of the FM well. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./Analysis_PDMP.pdf} \caption{Theoretical prediction of the PDMP model. ({\bf a}) Mean first passage time as a function of initial protein numbers, calculated from the backward equation \eqref{eq:integro-diff}; ({\bf b}) Stationary distribution of protein numbers calculated from the WKB method. Axes of both panels show the range $0\leq N_{\rm X}(0),N_{\rm Y}(0)\leq 700$ on linear scales. \label{fig:7}} \end{center} \end{figure} We remark that equation \eqref{eq:integro-diff} is only valid for the half-plane $\Omega$. A detailed discussion can be found in the Supplementary Information. \\ \noindent{\bf Analytic investigation of the weak-noise limit.} The analytical calculation of the stationary distributions of the PDMP model can be pursued further using the so-called Wentzel--Kramers--Brillouin (WKB) method. This technique is based on the ansatz \begin{equation} p_{\rm stat}\l(x,y\r) =\exp \l[-\frac{1}{b} \sum_{\ell=0}^\infty b^\ell S_\ell\l(x, y\r)\r], \label{eq:WKB ansatz} \end{equation} where $b=B/K\ll 1$. One proceeds by considering $\l(L_d^\dagger+ L_s^\dagger\r) p_{\rm stat}(x,y)=0$ order-by-order in $b$. To leading order we find the Hamilton--Jacobi equation \al{ 0 ={}& \l[\gamma_0 x - Bh\l(y\r)\r] \partial_x S_0 + \l[\gamma_0 y - Bh\l(x\r)\r] \partial_y S_0 \nonumber \\ {}& + \l[\gamma_0 x+\gamma_0 y -Bh\l(x\r) - Bh\l(y\r)\r] \l(\partial_x S_0\r)\l(\partial_y S_0\r) \nonumber\\ {}&+\gamma_0 x \l(\partial_x S_0\r)^2 + \gamma_0 y \l(\partial_y S_0\r)^2 \nonumber \\ {}&+\gamma_0 x \l(\partial_x S_0\r)^2\l(\partial_y S_0\r) + \gamma_0 y \l(\partial_x S_0\r)\l(\partial_y S_0\r)^2, }{eq:PDMPHJ} where $h(z):=H(Kz)/K$. This equation is then numerically solved using the algorithm of Heymann and Vanden-Eijnden \cite{Heymann}. Results are shown in Fig.~\ref{fig:8}. Even though this only provides a first-order approximation and despite the fact that we have used $b=0.15$ (which is not very small) we obtain a reasonable agreement with the stationary distribution in Fig.~\ref{fig:5}a.
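To illustrate how a candidate rate function can be checked against equation \eqref{eq:PDMPHJ}, the short Python sketch below evaluates the Hamilton--Jacobi residual for a trial quadratic $S_0$ on a grid; a genuine solution makes this residual vanish, and in practice $S_0$ is computed with the geometric minimum action method \cite{Heymann}. The parameter values and the Hill form assumed for $h(z)$ are illustrative only.
\begin{verbatim}
import numpy as np

# Assumed illustrative parameters and Hill form for h(z) = H(Kz)/K
gamma0, B = 1.0, 30.0

def h(z):
    return 0.05 + 1.0 / (1.0 + z ** 4)

x = np.linspace(0.05, 3.5, 300)
y = np.linspace(0.05, 3.5, 300)
X, Y = np.meshgrid(x, y, indexing="ij")

S0 = 0.5 * ((X - 1.0) ** 2 + (Y - 1.0) ** 2)   # trial ansatz, not a solution
Sx, Sy = np.gradient(S0, x, y, edge_order=2)

residual = ((gamma0 * X - B * h(Y)) * Sx + (gamma0 * Y - B * h(X)) * Sy
            + (gamma0 * X + gamma0 * Y - B * h(X) - B * h(Y)) * Sx * Sy
            + gamma0 * X * Sx ** 2 + gamma0 * Y * Sy ** 2
            + gamma0 * X * Sx ** 2 * Sy + gamma0 * Y * Sx * Sy ** 2)

print("max |HJ residual| of the trial S0:", np.abs(residual).max())
\end{verbatim}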
For completeness we have also carried out a WKB analysis of the diffusion approximation of the GB, CB and NB models. These are presented in the Supplementary Information. The leading order function $S_0\l(x,y\r)$ is the so-called `rate function' which quantifies the rare-event statistics of the process in the weak-noise limit $b\ll 1$ \cite{WKB,Zhou}. Several studies have suggested that $S_0\l(x,y\r)$ is a suitable candidate for a `landscape' of the non-equilibrium random processes in models of gene regulatory networks\cite{Wang,Warren,Strasser,Assaf,Roma}. The Hamilton--Jacobi equation \eqref{eq:PDMPHJ} contains cubic terms such as $\l(\partial_x S_0\r)^2 \l(\partial_y S_0\r)$, while diffusion equations are quadratic in derivatives of $S_0$. This illustrates the fundamental difference between the statistics of intrinsic noise in the diffusion approximation and the bursting noise in the PDMP. Further, more rigorous mathematical investigations into these differences would be very welcome in our view. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./Analysis_DA.pdf} \caption{Rate functions $S_0$ as functions of the protein numbers $0\leq N_{\rm X},N_{\rm Y}\leq 700$ on a linear scale. ({\bf a}) PDMP model; ({\bf b}) Diffusion approximation of the GB model. \label{fig:8}} \end{center} \end{figure} We compare the functions $S_0$ of the PDMP and the diffusion approximation of the GB model in Fig.~\ref{fig:8}. One observes a much `shallower' rate function in the PDMP model, especially at larger protein numbers ($N_{\rm X},N_{\rm Y}\approx 700$). This is due to the long tails in the exponential bursting kernel of the PDMP model, which are not present in the diffusion approximation of the GB model. Such a fat-tailed bursting kernel enhances the probability for the system to evolve to high protein concentrations. We identify this as the origin of the qualitatively distinct rare-event statistics in the two models. \\ \noindent{\bf Effects of bursting noise in a multi-switch network.} Recently, multi-switch systems have gained interest\cite{Lu,Lu2,Guantes}. A schematic diagram of the three-way switch network proposed in \cite{Lu} is shown in Fig.~\ref{fig:9}a. It is obtained from the classical toggle switch network by including a self-enhancing autoregulation. Our computational and mathematical setup requires only minor modifications to generalise to this case. Specifically, we replace the earlier Hill functions by \al{ G(N_{\rm X},N_{\rm Y}) ={}& q_0 \l(1 + \frac{r_1 }{(N_{\rm X}/K_1)^{n_1} + 1}\r)\nonumber \\ {}&\times \l(1 + \frac{r_2 }{(N_{\rm Y}/K_2)^{n_2} + 1}\r), }{} with parameters\cite{Lu} $q_0= 4$, $r_1= -4/5$, $r_2 = 7/3$, $n_1=3$, $n_2 =1$, $K_1=160$, and $K_2=320$. The rest of the parameters follow Table \ref{table:1}. The negative value of $r_1$ reflects the positive autoregulation. To evaluate the effects of bursting noise on this multi-switch model, we consider again the full model, the diffusion approximation of the GB model, as well as the CB and NB models of the extended network. Fig.~\ref{fig:9} displays the stationary distribution to illustrate the effects of the bursting noise in the multi-switch network. The model without bursts (NB, panel f) has a stationary distribution consisting of three modes, as reported earlier\cite{Lu}. Inclusion of constant bursts (CB, panel e) diversifies the protein expression and reduces the stability of the mode located at $N_{\rm X}=N_{\rm Y}\approx 230$.
In the full model (panel b) there is no discernible concentration of probability in the symmetric mode, hence the three-way switching capability appears to be absent. We also notice that the saddle of the distribution in the FM is located at a state with a much lower number of proteins compared to the NB and CB models. The most likely switching path \cite{Roma} from one of the asymmetric modes to the other will differ significantly between the different variants of the model. The diffusion approximation of the GB model (panel c) does not capture the outcome of the FM either. Overall these findings confirm again that the inclusion of bursty noise statistics has significant effects on the model outcome. Finally, we observe in Fig.~\ref{fig:9}d that the PDMP model approximates the full model of the three-way switch well. We conclude that randomly distributed burst sizes are again the predominant form of intrinsic noise in the multi-switch network. \section{Discussion and conclusion} Explicitly including mRNA dynamics in gene regulatory models inevitably introduces more complexity. We have quantitatively studied the effects of bursting noise \cite{Kaern} in a biologically relevant regime of the model organism \emph{E.~coli}. To our knowledge, this is one of the first studies that attempts to build a rigorous connection between existing individual-based models\cite{Walczak,WalczakSasai,Roma,Warren} and more coarse-grained models\cite{Wang,WangHuang,Friedman,Bokes}. Results of our simulations indicate that the bursting statistics of transcription and translation are essential ingredients of models of gene regulation. Coarse grained models need to account for bursting to retain correct statistics of noise-driven phenomena such as the switching between different dynamic attractors. The implications of our observations are relevant to the abstract modelling of regulatory networks in different ways. We are now in a better position to address our opening question, and to say how noise propagates between different levels of modelling. Perhaps more importantly, our study may ultimately help to decide what level of modelling is most appropriate to study gene regulatory circuits computationally. The answer will of course depend on the question in the focus of the investigation. We have examined different levels of coarse graining, and we have identified the steps in these reduction procedures at which significant alterations to the model outcomes are introduced. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{./MSSD.pdf} \caption{({\bf a}) Schematic diagram illustrating the network of the three-way switch, remaining panels show the stationary distribution of protein numbers in the range $0\leq N_X,N_Y\leq 700$ on a linear scale. ({\bf b}) Full model; ({\bf c}) Diffusion approximation of the GB model; ({\bf d}) PDMP approximation; ({\bf e}) CB model; and ({\bf f}) NB model. \label{fig:9}} \end{center} \end{figure} Systematically choosing a suitable level of coarse-graining also facilitates the mathematical analysis of regulatory networks. The high dimensionality of full regulatory networks effectively makes them intractable. Model reduction is needed to make progress, and our analysis demonstrates that the PDMP formulation is a powerful way forward, and that it can be more suitable than the conventional diffusion approximation. The PDMP model explicitly retains the bursting noise originating from the mRNA dynamics.
Even though it effectively disregards the demographic noise from random degradation of the proteins, it delivers accurate predictions for stationary distributions and switching times. As another strength, the PDMP formulation can relatively easily be generalised to accommodate more complex reactions. For example, in the \emph{Enterobacteria phage $\lambda$} switch it is not the monomer of the synthesized proteins which acts as the repressor to regulate transcription, but instead their dimer. Modelling these processes requires the inclusion of dimerization further downstream after transcription and translation \cite{Arkin,Warren}. Preliminary results not shown here reveal that the PDMP approximates such dynamics well. The fact that the piecewise deterministic Markov process is successful in approximating the full model opens up a relatively new modelling paradigm. We acknowledge that we are not the first to propose this \cite{Hasty1,Zeiser,Zeiser2,Hu,Kumar}. Our contribution consists in a first analytical treatment of PDMP models and in a systematic embedding into a wider landscape of modelling approaches. The bursting phenomenon is ubiquitous whenever there is a separation of time scales between the source and the product of a biological process. These are mRNA and protein in models of gene regulation, but we expect that these ideas can be applied to other biological problems with similar time-scale separation. \section{Methods} Sample paths of the individual-based processes (FM, CB, NB, and GB) are generated using the standard Gillespie algorithm \cite{Schwartz,Gillespie}. The PDMP process is simulated using the algorithm discussed by Bokes et al.\cite{Bokes}. Simulations of the stochastic differential equations resulting from the diffusion approximation are performed with a standard Euler--Maruyama scheme. The geometric minimum action method\cite{Heymann} is implemented using MATLAB R2010a, as is the finite-difference scheme to solve the backward equation \eqref{eq:integro-diff}. Further details can be found in the Supplementary Information.
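For orientation, a minimal Python sketch of a Gillespie step for the GB model is given below. Burst sizes are drawn from a geometric distribution with mean $B$; the Hill form and the parameter values are assumptions made for this illustration and do not reproduce the parameters of Table \ref{table:1}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

gamma0, B, K = 1.0, 30.0, 200.0    # assumed illustrative parameters

def hill(n_repressor):
    # Assumed Hill-type burst-initiation rate
    return K * (0.05 + 1.0 / (1.0 + (n_repressor / K) ** 4))

def gillespie_gb(nx, ny, T):
    """One sample path of the GB model up to time T."""
    t = 0.0
    while t < T:
        rates = np.array([hill(ny), hill(nx), gamma0 * nx, gamma0 * ny])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:
            nx += rng.geometric(1.0 / B)   # geometric burst, mean B
        elif event == 1:
            ny += rng.geometric(1.0 / B)
        elif event == 2:
            nx -= 1                        # degradation of X
        else:
            ny -= 1                        # degradation of Y
    return nx, ny

print(gillespie_gb(400, 20, T=10.0))
\end{verbatim}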
\section{\bf Introduction} Lorentz symmetry violation (LSV) in QED has been studied by a number of authors concerned with its consistency with causality, unitarity \cite{SCHR,KLINK1,KLINK2}, the structure of asymptotic states and renormalisation theory \cite{KOST2,KOST4,LEHN}. In previous papers \cite{ITD3,ITD4} we studied some of these issues in QED starting with a premetric formulation \cite{ITIN,GOMB} based on an action \begin{equation} S=-\frac{1}{8}\int d^4x U^{\mu\nu\sigma\tau}F_{\mu\nu}(x)F_{\sigma\tau}(x), \end{equation} where $F_{\mu\nu}(x)$ is the standard electromagnetic field tensor and the (constant) background tensor $U^{\mu\nu\sigma\tau}$ has the same symmetry properties as the Riemann tensor in General Relativity, namely \begin{equation} U^{\mu\nu\sigma\tau}=-U^{\nu\mu\sigma\tau}=U^{\sigma\tau\mu\nu}, \end{equation} and \begin{equation} U^{\mu\nu\sigma\tau}+U^{\mu\sigma\tau\nu}+U^{\mu\tau\nu\sigma}=0. \end{equation} This latter condition excludes parity violation. An outcome of the analysis was that even when the Lorentz symmetry violation is not constrained to be small the behaviour of the renormalised theory in the infra-red limit is dominated by the fixed point at zero coupling in a manner consistent with Lorentz symmetry. That is at a sufficiently large scale in spacetime Lorentz symmetry re-emerges. This is consistent with related earlier work \cite{NLSN1,NLSN2,NLSN3}. In this paper we study a QCD type model with $SU(N)$ gauge symmetry. In addition to the gauge field we include a quark field that transforms under the fundamental representation of $SU(N)$. A closely related model is investigated in reference \cite{APETROV}. The significance of such a theory is that it exhibits asymptotic freedom, that is, its behaviour at high energy is controlled, at least in the standard case of Lorentz invariance, by a weak coupling fixed point \cite{GROW,POLZ}. Our aim here is to investigate the manner in which asymptotic freedom is modified by the presence of Lorentz symmetry violation. An investigation with similar aims, in particular comparing QED and QCD is presented in reference \cite{VIEIRA}. Although we look in detail only at the simplest type of LSV, we set out the general theory in a manner parallel to reference \cite{ITD3} in order to clarify the logical structure of the argument. This prepares a framework for analyses of more complex models. In the obvious generalisation of the case of QED we take the action for the $SU(N)$ gauge field to be \begin{equation} S_{g}=-\frac{1}{8}\int d^4xU^{\mu\nu\sigma\tau}F_{a\mu\nu}(x)F_{a\sigma\tau}(x), \label{GINVACT} \end{equation} where $F_{a\mu\nu}(x)$ is the standard gauge field tensor transforming according to the orthogonal representation of $SU(N)$. For a general choice of $U^{\mu\nu\sigma\tau}$ this action although gauge invariant is not in general Lorentz invariant. Lorentz invariance with respect to a metric $g^{\mu\nu}$ can be recovered by choosing \begin{equation} U^{\mu\nu\sigma\tau}=g^{\mu\sigma}g^{\nu\tau}-g^{\nu\sigma}g^{\mu\tau}. \end{equation} Although there is {\it a priori} no metric in the general case with LSV, there is nevertheless, as argued in reference \cite{ITD3}, a {\it preferred} metric $g^{\mu\nu}$ that allows us to decompose $U^{\mu\nu\sigma\tau}$ in the following way \begin{equation} U^{\mu\nu\sigma\tau}=g^{\mu\sigma}g^{\nu\tau}-g^{\nu\sigma}g^{\mu\tau}-C^{\mu\nu\sigma\tau}, \end{equation} where the tensor $C^{\mu\nu\sigma\tau}$ has the same symmetries as the Weyl tensor in General Relativity. 
That is \begin{equation} C^{\mu\nu\sigma\tau}=-C^{\nu\mu\sigma\tau}=C^{\sigma\tau\mu\nu}, \end{equation} and \begin{equation} C^{\mu\nu\sigma\tau}+C^{\mu\sigma\tau\nu}+C^{\mu\tau\nu\sigma}=0. \end{equation} In addition it satisfies the trace condition \begin{equation} g_{\mu\sigma}C^{\mu\nu\sigma\tau}=0. \end{equation} We refer to $C^{\mu\nu\sigma\tau}$ as a Weyl-like tensor (WLT). It follows that the WLT determines the nature of the LSV. As in the case of QED the possible types of LSV can be determined by applying the Petrov classification to the WLT \cite{PTRV}. A useful approach to the Petrov scheme is contained in references \cite{JMS,PODON}. Its application in QED with LSV is presented in reference \cite{ITD3}. There are six cases, conventionally labeled O,N,D,I,II,III. Each case has a canonical form for the WLT \cite{PENRIN}. Class O corresponds to the case $C^{\mu\nu\sigma\tau}=0$ which for pure gauge theory implies no LSV. However as in the case of QED \cite{ITD3}, the quark field can engender LSV in the model through its contribution to vacuum polarisation provided the associated metric for quark propagation shares with the gluon metric an invariance under a subgroup of the Lorentz group that is the little group of the given 4-vector \cite{COLGL1}. The 4-vector can be time-like, space-like or light-like (with respect to both metrics). The time-like case implies that there is a reference frame in which the theory is invariant under rotations of the spatial axes. This is the case we study in detail. However it is convenient to set out the scheme for quantising and renormalising the theory in a general form. Canonical forms for the WLT in other Petrov classes and the implications for the vector meson dispersion relations are the same as those for photons in QED as described in detail in reference \cite{ITD3}. \section{\label{GFIX} Gauge Fixing and Ghost Fields} In terms of the vector gauge fields $A_{a\mu}(x)$ the tensor fields are given by \begin{equation} F_{a\mu\nu}(x)=\partial_\mu A_{a\nu}(x)-\partial_\nu A_{a\mu}(x)+gf_{abc}A_{b\mu}(x)A_{c\nu}(x), \end{equation} where $g$ is the gauge field coupling constant and $f_{abc}$ are the structure constants of SU(N). In order to deal with the gauge invariance of the action for the vector fields in eq(\ref{GINVACT}) we follow the approach in reference \cite{ITD3} and impose the gauge condition \begin{equation} \Lambda^{\mu\nu}\partial_\mu A_{a\nu}(x)=0. \end{equation} Here $\Lambda^{\mu\nu}$ is a metric-like tensor which we will find it convenient to distinguish from $g^{\mu\nu}$ because the two tensors behave differently under the renormalisation procedure. We are therefore led to add a gauge fixing term to the action of the form \begin{equation} S_{gf}=\frac{1}{2}\int d^4x(\Lambda^{\mu\nu}\partial_\mu A_{a\nu}(x))^2. \label{GFACT} \end{equation} In addition and in contrast to the case of QED \cite{ITD3}, we must introduce anticommuting ghost fields $c_a(x)$ and ${\bar c}_a(x)$ in order to construct in the standard way the Faddeev--Popov determinant in the path integral formalism for the computation of Green's functions in the gauge theory. We therefore complete the action for the gauge theory by adding a term \begin{equation} S_{gh}=-\int d^4x {\bar c}_a(x)\Lambda^{\mu\nu}\partial_\mu D_{ab\nu}c_b(x). \label{GSTACT} \end{equation} Here $D_{ab\nu}={\delta}_{ab}\partial_\nu+gf_{abc}A_{c\nu}(x)$ is the gauge covariant derivative for the ghost fields. The complete action for the theory is $S$ where \begin{equation} S=S_{g}+S_{gf}+S_{gh}.
\label{TOTACT} \end{equation} \section{\label{FEYN} Feynman Rules} The Feynman rules for the theory can be read off from the action $S$ in the standard way. They are in certain respects analogous to the corresponding rules for BIMQED \cite{ITD3}. \subsection{\label{PROP} Feynman Propagator} The Feynman propagator, illustrated in Fig \ref{FIG1}(i), is \begin{equation} \Delta_{Fab\mu\nu}(q)=-i{\delta}_{ab}M_{\mu\nu}(q), \end{equation} where $M_{\mu\nu}(q)$ is the matrix inverse to $M^{\mu\nu}(q)$ and \begin{equation} M^{\mu\nu}(q)=(U^{\mu\alpha\nu\beta}+\Lambda^{\mu\alpha}\Lambda^{\nu\beta})q_{\alpha}q_{\beta}. \end{equation} More explicitly \begin{equation} M^{\mu\nu}(q)=q^2g^{\mu\nu}-q^\mu q^\nu+Q^\mu Q^\nu -C^{\mu\alpha\nu\beta}q_\alpha q_\beta. \end{equation} Here $Q^\mu=\Lambda^{\mu\alpha}q_{\alpha}$. It is easy to verify that when $C^{\mu\alpha\nu\beta}$ vanishes and $\Lambda^{\mu\nu}=g^{\mu\nu}$ this reduces to the standard Lorentz invariant form. Following the analysis for the photon propagator in reference \cite{ITD3}, we first introduce ${\cal M}^{\mu\nu}(q)$ which is the form taken by $M^{\mu\nu}(q)$ when indeed $\Lambda^{\mu\nu}$ is replaced by $g^{\mu\nu}$ that is \begin{equation} {\cal M}^{\mu\nu}(q)=q^2g^{\mu\nu}-C^{\mu\alpha\nu\beta}q_{\alpha}q_{\beta}. \end{equation} The inverse matrix is ${\cal M}_{\mu\nu}(q)$ (see reference \cite{ITD3}) and it can be used to construct $M_{\mu\nu}(q)$ in the form \begin{equation} M_{\mu\nu}(q)=\left({\delta}^\sigma_\mu-\frac{q_\mu Q^\sigma}{q.Q}\right){\cal M}_{\sigma\tau}(q) \left({\delta}^\tau_\nu-\frac{Q^\tau q_\nu}{q.Q}\right) +\frac{q_\mu q_\nu}{(q.Q)^2} \end{equation} A careful analysis \cite{ITD3} shows that the Feynman propagator has the same vector meson poles as ${\cal M}_{\mu\nu}(q)$ together with the ghost poles at $q.Q=\Lambda^{\mu\nu}q_\mu q_\nu=0$. It is straightforward to check that were we to set $\Lambda^{\mu\nu}=g^{\mu\nu}$ then we find \begin{equation} M_{\mu\nu}(q)={\cal M}_{\mu\nu}(q) \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{FIG1} \caption{Gluons (i) propagator, (ii) 3-vertex, (iii) 4-vertex. 4-Momenta inward.} \label{FIG1} \end{figure} \subsection{\label{VERTS} Gluon Vertices} The three-gluon vertex, Fig \ref{FIG1}(ii), is \begin{equation} V^{\mu\nu\sigma}_{abc}=-gf_{abc}(p_\rho U^{\rho\mu\nu\sigma}+q_\rho U^{\rho\nu\sigma\mu}+k_\rho U^{\rho\sigma\mu\nu}). \end{equation} For the four-gluon vertex Fig \ref{FIG1}(iii) we have \begin{equation} V^{\mu\nu\sigma\tau}=-ig^2(U^{\mu\nu\sigma\tau}f_{hab}f_{hcd}+U^{\mu\sigma\tau\nu}f_{hac}f_{hbd} +U^{\mu\tau\nu\sigma}f_{had}f_{hbc}). \end{equation} Of course momentum conservation is enforced at each vertex. Again it is easy to verify that in the absence of LSV these vertices reduce to standard form ({\it see for example} \cite{PESK}). \subsection{\label{GHOST} Ghost Propagator and Vertex} The Feynman propagator for the ghost fields, Fig \ref{FIG2}(i), is \begin{equation} \Delta^{(gh)}_{ab}(p)=i\frac{{\delta}_{ab}}{P.p}, \end{equation} where $P^\mu=\Lambda^{\mu\nu}p_{\nu}$ and $P.p=P^\mu p_\mu=\Lambda^{\mu\nu}p_\mu p_{\nu}$. Momentum follows the ghost direction. From this it is obvious that the mass-shell condition for the ghosts is $P.p=\Lambda^{\mu\nu}p_\mu p_{\nu}=0$. The vertex coupling the ghosts to the vector field is indicated in Fig \ref{FIG2}(ii) and has the form \begin{equation} V^{(gh)\mu}_{abc}=-gf_{abc}\Lambda^{\mu\nu}p_\nu. 
\end{equation} \begin{figure}[t] \centering \includegraphics[width=0.4\linewidth]{FIG2} \caption{The ghost propagator is indicated in (i) and the vertex for coupling to gluons in (ii).} \label{FIG2} \end{figure} \section{\label{QUARK} Quark Field} \begin{figure}[t] \centering \includegraphics[width=0.4\linewidth]{FIG3} \caption{The quark propagator is indicated in (i) and the quark gluon coupling in (ii).} \label{FIG3} \end{figure} The model can be extended by including one or more spinor fields each transforming under $SU(N)$. For simplicity we will consider the case of one such field. Modifications can be added later. The action for this field is $S_{qu}$ where \begin{equation} S_{qu}=\int d^4x{\bar \psi}(x)(i{\bar \gg}^\mu D_\mu[A]-m)\psi(x). \end{equation} Here \begin{equation} {\bar \gg}^\mu=\gamma^a{\bar e}^\mu_{~~a}, \end{equation} where ${\bar e}^\mu_{~~a}$ is the vierbein associated with the spinor field and $\{\gamma^a\}$ are the standard Dirac $\gamma$-matrices. The metric associated with the spinor field is ${\bar g}^{\mu\nu}=\eta^{ab}{\bar e}^\mu_{~~a}{\bar e}^\nu_{~~b}$. This is a second source of LSV in the model. Of course $D_\mu[A]$ is the gauge covariant derivative appropriate to the spinor field. The Feynman propagator for the spinor field is indicated in Fig \ref{FIG3}(i) \begin{equation} S_F(p)=\frac{i}{{\bar \gg}^\mu p_\mu-m}, \end{equation} and the coupling to the gauge field is indicated in Fig \ref{FIG3}(ii) \begin{equation} {\cal V}^\mu_a=ig{\bar \gg}^\mu t_a. \end{equation} Here $t_a$ is the $SU(N)$ generator appropriate to the representation of the spinor field (see reference \cite{PESK}). We use dimensional regularisation \cite{tHV} in order to deal with the ultraviolet divergences of the theory. The Feynman diagrams for the perturbation series for the Greens functions of the theory are evaluated in $n$-dimensions. All the parameters of the theory, coupling constant, mass, metrics and WLT are subject to corrections involving UV divergences. We denote the bare quantities with an extra suffix $0$. Each bare parameter is expanded in terms of a renormalised coupling constant $g$. For example \begin{equation} g\rightarrow g_0=\mu^{(4-n)/2}g(1+\sum_{k=1}^{\infty}g^{(k)}g^{2k}). \label{RENORM1} \end{equation} Here $\mu$ is the scale associated with the renormalised coupling $g$ \cite{tHV}. The coefficients $g^{(k)}$ depend on the dimension $n$ and exhibit poles of various orders at $n=4$. For example $g^{(1)}$ has a simple pole at $n=4$. The other parameters are similarly replaced by bare versions that are expanded in powers of the renormalised coupling $g$. \begin{eqnarray} m_0&=&m(1+\sum_{k=1}^\infty b^{(k)}g^{2k})\nonumber\\ g_0^{\mu\nu}&=&g^{\mu\nu}+\sum_{k=1}^\infty g^{(k)\mu\nu}g^{2k}\nonumber\\ {\bar e}^\mu_{0~a}&=&{\bar e}^\mu_{~~a}+\sum_{k=1}^\infty {\bar e}^{(k)\mu}_{~~~~~a}g^{2k}\nonumber\\ C_0^{\mu\nu\sigma\tau}&=&C^{\mu\nu\sigma\tau}+\sum_{k=1}^\infty C^{(k)\mu\nu\sigma\tau}g^{2k}\nonumber\\ \Lambda_0^{\mu\nu}&=&\Lambda^{\mu\nu}+\sum_{k=1}^\infty \Lambda^{(k)\mu\nu}g^{2k} \label{RENORM2} \end{eqnarray} Again the coefficients in the various expansions exhibit poles at $n=4$. Note that we are free to assume that $\det g_0^{\mu\nu}=\det g^{\mu\nu}=-1$. This implies that \begin{equation} g_{\mu\nu}g^{(1)\mu\nu}=0. 
\label{RENORM3} \end{equation} \section{\label{LOOP1} Perturbative Calculations at One Loop} \subsection{\label{GPROP} Renormalisation of Gauge Field Propagator} The vacuum polarisation tensor $\Sigma^{\mu\nu}_{ab}(q)$ determines the renormalisation properties of the (inverse) gluon propagator $\Delta^{\mu\nu}_{ab}(q)$. The (one-loop) diagrams that contribute to $i\Sigma^{\mu\nu}_{ab}(q)$ are shown in Fig \ref{FIG4} (see reference \cite{PESK}). We have, to $O(g^2)$ \begin{equation} \Delta^{\mu\nu}_{0ab}(q)=\Delta^{\mu\nu}_{F0ab}(q)+i\Sigma^{\mu\nu}_{ab}(q). \label{RENORM4} \end{equation} Here the inverse Feynman propagator is expressed in terms of the bare parameters, \begin{equation} \Delta^{\mu\nu}_{F0ab}(q)=-i{\delta}_{ab}\left\{(g_0^{\mu\nu}g_0^{\alpha\beta}-g_0^{\mu\beta}g_0^{\nu\alpha}+\Lambda_0^{\mu\beta}\Lambda_0^{\nu\alpha} -C_0^{\mu\alpha\nu\beta})q_\alpha q_\beta\right\}. \label{RENORM5} \end{equation} In principle the contributions to the vacuum polarisation are also computed from the appropriate Feynman diagrams using the bare parameters. However our calculation will be of $O(g^2)$ so we need only use the lowest order expansions in eqs(\ref{RENORM2}). This amounts to using the renormalised parameters in the vertices and propagators when computing $i\Sigma^{\mu\nu}_{ab}(q)$. In addition the UV divergences of $i\Sigma^{\mu\nu}_{ab}(q)$ occur only in the lowest terms in its Taylor expansion in $q_\alpha$. On general grounds then we can exhibit the UV divergences to $O(g^2)$ by writing \begin{equation} i\Sigma^{\mu\nu}_{ab}(q)=i{\delta}_{ab}\frac{g^2}{n-4}W^{\mu\alpha\nu\beta}q_\alpha q_\beta+O(q^4), \label{RENORM6} \end{equation} where the $O(q^4)$ terms are UV-finite. The tensor $W^{\mu\alpha\nu\beta}$ has the same symmetry properties as the Riemann tensor. Hence, in a standard fashion, it can be expressed in the form \begin{equation} W^{\mu\alpha\nu\beta}=\left\{\frac{1}{12}W(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\nu\alpha}) +\frac{1}{2}(V^{\mu\nu}g^{\alpha\beta}+g^{\mu\nu}V^{\alpha\beta}-V^{\mu\beta}g^{\nu\alpha}-g^{\mu\beta}V^{\nu\alpha}) -V^{\mu\alpha\nu\beta}\right\}, \label{RENORM7} \end{equation} where \begin{eqnarray} W&=&W^{\alpha\beta}g_{\alpha\beta},\nonumber\\ W^{\alpha\beta}&=&W^{\mu\alpha\nu\beta}g_{\mu\nu},\nonumber\\ V^{\alpha\beta}&=&W^{\alpha\beta}-\frac{1}{4}Wg^{\alpha\beta}. \label{WEYL1} \end{eqnarray} We have also \begin{eqnarray} V^{\alpha\beta}g_{\alpha\beta}&=&0,\nonumber\\ V^{\mu\alpha\nu\beta}g_{\alpha\beta}&=&0. \label{WEYL2} \end{eqnarray} It follows that $V^{\mu\alpha\nu\beta}$, having the appropriate symmetries and trace properties, is a WLT. In equations (\ref{WEYL1}) and (\ref{WEYL2}) we have used the 4-D decomposition which is adequate when computing the residues of the pole at $n=4$. The type of Lorentz symmetry breaking exhibited by the model can be specified by means of the Petrov classification of Weyl tensors. We will return to this point later.
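The decomposition in eqs(\ref{RENORM7}) and (\ref{WEYL1}) is purely algebraic and can be checked numerically. The following Python sketch is an illustration only: the test tensor is random and carries just the Riemann index symmetries, and the metric is taken to be the standard diagonal form.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric; numerically its own inverse

# Random test tensor with the index symmetries of W^{mu alpha nu beta}
A = rng.normal(size=(4, 4, 4, 4))
W4 = A - A.transpose(1, 0, 2, 3)       # antisymmetry in the first pair
W4 = W4 - W4.transpose(0, 1, 3, 2)     # antisymmetry in the second pair
W4 = W4 + W4.transpose(2, 3, 0, 1)     # symmetry under pair exchange

Ricci = np.einsum('mn,manb->ab', g, W4)       # W^{alpha beta}
Wsc = np.einsum('ab,ab->', g, Ricci)          # scalar W
V2 = Ricci - 0.25 * Wsc * g                   # traceless V^{alpha beta}

trace_part = Wsc / 12.0 * (np.einsum('mn,ab->manb', g, g)
                           - np.einsum('mb,na->manb', g, g))
ricci_part = 0.5 * (np.einsum('mn,ab->manb', V2, g)
                    + np.einsum('mn,ab->manb', g, V2)
                    - np.einsum('mb,na->manb', V2, g)
                    - np.einsum('mb,na->manb', g, V2))
V4 = trace_part + ricci_part - W4             # Weyl-like part of eq (RENORM7)

# The Weyl-like part is traceless (compare eq (WEYL2))
print(np.abs(np.einsum('mn,manb->ab', g, V4)).max())
\end{verbatim}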
From equations (\ref{RENORM1}), (\ref{RENORM2}) and (\ref{RENORM5}) we see that \begin{equation} \Delta_{F0ab}^{\mu\nu}(q)=\Delta_{Fab}^{\mu\nu}(q)+{\delta}\Delta_{Fab}^{\mu\nu}(q), \label{RENORM8} \end{equation} where $\Delta_{Fab}^{\mu\nu}(q)$ is obtained from $\Delta_{F0ab}^{\mu\nu}(q)$ by replacing the bare parameters with their renormalised versions and \begin{eqnarray} {\delta}\Delta_{Fab}^{\mu\nu}(q)&=&-{\delta}_{ab}ig^2\{g^{\alpha\beta}g^{(1)\mu\nu}+g^{(1)\alpha\beta}g^{\mu\nu} -g^{\mu\alpha}g^{(1)\nu\beta}-g^{(1)\mu\alpha}g^{\nu\beta}\nonumber\\ && ~~~~~~~~~~~~ +\Lambda^{\alpha\beta}\Lambda^{(1)\mu\nu}+\Lambda^{(1)\alpha\beta}\Lambda^{\mu\nu} -\Lambda^{\mu\alpha}\Lambda^{(1)\nu\beta}-\Lambda^{(1)\mu\alpha}\Lambda^{\nu\beta}\nonumber\\ && ~~~~~~~~~~~~ -C^{(1)\mu\alpha\nu\beta}\}q_\alpha q_\beta. \label{RENORM9} \end{eqnarray} The renormalisation parameters are fixed by requiring that the renormalisation of $\Delta_{0ab}^{\mu\nu}(q)$ reduces to an overall multiplicative factor, that is \begin{equation} \Delta_{0ab}^{\mu\nu}(q)=\left(1-\frac{1}{12}\frac{g^2}{n-4}W\right)\left(\Delta_{Fab}^{\mu\nu}(q)+O(q^4)\right). \label{RENORM10} \end{equation} This is achieved by requiring that \begin{eqnarray} g^{(1)\mu\nu}&=&\frac{1}{2}\frac{1}{n-4}V^{\mu\nu},\nonumber\\ \Lambda^{(1)\mu\nu}&=&\frac{1}{24}\frac{1}{n-4}W\Lambda^{\mu\nu},\nonumber\\ C^{(1)\mu\alpha\nu\beta}&=&\frac{1}{n-4}\left(V^{\mu\alpha\nu\beta}-C^{\mu\alpha\nu\beta}\right). \label{RENORM11} \end{eqnarray} Note that the results in equations (\ref{RENORM11}) do imply that $g^{(1)\mu\nu}g_{\mu\nu}=0$. Furthermore the renormalisation of $\Lambda_0^{\mu\nu}$ is multiplicative and involves the field renormalisation factor $Z^{1/2}$ where \begin{equation} Z^{1/2}=1+\frac{1}{24}\frac{g^2}{n-4}W. \label{RENORM12} \end{equation} This is not necessarily true of the metric renormalisation. For this reason it is conceptually convenient to distinguish the bare metric from the bare ghost metric. However at the level of one loop in perturbation theory it is possible and convenient to allow the two metrics to coincide. Higher order calculations may require the maintenance of the distinction, imposed order by order, between the two metrics. \subsection{\label{QPROP} Renormalisation of Quark Propagator} The renormalisation of the quark propagator $iS_0(p)$ proceeds along similar lines. We have \begin{equation} S_0^{-1}(p)=S_{F0}^{-1}(p)+\Sigma(p), \label{RENORM13} \end{equation} where \begin{equation} S_{F0}^{-1}(p)={\bar \gg}_0^\mu p_\mu+m_0, \label{RENORM14} \end{equation} \begin{equation} {\bar \gg}_0^\mu=\gamma^a {\bar e}_{0~a}^\mu \label{RENORM15} \end{equation} and $\gamma^a$ are standard Dirac $\gamma$-matrices. The quark propagator is implicitly a unit operator in $SU(N)$ space. The self-energy amplitude $i\Sigma(p)$ can be calculated (at one loop) from the diagram in Fig \ref{FIG5} using the Feynman rules with renormalised parameters. The UV divergences can be exposed in the Taylor expansion \begin{equation} \Sigma(p)=\Sigma(0)+\Sigma^\mu(0)p_\mu+O(p^2), \label{RENORM16} \end{equation} where the contribution $O(p^2)$ is finite at $n=4$, and we have \begin{equation} \Sigma(0)=m\frac{g^2\sigma}{n-4}, \label{RENORM17a} \end{equation} and \begin{equation} \Sigma^\mu(0)=\frac{g^2}{n-4}H^\mu_{~~\nu}{\bar \gg}^\nu. \label{RENORM17} \end{equation} We write \begin{equation} H^\mu_{~~\nu}=\frac{1}{n}H{\delta}^\mu_\nu+h^\mu_{~\nu}, \label{RENORM18} \end{equation} where $h^\mu_{~~\mu}=0$.
The term $H$ determines the quark propagator renormalisation $Z_q$, while $h^\mu_{~~\nu}$ fixes the counter-term in ${\bar e}^\mu_{0~a}$. More explicitly we have, taking account only of the poles at $n=4$, \begin{equation} Z_q=1-\frac{1}{4}\frac{g^2}{n-4}H. \label{RENORM19} \end{equation} \begin{equation} g^2b^{(1)}=\frac{g^2}{n-4}\sigma+\frac{1}{4}\frac{g^2}{n-4}H, \label{RENORM20} \end{equation} and \begin{equation} g^2{\bar e}^{(1)\mu}_{~~~~b}{\bar e}^b_{~~\nu}+\frac{g^2}{n-4}h^\mu_{~~\nu}=0. \label{RENORM21} \end{equation} That is \begin{equation} g^2{\bar e}^{(1)\mu}_{~~~~a}+\frac{g^2}{n-4}h^\mu_{~~\nu}{\bar e}^\nu_{~~a}=0, \label{RENORM22} \end{equation} implying that, to $O(g^2)$ \begin{equation} {\bar g}^{\mu\nu}_0={\bar g}^{\mu\nu}-\frac{g^2}{n-4}(h^\mu_{~~\rho}{\bar g}^{\rho\nu}+h^\nu_{~~\rho}{\bar g}^{\rho\mu}). \label{RENORM23} \end{equation} \subsection{\label{COUP} Renormalisation of Coupling Constant} The coupling constant renormalisation is most easily followed by considering the quark-gluon vertex. The relevant diagrams are shown in Fig \ref{FIG6} and yield contributions to the truncated three-point function that render it finite after appropriate field renormalisations. We can anticipate the nature of the divergences by examining the bare vertex ${\cal V}^\mu_{0a}=ig_0{\bar \gg}_0^\mu t_a$. From eqs (\ref{RENORM1}) and (\ref{RENORM2}) we have \begin{equation} {\cal V}^\mu_{0a}=ig_0{\bar \gg}_0^\mu t_a =i\mu^{(4-n)/2}g(1+g^2g^{(1)})({\bar e}^{\mu}_{~~b}+g^2{\bar e}^{(1)\mu}_{~~~~b})\gamma^b t_a. \label{RENORM24} \end{equation} That is \begin{equation} {\cal V}^\mu_{0a}=i\mu^{(4-n)/2}g\left(\left(1+g^2g^{(1)}\right){\delta}^\mu_\nu-\frac{g^2}{n-4}h^\mu_{~~\nu}\right){\bar \gg}^\nu t_a. \label{RENORM25} \end{equation} The point we make here is that the renormalisation of the quark vierbein enters into the vertex calculation and the contribution ${\cal V}^\mu_a(p,p')$ to the three point function from the diagrams in Fig \ref{FIG6} must be consistent with this. We can expect then that ${\cal V}^\mu_a(p,p')$ will (at zero external momenta) have the form \begin{equation} {\cal V}^\mu_a(0,0)=i\mu^{(4-n)/2}g\left(\frac{g^2}{n-4}K{\delta}^\mu_\nu+\frac{g^2}{n-4}h^\mu_{~~\nu}\right){\bar \gg}^\nu t_a. \label{RENORM26} \end{equation} This will be verified in particular calculations. The {\it truncated} three point function ${\cal V}^{(3)\mu}_a(p,p')$ will, at zero external momentum, satisfy \begin{equation} {\cal V}^{(3)\mu}_a(0,0)=i\mu^{(4-n)/2}g\left(1+g^2g^{(1)}+\frac{g^2}{n-4}K\right){\bar \gg}^\mu t_a. \label{RENORM27} \end{equation} Finally $g^{(1)}$ is determined by requiring that the right side of this equation is rendered finite by extracting the field renormalisation factors $Z^{-1}Z_q^{-1/2}$, implying \begin{equation} g^2g^{(1)}=\frac{g^2}{n-4}\left(\frac{1}{4}H-\frac{1}{24}W-K\right). \label{RENORM28} \end{equation} We will look at this in more detail when evaluating the vertex in the special case of LSV we consider below. \section{\label{RENGRP} Renormalisation Group} From the results in section \ref{LOOP1} we can obtain the renormalisation group equations for the renormalised parameters to lowest non-trivial order in the coupling constant $g$. These are derived from the requirement that the bare parameters are independent of the renormalisation scale $\mu$. For the coupling constant we have \begin{equation} \mu\frac{\partial}{\partial\mu}g_0=0.
\label{RENGRP1} \end{equation} From eqs(\ref{RENORM1}) and (\ref{RENORM28}) we then obtain the renormalisation group $\beta$-function \begin{equation} \beta(\mu)=\mu\frac{\partial}{\partial\mu}g=-(2-n/2)g-g^3\left(\frac{1}{4}H-\frac{1}{24}W-K\right). \label{RENGRP2} \end{equation} Note that in deriving eq(\ref{RENGRP2}) from eq(\ref{RENGRP1}) we have ignored derivatives of $H$, $W$ and $K$ since they are of $O(g^2)$ and may, and indeed must, be ignored at one loop. Of course the first term on the right of eq(\ref{RENGRP2}) vanishes in four dimensions. Following the same principles we obtain for the gluon metric \begin{equation} \mu\frac{\partial}{\partial\mu}g^{\alpha\beta}=-\frac{1}{2}g^2V^{\alpha\beta}, \label{RENGRP3} \end{equation} and \begin{equation} \mu\frac{\partial}{\partial\mu}{\bar g}^{\alpha\beta}=g^2(h^\alpha_{~~\rho}{\bar g}^{\rho\beta}+h^\beta_{~~\rho}{\bar g}^{\rho\alpha}). \label{RENGRP4} \end{equation} We obtain also for the renormalised mass \begin{equation} \mu\frac{\partial m}{\partial\mu}=-mg^2(\sigma+\frac{1}{4}H). \label{RENGRP5} \end{equation} The renormalisation scheme set out here can be applied to LSV associated with a WLT of any of the Petrov classes. However even at one loop the calculations are rather complex. Partly then for reasons of simplicity we restrict attention in this paper to models in Petrov class O. The results nevertheless remain interesting. \section{\label{PETROV0} Renormalisation for Petrov Class O} As explained above, in the case of Petrov class O, the LSV is due entirely to the difference between the light-cone associated with the gluons and that associated with the quark field. That is, we are assuming that the tensor $C^{\mu\nu\sigma\tau}$ vanishes. This is possible if there is a reference frame in which rotational invariance holds simultaneously for both metrics. Given the symmetry properties of $C^{\mu\nu\sigma\tau}$ it is consistent with this rotational invariance only if it vanishes. Similar remarks apply to situations where the preserved subgroup of the Lorentz group, in a suitable frame, leaves invariant either a purely space-like or light-like vector. We will concentrate on the rotationally invariant case. We have for the gluon metric \begin{equation} g^{\mu\nu}=\left(\begin{array}{cccc} \alpha&0&0&0\\ 0&-\beta&0&0\\ 0&0&-\beta&0\\ 0&0&0&-\beta \end{array}\right), \end{equation} and for the quark metric \begin{equation}{\bar g}^{\mu\nu}=\left(\begin{array}{cccc} {\bar \aa}&0&0&0\\ 0&-{\bar \bb}&0&0\\ 0&0&-{\bar \bb}&0\\ 0&0&0&-{\bar \bb} \end{array}\right). \end{equation} We assume that \begin{equation} \alpha\beta^3={\bar \aa}{\bar \bb}^3=1, \end{equation} so that $\det g^{\mu\nu}=\det {\bar g}^{\mu\nu}=-1$. We have similar forms for the bare metrics $g_0^{\mu\nu}$ and ${\bar g}_0^{\mu\nu}$ in terms of the bare parameters ($\alpha_0,\beta_0$) and (${\bar \aa}_0,{\bar \bb}_0$) which have appropriate expansions in powers of the renormalised coupling. That is \begin{equation} \alpha_0=\alpha+\frac{g^2}{n-4}\alpha^{(1)}+\ldots, \end{equation} together with similar expansions for the other bare parameters. It is convenient to relate the two metrics by setting ${\bar \aa}=a\alpha$ and ${\bar \bb}=b\beta$ with the consequence that $ab^3=1$. Similar remarks hold in an obvious way for bare parameters $a_0$ and $b_0$. The significance then of $b$ is that in a coordinate frame in which the gluon metric is diagonal with entries $(1,-1,-1,-1)$ the quarks have a lightcone associated with a velocity $c_q=b^2$.
At appropriately high energies we would expect that when $b>1$ the quarks travel faster than the gluons and slower when $b<1$. \subsection{\label{VACPOL0} Vacuum Polarisation for Petrov Class O} In order to carry out the renormalisation process we assume the renormalised gauge metric satisfies (to one loop order) $\Lambda^{\mu\nu}=g^{\mu\nu}$. The vanishing of $C^{\mu\nu\sigma\tau}$ then yields for the renormalised gluon propagator \begin{equation} M_{\mu\nu}(q)=\frac{g_{\mu\nu}}{q^2}. \end{equation} The vertex factors, as mentioned above, acquire standard form for the gauge theory without (at this stage) any LSV. We can therefore use the discussion of non-abelian gauge theories in reference \cite{PESK} to evaluate the contributions from the diagrams in Fig \ref{FIG4}(i), Fig \ref{FIG4}(ii) and Fig \ref{FIG4}(iv) to obtain the gauge field contribution to the UV divergence at one loop as \begin{equation} i\Sigma^{(g)\mu\nu}_{ab}(q)=-\frac{10}{3}C_2(G)\frac{ig^2}{(4\pi)^2}{\delta}_{ab} (g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\alpha\nu})q_\alpha q_\beta\frac{1}{n-4}, \end{equation} where $C_2(G)$ is the value of the quadratic Casimir operator in the adjoint representation of $SU(N)$. As explained in reference \cite{PESK} the diagram in Fig \ref{FIG4}(iv) does not contribute to this pole residue. In a similar way, we can use the results in reference \cite{PESK} to evaluate the quark contribution to the vacuum polarisation from Fig \ref{FIG4}(iii) provided we use ${\bar g}^{\mu\nu}$ as the appropriate metric. The result is \begin{equation} i\Sigma^{(q)\mu\nu}_{ab}(q)=\frac{4}{3}\frac{ig^2}{(4\pi)^2}{\delta}_{ab}({\bar g}^{\mu\nu}{\bar g}^{\alpha\beta}-{\bar g}^{\mu\beta}{\bar g}^{\alpha\nu})q_\alpha q_\beta \frac{1}{n-4}. \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{FIG4} \caption{Contributions to vacuum polarisation from (i) gluons, (ii) ghosts, (iii) quarks and (iv) gluon loop.} \label{FIG4} \end{figure} From eq(\ref{WEYL1}) we see that the contribution to the tensor $W^{\mu\alpha\nu\beta}$ from the gluon field term is \begin{equation} W^{(g)\mu\alpha\nu\beta}=-\frac{1}{(4\pi)^2}\frac{10}{3}C_2(G)(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\alpha}g^{\beta\nu}), \end{equation} and \begin{equation} W^{(g)}=-\frac{40}{(4\pi)^2}C_2(G). \end{equation} It is immediately obvious that $V^{(g)\mu\nu}$ vanishes, as does $V^{(g)\mu\alpha\nu\beta}$. From the quark field term we have \begin{equation} W^{(q)\mu\alpha\nu\beta}=\frac{1}{(4\pi)^2}\frac{4}{3}({\bar g}^{\mu\nu}{\bar g}^{\alpha\beta}-{\bar g}^{\mu\beta}{\bar g}^{\alpha\nu}). \end{equation} We have then \begin{equation} W^{(q)}=\frac{8}{(4\pi)^2}\frac{{\bar \bb}}{\beta}\left(\frac{{\bar \aa}}{\alpha}+\frac{{\bar \bb}}{\beta}\right) =\frac{8}{(4\pi)^2}b(a+b), \label{PERT1} \end{equation} and \begin{equation} V^{(q)\alpha\beta}=\frac{2}{(4\pi)^2}b(a-b)\left(\begin{array}{cccc} \alpha&0&0&0\\ 0&\beta/3&0&0\\ 0&0&\beta/3&0\\ 0&0&0&\beta/3 \end{array}\right). \label{PERT2} \end{equation} It follows that \begin{equation} W=W^{(g)}+W^{(q)}=\frac{8}{(4\pi)^2}(b(a+b)-5C_2(G)). \label{PERT3} \end{equation} Of course \begin{equation} V^{\mu\nu}=V^{(q)\mu\nu}.
\label{PERT4} \end{equation} Finally then from eqs(\ref{RENORM11}) and (\ref{PERT4}) we have \begin{equation} \alpha^{(1)}=\frac{1}{(4\pi)^2}\frac{1}{n-4}b(a-b)\alpha=\frac{1}{(4\pi)^2}\frac{1}{n-4}\left(\frac{1}{b^2}-b^2\right)\alpha, \end{equation} that is \begin{equation} \alpha_0=\alpha\left(1+\frac{g^2}{(4\pi)^2}\frac{1}{n-4}\left(\frac{1}{b^2}-b^2\right)\right), \end{equation} and \begin{equation} \beta_0=\beta\left(1-\frac{1}{3}\frac{g^2}{(4\pi)^2}\frac{1}{n-4}\left(\frac{1}{b^2}-b^2\right)\right). \label{PERT5} \end{equation} In the limit of Lorentz invariance $a=b=1$ and we have $\alpha_0=\alpha$ and $\beta_0=\beta$. The field renormalisation factor is \begin{equation} Z^{1/2}=1+\frac{1}{3}\frac{g^2}{(4\pi)^2}\frac{1}{n-4}\left(\left(\frac{1}{b^2}+b^2\right)-5C_2(G)\right). \end{equation} \subsection{\label{QSELFE0} Quark Self-Energy for Petrov Class O} The results for the calculations in the previous section, because each term involves only a single metric, can be read off from standard results (see reference \cite{PESK}). However the one loop self-energy correction to the quark propagator is obtained from the Feynman rules applied to the diagram in Fig \ref{FIG5}. Both metrics are involved. A possible conflict with causality might be anticipated. This possibility has been discussed previously and the conclusion is that in the present case there is no difficulty, as is confirmed directly in the calculations \cite{ITD2,ITD3,SCHR,KLINK1,KLINK2}. We have then \begin{figure} \centering \includegraphics[width=0.3\linewidth]{FIG5} \caption{Quark self energy} \label{FIG5} \end{figure} \begin{equation} i\Sigma(p)=-g^2C_2(N)\int\frac{d^nk}{(2\pi)^n}{\bar \gg}^\mu\frac{1}{{\bar \gg}^\alpha(p+k)_\alpha-m}{\bar \gg}^\nu\frac{g_{\mu\nu}}{k^2}. \end{equation} We require \begin{equation} i\Sigma(0)=-g^2C_2(N)\int\frac{d^nk}{(2\pi)^n}{\bar \gg}^\mu\frac{1}{{\bar \gg}^\alpha k_\alpha-m}{\bar \gg}^\nu\frac{g_{\mu\nu}}{k^2}, \end{equation} and \begin{equation} i\frac{\partial}{\partial p_\lambda}\Sigma(p)|_{p=0}=i\Sigma^\lambda(0)=g^2C_2(N)\int\frac{d^nk}{(2\pi)^n}{\bar \gg}^\mu\frac{1}{{\bar \gg}^\alpha k_\alpha-m} {\bar \gg}^\lambda\frac{1}{{\bar \gg}^\beta k_\beta-m}{\bar \gg}^\nu\frac{g_{\mu\nu}}{k^2}. \end{equation} The evaluation of these terms is along the same lines as the corresponding calculation for QED in \cite{ITD3}. We find for the UV poles at $n=4$, \begin{equation} i\Sigma(0)=im\frac{4g^2}{(4\pi)^2}C_2(N)\frac{1}{n-4}\frac{a+3b}{\sqrt{b}(\sqrt{a}+\sqrt{b})}, \end{equation} with the result \begin{equation} \sigma=\frac{4}{(4\pi)^2}C_2(N)\frac{1+3b^4}{b^2(1+b^2)}. \label{RENORM29} \end{equation} We have also \begin{equation} i\Sigma^0(0)=i\frac{4g^2}{(4\pi)^2}C_2(N)\frac{1}{n-4}\frac{a-3b}{(\sqrt{a}+\sqrt{b})^2}{\bar \gg}^0, \end{equation} \begin{equation} i\Sigma^j(0)=-i\frac{4}{3}\frac{g^2}{(4\pi)^2}C_2(N)\frac{1}{n-4} \frac{(a+b)(2\sqrt{a}+\sqrt{b})}{\sqrt{b}(\sqrt{a}+\sqrt{b})^2}{\bar \gg}^j. \end{equation} Using $a=b^{-3}$ we find for the mass renormalisation \begin{equation} b^{(1)}=-\frac{4}{(4\pi)^2}C_2(N)\frac{1}{n-4}\frac{1+3b^4}{b^2(1+b^2)}. \end{equation} We find also \begin{equation} H=-\frac{4}{(4\pi)^2}C_2(N)\frac{2(1-b^2+2b^4)}{b^2(1+b^2)}, \label{RENORM30} \end{equation} and \begin{equation} h^\lambda_{~~\rho}=\frac{4}{(4\pi)^2}C_2(N)\frac{(1-b^2)(1+3b^2+4b^4)}{2b^2(1+b^2)^2}{\cal T}^\lambda_{~~\rho}. \end{equation} Here ${\cal T}^\lambda_{~~\rho}$ is a traceless diagonal matrix with entries $(1,-1/3,-1/3,-1/3)$. 
It then follows that \begin{equation} {\bar \bb}_0={\bar \bb}\left(1+\frac{1}{3}\frac{4g^2}{(4\pi)^2}C_2(N)\frac{1}{n-4}\frac{(1-b^2)(1+3b^2+4b^4)}{b^2(1+b^2)^2}\right). \end{equation} Combining this with eq(\ref{PERT5}) we obtain \begin{equation} b_0=b\left(1+\frac{1}{3}\frac{g^2}{(4\pi)^2}\frac{1}{n-4}\frac{(1-b^2)}{b^2}\left[(1+b^2)+4C_2(N)\frac{(1+3b^2+4b^4)}{(1+b^2)^2}\right]\right). \end{equation} \subsection{\label{COUPREN0} Coupling Constant Renormalisation for Petrov Class O} \begin{figure}[t] \centering \includegraphics[width=0.4\linewidth]{FIG6} \caption{Quark gluon coupling diagram} \label{FIG6} \end{figure} The one loop diagrams in Fig \ref{FIG6} yield the coupling constant renormalisation. For the computation of the UV pole divergence it is sufficient to calculate the vertex with $p=p'=0$. From the first diagram we obtain \begin{equation} {\cal V}^\lambda_a(0,0)=g^3t_bt_at_bg_{\mu\nu}I^{\mu\lambda\nu}, \end{equation} where \begin{equation} I^{\mu\lambda\nu}=\int \frac{d^nk}{(2\pi)^n}{\bar \gg}^\mu\frac{1}{{\bar \gg}^\alpha k_\alpha-m} {\bar \gg}^\lambda\frac{1}{{\bar \gg}^\beta k_\beta-m}{\bar \gg}^\nu\frac{1}{k^2}. \end{equation} On omitting terms that do not contribute to the UV divergence we have \begin{equation} I^{\mu\lambda\nu}={\bar \gg}^\mu{\bar \gg}^\alpha{\bar \gg}^\lambda{\bar \gg}^\beta{\bar \gg}^\nu T_{\alpha\beta}, \end{equation} where \begin{equation} T_{\alpha\beta}=\int\frac{d^nk}{(2\pi)^n}\frac{k_\alpha k_\beta}{({\bar g}^{\alpha'\beta'}k_{\alpha'}k_{\beta'}-m^2)^2k^2}. \label{CPRN1} \end{equation} The result using $t_bt_at_b=(C_2(N)-C_2(G)/2)t_a$ and $ab^3=1$ is \begin{equation} {\cal V}^0_a(0,0)=4\frac{ig^3}{(4\pi)^2}\frac{1}{n-4}(C_2(N)-C_2(G)/2)\frac{1-3b^4}{(1+b^2)^2}t_a{\bar \gg}^0. \end{equation} We have also \begin{equation} {\cal V}^j_a(0,0)=-\frac{4}{3}\frac{ig^3}{(4\pi)^2}\frac{1}{n-4}(C_2(N)-C_2(G)/2) \frac{(1+b^4)(2+b^2)}{b^2(1+b^2)^2}t_a{\bar \gg}^j. \end{equation} In the case of the second diagram we note that in the limit of zero external momenta the internal three-gluon vertex reduces to \begin{equation} V^{\lambda\mu\nu}_{abc}=-gf_{abc}k_\rho(U^{\rho\nu\lambda\mu}-U^{\rho\mu\nu\lambda}) =-gf_{abc}k_\rho(2g^{\rho\lambda}g^{\nu\mu}-g^{\rho\mu}g^{\nu\lambda}-g^{\rho\nu}g^{\mu\lambda}). \end{equation} The contribution to the vertex becomes \begin{equation} {\cal V}^\lambda_a(0,0)=-\frac{1}{2}g^3t_a{\bar \gg}^{\nu'}{\bar \gg}^\alpha{\bar \gg}^{\mu'}g_{\mu\mu'}g_{\nu\nu'} (2g^{\rho\lambda}g^{\mu\nu}-g^{\rho\mu}g^{\nu\lambda}-g^{\rho\nu}g^{\mu\lambda}){\tilde{T}}_{\alpha\rho}, \end{equation} where \begin{equation} {\tilde{T}}_{\alpha\rho}=\int\frac{d^nk}{(2\pi)^n}\frac{k_\alpha k_\rho}{({\bar g}^{\alpha'\beta'}k_{\alpha'}k_{\beta'}-m^2)(k^2)^2}. \end{equation} We have again omitted terms that do not contribute to the UV pole at $n=4$. The tensor ${\tilde{T}}_{\alpha\rho}$ is closely related to $T_{\alpha\rho}$ in eq(\ref{CPRN1}). Finally we have the contributions \begin{equation} {\cal V}^0_a(0,0)=-4\frac{ig^3}{(4\pi)^2}C_2(G)\frac{1}{n-4}\frac{b^2(1+2b^2)}{(1+b^2)^2}t_a{\bar \gg}^0, \end{equation} and \begin{equation} {\cal V}^j_a(0,0)=-\frac{4}{3}\frac{ig^3}{(4\pi)^2}C_2(G)\frac{1}{n-4}\frac{1+2b^2+4b^4+2b^6 }{b^2(1+b^2)^2}t_a{\bar \gg}^j. \end{equation} Combining the two sets of results we obtain \begin{equation} {\cal V}^0_a(0,0)=4\frac{ig^3}{(4\pi)^2}\frac{1}{n-4}t_a{\bar \gg}^0\left(C_2(N)\frac{1-3b^4}{(1+b^2)^2}-\frac{1}{2}C_2(G)\right).
\end{equation} and \begin{equation} {\cal V}^j_a(0,0)=-\frac{4}{3}\frac{ig^3}{(4\pi)^2}\frac{1}{n-4}t_a{\bar \gg}^j\left(C_2(N)\frac{(1+b^4)(2+b^2)}{b^2(1+b^2)^2}+\frac{3}{2}C_2(G)\right). \end{equation} This leads to \begin{equation} {\cal V}^\lambda_a(0,0)=4\frac{ig^3}{(4\pi)^2}\frac{1}{n-4}t_aR^\lambda_{~~\rho}{\bar \gg}^\rho, \end{equation} where \begin{equation} R^\lambda_{~~\rho}=-\frac{1}{2}\left(C_2(N)\frac{1-b^2+2b^4}{b^2(1+b^2)}+C_2(G)\right){\delta}^\lambda_\rho +\frac{1}{2}C_2(N)\frac{(1-b^2)(1+3b^2+4b^4)}{b^2(1+b^2)^2}{\cal T}^\lambda_{~~\rho}. \end{equation} As expected from eq(\ref{RENORM25}), the second contribution to $R^\lambda_{~~\rho}$ yields the correct term $\propto h^\lambda_{~~\rho}$. The bare coupling and the one loop correction yield \begin{equation} {\cal V}^\lambda_{0a}(0,0)+{\cal V}^\lambda_a(0,0)= i\mu^{(4-n)/2}g\left(1+g^{(1)}g^2-\frac{4g^2}{(4\pi)^2}\frac{1}{n-4}\frac{1}{2}\left(C_2(N) \frac{1-b^2+2b^4}{b^2(1+b^2)}+C_2(G)\right)\right)t_a{\bar \gg}^\lambda . \label{CPRN2} \end{equation} Thus from eq(\ref{RENORM28}) we find that \begin{equation} K=-\frac{2}{(4\pi)^2}\left(C_2(N)\frac{1-b^2+2b^4}{b^2(1+b^2)}+C_2(G)\right). \end{equation} The remaining UV divergences, as shown in eq(\ref{RENORM28}), are removed by appropriate field renormalisation factors and then finally by the pole in the coupling constant expansion. We have then from eq(\ref{CPRN2}) the result for the $\beta$-function for the renormalised coupling \begin{equation} \beta(g)=-(2-n/2)g-\frac{g^3}{(4\pi)^2}\left(\frac{11}{3}C_2(G)-\frac{1}{3}\left(\frac{1}{b^2}+b^2\right)\right). \end{equation} Obviously, the first term vanishes in four dimensions. The second term reduces to the standard answer when $b=1$ and there is no Lorentz symmetry breaking. \subsection{\label{RENGRP0} Renormalisation Group for Petrov class O} In four dimensions then, the important renormalisation group equations are for $g$ and $b$. They take the form \begin{equation} \mu\frac{\partial g}{\partial\mu}=-\frac{g^3}{(4\pi)^2}\left(\frac{11}{3}C_2(G)-\frac{1}{3}\left(\frac{1}{b^2}+b^2\right)\right), \label{RG4D_1} \end{equation} and \begin{equation} \mu\frac{\partial b}{\partial\mu}=-\frac{1}{3}\frac{g^2}{(4\pi)^2}\frac{1-b^2}{b^2}\left(1+b^2+4C_2(N)\frac{1+3b^2+4b^4}{(1+b^2)^2}\right). \label{RG4D_2} \end{equation} In general the two variables influence one another as they evolve along the RG trajectory. However, some points in $(g,b)$-space are particularly significant. The Lorentz invariant situation $b=1$ is stable and is maintained under the RG. The coupling constant $g$ then runs to zero, its fixed point, in the standard way as $\mu$ rises to infinity. The rate at which $g$ drops is the result of a competition between the contributions of the gauge field and the quark field to the vacuum polarisation. When we explore values of $b\ne 1$ we see that the effect of the quark field is enhanced, with the result that $\beta(g)$ vanishes when $b$ satisfies \begin{equation} 11C_2(G)-\frac{1}{b^2}-b^2=0. \label{MINgi_1} \end{equation} That is \begin{equation} b=b_{\pm}=\left(\frac{1}{2}(R\pm\sqrt{R^2-4})\right)^{1/2}, \label{MINg_2} \end{equation} where $R=11C_2(G)$. On these two lines $\partial b/\partial\mu$ remains non-vanishing. The RG trajectories cross the lines, and at the crossing point the coupling constant attains a minimum value. It increases again as the scaling energy $\mu$ continues to increase.
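As a quick numerical check of eq(\ref{MINg_2}) (a sketch of our own, not part of the original calculation), the crossing values $b_{\pm}$ can be evaluated directly; note that $b_+b_-=1$ exactly, since $b_+^2b_-^2=\frac{1}{4}\left(R^2-(R^2-4)\right)=1$.
\begin{verbatim}
import math

# Evaluate b_pm of eq. (MINg_2) for a given R; illustrative sketch only.
# For a single quark R = 11*C_2(G), e.g. R = 33 for SU(3).
def b_pm(R):
    s = math.sqrt(R * R - 4.0)
    return math.sqrt((R + s) / 2.0), math.sqrt((R - s) / 2.0)

bp, bm = b_pm(33.0)       # SU(3) with a single quark
print(bp, bm, bp * bm)    # -> 5.7419... 0.1742... 1.0
\end{verbatim}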
If we modify the model so that it contains $n_f$ quarks, all sharing the same metric, then eq(\ref{RG4D_1}) becomes \begin{equation} \mu\frac{\partial g}{\partial\mu}=-\frac{g^3}{(4\pi)^2}\left(\frac{11}{3}C_2(G)-\frac{n_f}{3}\left(\frac{1}{b^2}+b^2\right)\right), \label{RG4D_3} \end{equation} while eq(\ref{RG4D_2}) remains unchanged. The minimum of $g$ occurs when \begin{equation} 11C_2(G)-n_f\left(\frac{1}{b^2}+b^2\right)=0, \label{MINg_3} \end{equation} that is, when $b=b_{\pm}$ where now $R=11C_2(G)/n_f$. The RG equation for the quark mass is obtained from eq(\ref{RENGRP5}) and eq(\ref{RENORM29}). It is \begin{equation} \mu\frac{\partial m}{\partial\mu}=-2m\frac{g^2}{(4\pi)^2}C_2(N)\frac{1+b^2+4b^4}{b^2(1+b^2)}. \label{RG4D4} \end{equation} \section{\label{DISCUSSION} Discussion} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{FIG7} \caption{ The RG-trajectories for $SU(3)$ with $n_f=6$ starting at $\alpha_S=0.1$. The initial values of $b$ are (i) 1.025, (ii) 1.0125, (iii) 1.00625, (iv) 0.99375. The horizontal line is at $b=b_+=2.3047$.} \label{FIG7} \end{figure} The behaviour of the RG trajectory described above shows that a frustration of asymptotic freedom can arise in the presence of LSV, at least in this model. The question arises as to whether it might be observable experimentally. There are several issues to be considered, for example the relationship of the lab frame to the gluon frame used in the above discussion. We set this matter provisionally aside, though it must ultimately be resolved, and assume that the lab frame is travelling slowly relative to the gluon frame. More significant is the size of the energy range implicit in the model. We introduce an initial energy $M_I$ and associated LSV parameter $b_I$, $QCD$ coupling $\alpha_I=\alpha_S(M_I)$ and the energy $M_{\hbox{min}}$ at which the coupling reaches its minimum value. In a gesture towards ``reality'' we consider the case of $SU(3)$ gauge theory, where $C_2(G)=3$ and $C_2(3)=4/3$ \cite{PESK}, with $n_f=6$ quarks. In this case $b_+=2.3047$ and $b_-=0.4339$. The light cone of the quarks in the gluon frame at minimum coupling is $c_q=b_+^2=5.311$ for LSV with $b>1$ and $c_q=b_-^2=0.1882$ for LSV with $b<1$. These values for $c_q$ represent rather severe LSV. In order that our asymptotic calculation be relevant we must assume $M_I$ is sufficiently large and in particular is greater than the top quark mass, that is, $173~GeV$. In this energy regime there is no easy way to relate our calculation to low energy determinations of $\alpha_S(\mu)$. The story of the running coupling and its relationship to low energy phenomena and $\Lambda_{QCD}$ is complicated. It is comprehensively reviewed in \cite{DBT}. An important point is the subtraction scheme used to obtain finite results for physical quantities. We are using the minimal subtraction ($MS$) scheme \cite{tHV}. More widely used is the $\overline{MS}$ scheme introduced in \cite{BBDM} to improve convergence. In that scheme the strong coupling has an evaluation $\alpha_S(M_Z\simeq 90~GeV)\simeq 0.12$. On the grounds that our calculation is exploratory we feel justified in neglecting the difference in subtraction schemes and propose $\alpha_S(M_I)=0.1$ when $M_I=10^2$--$10^3~GeV$. The qualitative nature of the results is not altered by (relatively) small changes in our initial conditions.
With these initial conditions, the results for examples of the renormalisation trajectory, obtained by numerical integration (2nd order Runge-Kutta) of eq(\ref{RG4D_3}) and eq(\ref{RG4D_2}), are shown in Fig \ref{FIG7}. Obviously, the closer the initial value $b_I$ is to unity, the closer the renormalisation group trajectory stays near the Lorentz symmetry line and the later it breaks away, heading for its minimum value. These results are illustrated in Fig \ref{FIG8}, which shows the connection between $\log_{10}(M_{\hbox{min}}/M_I)$ and $b_I$. The smooth curve is obtained by fitting the rightmost point on the plot. Even for this implausibly high value $b_I=1.1$ at our initial energy scale $M_I=10^2~GeV$ we still find $M_{\hbox{min}}\simeq 10^{25}M_I$. Tuning $b_I$ down to potentially more realistic values results in yet greater disparities in the orders of magnitude of $M_{\hbox{min}}$ and experimentally attainable values for $M_I$. The conclusion must therefore be that for $QCD$ with the known set of quarks there is little hope of observing the frustration of asymptotic freedom in accelerator experiments. However the complex asymptotic behaviour that we encounter in this model may have relevance to very high energy processes at very early times in the initiating big bang of the universe. In view of the fact that the energy range associated with frustration of asymptotic freedom appears to lie well above the Planck mass ($M_P \simeq 10^{19} GeV$) where gravitational effects must become important, one might question its physical relevance. However it may also be possible, and would certainly be interesting, to relate the behaviour of $\alpha_S(M)$ when $M\simeq M_P$ to models of quantum gravity constructed with appropriate running LSV parameters \cite{EICH1,EICH2}. These considerations do not preclude the possibility of discovering LSV effects in an energy range for which $b$ remains close to unity. For example, if we set $b=1+x$ and assume $x$ is small, then to lowest order in $x$ eq(\ref{RG4D_3}) and eq(\ref{RG4D_2}) become \begin{equation} \mu\frac{\partial g}{\partial\mu}=-A\frac{g^3}{(4\pi)^2}, \end{equation} \begin{equation} \mu\frac{\partial x}{\partial\mu}=B\frac{g^2}{(4\pi)^2}x, \end{equation} where $A=(11C_2(G)-2n_f)/3$ and $B=4(1+4C_2(N))/3$. In this approximation \begin{equation} \alpha_S(E)=\alpha_I\left(1+7\frac{\alpha_I}{4\pi}\log\frac{E}{M_I}\right)^{-1}, \label{DISC} \end{equation} and the RG trajectories have the form \begin{equation} b=1+x_I (\alpha_S/\alpha_I)^{-\kappa}, \label{WFL} \end{equation} where $x_I$ is the initial value of $x$ and $\kappa=B/2A$. Eq(\ref{WFL}) exhibits the instability at the fixed point $(\alpha_S,b)=(0,1)$. Depending on the sign selected for $x_I$, $b$ will either rise or fall from unity as $\alpha_S$ approaches zero. With our choice of parameters we have $A=7$ and $B=8.4444$, with the result $\kappa=0.6031$. The dependence of $x$ on $\alpha_S$ is therefore relatively weak. When $\alpha_S$ decreases by an order of magnitude, $x$ only increases by a factor of roughly 4. A similar approximation for the renormalised mass $m$ yields \begin{equation} \frac{m}{m_I}=\left(\frac{\alpha_S}{\alpha_I}\right)^{\tau}, \end{equation} where $\tau=0.762$. In the asymptotic energy range then, the renormalised mass reduces, also relatively slowly, with a power of the renormalised coupling.
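For concreteness, the integration just described can be sketched in a few lines (our own illustration, not the code used to produce Figs \ref{FIG7} and \ref{FIG8}; the step size, initial values and integration range are arbitrary choices). It integrates eq(\ref{RG4D_3}) and eq(\ref{RG4D_2}) in the variable $t=\log(\mu/M_I)$ with a second order (midpoint) Runge-Kutta step and locates the minimum of $\alpha_S$:
\begin{verbatim}
import numpy as np

C2G, C2N, NF = 3.0, 4.0/3.0, 6            # SU(3) with n_f = 6 quarks

def rhs(y):
    # Right-hand sides of eq. (RG4D_3) and eq. (RG4D_2) in t = log(mu/M_I).
    g, b = y
    dg = -g**3/(4*np.pi)**2 * (11*C2G/3 - NF/3*(1/b**2 + b**2))
    db = -(g**2/(4*np.pi)**2)/3 * (1 - b**2)/b**2 \
         * (1 + b**2 + 4*C2N*(1 + 3*b**2 + 4*b**4)/(1 + b**2)**2)
    return np.array([dg, db])

def integrate(alpha_I=0.1, b_I=1.1, t_max=80.0, h=0.01):
    # Midpoint (2nd order Runge-Kutta) integration of the RG trajectory.
    y = np.array([np.sqrt(4*np.pi*alpha_I), b_I])
    ts, ys = [0.0], [y.copy()]
    while ts[-1] < t_max:
        y = y + h*rhs(y + 0.5*h*rhs(y))
        ts.append(ts[-1] + h)
        ys.append(y.copy())
    ys = np.array(ys)
    return np.array(ts), ys[:, 0]**2/(4*np.pi), ys[:, 1]

t, alpha_s, b = integrate()
i = alpha_s.argmin()
print("alpha_S min = %.4f at log10(M/M_I) = %.1f, b = %.3f"
      % (alpha_s[i], t[i]/np.log(10), b[i]))
\end{verbatim}
With $b_I=1.1$ this should reproduce the estimate $M_{\hbox{min}}\simeq 10^{25}M_I$ quoted above, with $b$ passing through $b_+=2.3047$ at the minimum.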
If effective methods were developed for computing the structure and scattering of high energy particles in the model (see references \cite{KOST5,LEHN2} for related discussions in QED and the Standard Model Extension), then it could possibly provide guidance for accelerator experiments and cosmic ray detectors investigating LSV phenomena in a high energy regime of $PeV$ and beyond. For example, if we take (intuitively) the quark metric, diag$(a,-b,-b,-b)$, as determining the dispersion relation for quark based states, it would become, for a particle with mass $m$, energy $E$ and momentum $p$, \begin{equation} aE^2-bp^2=m^2c_q^4. \end{equation} Combining the above results we find for the velocity of quark based particles \begin{equation} v=\frac{d E}{d p}=\frac{p}{E}\left(1+Dx_I-F\frac{m_I^2}{E^2}\right), \label{DISC2} \end{equation} where \begin{equation} D=4\left(\frac{\alpha_S}{\alpha_I}\right)^{-\kappa}+\frac{14\kappa\alpha_I}{4\pi}\left(\frac{\alpha_S}{\alpha_I}\right)^{1-\kappa} \end{equation} and \begin{equation} F=\frac{7\alpha_I}{4\pi}\left(\frac{\alpha_S}{\alpha_I}\right)^{2\tau+1}\left(\tau+7(\tau-\kappa)x_I\left(\frac{\alpha_S}{\alpha_I}\right)^{-\kappa}\right). \end{equation} The point here is that the coefficients in the dispersion relation depend only on $\alpha_S/\alpha_I$ and therefore vary only logarithmically with the energy $E$. Omitting from eq(\ref{DISC2}) all terms $O(\alpha_I)$ that decrease logarithmically, we are left with the simple result \begin{equation} v\simeq\frac{p}{E}\left(1+4x_I\left(\frac{\alpha_S}{\alpha_I}\right)^{-\kappa}\right). \label{DISC3} \end{equation} This is qualitatively different from LSV originating in higher derivative contributions to the QCD Lagrangian \cite{KOST7,KOST6} or spacetime foam models \cite{ELLIS1,ELLIS2}. These are parametrised by large mass scales and suggest power-law increases with energy. In our case eq(\ref{DISC3}) suggests a slow logarithmic increase that we might expect to be more difficult to detect. However, were LSV to be detected, the suggested energy dependence would distinguish this QCD model from such higher derivative models. \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{FIG8} \caption{ Here $M_I$ is the initial energy scale and $b_I$ is the corresponding LSV parameter, $M_{\hbox{min}}$ is the energy scale at which the running coupling attains a minimum. The results of the R-K integration are represented by crosses and compare well with the continuous curve $\log_{10}(M_{\hbox{min}}/M_I)=2.5/(b_I-1)$.} \label{FIG8} \end{figure} \section{\label{CONC} Conclusions} We have studied an $SU(N)$ QCD model with quarks in the fundamental representation and formulated the perturbation series to one loop with no restriction on the magnitude of the Lorentz symmetry breaking. In the particular case we studied, the LSV was due entirely to a mismatch between the lightcones of the quarks and gluons. This is a consistent possibility if the lightcones are generated by two metrics that are both invariant under the same subgroup of the Lorentz group, in fact a rotation group, that leaves invariant a 4-vector that is time-like in both metrics. Similar results can be obtained with space-like and light-like vectors. The renormalisation group equation for the coupling constant $\alpha_S$ and the LSV parameter $b$ was obtained, with the result, exhibited in Fig \ref{FIG7}, that initially $\alpha_S$ decreases with energy just as in the standard Lorentz symmetric case.
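To illustrate the size of the effect, eq(\ref{DISC}) and eq(\ref{DISC3}) can be combined into a short numerical estimate of the velocity shift (again a sketch of our own; the initial LSV parameter $x_I$ used below is a purely hypothetical value):
\begin{verbatim}
import numpy as np

alpha_I, kappa = 0.1, 0.6031

def alpha_s(E_over_MI):
    # Leading-log running coupling of eq. (DISC).
    return alpha_I/(1 + 7*alpha_I/(4*np.pi)*np.log(E_over_MI))

def v_shift(x_I, E_over_MI):
    # Fractional excess of v over p/E, from eq. (DISC3).
    return 4*x_I*(alpha_s(E_over_MI)/alpha_I)**(-kappa)

for E in (1e1, 1e3, 1e6):                 # E/M_I
    print("E/M_I = %g : dv/v ~ %.3g" % (E, v_shift(1e-6, E)))
\end{verbatim}
The output grows only logarithmically with $E$, in line with the discussion above.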
However, $b$ departs increasingly from unity with energy, and this enhances the contribution of the quark vacuum polarisation to the $\beta$-function for $\alpha_S$. The outcome is that at sufficiently high energy $\alpha_S$ ceases to decrease, reaches a minimum and then increases again with energy. This constitutes the frustration of asymptotic freedom in QCD with LSV of the kind we have investigated. We suggest plausible values for the energy range $E>M_I$ we are investigating and the associated initial value $\alpha_I$ for the strong coupling. The outcome is that the frustration part of the RG trajectory for $(\alpha_S,b)$ is at energies many orders of magnitude greater than is accessible to accelerator experiments. It is well above the Planck mass. However, it is possible that part of the RG trajectory lying near the Lorentz symmetry line $b=1$ might be attainable in accelerator or cosmic ray observations. The effect on the dispersion relation of particles enters through powers of $\alpha_S$, and hence is logarithmic in character; it represents a kind of intrinsic LSV rather than one parametrised by higher derivative contributions to the Lagrangian. There are many variations of the model that might be investigated, such as increasing the number of quarks, or varying the quark metrics in ways that induce more complex LSV associated with higher Petrov classes. Of course, one should also consider how these results relate to the full structure of the Standard Model and its extensions. Finally, it is worth noting that in the context of relatively weak LSV it may be possible to pursue a nonperturbative investigation of our model using the techniques of lattice QCD. \section*{Acknowledgements} This work has been partially supported by STFC consolidated grant ST/P000681/1. I am grateful to R. R. Horgan for discussions concerning the potential relevance of lattice field theory calculations to evaluating LSV in gauge theories.
\section{Introduction} The problem of quickest change detection (QCD) is of fundamental importance in a variety of applications and has been extensively studied in mathematical statistics (see, e.g., \cite{tartakovsky_sequential,tartakovsky_qcd2020,vvv_qcd_overview,xie_vvv_qcd_overview} for overviews). Given a sequence of observations whose distribution changes at some unknown change point, the goal is to detect the change in distribution as quickly as possible after it occurs, while not making too many false alarms. In the classical formulations of the QCD problem, it is assumed that observations are independent and identically distributed (i.i.d.) with known pre- and post-change distributions. In many practical situations, while it is reasonable to assume that we can accurately estimate the pre-change distribution, the post-change distribution is rarely completely known. Furthermore, in many cases, it is reasonable to assume that the system is in a steady state before the change point and produces i.i.d.\ observations, but in the post-change mode the observations may be substantially non-identically distributed, i.e., non-stationary. For example, in the pandemic monitoring problem, the distribution of the number of people infected daily might have achieved a steady (stationary) state before the start of a new wave, but after the onset of a new wave, the post-change observations may no longer be stationary. Indeed, during the early phase of the new wave, the mean of the post-change distribution grows approximately exponentially. We will address the pandemic monitoring problem in detail in Section~\ref{sec:num-res}. In this paper, our main focus is on the QCD problem with independent observations\footnote{The extension to the case of dependent observations is discussed in Section~\ref{sec:ext}.}, where the pre-change observations are assumed to be stationary with a known distribution, while the post-change observations are allowed to be non-stationary with some possible parametric uncertainty in their distribution. There have been extensions of the classical formulation to the case where the pre- and/or post-change distributions are not fully known and observations may be non-i.i.d., i.e., dependent and nonidentically distributed. For the i.i.d. case with parametric uncertainty in the post-change regime, Lorden~\cite{lorden1971} proposed a generalized likelihood ratio (GLR) Cumulative Sum (CuSum) procedure, and proved its asymptotic optimality in the minimax sense as the false alarm rate goes to zero, for one-parameter exponential families. An alternative to the GLR-CuSum, the mixture-based CuSum, was proposed and studied by Pollak \cite{pollakmixture} in the same setting as in \cite{lorden1971}. The GLR approach has been studied in detail for the problem of detecting the change in the mean of a Gaussian i.i.d. sequence with an unknown post-change mean by Siegmund~\cite{siegmund1995}. Both the mixture-based and GLR-CuSum procedures have been studied by Lai~\cite{lai1998} in the pointwise setting in the non-i.i.d. case of possibly dependent and non-identically distributed observations, with parametric uncertainty in the post-change regime. More specifically, Lai assumed that the log-likelihood ratio process (between pre- and post-change distributions) normalized by the number of observations $n$ converges to a positive and finite constant as $n \to\infty$, which can be interpreted as a Kullback-Leibler (KL) information number. 
In the case of independent (but non-identically distributed) observations, this means that the expected value of the log-likelihood ratio process grows approximately linearly in the number of observations $n$, for large $n$. Tartakovsky~\cite{TartakovskySISP98} and Tartakovsky et al.~\cite{tartakovsky_sequential} refer to such a case as ``asymptotically homogeneous'' (or stationary). Lai developed a universal lower bound on the worst-case expected delay as well as on the expected delay to detection for every change point, and proved that a specially designed window-limited (WL) CuSum procedure asymptotically achieves the lower bound as the maximal probability of false alarm approaches 0, when both pre- and post-change distributions are completely known, i.e., that the designed WL-CuSum is asymptotically pointwise optimal to first order. For the case where the post-change distribution has parametric uncertainty, Lai proposed and analyzed a WL-GLR-CuSum procedure. A general Bayesian theory for non-i.i.d. asymptotically stationary stochastic models has been developed by Tartakovsky and Veeravalli~\cite{TartakovskyVeerTVP05} and Tartakovsky~\cite{TartakovskyIEEEIT2017} for the discrete-time scenario, and by Baron and Tartakovsky~\cite{BaronTartakovskySA06} for the continuous-time scenario, when both pre- and post-change models are completely known. It was shown in these works that a Shiryaev-type change detection procedure minimizes not only the average detection delay but also higher moments of the detection delay asymptotically, as the weighted probability of false alarm goes to zero, under very general conditions on the prior distribution of the change point. Extensions of these results to the case of the parametric composite post-change hypothesis have been provided by Tartakovsky~\cite{TartakovskyIEEEIT2019,tartakovsky_qcd2020}, where it has been shown that the mixture Shiryaev-type detection rule is asymptotically first-order optimal in the Bayesian setup, and by Pergamenchtchikov and Tartakovsky~\cite{PerTar-JMVA2019}, where it was shown that the mixture Shiryaev-Roberts-type procedure is pointwise and minimax asymptotically optimal in the non-Bayesian setup. Note that all the cited works focus on the asymptotically stationary case. To the best of our knowledge, the asymptotically non-stationary case, where the expected value of the log-likelihood ratio process normalized by some nonlinear function $g(n)$ converges to a positive and finite (information) number, has never been considered.\footnote{It should be noted that such an asymptotically non-stationary case has been previously considered for sequential hypothesis testing problems by Tartakovsky~\cite{TartakovskySISP98} and Tartakovsky et al.~\cite{tartakovsky_sequential}.} Our contributions are as follows: \begin{enumerate} \item We develop a universal asymptotic (as the false alarm rate goes to zero) lower bound on the worst-case expected delay for our problem setting with non-stationary post-change observations. \item We develop a WL-CuSum procedure that asymptotically achieves the lower bound on the worst-case expected delay when the post-change distribution is fully known. \item We develop and analyze a WL-GLR-CuSum procedure that asymptotically achieves the worst-case expected delay when the post-change distribution has parametric uncertainty. \item We validate our analysis through numerical results and demonstrate the use of our approach in monitoring pandemics. \end{enumerate} The rest of the paper is structured as follows.
In Section~\ref{sec:info-bd}, we derive the information bounds and propose an asymptotically optimal WL-CuSum procedure when the post-change distribution is completely known. In Section~\ref{sec:unknown-param}, we propose an asymptotically optimal WL-GLR-CuSum procedure when the post-change distribution has unknown parameters. In Section~\ref{sec:ext}, we discuss possible extensions to the general non-i.i.d. case where the observations can be dependent and non-stationary. In Section~\ref{sec:num-res}, we present some numerical results, including results on monitoring pandemics. We conclude the paper in Section~\ref{sec:concl}. In the Appendix, we provide proofs of certain results. \section{Information Bounds and Optimal Detection} \label{sec:info-bd} Let $\{X_n\}_{n\ge 1}$ be a sequence of independent random variables (generally vectors), and let $\nu$ be a change point. Assume that $X_1, \dots, X_{\nu-1}$ all have density $p_0$ with respect to some non-degenerate, sigma-finite measure $\mu$ and that $X_\nu, X_{\nu+1}, \dots$ have densities $p_{1,\nu,\nu}, p_{1,\nu+1,\nu}, \ldots$, respectively, with respect to $\mu$. Note that the observations are allowed to be non-stationary after the change point, and the post-change distributions may generally depend on the change point. Let $({\cal F}_{n})_{n\ge 0}$ be the filtration, i.e., ${\cal F}_{0}=\{\Omega,\varnothing\}$ and ${\cal F}_{n}=\sigma\left\{X_{\ell}, 1\le \ell \le n \right\}$ is the sigma-algebra generated by the vector of $n$ observations $X_1,\dots,X_n$, and let ${\cal F}_\infty= \sigma(X_1,X_2, \dots)$. In what follows we denote by $\mathbb{P}_\nu$ the probability measure on the entire sequence of observations when the change point is $\nu$. That is, under $\mathbb{P}_\nu$ the random variables $X_1, \dots, X_{\nu-1}$ are i.i.d. with the common (pre-change) density $p_0$ and $X_\nu, X_{\nu+1}, \dots$ are independent with (post-change) densities $p_{1,\nu,\nu}, p_{1,\nu+1,\nu}, \ldots$ . Let $\mathbb{E}_\nu$ denote the corresponding expectation. For $\nu=\infty$ this distribution will be denoted by $\mathbb{P}_\infty$ and the corresponding expectation by $\mathbb{E}_\infty$. Evidently, under $\mathbb{P}_\infty$ the random variables $X_1,X_2,\dots$ are i.i.d. with density $p_0$. In the sequel, we denote by $\tau$ Markov (stopping) times with respect to the filtration $({\cal F}_{n})_{n\ge 0}$, i.e., the event $\{\tau=n\}$ belongs to ${\cal F}_n$. The change time $\nu$ is assumed to be unknown but deterministic. Let $\tau$ be a stopping time defined on the observation sequence associated with the detection rule, i.e., $\tau$ is the time at which we stop taking observations and declare that the change has occurred. The problem is to detect the change quickly, minimizing the delay to detection $\tau -\nu$, while not causing too many false alarms. \subsection{Classical Results under i.i.d. Model} \label{subsec:stat_postc} A special case of the model described above is where both the pre- and post-change observations are i.i.d., i.e., $p_{1,n,\nu} \equiv p_1$ for all $n \geq \nu \geq 1$.
In this case, Lorden \cite{lorden1971} proposed solving the following optimization problem to find the best stopping time $\tau$: \begin{equation} \label{prob_def} \inf_{\tau \in \mathcal{C}_\alpha} \WADD{\tau} \end{equation} where \begin{equation} \label{LordenADD} \WADD{\tau} := \sup_{\nu \geq 1} \esssup \E{\nu}{\left(\tau-\nu+1\right)^+|{\cal F}_{\nu-1}} \end{equation} characterizes the worst-case expected delay, and $\esssup$ stands for essential supremum. The constraint set is \begin{equation} \label{fa_constraint} \mathcal{C}_\alpha := \left\{ \tau: \FAR{\tau} \leq \alpha \right\} \end{equation} with \begin{equation} \FAR{\tau} := \frac{1}{ \E{\infty}{\tau}} \label{eq:FAR_def} \end{equation} which guarantees that the false alarm rate of the algorithm does not exceed $\alpha$. Recall that $\E{\infty}{\cdot}$ is the expectation operator when the change never happens, and we use the conventional notation $(\cdot)^+:=\max\{0,\cdot\}$ for the nonnegative part. The mean time to a false alarm (MTFA) $\E{\infty}{\tau}$ is sometimes referred to as the {\em average run length to false alarm}. Lorden also showed that Page's CuSum detection algorithm \cite{page1954}, whose detection statistic is given by \begin{equation} \label{cusum_stat} W(n) = \max_{1\leq k \leq n+1} \sum_{i=k}^n Z_i = \left(W(n-1) + Z_n \right)^+ , \end{equation} solves the problem in \eqref{prob_def} asymptotically as $\alpha \to 0$. Here, $Z_n$ is the log-likelihood ratio defined as: \begin{equation} \label{eq:lorden_llr} Z_n = \log \frac{p_1(X_n)}{p_0(X_n)}. \end{equation} The CuSum stopping rule is given by: \begin{equation} \label{defstoppingrule} \tau_{\text{Page}}\left(b\right) := \inf \{n: W(n)\geq b \}. \end{equation} In particular, if the threshold is set as $b = \abs{\log \alpha}$, then $\FAR{\tau_{\text{Page}}\left(b_\alpha\right)} \leq \alpha$ (see, e.g., \cite[Lemma~8.2.1]{tartakovsky_sequential}). It was shown by Moustakides \cite{moustakides1986} that the CuSum algorithm is exactly optimal for the problem in (\ref{prob_def}) if the threshold $b=b_\alpha$ is selected so that $\FAR{\tau_{\text{Page}}\left(b_\alpha\right)} = \alpha$. If the threshold $b_\alpha$ is selected in a special way that accounts for the overshoot of $W(n)$ over $b_\alpha$ at stopping, which guarantees the approximation $\FAR{\tau_{\text{Page}}\left(b_\alpha\right)} \sim \alpha$ as $\alpha\to0$, then we have the following third-order asymptotic approximation (as $\alpha\to 0$) for the worst-case expected detection delay of the optimal procedure: \begin{align*} \inf_{\tau \in \mathcal{C}_\alpha} \WADD{\tau} & = \WADD{\tau_{\text{Page}}(b_\alpha)} + o(1), \\ \WADD{\tau_{\text{Page}}(b_\alpha)} & = \frac{1}{{\KL{p_1}{p_0}}} (\abs{\log \alpha} - \mathsf{const} + o(1)) \end{align*} (see, e.g., \cite{tartakovsky_sequential}), which also implies the first-order asymptotic approximation (as $\alpha \to 0$): \begin{equation} \label{FOWADDPage} \inf_{\tau \in \mathcal{C}_\alpha} \WADD{\tau} \sim \WADD{\tau_{\text{Page}}\left(\abs{\log\alpha}\right)} = \frac{\abs{\log \alpha}}{\KL{p_1}{p_0}} (1+o(1)) \end{equation} where $Y_\alpha\sim G_\alpha$ means $Y_\alpha = G_\alpha (1+o(1))$. Here $\KL{p_1}{p_0}$ is the Kullback-Leibler (KL) divergence between $p_1$ and $p_0$. Also, in the following we use the standard notation $o(x)$ as $x\to x_0$ for a function $f(x)$ such that $f(x)/x \to 0$ as $x\to x_0$, i.e., $o(1) \to 0$ as $\alpha \to 0$, and $O(x)$ for a function $f(x)$ such that $f(x)/x$ is bounded as $x \to x_0$, i.e., $O(1)$ is a finite constant.
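For illustration, the recursion in \eqref{cusum_stat} and the stopping rule \eqref{defstoppingrule} can be implemented in a few lines. The following sketch (our own, not taken from the cited references) detects a unit shift in the mean of i.i.d. standard Gaussian observations, with the threshold $b=\abs{\log\alpha}$ that guarantees a false alarm rate of at most $\alpha$:
\begin{verbatim}
import numpy as np

def page_cusum(x, llr, b):
    # W(n) = max(0, W(n-1) + Z_n); stop at the first n with W(n) >= b.
    w = 0.0
    for n, xn in enumerate(x, start=1):
        w = max(0.0, w + llr(xn))
        if w >= b:
            return n
    return None                      # no alarm within the sample

# Gaussian mean shift mu_0 = 0 -> mu_1 = 1, unit variance:
# Z_n = (mu_1 - mu_0) X_n - (mu_1^2 - mu_0^2)/2 = X_n - 1/2.
rng = np.random.default_rng(0)
nu = 200                             # true change point
x = np.concatenate([rng.normal(0.0, 1.0, nu - 1),
                    rng.normal(1.0, 1.0, 300)])
alpha = 1e-3
print(page_cusum(x, lambda xn: xn - 0.5, b=abs(np.log(alpha))))
\end{verbatim}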
Along with Lorden's worst-case average detection delay $\WADD{\tau}$, defined in \eqref{LordenADD}, we can also consider the less pessimistic performance measure of Pollak \cite{PollakAS85}: \[ \SADD{\tau} := \sup_{\nu \geq 1} \E{\nu}{\tau-\nu+1| \tau \geq \nu}. \] Pollak suggested the following minimax optimization problem in class $\mathcal{C}_\alpha$: \begin{equation}\label{Pollak_problem} \inf_{\tau\in \mathcal{C}_\alpha} \SADD{\tau}. \end{equation} An alternative to CuSum is the Shiryaev-Roberts (SR) change detection procedure $\tau_{\text{SR}}$, based not on the maximization of the likelihood ratio over the unknown change point but on summation of likelihood ratios (i.e., on averaging over the uniform prior distribution). As shown in \cite{tartakovsky_polpolunch_2012}, the SR procedure is second-order asymptotically minimax with respect to Pollak's measure: \[ \inf_{\tau\in \mathcal{C}_\alpha} \SADD{\tau} = \SADD{\tau_{\text{SR}}} + O(1) \quad \text{as}~ \alpha\to 0. \] The CuSum procedure with a certain threshold $b_\alpha$ also has a second-order optimality property with respect to the risk $\SADD{\tau}$. A detailed numerical comparison of CuSum and SR procedures for i.i.d.\ models was performed in \cite{MoustPolTarCS09}. \subsection{Information Bounds for Non-stationary Post-Change Observations} \label{subsec:ext_ib} In the case where both the pre- and post-change observations are independent and the post-change observations are non-stationary, the log-likelihood ratio is: \begin{equation} \label{llr:def} Z_{n,k} = \log \frac{p_{1,n,k}(X_n)}{p_0(X_n)} \end{equation} where $n \geq k \geq 1$. Here $k$ is a hypothesized change point and $X_n$ is drawn from the true distribution $\mathbb{P}_\nu$ ($\nu \in [1,\infty)$ or $\nu =\infty$). In the classical i.i.d. model described in Section~\ref{subsec:stat_postc}, the cumulative KL-divergence after the change point increases linearly in the number of observations. We generalize this condition as follows. Let $g_\nu: \mathbb{R}^+ \to \mathbb{R}^+$ be an increasing and continuous function, which we will refer to as a \emph{growth function}. Note that the inverse of $g_\nu$, denoted by $g_\nu^{-1}$, exists and is also increasing and continuous. We assume that the expected sum of the log-likelihood ratios under $\mathbb{P}_\nu$, which corresponds to the cumulative Kullback-Leibler (KL) divergence for our non-stationary model, matches the value of the growth function at all positive integers, i.e., \begin{equation} \label{llr:growth} g_\nu(n) = \sum_{i=\nu}^{\nu+n-1} \E{\nu}{Z_{i,\nu}}, \quad \forall n \geq 1. \end{equation} Furthermore, we assume that $\E{\nu}{Z_{i,\nu}} > 0$ for all $i \geq \nu$, and that for each $x >0$ \begin{equation} g^{-1}(x) := \sup_{\nu \geq 1} g_\nu^{-1}(x) \end{equation} exists. Note that $g^{-1}$ is also increasing and continuous. A key assumption that we will need for our analysis is that $g^{-1}(x)$ satisfies \begin{equation} \label{llr:grow_cond} \log g^{-1}(x) = o(x) \quad \text{as}~ x \to \infty. \end{equation} We should note that such a growth function $g (n)$ has been adopted in sequential hypothesis testing with non-stationary observations \cite[Sec.~3.4]{tartakovsky_sequential}. In the special case where the post-change distribution is invariant to the change point $\nu$, i.e., for $j\geq 0$, $p_{1,\nu+j,\nu}$ is not a function of $\nu$, we have $g \equiv g_\nu$ and $g^{-1} \equiv g_\nu^{-1}$, for all $\nu \geq 1$. The proof of asymptotic optimality is performed in two steps.
First, we derive a first-order asymptotic (as $\alpha \to 0$) lower bound for the maximal expected detection delays $\inf_{\tau \in \mathcal{C}_{\alpha} }\WADD{\tau}$ and $\inf_{\tau \in \mathcal{C}_{\alpha} }\SADD{\tau}$. To this end, we need the following right-tail condition for the log-likelihood ratio process: \begin{equation} \label{llr:upper} \sup_{\nu \geq 1} \Prob{\nu}{\max_{t \leq n} \sum_{i=\nu}^{\nu+t-1} Z_{i,\nu} \geq (1+\delta) g_\nu(n)} \xrightarrow{n \to \infty} 0 \quad \forall \delta >0, \end{equation} assuming that for all $\nu \ge 1$ \[ \frac{\sum_{i=\nu}^{\nu+t-1} Z_{i,\nu}}{g_\nu(t)} \xrightarrow[t\to\infty]{\text{in} ~ \mathbb{P}_\nu\text{-probability}} 1. \] In the second step, we show that this lower bound is attained for the Window-Limited (WL) CuSum procedure under the following left-tail condition \begin{equation} \label{llr:lower} \sup_{t \geq \nu \geq 1} \Prob{\nu}{\sum_{i=t}^{t+n-1} Z_{i,t} \leq (1-\delta) g_\nu(n) } \xrightarrow{n \to \infty} 0 \quad \forall \delta \in (0, 1). \end{equation} The following lemma provides sufficient conditions under which conditions \eqref{llr:upper} and \eqref{llr:lower} hold for sequences of independent and non-stationary observations. Hereafter we use the notation $\Var{\nu}{Y}= \E{\nu}{Y^2} - \E{\nu}{Y}^2$ for the variance of the random variable $Y$ under the distribution $\mathbb{P}_\nu$. \begin{lemma} \label{llr:lemma} Consider the growth function $g_\nu(n)$ defined in \eqref{llr:growth}. Suppose that the sum of variances of the log-likelihood ratios satisfies \begin{equation} \label{llr:var} \sup_{t \geq \nu \geq 1} \frac{1}{g_\nu^2(n)} \sum_{i=t}^{t+n-1} \Var{\nu}{Z_{i,t}} \xrightarrow{n \to \infty} 0 . \end{equation} Then condition \eqref{llr:upper} holds. If, in addition, for all $\nu \geq 1$ and all positive integers $\Delta$, \begin{equation} \label{llr:tshift} \E{\nu}{Z_{i,\nu}} \leq \E{\nu}{Z_{i+\Delta,\nu+\Delta}}, \end{equation} then condition \eqref{llr:lower} holds. \end{lemma} The proof is given in the appendix. \begin{remark} One can generalize condition \eqref{llr:tshift} in such a way that either $\E{\nu}{Z_{i,\nu}} \leq \E{\nu}{Z_{i+\Delta,\nu+\Delta}}$ or \begin{equation*} \frac{1}{g_\nu(n)} \sum_{i=\nu}^{\nu+n-1} \left( \E{\nu}{Z_{i,\nu}} - \E{\nu}{Z_{i+\Delta,\nu+\Delta}}\right) = o(1) \end{equation*} holds for all positive integers $\Delta$. \end{remark} \begin{example} \label{gex} Consider the following Gaussian exponential mean-change detection problem. Denote by ${\cal N}(\mu_0,\sigma_0^2)$ the Gaussian distribution with mean $\mu_0$ and variance $\sigma_0^2$. Let $X_1,\dots,X_{\nu-1}$ be distributed as ${\cal N}(\mu_0,\sigma_0^2)$, and for all $n \geq \nu$, let $X_n$ be distributed as ${\cal N}(\mu_0 e^{\theta(n-\nu)},\sigma_0^2)$. Here $\theta$ is some positive fixed constant. The log-likelihood ratio is given by: \begin{align} \label{gex:llr} Z_{n,t} = \log \frac{p_{1,n,t}(X_n)}{p_0(X_n)} &= -\frac{(X_n-\mu_0 e^{\theta(n-t)})^2}{2 \sigma_0^2} + \frac{(X_n-\mu_0)^2}{2 \sigma_0^2} \nonumber\\ &= \frac{\mu_0}{\sigma_0^2} (e^{\theta(n-t)} - 1) X_n - \frac{\mu_0^2 (e^{2 \theta (n-t)}-1)}{2 \sigma_0^2}. \end{align} Now, the growth function can be calculated as \begin{equation} \label{gex:growth} g_\nu(n) = \sum_{i=\nu}^{\nu+n-1} \E{\nu}{Z_{i,\nu}} = \sum_{i=0}^{n-1} \frac{\mu_0^2}{2 \sigma_0^2} (e^{\theta i}-1)^2. \end{equation} Since the post-change distribution is invariant to the change point $\nu$, $g^{-1}(n) = g_1^{-1}(n) = O(\log n)$, and hence $\log g^{-1}(n) = o(n)$, which satisfies \eqref{llr:grow_cond}.
Also, the sum of variances of the log-likelihood ratios is \begin{equation*} \sum_{i=t}^{t+n-1} \Var{\nu}{Z_{i,t}} = \sum_{i=t}^{t+n-1} \frac{\mu_0^2}{\sigma_0^4} (e^{\theta(i-t)} - 1)^2 \Var{\nu}{X_i} = 2 g_\nu(n) = o(g_\nu^2(n)) \end{equation*} for all $t \geq \nu$, which establishes condition \eqref{llr:var}. Further, for any $i \geq \nu$ and $\Delta \geq 1$, \begin{align*} \E{\nu}{Z_{i+\Delta,\nu+\Delta}} &= \frac{\mu_0}{\sigma_0^2} (e^{\theta(i-\nu)} - 1) \E{\nu}{X_{i+\Delta}} - \frac{\mu_0^2 (e^{2 \theta (i-\nu)}-1)}{2 \sigma_0^2}\\ &\geq \frac{\mu_0}{\sigma_0^2} (e^{\theta(i-\nu)} - 1) \E{\nu}{X_{i}} - \frac{\mu_0^2 (e^{2 \theta (i-\nu)}-1)}{2 \sigma_0^2} = \E{\nu}{Z_{i,\nu}} \end{align*} which establishes condition \eqref{llr:tshift}. \qed \end{example} The following theorem gives a lower bound on the worst-case average detection delays as $\alpha \to 0$ in class $ \mathcal{C}_{\alpha} $. \begin{theorem} \label{llr:delay_lower_bound} For $\delta\in(0,1)$ let \begin{equation} \label{eq:thm1_h} h_\delta (\alpha) := g^{-1} ((1-\delta) |\log\alpha|). \end{equation} Suppose that $g^{-1}(x)$ satisfies \eqref{llr:grow_cond}. Then for all $\delta\in(0,1)$ and some $\nu \ge 1$ \begin{equation}\label{supProb0} \lim_{\alpha \to 0} \sup_{\tau\in \mathcal{C}_{\alpha} } \Prob{\nu}{\nu \leq \tau < \nu + h_\delta (\alpha) } = 0 \end{equation} and, as $\alpha \to 0$, \begin{equation} \label{LowerSADD} \inf_{\tau \in \mathcal{C}_{\alpha} } \WADD{\tau} \geq \inf_{\tau \in \mathcal{C}_{\alpha} } \SADD{\tau} \geq g^{-1}(\abs{\log{\alpha}}) (1+o(1)). \end{equation} \end{theorem} \begin{proof} Obviously, for any Markov time $\tau$, \[ \WADD{\tau} \geq \SADD{\tau} \ge \E{\nu}{(\tau -\nu)^+} . \] Therefore, to prove the asymptotic lower bound \eqref{LowerSADD} we have to show that, as $\alpha \to 0$, \begin{equation} \label{eq:thm1_main} \sup_{\nu \ge 1} \E{\nu}{(\tau -\nu)^+} \geq g^{-1} (|\log{\alpha}|) (1+o(1)), \end{equation} where the $o(1)$ term on the right-hand side does not depend on $\tau$, i.e., it is uniform in $\tau \in \mathcal{C}_{\alpha} $. To begin, let $\tau \in {\cal C}_\alpha$ be a stopping time and note that by Markov's inequality, \[ \E{\nu}{(\tau -\nu)^+} \ge h_\delta (\alpha) \Prob{\nu}{(\tau -\nu)^+ \ge h_\delta (\alpha)}. \] Hence, if assertion \eqref{supProb0} holds, then for some $\nu \ge 1$ \[ \inf_{\tau \in \mathcal{C}_{\alpha} } \Prob{\nu}{(\tau -\nu)^+ \ge h_\delta (\alpha)} = 1 -o(1) \quad \text{as}~ \alpha\to0. \] This implies the asymptotic inequality \begin{equation} \label{Exptauplus} \inf_{\tau\in \mathcal{C}_{\alpha} } \E{\nu}{(\tau -\nu)^+} \ge h_\delta (\alpha) (1+o(1)), \end{equation} which holds for an arbitrary $ \delta\in (0,1)$ and some $\nu$. Since by our assumption the function $ h_\delta (\alpha) $ is continuous, taking the limit $\delta \to 0$ and maximizing over $\nu\ge 1$ yields inequality \eqref{eq:thm1_main}. It remains to prove \eqref{supProb0}.
Changing the measure $\mathbb{P}_\infty \to \mathbb{P}_\nu$ and using Wald's likelihood ratio identity, we obtain the following chain of equalities and inequalities for any $C >0$ and $\delta\in (0,1)$: \begin{align*} & \Prob{\infty}{\nu \leq \tau < \nu + h_\delta (\alpha)} = \E{\nu}{\ind{0 \le \tau -\nu < h_\delta(\alpha)} \exp\left(- \sum_{i=\nu}^\tau Z_{i,\nu} \right)} \\ & \ge \E{\nu}{\ind{0 \le \tau -\nu < h_\delta(\alpha), \sum_{i=\nu}^\tau Z_{i,\nu} < C} \exp\left(- \sum_{i=\nu}^\tau Z_{i,\nu} \right) } \\ &\ge e^{-C} \Prob{\nu}{0 \le \tau -\nu < h_\delta(\alpha), \max_{0 \le n-\nu < h_\delta(\alpha) }\sum_{i=\nu}^n Z_{i,\nu} < C} \\ &\ge e^{-C} \left(\Prob{\nu}{0 \le \tau -\nu < h_\delta(\alpha)} - \Prob{\nu}{\max_{0 \le n < h_\delta(\alpha) }\sum_{i=\nu}^{\nu+n} Z_{i,\nu} \ge C}\right) \end{align*} where the last inequality follows from the fact that $\Pr({\cal A} \cap {\cal B}) \ge \Pr({\cal A}) - \Pr({\cal B}^c)$ for any events ${\cal A}$ and ${\cal B}$, where ${\cal B}^c$ is the complement of ${\cal B}$. Setting $C=g(h_\delta (\alpha)) (1+\delta) = (1-\delta^2) |\log\alpha|$ yields \begin{equation}\label{Probnuh} \Prob{\nu}{\nu \leq \tau < \nu + h_\delta (\alpha)} \le \kappa^{(\nu)}_{\delta,\alpha}(\tau) + \sup_{\nu \ge 1} \beta^{(\nu)}_{\delta,\alpha}, \end{equation} where \[ \kappa^{(\nu)}_{\delta,\alpha}(\tau) =e^{(1-\delta^2) |\log\alpha|} \Prob{\infty}{0 \le \tau -\nu < h_\delta(\alpha)} \] and \[ \beta^{(\nu)}_{\delta,\alpha}= \Prob{\nu}{\max_{0 \le n < h_\delta(\alpha) }\sum_{i=\nu}^{\nu+n} Z_{i,\nu} \ge (1+\delta) g(h_\delta (\alpha))}. \] Since $g(h_\delta (\alpha))\to \infty$ as $\alpha \to 0$, by condition \eqref{llr:upper}, \begin{equation}\label{subbetato0} \sup_{\nu \ge 1} \beta^{(\nu)}_{\delta,\alpha}\to 0. \end{equation} Next we turn to the evaluation of the term $\kappa^{(\nu)}_{\delta,\alpha}(\tau)$ for any stopping time $\tau \in {\cal C}_\alpha$. It follows from Lemma 2.1 in \cite[page 72]{tartakovsky_qcd2020} that for any $M < \alpha^{-1}$, there exists some $\ell \geq 1$ (possibly depending on $\alpha$) such that \begin{equation} \label{eq:thm1_lemma} \Prob{\infty}{\ell \leq \tau < \ell+M} \le \Prob{\infty}{\tau < \ell + M | \tau \geq \ell} < M \, \alpha, \end{equation} so for some $\nu \ge 1$, \[ \kappa^{(\nu)}_{\delta,\alpha}(\tau) \le M \alpha e^{(1-\delta^2) |\log\alpha|} = M\alpha^{\delta^2}. \] If we choose $M \le M_\alpha= \floor{h_\delta(\alpha)^2}\Big|_{\delta = 0} = \floor{(g^{-1}(|\log\alpha|))^2}$, then for all sufficiently small $\alpha$, \begin{equation*} \log M \leq 2 \log g^{-1}(|\log\alpha|) = o(|\log\alpha|), \end{equation*} where the last equality follows from condition \eqref{llr:grow_cond}. Furthermore, \begin{equation*} M_\alpha \, \alpha^p \to 0 \quad \text{as}~ \alpha\to 0 \end{equation*} for any $p > 0$. To see this, assume for the purpose of contradiction that $M_\alpha \alpha^{p_0} \to c_0$ as $\alpha \to 0$ for some $p_0 > 0$ and $c_0 > 0$. Then $\log M_\alpha = p_0 |\log \alpha| + \log c_0 + o(1)$, which contradicts $\log M_\alpha = o(|\log\alpha|)$. Hence, it follows that for some $\nu \ge 1$, which may depend on $\alpha$, as $\alpha \to 0$ \begin{equation}\label{kappainfto0} \inf_{\tau\in \mathcal{C}_{\alpha} } \kappa^{(\nu)}_{\delta,\alpha}(\tau) \le M_\alpha \alpha^{\delta^2} \to 0.
\end{equation} Combining \eqref{Probnuh}, \eqref{subbetato0}, and \eqref{kappainfto0} we obtain that for some $\nu\ge 1$ \[ \Prob{\nu}{\nu \leq \tau < \nu + h_\delta (\alpha)} \le M_\alpha \alpha^{\delta^2} + \sup_{\nu \ge 1} \beta^{(\nu)}_{\delta,\alpha} = o(1), \] where the $o(1)$ term is uniform in $\tau \in \mathcal{C}_{\alpha} $. This yields assertion \eqref{supProb0}, and the proof is complete. \end{proof} \subsection{Asymptotically Optimal Detection with Non-stationary Post-Change Observations with Known Distributions} \label{subsec:asym-opt-det} Recall that under the classical setting, Page's CuSum procedure (in \eqref{defstoppingrule}) is optimal and has the following structure: \begin{equation} \tau_{\text{Page}}\left(b\right) = \inf \left\{n: \max_{1\leq k \leq n+1} \sum_{i=k}^n Z_i \geq b \right\} \end{equation} where $Z_i$ is the log-likelihood ratio when the post-change distributions are stationary (defined in \eqref{eq:lorden_llr}). When the post-change distributions are potentially non-stationary, the CuSum stopping rule is defined similarly as: \begin{equation} \label{TC:def} \tau_C\left(b\right) := \inf \left\{n:\max_{1 \leq k \leq n+1} \sum_{i=k}^{n} Z_{i,k} \geq b \right\} \end{equation} where $Z_{i,k}$ represents the log-likelihood ratio between densities $p_{1,i,k}$ and $p_0$ for observation $X_i$ (defined in \eqref{llr:def}). Here $i$ is the time index and $k$ is the hypothesized change point. Note that if the post-change distributions are indeed stationary, i.e., $p_{1,i,k} \equiv p_1$, we would get $Z_{i,k} \equiv Z_i$ for all $k \leq i$, and thus $\tau_C \equiv \tau_{\text{Page}}$. As shown in \eqref{cusum_stat}, Page's classical CuSum algorithm admits a recursive way to compute its test statistic. Unfortunately, despite having independent observations, the test statistic in \eqref{TC:def} cannot be computed recursively, even for the special case where the post-change distribution is invariant to the change point as in Example~\ref{gex}. \begin{example} Consider the Gaussian Exponential Mean-Change problem defined in Example~\ref{gex}. Suppose $\mu_0 = \sigma_0^2 = \theta = 1$. Then, the log-likelihood ratio is given by \[ Z_{n,t} = (e^{n-t} - 1) X_n - \frac{e^{2 (n-t)} - 1}{2}. \] Note that $Z_{n,t}$ is a (linear) function of $X_n$. Consider the following realization: \[ X_1 = 1, \quad X_2 = 0, \quad X_3 = 10. \] It can be verified that \[ \arg \max_{1 \leq k \leq 3} \sum_{i=k}^{2} Z_{i,k} = 2,~\text{and}~\arg \max_{1 \leq k \leq 4} \sum_{i=k}^{3} Z_{i,k} = 1. \] Note that the maximizer $k^*$ moves backward in time in this case, in contrast to what happens when both the pre- and post-change observations follow i.i.d. models. The test statistic at time $n=2$ is a function of only $X_2$, and this is insufficient to construct the test statistic at time $n=3$, which is a function of $X_1$ in addition to $X_2$ and $X_3$. \qed \end{example} For computational tractability, we therefore consider a window-limited (WL) version of the CuSum procedure in \eqref{TC:def}: \begin{equation} \label{TC:test} \Tilde{\tau}_C\left(b\right) := \inf \left\{n:\max_{n-m \leq k \leq {n+1}} \sum_{i=k}^n Z_{i,k} =: W(n) \geq b \right\} \end{equation} where $m$ is the window size. For $n < m$ maximization is performed over $1\le k \le n$. In the asymptotic setting, $m=m_\alpha$ depends on $\alpha$ and should go to infinity as $\alpha \to 0$ at an appropriate rate.
Specifically, following a similar condition to the one that Lai~\cite{lai1998} used in the asymptotically stationary case, we shall require that $m_\alpha \to \infty$ as $\alpha \to 0$ in such a way that \begin{equation} \label{TC:m_alpha} \liminf_{\alpha \to 0} m_\alpha / g^{-1}(\abs{\log\alpha}) > 1. \end{equation} Since the range for the maximum is smaller in $\Tilde{\tau}_C(b)$ than in $\tau_C(b)$, given any realization of $X_1,X_2,\ldots$, if the test statistic of $\Tilde{\tau}_C(b)$ crosses the threshold $b$ at some time $n$, so does that of $\tau_C(b)$. Therefore, for any fixed threshold $b > 0$, \begin{equation} \label{eq:tautautil} \tau_C(b) \leq \Tilde{\tau}_C(b) \end{equation} almost surely. In the following, we first control the asymptotic false alarm rate of $\Tilde{\tau}_C(b)$ with an appropriately chosen threshold in Lemma~\ref{TC:fa}. Then, we obtain an asymptotic approximation of the expected detection delays of $\Tilde{\tau}_C(b)$ in Theorem \ref{TC:delay}. Finally, we combine these two results and provide an asymptotically optimal solution to the problem in \eqref{prob_def} in Theorem~\ref{TC:asymp_opt}. \begin{lemma} \label{TC:fa} Suppose that $b_\alpha = \abs{\log\alpha}$. Then \begin{equation} \label{FARCUSUMm} \FAR{\Tilde{\tau}_C(b_\alpha)} \leq \alpha \quad \text{for all} ~ \alpha \in (0,1), \end{equation} i.e., $\Tilde{\tau}_C(b_\alpha) \in \mathcal{C}_{\alpha} $. \end{lemma} \begin{proof} Define the statistic \[ R_n = \sum_{k=1}^n \exp\left(\sum_{i=k}^n Z_{i,k}\right), \quad R_0=0 \] and the corresponding stopping time $T_b:=\inf\{n: R_n \ge e^b\}$. We now show that $\E{\infty}{T_b} \ge e^b$, which implies that $\E{\infty}{\Tilde{\tau}_C(b)} \ge e^b$ for any $b>0$ since, evidently, $\Tilde{\tau}_C(b) \ge T_b$. Recall that ${\cal F}_n=\sigma(X_\ell, 1\le \ell \le n)$ denotes the sigma-algebra generated by $(X_1,\dots,X_n)$. Since $\E{\infty}{e^{Z_{n,k}}|{\cal F}_{n-1}}=1$, it is easy to see that \[ \E{\infty}{R_n | {\cal F}_{n-1}} = 1 + R_{n-1} \quad \text{for}~ n \ge 1. \] Consequently, the statistic $\{R_n-n\}_{n \ge 1}$ is a zero-mean $(\mathbb{P}_\infty,{\cal F}_n)$-martingale. It suffices to assume that $\E{\infty}{T_b}<\infty$ since otherwise the statement is trivial. Then, $\E{\infty}{R_{T_b}-T_b}$ exists and also \[ \liminf_{n\to\infty} \int_{\{T_b >n\}} |R_n - n| \mathrm{d} \mathbb{P}_\infty = 0 \] since $0 \le R_n < e^b$ on the event $\{T_b >n\}$. Hence, we can apply the optional sampling theorem (see, e.g., \cite[Th 2.3.1, page 31]{tartakovsky_sequential}), which yields $\E{\infty}{R_{T_b}} = \E{\infty}{T_b}$. Since $R_{T_b} \ge e^b$ it follows that $\E{\infty}{\Tilde{\tau}_C(b)} \ge \E{\infty}{T_b} \ge e^b$. Now, setting $b_\alpha = \abs{\log\alpha}$ implies the inequality \begin{equation} \label{FARCUSUMbalpha} \E{\infty}{\Tilde{\tau}_C(b_\alpha)} \ge e^{b_\alpha} = \frac{1}{\alpha} \end{equation} (for any $m_\alpha \ge 1$), and therefore \eqref{FARCUSUMm} follows. \end{proof} The following result establishes the asymptotic performance of the WL-CuSum procedure given in \eqref{TC:test} for large threshold values. \begin{theorem} \label{TC:delay} Fix $\delta \in (0,1)$ and let $N_{b,\delta} := \lfloor g^{-1}(b /(1-\delta)) \rfloor$. Suppose that in the WL-CuSum procedure the size of the window $m=m_b$ diverges (as $b \to \infty$) in such a way that \begin{equation}\label{Condmb} m_b \ge N_{b,\delta} (1+o(1)). \end{equation} Further, suppose that conditions \eqref{llr:upper} and \eqref{llr:lower} hold for $Z_{n,k}$ when $n \geq k \geq 1$.
Then, as $b \to \infty$, \begin{equation}\label{SADDasympt} \SADD{\Tilde{\tau}_C(b)} \sim \WADD{\Tilde{\tau}_C(b)} \sim g^{-1} (b) . \end{equation} \end{theorem} \begin{proof} Since $\FAR{\Tilde{\tau}_C(b)} \le e^{-b}$, the window-limited CuSum procedure $\Tilde{\tau}_C(b)$ belongs to class $ \mathcal{C}_{\alpha} $ with $\alpha=e^{-b}$. Hence, replacing $\alpha$ by $e^{-b}$ in the asymptotic lower bound \eqref{LowerSADD} in Theorem~\ref{llr:delay_lower_bound}, we obtain that under condition \eqref{llr:upper} the following asymptotic lower bound holds: \begin{equation}\label{LBtaumb} \liminf_{b\to\infty} \frac{\WADD{\Tilde{\tau}_C(b)}}{g^{-1}(b)} \ge \liminf_{b\to\infty} \frac{\SADD{\Tilde{\tau}_C(b)}}{g^{-1}(b)} \ge 1 . \end{equation} Thus, to establish \eqref{SADDasympt} it suffices to show that under condition \eqref{llr:lower}, as $b\to\infty$, \begin{equation}\label{UpperSADD} \WADD{\Tilde{\tau}_C(b)} \leq g^{-1}(b) (1+o(1)). \end{equation} Note that we have the following chain of equalities and inequalities: \begin{align} \label{ExpkTAplus} &\E{\nu}{(\Tilde{\tau}_C(b)-\nu)^+ | {\cal F}_{\nu-1}} \nonumber \\ &= \sum_{\ell=0}^{\infty} \int_{\ell N_{b,\delta}}^{(\ell+1) N_{b,\delta}} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > t | {\cal F}_{\nu-1}} \, \mathrm{d} t \nonumber\\ & \le N_{b,\delta} + \sum_{\ell =1}^{\infty} \int_{\ell N_{b,\delta}}^{(\ell +1) N_{b,\delta}} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > t |{\cal F}_{\nu-1}} \, \mathrm{d} t \nonumber\\ & \le N_{b,\delta} + \sum_{\ell=1}^{\infty} \int_{\ell N_{b,\delta}}^{(\ell+1) N_{b,\delta}} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1}} \, \mathrm{d} t \nonumber\\ & = N_{b,\delta} \left(1 + \sum_{\ell=1}^{\infty} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1}}\right) . \end{align} Define $\lambda_{n,k} := \sum_{i=k}^n Z_{i,k}$ and $K_{n} := \nu+n N_{b,\delta}$. We have $W(n) = \max_{n-m_b \leq k \leq n+1} \lambda_{n,k}$. Since by condition \eqref{Condmb} $m_b > N_{b,\delta}$ (for a sufficiently large $b$), for any $n \ge 1$, \[ W(\nu+n N_{b,\delta}) \ge \lambda_{K_{n}, K_{n-1}} \] and we have \begin{align}\label{Needit} &\Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1} } \nonumber \\ & =\Prob{\nu}{W(1) < b, \dots, W(\nu+\ell N_{b,\delta}) <b | {\cal F}_{\nu-1}} \nonumber \\ & \le \Prob{\nu}{W(\nu + N_{b,\delta}) < b, \dots, W(\nu+\ell N_{b,\delta}) <b | {\cal F}_{\nu-1}} \nonumber \\ & \le \Prob{\nu}{\lambda_{K_1, K_0}< b, \dots, \lambda_{K_{\ell}, K_{\ell-1}} < b | {\cal F}_{\nu-1}}\nonumber \\ &= \prod_{n=1}^\ell \Prob{\nu}{\lambda_{K_{n}, K_{n-1}}< b} , \end{align} where the last equality follows from the independence of the increments of $\{\lambda_{n,t}\}_{n \ge t}$. By condition \eqref{llr:lower}, for a sufficiently large $b$ there exists a small $\varepsilon_b$ such that \[ \Prob{\nu}{\lambda_{K_{n}, K_{n-1}} < b} \le \varepsilon_b, \quad \forall n \ge 1. \] Therefore, for any $\ell \ge 1$, \[ \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1}} \le \varepsilon_b^\ell. \] Combining this inequality with \eqref{ExpkTAplus} and using the fact that $\sum_{\ell=1}^\infty \varepsilon_b^\ell = \varepsilon_b(1-\varepsilon_b)^{-1}$, we obtain \begin{align}\label{Momineqrho} \E{\nu}{(\Tilde{\tau}_C(b)-\nu)^+ | {\cal F}_{\nu-1} } \le N_{b,\delta} \left(1 + \frac{\varepsilon_b}{1-\varepsilon_b}\right) = \frac{\lfloor g^{-1}(b /(1-\delta)) \rfloor}{1-\varepsilon_b}.
\end{align} Since the right-hand side of this inequality does not depend on $\nu$, $g^{-1}(b /(1-\delta))\to \infty$ as $b\to \infty$, and $\varepsilon_b$ and $\delta$ can be arbitrarily small numbers, this implies the upper bound \eqref{UpperSADD}. The proof is complete. \end{proof} Using Lemma~\ref{TC:fa} and Theorem~\ref{TC:delay}, we obtain the following asymptotic result, which establishes the asymptotic optimality of the WL-CuSum procedure and its asymptotic operating characteristics. \begin{theorem} \label{TC:asymp_opt} Suppose that the threshold $b_\alpha$ is so selected that $b_\alpha \sim|\log \alpha|$ as $\alpha\to 0$, in particular, $b_\alpha =|\log \alpha|$. Further, suppose that the right-tail condition \eqref{llr:upper} and the left-tail condition \eqref{llr:lower} hold for $Z_{n,k}$ when $n \geq k \geq 1$. Then, the WL-CuSum procedure in \eqref{TC:test} with the window size $m_\alpha$ that satisfies the condition \begin{equation}\label{Condmalpha} m_\alpha \ge g^{-1}(|\log\alpha|) (1+o(1)) \quad \text{as}~ \alpha \to 0 \end{equation} solves the problems \eqref{prob_def} and \eqref{Pollak_problem} asymptotically to first order as $\alpha \to 0$, i.e., \begin{equation} \label{FOAOCUSUM} \begin{split} \inf_{\tau\in \mathcal{C}_{\alpha} } \WADD{\tau} & \sim \WADD{\Tilde{\tau}_C(b_\alpha)} , \\ \inf_{\tau\in \mathcal{C}_{\alpha} } \SADD{\tau} & \sim \SADD{\Tilde{\tau}_C(b_\alpha)} \end{split} \end{equation} and \begin{equation} \label{FOAPPRCUSUM} \SADD{\Tilde{\tau}_C(b_\alpha)} \sim \WADD{\Tilde{\tau}_C(b_\alpha)} \sim g^{-1} (\abs{\log{\alpha}}). \end{equation} \end{theorem} \begin{proof} Let $b_\alpha$ be so selected that $\FAR{\Tilde{\tau}_C(b_\alpha)}\leq \alpha$ and $b_\alpha \sim |\log \alpha|$ as $\alpha\to0$. Then by Theorem~\ref{TC:delay}, as $\alpha\to0$, \[ \SADD{\Tilde{\tau}_C(b_\alpha)} \sim \WADD{\Tilde{\tau}_C(b_\alpha)} \sim g^{-1} (|\log \alpha|) . \] Comparing these asymptotic equalities with the asymptotic lower bound \eqref{LowerSADD} in Theorem~\ref{llr:delay_lower_bound} immediately yields asymptotics \eqref{FOAOCUSUM} and \eqref{FOAPPRCUSUM}. In particular, if $b_\alpha = \abs{\log\alpha}$, then by Lemma~\ref{TC:fa} $\FAR{\Tilde{\tau}_C(b_\alpha)}\leq \alpha$, and therefore the assertions hold. \end{proof} \begin{remark} Clearly, the asymptotic optimality result still holds in the case where no window is applied, i.e., $m_\alpha = n-1$. \end{remark} \begin{example} Consider the same setting as in Example~\ref{gex}. We have shown that conditions \eqref{llr:var} and \eqref{llr:tshift} hold in this setting, and thus \eqref{llr:upper} and \eqref{llr:lower} also hold by Lemma~\ref{llr:lemma}. Considering the growth function $g (n)$ given in \eqref{gex:growth}, as $n \to \infty$, we obtain \begin{equation*} g (n) = \sum_{i=0}^{n-1} \frac{\mu_0^2}{2 \sigma_0^2} (e^{\theta i}-1)^2 = \frac{\mu_0^2}{2 \sigma_0^2} e^{2 \theta (n-1)} (1+o(1)).
\end{equation*} Thus, as $y \to \infty$, \begin{equation*} g^{-1}(y) = \frac{1}{2 \theta} \log\left( \frac{2 \sigma_0^2}{\mu_0^2} y\right) (1+o(1)), \end{equation*} and if $b_\alpha = |\log \alpha|$, or more generally $b_\alpha\sim |\log \alpha|$ as $\alpha\to 0$, we obtain \begin{align} \label{gex:perf} \WADD{\Tilde{\tau}_C(b_\alpha)} &= \frac{1}{2 \theta} \log\left( \frac{2 \sigma_0^2 }{\mu_0^2} \abs{\log \alpha}\right) (1+o(1)) \nonumber\\ &= O\left(\frac{1}{2 \theta} \log(\abs{\log \alpha})\right). \end{align} \end{example} \section{Asymptotically Optimum Procedure for Non-Stationary Post-Change Observations with Parametric Uncertainty} \label{sec:unknown-param} We now study the case where the evolution of the post-change distribution is parametrized by an unknown but deterministic parameter $\theta \in \mathbb{R}^d$. Let $X_\nu, X_{\nu+1}, \dots$ have densities $p_{1,\nu,\nu}^{\theta},p_{1,\nu+1,\nu}^{\theta},\dots$, respectively, with respect to the common non-degenerate measure $\mu$, when the post-change parameter is $\theta$. Let $\mathbb{P}_{k,\theta}$ and $\mathbb{E}_{k,\theta}$ denote, respectively, the probability measure on the entire sequence of observations and the corresponding expectation, when the change point is $\nu=k<\infty$ and the post-change parameter is $\theta$. Let $\Theta \subset \mathbb{R}^d$ be an open and bounded set of parameter values, with $\theta \in \Theta$. The log-likelihood ratio process is given by: \begin{equation} \label{TG:llr} Z_{n,k}^{\theta} = \log \frac{p_{1,n,k}^{\theta}(X_n)}{p_0(X_n)} \end{equation} for any $n \geq k$ and $\theta \in \Theta$. Also, the growth function in \eqref{llr:growth} is redefined as \begin{equation} \label{TG:growth} g_{\nu,\theta}(n) = \sum_{i=\nu}^{\nu+n-1} \E{\nu,\theta}{Z^{\theta}_{i,\nu}}, \quad \forall n \geq 1, \end{equation} and it is assumed that $g_\theta^{-1}(x) = \sup_{\nu \geq 1} g_{\nu,\theta}^{-1}(x)$ exists. It is also assumed that \begin{equation} \label{TG:growth_cond} \log g_\theta^{-1}(x) = o(x) \quad \text{as}~ x \to \infty. \end{equation} The goal in this section is to solve the optimization problems \eqref{prob_def} and \eqref{Pollak_problem} asymptotically as $\alpha \to 0$, under parameter uncertainty. More specifically, for $\theta \in \Theta$, define Lorden's and Pollak's worst-case expected detection delay measures \[ \WADDth{\tau} := \esssup \sup_{\nu \geq 1} \E{\nu,\theta}{(\tau-\nu+1)^+ | {\cal F}_{\nu-1}} \] and \[ \SADDth{\tau} := \sup_{\nu \geq 1} \E{\nu, \theta}{\tau-\nu+1| \tau \geq \nu} \] and the corresponding asymptotic optimization problems: find a change detection procedure $\tau^*$ that minimizes these measures to first order in class $ \mathcal{C}_{\alpha} $, i.e., for all $\theta\in\Theta$, \begin{equation}\label{AsymptProblems} \lim _{\alpha \to 0}\frac{ \inf_{\tau\in \mathcal{C}_{\alpha} } \WADDth{\tau}}{\WADDth{\tau^*}} = 1, \quad \lim_{\alpha\to0}\frac{\inf_{\tau\in \mathcal{C}_{\alpha} } \SADDth{\tau}}{\SADDth{\tau^*}} = 1. \end{equation} Consider the following window-limited GLR CuSum stopping procedure: \begin{equation} \label{TG:test} \Tilde{\tau}_G\left(b\right) := \inf \left\{n:\max_{n-m_b \leq k \leq {n+1}} \sup_{\theta \in \Theta_b} \sum_{i=k}^n Z_{i,k}^{\theta} \geq b \right\} \end{equation} where $\Theta_b \nearrow \Theta$ as $b \nearrow \infty$, so that $\theta \in \Theta_b$ is guaranteed for all large enough $b$. For $n < m_b$, maximization is performed over $1\le k \le n$.
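Before turning to the technical assumptions, we give a sketch of the statistic in \eqref{TG:test} (our own illustration; the paper treats a continuous $\Theta_b$, whereas the sketch maximizes over a finite grid of $\theta$ values). Setting the grid to the single true value $\theta$ recovers the WL-CuSum statistic of \eqref{TC:test}, and for the data of the example in Section~\ref{subsec:asym-opt-det} ($X_1=1$, $X_2=0$, $X_3=10$, $\theta=1$) it reproduces the backward-moving maximizer noted there:
\begin{verbatim}
import numpy as np

def Z(x_n, n, k, theta, mu0=1.0, var0=1.0):
    # Log-likelihood ratio Z^theta_{n,k} of eq. (gex:llr).
    return mu0/var0*(np.exp(theta*(n - k)) - 1.0)*x_n \
        - mu0**2*(np.exp(2.0*theta*(n - k)) - 1.0)/(2.0*var0)

def wl_glr_stat(x, n, m, thetas):
    # max over windowed change points k (k = n+1 gives the empty sum = 0)
    # and over the grid approximating Theta_b of sum_{i=k}^n Z^theta_{i,k}.
    best, k_best = -np.inf, None
    for k in range(max(1, n - m), n + 2):
        s = max(sum(Z(x[i - 1], i, k, th) for i in range(k, n + 1))
                for th in thetas)
        if s > best:
            best, k_best = s, k
    return best, k_best

x = [1.0, 0.0, 10.0]                 # data of the earlier example
for n in (2, 3):
    print("n = %d:" % n, wl_glr_stat(x, n, m=10, thetas=[1.0]))
# The maximizing k moves from 2 at n = 2 back to 1 at n = 3, so the
# statistic cannot be updated recursively; a grid with several theta
# values turns the same function into the WL-GLR-CuSum statistic.
\end{verbatim}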
Since we are interested in class $ \mathcal{C}_{\alpha} =\{\tau: \FAR{\tau}\le \alpha\}$, in which case both threshold $b=b_\alpha$ and window size $m_b=m_\alpha$ are functions of $\alpha$, we will write $\Theta_{b}=\Theta_\alpha$ and suppose that $\Theta_\alpha \subset \mathbb{R}^d$ is compact for each $\alpha$. Hereafter $\hat{\theta}_{n,k}$ denotes the maximum likelihood estimate of the post-change parameter, i.e., the maximizer of $\theta \mapsto \sum_{i=k}^{n} Z_{i,k}^{\theta}$ over $\Theta_\alpha$; we omit its dependency on $\alpha$ for brevity. In this paper, we focus on the case where $\Theta_\alpha$ is a continuous (non-discrete) parameter set for all $\alpha$. The discrete case is simpler and will be considered elsewhere. The following assumption is made to guarantee the existence of an upper bound on the FAR. \begin{assumption} \label{TG:smooth} There exists $\varepsilon > 0$ such that for any large enough $b > 0$, \begin{equation} \label{TG:llr_sec_der} \Prob{\infty}{\max_{(k,n):k \leq n \leq k+m_b} \sup_{\theta: \norm{\theta-\hat{\theta}_{n,k}} < b^{-\frac{\varepsilon}{2}}} \emax{- \nabla_\theta^2 \sum_{i=k}^n Z_{i,k}^{\theta}} \leq 2 b^{\varepsilon}} \geq 1 - \varepsilon_b \end{equation} where $\emax{A}$ represents the maximum absolute eigenvalue of a symmetric matrix $A$ and $\varepsilon_b \searrow 0$ as $b \nearrow \infty$. \end{assumption} \begin{example} Consider again the Gaussian exponential mean-change detection problem in Example~\ref{gex}. Now we consider the case where the exact value of the post-change exponent coefficient $\theta$ is unknown, with $\theta \in \Theta = [\Theta_{\text{min}},\Theta_{\text{max}}]$ and $\Theta_{\text{min}} > 0$. Note that $\theta$ characterizes the entire post-change evolution rather than a single post-change distribution. We shall verify Assumption~\ref{TG:smooth} below. Recalling the definition of the log-likelihood ratio given in \eqref{gex:llr}, for any $\theta \in \Theta$ and $k \leq i \leq n$ where $n-k\leq m_b$, we have \begin{align} -\frac{\partial^2}{\partial \theta^2} Z^\theta_{i,k} &= -\frac{\partial^2}{\partial \theta^2} \left(\frac{\mu_0}{\sigma_0^2} (e^{\theta(i-k)} - 1) X_i - \frac{\mu_0^2 (e^{2 \theta (i-k)}-1)}{2 \sigma_0^2}\right) \nonumber\\ &= -\frac{\mu_0}{\sigma_0^2} (i-k)^2 e^{\theta(i-k)} X_i + 2 (i-k)^2 \frac{\mu_0^2 e^{2 \theta (i-k)}}{\sigma_0^2}\nonumber\\ &= \frac{\mu_0}{\sigma_0^2} (i-k)^2 e^{\theta(i-k)} (2 \mu_0 e^{\theta(i-k)} - X_i). \end{align} Therefore, \begin{align} \label{gex:sec_der} &\max_{(k,n):k \leq n \leq k+m_b} \sup_{\theta \in \Theta} \abs{-\frac{\partial^2}{\partial \theta^2} \sum_{i=k}^n Z^\theta_{i,k}}\nonumber\\ &= \sup_{\theta \in \Theta} \max_{(k,n):k \leq n \leq k+m_b} \frac{\mu_0}{\sigma_0^2} \abs{\sum_{i=k}^n (i-k)^2 e^{\theta(i-k)} (2 \mu_0 e^{\theta (i-k)} - X_i)} \nonumber\\ &\leq \sup_{\theta \in \Theta} \frac{\mu_0}{\sigma_0^2} m_b^2 e^{\theta m_b} \left(2 \mu_0 m_b e^{\theta m_b} + \max_{(k,n):k \leq n \leq k+m_b} \abs{\sum_{i=k}^n X_i}\right) \nonumber\\ &\stackrel{(*)}{\leq} \sup_{\theta \in \Theta} \frac{4\mu_0^2}{\sigma_0^2} m_b^3 e^{2 \theta m_b} \leq \frac{4\mu_0^2}{\sigma_0^2} m_b^3 e^{2 \Theta_{\text{max}} m_b} \end{align} where $(*)$ is true provided that \[ \max_{(k,n):k \leq n \leq k+m_b} \abs{\sum_{i=k}^n X_i} < 2 \mu_0 m_b e^{\theta m_b}. \] Since $X_i$'s are i.i.d. under $\mathbb{P}_\infty$, $\sum_{i=k}^n X_i$ has a Gaussian distribution with mean $\leq (m_b+1) \mu_0$ and variance $\leq (m_b+1) \sigma_0^2$.
Therefore, for any $\theta \in \Theta$, \begin{align*} & \Prob{\infty}{\max_{(k,n):k \leq n \leq k+m_b} \abs{\sum_{i=k}^n X_i} > 2 \mu_0 m_b e^{\theta m_b}}\\ &\leq \Prob{\infty}{\abs{\sum_{i=1}^{m_b} X_i} > 2 \mu_0 m_b e^{\theta m_b}} \\ &= 2 Q\left( \frac{2 \mu_0 m_b e^{\theta m_b} - m_b \mu_0}{\sigma_0\sqrt{m_b+1}} \right)\\ &\leq 2\exp\left(-\frac{2 \mu_0^2 m_b^2 (e^{\theta m_b}-1)^2}{\sigma_0^2 (m_b+1)}\right) \searrow 0,~\text{as $b \to \infty$} \end{align*} where $Q(x)= (2\pi)^{-1/2} \int_{x}^\infty e^{-t^2/2} \mathrm{d} t$ is the standard Q-function. Recalling the condition in \eqref{Condmb} on the window size and using the formula \eqref{gex:perf} for the worst-case expected delay, we obtain that if we set \begin{equation*} m_b = \frac{1}{2 \Theta_{\text{min}}}\log b \end{equation*} then \begin{equation*} \frac{4\mu_0^2}{\sigma_0^2} m_b^3 e^{2 \Theta_{\text{max}} m_b} = O\left((\log b)^3 b^{\Theta_{\text{max}}/\Theta_{\text{min}}}\right). \end{equation*} Then Assumption~\ref{TG:smooth} holds when $\varepsilon = (1+\delta)\Theta_{\text{max}}/\Theta_{\text{min}}$ for an arbitrary $\delta>0$. \qed \end{example} Note that $\WADDth{\Tilde{\tau}_G(b)} \leq \WADDth{\Tilde{\tau}_C(b)}$ for any threshold $b > 0$. In order to establish the asymptotic optimality of the WL-GLR-CuSum procedure, we need the following lemma, which allows us to select the threshold $b=b_\alpha$ in such a way that the FAR of $\Tilde{\tau}_G(b)$ is controlled at least asymptotically. \begin{lemma} \label{TG:fa} Suppose that the log-likelihood ratio $\{Z_{n,k}^{\theta}\}_{n\ge k}$ satisfies \eqref{TG:llr_sec_der}. Then, as $b\to\infty$, \begin{equation}\label{FARGLR} \FAR{\Tilde{\tau}_G(b)} \le |\Theta_\alpha| C_d^{-1} b^{\frac{\varepsilon d}{2}} e^{1-b} (1+o(1)), \end{equation} where $C_d =\frac{\pi^{d/2}}{\Gamma(1+d/2)}$ is a constant that does not depend on $\alpha$. Consequently, if $b=b_\alpha$ satisfies equation \begin{equation} \label{TG:thr} |\Theta_\alpha| C_d^{-1} b_\alpha^{\frac{\varepsilon d}{2}} e^{1-b_\alpha} = \alpha, \end{equation} then $\FAR{\Tilde{\tau}_G(b_\alpha)} \le \alpha (1+o(1))$ as $\alpha\to0$. \end{lemma} \begin{remark} Since $\abs{\Theta_\alpha} \leq \abs{\Theta} < \infty$, it follows from \eqref{TG:thr} that $b_\alpha \sim \abs{\log\alpha}$ as $\alpha \to 0$. \end{remark} The proof of Lemma~\ref{TG:fa} is given in the appendix. The following theorem establishes asymptotic optimality properties of the WL-GLR-CuSum detection procedure. \begin{theorem} \label{TG:asymp_opt} Suppose that threshold $b = b_\alpha$ is so selected that $\FAR{\Tilde{\tau}_G(b_\alpha)}\leq \alpha$, or at least so that $\FAR{\Tilde{\tau}_G(b_\alpha)}\leq \alpha (1+o(1))$ and $b_\alpha \sim |\log \alpha|$ as $\alpha \to 0$, in particular from equation \eqref{TG:thr} in Lemma~\ref{TG:fa}. Further, suppose that conditions \eqref{llr:upper}, \eqref{llr:lower} and \eqref{TG:llr_sec_der} hold for $\{Z_{n,k}^{\theta}\}_{n \ge k}$ for all $\theta \in \Theta$. Then, the window-limited GLR CuSum procedure $\Tilde{\tau}_G(b_\alpha)$ defined by \eqref{TG:test} with the window size $m_\alpha$ that satisfies the condition \eqref{Condmalpha} solves the first-order asymptotic optimization problems \eqref{AsymptProblems} uniformly for all parameter values $\theta \in \Theta$, and, as $\alpha \to 0$, \begin{equation} \label{FOAPPRGCUSUM} \SADDth{\Tilde{\tau}_G(b_\alpha)} \sim \WADDth{\Tilde{\tau}_G(b_\alpha)} \sim g_\theta^{-1} (\abs{\log{\alpha}}),\quad \forall \theta \in \Theta. \end{equation}
\end{theorem} \begin{proof} Evidently, for any $\theta \in \Theta$ and any threshold $b>0$, \begin{equation*} \WADDth{\Tilde{\tau}_G(b)} \leq \WADDth{\Tilde{\tau}_C(b)}, \quad \SADDth{\Tilde{\tau}_G(b)} \leq \SADDth{\Tilde{\tau}_C(b)}. \end{equation*} Let $b=b_\alpha$ be so selected that $\FAR{\Tilde{\tau}_G(b_\alpha)} \leq \alpha$ and $b_\alpha \sim |\log \alpha|$ as $\alpha \to 0$. Then it follows from the asymptotic approximations \eqref{FOAPPRCUSUM} in Theorem~\ref{TC:asymp_opt} that, as $\alpha \to 0$, \[ \SADDth{\Tilde{\tau}_G(b_\alpha)} \le \WADDth{\Tilde{\tau}_G(b_\alpha)} \le g_\theta^{-1} (|\log \alpha|) (1+o(1)) . \] Comparing these asymptotic inequalities with the asymptotic lower bound \eqref{LowerSADD} in Theorem~\ref{llr:delay_lower_bound} immediately yields \eqref{FOAPPRGCUSUM}, which is asymptotically the best one can do to first order. In particular, if $b_\alpha$ is found from equation \eqref{TG:thr}, then $b_\alpha \sim |\log \alpha|$ as $\alpha\to0$ and by Lemma~\ref{TG:fa} $\FAR{\Tilde{\tau}_G(b_\alpha)}\leq \alpha (1+o(1))$, and therefore the assertions hold. \end{proof} \section{Extensions to Pointwise Optimality and Dependent Non-homogeneous Models} \label{sec:ext} The measure of FAR that we have used in this paper (see \eqref{eq:FAR_def}) is the inverse of the MTFA. However, the MTFA is a good measure of the FAR if, and only if, the pre-change distributions of the window-limited CuSum stopping time $\tilde{\tau}_C(b)$ and the window-limited GLR CuSum stopping time $\tilde{\tau}_G(b)$ are approximately geometric. While this geometric property can be established for i.i.d.\ data models (see, e.g., Pollak and Tartakovsky~\cite{PollakTartakovskyTPA09} and Yakir~\cite{Yakir-AS95}), it is not necessarily true for non-homogeneous and dependent data, as discussed in Mei~\cite{Mei-SQA08} and Tartakovsky~\cite{Tartakovsky-SQA08a}. Therefore, in general, the MTFA is not appropriate for measuring the FAR. In fact, large values of the MTFA may not necessarily guarantee small values of the probability of false alarm, as discussed in detail in \cite{Tartakovsky-SQA08a,tartakovsky_sequential}. When the post-change model is Gaussian non-stationary as defined in Example~\ref{gex}, the MTFA may still be an appropriate measure of false alarms, as shown in the simulation study in Section~\ref{num-res:mtfa}. Based on this result, we conjecture that the MTFA-based FAR constraint may be suitable for other independent and non-stationary data models as well. However, in general, this may not be the case, and a more appropriate measure of the FAR in the general case may be the maximal (local) conditional probability of false alarm in the time interval $(k, k+m]$ defined as \cite{tartakovsky_sequential}: \[ \mathrm{SPFA}_m(\tau) = \sup_{k \ge 0} \Prob{\infty}{\tau \le k+m | \tau > k}. \] Then the constraint set in \eqref{fa_constraint} can be replaced by the set \[ \mathbb{C}_{\beta,m} = \{\tau: \mathrm{SPFA}_m(\tau) \leq \beta\} \] of procedures for which the SPFA does not exceed a prespecified value $\beta \in (0,1)$. Pergamenschtchikov and Tartakovsky~\cite{PergTarSISP2016,PerTar-JMVA2019} considered general stochastic models of dependent and non-identically distributed observations that are asymptotically homogeneous (i.e., $g(n)=n$).
They proved not only minimax optimality but also asymptotic pointwise optimality as $\beta\to0$ (i.e., for all change points $\nu \ge 1$) of the Shiryaev-Roberts (SR) procedure for the simple post-change hypothesis, and of the mixture SR procedure for the composite post-change hypothesis, in class $\mathbb{C}_{\beta,m}$, when $m=m_\beta$ depends on $\beta$ and goes to infinity as $\beta\to0$ at such a rate that $\log m_\beta=o(|\log \beta|)$. The results of \cite{PergTarSISP2016,PerTar-JMVA2019} can be readily extended to the asymptotically non-homogeneous case where the function $g (n)$ increases with $n$ faster than $\log n$. In particular, using the techniques developed in \cite{PergTarSISP2016,PerTar-JMVA2019}, which are based on embedding class $\mathbb{C}_{\beta,m}$ in a Bayesian class with a geometric prior distribution for the change point and an upper-bounded weighted PFA, it can be shown that the window-limited CuSum procedure \eqref{TC:test} with $m_\alpha$ replaced by $m_\beta$ is first-order pointwise asymptotically optimal in class $\mathbb{C}_{\beta,m_\beta}=\mathbb{C}_\beta$ as long as the uniform complete version of the strong law of large numbers for the log-likelihood ratio holds, i.e., for all $\delta > 0$ \[ \sum_{n=1}^\infty \sup_{\nu \ge 1} \Prob{\nu}{\left | \frac{1}{g_\nu(n)} \sum_{i=\nu}^{\nu+n-1} Z_{i, \nu} -1 \right | > \delta} < \infty, \] where in the general non-i.i.d. case the partial LLR $Z_{i, \nu} $ is \[ Z_{i, \nu} = \log \frac{p_{1, i,\nu} (X_i| X_1,\dots, X_{i-1})}{p_{0, i}(X_i| X_1,\dots, X_{i-1})} . \] Specifically, it can be established that for all fixed $\nu \ge 1$, as $\beta\to 0$, \begin{align*} \inf_{\tau \in \mathbb{C}_\beta} \mathrm{ADD}_\nu(\tau) \sim \mathrm{ADD}_\nu(\tilde{\tau}_C(b_\beta)) \sim g^{-1}(|\log \beta|), \end{align*} where we used the notation $\mathrm{ADD}_\nu(\tau)=\E{\nu}{\tau-\nu | \tau \geq \nu}$ for the conditional average delay to detection. Similar results also hold for the maximal average detection delays $\WADD{\tau}$ and $\SADD{\tau}= \sup_{\nu \ge 1} \mathrm{ADD}_\nu(\tau)$. It is worth noting that it follows from the proof of Theorem~\ref{llr:delay_lower_bound} that under condition \eqref{llr:upper} the following asymptotic lower bound holds for the average detection delay $\mathrm{ADD}_\nu(\tau)$ uniformly for all values of the change point in class $\mathbb{C}_\beta$: \[ \inf_{\tau\in\mathbb{C}_\beta} \mathrm{ADD}_\nu(\tau) \ge g^{-1}(|\log \beta|) (1+o(1)), \quad \forall \nu \ge 1 ~ \text{as}~ \beta \to 0. \] In the case where the post-change observations have parametric uncertainty, sufficient conditions for the optimality of the WL-GLR-CuSum procedure are more sophisticated -- they involve the behavior of the log-likelihood ratio in a vicinity of the true post-change parameter \cite{PerTar-JMVA2019}. Further details and the proofs are omitted and will be given elsewhere. \section{Numerical Results} \label{sec:num-res} \subsection{Performance Analysis} \label{num-res:pa} \begin{figure}[tbp] \centerline{\includegraphics[width=.75\textwidth,height=9cm]{perf.png}} \vspace{-3mm}\caption{Performances of the WL-CuSum procedure with different window-sizes for the Gaussian exponential mean-change detection problem with $\mu_0=0.1$, $\sigma_0^2 = 10000$, and $\theta = 0.4$. The change-point is $\nu = 1$.} \label{fig:perf} \end{figure} In Fig.
\ref{fig:perf}, we study the performance of the proposed WL-CuSum procedure in \eqref{TC:test} through Monte Carlo (MC) simulations for the Gaussian exponential mean-change detection problem (see Example~\ref{gex}), with known post-change parameter. The change-point is taken to be $\nu=1$\footnote{Note that $\nu=1$ may not necessarily be the worst-case value of the change-point for the WL-CuSum procedure. However, extensive experimentation with different values of $\nu$ ranging from 1 to 100, with window-sizes of 15 and 25, shows that in almost all cases $\nu=1$ results in the largest expected delay, or one that is within 1\% of the largest expected delay.}. Three window-sizes are considered, with the window size of 12 being smaller than the range of expected delay values in the plot, and therefore not large enough to satisfy condition \eqref{TC:m_alpha}. The window size of 25 is sufficiently large, and the window size of 100 essentially corresponds to having no window at all. It is seen that the performance is nearly identical for all window-sizes considered. We also observe that the expected delay is $O(\log(\abs{\log\alpha}))$ for the window-sizes considered, which matches our theoretical analysis in \eqref{gex:perf}. \begin{figure}[tbp] \centerline{\includegraphics[width=.75\textwidth,height=9cm]{perf_glr.png}} \vspace{-3mm}\caption{Comparison of operating characteristics of the window-limited CuSum (solid lines) and WL-GLR-CuSum (dotted lines) procedures with different sizes of windows for the Gaussian exponential mean-change detection problem with $\mu_0=0.1$, $\sigma_0^2 = 10000$, and $\theta = 0.4$. The post-change parameter set is $\Theta = (0,1]$. The change-point is $\nu = 1$. Procedures with sufficiently large (in red, circle) and insufficiently large window-sizes (in blue, triangle) are also compared. \label{fig:perf_glr}} \end{figure} In Fig. \ref{fig:perf_glr}, we compare, also through MC simulations for the problem of Example~\ref{gex}, the performance of the WL-CuSum procedure \eqref{TC:test} tuned to the true post-change parameter and the WL-GLR-CuSum procedure \eqref{TG:test} where only the set of post-change parameter values is known. It is seen that the operating characteristic of the WL-GLR-CuSum procedure is slightly worse than, but close to, that of the WL-CuSum procedure that knows the true post-change parameter. We also observe that procedures with slightly insufficient window sizes perform similarly to those with sufficiently large window sizes. \subsection{Analysis of MTFA as False Alarm Measure} \label{num-res:mtfa} \begin{figure}[tbp] \centerline{\includegraphics[width=.75\textwidth,height=10cm]{QQ.png}} \vspace{-3mm}\caption{Quantile-Quantile (QQ) plots for full-history and window-limited CuSum stopping times with different thresholds for the Gaussian exponential mean-change detection problem with $\mu_0=0.1$, $\sigma_0^2 = 10000$, and $\theta = 0.4$. In all subplots, the x-axis shows the theoretical quantiles of the best-fit geometric distribution and the y-axis shows the experimental quantiles of the distributions of the stopping times. The first row corresponds to the WL-CuSum procedure \eqref{TC:test} and the second row corresponds to the full-history CuSum procedure \eqref{TC:def}.} \label{fig:qq} \end{figure} In Fig. \ref{fig:qq}, we study the distribution of the WL-CuSum stopping times using simulation results from the Gaussian exponential mean-change detection problem. This study is similar to the one in \cite{PollakTartakovskyTPA09}.
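A minimal sketch of how such a QQ comparison can be produced (our illustration, not the exact script behind Fig.~\ref{fig:qq}): simulate stopping times under $\mathbb{P}_\infty$, fit a geometric distribution by matching the mean, and compare quantiles.

\begin{verbatim}
import numpy as np

def geometric_qq(samples, probs=np.linspace(0.01, 0.99, 99)):
    """Quantiles of a best-fit geometric law (p_hat = 1 / sample mean)
    versus empirical quantiles of the simulated stopping times."""
    p_hat = 1.0 / np.mean(samples)
    # geometric quantile: smallest n with 1 - (1 - p)^n >= q
    theo = np.ceil(np.log1p(-probs) / np.log1p(-p_hat))
    emp = np.quantile(samples, probs)
    return theo, emp

# sanity check on exactly geometric data: the points fall on the diagonal
rng = np.random.default_rng(2)
theo, emp = geometric_qq(rng.geometric(p=1e-3, size=10_000))
print(np.max(np.abs(theo - emp) / theo))  # small relative deviation
\end{verbatim}

For the actual study, \texttt{samples} would be replaced by repeated runs of the WL-CuSum stopping time under $\mathbb{P}_\infty$.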
From Fig.~\ref{fig:qq}, it is observed that the experimental quantiles of the stopping times for the WL-CuSum procedure are close to the theoretical quantiles of a geometric distribution. This indicates that the distribution of the stopping time is approximately geometric, in which case the MTFA is an appropriate false alarm performance measure, and our measure of FAR as the reciprocal of the MTFA is justified. \subsection{Application: Monitoring COVID-19 Second Wave} \label{num-res:covid} \begin{figure}[htbp] \centerline{\includegraphics[width=.75\textwidth,height=8cm]{cov_val.png}} \vspace{-3mm}\caption{Validation of the distribution model using past COVID-19 data. The plot shows the four-day moving average of the daily new cases of COVID-19 as a fraction of the population in Wayne County, MI from October 1, 2020 to February 1, 2021 (in blue). The shape of the pre-change distribution $\mathcal{B}(a_0,b_0)$ is estimated using data from the previous 20 days (from September 11, 2020 to September 30, 2020), where $\hat{a}_0=20.6$ and $\hat{b}_0=2.94 \times 10^5$. The mean of the Beta distributions with the best-fit $h$ (defined in \eqref{num_sim:h}) is also shown (in orange), which minimizes the mean-square distance between the daily incremental fraction and the mean of the Beta distributions. The best-fit parameters are: $\hat{\theta}_0=0.464$, $\hat{\theta}_1=3.894$, and $\hat{\theta}_2=0.445$. } \label{fig:cov_val} \end{figure} Next, we apply the developed WL-GLR-CuSum algorithm to monitoring the spread of COVID-19 using new case data from various counties in the US \cite{nyt-covid-data}. The goal is to detect the onset of a new wave of the pandemic based on the incremental daily cases. The problem is modeled as one of detecting a change in the mean of a Beta distribution as in \cite{covid_beta}. Let $\mathcal{B}(x;a,b)$ denote the density of the Beta distribution with shape parameters $a$ and $b$, i.e., \begin{equation*} \mathcal{B}(x;a,b) = \frac{x^{a-1}(1-x)^{b-1}\Gamma(a+b)}{\Gamma(a)\Gamma(b)}, \quad \forall x \in [0,1], \end{equation*} where $\Gamma$ represents the gamma function. Note that the mean of an observation under density $\mathcal{B}(x;a,b)$ is $a / (a+b)$. Let \begin{equation} \label{num_sim:dist_model} p_0(x) = \mathcal{B}(x;a_0,b_0), \quad p_{1,n,k}(x) = \mathcal{B}(x;a_0 h_\theta(n-k),b_0), ~\forall n \geq k. \end{equation} Here, $h_\theta$ is a function such that $h_\theta(x) \geq 1,\forall x > 0$. Note that if $a_0 \ll b_0$ and $h_\theta(n-\nu)$ is not too large, \begin{equation} \E{\nu}{X_n} = \frac{a_0 h_\theta(n-\nu)}{a_0 h_\theta(n-\nu) + b_0} \approx \frac{a_0}{b_0} h_\theta(n-\nu) \end{equation} for all $n \geq \nu$. We design $h_\theta$ to capture the behavior of the average fraction of daily incremental cases. In particular, we model $h_\theta$ as \begin{equation} \label{num_sim:h} h_\theta(x) = 1+\frac{10^{\theta_0}}{\theta_2} \exp\left(-\frac{(x-\theta_1)^2}{2 \theta_2^2} \right) \end{equation} where $\theta_0,\theta_1,\theta_2 \geq 0$ are the model parameters and $\theta = (\theta_0,\theta_1,\theta_2) \in \Theta$. When $n-\nu$ is small, $h_\theta(n-\nu)$ grows like the left tail of a Gaussian density, which matches the exponential growth in the average fraction of daily incremental cases seen at the beginning of a new wave of the pandemic. Also, as $n \to \infty$, $h_\theta(n-\nu) \to 1$, which corresponds to the fraction of daily incremental cases eventually subsiding to its pre-change level at the end of the wave.
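To make the post-change model concrete, the following sketch (ours; the parameter values are the best-fit values reported in Fig.~\ref{fig:cov_val}, used here purely for illustration) evaluates $h_\theta$ from \eqref{num_sim:h} and the resulting post-change Beta mean in \eqref{num_sim:dist_model}:

\begin{verbatim}
import numpy as np

def h(lag, th0, th1, th2):
    """h_theta(x) = 1 + (10^th0 / th2) * exp(-(x - th1)^2 / (2 th2^2))."""
    return 1.0 + (10.0**th0 / th2) * np.exp(-((lag - th1) ** 2) / (2.0 * th2**2))

def beta_mean(lag, a0, b0, theta):
    """Mean a0*h/(a0*h + b0) of B(a0 h_theta(n-k), b0); ~ (a0/b0) h for a0 << b0."""
    ah = a0 * h(lag, *theta)
    return ah / (ah + b0)

a0, b0 = 20.6, 2.94e5                 # fitted pre-change shape parameters
theta = (0.464, 3.894, 0.445)         # best-fit (theta_0, theta_1, theta_2)
lags = np.arange(0, 15)
print(np.round(beta_mean(lags, a0, b0, theta), 6))
# the mean rises around lag ~ theta_1 and then returns to a0 / (a0 + b0)
\end{verbatim}

In Fig.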
\ref{fig:cov_val}, we validate the choice of the distribution model defined in \eqref{num_sim:dist_model} using data from the COVID-19 wave of Fall 2020. In the simulation, $a_0$ and $b_0$ are estimated using observations from previous periods in which the increments remain low and roughly constant. It is observed that the mean of the daily fraction of incremental cases matches well with the mean of the fitted Beta distribution with $h_\theta$ in \eqref{num_sim:h}. Note that the growth condition given in \eqref{TG:growth_cond} that is required for our asymptotic analysis is not satisfied for the observation model \eqref{num_sim:dist_model} with $h_\theta$ given in \eqref{num_sim:h}. Nevertheless, we expect the WL-GLR-CuSum procedure to perform as predicted by our analysis if the procedure stops during a time interval where $h_\theta$ is still increasing, which is what we would require of a useful procedure for detecting the onset of a new wave of the pandemic anyway. \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth,height=10cm]{covid.png}} \vspace{-3mm}\caption{COVID-19 monitoring example. The upper row shows the four-day moving average of the daily new cases of COVID-19 as a fraction of the population in Wayne County, MI (left), New York City, NY (middle) and Hamilton County, OH (right). A pre-change $\mathcal{B}(a_0,b_0)$ distribution is estimated using data from the previous 20 days (from May 26, 2021 to June 14, 2021). The plots in the lower row show the evolution of the WL-GLR-CuSum statistic defined in \eqref{TG:test}. The FAR $\alpha$ is set to $0.001$ and the corresponding thresholds of the WL-GLR-CuSum procedure are shown in red. The post-change distribution at time $n$ with hypothesized change-point $k$ is modeled as $\mathcal{B}(a_0 h_\theta(n-k),b_0)$, where $h_\theta$ is defined in \eqref{num_sim:h}, and $\Theta = (0.1,5) \times (1,20) \times (0.1,5)$. The parameters $\theta_0$, $\theta_1$ and $\theta_2$ are assumed to be unknown. The window size is $m_\alpha = 20$. The threshold is set using equation \eqref{TG:thr}.} \label{fig:covid} \end{figure} In Fig. \ref{fig:covid}, we illustrate the use of the WL-GLR-CuSum procedure with the distribution model \eqref{num_sim:dist_model} for the detection of the onset of a new wave of COVID-19. We assumed a start date of June 15th, 2021 for the monitoring, at which time the pandemic appeared to be in a steady state with incremental cases staying relatively flat. We observe that the WL-GLR-CuSum statistic significantly and persistently crosses the detection threshold around late July in all counties, which is a strong indication of a new wave of the pandemic. More importantly, unlike the raw observations, which are highly varying, the WL-GLR-CuSum statistic shows a clear dichotomy between the pre- and post-change settings, with the statistic staying near zero before the purported onset of the new wave, and taking off very rapidly (nearly vertically) after the onset. \section{Conclusion} \label{sec:concl} We considered the problem of the quickest detection of a change in the distribution of a sequence of independent observations, assuming that the pre-change observations are stationary with a known distribution, while the post-change observations are non-stationary with possible parametric uncertainty. Specifically, we assumed that the cumulative KL divergence between the post-change and the pre-change distributions grows super-linearly with time after the change point.
We derived a universal asymptotic lower bound on the worst-case expected detection delay under a constraint on the false alarm rate in this non-stationary setting, which had previously been derived only in the asymptotically stationary setting. We showed that the developed WL-CuSum procedure for a known post-change distribution, as well as the developed WL-GLR-CuSum procedure for unknown post-change parameters, asymptotically achieve the lower bound on the worst-case expected detection delay, as the false alarm rate goes to zero. We validated these theoretical results through numerical Monte-Carlo simulations. We also demonstrated that the proposed WL-GLR-CuSum procedure can be effectively used in monitoring pandemics. We also provided in Section~\ref{sec:ext} some possible avenues for future research, in particular, those allowing for dependent observations and more general false alarm constraints. \section{Introduction} The problem of quickest change detection (QCD) is of fundamental importance in a variety of applications and has been extensively studied in mathematical statistics (see, e.g., \cite{tartakovsky_sequential,tartakovsky_qcd2020,vvv_qcd_overview,xie_vvv_qcd_overview} for overviews). Given a sequence of observations whose distribution changes at some unknown change point, the goal is to detect the change in distribution as quickly as possible after it occurs, while not making too many false alarms. In the classical formulations of the QCD problem, it is assumed that the observations are independent and identically distributed (i.i.d.) with known pre- and post-change distributions. In many practical situations, while it is reasonable to assume that we can accurately estimate the pre-change distribution, the post-change distribution is rarely completely known. Furthermore, in many cases, it is reasonable to assume that the system is in a steady state before the change point and produces i.i.d.\ observations, but in the post-change mode the observations may be substantially non-identically distributed, i.e., non-stationary. For example, in the pandemic monitoring problem, the distribution of the number of people infected daily might have achieved a steady (stationary) state before the start of a new wave, but after the onset of a new wave, the post-change observations may no longer be stationary. Indeed, during the early phase of the new wave, the mean of the post-change distribution grows approximately exponentially. We will address the pandemic monitoring problem in detail in Section~\ref{sec:num-res}. In this paper, our main focus is on the QCD problem with independent observations\footnote{The extension to the case of dependent observations is discussed in Section~\ref{sec:ext}.}, where the pre-change observations are assumed to be stationary with a known distribution, while the post-change observations are allowed to be non-stationary with some possible parametric uncertainty in their distribution. There have been extensions of the classical formulation to the case where the pre- and/or post-change distributions are not fully known and observations may be non-i.i.d., i.e., dependent and non-identically distributed. For the i.i.d. case with parametric uncertainty in the post-change regime, Lorden~\cite{lorden1971} proposed a generalized likelihood ratio (GLR) Cumulative Sum (CuSum) procedure, and proved its asymptotic optimality in the minimax sense as the false alarm rate goes to zero, for one-parameter exponential families.
An alternative to the GLR-CuSum, the mixture-based CuSum, was proposed and studied by Pollak \cite{pollakmixture} in the same setting as in \cite{lorden1971}. The GLR approach has been studied in detail for the problem of detecting a change in the mean of a Gaussian i.i.d. sequence with an unknown post-change mean by Siegmund~\cite{siegmund1995}. Both the mixture-based and GLR-CuSum procedures have been studied by Lai~\cite{lai1998} in the pointwise setting in the non-i.i.d. case of possibly dependent and non-identically distributed observations, with parametric uncertainty in the post-change regime. More specifically, Lai assumed that the log-likelihood ratio process (between the pre- and post-change distributions) normalized by the number of observations $n$ converges to a positive and finite constant as $n \to\infty$, which can be interpreted as a Kullback-Leibler (KL) information number. In the case of independent (but non-identically distributed) observations this means that the expected value of the log-likelihood ratio process grows approximately linearly in the number of observations $n$, for large $n$. Tartakovsky~\cite{TartakovskySISP98} and Tartakovsky et al.~\cite{tartakovsky_sequential} refer to such a case as the ``asymptotically homogeneous'' (or stationary) case. Lai developed a universal lower bound on the worst-case expected delay, as well as on the expected delay to detection for every change point, and proved that a specially designed window-limited (WL) CuSum procedure asymptotically achieves the lower bound as the maximal probability of false alarm approaches 0, when both pre- and post-change distributions are completely known, i.e., that the designed WL-CuSum procedure is asymptotically pointwise optimal to first order. For the case where the post-change distribution has parametric uncertainty, Lai proposed and analyzed a WL-GLR-CuSum procedure. A general Bayesian theory for non-i.i.d. asymptotically stationary stochastic models has been developed by Tartakovsky and Veeravalli~\cite{TartakovskyVeerTVP05} and Tartakovsky~\cite{TartakovskyIEEEIT2017} for the discrete-time scenario, and by Baron and Tartakovsky~\cite{BaronTartakovskySA06} for the continuous-time scenario, when both pre- and post-change models are completely known. It was shown in these works that a Shiryaev-type change detection procedure minimizes not only the average detection delay but also higher moments of the detection delay asymptotically, as the weighted probability of false alarm goes to zero, under very general conditions on the prior distribution of the change point. Extensions of these results to the case of the parametric composite post-change hypothesis have been provided by Tartakovsky~\cite{TartakovskyIEEEIT2019,tartakovsky_qcd2020}, where it has been shown that the mixture Shiryaev-type detection rule is asymptotically first-order optimal in the Bayesian setup, and by Pergamenchtchikov and Tartakovsky~\cite{PerTar-JMVA2019}, where it was shown that the mixture Shiryaev-Roberts-type procedure is pointwise and minimax asymptotically optimal in the non-Bayesian setup. Note that all the cited works focus on the asymptotically stationary case.
To the best of our knowledge, the asymptotically non-stationary case, where the expected value of the log-likelihood ratio process normalized by some nonlinear function $g(n)$ converges to a positive and finite (information) number, has never been considered.\footnote{It should be noted that such an asymptotically non-stationary case has been previously considered for sequential hypothesis testing problems by Tartakovsky~\cite{TartakovskySISP98} and Tartakovsky et al.~\cite{tartakovsky_sequential}.} Our contributions are as follows: \begin{enumerate} \item We develop a universal asymptotic (as the false alarm rate goes to zero) lower bound on the worst-case expected delay for our problem setting with non-stationary post-change observations. \item We develop a WL-CuSum procedure that asymptotically achieves the lower bound on the worst-case expected delay when the post-change distribution is fully known. \item We develop and analyze a WL-GLR-CuSum procedure that asymptotically achieves the lower bound on the worst-case expected delay when the post-change distribution has parametric uncertainty. \item We validate our analysis through numerical results and demonstrate the use of our approach in monitoring pandemics. \end{enumerate} The rest of the paper is structured as follows. In Section~\ref{sec:info-bd}, we derive the information bounds and propose an asymptotically optimal WL-CuSum procedure when the post-change distribution is completely known. In Section~\ref{sec:unknown-param}, we propose an asymptotically optimal WL-GLR-CuSum procedure when the post-change distribution has unknown parameters. In Section~\ref{sec:ext}, we discuss possible extensions to the general non-i.i.d. case where the observations can be dependent and non-stationary. In Section~\ref{sec:num-res}, we present some numerical results, including results on monitoring pandemics. We conclude the paper in Section~\ref{sec:concl}. In the Appendix, we provide proofs of certain results. \section{Information Bounds and Optimal Detection} \label{sec:info-bd} Let $\{X_n\}_{n\ge 1}$ be a sequence of independent random variables (generally vectors), and let $\nu$ be a change point. Assume that $X_1, \dots, X_{\nu-1}$ all have density $p_0$ with respect to some non-degenerate, sigma-finite measure $\mu$ and that $X_\nu, X_{\nu+1}, \dots$ have densities $p_{1,\nu,\nu}, p_{1,\nu+1,\nu}, \ldots$, respectively, with respect to $\mu$. Note that the observations are allowed to be non-stationary after the change point and the post-change distributions may generally depend on the change point. Let $({\cal F}_{n})_{n\ge 0}$ be the filtration, i.e., ${\cal F}_{0}=\{\Omega,\varnothing\}$ and ${\cal F}_{n}=\sigma\left\{X_{\ell}, 1\le \ell \le n \right\}$ is the sigma-algebra generated by the vector of $n$ observations $X_1,\dots,X_n$, and let ${\cal F}_\infty= \sigma(X_1,X_2, \dots)$. In what follows, we denote by $\mathbb{P}_\nu$ the probability measure on the entire sequence of observations when the change-point is $\nu$. That is, under $\mathbb{P}_\nu$ the random variables $X_1, \dots, X_{\nu-1}$ are i.i.d. with the common (pre-change) density $p_0$ and $X_\nu, X_{\nu+1}, \dots$ are independent with (post-change) densities $p_{1,\nu,\nu}, p_{1,\nu+1,\nu}, \ldots$ . Let $\mathbb{E}_\nu$ denote the corresponding expectation. For $\nu=\infty$ this distribution will be denoted by $\mathbb{P}_\infty$ and the corresponding expectation by $\mathbb{E}_\infty$. Evidently, under $\mathbb{P}_\infty$ the random variables $X_1,X_2,\dots$ are i.i.d. with density $p_0$.
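As a concrete instance of this setup, the following sketch (ours; the Gaussian densities and the parameter values are illustrative assumptions, not the only models covered by the theory) draws a path under $\mathbb{P}_\nu$ with $p_0 = {\cal N}(\mu_0,\sigma_0^2)$ and a post-change mean that depends on $n-\nu$:

\begin{verbatim}
import numpy as np

def sample_path(nu, horizon, post_mean, mu0=0.1, sig0=1.0, seed=0):
    """X_1,...,X_horizon under P_nu: i.i.d. N(mu0, sig0^2) for n < nu,
    independent N(post_mean(n - nu), sig0^2) for n >= nu."""
    rng = np.random.default_rng(seed)
    means = np.array([mu0 if n < nu else post_mean(n - nu)
                      for n in range(1, horizon + 1)])
    return rng.normal(means, sig0)

# exponentially growing post-change mean (theta = 0.4), change at nu = 100
x = sample_path(nu=100, horizon=250,
                post_mean=lambda lag: 0.1 * np.exp(0.4 * lag))
\end{verbatim}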
In the sequel, we denote by $\tau$ Markov (stopping) times with respect to the filtration $({\cal F}_{n})_{n\ge 0}$, i.e., the event $\{\tau=n\}$ belongs to ${\cal F}_n$. The change-time $\nu$ is assumed to be unknown but deterministic. Let $\tau$ be a stopping time defined on the observation sequence associated with the detection rule, i.e., $\tau$ is the time at which we stop taking observations and declare that the change has occurred. The problem is to detect the change quickly, minimizing the delay to detection $\tau -\nu$, while not causing too many false alarms. \subsection{Classical Results under i.i.d. Model} \label{subsec:stat_postc} A special case of the model described above is where both the pre- and post-change observations are i.i.d., i.e., $p_{1,n,\nu} \equiv p_1$ for all $n \geq \nu \geq 1$. In this case, Lorden \cite{lorden1971} proposed solving the following optimization problem to find the best stopping time $\tau$: \begin{equation} \label{prob_def} \inf_{\tau \in \mathcal{C}_\alpha} \WADD{\tau} \end{equation} where \begin{equation} \label{LordenADD} \WADD{\tau} := \sup_{\nu \geq 1} \esssup \E{\nu}{\left(\tau-\nu+1\right)^+|{\cal F}_{\nu-1}} \end{equation} characterizes the worst-case expected delay, and $\esssup$ stands for essential supremum. The constraint set is \begin{equation} \label{fa_constraint} \mathcal{C}_\alpha := \left\{ \tau: \FAR{\tau} \leq \alpha \right\} \end{equation} with \begin{equation} \FAR{\tau} := \frac{1}{ \E{\infty}{\tau}} \label{eq:FAR_def} \end{equation} which guarantees that the false alarm rate of the algorithm does not exceed $\alpha$. Recall that $\E{\infty}{\cdot}$ is the expectation operator when the change never happens, and we use the conventional notation $(\cdot)^+:=\max\{0,\cdot\}$ for the nonnegative part. The mean time to a false alarm (MTFA) $\E{\infty}{\tau}$ is sometimes referred to as the {\em average run length to false alarm}. Lorden also showed that Page's CuSum detection algorithm \cite{page1954}, whose detection statistic is given by: \begin{equation} \label{cusum_stat} W(n) = \max_{1\leq k \leq n+1} \sum_{i=k}^n Z_i = \left(W(n-1) + Z_n \right)^+ \end{equation} solves the problem in \eqref{prob_def} asymptotically as $\alpha \to 0$. Here, $Z_n$ is the log-likelihood ratio defined as: \begin{equation} \label{eq:lorden_llr} Z_n = \log \frac{p_1(X_n)}{p_0(X_n)}. \end{equation} The CuSum stopping rule is given by: \begin{equation} \label{defstoppingrule} \tau_{\text{Page}}\left(b\right) := \inf \{n: W(n)\geq b \}. \end{equation} In particular, if the threshold is set as $b_\alpha = \abs{\log \alpha}$, then $\FAR{\tau_{\text{Page}}\left(b_\alpha\right)} \leq \alpha$ (see, e.g., \cite[Lemma~8.2.1]{tartakovsky_sequential}). It was shown by Moustakides \cite{moustakides1986} that the CuSum algorithm is exactly optimal for the problem in \eqref{prob_def} if the threshold $b=b_\alpha$ is selected so that $\FAR{\tau_{\text{Page}}\left(b_\alpha\right)} = \alpha$.
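For concreteness, the recursion in \eqref{cusum_stat} can be implemented in a few lines. The sketch below (ours; the Gaussian mean-shift densities, change point and threshold are illustrative assumptions) runs Page's CuSum on a stream of observations:

\begin{verbatim}
import numpy as np

def page_cusum(x, llr, b):
    """Page's CuSum: W(n) = max(W(n-1) + Z_n, 0); stop at the first n with
    W(n) >= b. Returns the 1-based stopping time, or None if never stopped."""
    w = 0.0
    for n, xn in enumerate(x, start=1):
        w = max(w + llr(xn), 0.0)          # the recursion in (cusum_stat)
        if w >= b:
            return n
    return None

# N(0,1) -> N(1,1) mean shift at nu = 200; Z_n = log p1(X_n)/p0(X_n) = X_n - 1/2
rng = np.random.default_rng(0)
nu = 200
x = np.concatenate([rng.normal(0.0, 1.0, nu - 1), rng.normal(1.0, 1.0, 300)])
alpha = 1e-3
tau = page_cusum(x, llr=lambda xn: xn - 0.5, b=abs(np.log(alpha)))
print(tau)   # detects shortly after nu; b = |log alpha| guarantees FAR <= alpha
\end{verbatim}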
If threshold $b_\alpha$ is selected in a special way that accounts for the overshoot of $W(n)$ over $b_\alpha$ at stopping, which guarantees the approximation $\FAR{\tau_{\text{Page}}\left(b_\alpha\right)} \sim \alpha$ as $\alpha\to0$, then we have the following third-order asymptotic approximation (as $\alpha\to 0$) for the worst-case expected detection delay of the optimal procedure: \begin{align*} \inf_{\tau \in \mathcal{C}_\alpha} \WADD{\tau} & = \WADD{\tau_{\text{Page}}(b_\alpha)} + o(1), \\ \WADD{\tau_{\text{Page}}(b_\alpha)} & = \frac{1}{{\KL{p_1}{p_0}}} (\abs{\log \alpha} - \mathsf{const} + o(1)) \end{align*} (see, e.g., \cite{tartakovsky_sequential}), which also implies the first-order asymptotic approximation (as $\alpha \to 0$): \begin{equation} \label{FOWADDPage} \inf_{\tau \in \mathcal{C}_\alpha} \WADD{\tau} \sim \WADD{\tau_{\text{Page}}\left(\abs{\log\alpha}\right)} = \frac{\abs{\log \alpha}}{\KL{p_1}{p_0}} (1+o(1)) \end{equation} where $Y_\alpha\sim G_\alpha$ is equivalent to $Y_\alpha = G_\alpha (1+o(1))$. Here $\KL{p_1}{p_0}$ is the Kullback-Leibler (KL) divergence between $p_1$ and $p_0$. Also, in the following we use the standard notation $o(x)$ as $x\to x_0$ for a function $f(x)$ such that $f(x)/x \to 0$ as $x\to x_0$, i.e., $o(1) \to 0$ as $\alpha \to 0$, and $O(x)$ for a function $f(x)$ such that $f(x)/x$ is bounded as $x \to x_0$, i.e., $O(1)$ is bounded. Along with Lorden's worst-case average detection delay $\WADD{\tau}$, defined in \eqref{LordenADD}, we can also consider the less pessimistic performance measure of Pollak \cite{PollakAS85}: \[ \SADD{\tau} := \sup_{\nu \geq 1} \E{\nu}{\tau-\nu+1| \tau \geq \nu}. \] Pollak suggested the following minimax optimization problem in class $\mathcal{C}_\alpha$: \begin{equation}\label{Pollak_problem} \inf_{\tau\in \mathcal{C}_\alpha} \SADD{\tau}. \end{equation} An alternative to CuSum is the Shiryaev-Roberts (SR) change detection procedure $\tau_{\text{SR}}$, based not on the maximization of the likelihood ratio over the unknown change point but on the summation of likelihood ratios (i.e., on averaging over a uniform prior distribution). As shown in \cite{tartakovsky_polpolunch_2012}, the SR procedure is second-order asymptotically minimax with respect to Pollak's measure: \[ \inf_{\tau\in \mathcal{C}_\alpha} \SADD{\tau} = \SADD{\tau_{\text{SR}}} + O(1) \quad \text{as}~ \alpha\to 0. \] The CuSum procedure with a certain threshold $b_\alpha$ also has a second-order optimality property with respect to the risk $\SADD{\tau}$. A detailed numerical comparison of the CuSum and SR procedures for i.i.d.\ models was performed in \cite{MoustPolTarCS09}. \subsection{Information Bounds for Non-stationary Post-Change Observations} \label{subsec:ext_ib} In the case where both the pre- and post-change observations are independent and the post-change observations are non-stationary, the log-likelihood ratio is: \begin{equation} \label{llr:def} Z_{n,k} = \log \frac{p_{1,n,k}(X_n)}{p_0(X_n)} \end{equation} where $n \geq k \geq 1$. Here $k$ is a hypothesized change-point and $X_n$ is drawn from the true distribution $\mathbb{P}_\nu$ ($\nu \in [1,\infty)$ or $\nu =\infty$). In the classical i.i.d. model described in Section~\ref{subsec:stat_postc}, the cumulative KL-divergence after the change point increases linearly in the number of observations. We generalize this condition as follows. Let $g_\nu: \mathbb{R}^+ \to \mathbb{R}^+$ be an increasing and continuous function, which we will refer to as a \emph{growth function}.
Note that the inverse of $g_\nu$, denoted by $g_\nu^{-1}$, exists and is also increasing and continuous. We assume that the expected sum of the log-likelihood ratios under $\mathbb{P}_\nu$, which corresponds to the cumulative Kullback-Leibler (KL) divergence for our non-stationary model, matches the value of the growth function at all positive integers, i.e., \begin{equation} \label{llr:growth} g_\nu(n) = \sum_{i=\nu}^{\nu+n-1} \E{\nu}{Z_{i,\nu}}, \quad \forall n \geq 1. \end{equation} Furthermore, we assume that $\E{\nu}{Z_{i,\nu}} > 0$ for all $i \geq \nu$, and that for each $x >0$ \begin{equation} g^{-1}(x) := \sup_{\nu \geq 1} g_\nu^{-1}(x) \end{equation} exists. Note that $g^{-1}$ is also increasing and continuous. A key assumption that we will need for our analysis is that $g^{-1}(x)$ satisfies \begin{equation} \label{llr:grow_cond} \log g^{-1}(x) = o(x) \quad \text{as}~ x \to \infty. \end{equation} We should note that such a growth function $g (n)$ has been adopted in sequential hypothesis testing with non-stationary observations \cite[Sec.~3.4]{tartakovsky_sequential}. In the special case where the post-change distribution is invariant to the change-point $\nu$, i.e., for $j\geq 0$, $p_{1,\nu+j,\nu}$ is not a function of $\nu$, we have $g \equiv g_\nu$ and $g^{-1} \equiv g_\nu^{-1}$, for all $\nu \geq 1$. The proof of asymptotic optimality is performed in two steps. First, we derive a first-order asymptotic (as $\alpha \to 0$) lower bound for the maximal expected detection delays $\inf_{\tau \in \mathcal{C}_{\alpha} }\WADD{\tau}$ and $\inf_{\tau \in \mathcal{C}_{\alpha} }\SADD{\tau}$. To this end, we need the following right-tail condition for the log-likelihood ratio process: \begin{equation} \label{llr:upper} \sup_{\nu \geq 1} \Prob{\nu}{\max_{t \leq n} \sum_{i=\nu}^{\nu+t-1} Z_{i,\nu} \geq (1+\delta) g_\nu(n)} \xrightarrow{n \to \infty} 0 \quad \forall \delta >0, \end{equation} together with the assumption that for all $\nu \ge 1$ \[ \frac{\sum_{i=\nu}^{\nu+t-1} Z_{i,\nu}}{g_\nu(t)} \xrightarrow[t\to\infty]{\text{in} ~ \mathbb{P}_\nu\text{-probability}} 1. \] In the second step, we show that this lower bound is attained by the Window-Limited (WL) CuSum procedure under the following left-tail condition \begin{equation} \label{llr:lower} \max_{t \geq \nu \geq 1} \Prob{\nu}{\sum_{i=t}^{t+n-1} Z_{i,t} \leq (1-\delta) g_\nu(n) } \xrightarrow{n \to \infty} 0 \quad \forall \delta \in (0, 1). \end{equation} The following lemma provides sufficient conditions under which conditions \eqref{llr:upper} and \eqref{llr:lower} hold for a sequence of independent and non-stationary observations. Hereafter we use the notation $\Var{\nu}{Y}= \E{\nu}{Y^2} - \left(\E{\nu}{Y}\right)^2$ for the variance of the random variable $Y$ under distribution $\mathbb{P}_\nu$. \begin{lemma} \label{llr:lemma} Consider the growth function $g_\nu(n)$ defined in \eqref{llr:growth}. Suppose that the sum of variances of the log-likelihood ratios satisfies \begin{equation} \label{llr:var} \sup_{t \geq \nu \geq 1} \frac{1}{g_\nu^2(n)} \sum_{i=t}^{t+n-1} \Var{\nu}{Z_{i,t}} \xrightarrow{n \to \infty} 0. \end{equation} Then condition \eqref{llr:upper} holds. If, in addition, for all $\nu \geq 1$ and all positive integers $\Delta$, \begin{equation} \label{llr:tshift} \E{\nu}{Z_{i,\nu}} \leq \E{\nu}{Z_{i+\Delta,\nu+\Delta}}, \end{equation} then condition \eqref{llr:lower} holds. \end{lemma} The proof is given in the appendix.
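Since the first-order asymptotics are phrased entirely in terms of $g$ and $g^{-1}$, it is often convenient to evaluate them numerically. The sketch below (ours) accumulates per-step KL terms to obtain $g(n)$ and inverts $g$ by a forward scan, assuming the per-step expected log-likelihood ratio is available in closed form, as in Example~\ref{gex} below:

\begin{verbatim}
import numpy as np

def growth(n, kl_step):
    """g(n) = sum_{i=0}^{n-1} E_nu[Z_{nu+i,nu}]; kl_step(i) is the i-th KL term."""
    return sum(kl_step(i) for i in range(n))

def growth_inverse(y, kl_step):
    """Smallest integer n with g(n) >= y (well defined since g is increasing)."""
    n, g = 0, 0.0
    while g < y:
        g += kl_step(n)
        n += 1
    return n

# per-step KL terms of Example gex: mu0^2 (e^{theta i} - 1)^2 / (2 sigma0^2)
mu0, sig2, theta = 0.1, 1.0, 0.4
kl = lambda i: mu0**2 * (np.exp(theta * i) - 1.0) ** 2 / (2.0 * sig2)
alpha = 1e-3
print(growth(10, kl), growth_inverse(abs(np.log(alpha)), kl))
# the second value is a numerical proxy for g^{-1}(|log alpha|)
\end{verbatim}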
\begin{remark} One can generalize condition \eqref{llr:tshift} in such a way that either $\E{\nu}{Z_{i,\nu}} \leq \E{\nu}{Z_{i+\Delta,\nu+\Delta}}$ or \begin{equation*} \frac{1}{g_\nu(n)} \sum_{i=\nu}^{\nu+n-1} \left( \E{\nu}{Z_{i,\nu}} - \E{\nu}{Z_{i+\Delta,\nu+\Delta}}\right) = o(1) \end{equation*} holds for all positive integers $\Delta$. \end{remark} \begin{example} \label{gex} Consider the following Gaussian exponential mean-change detection problem. Denote by ${\cal N}(\mu_0,\sigma_0^2)$ the Gaussian distribution with mean $\mu_0$ and variance $\sigma_0^2$. Let $X_1,\dots,X_{\nu-1}$ be distributed as ${\cal N}(\mu_0,\sigma_0^2)$, and for all $n \geq \nu$, let $X_n$ be distributed as ${\cal N}(\mu_0 e^{\theta(n-\nu)},\sigma_0^2)$. Here $\theta$ is some positive fixed constant. The log-likelihood ratio is given by: \begin{align} \label{gex:llr} Z_{n,t} = \log \frac{p_{1,n,t}(X_n)}{p_0(X_n)} &= -\frac{(X_n-\mu_0 e^{\theta(n-t)})^2}{2 \sigma_0^2} + \frac{(X_n-\mu_0)^2}{2 \sigma_0^2} \nonumber\\ &= \frac{\mu_0}{\sigma_0^2} (e^{\theta(n-t)} - 1) X_n - \frac{\mu_0^2 (e^{2 \theta (n-t)}-1)}{2 \sigma_0^2}. \end{align} Now, the growth function can be calculated as \begin{equation} \label{gex:growth} g_\nu(n) = \sum_{i=\nu}^{\nu+n-1} \E{\nu}{Z_{i,\nu}} = \sum_{i=0}^{n-1} \frac{\mu_0^2}{2 \sigma_0^2} (e^{\theta i}-1)^2. \end{equation} Since the post-change distribution is invariant to the change-point $\nu$, we have $g^{-1}(x) = g_1^{-1}(x) = O(\log x)$, so that $\log g^{-1}(x) = o(x)$ and \eqref{llr:grow_cond} is satisfied. Also, the sum of variances of the log-likelihood ratios is \begin{equation*} \sum_{i=t}^{t+n-1} \Var{\nu}{Z_{i,t}} = \sum_{i=t}^{t+n-1} \frac{\mu_0^2}{\sigma_0^4} (e^{\theta(i-t)} - 1)^2 \Var{\nu}{X_i} = 2 g_\nu(n) = o(g_\nu^2(n)) \end{equation*} for all $t \geq \nu$, which establishes condition \eqref{llr:var}. Further, for any $i \geq \nu$ and $\Delta \geq 1$, \begin{align*} \E{\nu}{Z_{i+\Delta,\nu+\Delta}} &= \frac{\mu_0}{\sigma_0^2} (e^{\theta(i-\nu)} - 1) \E{\nu}{X_{i+\Delta}} - \frac{\mu_0^2 (e^{2 \theta (i-\nu)}-1)}{2 \sigma_0^2}\\ &\geq \frac{\mu_0}{\sigma_0^2} (e^{\theta(i-\nu)} - 1) \E{\nu}{X_{i}} - \frac{\mu_0^2 (e^{2 \theta (i-\nu)}-1)}{2 \sigma_0^2} = \E{\nu}{Z_{i,\nu}} \end{align*} which establishes condition \eqref{llr:tshift}. \qed \end{example} The following theorem gives a lower bound on the worst-case average detection delays as $\alpha \to 0$ in class $ \mathcal{C}_{\alpha} $. \begin{theorem} \label{llr:delay_lower_bound} For $\delta\in(0,1)$ let \begin{equation} \label{eq:thm1_h} h_\delta (\alpha) := g^{-1} ((1-\delta) |\log\alpha|). \end{equation} Suppose that $g^{-1}(x)$ satisfies \eqref{llr:grow_cond}. Then for all $\delta\in(0,1)$ and some $\nu \ge 1$ \begin{equation}\label{supProb0} \lim_{\alpha \to 0} \sup_{\tau\in \mathcal{C}_{\alpha} } \Prob{\nu}{\nu \leq \tau < \nu + h_\delta (\alpha) } = 0 \end{equation} and as $\alpha \to 0$, \begin{equation} \label{LowerSADD} \inf_{\tau \in \mathcal{C}_{\alpha} } \WADD{\tau} \geq \inf_{\tau \in \mathcal{C}_{\alpha} } \SADD{\tau} \geq g^{-1}(\abs{\log{\alpha}}) (1+o(1)). \end{equation} \end{theorem} \begin{proof} Obviously, for any Markov time $\tau$, \[ \WADD{\tau} \geq \SADD{\tau} \ge \E{\nu}{(\tau -\nu)^+} .
\] Therefore, to prove the asymptotic lower bound \eqref{LowerSADD} we have to show that as $\alpha \to 0$, \begin{equation} \label{eq:thm1_main} \sup_{\nu \ge 1} \E{\nu}{(\tau -\nu)^+} \geq g^{-1} (|\log{\alpha}|) (1+o(1)), \end{equation} where the $o(1)$ term on the right-hand side does not depend on $\tau$, i.e., it is uniform in $\tau \in \mathcal{C}_{\alpha} $. To begin, let $\tau \in {\cal C}_\alpha$ be an arbitrary stopping time and note that by Markov's inequality, \[ \E{\nu}{(\tau -\nu)^+} \ge h_\delta (\alpha) \Prob{\nu}{(\tau -\nu)^+ \ge h_\delta (\alpha)}. \] Hence, if assertion \eqref{supProb0} holds, then for some $\nu \ge 1$ \[ \inf_{\tau \in \mathcal{C}_{\alpha} } \Prob{\nu}{(\tau -\nu)^+ \ge h_\delta (\alpha)} = 1 -o(1) \quad \text{as}~ \alpha\to0. \] This implies the asymptotic inequality \begin{equation} \label{Exptauplus} \inf_{\tau\in \mathcal{C}_{\alpha} } \E{\nu}{(\tau -\nu)^+} \ge h_\delta (\alpha) (1+o(1)), \end{equation} which holds for an arbitrary $ \delta\in (0,1)$ and some $\nu$. Since by our assumption the function $ h_\delta (\alpha) $ is continuous, taking the limit $\delta \to 0$ and maximizing over $\nu\ge 1$ yields inequality \eqref{eq:thm1_main}. It remains to prove \eqref{supProb0}. Changing the measure $\mathbb{P}_\infty \to \mathbb{P}_\nu$ and using Wald's likelihood ratio identity, we obtain the following chain of equalities and inequalities for any $C >0$ and $\delta\in (0,1)$: \begin{align*} & \Prob{\infty}{\nu \leq \tau < \nu + h_\delta (\alpha)} = \E{\nu}{\ind{0 \le \tau -\nu < h_\delta(\alpha)} \exp\left(- \sum_{i=\nu}^\tau Z_{i,\nu} \right)} \\ & \ge \E{\nu}{\ind{0 \le \tau -\nu < h_\delta(\alpha), \sum_{i=\nu}^\tau Z_{i,\nu} < C} \exp\left(- \sum_{i=\nu}^\tau Z_{i,\nu} \right) } \\ &\ge e^{-C} \Prob{\nu}{0 \le \tau -\nu < h_\delta(\alpha), \max_{0 \le n-\nu < h_\delta(\alpha) }\sum_{i=\nu}^n Z_{i,\nu} < C} \\ &\ge e^{-C} \left(\Prob{\nu}{0 \le \tau -\nu < h_\delta(\alpha)} - \Prob{\nu}{\max_{0 \le n < h_\delta(\alpha) }\sum_{i=\nu}^{\nu+n} Z_{i,\nu} \ge C}\right) \end{align*} where the last inequality follows from the fact that $\Pr({\cal A} \cap {\cal B}) \ge \Pr({\cal A}) - \Pr({\cal B}^c)$ for any events ${\cal A}$ and ${\cal B}$, where ${\cal B}^c$ is the complement of ${\cal B}$. Setting $C=g(h_\delta (\alpha)) (1+\delta) = (1-\delta^2) |\log\alpha|$ yields \begin{equation}\label{Probnuh} \Prob{\nu}{\nu \leq \tau < \nu + h_\delta (\alpha)} \le \kappa^{(\nu)}_{\delta,\alpha}(\tau) + \sup_{\nu \ge 1} \beta^{(\nu)}_{\delta,\alpha}, \end{equation} where \[ \kappa^{(\nu)}_{\delta,\alpha}(\tau) =e^{(1-\delta^2) |\log\alpha|} \Prob{\infty}{0 \le \tau -\nu < h_\delta(\alpha)} \] and \[ \beta^{(\nu)}_{\delta,\alpha}= \Prob{\nu}{\max_{0 \le n < h_\delta(\alpha) }\sum_{i=\nu}^{\nu+n} Z_{i,\nu} \ge (1+\delta) g(h_\delta (\alpha))}. \] Since $g(h_\delta (\alpha))\to \infty$ as $\alpha \to 0$, by condition \eqref{llr:upper}, \begin{equation}\label{subbetato0} \sup_{\nu \ge 1} \beta^{(\nu)}_{\delta,\alpha}\to 0. \end{equation} Next we turn to the evaluation of the term $\kappa^{(\nu)}_{\delta,\alpha}(\tau)$ for any stopping time $\tau \in {\cal C}_\alpha$.
It follows from Lemma 2.1 in \cite[page 72]{tartakovsky_qcd2020} that for any $M < \alpha^{-1}$, there exists some $\ell \geq 1$ (possibly depending on $\alpha$) such that \begin{equation} \label{eq:thm1_lemma} \Prob{\infty}{\ell \leq \tau < \ell+M} \le \Prob{\infty}{\tau < \ell + M | \tau \geq \ell} < M \, \alpha, \end{equation} so for some $\nu \ge 1$, \[ \kappa^{(\nu)}_{\delta,\alpha}(\tau) \le M \alpha e^{(1-\delta^2) |\log\alpha|} = M\alpha^{\delta^2}. \] If we choose $M \le M_\alpha= \floor{h_\delta(\alpha)^2}\Big|_{\delta = 0} = \floor{(g^{-1}(|\log\alpha|))^2}$, then for all sufficiently small $\alpha$, \begin{equation*} \log M \leq 2 \log g^{-1}(|\log\alpha|) = o(|\log\alpha|), \end{equation*} where the last equality follows from condition \eqref{llr:grow_cond}. Furthermore, \begin{equation*} M_\alpha \, \alpha^p \to 0 \quad \text{as}~ \alpha\to 0 \end{equation*} for any $p > 0$. To see this, assume for the purpose of contradiction that there exist some $p_0 > 0$ and $c_0 > 0$ such that $\lim_{\alpha \to 0} M_\alpha \alpha^{p_0} = c_0$. Then $\log M_\alpha = p_0 |\log \alpha| + \log c_0 + o(1)$, and thus $\log M_\alpha \neq o(|\log\alpha|)$, a contradiction. Hence, it follows that for some $\nu \ge 1$, which may depend on $\alpha$, as $\alpha \to 0$ \begin{equation}\label{kappainfto0} \inf_{\tau\in \mathcal{C}_{\alpha} } \kappa^{(\nu)}_{\delta,\alpha}(\tau) \le M_\alpha \alpha^{\delta^2} \to 0. \end{equation} Combining \eqref{Probnuh}, \eqref{subbetato0}, and \eqref{kappainfto0} we obtain that for some $\nu\ge 1$ \[ \Prob{\nu}{\nu \leq \tau < \nu + h_\delta (\alpha)} \le M_\alpha \alpha^{\delta^2} + \sup_{\nu \ge 1} \beta^{(\nu)}_{\delta,\alpha} = o(1), \] where the $o(1)$ term is uniform in $\tau \in \mathcal{C}_{\alpha}$. This yields assertion \eqref{supProb0}, and the proof is complete. \end{proof} \subsection{Asymptotically Optimal Detection with Non-stationary Post-Change Observations with Known Distributions} \label{subsec:asym-opt-det} Recall that under the classical setting, Page's CuSum procedure (in \eqref{defstoppingrule}) is optimal and has the following structure: \begin{equation} \tau_{\text{Page}}\left(b\right) = \inf \left\{n: \max_{1\leq k \leq n+1} \sum_{i=k}^n Z_i \geq b \right\} \end{equation} where $Z_i$ is the log-likelihood ratio when the post-change distributions are stationary (defined in \eqref{eq:lorden_llr}). When the post-change distributions are potentially non-stationary, the CuSum stopping rule is defined similarly as: \begin{equation} \label{TC:def} \tau_C\left(b\right) := \inf \left\{n:\max_{1 \leq k \leq n+1} \sum_{i=k}^{n} Z_{i,k} \geq b \right\} \end{equation} where $Z_{i,k}$ represents the log-likelihood ratio between densities $p_{1,i,k}$ and $p_0$ for observation $X_i$ (defined in \eqref{llr:def}). Here $i$ is the time index and $k$ is the hypothesized change point. Note that if the post-change distributions are indeed stationary, i.e., $p_{1,i,k} \equiv p_1$, we would get $Z_{i,k} \equiv Z_i$ for all $k \leq i$, and thus $\tau_C \equiv \tau_{\text{Page}}$. As shown in \eqref{cusum_stat}, Page's classical CuSum algorithm admits a recursive way to compute its test statistic. Unfortunately, despite having independent observations, the test statistic in \eqref{TC:def} cannot be computed recursively, even for the special case where the post-change distribution is invariant to the change-point as in Example~\ref{gex}.
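To see what the direct computation of \eqref{TC:def} entails, the following sketch (ours) evaluates the statistic at each time $n$ by enumerating every candidate change point $k$, using the model of Example~\ref{gex} with $\mu_0 = \sigma_0^2 = \theta = 1$; it also reproduces the backward-moving maximizer exhibited in the example that follows:

\begin{verbatim}
import numpy as np

def cusum_stat_direct(x):
    """Full-history statistic max_{1<=k<=n+1} sum_{i=k}^n Z_{i,k} at each n, with
    Z_{i,k} = (e^{i-k} - 1) x_i - (e^{2(i-k)} - 1)/2 (mu0 = sigma0^2 = theta = 1).
    Ties are broken toward smaller k; k = n+1 encodes the empty sum (value 0)."""
    out = []
    for n in range(1, len(x) + 1):
        best_val, best_k = -np.inf, None
        for k in range(1, n + 2):
            s = sum((np.exp(i - k) - 1.0) * x[i - 1]
                    - (np.exp(2 * (i - k)) - 1.0) / 2.0
                    for i in range(k, n + 1))
            if s > best_val:
                best_val, best_k = s, k
        out.append((best_val, best_k))
    return out

for n, (w, k) in enumerate(cusum_stat_direct([1.0, 0.0, 10.0]), start=1):
    print(n, round(w, 3), k)
# the maximizer is k = 2 at n = 2 but k = 1 at n = 3: it moves backward in time
\end{verbatim}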
\begin{example} Consider the Gaussian exponential mean-change detection problem defined in Example~\ref{gex}. Suppose $\mu_0 = \sigma_0^2 = \theta = 1$. Then, the log-likelihood ratio is given by \[ Z_{n,t} = (e^{n-t} - 1) X_n - \frac{e^{2 (n-t)} - 1}{2}. \] Note that $Z_{n,t}$ is a (linear) function of $X_n$. Consider the following realization: \[ X_1 = 1, \quad X_2 = 0, \quad X_3 = 10. \] It can be verified that \[ \arg \max_{1 \leq k \leq 3} \sum_{i=k}^{2} Z_{i,k} = 2,~\text{and}~\arg \max_{1 \leq k \leq 4} \sum_{i=k}^{3} Z_{i,k} = 1. \] Note that the maximizer $k^*$ moves backward in time in this case, in contrast to what happens when both the pre- and post-change observations follow i.i.d. models. The test statistic at time $n=2$ is a function of $X_2$ only, which is insufficient to construct the test statistic at time $n=3$, since the latter is a function of $X_1$ as well as of $X_2$ and $X_3$. \qed \end{example} For computational tractability, we therefore consider a window-limited (WL) version of the CuSum procedure in \eqref{TC:def}: \begin{equation} \label{TC:test} \Tilde{\tau}_C\left(b\right) := \inf \left\{n:\max_{n-m \leq k \leq {n+1}} \sum_{i=k}^n Z_{i,k} =: W(n) \geq b \right\} \end{equation} where $m$ is the window size. For $n < m$, the maximization is performed over $1\le k \le n$. In the asymptotic setting, $m=m_\alpha$ depends on $\alpha$ and should go to infinity as $\alpha \to 0$ at an appropriate rate. Specifically, following a similar condition that Lai~\cite{lai1998} used in the asymptotically stationary case, we shall require that $m_\alpha \to \infty$ as $\alpha \to 0$ in such a way that \begin{equation} \label{TC:m_alpha} \liminf_{\alpha \to 0} m_\alpha / g^{-1}(\abs{\log\alpha}) > 1. \end{equation} Since the range for the maximum is smaller in $\Tilde{\tau}_C(b)$ than in $\tau_C(b)$, given any realization of $X_1,X_2,\ldots$, if the test statistic of $\Tilde{\tau}_C(b)$ crosses the threshold $b$ at some time $n$, so does that of $\tau_C(b)$. Therefore, for any fixed threshold $b > 0$, \begin{equation} \label{eq:tautautil} \tau_C(b) \leq \Tilde{\tau}_C(b) \end{equation} almost surely. In the following, we first control the asymptotic false alarm rate of $\Tilde{\tau}_C(b)$ with an appropriately chosen threshold in Lemma~\ref{TC:fa}. Then, we obtain asymptotic approximations of the expected detection delays of $\Tilde{\tau}_C(b)$ in Theorem \ref{TC:delay}. Finally, we combine these two results and provide an asymptotically optimal solution to the problem in \eqref{prob_def} in Theorem~\ref{TC:asymp_opt}. \begin{lemma} \label{TC:fa} Suppose that $b_\alpha = \abs{\log\alpha}$. Then \begin{equation} \label{FARCUSUMm} \FAR{\Tilde{\tau}_C(b_\alpha)} \leq \alpha \quad \text{for all} ~ \alpha \in (0,1), \end{equation} i.e., $\Tilde{\tau}_C(b_\alpha) \in \mathcal{C}_{\alpha} $. \end{lemma} \begin{proof} Define the statistic \[ R_n = \sum_{k=1}^n \exp\left(\sum_{i=k}^n Z_{i,k}\right), \quad R_0=0, \] and the corresponding stopping time $T_b:=\inf\{n: R_n \ge e^b\}$. We now show that $\E{\infty}{T_b} \ge e^b$, which implies that $\E{\infty}{\Tilde{\tau}_C(b)} \ge e^b$ for any $b>0$ since, evidently, $\Tilde{\tau}_C(b) \ge T_b$ for any $b>0$. Recall that ${\cal F}_n=\sigma(X_\ell, 1\le \ell \le n)$ denotes the sigma-algebra generated by $(X_1,\dots,X_n)$. Since $\E{\infty}{e^{Z_{n,k}}|{\cal F}_{n-1}}=1$, it is easy to see that \[ \E{\infty}{R_n | {\cal F}_{n-1}} = 1 + R_{n-1} \quad \text{for}~ n \ge 1.
\] Consequently, the statistic $\{R_n-n\}_{n \ge 1}$ is a zero-mean $(\mathbb{P}_\infty,{\cal F}_n)$-martingale. It suffices to assume that $\E{\infty}{T_b}<\infty$ since otherwise the statement is trivial. Then, $\E{\infty}{R_{T_b}-T_b}$ exists and also \[ \liminf_{n\to\infty} \int_{\{T_b >n\}} |R_n - n| \mathrm{d} \mathbb{P}_\infty = 0 \] since $0 \le R_n < e^b$ on the event $\{T_b >n\}$. Hence, we can apply the optional sampling theorem (see, e.g., \cite[Th 2.3.1, page 31]{tartakovsky_sequential}), which yields $\E{\infty}{R_{T_b}} = \E{\infty}{T_b}$. Since $R_{T_b} \ge e^b$ it follows that $\E{\infty}{\Tilde{\tau}_C(b)} \ge \E{\infty}{T_b} \ge e^b$. Now, setting $b_\alpha = \abs{\log\alpha}$ implies the inequality \begin{equation} \label{FARCUSUMbalpha} \E{\infty}{\Tilde{\tau}_C(b_\alpha)} \ge e^{b_\alpha} = \frac{1}{\alpha} \end{equation} (for any $m_\alpha \ge 1$), and therefore \eqref{FARCUSUMm} follows. \end{proof} The following result establishes the asymptotic performance of the WL-CuSum procedure given in \eqref{TC:test} for large threshold values. \begin{theorem} \label{TC:delay} Fix $\delta \in (0,1)$ and let $N_{b,\delta} := \lfloor g^{-1}(b /(1-\delta)) \rfloor$. Suppose that in the WL-CuSum procedure the size of the window $m=m_b$ diverges (as $b \to \infty$) in such a way that \begin{equation}\label{Condmb} m_b \ge N_{b,\delta} (1+o(1)). \end{equation} Further, suppose that conditions \eqref{llr:upper} and \eqref{llr:lower} hold for $Z_{n,k}$ when $n \geq k \geq 1$. Then, as $b \to \infty$, \begin{equation}\label{SADDasympt} \SADD{\Tilde{\tau}_C(b)} \sim \WADD{\Tilde{\tau}_C(b)} \sim g^{-1} (b) . \end{equation} \end{theorem} \begin{proof} Since $\FAR{\Tilde{\tau}_C(b)} \le e^{-b}$, the window-limited CuSum procedure $\Tilde{\tau}_C(b)$ belongs to class $ \mathcal{C}_{\alpha} $ with $\alpha=e^{-b}$. Hence, replacing $\alpha$ by $e^{-b}$ in the asymptotic lower bound \eqref{LowerSADD} in Theorem~\ref{llr:delay_lower_bound}, we obtain that under condition \eqref{llr:upper} the following asymptotic lower bound holds: \begin{equation}\label{LBtaumb} \liminf_{b\to\infty} \frac{\WADD{\Tilde{\tau}_C(b)}}{g^{-1}(b)} \ge \liminf_{b\to\infty} \frac{\SADD{\Tilde{\tau}_C(b)}}{g^{-1}(b)} \ge 1 . \end{equation} Thus, to establish \eqref{SADDasympt} it suffices to show that under condition \eqref{llr:lower}, as $b\to\infty$, \begin{equation}\label{UpperSADD} \WADD{\Tilde{\tau}_C(b)} \leq g^{-1}(b) (1+o(1)). \end{equation} Note that we have the following chain of equalities and inequalities: \begin{align} \label{ExpkTAplus} &\E{\nu}{(\Tilde{\tau}_C(b)-\nu)^+ | {\cal F}_{\nu-1}} \nonumber \\ &= \sum_{\ell=0}^{\infty} \int_{\ell N_{b,\delta}}^{(\ell+1) N_{b,\delta}} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > t | {\cal F}_{\nu-1}} \, \mathrm{d} t \nonumber\\ & \le N_{b,\delta} + \sum_{\ell =1}^{\infty} \int_{\ell N_{b,\delta}}^{(\ell +1) N_{b,\delta}} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > t |{\cal F}_{\nu-1}} \, \mathrm{d} t \nonumber\\ & \le N_{b,\delta} + \sum_{\ell=1}^{\infty} \int_{\ell N_{b,\delta}}^{(\ell+1) N_{b,\delta}} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1}} \, \mathrm{d} t \nonumber\\ & = N_{b,\delta} \left(1 + \sum_{\ell=1}^{\infty} \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1}}\right) . \end{align} Define $\lambda_{n,k} := \sum_{i=k}^n Z_{i,k}$ and $K_{n} := \nu+n N_{b,\delta}$. We have $W(n) = \max_{n-m_b \le k \le n+1} \lambda_{n,k}$.
Since, by condition \eqref{Condmb}, $m_b \ge N_{b,\delta}$ for all sufficiently large $b$, for any $n \ge 1$, \[ W(\nu+n N_{b,\delta}) \ge \lambda_{K_{n}, K_{n-1}} \] and we have \begin{align}\label{Needit} &\Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1} } \nonumber \\ & =\Prob{\nu}{W(1) < b, \dots, W(\nu+\ell N_{b,\delta}) <b | {\cal F}_{\nu-1}} \nonumber \\ & \le \Prob{\nu}{W(\nu + N_{b,\delta}) < b, \dots, W(\nu+\ell N_{b,\delta}) <b | {\cal F}_{\nu-1}} \nonumber \\ & \le \Prob{\nu}{\lambda_{K_1, K_0}< b, \dots, \lambda_{K_{\ell}, K_{\ell-1}} < b | {\cal F}_{\nu-1}}\nonumber \\ &= \prod_{n=1}^\ell \Prob{\nu}{\lambda_{K_{n}, K_{n-1}}< b} , \end{align} where the last equality follows from the mutual independence of the statistics $\lambda_{K_{n},K_{n-1}}$, $n \ge 1$, which are functions of disjoint blocks of observations. By condition \eqref{llr:lower}, for a sufficiently large $b$ there exists a small $\varepsilon_b$ such that \[ \Prob{\nu}{\lambda_{K_{n}, K_{n-1}} < b} \le \varepsilon_b, \quad \forall n \ge 1. \] Therefore, for any $\ell \ge 1$, \[ \Prob{\nu}{\Tilde{\tau}_C(b)-\nu > \ell N_{b,\delta} | {\cal F}_{\nu-1}} \le \varepsilon_b^\ell. \] Combining this inequality with \eqref{ExpkTAplus} and using the fact that $\sum_{\ell=1}^\infty \varepsilon_b^\ell = \varepsilon_b(1-\varepsilon_b)^{-1}$, we obtain \begin{align}\label{Momineqrho} \E{\nu}{(\Tilde{\tau}_C(b)-\nu)^+ | {\cal F}_{\nu-1} } \le N_{b,\delta} \left(1 + \frac{\varepsilon_b}{1-\varepsilon_b}\right) = \frac{\lfloor g^{-1}(b /(1-\delta)) \rfloor}{1-\varepsilon_b}. \end{align} Since the right-hand side of this inequality does not depend on $\nu$, and since $g^{-1}(b /(1-\delta))\to \infty$ as $b\to \infty$ while $\varepsilon_b$ and $\delta$ can be made arbitrarily small, this implies the upper bound \eqref{UpperSADD}. The proof is complete. \end{proof} Using Lemma~\ref{TC:fa} and Theorem~\ref{TC:delay}, we obtain the following asymptotic result, which establishes the asymptotic optimality of the WL-CuSum procedure and its asymptotic operating characteristics. \begin{theorem} \label{TC:asymp_opt} Suppose that the threshold $b_\alpha$ is selected so that $\FAR{\Tilde{\tau}_C(b_\alpha)} \le \alpha$ and $b_\alpha \sim|\log \alpha|$ as $\alpha\to 0$; in particular, one may take $b_\alpha =|\log \alpha|$. Further, suppose that the left-tail condition \eqref{llr:upper} and the right-tail condition \eqref{llr:lower} hold for $Z_{n,k}$ when $n \geq k \geq 1$. Then, the WL-CuSum procedure in \eqref{TC:test} with the window size $m_\alpha$ that satisfies the condition \begin{equation}\label{Condmalpha} m_\alpha \ge g^{-1}(|\log\alpha|) (1+o(1)) \quad \text{as}~ \alpha \to 0 \end{equation} solves the problems \eqref{prob_def} and \eqref{Pollak_problem} asymptotically to first order as $\alpha \to 0$, i.e., \begin{equation} \label{FOAOCUSUM} \begin{split} \inf_{\tau\in \mathcal{C}_{\alpha} } \WADD{\tau} & \sim \WADD{\Tilde{\tau}_C(b_\alpha)} , \\ \inf_{\tau\in \mathcal{C}_{\alpha} } \SADD{\tau} & \sim \SADD{\Tilde{\tau}_C(b_\alpha)} \end{split} \end{equation} and \begin{equation} \label{FOAPPRCUSUM} \SADD{\Tilde{\tau}_C(b_\alpha)} \sim \WADD{\Tilde{\tau}_C(b_\alpha)} \sim g^{-1} (\abs{\log{\alpha}}). \end{equation} \end{theorem} \begin{proof} Let $b_\alpha$ be so selected that $\FAR{\Tilde{\tau}_C(b_\alpha)}\leq \alpha$ and $b_\alpha \sim |\log \alpha|$ as $\alpha\to0$. Then by Theorem~\ref{TC:delay}, as $\alpha\to0$ \[ \SADD{\Tilde{\tau}_C(b_\alpha)} \sim \WADD{\Tilde{\tau}_C(b_\alpha)} \sim g^{-1} (|\log \alpha|) .
\] Comparing these asymptotic equalities with the asymptotic lower bound \eqref{LowerSADD} in Theorem~\ref{llr:delay_lower_bound} immediately yields the asymptotics \eqref{FOAOCUSUM} and \eqref{FOAPPRCUSUM}. In particular, if $b_\alpha = \abs{\log\alpha}$, then by Lemma~\ref{TC:fa} $\FAR{\Tilde{\tau}_C(b_\alpha)}\leq \alpha$, and therefore the assertions hold. \end{proof} \begin{remark} Clearly, the asymptotic optimality result still holds when no window is applied, i.e., when $m_\alpha = n-1$, so that the maximization in \eqref{TC:test} is over all $1 \le k \le n+1$. \end{remark} \begin{example} Consider the same setting as in Example~\ref{gex}. We have shown that conditions \eqref{llr:var} and \eqref{llr:tshift} hold in this setting, and thus \eqref{llr:upper} and \eqref{llr:lower} also hold by Lemma~\ref{llr:lemma}. Considering the growth function $g (n)$ given in \eqref{gex:growth}, as $n \to \infty$, we obtain \begin{equation*} g (n) = \sum_{i=0}^{n-1} \frac{\mu_0^2}{2 \sigma_0^2} (e^{\theta i}-1)^2 = \frac{\mu_0^2}{2 \sigma_0^2} \cdot \frac{e^{2 \theta n}}{e^{2\theta}-1} (1+o(1)). \end{equation*} Thus, as $y \to \infty$, \begin{equation*} g^{-1}(y) = \frac{1}{2 \theta} \log\left( \frac{2 \sigma_0^2}{\mu_0^2} y\right) (1+o(1)) \end{equation*} (multiplicative constants are absorbed into the $1+o(1)$ factor), and if $b_\alpha = |\log \alpha|$ or, more generally, $b_\alpha\sim |\log \alpha|$ as $\alpha\to 0$, we obtain \begin{align} \label{gex:perf} \WADD{\Tilde{\tau}_C(b_\alpha)} &= \frac{1}{2 \theta} \log\left( \frac{2 \sigma_0^2 }{\mu_0^2} \abs{\log \alpha}\right) (1+o(1)) \nonumber\\ &\sim \frac{1}{2 \theta} \log(\abs{\log \alpha}). \end{align} \end{example} \section{Asymptotically Optimum Procedure for Non-Stationary Post-Change Observations with Parametric Uncertainty} \label{sec:unknown-param} We now study the case where the evolution of the post-change distribution is parametrized by an unknown but deterministic parameter $\theta \in \mathbb{R}^d$. Let $X_\nu, X_{\nu+1}, \dots$ have densities $p_{1,0}^{\theta},p_{1,1}^{\theta},\dots$, respectively, with respect to the common non-degenerate measure $\mu$, when the post-change parameter is $\theta$. Let $\mathbb{P}_{k,\theta}$ and $\mathbb{E}_{k,\theta}$ denote, respectively, the probability measure on the entire sequence of observations and the corresponding expectation, when the change point is $\nu=k<\infty$ and the post-change parameter is $\theta$. Let $\Theta \subset \mathbb{R}^d$ be an open and bounded set of parameter values, with $\theta \in \Theta$. The log-likelihood ratio process is given by: \begin{equation} \label{TG:llr} Z_{n,k}^{\theta} = \log \frac{p_{1,n,k}^{\theta}(X_n)}{p_0(X_n)} \end{equation} for any $n \geq k$ and $\theta \in \Theta$. Also, the growth function in \eqref{llr:growth} is redefined as \begin{equation} \label{TG:growth} g_{\nu,\theta}(n) = \sum_{i=\nu}^{\nu+n-1} \E{\nu,\theta}{Z^{\theta}_{i,\nu}}, \quad \forall n \geq 1, \end{equation} and it is assumed that $g_\theta^{-1}(x) = \sup_{\nu \geq 1} g_{\nu,\theta}^{-1}(x)$ exists. It is also assumed that \begin{equation} \label{TG:growth_cond} \log g_\theta^{-1}(x) = o(x) \quad \text{as}~ x \to \infty. \end{equation} The goal in this section is to solve the optimization problems \eqref{prob_def} and \eqref{Pollak_problem} asymptotically as $\alpha \to 0$, under parameter uncertainty.
More specifically, for $\theta \in \Theta$, define Lorden's and Pollak's worst-case expected detection delay measures \[ \WADDth{\tau} := \esssup \sup_{\nu \geq 1} \E{\nu,\theta}{(\tau-\nu+1)^+ | {\cal F}_{\nu-1}} \] and \[ \SADDth{\tau} := \sup_{\nu \geq 1} \E{\nu, \theta}{\tau-\nu+1| \tau \geq \nu} \] and the corresponding asymptotic optimization problems: find a change detection procedure $\tau^*$ that minimizes these measures to first order in class $ \mathcal{C}_{\alpha} $, i.e., for all $\theta\in\Theta$, \begin{equation}\label{AsymptProblems} \lim _{\alpha \to 0}\frac{ \inf_{\tau\in \mathcal{C}_{\alpha} } \WADDth{\tau}}{\WADDth{\tau^*}} = 1, \quad \lim_{\alpha\to0}\frac{\inf_{\tau\in \mathcal{C}_{\alpha} } \SADDth{\tau}}{\SADDth{\tau^*}} = 1. \end{equation} Consider the following window-limited GLR CuSum stopping procedure: \begin{equation} \label{TG:test} \Tilde{\tau}_G\left(b\right) := \inf \left\{n:\max_{n-m_b \leq k \leq {n+1}} \sup_{\theta \in \Theta_b} \sum_{i=k}^n Z_{i,k}^{\theta} \geq b \right\} \end{equation} where $\Theta_b \nearrow \Theta$ as $b \nearrow \infty$, so that it is guaranteed that $\theta \in \Theta_b$ for all large enough $b$. For $n < m_b$, the maximization is performed over $1\le k \le n$. Since we are interested in class $ \mathcal{C}_{\alpha} =\{\tau: \FAR{\tau}\le \alpha\}$, in which case both the threshold $b=b_\alpha$ and the window size $m_b=m_\alpha$ are functions of $\alpha$, we will write $\Theta_{b}=\Theta_\alpha$ and suppose that $\Theta_\alpha \subset \mathbb{R}^d$ is compact for each $\alpha$. Hereafter, $\hat{\theta}_{n,k}$ denotes the maximizer of $\theta \mapsto \sum_{i=k}^n Z_{i,k}^{\theta}$ over $\Theta_\alpha$, and we omit its dependency on $\alpha$ for brevity. In this paper, we focus on the case where $\Theta_\alpha$ is a continuous (non-discrete) set for all $\alpha$; the discrete case is simpler and will be considered elsewhere. The following assumption is made to guarantee the existence of an upper bound on the FAR. \begin{assumption} \label{TG:smooth} There exists $\varepsilon > 0$ such that for any large enough $b > 0$, \begin{equation} \label{TG:llr_sec_der} \Prob{\infty}{\max_{(k,n):k \leq n \leq k+m_b} \sup_{\theta: \norm{\theta-\hat{\theta}_{n,k}} < b^{-\frac{\varepsilon}{2}}} \emax{- \nabla_\theta^2 \sum_{i=k}^n Z_{i,k}^{\theta}} \leq 2 b^{\varepsilon}} \geq 1 - \varepsilon_b \end{equation} where $\emax{A}$ represents the maximum absolute eigenvalue of a symmetric matrix $A$ and $\varepsilon_b \searrow 0$ as $b \nearrow \infty$. \end{assumption} \begin{example} Consider again the Gaussian exponential mean-change detection problem in Example~\ref{gex}. Now we consider the case where the exact value of the post-change exponent coefficient $\theta$ is unknown, with $\theta \in \Theta = [\Theta_{\text{min}},\Theta_{\text{max}}]$. Note that $\theta$ characterizes the entire post-change evolution rather than a single post-change distribution. We shall verify Assumption~\ref{TG:smooth} below. Recalling the definition of the log-likelihood ratio given in \eqref{gex:llr}, for any $\theta \in \Theta$ and $k \leq i \leq n$ where $n-k\leq m_b$, we have \begin{align} -\frac{\partial^2}{\partial \theta^2} Z^\theta_{i,k} &= -\frac{\partial^2}{\partial \theta^2} \left(\frac{\mu_0}{\sigma_0^2} (e^{\theta(i-k)} - 1) X_i - \frac{\mu_0^2 (e^{2 \theta (i-k)}-1)}{2 \sigma_0^2}\right) \nonumber\\ &= -\frac{\mu_0}{\sigma_0^2} (i-k)^2 e^{\theta(i-k)} X_i + 2 (i-k)^2 \frac{\mu_0^2 e^{2 \theta (i-k)}}{\sigma_0^2}\nonumber\\ &= \frac{\mu_0}{\sigma_0^2} (i-k)^2 e^{\theta(i-k)} (2 \mu_0 e^{\theta(i-k)} - X_i).
\end{align} Therefore, \begin{align} \label{gex:sec_der} &\max_{(k,n):k \leq n \leq k+m_b} \sup_{\theta \in \Theta} \abs{-\frac{\partial^2}{\partial \theta^2} \sum_{i=k}^n Z^\theta_{i,k}}\nonumber\\ &= \sup_{\theta \in \Theta} \max_{(k,n):k \leq n \leq k+m_b} \frac{\mu_0}{\sigma_0^2} \abs{\sum_{i=k}^n (i-k)^2 e^{\theta(i-k)} (2 \mu_0 e^{\theta (i-k)} - X_i)} \nonumber\\ &\leq \sup_{\theta \in \Theta} \frac{\mu_0}{\sigma_0^2} m_b^2 e^{\theta m_b} \left(2 \mu_0 m_b e^{\theta m_b} + \max_{(k,n):k \leq n \leq k+m_b} \abs{\sum_{i=k}^n X_i}\right) \nonumber\\ &\stackrel{(*)}{\leq} \sup_{\theta \in \Theta} \frac{4\mu_0^2}{\sigma_0^2} m_b^3 e^{2 \theta m_b} \leq \frac{4\mu_0^2}{\sigma_0^2} m_b^3 e^{2 \Theta_{\text{max}} m_b} \end{align} where $(*)$ holds provided that \[ \max_{(k,n):k \leq n \leq k+m_b} \abs{\sum_{i=k}^n X_i} < 2 \mu_0 m_b e^{\theta m_b}. \] Since the $X_i$'s are i.i.d. under $\mathbb{P}_\infty$, $\sum_{i=k}^n X_i$ has a Gaussian distribution with mean $\leq (m_b+1) \mu_0$ and variance $\leq (m_b+1) \sigma_0^2$. Therefore, for any $\theta \in \Theta$, \begin{align*} & \Prob{\infty}{\max_{(k,n):k \leq n \leq k+m_b} \abs{\sum_{i=k}^n X_i} > 2 \mu_0 m_b e^{\theta m_b}}\\ &\leq \Prob{\infty}{\abs{\sum_{i=1}^{m_b} X_i} > 2 \mu_0 m_b e^{\theta m_b}} \\ &= 2 Q\left( \frac{2 \mu_0 m_b e^{\theta m_b} - m_b \mu_0}{\sigma_0\sqrt{m_b+1}} \right)\\ &\leq 2\exp\left(-\frac{2 \mu_0^2 m_b^2 (e^{\theta m_b}-1)^2}{\sigma_0^2 (m_b+1)}\right) \searrow 0,~\text{as $b \to \infty$} \end{align*} where $Q(x)= (2\pi)^{-1/2} \int_{x}^\infty e^{-t^2/2} \mathrm{d} t$ is the standard Q-function. Recalling the condition in \eqref{Condmb} on the window size and using the formula \eqref{gex:perf} for the worst-case expected delay, we obtain that if we set \begin{equation*} m_b = \frac{1}{2 \Theta_{\text{min}}}\log b \end{equation*} then \begin{equation*} \frac{4\mu_0^2}{\sigma_0^2} m_b^3 e^{2 \Theta_{\text{max}} m_b} = O\left((\log b)^3\, b^{\Theta_{\text{max}}/\Theta_{\text{min}}}\right). \end{equation*} Then Assumption~\ref{TG:smooth} holds when $\varepsilon = (1+\delta)\Theta_{\text{max}}/\Theta_{\text{min}}$ with arbitrary $\delta>0$. \qed \end{example} Note that $\WADDth{\Tilde{\tau}_G(b)} \leq \WADDth{\Tilde{\tau}_C(b)}$ for any threshold $b > 0$. In order to establish the asymptotic optimality of the WL-GLR-CuSum procedure, we need the following lemma, which allows us to select the threshold $b=b_\alpha$ in such a way that the FAR of $\Tilde{\tau}_G(b)$ is controlled at least asymptotically. \begin{lemma} \label{TG:fa} Suppose that the log-likelihood ratio $\{Z_{n,k}^{\theta}\}_{n\ge k}$ satisfies \eqref{TG:llr_sec_der}. Then, as $b\to\infty$, \begin{equation}\label{FARGLR} \FAR{\Tilde{\tau}_G(b)} \le |\Theta_\alpha| C_d^{-1} b^{\frac{\varepsilon d}{2}} e^{1-b} (1+o(1)), \end{equation} where $C_d =\frac{\pi^{d/2}}{\Gamma(1+d/2)}$ is a constant that does not depend on $\alpha$. Consequently, if $b=b_\alpha$ satisfies the equation \begin{equation} \label{TG:thr} |\Theta_\alpha| C_d^{-1} b_\alpha^{\frac{\varepsilon d}{2}} e^{1-b_\alpha} = \alpha, \end{equation} then $\FAR{\Tilde{\tau}_G(b_\alpha)} \le \alpha (1+o(1))$ as $\alpha\to0$. \end{lemma} \begin{remark} Since $\abs{\Theta_\alpha} \leq \abs{\Theta} < \infty$, it follows from \eqref{TG:thr} that $b_\alpha \sim \abs{\log\alpha}$ as $\alpha \to 0$. \end{remark} The proof of Lemma~\ref{TG:fa} is given in the appendix. The following theorem establishes the asymptotic optimality properties of the WL-GLR-CuSum detection procedure.
\begin{theorem} \label{TG:asymp_opt} Suppose that the threshold $b = b_\alpha$ is so selected that $\FAR{\Tilde{\tau}_G(b_\alpha)}\leq \alpha$, or at least so that $\FAR{\Tilde{\tau}_G(b_\alpha)}\leq \alpha (1+o(1))$, and $b_\alpha \sim |\log \alpha|$ as $\alpha \to 0$; in particular, $b_\alpha$ may be chosen from equation \eqref{TG:thr} in Lemma~\ref{TG:fa}. Further, suppose that conditions \eqref{llr:upper}, \eqref{llr:lower} and \eqref{TG:llr_sec_der} hold for $\{Z_{n,k}^{\theta}\}_{n \ge k}$. Then, the window-limited GLR CuSum procedure $\Tilde{\tau}_G(b_\alpha)$ defined by \eqref{TG:test} with the window size $m_\alpha$ that satisfies the condition \eqref{Condmalpha} solves the first-order asymptotic optimization problems \eqref{AsymptProblems} uniformly for all parameter values $\theta \in \Theta$, and \begin{equation} \label{FOAPPRGCUSUM} \SADDth{\Tilde{\tau}_G(b_\alpha)} \sim \WADDth{\Tilde{\tau}_G(b_\alpha)} \sim g_\theta^{-1} (\abs{\log{\alpha}}),\quad \forall \theta \in \Theta, \end{equation} as $\alpha \to 0$. \end{theorem} \begin{proof} Evidently, for any $\theta \in \Theta$ and any threshold $b>0$, \begin{equation*} \WADDth{\Tilde{\tau}_G(b)} \leq \WADDth{\Tilde{\tau}_C(b)}, \quad \SADDth{\Tilde{\tau}_G(b)} \leq \SADDth{\Tilde{\tau}_C(b)}. \end{equation*} Let $b=b_\alpha$ be so selected that $\FAR{\Tilde{\tau}_G(b_\alpha)} \leq \alpha$ and $b_\alpha \sim |\log \alpha|$ as $\alpha \to 0$. Then it follows from the asymptotic approximations \eqref{FOAPPRCUSUM} in Theorem~\ref{TC:asymp_opt} that, as $\alpha \to 0$, \[ \SADDth{\Tilde{\tau}_G(b_\alpha)} \le \WADDth{\Tilde{\tau}_G(b_\alpha)} \le g_\theta^{-1} (|\log \alpha|) (1+o(1)) . \] Comparing these asymptotic inequalities with the asymptotic lower bound \eqref{LowerSADD} in Theorem~\ref{llr:delay_lower_bound} immediately yields \eqref{FOAPPRGCUSUM}, which is asymptotically the best one can do to first order according to Theorem~\ref{llr:delay_lower_bound}. In particular, if $b_\alpha$ is found from equation \eqref{TG:thr}, then $b_\alpha \sim |\log \alpha|$ as $\alpha\to0$ and, by Lemma~\ref{TG:fa}, $\FAR{\Tilde{\tau}_G(b_\alpha)}\leq \alpha (1+o(1))$, and therefore the assertions hold. \end{proof} \section{Extensions to Pointwise Optimality and Dependent Non-homogeneous Models} \label{sec:ext} The measure of FAR that we have used in this paper (see \eqref{eq:FAR_def}) is the inverse of the MTFA. However, the MTFA is a good measure of the FAR if, and only if, the pre-change distributions of the window-limited CuSum stopping time $\tilde{\tau}_C(b)$ and the window-limited GLR CuSum stopping time $\tilde{\tau}_G(b)$ are approximately geometric. While this geometric property can be established for i.i.d.\ data models (see, e.g., Pollak and Tartakovsky~\cite{PollakTartakovskyTPA09} and Yakir~\cite{Yakir-AS95}), it is not necessarily true for non-homogeneous and dependent data, as discussed in Mei~\cite{Mei-SQA08} and Tartakovsky~\cite{Tartakovsky-SQA08a}. Therefore, in general, the MTFA is not appropriate for measuring the FAR. In fact, large values of the MTFA may not necessarily guarantee small values of the probability of false alarm, as discussed in detail in \cite{Tartakovsky-SQA08a,tartakovsky_sequential}. When the post-change model is Gaussian non-stationary as defined in Example~\ref{gex}, the MTFA may still be an appropriate measure of false alarm, as shown in the simulation study in Section~\ref{num-res:mtfa}. Based on this result, we conjecture that the MTFA-based FAR constraint may be suitable for other independent and non-stationary data models as well.
However, in general, this may not be the case, and a more appropriate measure of the FAR in the general case may be the maximal (local) conditional probability of false alarm in the time interval $(k, k+m]$, defined as \cite{tartakovsky_sequential}: \[ \mathrm{SPFA}_m(\tau) = \sup_{k \ge 0} \Prob{\infty}{\tau \le k+m | \tau > k}. \] Then the constraint set in \eqref{fa_constraint} can be replaced by the set \[ \mathbb{C}_{\beta,m} = \{\tau: \mathrm{SPFA}_m(\tau) \leq \beta\} \] of procedures for which the SPFA does not exceed a prespecified value $\beta \in (0,1)$. Pergamenschtchikov and Tartakovsky~\cite{PergTarSISP2016,PerTar-JMVA2019} considered general stochastic models of dependent and non-identically distributed observations that are, however, asymptotically homogeneous (i.e., with $g(n)=n$). They proved not only minimax optimality but also asymptotic pointwise optimality as $\beta\to0$ (i.e., for all change points $\nu \ge 1$) of the Shiryaev-Roberts (SR) procedure for the simple post-change hypothesis, and of the mixture SR procedure for the composite post-change hypothesis, in class $\mathbb{C}_{\beta,m}$, when $m=m_\beta$ depends on $\beta$ and goes to infinity as $\beta\to0$ at such a rate that $\log m_\beta=o(|\log \beta|)$. The results of \cite{PergTarSISP2016,PerTar-JMVA2019} can be readily extended to the asymptotically non-homogeneous case where the function $g (n)$ increases with $n$ faster than $\log n$. In particular, using the techniques developed in \cite{PergTarSISP2016,PerTar-JMVA2019}, which are based on embedding class $\mathbb{C}_{\beta,m}$ in the Bayesian class with a geometric prior distribution for the change point and an upper-bounded weighted PFA, it can be shown that the window-limited CuSum procedure \eqref{TC:test} with $m_\alpha$ replaced by $m_\beta$ is first-order pointwise asymptotically optimal in class $\mathbb{C}_{\beta,m_\beta}=\mathbb{C}_\beta$ as long as the uniform complete version of the strong law of large numbers for the log-likelihood ratio holds, i.e., for all $\delta > 0$ \[ \sum_{n=1}^\infty \sup_{\nu \ge 1} \Prob{\nu}{\left | \frac{1}{g_\nu(n)} \sum_{i=\nu}^{\nu+n-1} Z_{i, \nu} -1 \right | > \delta} < \infty, \] where in the general non-i.i.d. case the partial LLR $Z_{i, \nu} $ is \[ Z_{i, \nu} = \log \frac{p_{1, i,\nu} (X_i| X_1,\dots, X_{i-1})}{p_{0, i}(X_i| X_1,\dots, X_{i-1})} . \] Specifically, it can be established that for all fixed $\nu \ge 1$, as $\beta\to 0$, \begin{align*} \inf_{\tau \in \mathbb{C}_\beta} \mathrm{ADD}_\nu(\tau) \sim \mathrm{ADD}_\nu(\tilde{\tau}_C(b_\beta)) \sim g^{-1}(|\log \beta|), \end{align*} where we used the notation $\mathrm{ADD}_\nu(\tau)=\E{\nu}{\tau-\nu | \tau \geq \nu}$ for the conditional average delay to detection. Similar results also hold for the maximal average detection delays $\WADD{\tau}$ and $\SADD{\tau}= \sup_{\nu \ge 1} \mathrm{ADD}_\nu(\tau)$. It is worth noting that it follows from the proof of Theorem~\ref{llr:delay_lower_bound} that under condition \eqref{llr:upper} the following asymptotic lower bound holds for the average detection delay $\mathrm{ADD}_\nu(\tau)$ uniformly for all values of the change point in class $\mathbb{C}_\beta$: \[ \inf_{\tau\in\mathbb{C}_\beta} \mathrm{ADD}_\nu(\tau) \ge g^{-1}(|\log \beta|) (1+o(1)), \quad \forall \nu \ge 1 ~ \text{as}~ \beta \to 0.
\] In the case where the post-change observations have parametric uncertainty, sufficient conditions for the optimality of the WL-GLR-CuSum procedure are more sophisticated -- conditions on the behavior of the log-likelihood ratio in a vicinity of the true post-change parameter are involved \cite{PerTar-JMVA2019}. Further details and the proofs are omitted and will be given elsewhere. \section{Numerical Results} \label{sec:num-res} \subsection{Performance Analysis} \label{num-res:pa} \begin{figure}[tbp] \centerline{\includegraphics[width=.75\textwidth,height=9cm]{perf.png}} \vspace{-3mm}\caption{Performance of the WL-CuSum procedure with different window sizes for the Gaussian exponential mean-change detection problem with $\mu_0=0.1$, $\sigma_0^2 = 10000$, and $\theta = 0.4$. The change-point is $\nu = 1$.} \label{fig:perf} \end{figure} In Fig. \ref{fig:perf}, we study the performance of the proposed WL-CuSum procedure in \eqref{TC:test} through Monte Carlo (MC) simulations for the Gaussian exponential mean-change detection problem (see Example~\ref{gex}), with known post-change parameter. The change-point is taken to be $\nu=1$\footnote{Note that $\nu=1$ may not necessarily be the worst-case value of the change-point for the WL-CuSum procedure. However, extensive experimentation with different values of $\nu$ ranging from 1 to 100, with window sizes of 15 and 25, shows that in almost all cases $\nu=1$ results in the largest expected delay, or one that is within 1\% of the largest expected delay.}. Three window sizes are considered, with the window size of 12 being smaller than the range of expected delay values in the plot, and therefore not large enough to satisfy condition \eqref{TC:m_alpha}. The window size of 25 is sufficiently large, and the window size of 100 essentially corresponds to having no window at all. It is seen that the performance is nearly identical for all window sizes considered. We also observe that the expected delay is $O(\log(\abs{\log\alpha}))$ for the window sizes considered, which matches our theoretical analysis in \eqref{gex:perf}. \begin{figure}[tbp] \centerline{\includegraphics[width=.75\textwidth,height=9cm]{perf_glr.png}} \vspace{-3mm}\caption{Comparison of operating characteristics of the WL-CuSum (solid lines) and WL-GLR-CuSum (dotted lines) procedures with different window sizes for the Gaussian exponential mean-change detection problem with $\mu_0=0.1$, $\sigma_0^2 = 10000$, and $\theta = 0.4$. The post-change parameter set is $\Theta = (0,1]$. The change-point is $\nu = 1$. Procedures with sufficiently large (in red, circle) and insufficiently large window sizes (in blue, triangle) are also compared. \label{fig:perf_glr}} \end{figure} In Fig. \ref{fig:perf_glr}, we compare, also through MC simulations for the problem of Example~\ref{gex}, the performance of the WL-CuSum procedure \eqref{TC:test} tuned to the true post-change parameter and the WL-GLR-CuSum procedure \eqref{TG:test}, where only the set of post-change parameter values is known. It is seen that the operating characteristic of the WL-GLR-CuSum procedure is slightly worse than, but close to, that of the WL-CuSum procedure that knows the true post-change parameter. We also observe that procedures with slightly insufficient window sizes perform similarly to those with sufficiently large window sizes.
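For reference, the Monte Carlo estimates reported in this subsection can be reproduced schematically as follows (a sketch of ours with an illustrative threshold and run count; it is not the exact simulation code behind the figures):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def wl_cusum_run(mu0, sigma0sq, theta, b, m, nu=None, max_n=10**5):
    # One run of the WL-CuSum procedure; nu=None simulates the pre-change
    # regime (P_infty), otherwise the change occurs at time nu (1-based).
    x = []
    for n in range(1, max_n + 1):
        mean = mu0 if (nu is None or n < nu) else mu0*np.exp(theta*(n - nu))
        x.append(mean + np.sqrt(sigma0sq) * rng.standard_normal())
        w = 0.0  # the hypothesis k = n+1 contributes the empty sum
        for k in range(max(0, n - m), n):  # window-limited range
            mu1 = mu0 * np.exp(theta * (np.arange(k, n) - k))
            z = ((mu1 - mu0) * np.array(x[k:n]) / sigma0sq
                 - (mu1**2 - mu0**2) / (2.0 * sigma0sq))
            w = max(w, z.sum())
        if w >= b:
            return n
    return max_n

# Expected detection delay at nu = 1 (b chosen for illustration only):
delays = [wl_cusum_run(0.1, 1e4, 0.4, b=7.0, m=25, nu=1) for _ in range(200)]
print(np.mean(delays))
\end{verbatim}

The FAR is then estimated as the reciprocal of the average stopping time over runs with \texttt{nu=None}.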
\subsection{Analysis of MTFA as False Alarm Measure} \label{num-res:mtfa} \begin{figure}[tbp] \centerline{\includegraphics[width=.75\textwidth,height=10cm]{QQ.png}} \vspace{-3mm}\caption{Quantile-Quantile (QQ) plots for the full-history and window-limited CuSum stopping times with different thresholds for the Gaussian exponential mean-change detection problem with $\mu_0=0.1$, $\sigma_0^2 = 10000$, and $\theta = 0.4$. In all subplots, the x-axis shows the theoretical quantiles of the best-fit geometric distribution and the y-axis shows the experimental quantiles of the distributions of the stopping times. The first row corresponds to the WL-CuSum procedure \eqref{TC:test} and the second row corresponds to the full-history CuSum procedure \eqref{TC:def}.} \label{fig:qq} \end{figure} In Fig. \ref{fig:qq}, we study the distribution of the WL-CuSum stopping times using simulation results from the Gaussian exponential mean-change detection problem. This study is similar to the one in \cite{PollakTartakovskyTPA09}. It is observed that the experimental quantiles of the stopping times for the WL-CuSum procedure are close to the theoretical quantiles of a geometric distribution. This indicates that the distribution of the stopping time is approximately geometric, in which case the MTFA is an appropriate false alarm performance measure, and our measure of FAR as the reciprocal of the MTFA is justified. \subsection{Application: Monitoring COVID-19 Second Wave} \label{num-res:covid} \begin{figure}[htbp] \centerline{\includegraphics[width=.75\textwidth,height=8cm]{cov_val.png}} \vspace{-3mm}\caption{Validation of the distribution model using past COVID-19 data. The plot shows the four-day moving average of the daily new cases of COVID-19 as a fraction of the population in Wayne County, MI from October 1, 2020 to February 1, 2021 (in blue). The shape of the pre-change distribution $\mathcal{B}(a_0,b_0)$ is estimated using data from the previous 20 days (from September 11, 2020 to September 30, 2020), where $\hat{a}_0=20.6$ and $\hat{b}_0=2.94 \times 10^5$. The mean of the Beta distributions with the best-fit $h_\theta$ (defined in \eqref{num_sim:h}) is also shown (in orange), which minimizes the mean-square distance between the daily incremental fraction and the mean of the Beta distributions. The best-fit parameters are: $\hat{\theta}_0=0.464$, $\hat{\theta}_1=3.894$, and $\hat{\theta}_2=0.445$. } \label{fig:cov_val} \end{figure} Next, we apply the developed WL-GLR-CuSum algorithm to monitoring the spread of COVID-19 using new case data from various counties in the US \cite{nyt-covid-data}. The goal is to detect the onset of a new wave of the pandemic based on the incremental daily cases. The problem is modeled as one of detecting a change in the mean of a Beta distribution, as in \cite{covid_beta}. Let $\mathcal{B}(x;a,b)$ denote the density of the Beta distribution with shape parameters $a$ and $b$, i.e., \begin{equation*} \mathcal{B}(x;a,b) = \frac{x^{a-1}(1-x)^{b-1}\Gamma(a+b)}{\Gamma(a)\Gamma(b)}, \quad \forall x \in [0,1], \end{equation*} where $\Gamma$ represents the gamma function. Note that the mean of an observation under density $\mathcal{B}(x;a,b)$ is $a / (a+b)$. Let \begin{equation} \label{num_sim:dist_model} p_0(x) = \mathcal{B}(x;a_0,b_0), \quad p_{1,n,k}(x) = \mathcal{B}(x;a_0 h_\theta(n-k),b_0), ~\forall n \geq k. \end{equation} Here, $h_\theta$ is a function such that $h_\theta(x) \geq 1$ for all $x > 0$.
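In code, the log-likelihood ratio for this Beta change model takes the following simple form (a minimal sketch of ours; the wave-shape function $h_\theta$, specified below in \eqref{num_sim:h}, is passed in as a callable):

\begin{verbatim}
from scipy.stats import beta as beta_dist

def beta_llr(x, n, k, a0, b0, h):
    # Z_{n,k} = log B(x; a0*h(n-k), b0) - log B(x; a0, b0),
    # where h(t) >= 1 is the wave-shape function.
    return (beta_dist.logpdf(x, a0 * h(n - k), b0)
            - beta_dist.logpdf(x, a0, b0))
\end{verbatim}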
Note that if $a_0 \ll b_0$ and $h_\theta(n-\nu)$ is not too large, \begin{equation} \E{\nu}{X_n} = \frac{a_0 h_\theta(n-\nu)}{a_0 h_\theta(n-\nu) + b_0} \approx \frac{a_0}{b_0} h_\theta(n-\nu) \end{equation} for all $n \geq \nu$. We design $h_\theta$ to capture the behavior of the average fraction of daily incremental cases. In particular, we model $h_\theta$ as \begin{equation} \label{num_sim:h} h_\theta(x) = 1+\frac{10^{\theta_0}}{\theta_2} \exp\left(-\frac{(x-\theta_1)^2}{2 \theta_2^2} \right) \end{equation} where $\theta_0,\theta_1,\theta_2 \geq 0$ are the model parameters and $\theta = (\theta_0,\theta_1,\theta_2) \in \Theta$. When $n-\nu$ is small, $h_\theta(n-\nu)$ grows like the left tail of a Gaussian density, which matches the exponential growth in the average fraction of daily incremental cases seen at the beginning of a new wave of the pandemic. Also, as $n \to \infty$, $h_\theta(n-\nu) \to 1$, which corresponds to the daily incremental cases eventually returning to their pre-change level at the end of the wave. In Fig. \ref{fig:cov_val}, we validate the choice of the distribution model defined in \eqref{num_sim:dist_model} using data from the COVID-19 wave of Fall 2020. In the simulation, $a_0$ and $b_0$ are estimated using observations from previous periods in which the increments remain low and roughly constant. It is observed that the mean of the daily fraction of incremental cases matches well with the mean of the fitted Beta distribution with $h_\theta$ in \eqref{num_sim:h}. Note that the growth condition given in \eqref{TG:growth_cond} that is required for our asymptotic analysis is not satisfied for the observation model \eqref{num_sim:dist_model} with $h_\theta$ given in \eqref{num_sim:h}. Nevertheless, we expect the WL-GLR-CuSum procedure to perform as predicted by our analysis if the procedure stops during a time interval where $h_\theta$ is still increasing, which is what we would require of a useful procedure for detecting the onset of a new wave of the pandemic anyway. \begin{figure}[htbp] \centerline{\includegraphics[width=\textwidth,height=10cm]{covid.png}} \vspace{-3mm}\caption{COVID-19 monitoring example. The upper row shows the four-day moving average of the daily new cases of COVID-19 as a fraction of the population in Wayne County, MI (left), New York City, NY (middle) and Hamilton County, OH (right). A pre-change $\mathcal{B}(a_0,b_0)$ distribution is estimated using data from the previous 20 days (from May 26, 2021 to June 14, 2021). The plots in the lower row show the evolution of the WL-GLR-CuSum statistic defined in \eqref{TG:test}. The FAR $\alpha$ is set to $0.001$ and the corresponding thresholds of the WL-GLR-CuSum procedure are shown in red. The post-change distribution at time $n$ with hypothesized change-point $k$ is modeled as $\mathcal{B}(a_0 h_\theta(n-k),b_0)$, where $h_\theta$ is defined in \eqref{num_sim:h}, and $\Theta = (0.1,5) \times (1,20) \times (0.1,5)$. The parameters $\theta_0$, $\theta_1$ and $\theta_2$ are assumed to be unknown. The window size is $m_\alpha = 20$. The threshold is set using equation \eqref{TG:thr}.} \label{fig:covid} \end{figure} In Fig. \ref{fig:covid}, we illustrate the use of the WL-GLR-CuSum procedure with the distribution model \eqref{num_sim:dist_model} for the detection of the onset of a new wave of COVID-19. We assumed a start date of June 15th, 2021 for the monitoring, at which time the pandemic appeared to be in a steady state with incremental cases staying relatively flat.
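Schematically, the WL-GLR-CuSum statistic plotted in the lower row of Fig.~\ref{fig:covid} maximizes the accumulated log-likelihood ratio over the window and over a finite grid approximating $\Theta$ (a sketch of ours reusing \texttt{beta\_llr} from above; the grid resolution is an illustrative choice):

\begin{verbatim}
import itertools
import numpy as np

def h(t, th0, th1, th2):
    # Wave-shape function of the model: h(t) >= 1.
    return 1.0 + (10.0**th0 / th2) * np.exp(-(t - th1)**2 / (2.0 * th2**2))

def wl_glr_stat(x, n, a0, b0, m=20):
    # Grid over Theta = (0.1,5) x (1,20) x (0.1,5); resolution is arbitrary.
    grid = itertools.product(np.linspace(0.1, 5.0, 10),
                             np.linspace(1.0, 20.0, 10),
                             np.linspace(0.1, 5.0, 10))
    best = 0.0
    for th in grid:
        for k in range(max(0, n - m), n):  # window-limited change points
            s = sum(beta_llr(x[i], i, k, a0, b0, lambda t: h(t, *th))
                    for i in range(k, n))
            best = max(best, s)
    return best
\end{verbatim}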
We observe that the WL-GLR-CuSum statistic significantly and persistently crosses the detection threshold around late July in all counties, which is a strong indication of a new wave of the pandemic. More importantly, unlike the raw observations, which are highly varying, the WL-GLR-CuSum statistic shows a clear dichotomy between the pre- and post-change settings, with the statistic staying near zero before the purported onset of the new wave, and taking off very rapidly (nearly vertically) after the onset. \section{Conclusion} \label{sec:concl} We considered the problem of the quickest detection of a change in the distribution of a sequence of independent observations, assuming that the pre-change observations are stationary with known distribution, while the post-change observations are non-stationary with possible parametric uncertainty. Specifically, we assumed that the cumulative KL divergence between the post-change and the pre-change distributions grows super-linearly with time after the change point. We derived a universal asymptotic lower bound on the worst-case expected detection delay under a constraint on the false alarm rate in this non-stationary setting, which had previously been derived only in the asymptotically stationary setting. We showed that the developed WL-CuSum procedure for known post-change distributions, as well as the developed WL-GLR-CuSum procedure for unknown post-change parameters, asymptotically achieves the lower bound on the worst-case expected detection delay as the false alarm rate goes to zero. We validated these theoretical results through numerical Monte Carlo simulations. We also demonstrated that the proposed WL-GLR-CuSum procedure can be effectively used in monitoring pandemics. We also provided in Section~\ref{sec:ext} some possible avenues for future research, in particular, those allowing for dependent observations and more general false alarm constraints.
\section{Proof of Lemma~\ref{lemmb_1}} \label{AppB} \begin{lemma} The optimal solution of the non-convex optimization problem \textbf{P1} is obtained when the output size is $m=2$. \end{lemma} \begin{proof} Note that if $m=1$, then the optimal value of \textbf{P1} will be zero, and hence, we have $m\geq 2$. In the following, we prove that the optimal solution is achievable at $m=2$. Let $$f\left(\mathbf{q}^{m}_j,\mathbf{q}^{m}_{j+k/2}\right)=\sum_{l=1}^{m}\frac{\left(q_{l,j}-q_{l,j+k/2}\right)^{2}}{q_{l,j}+q_{l,j+k/2}}$$ denote the objective function of the problem~\textbf{P1}, where $\mathbf{q}^{m}_j=\left[q_{1,j},\ldots,q_{m,j}\right]$ and $\mathbf{q}^{m}_{j+k/2}=\left[q_{1,j+k/2},\ldots,q_{m,j+k/2}\right]$. Suppose that the optimal solution is obtained at $m>2$. In other words, there exist two distributions $\mathbf{q}_{j}^{m}$ and $\mathbf{q}^{m}_{j+k/2}$ with size $m>2$ that maximize the objective function $f\left(\mathbf{q}^{m}_j,\mathbf{q}^{m}_{j+k/2}\right)$ and satisfy the constraints~\eqref{eqnb_4}-\eqref{eqnb_7}. We prove that if $\mathbf{q}_{j}^{m}$ and $\mathbf{q}^{m}_{j+k/2}$ are optimal, then there exist two distributions $\tilde{\mathbf{q}}_{j}^{m-1}$ and $\tilde{\mathbf{q}}^{m-1}_{j+k/2}$ with support size $m-1$ that satisfy the problem constraints and achieve at least the same objective value as $\mathbf{q}_{j}^{m}$ and $\mathbf{q}^{m}_{j+k/2}$. Let $\tilde{\mathbf{q}}_{j}^{m-1}=\left[q_{1,j},\ldots,q_{m-2,j},q_{m-1,j}+q_{m,j}\right]$ and $\tilde{\mathbf{q}}_{j+k/2}^{m-1}=\left[q_{1,j+k/2},\ldots,q_{m-2,j+k/2},q_{m-1,j+k/2}+q_{m,j+k/2}\right]$. We can easily verify that $H\left(\tilde{\mathbf{q}}_{j}^{m-1}\right)\leq R$, since merging two symbols cannot increase entropy and $H\left(\mathbf{q}_{j}^{m}\right)\leq R$; similarly, $H\left(\tilde{\mathbf{q}}_{j+k/2}^{m-1}\right)\leq R$ since $H\left(\mathbf{q}_{j+k/2}^{m}\right)\leq R$. Furthermore, summing the term-wise privacy constraints $e^{-\epsilon} q_{l,j+k/2} \leq q_{l,j} \leq e^{\epsilon} q_{l,j+k/2}$ for $l\in\lbrace m-1,m\rbrace$ yields \begin{equation} e^{-\epsilon}\leq\frac{q_{m-1,j}+q_{m,j}}{q_{m-1,j+k/2}+q_{m,j+k/2}}\leq e^{\epsilon}. \end{equation} Hence, the distributions $\tilde{\mathbf{q}}_{j}^{m-1}$ and $\tilde{\mathbf{q}}^{m-1}_{j+k/2}$ satisfy the constraints of the problem~\textbf{P1}. Consider the following inequalities \begin{equation} \begin{aligned} &f\left(\tilde{\mathbf{q}}^{m-1}_j,\tilde{\mathbf{q}}^{m-1}_{j+k/2}\right)-f\left(\mathbf{q}^{m}_j,\mathbf{q}^{m}_{j+k/2}\right)\\ &\quad=\frac{\left(q_{m-1,j}+q_{m,j}-q_{m-1,j+k/2}-q_{m,j+k/2}\right)^{2}}{q_{m-1,j}+q_{m,j}+q_{m-1,j+k/2}+q_{m,j+k/2}}-\left[\frac{\left(q_{m-1,j}-q_{m-1,j+k/2}\right)^{2}}{q_{m-1,j}+q_{m-1,j+k/2}}+\frac{\left(q_{m,j}-q_{m,j+k/2}\right)^{2}}{q_{m,j}+q_{m,j+k/2}}\right]\\ &\quad\stackrel{\left(a\right)}{\geq} \frac{\left(q_{m-1,j}+q_{m,j}-q_{m-1,j+k/2}-q_{m,j+k/2}\right)^{2}}{q_{m-1,j}+q_{m,j}+q_{m-1,j+k/2}+q_{m,j+k/2}}-2\frac{\left(\frac{q_{m-1,j}+q_{m,j}}{2}-\frac{q_{m-1,j+k/2}+q_{m,j+k/2}}{2}\right)^{2}}{\frac{q_{m-1,j}+q_{m,j}}{2}+\frac{q_{m-1,j+k/2}+q_{m,j+k/2}}{2}}\\ &\quad=0 \end{aligned} \end{equation} where step $\left(a\right)$ follows from the convexity of the function $\left(x-y\right)^{2}/\left( x+y\right)$ for $x,y\in\left[0:1\right]$. Hence, the distributions $\tilde{\mathbf{q}}^{m-1}_j,\tilde{\mathbf{q}}^{m-1}_{j+k/2}$ achieve at least the same objective value as $\mathbf{q}_{j}^{m}$ and $\mathbf{q}^{m}_{j+k/2}$.
\end{proof} \section{Proof of Lemma~\ref{lemm3_2}} \label{AppC} Consider an arbitrary estimator $\hat{\mathbf{p}}$; then we have \begin{equation} \begin{aligned} \sup_{\mathbf{p}\in\Delta_k} & \mathbb{E}\left[\ell\left(\hat{\mathbf{p}}\left(Y^{n}\right),\mathbf{p}\right)\right]\geq \sup_{\nu\in\mathcal{V}}\mathbb{E}\left[\ell\left(\hat{\mathbf{p}}\left(Y^{n}\right),\mathbf{p}^{\nu}\right)\right]\\ &\geq \frac{1}{|\mathcal{V}|}\sum_{\nu\in\mathcal{V}}\mathbb{E}\left[\ell\left(\hat{\mathbf{p}}\left(Y^{n}\right),\mathbf{p}^{\nu}\right)\right]\\ &\geq \phi\left(\delta\right) \frac{1}{|\mathcal{V}|}\sum_{\nu\in\mathcal{V}}\mathbb{E}\left[\sum_{j=1}^{k/2}\mathbbm{1}\left(\psi_{j}\left(Y^{n}\right)\neq \nu_j\right)\right]\\ &\geq \phi\left(\delta\right) \sum_{j=1}^{k/2}\left( \frac{1}{|\mathcal{V}|}\sum_{\nu\in\mathcal{V}:\nu_j=+1}\mathbb{E}\left[\mathbbm{1}\left(\psi_{j}\left(Y^{n}\right)\neq +1\right)\right]+\frac{1}{|\mathcal{V}|}\sum_{\nu\in\mathcal{V}:\nu_j=-1}\mathbb{E}\left[\mathbbm{1}\left(\psi_{j}\left(Y^{n}\right)\neq -1\right)\right]\right)\\ &\geq \phi\left(\delta\right) \sum_{j=1}^{k/2}\inf_{\psi}\left( \frac{1}{|\mathcal{V}|}\sum_{\nu\in\mathcal{V}:\nu_j=+1}\text{Pr}\left[\psi_{j}\left(Y^{n}\right)\neq +1\right]+\frac{1}{|\mathcal{V}|}\sum_{\nu\in\mathcal{V}:\nu_j=-1}\text{Pr}\left[\psi_{j}\left(Y^{n}\right)\neq -1\right]\right)\\ &= \phi\left(\delta\right) \sum_{j=1}^{k/2}\frac{1}{2}\inf_{\psi}\left(\mathbf{M}_{+j}^{n}\left[\psi_{j}\left(\mathbf{Y}^{n}\right)\neq +1\right]+\mathbf{M}_{-j}^{n}\left[\psi_{j}\left(\mathbf{Y}^{n}\right)\neq -1\right]\right)\\ &\geq \frac{\phi\left(\delta\right)}{2}\sum_{j=1}^{k/2}\left(1-||\mathbf{M}^{n}_{+j}-\mathbf{M}^{n}_{-j}||_{\text{TV}}\right) \end{aligned} \end{equation} where $\psi=\left(\psi_1,\ldots,\psi_{k/2}\right)$ is a vector of test functions; the last step uses the identity $\inf_{\psi_j}\left(\mathbf{M}_{+j}^{n}\left[\psi_{j}\neq +1\right]+\mathbf{M}_{-j}^{n}\left[\psi_{j}\neq -1\right]\right)=1-||\mathbf{M}^{n}_{+j}-\mathbf{M}^{n}_{-j}||_{\text{TV}}$. \section{Proof of Lemma~\ref{lemm4_1}} \label{AppD} We claim that the conditional distribution of $Y_{i}^{j}$ given $X_i$ is \begin{equation}~\label{eqnd_1} \text{Pr}\left[Y_{i}^{j}=1|X_i\right]=\left\{\begin{array}{ll} \frac{e^{\epsilon_j}}{e^{\epsilon_j}+1}& \text{if}\ X_i\in B_i\\ \frac{1}{e^{\epsilon_j}+1}& \text{if}\ X_i\notin B_i\\ \end{array}\right. \end{equation} which is $\epsilon_j$-LDP. We prove our claim by induction. For the base step, we can easily verify that $Y_{i}^{1}$ defined in~\eqref{eqn4_2} follows the conditional distribution in~\eqref{eqnd_1}. For the induction step, suppose that our claim is true for $j$. Observe that $Y_{i}^{j+1}=Y_{i}^{j}\oplus U_{i}^{j+1}$. Hence, we have \begin{equation} \begin{aligned} &\text{Pr}\left[Y_{i}^{j+1}=1|X_i\in B_i\right]\\ &\ =\text{Pr}\left[Y_{i}^{j+1}=1|X_i\in B_i,Y_{i}^{j}=1\right]\text{Pr}\left[Y_{i}^{j}=1|X_i\in B_i\right]\\ &\qquad\qquad\qquad+\text{Pr}\left[Y_{i}^{j+1}=1|X_i\in B_i,Y_{i}^{j}=0\right]\text{Pr}\left[Y_{i}^{j}=0|X_i\in B_i\right]\\ &\ =\text{Pr}\left[U_{i}^{j+1}=0\right]\text{Pr}\left[Y_{i}^{j}=1|X_i\in B_i\right]+\text{Pr}\left[U_{i}^{j+1}=1\right]\text{Pr}\left[Y_{i}^{j}=0|X_i\in B_i\right]\\ &\ =\left(1-q_{j+1}\right)\left(1-z_j\right)+q_{j+1}z_j\\ &\ =1-z_{j+1}=\frac{e^{\epsilon_{j+1}}}{e^{\epsilon_{j+1}}+1}. \end{aligned} \end{equation} Similarly, we can prove that $\text{Pr}\left[Y_{i}^{j+1}=1|X_i\notin B_i\right]=z_{j+1}=\frac{1}{e^{\epsilon_{j+1}}+1}$. Hence, the proof is completed.
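As an independent numerical sanity check (ours, not part of the original argument), one can solve the displayed identity $(1-q_{j+1})(1-z_j)+q_{j+1}z_j=1-z_{j+1}$ for $q_{j+1}$ and verify that the composed channel indeed has the claimed $\epsilon_{j+1}$-LDP form:

\begin{verbatim}
import numpy as np

def z(eps):
    # z_j = 1/(e^{eps_j} + 1)
    return 1.0 / (np.exp(eps) + 1.0)

eps_j, eps_next = 2.0, 1.0          # illustrative values, eps_{j+1} < eps_j
zj, zn = z(eps_j), z(eps_next)
q = (zn - zj) / (1.0 - 2.0 * zj)    # solves the identity above for q_{j+1}
assert 0.0 <= q <= 1.0

p1 = (1.0 - q) * (1.0 - zj) + q * zj   # Pr[Y^{j+1}=1 | X_i in B_i]
p0 = (1.0 - q) * zj + q * (1.0 - zj)   # Pr[Y^{j+1}=1 | X_i not in B_i]
print(np.isclose(p1, 1.0 - zn), np.isclose(p0, zn))   # True True
print(np.isclose(np.log(p1 / p0), eps_next))          # eps_{j+1}-LDP: True
\end{verbatim}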
\section{Proof of Lemma~\ref{lemm5_1}} \label{AppE} In order to recover $X$ from $Y$ and $U$, it is required that each input $x\in\left[k\right]$ is mapped to $y$ with a different value of the key $U$ for every output $y\in\left[k\right]$. Let $y=x\oplus u$ for all $x\in\left[k\right]$ and $u\in\left[k\right]$, where $x\oplus u=\left[\left(x+u-2\right)\bmod k \right]+1$. Note that the set $\left[k\right]$ along with the operation $\oplus$ forms a group\footnote{It is exactly the group defined on the integers $\lbrace 0,\ldots,k-1\rbrace$ with the modulo-$k$ operation, but we subtract $2$ before taking $\bmod\ k$ and add one to fit the modulo-$k$ operation to the set $\left[k\right]=\lbrace 1,\ldots,k\rbrace$.}. The private mechanism $Q$ is defined as follows \begin{equation} Q\left(y|x\right)=q_{u}, \end{equation} for $y=x\oplus u$. Note that an input $x$ is mapped to each output $y$ with a different value of the key $U=\left(k-x+2\right)\oplus y$. Moreover, for a given output $y$, we can easily see that each input $x\in\left[k\right]$ is mapped to $y$ with a different value of the key $U$. Hence, it is possible to recover $X$ from $Y$ and $U$. Furthermore, for any two inputs $x,x^{\prime}\in\mathcal{X}$, we have \begin{equation} \sup_{y\in\left[k\right]}\frac{Q\left(y|x\right)}{Q\left(y|x^{\prime}\right)}\leq \frac{q_{\max}}{q_{\min}}\stackrel{\left(a\right)}{\leq} e^{\epsilon}, \end{equation} where $q_{\max}=\max\limits_{j\in\left[k\right]}q_j$ and $q_{\min}=\min\limits_{j\in\left[k\right]}q_j$. Step $\left(a\right)$ follows from the assumption that $\frac{q_{\max}}{q_{\min}}\leq e^{\epsilon}$. Thus, the mechanism $Q$ is an $\epsilon$-LDP-Rec mechanism. \section{Omitted Details from Section~\ref{Recov-A}}\label{AppF-2} First, we prove the first necessary condition of Theorem~\ref{Th2_4}. As mentioned in Section~\ref{Recov-A}, we prove this in two parts: first we show $|\mathcal{Y}|\geq |\mathcal{X}|$ using the recoverability constraint, and then $|\mathcal{U}|\geq |\mathcal{Y}|$ using the privacy constraint. $|\mathcal{Y}|\geq |\mathcal{X}|$: Observe that the output $Y$ of the private mechanism $Q$ can be represented as a function of the input $X$ and the random key $U$, i.e., $Y=f\left(X,U\right)$. Fix the value of the random key $U=u$ for an arbitrary $u\in\mathcal{U}$. Then, for each value of $x\in\mathcal{X}$, the function $f\left(X,U\right)$ should generate a different output $Y$ in order to be able to recover $X$ from $Y$ and $U$. In other words, each input $x\in\mathcal{X}$ should be mapped to a different output $y\in\mathcal{Y}$ for the same value of the random key $u\in\mathcal{U}$. Otherwise, there would exist two inputs mapped with the same key value to the same output. As a result, it is required that the output size is at least the same as the input size: $|\mathcal{Y}|\geq |\mathcal{X}|$. $|\mathcal{U}|\geq |\mathcal{Y}|$: Let $\mathcal{Y}\left(x\right)\subseteq\mathcal{Y}$ be a subset of outputs such that input $X=x$ is mapped with non-zero probability to every $y\in\mathcal{Y}\left(x\right)$. We claim that $\mathcal{Y}\left(x\right)=\mathcal{Y}$ for all $x\in\mathcal{X}$ for any $\epsilon$-LDP-Rec mechanism. In other words, we claim that each input $x\in\mathcal{X}$ should be mapped with non-zero probability to every output $y\in\mathcal{Y}$. We prove our claim by contradiction. Suppose that there exist $x,x^{\prime}\in\mathcal{X}$ such that $\mathcal{Y}\left(x\right)\neq \mathcal{Y}\left(x^{\prime}\right)$.
Thus, there exists $y\in\mathcal{Y}\left(x\right)\setminus \mathcal{Y}\left(x^{\prime}\right)$ or $y\in\mathcal{Y}\left(x^{\prime}\right)\setminus \mathcal{Y}\left(x\right)$. Hence, the ratio $\frac{Q\left(y|x\right)}{Q\left(y|x^{\prime}\right)}$ or $\frac{Q\left(y|x^{\prime}\right)}{Q\left(y|x\right)}$ is unbounded, which violates the privacy constraints. Therefore, $\mathcal{Y}\left(x\right)= \mathcal{Y}\left(x^{\prime}\right)=\mathcal{Y}$ for all $x,x^{\prime}\in\mathcal{X}$. However, for a given $x\in\mathcal{X}$, we have $|\mathcal{Y}\left(x\right)|\leq |\mathcal{U}|$, since each input $x\in\mathcal{X}$ can be mapped with non-zero probability to at most $|\mathcal{U}|$ outputs. Thus, we get that the random key size is at least the same as the output size: $|\mathcal{U}|\geq |\mathcal{Y}| \geq |\mathcal{X}|$. Hence, the first condition is necessary to design an $\epsilon$-LDP-Rec mechanism. This completes the proof of the first necessary condition of Theorem~\ref{Th2_4}. Now, assuming $q_1\leq q_2\leq \ldots\leq q_k$, we show $q_k/q_1\leq e^{\epsilon}$. This will be required to establish the second necessary condition of Theorem~\ref{Th2_4}. $q_k/q_1\leq e^{\epsilon}$: We prove our claim by contradiction. Suppose that $q_k/q_1>e^{\epsilon}$. Consider a certain output $y\in\mathcal{Y}$ such that there exists $x\in\mathcal{X}$ mapped to $y$ when $U=u_{k}$ with probability $q_{k}$. Note that each sample $x\in\mathcal{X}$ should be mapped using a different value of the key to each output $y\in\mathcal{Y}$ in order to be able to recover the sample $X$ from $Y$ and $U$. In our case, there are $k-1$ remaining inputs to be mapped to $y$ with different values of keys; however, none of these $k-1$ inputs can be mapped to $y$ with $U=u_1$, since $q_{k}/q_{1}>e^{\epsilon}$, which would violate the privacy constraint. Hence, we have $k-1$ inputs mapped to $y$ using at most $k-2$ values of keys. Thus, there would exist at least two inputs mapped to output $y$ with the same key value. Therefore, we cannot recover $X$ from $y$ given $U$. As a result, we should have $q_{k}/q_{1}\leq e^{\epsilon}$. \section{Proof of Lemma~\ref{lemm5_2}} \label{AppF} Before we present the proof of Lemma~\ref{lemm5_2}, we provide the following lemma, whose proof is in Appendix~\ref{AppI}. \begin{lemma}~\label{lemmf_1}Let $U\in\mathcal{U}=\lbrace u_1,\ldots,u_m\rbrace$ be a random variable with size $m$ having a distribution $\mathbf{q}=\left[q_1,\ldots,q_m\right]$, where $q_1\geq \cdots\geq q_m$. Then, the random variable $U^{\prime}\in\mathcal{U}^{\prime}=\lbrace u_1,\ldots,u_{m-1}\rbrace$ with distribution $\mathbf{q}^{\prime}=\left[q_1^{\prime},\ldots,q_{m-1}^{\prime}\right]$ has entropy satisfying \begin{equation} H\left(U\right)\geq H\left(U^{\prime}\right), \end{equation} where $q_j^{\prime}=q_j/\left(1-q_m\right)$ for $j\in\lbrace 1,\ldots,m-1\rbrace$. \end{lemma} This lemma shows that if we trim the symbol that has the lowest probability from a distribution and normalize the remaining probabilities, then we get a distribution that has no larger entropy. The main idea of the proof of Lemma~\ref{lemm5_2} is to apply a sequence of reduction steps to the random key $U$ to obtain a new random key $U^{\prime}$ whose support size equals the input size. In addition, this new random key $U^{\prime}$ has entropy no larger than that of the original random key $U$. First, we give an example to illustrate the idea, and then we proceed to the general proof.
\begin{Example} Suppose that a random key $U\in\lbrace 1,2,\ldots,6\rbrace$ has a distribution $\mathbf{q}=\left[q_1,\ldots,q_6\right]$, where $q_1\geq\cdots\geq q_6$. The random key $U$ is used to design an $\epsilon$-LDP-Rec mechanism $Q$ with input $X\in\lbrace 1,2,3\rbrace$. Suppose that there exists an output $y$ such that $X=x$ is mapped to $y$ when $U\in\mathcal{U}_{yx}$, where $\mathcal{U}_{y1}=\lbrace 6\rbrace$, $\mathcal{U}_{y2}=\lbrace 2,3\rbrace$, and $\mathcal{U}_{y3}=\lbrace 1\rbrace$. Hence, $Q\left(y|X=1\right)=q_6$, $Q\left(y|X=2\right)=q_2+q_3$, and $Q\left(y|X=3\right)=q_1$. Let $\mathcal{U}_y=\bigcup_{x\in\left[3\right]}\mathcal{U}_{yx}=\lbrace 1,2,3,6\rbrace$, and $\overline{\mathcal{U}}_y=\mathcal{U}\setminus\mathcal{U}_y=\lbrace 4,5\rbrace$. Let $\tilde{\mathbf{q}}=\left[q_6,q_{2}+q_3,q_1,q_4,q_5\right]$, where the first three elements are $Q\left(y|X=i\right)$ for $i\in\left[3\right]$ and the remaining elements represent $q_u$ for $u\in\overline{\mathcal{U}}_y$. Then, we sort the distribution $\tilde{\mathbf{q}}$ in descending order to get $\tilde{\mathbf{q}}^{\downarrow}=\left[q_{2}+q_{3},q_{1},q_4,q_5,q_6\right]$ (assuming, for the purpose of illustration, that $q_2+q_3\geq q_1$), where $\tilde{q}_{i}^{\downarrow}$ denotes the $i$th largest component in $\tilde{\mathbf{q}}$. Consider a random key $\tilde{U}\in\lbrace 1,2,3,4,5\rbrace$ having the distribution $\tilde{\mathbf{q}}^{\downarrow}$. Observe that $H\left(\tilde{U}\right)\leq H\left(U\right)$, since $\tilde{U}$ can be represented as a function of $U$. Furthermore, we have $\frac{q_{2}+q_{3}}{q_1}\leq\frac{q_{2}+q_{3}}{q_4}\leq \frac{q_{2}+q_{3}}{q_6}\leq e^{\epsilon}$, since $Q$ is an $\epsilon$-LDP mechanism and $q_4\geq q_6$. Consider a random key $U^{\prime}$ having the distribution $\mathbf{q}^{\prime}=\left[\frac{q_2+q_3}{1-\left(q_5+q_6\right)},\frac{q_1}{1-\left(q_5+q_6\right)},\frac{q_4}{1-\left(q_5+q_6\right)}\right]$ obtained by sequentially trimming the last two symbols of the random key $\tilde{U}$. By applying Lemma~\ref{lemmf_1} twice to the distribution $\tilde{\mathbf{q}}^{\downarrow}$, we get that $H\left(U\right)\geq H\left(\tilde{U}\right)\geq H\left(U^{\prime}\right)$. Furthermore, we have $q_{\max}^{\prime}/q_{\min}^{\prime}\leq e^{\epsilon}$. Thus, from Lemma~\ref{lemm5_1}, we can construct an $\epsilon$-LDP-Rec mechanism with input $X\in\left[3\right]$ and an output $Y\in\left[3\right]$ using the random key $U^{\prime}$, where $H\left(U\right)\geq H\left(U^{\prime}\right)$. \end{Example} We now present the general proof. Let $U\in\mathcal{U}=\lbrace u_1,\ldots,u_m\rbrace$ be a random key with size $m>k$ having a distribution $\mathbf{q}=\left[q_1,\ldots,q_m\right]$. Without loss of generality, assume that $q_1\geq \cdots\geq q_m$. Let $Q$ be an $\epsilon$-LDP-Rec mechanism designed using the random key $U$ with input $X\in\left[k\right]$ and an output $Y\in\mathcal{Y}$. Let $\mathcal{U}_{yx}\subset\mathcal{U}$ be the subset of keys such that the input $X=x$ is mapped to $Y=y$ when $U\in\mathcal{U}_{yx}$, for all $x\in\left[k\right]$ and $y\in\mathcal{Y}$. As a result, the private mechanism $Q$ can be represented by $Q\left(y|X=x\right)=\sum_{u\in\mathcal{U}_{yx}}q_u$. Observe that for a given $y$, we have $\mathcal{U}_{yx}\cap \mathcal{U}_{yx^{\prime}}=\emptyset$; otherwise we cannot recover $X$ from $Y$ and $U$, since there would be $x$ and $x^{\prime}$ mapped to $y$ with the same key value. Let $\mathcal{U}_y=\bigcup_{x\in\left[k\right]}\mathcal{U}_{yx}$, and hence, $\mathcal{U}_y\subseteq \mathcal{U}$.
Furthermore, for a given $y$, we have $Q\left(y|X=x\right)/Q\left(y|X=x^{\prime}\right)\leq e^{\epsilon}$, since $Q$ is an $\epsilon$-LDP mechanism. Consider an output $y\in\mathcal{Y}$ such that $u_1\in\mathcal{U}_y$. Let $\overline{\mathcal{U}}_y=\mathcal{U}\setminus\mathcal{U}_y$ be an indexed set with size $l=|\overline{\mathcal{U}}_y|$, where $\overline{\mathcal{U}}_y\left(j\right)$ denotes the $j$th element in $\overline{\mathcal{U}}_y$. Consider a distribution $\tilde{\mathbf{q}}=\left[\tilde{q}_1,\ldots,\tilde{q}_{l+k}\right]$ designed as follows: $\tilde{q}_j=Q\left(y|X=j\right)$ for all $j\in\left[k\right]$, and $\tilde{q}_j=q_{\overline{\mathcal{U}}_y\left(j-k\right)}$ for all $j\in\lbrace k+1,\ldots,k+l\rbrace$. We can sort the distribution $\tilde{\mathbf{q}}$ in descending order to get $\tilde{\mathbf{q}}^{\downarrow}=\left[\tilde{q}_{1}^{\downarrow},\ldots,\tilde{q}_{l+k}^{\downarrow}\right]$, where $\tilde{q}_{i}^{\downarrow}$ denotes the $i$th largest component in $\tilde{\mathbf{q}}$. Let $\tilde{U}$ be a random key drawn from the distribution $\tilde{\mathbf{q}}^{\downarrow}$. We have the following two properties of the distribution $\tilde{\mathbf{q}}^{\downarrow}$: \begin{enumerate} \item $H\left(U\right)\geq H\left(\tilde{U}\right)$. \item $\frac{\tilde{q}_{1}^{\downarrow}}{\tilde{q}_{k}^{\downarrow}}\leq e^{\epsilon}$. \end{enumerate} The first property is straightforward, since the random key $\tilde{U}$ can be represented as a function of $U$. Observe that $u_1\in\mathcal{U}_y$, and $q_1\geq q_u$ for all $u\in\overline{\mathcal{U}}_y$. Hence, $\tilde{q}_{1}^{\downarrow}$ is one of the first $k$ elements in $\tilde{\mathbf{q}}$. Thus, we get $$\frac{\tilde{q}_{1}^{\downarrow}}{\tilde{q}_{k}^{\downarrow}}\stackrel{\left(a\right)}{\leq} \frac{\tilde{q}_{\max}}{\tilde{q}_{\min}}\leq e^{\epsilon}$$ where $\tilde{q}_{\max}=\max_{j\in\left[k\right]}\tilde{q}_{j}=\tilde{q}_{1}^{\downarrow}$ and $\tilde{q}_{\min}=\min_{j\in\left[k\right]}\tilde{q}_{j}$. If $q_u$ for some $u\in\overline{\mathcal{U}}_y$ is one of the first $k$ elements in $\tilde{\mathbf{q}}^{\downarrow}$, i.e., $q_u>\tilde{q}_{\min}$, then inequality $\left(a\right)$ is still valid. Now, let $U^{\prime}\in\left[k\right]$ be a random key drawn from the distribution $\mathbf{q}^{\prime}=\left[q_1^{\prime},\ldots,q_{k}^{\prime}\right]$, where $q_{j}^{\prime}=\tilde{q}_{j}^{\downarrow}/\sum_{i=1}^{k}\tilde{q}_{i}^{\downarrow}$. Observe that $\mathbf{q}^{\prime}$ is obtained by applying Lemma~\ref{lemmf_1} $l$ times to $\tilde{\mathbf{q}}^{\downarrow}$ to sequentially trim the last $l$ symbols of $\tilde{U}$, which have the $l$ lowest probabilities. Thus, we get that $H\left(U\right)\geq H\left(\tilde{U}\right)\geq H\left(U^{\prime}\right)$. Furthermore, from the second property, we have $q_{\max}^{\prime}/q_{\min}^{\prime}=\frac{\tilde{q}_{1}^{\downarrow}}{\tilde{q}_{k}^{\downarrow}}\leq e^{\epsilon}$. Thus, from Lemma~\ref{lemm5_1}, we can construct an $\epsilon$-LDP-Rec mechanism with input $X\in\left[k\right]$ and an output $Y\in\left[k\right]$ using the random key $U^{\prime}$, and $H\left(U\right)\geq H\left(U^{\prime}\right)$. This completes the proof. \section{Proof of Lemma~\ref{lemm6_1}} \label{AppG} To simplify the proof, we assume that $\left[k\right]=\lbrace 0,\ldots,k-1\rbrace$.
Let $\mathcal{X}^{T}=\left[k\right]^{T}$ denote the set of input databases, and let $Y^{T}=\left(Y^{\left(1\right)},\ldots,Y^{\left(T\right)}\right)$ be the output of the private mechanism $Q$, which takes values in the set $\mathcal{Y}^{T}=\left[k\right]^{T}$. In order to recover $X^{T}$ from $Y^{T}$ and $U$, it is required that each input database $\mathbf{x}\in\mathcal{X}^{T}$ is mapped to each output $\mathbf{y}\in\left[k\right]^{T}$ with a different value of the key $U$. Let the random key $U$ be drawn from an $\epsilon$-DP distribution $\mathbf{q}$. Hence, there exists a bijective function $f:\mathcal{X}^{T}\to\left[k\right]^{T}$ such that \begin{equation} \frac{q_{f\left(\mathbf{x}\right)}}{q_{f\left(\mathbf{x}^{\prime}\right)}}\leq e^{\epsilon} \end{equation} for every pair of neighboring databases $\mathbf{x},\mathbf{x}^{\prime}\in\left[k\right]^{T}$. Let $Q$ be a private mechanism defined as follows \begin{equation}~\label{eqnG_2} Q\left(\mathbf{y}|\mathbf{x}\right)=q_{f\left(\mathbf{x}\oplus\mathbf{y}\right)}, \end{equation} where $\mathbf{x}\oplus\mathbf{y}=\left(x^{\left(1\right)}\oplus y^{\left(1\right)},\ldots,x^{\left(T\right)}\oplus y^{\left(T\right)}\right)$\footnote{We apply the elementwise operation $\oplus$ on the vectors $\mathbf{x}$ and $\mathbf{y}$.}, and $x^{\left(j\right)}\oplus y^{\left(j\right)}=\left[\left(x^{\left(j\right)}+y^{\left(j\right)}\right)\bmod k \right]$, which is addition of $x^{\left(j\right)}$ and $y^{\left(j\right)}$ in the cyclic group of order $k$. For a fixed $\mathbf{y}\in\mathcal{Y}^{T}$, we can easily see that $f\left(\mathbf{x}\oplus\mathbf{y}\right)\neq f\left(\hat{\mathbf{x}}\oplus\mathbf{y}\right)$ for any $\mathbf{x}\neq \hat{\mathbf{x}}$ with $\mathbf{x},\hat{\mathbf{x}}\in\left[k\right]^{T}$, since $\mathbf{x}\oplus\mathbf{y}\neq \hat{\mathbf{x}}\oplus\mathbf{y}$ and $f$ is a bijection. Hence, for every output $\mathbf{y}\in\left[k\right]^{T}$, each input database $\mathbf{x}\in\mathcal{X}^{T}$ is mapped to $\mathbf{y}$ with a different value of the key $U$. Thus, we can recover $X^{T}$ from $Y^{T}$ and $U$. For a fixed $\mathbf{x}\in\left[k\right]^{T}$, we can see that $f\left(\mathbf{x}\oplus\mathbf{y}\right)\neq f\left(\mathbf{x}\oplus\hat{\mathbf{y}}\right)$ for any $\mathbf{y}\neq \hat{\mathbf{y}}$ with $\mathbf{y},\hat{\mathbf{y}}\in\left[k\right]^{T}$, since $\mathbf{x}\oplus\mathbf{y}\neq \mathbf{x}\oplus\hat{\mathbf{y}}$ and $f$ is a bijection. Hence, $Q\left(\mathbf{y}|\mathbf{x}\right)$ is a valid conditional distribution for each $\mathbf{x}\in\left[k\right]^{T}$. It remains to prove that the private mechanism $Q$ given in~\eqref{eqnG_2} is $\epsilon$-DP. In the following, we prove that for every output $\mathbf{y}$ and every pair of neighboring databases $\mathbf{x},\tilde{\mathbf{x}}\in\left[k\right]^{T}$, we have \begin{equation}~\label{eqnG_6} \frac{Q\left(\mathbf{y}|\mathbf{x}\right)}{Q\left(\mathbf{y}|\tilde{\mathbf{x}}\right)}\leq e^{\epsilon}. \end{equation} Therefore, the private mechanism $Q$ is $\epsilon$-DP. The proof is by induction. For the base step, observe that each input database $\mathbf{x}\in\left[k\right]^{T}$ is mapped to $\mathbf{y}_0=\left[0,\ldots,0\right]$ with probability $q_{f\left(\mathbf{x}\right)}$.
Thus, for all neighboring databases $\mathbf{x},\tilde{\mathbf{x}}\in\left[k\right]^{T}$, we get \begin{equation} \frac{Q\left(\mathbf{y}_0|\mathbf{x}\right)}{Q\left(\mathbf{y}_0|\tilde{\mathbf{x}}\right)}=\frac{q_{f\left(\mathbf{x}\right)}}{q_{f\left(\tilde{\mathbf{x}}\right)}}\stackrel{\left(a\right)}{\leq} e^{\epsilon}, \end{equation} where step $\left(a\right)$ follows from the assumption that the distribution $\mathbf{q}$ satisfies $\epsilon$-DP. For the induction step, suppose there exists an output $\mathbf{y}\in\left[k\right]^{T}$ that satisfies~\eqref{eqnG_6}. Let $\tilde{\mathbf{y}}$ be a neighboring output to $\mathbf{y}$, i.e., $\tilde{\mathbf{y}}$ and $\mathbf{y}$ differ in only one element. Without loss of generality, let $y^{\left(i\right)}\neq \tilde{y}^{\left(i\right)}$ while $y^{\left(j\right)}= \tilde{y}^{\left(j\right)}$ for $j\neq i$. Then, for all neighboring databases $\mathbf{x},\tilde{\mathbf{x}}\in\left[k\right]^{T}$, we get \begin{equation} \begin{aligned} \frac{Q\left(\tilde{\mathbf{y}}|\mathbf{x}\right)}{Q\left(\tilde{\mathbf{y}}|\tilde{\mathbf{x}}\right)}&=\frac{q_{f\left(\mathbf{x}\oplus \tilde{\mathbf{y}}\right)}}{q_{f\left(\tilde{\mathbf{x}}\oplus \tilde{\mathbf{y}}\right)}}\\ &=\frac{q_{f\left(\underline{\mathbf{x}}\oplus \mathbf{y}\right)}}{q_{f\left(\underline{\tilde{\mathbf{x}}}\oplus \mathbf{y}\right)}}\\ &\stackrel{\left(a\right)}{\leq} e^{\epsilon} \end{aligned} \end{equation} where $\underline{\mathbf{x}}=\left(\underline{x}^{\left(1\right)},\ldots,\underline{x}^{\left(T\right)}\right)$ such that $\underline{x}^{\left(j\right)}=x^{\left(j\right)}$ for $j\neq i$ and $\underline{x}^{\left(i\right)}=\left[\left(k+x^{\left(i\right)}+y^{\left(i\right)}-\tilde{y}^{\left(i\right)}\right)\bmod k \right]$. Similarly, $\underline{\tilde{\mathbf{x}}}=\left(\underline{\tilde{x}}^{\left(1\right)},\ldots,\underline{\tilde{x}}^{\left(T\right)}\right)$ such that $\underline{\tilde{x}}^{\left(j\right)}=\tilde{x}^{\left(j\right)}$ for $j\neq i$ and $\underline{\tilde{x}}^{\left(i\right)}=\left[\left(k+\tilde{x}^{\left(i\right)}+y^{\left(i\right)}-\tilde{y}^{\left(i\right)}\right)\bmod k \right]$. Since $\mathbf{x}$ and $\tilde{\mathbf{x}}$ are neighboring databases, $\underline{\mathbf{x}}$ and $\underline{\tilde{\mathbf{x}}}$ are also neighboring databases. Step $\left(a\right)$ follows from the assumption that $\mathbf{y}$ satisfies~\eqref{eqnG_6}. From the basis step together with the induction step, we conclude that the mechanism $Q$ given in~\eqref{eqnG_2} is an $\epsilon$-DP-Rec mechanism. Hence, the proof is completed. \subsection{Proof of The First Necessary Condition ($|\mathcal{U}|\geq|\mathcal{Y}^{T}|\geq |\mathcal{X}^{T}|$) of Theorem~\ref{Th2_6}} We prove it in two parts: first we show $|\mathcal{Y}^{T}|\geq |\mathcal{X}^{T}|$, and then we show $|\mathcal{U}|\geq |\mathcal{Y}^{T}|$. {\bf $|\mathcal{Y}^{T}|\geq |\mathcal{X}^{T}|$:} Note that the output is a deterministic function of the input and the random key, i.e., $Y^{T}=g(X^T,U)$ for some deterministic function $g$ (we use $g$ to avoid confusion with the bijection $f$ above). This implies that, for any fixed $u\in\mathcal{U}$, the function $g\left(\mathbf{x},u\right)$ must generate a different output $\mathbf{y}\in \mathcal{Y}^{T}$ for different values of $\mathbf{x}\in\mathcal{X}^{T}$, which implies that $|\mathcal{Y}^{T}|\geq |\mathcal{X}^{T}|$. 
{\bf $|\mathcal{U}|\geq |\mathcal{Y}^{T}|$:} Let $\mathcal{Y}\left(\mathbf{x}\right)\subseteq\mathcal{Y}^{T}$ be a subset of outputs such that the input $X^{T}=\mathbf{x}$ is mapped with non-zero probability to every $\mathbf{y}\in\mathcal{Y}\left(\mathbf{x}\right)$. We claim that $\mathcal{Y}\left(\mathbf{x}\right)=\mathcal{Y}^{T}$ for all $\mathbf{x}\in\mathcal{X}^{T}$ for any $\epsilon$-DP-Rec mechanism. In other words, we claim that each input $\mathbf{x}\in\mathcal{X}^{T}$ should be mapped with non-zero probability to every output $\mathbf{y}\in\mathcal{Y}^{T}$. We prove our claim by contradiction. Suppose that there exist two neighboring databases $\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}^{T}$ such that $\mathcal{Y}\left(\mathbf{x}\right)\neq \mathcal{Y}\left(\mathbf{x}^{\prime}\right)$. Thus, there exists $\mathbf{y}\in\mathcal{Y}\left(\mathbf{x}\right)\setminus \mathcal{Y}\left(\mathbf{x}^{\prime}\right)$ or $\mathbf{y}\in\mathcal{Y}\left(\mathbf{x}^{\prime}\right)\setminus \mathcal{Y}\left(\mathbf{x}\right)$. Hence, we have $\frac{Q\left(\mathbf{y}|\mathbf{x}\right)}{Q\left(\mathbf{y}|\mathbf{x}^{\prime}\right)}\to \infty$ or $\frac{Q\left(\mathbf{y}|\mathbf{x}^{\prime}\right)}{Q\left(\mathbf{y}|\mathbf{x}\right)}\to \infty$, which violates the privacy constraint. Therefore, $\mathcal{Y}\left(\mathbf{x}\right)= \mathcal{Y}\left(\mathbf{x}^{\prime}\right)=\mathcal{Y}^{T}$ for all $\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}^{T}$. Given $\mathbf{x}\in\mathcal{X}^{T}$, we have that $|\mathcal{Y}\left(\mathbf{x}\right)|\leq |\mathcal{U}|$, where $|\mathcal{U}|$ is the number of possible keys. Thus, the random key size is at least the same as the output size: $|\mathcal{U}|\geq |\mathcal{Y}^{T}|$. Hence, the first condition of Theorem~\ref{Th2_6} is necessary to design an $\epsilon$-DP-Rec mechanism. \section{Proof of Lemma~\ref{lemm6_2}} \label{AppH} Let $g_i=i\,k^{T-1}$ for $i\in\lbrace 0,\ldots,k\rbrace$. Observe that the databases $\mathbf{x}_1,\ldots,\mathbf{x}_{g_1}$ have $x^{\left(1\right)}=1$ and the databases $\mathbf{x}_{g_1+1},\ldots,\mathbf{x}_{g_2}$ have $x^{\left(1\right)}=2$. Generally, the databases $\mathbf{x}_{g_{i-1}+1},\ldots,\mathbf{x}_{g_{i}}$ have $x^{\left(1\right)}=i$. Let $C_i=\sum_{a=g_{i-1}+1}^{g_{i}}P^{\mathbf{y}}_a$ for $i\in\left[k\right]$. Consider the following inequalities, which we prove next: \begin{align} H\left(\mathbf{P}^{\mathbf{y}}\right)&=-\sum_{a=1}^{k^{T}}P^{\mathbf{y}}_a\log\left(P^{\mathbf{y}}_a\right)\nonumber\\ &=\sum_{i=1}^{k}C_i\left[-\sum_{a=g_{i-1}+1}^{g_{i}}\frac{P^{\mathbf{y}}_a}{C_i}\log\left(\frac{P^{\mathbf{y}}_a}{C_i}\right)\right]-\sum_{i=1}^{k}C_i\log\left(C_i\right)\\ &\geq\sum_{i=1}^{k}C_iH\left(U_{\min,T-1}\right)-\sum_{i=1}^{k}C_i\log\left(C_i\right)~\label{eqn6_2}\\ &\geq \sum_{i=1}^{k} C_i H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right)~\label{eqn6_3}\\ &=H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right)~\label{eqn6_4} \end{align} The equality in~\eqref{eqn6_4} holds since $\sum_{i=1}^{k}C_i=1$. We begin with inequality~\eqref{eqn6_2}. Observe that the $k^{T-1}$ databases $\mathbf{x}_{g_{i-1}+1},\ldots,\mathbf{x}_{g_{i}}$ have the same value of the first sample $x^{\left(1\right)}=i$, and hence these $k^{T-1}$ databases cover all possible databases in $\mathcal{X}^{T-1}$. Consider a random variable $U^{T-1}$ drawn according to the distribution $\mathbf{P}_{T-1}=\left[\frac{P_{g_{i-1}+1}^{\mathbf{y}}}{C_{i}},\ldots,\frac{P_{g_i}^{\mathbf{y}}}{C_{i}}\right]$. This is a valid distribution with support size $k^{T-1}$. 
Furthermore, since the distribution $\mathbf{P}^{\mathbf{y}}$ is $\epsilon$-DP, the distribution $\mathbf{P}_{T-1}$ is also $\epsilon$-DP. From Lemma~\ref{lemm6_1}, the random key $U^{T-1}$ can be used to construct an $\epsilon$-DP-Rec mechanism with the possibility of recovering the database $X^{T-1}=\left(x^{\left(2\right)},\ldots,x^{\left(T\right)}\right)$ from the output of the mechanism and the random key $U^{T-1}$. Hence, we get \begin{equation} H\left(U^{T-1}\right)\geq H\left(U_{\min,T-1}\right). \end{equation} This proves inequality~\eqref{eqn6_2}. Now, observe that the databases $\mathbf{x}_{i},\mathbf{x}_{g_1+i},\ldots,\mathbf{x}_{g_{k-1}+i}$ are neighboring databases for each $i\in\left[k^{T-1}\right]$, since they differ only in the value of the first sample $x^{\left(1\right)}$. Since the mechanism $Q$ is $\epsilon$-DP-Rec, we have \begin{equation} e^{-\epsilon}\leq \frac{P_{g_a+i}^{\mathbf{y}}}{P_{g_j+i}^{\mathbf{y}}}\leq e^{\epsilon} \qquad \forall a,j\in\lbrace 0,\ldots,k-1\rbrace. \end{equation} Thus, we get \begin{equation} e^{-\epsilon}= \frac{\sum_{i=g_{a-1}+1}^{g_{a}}P^{\mathbf{y}}_i}{e^{\epsilon}\sum_{i=g_{a-1}+1}^{g_{a}}P^{\mathbf{y}}_i}\leq \frac{C_{a}}{C_{j}}=\frac{\sum_{i=g_{a-1}+1}^{g_{a}}P^{\mathbf{y}}_i}{\sum_{i=g_{j-1}+1}^{g_{j}}P^{\mathbf{y}}_i}\leq \frac{e^{\epsilon}\sum_{i=g_{j-1}+1}^{g_{j}}P^{\mathbf{y}}_i}{\sum_{i=g_{j-1}+1}^{g_{j}}P^{\mathbf{y}}_i}= e^{\epsilon} \qquad \forall a,j\in \left[k\right], \end{equation} where both inequalities follow from the elementwise bounds above. Consider a random key $U^{1}$ that has a distribution $\mathbf{C}=\left[C_1,\ldots,C_k\right]$, where $C_a=\sum_{i=g_{a-1}+1}^{g_{a}}P^{\mathbf{y}}_i$. From Lemma~\ref{lemm5_1}, the random key $U^{1}$ can be used to construct an $\epsilon$-LDP-Rec mechanism with the possibility of recovering the sample $X^{\left(1\right)}$ from the output of the mechanism and the random key $U^{1}$. Hence, from Theorem~\ref{Th2_4}, we have \begin{equation} H\left(U^{1}\right)\geq H\left(U_{\min,1}\right). \end{equation} This proves inequality~\eqref{eqn6_3}, and completes the proof of Lemma~\ref{lemm6_2}. \section{Proof of Lemma~\ref{lemmf_1}} \label{AppI} For the random variable $U^{\prime}$, the distribution $\mathbf{q}^{\prime}=\left[q_{1}^{\prime},\ldots,q_{m-1}^{\prime}\right]$ is given by \begin{equation} q^{\prime}_j=\frac{q_j}{1-q_{m}}. \end{equation} Note that the distribution $\mathbf{q^{\prime}}$ is a valid distribution on $U^{\prime}$ since $\sum_{j=1}^{m-1}q^{\prime}_{j}=\sum_{j=1}^{m-1}\frac{q_j}{1-q_{m}}=1$. 
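Before bounding the entropy difference analytically, a minimal numerical sketch may help build intuition. The following Python snippet (ours, for illustration only; it is not part of the formal argument) checks the claim of Lemma~\ref{lemmf_1} on randomly drawn distributions: removing the least-likely symbol and renormalizing never increases the entropy.
\begin{verbatim}
import numpy as np

def entropy(q):
    # Shannon entropy in nats; assumes strictly positive entries
    return -np.sum(q * np.log(q))

def trim_smallest(q):
    # Remove the smallest-probability symbol and renormalize,
    # exactly as in the definition of q' above
    q = np.sort(q)[::-1]              # descending order; q[-1] = q_m
    return q[:-1] / (1.0 - q[-1])

rng = np.random.default_rng(0)
for _ in range(1000):
    q = rng.dirichlet(np.ones(8))     # random distribution with m = 8
    assert entropy(q) >= entropy(trim_smallest(q)) - 1e-12
\end{verbatim}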
Now, we can bound the difference $H\left(U\right)-H\left(U^{\prime}\right)$ as follows \begin{align} H\left(U\right)-H\left(U^{\prime}\right)&=\sum_{j=1}^{m-1}q^{\prime}_{j}\log\left(q^{\prime}_{j}\right)-\sum_{j=1}^{m}q_j\log\left(q_j\right)\nonumber\\ &=\sum_{j=1}^{m-1}\frac{q_j}{1-q_{m}}\log\left(\frac{q_j}{1-q_{m}}\right)-\sum_{j=1}^{m}q_j\log\left(q_j\right)\nonumber\\ &=\sum_{j=1}^{m-1}\frac{q_j}{1-q_{m}}\left[\log\left(\frac{q_j}{1-q_{m}}\right)-\log\left(q_j^{\left(1-q_m\right)}\right)\right]-q_m\log\left(q_m\right)\nonumber\\ &=\sum_{j=1}^{m-1}\frac{q_j}{1-q_{m}}\left[-\log\left(\frac{1-q_m}{q_j^{q_m}}\right)\right]-q_m\log\left(q_m\right)\nonumber\\ &> -\log\left(\sum_{j=1}^{m-1}q_j^{\left(1-q_m\right)}\right)-q_m\log\left(q_m\right)~\label{eqni_1}\\ &\geq -\left(1-q_m\right)\log\left(1-q_m\right)-q_m\log\left(m-1\right)-q_m\log\left(q_m\right)~\label{eqni_2}\\ &\geq \min\left(0,\log\left(\frac{m}{m-1}\right)\right)~\label{eqni_3}\\ &\geq 0 ~\label{eqni_4} \end{align} where~\eqref{eqni_1} follows from the fact that $-\log\left(\cdot\right)$ is a strictly convex function and $q_j/\left(1-q_m\right) >0$ for $j\in\left[m-1\right]$. The inequality~\eqref{eqni_2} follows from solving the convex optimization problem \begin{equation}~\label{opt_1} \begin{aligned} \max_{\lbrace q_j\rbrace_{j=1}^{m-1}}&\ \sum_{j=1}^{m-1}q_j^{\left(1-q_m\right)}\\ \text{s.t.}&\ \sum_{j=1}^{m-1}q_j=1-q_m\\ &\ q_j\geq q_m\ \forall j\in\left[m-1\right] \end{aligned} \end{equation} Note that $x^{a}$ is a concave function on $x\in\mathbb{R}_{+}$ for $0\leq a\leq 1$. Therefore, the objective function in~\eqref{opt_1} is concave in $\lbrace q_j\rbrace$. By solving the optimization problem in~\eqref{opt_1}, we get $q_j^{*}=\frac{1-q_m}{m-1}\geq q_m$ for all $j\in\left[m-1\right]$ and $\sum_{j=1}^{m-1}q_j^{\left(1-q_m\right)}\leq \frac{\left(1-q_m\right)^{\left(1-q_m\right)}}{\left(m-1\right)^{\left(-q_m\right)}}$. Since $\log\left(x\right)$ is a monotonically increasing function, we get $-\log\left(\sum_{j=1}^{m-1}q_j^{\left(1-q_m\right)}\right)\geq -\left(1-q_m\right)\log\left(1-q_m\right)-q_m\log\left(m-1\right)$. The inequality~\eqref{eqni_3} follows from the fact that $-\left(1-q_m\right)\log\left(1-q_m\right)-q_m\log\left(m-1\right)-q_m\log\left(q_m\right)=H_2\left(q_m\right)-q_m\log\left(m-1\right)$ is a concave function of $q_m$. The minimum of a concave function over an interval is attained at one of its endpoints; since $q_m$ is the smallest of $m$ probabilities, we have $q_m\in\left[0,\frac{1}{m}\right]$, and hence it suffices to check $q_m\in\lbrace 0,\frac{1}{m}\rbrace$. Hence, the proof is completed. \section{Proof of Lemma~\ref{lemm3_4}} \label{AppK} We start our proof with Assouad's method. \begin{lemma}~\label{lemm3_2}(Assouad's Method~\cite{duchi2018minimax}) For the family of distributions $\left\{ \mathbf{p}^{\nu}:\nu\in\mathcal{V}=\lbrace -1,1\rbrace^{k/2}\right\}$, and a loss function $\ell\left(\hat{\mathbf{p}},\mathbf{p}\right)=\sum_{j=1}^{k}\phi\left(\hat{p}_j-p_j\right)$ defined in Section~\ref{LDP_AS}, we have \begin{equation}~\label{eqn3_5} \begin{aligned} r^{\ell}_{\epsilon,R,n,k}\left(Q^{n}\right)&=\inf_{\hat{\mathbf{p}}}\sup_{\mathbf{p}\in\Delta_k}\mathbb{E}\left[\ell\left(\hat{\mathbf{p}}\left(Y^{n}\right),\mathbf{p}\right)\right]\\ &\geq \phi\left(\delta\right)\sum_{j=1}^{k/2}\left(1-||\mathbf{M}^{n}_{+j}-\mathbf{M}^{n}_{-j}||_{\text{TV}}\right) \end{aligned} \end{equation} \end{lemma} For completeness, we present the proof of Lemma~\ref{lemm3_2} in Appendix~\ref{AppC}. Let $\lbrace e_j\rbrace_{j=1}^{k/2}$ be the standard basis of $\mathbb{R}^{k/2}$. 
Consider now the following inequalities: {\allowdisplaybreaks \begin{equation}~\label{eqn3_6} \begin{aligned} \sum_{j=1}^{k/2}\left(1-\left\|\mathbf{M}^{n}_{+j}-\mathbf{M}^{n}_{-j}\right\|_{\text{TV}}\right)&\stackrel{\left(a\right)}{\geq} \sum_{j=1}^{k/2}\left(1-\frac{1}{|\mathcal{V}|}\sum_{\nu:\nu_j=1}||\left(\prod_{i=1}^{n}\mathbf{M}^{\nu}_{i}\right)-\left(\prod_{i=1}^{n}\mathbf{M}^{\nu-2e_j}_{i}\right)||_{\text{TV}}\right)\\ &\geq \sum_{j=1}^{k/2}\left(1-\sup_{\nu:\nu_j=1}||\left(\prod_{i=1}^{n}\mathbf{M}^{\nu}_{i}\right)-\left(\prod_{i=1}^{n}\mathbf{M}^{\nu-2e_j}_{i}\right)||_{\text{TV}}\right)\\ &\stackrel{\left(b\right)}{\geq} \sum_{j=1}^{k/2}\left(1-\sup_{\nu:\nu_j=1}\sqrt{\frac{1}{2}D_{\text{KL}}\left(\left(\prod_{i=1}^{n}\mathbf{M}^{\nu}_{i}\right)||\left(\prod_{i=1}^{n}\mathbf{M}^{\nu-2e_j}_{i}\right)\right)}\right)\\ &\stackrel{\left(c\right)}{\geq}\sum_{j=1}^{k/2}\left(1-\sqrt{\frac{1}{2}\sup_{\nu:\nu_j=1}\sum_{i=1}^{n}D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right)\\ &=\frac{k}{2}\left(1-\frac{2}{k}\sum_{j=1}^{k/2}\sqrt{\frac{1}{2}\sup_{\nu:\nu_j=1}\sum_{i=1}^{n}D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right)\\ &\stackrel{\left(d\right)}{\geq} \frac{k}{2}\left(1-\sqrt{\frac{1}{k}\sum_{j=1}^{k/2}\sup_{\nu:\nu_j=1}\sum_{i=1}^{n}D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right)\\ &\geq \frac{k}{2}\left(1-\sqrt{\frac{n}{2}\sup_{j\in\left[k/2\right]}\sup_{i\in\left[n\right]}\sup_{\nu:\nu_j=1}D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right) \end{aligned} \end{equation} } where step $\left(a\right)$ follows from the triangle inequality. Step $\left(b\right)$ follows from Pinsker's inequality, which states that for any two distributions $\mathbf{P}$ and $\mathbf{Q}$, we have $\|\mathbf{P}-\mathbf{Q}\|_{\text{TV}}\leq\sqrt{\frac{1}{2}D_{\text{KL}}\left(\mathbf{P}||\mathbf{Q}\right)}$~\cite[Lemma~$2.5$]{tsybakov2008introduction}. Step $\left(c\right)$ follows from the additivity of the KL-divergence for product distributions. Step $\left(d\right)$ follows from the concavity of the function $\sqrt{x}$ (Jensen's inequality). Substituting from~\eqref{eqn3_6} into~\eqref{eqn3_5}, we get \begin{equation}~\label{eqn3_7} \begin{aligned} r^{\ell}_{\epsilon,R,n,k}&=\inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} r^{\ell}_{\epsilon,R,n,k}\left(Q^{n}\right)\\ &\geq \inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \phi\left(\delta\right)\frac{k}{2}\left(1-\sqrt{\frac{n}{2}\sup_{j\in\left[k/2\right]}\sup_{i\in\left[n\right]}\sup_{\nu:\nu_j=1}D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right)\\ &=\phi\left(\delta\right)\frac{k}{2}\left(1-\sqrt{\frac{n}{2}\sup_{j\in\left[k/2\right]}\sup_{i\in\left[n\right]}\sup_{\nu:\nu_j=1}\sup_{Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}} D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right) \end{aligned} \end{equation} Hence, the proof is completed. \section{Lower Bound on The Minimax Risk Estimation Using Fisher Information}~\label{LDP_F} In this section, we introduce an alternative proof of Theorem~\ref{Th2_1}. Our proof is inspired by the approach in~\cite{barnes2019learning} that uses Fisher information to bound the minimax risk estimation under communication constraints. The main idea of our proof is to formulate a non-convex optimization problem to bound the Fisher information matrix under privacy and randomness constraints. 
Let $\overline{\mathcal{P}}\subset \Delta_k$ be a subset of the simplex $\Delta_k$ defined by $$\overline{\mathcal{P}}=\left\{ \mathbf{p}\in\mathbb{R}^{k}:\sum\limits_{j=1}^{k}p_j=1,\ \frac{1}{k}\leq p_j\leq \frac{2}{k} ,\ p_{j+k/2}=\frac{2}{k}-p_{j},\ \forall j\in\left[k/2\right]\right\}.$$ For every $\mathbf{p}\in\overline{\mathcal{P}}$, the number of free variables is $k/2$, where each parameter $p_{j+k/2}$ is determined by the free variable $p_j$, $\forall\ j\in\left[k/2\right]$. For a given distribution $\mathbf{p}\in\Delta_k$, we define the marginal distribution on the output $Y$ as \begin{equation} \mathbf{M}\left(y|\mathbf{p}\right)=\sum_{j=1}^{k}Q\left(Y=y|X=j\right)p_j. \end{equation} Let $S_{\mathbf{p}}\left(y\right)$ denote the $k/2$-dimensional score function of $Y$ given by \begin{equation} \begin{aligned} S_{\mathbf{p}}\left(y\right)&=\left[S_{p_1}\left(y\right),\ldots,S_{p_{k/2}}\left(y\right)\right]\\ &=\left[\frac{\partial\log\left( \mathbf{M}\left(y|\mathbf{p}\right)\right)}{\partial p_1},\ldots,\frac{\partial\log\left( \mathbf{M}\left(y|\mathbf{p}\right)\right)}{\partial p_{k/2}}\right]. \end{aligned} \end{equation} Then, the Fisher information matrix for estimating $\mathbf{p}\in\overline{\mathcal{P}}$ from $Y$ is given by \begin{equation} I_{Y}\left(\mathbf{p}\right)=\mathbb{E}\left[S_{\mathbf{p}}\left(y\right)S_{\mathbf{p}}\left(y\right)^{T}\right], \end{equation} where the expectation is taken over the randomness in the output $Y$. Now, consider the following inequalities: \begin{equation}~\label{eqn3_4} \begin{aligned} r^{\ell_{2}^{2}}_{\epsilon,R,n,k}&=\inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \inf_{\hat{\mathbf{p}}}\sup_{\mathbf{p}\in\Delta_k}\mathbb{E}\left[\ell_{2}^{2}\left(\hat{\mathbf{p}}\left(\mathbf{Y}^{n}\right),\mathbf{p}\right)\right] \\ &\geq \inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \inf_{\hat{\mathbf{p}}}\sup_{\mathbf{p}\in\overline{\mathcal{P}}}\mathbb{E}\left[\ell_{2}^{2}\left(\hat{\mathbf{p}}\left(\mathbf{Y}^{n}\right),\mathbf{p}\right)\right]\\ &\stackrel{\left(a\right)}{\geq} \frac{\left(k/2\right)^2}{\sup\limits_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \sup\limits_{\mathbf{p}\in\overline{\mathcal{P}}}\text{Tr}\left(I_{Y^{n}}\left(\mathbf{p}\right)\right)+\frac{k}{2}\pi^{2}} \end{aligned} \end{equation} where $I_{Y^{n}}\left(\mathbf{p}\right)$ denotes the Fisher information matrix for estimating $\mathbf{p}$ from $Y^{n}=\left[Y_1,\ldots,Y_n\right]$, and $\text{Tr}\left(I_{Y^{n}}\left(\mathbf{p}\right)\right)$ denotes the trace of the Fisher information matrix $I_{Y^{n}}\left(\mathbf{p}\right)$. Step $\left(a\right)$ follows from the van Trees inequality~\cite[Eqns.~$4$--$8$]{barnes2019learning}. Our goal is to bound the term $\sup_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \sup_{\mathbf{p}\in\overline{\mathcal{P}}}\text{Tr}\left(I_{Y^{n}}\left(\mathbf{p}\right)\right)$. For a given distribution $\mathbf{p}\in\overline{\mathcal{P}}$, the random variables $Y_1,\ldots,Y_n$ are independent. 
As a result, the trace of the Fisher information matrix for estimating $\mathbf{p}$ from $Y_1,\ldots,Y_n$ is bounded by \begin{equation}~\label{eqn3_3} \begin{aligned} \sup\limits_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} &\sup_{\mathbf{p}\in\overline{\mathcal{P}}}\text{Tr}\left(I_{Y^{n}}\left(\mathbf{p}\right)\right) \\ &\stackrel{\left(a\right)}{=}\sup\limits_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace}\sup_{\mathbf{p}\in\overline{\mathcal{P}}}\sum_{i=1}^{n}\text{Tr}\left(I_{Y_i}\left(\mathbf{p}\right)\right)\\ &\leq \sup\limits_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \sup_{\mathbf{p}\in\overline{\mathcal{P}}} n\sup_{i\in\left[n\right]} \text{Tr}\left(I_{Y_i}\left(\mathbf{p}\right)\right)\\ &\stackrel{\left(b\right)}{\leq} \left\{ \begin{array}{ll} 2nk \frac{e^{\epsilon}\left(e^\epsilon-1\right)^2}{\left(e^\epsilon+1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ 2nk \frac{p_{R}^2 \left(e^\epsilon-1\right)^2}{e^{\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right. \end{aligned} \end{equation} where step $\left(a\right)$ follows from the chain rule of the Fisher information~\cite[Lemma~$1$]{zamir1998proof}. Step $\left(b\right)$ follows from Lemma~\ref{lemm3_1} presented below. Substituting from~\eqref{eqn3_3} into~\eqref{eqn3_4}, we get \begin{equation} r^{\ell_{2}^{2}}_{\epsilon,R,n,k}\geq \left\{ \begin{array}{ll} \frac{k\left(e^\epsilon+1\right)^2}{16ne^\epsilon\left(e^\epsilon-1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ \frac{ke^{\epsilon}}{16np_{R}^2 \left(e^\epsilon-1\right)^2}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right. \end{equation} for $n\geq 4\frac{e^{\epsilon}}{p_R^2\left(e^{\epsilon}-1\right)^2}$. \begin{lemma}~\label{lemm3_1} For any $\left(\epsilon,R\right)$-LDP mechanism, the trace of the Fisher information matrix $I_{Y}\left(\mathbf{p}\right)$ is bounded by \begin{equation} \sup_{Q\in\mathcal{Q}_{\left(\epsilon,R\right)}}\sup_{\mathbf{p}\in\overline{\mathcal{P}}} \text{Tr}\left(I_{Y}\left(\mathbf{p}\right)\right)\leq \left\{ \begin{array}{ll} 2k \frac{e^{\epsilon}\left(e^\epsilon-1\right)^2}{\left(e^\epsilon+1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ 2k \frac{p_{R}^2 \left(e^\epsilon-1\right)^2}{e^{\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right. \end{equation} where $H_2\left(\cdot\right)$ is the binary entropy function, and $p_R\leq 0.5$ denotes the inverse of the binary entropy function, $p_R=H_2^{-1}\left(R\right)$. \end{lemma} \begin{proof} For a given distribution $\mathbf{p}\in\overline{\mathcal{P}}$, we have \begin{equation} \begin{aligned} S_{p_j}\left(y\right)&=\frac{\partial\log\left( \mathbf{M}\left(y|\mathbf{p}\right)\right)}{\partial p_j}\\ &=\frac{Q\left(y|j\right)-Q\left(y|j+k/2\right)}{\mathbf{M}\left(y|\mathbf{p}\right)}, \end{aligned} \end{equation} for $j\in\left[k/2\right]$. 
By taking the expectation with respect to $Y$, we get \begin{equation} \mathbb{E}\left[ S_{p_j}\left(Y\right)^2\right]=\sum_{y\in\mathcal{Y}}\frac{\left(Q\left(y|j\right)-Q\left(y|j+k/2\right)\right)^2}{\sum_{j^{\prime}=1}^{k}Q\left(y|j^{\prime}\right)p_{j^{\prime}}}. \end{equation} Thus, the trace of the Fisher information matrix is given by \begin{equation} \begin{aligned} \text{Tr}\left(I_{Y}\left(\mathbf{p}\right)\right)&=\sum_{j=1}^{k/2}\mathbb{E}\left[ S_{p_j}\left(Y\right)^2\right]\\ &=\sum_{j=1}^{k/2}\sum_{y\in\mathcal{Y}}\frac{\left(Q\left(y|j\right)-Q\left(y|j+k/2\right)\right)^2}{\sum_{j^{\prime}=1}^{k}Q\left(y|j^{\prime}\right)p_{j^{\prime}}}\\ &\leq \frac{k}{2}\max_{j\in\left[k/2\right]}\sum_{y\in\mathcal{Y}}\frac{\left(Q\left(y|j\right)-Q\left(y|j+k/2\right)\right)^2}{\sum_{j^{\prime}=1}^{k}Q\left(y|j^{\prime}\right)p_{j^{\prime}}}\\ &\stackrel{\left(a\right)}{\leq} k e^{\epsilon}\max_{j\in\left[k/2\right]}\sum_{y\in\mathcal{Y}}\frac{\left( Q\left(y|j\right)-Q\left(y|j+k/2\right)\right)^2}{Q\left(y|j\right)+Q\left(y|j+k/2\right)}\\ &\stackrel{\left(b\right)}{\leq} \left\{ \begin{array}{ll} 2k \frac{e^{\epsilon}\left(e^\epsilon-1\right)^2}{\left(e^\epsilon+1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ 2k \frac{p_{R}^2 \left(e^\epsilon-1\right)^2}{e^{\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right. \end{aligned} \end{equation} where step $\left(a\right)$ follows from the fact that $Q\left( y|j^{\prime}\right)\geq e^{-\epsilon} Q\left( y|j\right)$ and $Q\left( y|j^{\prime}\right)\geq e^{-\epsilon} Q\left( y|j+k/2\right),\ \forall j^{\prime}\in\left[k\right]$. Thus, we have \begin{equation} \begin{aligned} \sum\limits_{j^{\prime}=1}^{k}Q\left( y|j^{\prime}\right)p_{j^{\prime}}&\geq e^{-\epsilon}\frac{Q\left( y|j\right)+Q\left( y|j+k/2\right)}{2}\sum_{j^{\prime}=1}^{k}p_{j^{\prime}}\\ &=e^{-\epsilon}\frac{Q\left( y|j\right)+Q\left( y|j+k/2\right)}{2} \end{aligned} \end{equation} Step $\left(b\right)$ follows from Lemma~\ref{lemma_1} presented at the end of Section~\ref{LDP_AS}. This completes the proof of Lemma~\ref{lemm3_1}. \end{proof} \section{Introduction}\label{Intro} Differential privacy~\cite{dwork2006calibrating} -- a cryptographically motivated notion of privacy -- has recently emerged as the gold standard in privacy-preserving data analysis. Privacy is provided by guaranteeing that the participation of a single person in a dataset does not change the probability of any outcome by much; this is ensured by randomness -- either by adding noise to (or randomizing) the raw data itself or to a function or statistic computed directly on the data. If the randomization is large enough relative to the change caused by a single person's data, then their participation is indistinguishable, and privacy is attained. An underlying assumption in the body of work on differential privacy has long been that an unlimited amount of randomness is available for use by any privacy mechanism. Under this assumption, the vast majority of the literature has focused on achieving better privacy-utility trade-offs -- see, for example, \cite{DP-book_DworkRoth14,SarwateChaudhuri_DP-survey13} for surveys. In this paper, we instead ask how much randomness we need to achieve a desired level of privacy and utility, and study privacy-utility-randomness trade-offs. 
Answering this question both contributes to our theoretical understanding, and also could support specific emerging applications that we discuss later in the section. We consider local differential privacy (LDP) -- a privacy model that has recently seen use in industrial applications, \cite[RAPPOR]{Rappor}, \cite{Apple_DP}. Here, an untrusted analyst acquires already-privatized pieces of information from a number of users, and aggregates them into a statistic or a machine learning model. Concretely, there are $n$ users who observe i.i.d.~inputs $X_1, X_2,\hdots, X_n$ (user $i$ observes $X_i$) from a finite alphabet $\mathcal{X}$ of size $k$, where each $X_i$ is distributed according to a probability distribution $\mathbf{p}$. Each user has a certain amount of randomness, measured in Shannon entropy, which she uses to randomize her input before publicly sharing it. Our general setup also includes $d$ analysts who would like to use the users' public outputs to estimate $\mathbf{p}$, each at a different level of privacy $\epsilon_1, \ldots,\epsilon_d$, where smaller $\epsilon$ means higher privacy. Each analyst may or may not share some common randomness with the users. We call this general setup {\em successive refinement of privacy}, in which each user shares a public output with the highest privacy level. Then, each analyst uses a shared random key to partially undo the randomization of the public output to get less privacy and higher utility. This general formulation includes several interesting special cases, for which we study the trade-offs between privacy, utility, and randomness. These are: {\sf (i)} There is a single analyst ($d=1$), who shares no randomness with the users and estimates $\mathbf{p}$ with privacy level $\epsilon$. This setting directly generalizes the classical setup of LDP to the case of limited randomness. {\sf (ii)} There are two analysts ($d=2$), who observe the same public outputs from the users; the first analyst who shares common randomness with the users has permission to perfectly recover the original inputs (i.e., privacy level $\epsilon_1\to \infty$), while the second analyst who shares no randomness with the users estimates $\mathbf{p}$ with privacy level $\epsilon_2$. This setting is an adaptation of the classical {\em perfect secrecy} setup of Shannon \cite{shannon1949} to the differential privacy world. In Shannon's setup, Alice (users) wants to send a secret to Bob (the first analyst), which must remain perfectly private from Eve (the second analyst); whereas, in our setting, instead of complete independence, we only require that the secret remain hidden from Eve in the sense of differential privacy. We call this setup {\em private-recoverability}. {\sf (iii)} There are $d>1$ analysts, who share some common randomness with the users. Analyst $i$ would like to estimate $\mathbf{p}$ with privacy level $\epsilon_i$, where $\epsilon_1>\ldots>\epsilon_d$.\footnote{We can assume, without loss of generality, that $\epsilon_j>\epsilon_{j+1}, \forall j\in[d-1]$; otherwise, we can group the equal $\epsilon_j$'s together and the corresponding analysts can use the same privatized data that the users share with them.} \begin{figure}[t] \centerline{\includegraphics[scale=0.3]{SM.pdf}} \caption{We have $n$ users, each observing a sample $X_i$. A private randomization mechanism $Q_i$ is applied to $X_i$ using a random key $U_i$. Two analysts want to estimate $\mathbf{p}$. 
Each analyst requires a different privacy level.} \label{Fig1_1} \end{figure} \subsection{Motivation} In general, designing private mechanisms with a small amount of randomness can be translated into communication efficiency and/or storage efficiency. For instance, when there are multiple privacy levels, each user needs to send additional information to some analysts, which is a function of the randomness used in the mechanism. Hence, using a smaller amount of randomness implies delivering a smaller number of bits to each analyst. The private-recoverability setup ($d=2$) can be useful in applications such as census surveys~\cite{Dowrk_US-Census19} that collect large amounts of data and are prohibitively expensive to repeat. Using our approach, we can store the randomized data on a public database (second analyst) without compromising the privacy of individuals; we can also give to the first analyst (e.g., the government, who may wish to exactly calculate the population count, or verify the validity of census results) a secret key, that can be used to ``de-randomize'' the publicly stored data and perfectly reconstruct the user inputs. An alternative approach would be to store the data twice (once randomized in a public database and once in a secure government database), which would incur an additional storage cost, as also shown in Section \ref{Num}. Another alternative would be to use a cryptographic scheme to encode the user inputs; in this case, the resulting outputs may not allow public use in an efficient manner.\footnote{In principle, we could use homomorphic encryption that allows to compute a function on the encrypted data without decrypting it explicitly; however, such encryption schemes are computationally inefficient and expensive to deploy.} The multi-level privacy setting $d>1$ illustrates a new technical capability of hierarchical access to the raw data that might inspire and support a variety of applications. For example, given data collected from a fleet of autonomous cars, we could imagine different privacy access levels provided to the car manufacturer itself, to police departments, to applications interested in online traffic regulation, to applications interested in long-term traffic predictions or road planning. Essentially, this capability enables providing the desired utility needed for each application while maintaining the maximum possible amount of privacy. \subsection{Contributions} Our contributions are as follows. $\bullet$ For the single analyst case $(d=1)$, we characterize the trade-off between randomness and utility for a fixed privacy level $\epsilon$, by proving an information-theoretic lower bound and a matching upper bound for a minimax private estimation problem. $\bullet$ For private-recoverability $(d=2)$, we derive an information-theoretic lower bound on the minimum randomness required to achieve it, and prove that the Hadamard scheme proposed in~\cite{acharya2018hadamard} is order optimal. We also show that we cannot reuse random keys over time while preserving privacy of each user. Hence, to preserve privacy of $T$ samples, any $\epsilon$-DP mechanism has to use an amount of randomness equal to $T$ times the amount of randomness used for a single data sample. We also extend this result to estimating {\em heavy hitters}. $\bullet$ In the multi-level privacy $(d>1)$ setting, a trivial scheme is to use the $d=1$ scheme multiple times, separately for each analyst. 
We propose instead a non-trivial scheme that uses a smaller amount of randomness with no sacrifice in utility. Our scheme publicly announces the users' outputs, and allows each analyst to remove an appropriate amount of (shared) randomness with the help of an associated key. This approach enables efficient hierarchical access to the data (for example, when analysts have different levels of authorized access). Overall, our investigation into privacy-utility-randomness trade-offs for LDP yields (optimal) privacy mechanisms that use randomness more economically. These include new guarantees for existing schemes such as the Hadamard mechanism, as well as new multi-user and multi-level mechanisms that allow for hierarchically private data access. \subsection{Related work} To the best of our knowledge, the role of limited randomness has not been previously explored in the context of either local or global differential privacy.\footnote{Except for a notable exception of \cite{imperfect-randomness_DP12}, which showed that imperfect source of randomness allows efficient protocols with global differential privacy. This is different from our problem, where our goal is to quantify the amount of randomness required (measured in terms of Shannon entropy) in local differential privacy and give privacy-utility-randomness trade-offs.} In this work, we consider local differential privacy in the context of distribution estimation and heavy hitter estimation for reasons of simplicity. Popular local differentially private mechanisms for distribution estimation include RAPPOR~\cite{Rappor}, randomized response (RR)~\cite{warner1965randomized}, subset selection (SS)~\cite{Ye2018,wang2016mutual}, and the Hadamard response (HR)~\cite{acharya2018hadamard}. The randomized response mechanism is known to be order optimal in the low privacy regime, and the RAPPOR scheme in the high privacy regime~\cite{kairouz2016discrete,kairouz2014extremal}. Subset selection and the Hadamard mechanisms are order optimal in utility for all privacy regimes; additionally, the Hadamard mechanism has the advantage of communication and computational efficiency for all privacy regimes~\cite{acharya2018hadamard}. We build on this extensive literature, and show that the Hadamard mechanism is also near-optimal in terms of the amount of randomness used. Heavy hitter estimation under local differential privacy has been studied in~\cite{bassily2015local,qin2016heavy,hsu2012distributed, bassily2017practical, bun2018heavy}, again with unrestricted randomness. Our work adds to this line of work by showing that the Hadamard mechanism is capable of achieving order-optimal accuracy for heavy hitter estimation {\em{while}} using an order-optimal amount of randomness. Local differential privacy in a multi-user setting where the users and the server may have some shared randomness has also been looked at in prior work -- see~\cite{bassily2015local, acharya2019communication, acharya2018test} among others. These works, however, investigate other, orthogonal aspects of such multi-user protocols. Local differentially private mechanisms with bounded communication have also been studied by~\cite{acharya2019communication}; in their setup, multiple agents transmit their data in a locally private manner to an aggregator, and communication is measured by the number of bits transmitted by each user. 
They consider both private and public coin mechanisms, and show that the Hadamard mechanism is near optimal in terms of communication for both distribution and heavy-hitter estimation; however, unlike ours, their mechanisms do not impose any randomness constraints. Our results in the multiple analyst setting are also related to privacy amplification by stochastic postprocessing~\cite{balle2019privacy} -- which analyzes the privacy risk achieved by applying a (stochastic) post-processing mechanism to the output of a differentially private algorithm. While these methods might also be used to provide multi-level privacy to multiple analysts, our work is different from~\cite{balle2019privacy} in the following aspects. First, their privacy amplification methodology does not apply to pure DP and applies instead to approximate DP, while our work focuses on pure DP. Second, the work in~\cite{balle2019privacy} does not include a randomness constraint, and finally, a closer look at their mechanism reveals that it does not use the optimal amount of randomness. Finally, a line of work on locally differentially private estimation considers the case when the inputs comprise i.i.d.\ samples from the same distribution. ~\cite{duchi2018minimax,duchi2019lower} derive lower and upper bounds for estimation under LDP in this setting -- their work considers that all users observe i.i.d.\ samples from the same distribution, and the goal for each user is to preserve privacy of her raw sample. Our work is also different from this setting in that we focus on designing private mechanisms with finite randomness. \subsection{Paper organization} Section~\ref{Prelm} formally defines LDP mechanisms under randomness constraints and presents the distribution and heavy hitter estimation problem formulations. Section~\ref{Res} states our main results for the single-level privacy, private-recoverability, and multi-level privacy settings. Section~\ref{Num} presents numerical evaluations on the effect of parameters such as $n,\epsilon,d$ on the estimation error and the required randomness. Section~\ref{LDP} derives an information-theoretic lower bound and an upper bound (achievability scheme) on the minimax risk estimation under randomness and privacy constraints for a single analyst. Section~\ref{Privlv} proposes a new LDP mechanism for the multi-level privacy setting $d>1$. Section~\ref{Recov} presents the necessary and sufficient conditions on the randomness to design an $\epsilon$-LDP mechanism with an input recoverability requirement. Section~\ref{Recov_GDP} introduces the necessary and sufficient conditions on the randomness to preserve privacy of a sequence of samples per user. \section{Main Results} \label{Res} This section formally presents our main results. First, we characterize the minimax risk estimation under randomness and privacy constraints in Theorems~\ref{Th2_1} and~\ref{Th2_2} for single-level privacy ($d=1$). Then, we propose in Theorem~\ref{Th2_3} a new LDP privacy mechanism that provides a hierarchical access to users' samples with different privacy levels (multi-level privacy $d>1$). We present in Theorem~\ref{Th2_4} the necessary and sufficient conditions on the randomness to design an LDP mechanism with an input recoverability requirement. Finally, we present in Theorem~\ref{Th2_6} the necessary and sufficient conditions on the randomness to preserve privacy of a sequence of samples under a recoverability constraint. 
\subsection{Single-level Privacy, $d=1$} We here study the fundamental trade-off between randomness and utility for a fixed privacy level $\epsilon$. In the following theorem, we derive a lower bound on the minimax risks $r^{\ell_{2}^{2}}_{\epsilon,R,n,k}$ and $r^{\ell_{1}}_{\epsilon,R,n,k}$ defined in~\eqref{eqn1_1}. \begin{theorem}~\label{Th2_1} For every $\epsilon,R\geq0$ and $k,n\in\mathbb{N}$, the minimax risk under $\ell_2$-norm loss is bounded by \begin{equation} r^{\ell_{2}^{2}}_{\epsilon,R,n,k}\geq \tau= \begin{cases} \frac{k\left(e^{\epsilon}+1\right)^2}{16ne^{\epsilon}\left(e^{\epsilon}-1\right)^{2}}& \text{if}\ R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right),\\ \frac{ke^{\epsilon}}{16 n p_R^2 \left( e^{\epsilon}-1\right)^{2}}& \text{if}\ R< H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right), \end{cases} \end{equation} where $p_{R}\leq 0.5$ is the inverse of the binary entropy function $p_R=H_2^{-1}\left(R\right)$. The minimax risk under $\ell_1$-norm loss is bounded by $r^{\ell_{1}}_{\epsilon,R,n,k}\geq \sqrt{k\tau/8}$. \end{theorem} The main contribution in our proof (see Section~\ref{LDP_AS}) is a formulation of a non-convex optimization problem to bound the minimax risk under privacy and randomness constraints, and obtaining a tight bound on its solution for every value of privacy level $\epsilon$ and randomness $R$. \begin{remark} In~\cite{Ye2018}, the authors derive the following lower bound on the minimax risk estimation without randomness constraints ($R\to\infty$) \begin{equation}~\label{eqn2_1} r^{\ell_{2}^{2}}_{\epsilon,\infty,n,k}\geq \left\{ \begin{array}{ll} \frac{k\left(e^{\epsilon}+1\right)^2}{512n\left(e^{\epsilon}-1\right)^{2}}& \text{for}\ e^{\epsilon}< 3, \\ \frac{k}{64 n \left( e^{\epsilon}-1\right)}& \text{for}\ e^{\epsilon}\geq 3. \end{array} \right. \end{equation} For $\epsilon=\mathcal{O}(1)$ and $R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ (which includes $R\to\infty$ as well), our lower bound from Theorem~\ref{Th2_1} gives $r^{\ell_{2}^{2}}_{\epsilon,R,n,k}=\Omega\left(\frac{k}{n\epsilon^{2}}\right)$, which coincides with (\ref{eqn2_1}). However, our lower bound is tighter (i.e., larger) for all values of $\epsilon\in\left[0,\infty\right)$, owing to smaller constant factors in the denominator. \end{remark} We next show that there exists an achievable scheme for all values of $\epsilon,R\geq0$ that matches (up to a constant factor) the lower bound given in Theorem~\ref{Th2_1} for $\epsilon=\mathcal{O}\left(1\right)$ and $R\geq 0$. \begin{theorem}~\label{Th2_2} For any $\epsilon,R\geq0$, there exist $\left(\epsilon,R\right)$-LDP mechanisms $Q_1,\hdots,Q_n$ and an estimator $\hat{\mathbf{p}}$ such that the error $\mathcal{E}:=\sup_{\mathbf{p}\in\Delta_k}\mathbb{E}\left[\|\hat{\mathbf{p}}\left(Y^{n}\right)-\mathbf{p}\|_{2}^{2}\right]$ is bounded by \begin{equation}~\label{eqn2_3} \mathcal{E} \leq \eta= \left\{ \begin{array}{ll} \frac{2k \left(e^{\epsilon}+1\right)^{2}}{n \left(e^{\epsilon}-1\right)^2} & \text{if}\ R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right), \\ \frac{2k e^{2\epsilon}}{np_R^{2} \left(e^{\epsilon}-1\right)^2} & \text{if}\ R< H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right). \end{array} \right. \end{equation} The error under $\ell_1$-norm loss is bounded by $\sup_{\mathbf{p}\in\Delta_k}\mathbb{E}\left[\|\hat{\mathbf{p}}\left(Y^{n}\right)-\mathbf{p}\|_{1}\right]\leq \sqrt{k\eta}$. 
\end{theorem} We prove Theorem~\ref{Th2_2} constructively in Section~\ref{LDP-AC}, by adapting the Hadamard response scheme given in~\cite{acharya2019communication} to our setting of limited randomness. Theorems~\ref{Th2_1} and \ref{Th2_2} together imply the following characterization for $r^{\ell_{2}^{2}}_{\epsilon,R,n,k}$ and $r^{\ell_{1}}_{\epsilon,R,n,k}$, for the case when $\epsilon=\mathcal{O}(1)$: \begin{corollary}~\label{Cor2_1} For $\epsilon=\mathcal{O}\left(1\right)$ and $R\geq0$, we have \begin{equation} r^{\ell_{2}^{2}}_{\epsilon,R,n,k}= \begin{cases} \Theta\left(\frac{k}{n\epsilon^{2}}\right) & \text{if}\ R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right),\\ \Theta\left(\frac{k}{np_R^2\epsilon^{2}}\right) & \text{if}\ R < H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right), \end{cases} \end{equation} and $r^{\ell_{1}}_{\epsilon,R,n,k}=\sqrt{k r^{\ell_{2}^{2}}_{\epsilon,R,n,k}}$. \end{corollary} We next provide a comparison between well-known mechanisms from a randomness perspective. Table~\ref{t1} describes the amount of randomness required to implement different $\epsilon$-LDP mechanisms: RAPPOR~\cite{Rappor}, Randomized Response (RR)~\cite{warner1965randomized}, Hadamard Response (HR)~\cite{acharya2018hadamard}, and Binary Hadamard (BH)~\cite{acharya2019communication}. \begin{table}[t!] \centering \begin{tabular}{ |c || c | c | c | c | } \hline & RAPPOR & RR & HR & BH \\ \hline\hline Randomness per user ($R$ in bits) & $kH_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ & $\log\left(k-1+e^{\epsilon}\right)-\frac{\epsilon e^{\epsilon}}{k-1+e^{\epsilon}}$ & $\leq\log\left(2k\frac{3e^{\epsilon}-1}{e^{\epsilon}}\right)-\frac{\epsilon e^{\epsilon}}{3e^{\epsilon}-1}$ & $H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ \\ \hline Minimax risk ($r_{\epsilon,R,n,k}^{\ell_{2}^{2}}$) & $\mathcal{O}\left(\frac{k}{n\epsilon^2}\right)$ & $\mathcal{O}\left(\frac{k^2}{n\epsilon^2}\right)$ & $\mathcal{O}\left(\frac{k}{n\epsilon^2}\right)$ & $\mathcal{O}\left(\frac{k}{n\epsilon^2}\right)$ \\ \hline \end{tabular} \caption{Randomness requirement to implement each private mechanism and its corresponding minimax risk under $\ell_{2}^{2}$ loss function for $\epsilon=\mathcal{O}\left(1\right)$.}~\label{t1} \end{table} Observe that all private mechanisms are order optimal in the high privacy regime except for the RR scheme. However, only the BH scheme uses the smallest amount of randomness $R=H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ per user, while the other mechanisms require a larger amount of randomness. Table~\ref{t1} considers only the regime of randomness $R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$, since the privacy-utility trade-off in the regime $R< H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ had not been studied before. Corollary~\ref{Cor2_1} characterizes the privacy-utility trade-offs for all regions of randomness $R$. \begin{remark}\label{remark_critical-rand} Observe that when $R< H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$, there exists a trade-off between $R$ and $r^{\ell_{2}^{2}}_{\epsilon,R,n,k}$ -- as $R$ increases, $r^{\ell_{2}^{2}}_{\epsilon,R,n,k}$ decreases proportionally to $1/p_R^{2}$. However, when $R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$, the minimax risk is not affected by $R$. 
Hence, $R=H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ is a critical point that defines the minimum amount of randomness required for each user to generate an $\epsilon$-LDP mechanism, while achieving the optimal utility at the analyst. \end{remark} \begin{remark} Corollary~\ref{Cor2_1} also characterizes the number of users $n$ (sample complexity) required to estimate the distribution $\mathbf{p}$ with estimation error at most $\alpha$, for a given privacy level $\epsilon$ and a randomness budget of $R$ bits per user (where $k$ is the input alphabet size): \begin{equation} n= \begin{cases} \Theta\left(\frac{k}{\alpha \epsilon^2}\right) & \text{if}\ R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right),\\ \Theta\left(\frac{k}{\alpha p_{R}^{2}\epsilon^2}\right) & \text{if}\ R < H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right). \end{cases} \end{equation} A remark analogous to Remark~\ref{remark_critical-rand} also holds here. \end{remark} \subsection{Multi-level Privacy, $d>1$}\label{multilevel-privacy} Here, we study the case of $d$ different analysts, with privacy levels $\epsilon_1>\cdots>\epsilon_d$, and $\epsilon_j=\mathcal{O}\left(1\right)$ for $j\in\left[d\right]$. A trivial scheme is to use the $d=1$ scheme multiple times, separately for each analyst: each user $i\in\left[n\right]$ generates $d$ samples $\left(Y_{i}^{1},\ldots,Y_{i}^{d}\right)$ from her input sample $X_i$. The $j$th sample $Y_{i}^{j}$ is delivered privately to the $j$th analyst. Note that the $j$th sample must be generated from an $\epsilon_j$-LDP mechanism. It then follows from Corollary~\ref{Cor2_1} that the minimum risk for the $j$th analyst is given by $r^{\ell_{2}^{2}}_{\epsilon_j,\infty,n,k}=\Theta\left(\frac{k}{n\epsilon_j^{2}}\right) $, which requires each user to have $R_j\geq H_2\left(\frac{e^{\epsilon_j}}{e^{\epsilon_j}+1}\right)$ bits of randomness, and results in a total amount of randomness $$R_{\text{total}}^{\text{trivial}}=\sum_{j=1}^{d} H_2\left(\frac{e^{\epsilon_j}}{e^{\epsilon_j}+1}\right).$$ We propose a new scheme, in which each user generates a single output that is publicly accessible by all analysts; each analyst is given a part of the random key that was used to privatize the data, and leverages this key to reduce the perturbation of the public output. The next theorem is proved in Section~\ref{Privlv}. \begin{theorem}~\label{Th2_3} There exists a private mechanism using a total amount of randomness given by $R_{\text{total}}^{\text{proposed}}=\sum_{j=1}^{d} H_2\left(q_{j}\right)$, such that the $j$th analyst achieves the minimum risk estimation $r^{\ell_{2}^{2}}_{\epsilon_j,\infty,n,k}=\Theta\left(\frac{k}{n\epsilon_j^{2}}\right)$, while preserving privacy of each user with privacy level $\epsilon_j$ for $j\in[d]$. Here, for every $j\in[d]$, $q_j$ is defined as follows (where $z_j=\frac{1}{e^{\epsilon_j}+1}$): \begin{equation}~\label{eqn2_2} q_j= \begin{cases} z_j&\text{if}\ j=1, \\ \frac{z_{j}-z_{j-1}}{1-2z_{j-1}}&\text{if}\ j>1. \end{cases} \end{equation} \end{theorem} \begin{remark} Note that $z_j>z_{j-1}$ as $\epsilon_{j-1}> \epsilon_{j}$. Moreover, we also have $z_j=1/\left(e^{\epsilon_j}+1\right)< 0.5$ for all $j\in\left[d\right]$. As a result, we can show that for $j>1$, we have \begin{equation} q_j=\frac{z_j-z_{j-1}}{1-2z_{j-1}}=z_j-\frac{z_{j-1}\left(1-2z_{j}\right)}{1-2z_{j-1}}<z_j. \end{equation} Hence, we get that $H_2\left(q_j\right)<H_2\left(z_j\right)$ holds for all $j>1$, since $q_j<z_j<0.5$ and $H_2$ is increasing on $\left[0,0.5\right]$. Therefore, our proposed scheme uses a strictly smaller amount of randomness than the trivial scheme. 
\end{remark} \subsection{Private Recoverability, $d=2$} We here consider a legitimate analyst with permission to access the data $\lbrace X_i\rbrace_{i=1}^{n}$, i.e., $\epsilon_1\to\infty$, and an untrusted analyst with privacy level $\epsilon_2<\infty$. The $i$th user uses a random private key $U_i$ and her mechanism $Q_i$ to generate an output $Y_i$ that is publicly accessible by both analysts. \begin{ddd}[\textbf{LDP-Rec mechanisms}]\label{LDP-Rec} We say that a private mechanism $Q$ is $\epsilon$-LDP-Rec, if it is an $\epsilon$-LDP mechanism and it is possible to recover the input $X$ from the output $Y$ and the key $U$. \end{ddd} We derive necessary and sufficient conditions on the random keys $\lbrace U_i \rbrace $ and the mechanisms $\lbrace Q_i \rbrace$, such that the legitimate analyst can recover $X_i$ from observing $U_i$ and $Y_i$, while preserving privacy level $\epsilon_2$ against the untrusted analyst who does not have access to the keys. \begin{figure}[t] \centerline{\includegraphics[scale=0.4,trim={3.5cm 9.8cm 15cm 1.6cm},clip]{SM_Recover.pdf}} \caption{Private-Recoverability: Alice has data $X$. An $\epsilon$-LDP-Rec mechanism $Q$ is applied to $X$ using a random key $U$ to generate output $Y$. Bob can recover $X$ from $Y$ and $U$. Eve only observes $Y$.} \label{Fig2_1} \end{figure} We first consider a simplified setting as shown in Figure~\ref{Fig2_1}. Alice (an arbitrary user\footnote{Since the input samples $X_1,\ldots,X_n$ are i.i.d., and the random keys $U_1,\ldots,U_n$ are independent random variables, it is sufficient to study the private-recoverable mechanism for any single user.}) has a sample $X\in\mathcal{X}$. Alice wants to send her sample $X$ to Bob (the legitimate analyst) while keeping her sample $X$ private against Eve (the untrusted analyst) with differential privacy level $\epsilon$. Eve has access to the message between Alice and Bob. However, Alice has a random key $U$ shared with Bob that Eve does not have access to. Let $Y$ be the output of the private mechanism $Q$ used by Alice. The following theorem (which we prove in Section~\ref{Recov-A}) provides necessary and sufficient conditions on the random key $U$ and the privatized output $Y$ to generate an $\epsilon$-LDP-Rec mechanism. \begin{remark}\label{private-recover-general} Observe that in the simplified model in Figure~\ref{Fig2_1}, we do not impose any assumptions on the input $X$. Furthermore, we do not impose any assumptions about the task for Eve. Hence, our model and results in Theorem~\ref{Th2_4} are applicable to any task for Eve including distribution estimation, heavy hitter estimation, or learning from the sample $X$. \end{remark} \begin{theorem}~\label{Th2_4} Let $Q$ be an $\epsilon$-LDP-Rec mechanism that uses a random key $U\in\mathcal{U}$ and an input $X\in\mathcal{X}$ to produce a privatized output $Y\in\mathcal{Y}$. The following conditions are necessary and sufficient to allow recovery of $X$ from $(U,Y)$:\\ (1) $|\mathcal{U}|\geq |\mathcal{Y}| \geq |\mathcal{X}|$. 
\\ (2) The entropy of the random key must satisfy $H\left(U\right)\geq H\left(U_{\min}^{s^{*}}\right)$, where $s^{*}=\arg\min\limits_{s\in\lbrace \ceil{l},\floor{l}\rbrace}H\left(U_{\min}^{s}\right)$ for $l=k\frac{e^{\epsilon}\left(\epsilon-1\right)+1}{\left(e^{\epsilon}-1\right)^2}$ and $U_{\min}^{s}$ is a random variable with support size equal to $|\mathcal{X}|=k$ and has the following distribution: \begin{align*} \mathbf{q}_{\min}^{s}=[1/t,\hdots,1/t,e^{\epsilon}/t,\hdots,e^{\epsilon}/t], \end{align*} where $t=(se^{\epsilon}+k-s)$, the first $k-s$ terms are equal to $1/t$ and the remaining $s$ terms are equal to $e^{\epsilon}/t$. \end{theorem} We now discuss the effect of $\epsilon$ on the structure of the optimal distribution $\mathbf{q}_{\min}^{s^*}$ for $U_{\min}^{s^*}$: {\sf (i)} When $\epsilon \gg \log(k)$, the optimal $s^*=1$, and the corresponding $\mathbf{q}_{\min}^{1}$ has its first $k-1$ terms equal to $1/(e^{\epsilon}+k-1)$ and the last term equal to $e^{\epsilon}/(e^{\epsilon}+k-1)$. This distribution is equivalent to the one used in the Randomized Response (RR) model proposed in~\cite{warner1965randomized}. {\sf (ii)} When $\epsilon\to0$, the optimal $s^*$ is around $k/2$, and the corresponding $\mathbf{q}_{\min}^{k/2}$ has its first $k/2$ terms equal to $\frac{2}{k\left(e^{\epsilon}+1\right)}$ and the remaining $k/2$ terms equal to $\frac{2e^{\epsilon}}{k\left(e^{\epsilon}+1\right)}$. {\sf (iii)} When $\epsilon=0$, the distribution $\mathbf{q}_{\min}^{s}$ becomes uniform (irrespective of the value of $s$). Thus, when $\epsilon$ decreases, the distribution $\mathbf{q}_{\min}^{s}$ approaches the uniform distribution. On the other hand, when $\epsilon$ increases, the distribution $\mathbf{q}_{\min}^{s}$ becomes skewed. It turns out that the minimum randomness required to generate an $\epsilon$-LDP-Rec mechanism for input recoverability is a non-increasing function of $\epsilon$. In other words, more privacy requires more randomness. \begin{remark} Consider the cryptosystem introduced by Shannon in~\cite{shannon1949}, where Alice wants to send a secure message $X$ to Bob using a shared random key $U$. Let $Y$ be the encrypted message sent to Bob. Eve eavesdrops on the channel between Alice and Bob and observes $Y$. This cryptosystem achieves {\em perfect secrecy} if and only if $I\left(X;Y\right)=0$. Shannon showed that perfect secrecy requires $H\left(U\right)\geq H\left(X\right)$. Since the distribution of $X$ is not known to any node (Alice, Bob, and Eve), this implies $H\left(U\right)\geq\max_{p_X\in\Delta_k}H(X)=\log k$. We can easily verify that an $\epsilon$-LDP-Rec mechanism yields a cryptosystem whose secrecy leakage satisfies $\max_{\mathbf{p}\in\Delta_k}I\left(X;Y\right)\leq \epsilon$. Hence, a perfect secrecy system with unknown input distribution is a $0$-LDP-Rec mechanism, which is a special case of our problem. Moreover, the $\epsilon$-LDP-Rec mechanism with data recovery is a cryptosystem leaking an amount of information bounded by $\max_{\mathbf{p}\in\Delta_k}I\left(X;Y\right)\leq \epsilon$. \end{remark} Observe that Theorem~\ref{Th2_4} does not provide performance guarantees for Eve; it only guarantees privacy for Alice with respect to Eve, and recoverability for Bob. Hence, we can ask the question: Does there exist an $\epsilon$-LDP-Rec mechanism using the smallest amount of randomness and guaranteeing the smallest error for distribution estimation or heavy hitter estimation for Eve (the untrusted analyst)? In the following theorem (which we prove in Section~\ref{Recov-B}), we show that such a mechanism exists. 
\begin{theorem}~\label{Th2_5} The Hadamard Response mechanism from~\cite{acharya2018hadamard} satisfies private-recoverability, and is utility-wise order-optimal for distribution estimation and heavy hitter estimation while using an order-optimal amount of randomness. \end{theorem} \subsection{Sequence of Distribution (or Heavy Hitter) Estimation} We again start from the setting in Figure~\ref{Fig2_1}, but with the modification that Alice (an arbitrary user) wants to send to Bob (a legitimate analyst) $T$ independent samples $X^{T}=\left(X^{\left(1\right)},\ldots,X^{\left(T\right)}\right)$, where $X^{\left(t\right)}\in\mathcal{X}$, while keeping them private against Eve (an untrusted analyst) with differential privacy level $\epsilon$. Eve has access to the sequence of outputs $Y^{T}=\left(Y^{\left(1\right)},\ldots,Y^{\left(T\right)}\right)$ that Alice produces, but not to the random key $U$ that Alice and Bob share. Note that each output $Y^{\left(t\right)}$ might be a function of all input samples $X_1^{t}=\left(X^{\left(1\right)},\ldots,X^{\left(t\right)}\right)$ and the key $U$. Furthermore, the output $Y^{\left(t\right)}$ can take values from a set $\mathcal{Y}^{\left(t\right)}$ that is not required to be the same as $\mathcal{Y}^{\left(t^{\prime}\right)}$ for $t\neq t^{\prime}$. Let $\mathcal{Y}^{T}=\mathcal{Y}^{\left(1\right)}\times\cdots\times \mathcal{Y}^{\left(T\right)}$. The following theorem is proved in Section~\ref{Recov_GDP}. We can define $\epsilon$-DP-Rec mechanisms in the same way as we defined $\epsilon$-LDP-Rec mechanisms in Definition~\ref{LDP-Rec}: A mechanism $Q$ is $\epsilon$-DP-Rec, if it satisfies \eqref{eqn1_2}, and allows the recovery of input $X$ from the output $Y$ and the key $U$. \begin{theorem}~\label{Th2_6} Let $Q$ be an $\epsilon$-DP-Rec mechanism that uses a random key $U\in\mathcal{U}$ and an input database $X^{T}\in\mathcal{X}^{T}$ to create an output $Y^{T}\in\mathcal{Y}^{T}$. The following conditions are necessary and sufficient to allow recovery of the input $X^{T}$ from $(U,Y^{T})$.\\ (1) $|\mathcal{U}|\geq |\mathcal{Y}^{T}| \geq |\mathcal{X}^{T}|$. \\ (2) The entropy of the random key must satisfy $H\left(U\right)\geq T \min\limits_{s\in\lbrace \ceil{l},\floor{l}\rbrace} H\left(U_{\min}^{s}\right)$, where $U_{\min}^{s}$ is the same random variable with support size $|\mathcal{X}|=k$, as defined in Theorem~\ref{Th2_4}. \end{theorem} \begin{figure}[t!] \centering \begin{subfigure}[t]{0.49\linewidth} \centerline{\includegraphics[scale=0.32]{n_vs_l1_eps_1_R.pdf}} \caption{$\ell_1$-estimation error for input alphabet size $k=1000$, privacy level $\epsilon=1$, and $\mathbf{p}=\text{Geo}\left(0.8\right)$.} ~\label{Fig4_1} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\linewidth} \centerline{\includegraphics[scale=0.407,width=7cm,height=5cm]{esp_vs_error.pdf}} \caption{Estimation error for input alphabet size $k=1000$, number of users $n=500000$, and $\mathbf{p}=\text{Geo}\left(0.8\right)$.} ~\label{Fig_2} \end{subfigure} \caption{Single-level privacy} \end{figure} Theorem~\ref{Th2_6} shows that the minimum amount of randomness required to preserve privacy of $T$ samples is equal to $T$ times the amount of randomness required to preserve privacy of a single sample. That is, for $\epsilon$-DP-Rec, it is optimal to use an $\epsilon$-LDP-Rec mechanism $T$ times. 
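To make the key-entropy bounds of Theorems~\ref{Th2_4} and~\ref{Th2_6} concrete, the following Python sketch (ours; the function and variable names are illustrative and not from any reference implementation) computes the distribution $\mathbf{q}_{\min}^{s}$ and the per-sample bound $H\left(U_{\min}^{s^{*}}\right)$; by Theorem~\ref{Th2_6}, a database of $T$ samples requires $T$ times this amount.
\begin{verbatim}
import numpy as np

def q_min(k, s, eps):
    # Distribution of U_min^s: first k-s masses equal 1/t and
    # the remaining s masses equal e^eps/t, with t = s*e^eps + k - s
    t = s * np.exp(eps) + k - s
    return np.concatenate([np.full(k - s, 1.0 / t),
                           np.full(s, np.exp(eps) / t)])

def entropy_bits(q):
    return -np.sum(q * np.log2(q))

def min_key_entropy(k, eps, T=1):
    # l = k*(e^eps*(eps-1)+1)/(e^eps-1)^2; s* is in {floor(l), ceil(l)}
    l = k * (np.exp(eps) * (eps - 1) + 1) / (np.exp(eps) - 1) ** 2
    candidates = {int(np.floor(l)), int(np.ceil(l))}
    candidates = {min(max(s, 1), k) for s in candidates}  # clamp to [1, k]
    return T * min(entropy_bits(q_min(k, s, eps)) for s in candidates)

print(min_key_entropy(k=16, eps=1.0))        # per-sample lower bound
print(min_key_entropy(k=16, eps=1.0, T=5))   # scales linearly in T
\end{verbatim}
Consistent with the discussion after Theorem~\ref{Th2_4}, small $\epsilon$ pushes $\mathbf{q}_{\min}^{s}$ toward the uniform distribution (more randomness needed), while large $\epsilon$ pushes it toward the skewed randomized-response key.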
\begin{remark} Observe that Theorem~\ref{Th2_6} is applicable in an $n$-user setting (by setting $T=n$), where user $i$ has a single sample $X^{(i)}$, and all users have access to a shared random key $U$. Hence, shared randomness among users does not help in reducing the overall required amount of randomness. \end{remark} \section{Multi-level Privacy (Proof of Theorem~\ref{Th2_3})} \label{Privlv} This section proves Theorem~\ref{Th2_3} (see page~\pageref{Th2_3}) by establishing a new technique that uses a smaller amount of randomness than the trivial scheme mentioned in Section~\ref{multilevel-privacy} while achieving the minimum risk estimation for each analyst. Our proposed mechanism for multi-level privacy (where $\epsilon_1>\hdots>\epsilon_d$) is a cascading mechanism, where in each step we add a random key to the output of the previous step (see Figure~\ref{Fig5_1}, for example). The common output of the mechanism is accessible by all analysts. However, each analyst obtains a different privacy level depending on the amount of randomness shared with it. Thus, each analyst uses the shared random key to partially undo the randomization of the common output to get less privacy and higher utility. Let $z_j=\frac{1}{e^{\epsilon_j}+1}$ for $j\in\left[d\right]$. For $i\in\left[n\right]$, let $\lbrace U_{i}^{1},\ldots, U_i^{d}\rbrace$ be a set of $d$ Bernoulli random variables, where $U_{i}^{j}$ has a parameter $q_j=\text{Pr}\left[U_i^{j}=1\right]$ given by \begin{equation}~\label{eqn4_4} q_j=\left\{\begin{array}{ll} z_j&\text{if}\ j=1,\\ \frac{z_{j}-z_{j-1}}{1-2z_{j-1}}&\text{if}\ j>1.\\ \end{array}\right. \end{equation} We use the Hadamard response proposed in~\cite{acharya2019communication} as the first step of our mechanism (see Section~\ref{LDP-AC} for more details). Let $H_{K}$ be the $K\times K$ Hadamard matrix. Let $B^{l}$ be the set of row indices that have $1$ in the $l$-th column of the Hadamard matrix $H_{K}$ for $l\in\left[K\right]$. We divide the users into $K$ sets ($\mathcal{US}_1,\ldots,\mathcal{US}_K$), where each set contains $n/K$ users. We assign a set $B_i=B^{l}$ representing a subset of inputs for each user $i\in\mathcal{US}_l$. Then, user $i$ generates a virtual output $Y_{i}^{1}\in\lbrace 0,1\rbrace$ as follows \begin{equation}~\label{eqn4_2} Y_{i}^{1}=\left\{\begin{array}{ll} 1& \text{if}\ \left(X_i\in B_i\ \text{and}\ U_{i}^{1}=0\right)\ \text{or}\ \left(X_i\notin B_i\ \text{and}\ U_{i}^{1}=1\right),\\ 0& \text{otherwise}.\\ \end{array}\right. \end{equation} Observe that the representation of $Y_{i}^{1}$ in~\eqref{eqn4_2} is exactly the same as in~\eqref{eqn3_9} by setting $q=\text{Pr}\left[U_{i}^{1}=0\right]=\frac{e^{\epsilon_1}}{e^{\epsilon_1}+1}$. We represent $Y_{i}^{1}$ in this form to explicitly show the random keys used to design the Hadamard scheme presented in Section~\ref{LDP-AC}. For $j\geq 2$, let $Y_{i}^{j}$ be the virtual output generated by user $i$ for the $j$th analyst, which is given by \begin{equation} Y_{i}^{j}=Y_{i}^{1}\oplus U_{i}^{2}\oplus \ldots\oplus U_{i}^{j}, \end{equation} where $\oplus$ denotes the bitwise XOR. Hence, each level adds randomization on top of the first (Hadamard) step. User $i$ transmits the output $Y_{i}^{d}$ to all analysts. The private scheme is shown in Figure~\ref{Fig5_1}.
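To make the cascade concrete, the following is a minimal sketch in Python (our own illustration; all function and variable names are ours and not part of the formal construction). It draws the keys $U_i^{j}$ with the parameters $q_j$ of \eqref{eqn4_4}, forms the virtual outputs $Y_i^{j}$, and checks that XOR-ing the key $L_i^{j}=U_i^{d}\oplus\ldots\oplus U_i^{j+1}$ (which, as described below, user $i$ sends to analyst $j$) into the public output $Y_i^{d}$ recovers $Y_i^{j}$.
\begin{verbatim}
import numpy as np

def key_params(eps_list):
    """Bernoulli parameters q_j of (eqn4_4), with z_j = 1/(e^{eps_j} + 1)."""
    z = [1.0 / (np.exp(e) + 1.0) for e in eps_list]
    q = [z[0]]
    for j in range(1, len(z)):
        q.append((z[j] - z[j - 1]) / (1.0 - 2.0 * z[j - 1]))
    return q

def cascade(x_in_B, eps_list, rng):
    """One user's cascade: virtual outputs Y^1..Y^d and keys L^1..L^d."""
    d = len(eps_list)
    U = [int(rng.random() < qj) for qj in key_params(eps_list)]
    Y = [int(x_in_B) ^ U[0]]          # Y^1 per (eqn4_2): membership bit xor U^1
    for j in range(1, d):
        Y.append(Y[-1] ^ U[j])        # Y^j = Y^1 xor U^2 xor ... xor U^j
    # Key for analyst j (1-indexed) undoes the later stages:
    # L^j = U^{j+1} xor ... xor U^d (an empty XOR is 0).
    L = [0] * d
    for j in range(d - 1, -1, -1):
        L[j] = (L[j + 1] ^ U[j + 1]) if j < d - 1 else 0
    return Y, L

rng = np.random.default_rng(0)
eps_levels = [2.0, 1.0, 0.5]          # eps_1 > eps_2 > eps_3
Y, L = cascade(x_in_B=True, eps_list=eps_levels, rng=rng)
# Each analyst j recovers its eps_j-LDP view from the public output Y^d:
assert all(Y[-1] ^ L[j] == Y[j] for j in range(len(eps_levels)))
\end{verbatim}
The final assertion is exactly the unmasking step $Y_{i}^{j}=Y_{i}^{d}\oplus L_{i}^{j}$ used by the analysts in the sequel.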
\begin{figure}[t] \centerline{\includegraphics[scale=0.38,trim={0cm 10cm 13cm 0cm},clip]{Multiple_levels.pdf}} \caption{Multiple privacy levels mechanism.}\vspace{-7mm} ~\label{Fig5_1} \end{figure} \begin{lemma}~\label{lemm4_1} The $j$th output of user $i$ satisfies $\epsilon_j$-LDP, i.e., \begin{equation}~\label{eqn4_1} \sup_{y_i^{j}\in\lbrace 0,1\rbrace}\sup_{x_i,x_i^{\prime}\in\mathcal{X}}\frac{\text{Pr}\left[Y_{i}^{j}=y_{i}^{j}|X_i=x_i\right]}{\text{Pr}\left[Y_{i}^{j}=y_{i}^{j}|X_i=x_i^{\prime}\right]}\leq e^{\epsilon_j}. \end{equation} \end{lemma} We prove Lemma~\ref{lemm4_1} in Appendix~\ref{AppD}. Note that each analyst has access to the public outputs $\lbrace Y_{1}^{d},\ldots,Y_{n}^{d}\rbrace$, which are $\epsilon_d$-LDP. Additionally, user $i$ sends a random key $L_{i}^{j}=U_{i}^{d}\oplus\ldots\oplus U_{i}^{j+1}$ to the $j$th analyst. Using the random keys $\lbrace L_{1}^{j},\ldots,L_{n}^{j}\rbrace$, the $j$th analyst can construct the private outputs $\lbrace Y_{1}^{j},\ldots,Y_{n}^{j}\rbrace$, which are $\epsilon_j$-LDP, where $Y_{i}^{j}=Y_{i}^{d}\oplus L_{i}^{j}$. Observe that the privatized output $Y_{i}^{j}$ has a conditional distribution given by \begin{equation} Q_{i}\left(Y_{i}^{j}|X_i\right)=\left\{\begin{array}{ll} \frac{e^{\epsilon_j}}{e^{\epsilon_j}+1}& \text{if}\ X_i\in B_i,\\ \frac{1}{e^{\epsilon_j}+1}& \text{if}\ X_i\not\in B_i,\\ \end{array} \right. \end{equation} which coincides with the private mechanism given in~\eqref{eqn3_9} with $q=\frac{e^{\epsilon_j}}{e^{\epsilon_j}+1}$. From Lemma~\ref{lemm3_3}, for privacy level $\epsilon_j=\mathcal{O}\left(1\right)$, we get that \begin{equation} r_{\epsilon,R,n,k}^{\ell_{2}^{2},j}=\mathcal{O}\left(\frac{k}{n\epsilon_{j}^{2}}\right) \end{equation} for analyst $j$, which matches the lower bound stated in Corollary~\ref{Cor2_1}. Observe that the total amount of randomness per user in the proposed mechanism is given by \begin{equation} \begin{aligned} R_{\text{total}}^{\text{proposed}}&=\sum_{j=1}^{d} H\left(U^{j}\right)=\sum_{j=1}^{d} H_2\left(q_{j}\right)\leq R_{\text{total}}^{\text{trivial}}, \end{aligned} \end{equation} where $q_j$ is defined in~\eqref{eqn4_4}. Note that the last inequality is strict for $d>1$, which follows from the argument presented in Section~\ref{multilevel-privacy}. This completes the proof of Theorem~\ref{Th2_3}. \section{Numerical Evaluation} \label{Num} In this section, we numerically validate our theoretical results through simulation. \begin{figure}[t!] \centering \begin{subfigure}[t]{0.49\linewidth} \centerline{\includegraphics[scale=0.32]{Mut_level_eps_1.pdf}} \caption{Comparison between our privacy scheme proposed in Theorem~\ref{Th2_3} and the trivial scheme for two privacy levels $\epsilon_1=1$ and $\epsilon_2=[0.01:1]$.} ~\label{Fig4_2a} \end{subfigure} \hfill \begin{subfigure}[t]{0.49\linewidth} \centerline{\includegraphics[scale=0.32]{Mut_level_eps_2_d.pdf}} \caption{Comparison between our privacy scheme proposed in Theorem~\ref{Th2_3} and the trivial scheme for $d$ privacy levels $\epsilon_1=2$ and $\epsilon_j=\epsilon_1-0.1j$ for $j\in\left[2:d\right]$.} ~\label{Fig4_2b} \end{subfigure} \caption{Multi-level privacy}\vspace{-6mm} \end{figure} \textbf{Single-level privacy:} In this part, we investigate the performance of the estimator presented in Theorem~\ref{Th2_2} for single-level privacy. Each point is obtained by averaging over $20$ runs.
In Figure~\ref{Fig4_1}, we plot the estimation error for the $\ell=\ell_1$ loss function ($\|\mathbf{p}-\hat{\mathbf{p}}\left(Y^{n}\right)\|_1$) for estimating a discrete distribution $\mathbf{p}\in\Delta_{k}$. The input size is $k=1000$, the number of users is $n\in\left[10^{5}:10^{6}\right]$, and the privacy level is $\epsilon=1$ for two values of randomness $R\in\lbrace 0.7,1\rbrace$ bits per user. The input samples are drawn from a Geometric distribution with parameter $q=0.8$ ($\text{Geo}\left(0.8\right)$), in which $p_i= C q^{i-1}\left(1-q\right)$ for $i\in\left[k\right]$, where $C$ is a normalization term. Figure~\ref{Fig4_1} shows that the number of users required to achieve a certain estimation error increases as the amount of randomness per user decreases. For instance, to achieve an $\ell_1$-error equal to $1.4$, we need $n\approx 150,000$ users if $R=1$ bit per user, while we need $n\approx 850,000$ users if $R=0.7$ bits per user. Figure~\ref{Fig_2} depicts the $\ell_1$ estimation error as a function of the privacy level $\epsilon$ for input size $k=1000$ and number of users $n=500000$ for two different values of randomness $R\in\lbrace 1,0.9\rbrace$ bits per user. As we discussed in Theorem~\ref{Th2_1}, for each privacy level $\epsilon$, there is a critical amount of randomness $R=H_2\left(e^{\epsilon}/\left(e^{\epsilon}+1\right)\right)$. When each user has $R<H_2\left(e^{\epsilon}/\left(e^{\epsilon}+1\right)\right)$ bits of randomness, the $\ell_1$ estimation loss increases as the randomness $R$ decreases. When each user has $R\geq H_2\left(e^{\epsilon}/\left(e^{\epsilon}+1\right)\right)$ bits of randomness, however, the estimation error is not affected by the amount of randomness $R$. In Figure~\ref{Fig_2}, we find that the $\ell_1$ error depends on the randomness $R$ for all $\epsilon<0.8$, since we have $R=0.9<H_2\left(e^{\epsilon}/\left(e^{\epsilon}+1\right)\right)$ for all $\epsilon<0.8$. \textbf{Multi-level privacy:} Figure~\ref{Fig4_2a} and Figure~\ref{Fig4_2b} compare our proposed scheme in Theorem~\ref{Th2_3} with the trivial scheme with respect to the total amount of randomness used. In the trivial scheme, each user generates $d$ different privatized samples, one for each analyst. In Figure~\ref{Fig4_2a}, we consider two privacy levels $\epsilon_1=1$ and $\epsilon_2\leq\epsilon_1$. We find that when $\epsilon_1-\epsilon_2$ is small, the trivial scheme requires approximately twice the total amount of randomness used in our scheme. However, when $\epsilon_1- \epsilon_2$ is large, our scheme and the trivial scheme use similar amounts of randomness. In Figure~\ref{Fig4_2b}, we consider $d\in\left[1:10\right]$, $\epsilon_1=2$, and $\epsilon_j=\epsilon_1 - 0.1 j$ for $j\in\lbrace 2,\ldots,d\rbrace$. We find that the gap between the amount of randomness used in our scheme and the trivial scheme increases with $d$. \textbf{Private-recoverability:} Observe that each user needs $\log\left(k\right)$ bits to store her input sample $X\in\left[ k\right]$, since she does not know the distribution $X\sim\mathbf{p}$. In private-recoverability, we can recover $X$ from observing $Y$ and $U$; hence, we only need to store $U$. Figure~\ref{Fig4_3} plots the number of bits required to store $U$ (see Theorem~\ref{Th2_4}) as a function of the privacy level $\epsilon$ for different values of input size $k\in\lbrace 10,100,1000\rbrace$. The black lines represent the $\log\left(k\right)$ bits required to store $X$ (an additional secure copy).
Note that the number of bits needed to store $U$ is strictly smaller than $\log\left(k\right)$ for $\epsilon>0$, and decreases as the privacy level $\epsilon$ increases. Observe that the gain in Figure~\ref{Fig4_3} is per user; hence, the total storage savings would be considerable when the number of users is large and $\epsilon>0$. For example, for $\epsilon=5$ and alphabet sizes $k=2,4,10$, we get gains in storage efficiency $\frac{\log\left(k\right)-H\left(U\right)}{\log\left(k\right)}$ of $94.2\%$, $91.4\%$, and $85\%$, respectively. \begin{figure}[t!] \centerline{\includegraphics[scale=0.35]{storage_eps.pdf}} \caption{Comparison between storage required for $X$ and a random key $U$, for input alphabet sizes $k\in\lbrace 10,100,1000\rbrace$. The black lines represent $\log\left(k\right)$.} ~\label{Fig4_3}\vspace{-6mm} \end{figure} \section{Preliminaries and Problem Formulation} \label{Prelm} \textbf{Notation:} We use $\left[k\right]$ to denote the set $\lbrace 1,\ldots,k\rbrace$ of integers. We use uppercase letters $X,Y$, etc., to denote random variables, and lowercase letters $x,y$, etc., to denote their realizations. For any two distributions $\textbf{p}$ and $\textbf{q}$ supported over a set $\mathcal{X}$, let $\|\textbf{p}-\textbf{q}\|_{\text{TV}} = \sup_{\mathcal{A}\subseteq\mathcal{X}}|\textbf{p}(\mathcal{A})-\textbf{q}(\mathcal{A})|$ be the total variation distance between $\textbf{p}$ and $\textbf{q}$. We use $\oplus$ to denote the XOR operation. For $p\in[0,1]$, we use $H_2\left(p\right)$ to denote the binary entropy function defined by $H_2\left(p\right)=-p\log\left(p\right)-\left(1-p\right)\log\left(1-p\right)$, and $H\left(X\right)$ to denote the entropy of the random variable $X$. Also, we use $H\left(\textbf{p}\right)$ to denote the entropy of a random variable $X$ drawn from a distribution $\textbf{p}$. \subsection{Local Differential Privacy (LDP)} Let $\mathcal{X}\triangleq\lbrace 1,\ldots,k\rbrace$ be an input alphabet and $\mathcal{Y}\triangleq\lbrace 1,\ldots,m\rbrace$ be an output alphabet, of sizes $|\mathcal{X}|=k$ and $|\mathcal{Y}|=m$, respectively, that are not required to be the same. A private randomization mechanism $Q$ is a conditional distribution that takes an input $X\in\mathcal{X}$ and generates a privatized output $Y\in\mathcal{Y}$. $Q$ is said to satisfy $\epsilon$-local differential privacy ($\epsilon$-LDP)~\cite{duchi2013local} if for every pair of inputs $x,x^{\prime}\in\mathcal{X}$, we have \begin{equation}~\label{eqn1_3} \sup_{y\in\mathcal{Y}}\frac{Q\left(y|x\right)}{Q\left(y|x^{\prime}\right)}\leq \exp\left(\epsilon\right), \end{equation} where $Q\left(y|x\right)=\text{Pr}\left[Y=y|X=x\right]$ and $\epsilon$ captures the privacy level. For small values of $\epsilon$, the adversary cannot infer whether the input was $X=x$ or $X=x^{\prime}$. Hence, a smaller privacy level $\epsilon$ implies higher privacy.
\subsection{Randomness in LDP Mechanisms} A private mechanism $Q$ with input $X\in\mathcal{X}$ and output $Y\in\mathcal{Y}$ is said to satisfy $\left(\epsilon,R\right)$-LDP, if for every pair of inputs $x,x^{\prime}\in\mathcal{X}$, we have \begin{equation} \begin{aligned} &\sup_{y\in\mathcal{Y}}\frac{Q\left(y|x\right)}{Q\left(y|x^{\prime}\right)}\leq \exp\left(\epsilon\right),\ \text{and}\\ &H\left(Y|X=x\right)\leq R\quad \forall x\in\mathcal{X}, \end{aligned} \end{equation} where $H\left(Y|X=x\right)=\sum_{y\in\mathcal{Y}}Q\left(y|x\right)\log\left(\frac{1}{Q\left(y|x\right)}\right)$ denotes the entropy of the random output $Y$ conditioned on the input $X=x$. Note that an $\left(\epsilon,R\right)$-LDP mechanism is an $\epsilon$-LDP mechanism that requires at most $R$ bits of randomness to be designed. Suppose that a random key $U$ with $H\left(U\right)\leq R$ is used to design an $\left(\epsilon,R\right)$-LDP mechanism $Q$. We consider $U$ to be a random variable that takes values from a discrete set $\mathcal{U}=\lbrace u_1,\ldots,u_l\rbrace$ according to a distribution $\mathbf{q}=\left[q_1,\ldots,q_l\right]$, where $q_u=\text{Pr}\left[U=u\right]$ for $u\in\mathcal{U}$. We assume that $\mathcal{U}$ is a discrete set, since we focus on finite randomness. Let $\mathcal{U}_{yx}\subset\mathcal{U}$ be the subset of key values such that input $X=x$ is mapped to $Y=y$ when $u\in\mathcal{U}_{yx}$. The private mechanism $Q$ can be represented as \begin{equation}\label{eqn2_4} Q\left(y|x\right)=\sum_{u\in\mathcal{U}_{yx}}q_u. \end{equation} Note that the output $Y$ is a function of $\left(X,U\right)$. Therefore, we have $\mathcal{U}_{y^{\prime}x}\cap \mathcal{U}_{yx}=\emptyset$ for $y^{\prime}\neq y$, since, for a fixed input, each key value produces exactly one output. In addition, if we want \eqref{eqn2_4} to satisfy the privacy condition \eqref{eqn1_3}, we also have\footnote{Otherwise we could distinguish between inputs, causing $\epsilon\rightarrow\infty$.} $\bigcup_{y\in\mathcal{Y}}\mathcal{U}_{yx}=\mathcal{U}$ for each $x\in\mathcal{X}$. We will leverage this representation of randomness in LDP mechanisms to design multi-level privacy mechanisms. Figure~\ref{Fig2_1} shows an example of designing a private mechanism with binary inputs $\mathcal{X}=\lbrace 0,1\rbrace$, binary random keys $\mathcal{U}=\lbrace 0,1\rbrace$, and binary outputs $\mathcal{Y}=\lbrace 0,1\rbrace$. In this example, we can represent the output of the mechanism as a function of $\left(X,U\right)$ by $Y=X\oplus U$, where $\oplus$ denotes the XOR operation. If the random key $U$ is drawn from a distribution $\mathbf{q}=\left[\frac{e^{\epsilon}}{e^{\epsilon}+1},\frac{1}{e^{\epsilon}+1}\right]$, then it is easy to verify that the mechanism is $\epsilon$-LDP.
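To make this example concrete, the following is a minimal sketch in Python (our own illustration, not part of the formal development) that builds $Q\left(y|x\right)$ from the key sets $\mathcal{U}_{yx}$ as in \eqref{eqn2_4} and verifies the $\epsilon$-LDP condition \eqref{eqn1_3}.
\begin{verbatim}
import numpy as np

eps = 1.0
# Key distribution q = [Pr(U=0), Pr(U=1)] = [e^eps, 1] / (e^eps + 1)
q = np.array([np.exp(eps), 1.0]) / (np.exp(eps) + 1.0)

# Q(y|x) via (eqn2_4): U_{yx} = {x xor y}, so Q(y|x) = q[x ^ y]
Q = np.array([[q[x ^ y] for y in (0, 1)] for x in (0, 1)])  # rows: x, cols: y

# Verify the eps-LDP condition (eqn1_3): max_{y,x,x'} Q(y|x)/Q(y|x') <= e^eps
worst = max(Q[x, y] / Q[xp, y] for x in (0, 1) for xp in (0, 1) for y in (0, 1))
assert worst <= np.exp(eps) + 1e-12
print(f"worst-case ratio = {worst:.4f}, e^eps = {np.exp(eps):.4f}")
\end{verbatim}
The worst-case ratio equals $e^{\epsilon}$ exactly, i.e., this key distribution uses the privacy budget tightly.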
\begin{figure}[t] \centerline{\includegraphics[scale=0.4,trim={0.5cm 8.9cm 9cm 0.5cm},clip]{Rand_Ex.pdf}} \caption{An example of designing an $\epsilon$-LDP mechanism using a private key: (left) representing the output $Y$ of the mechanism $Q$ as a function of the input $X$ and the private key $U$, (right) representing the mechanism $Q$ as a probabilistic mapping from the input $X$ to the output $Y$ depending on the private key $U$.} \label{Fig2_1} \end{figure} \subsection{Problem Formulation} We consider $n$ users who observe i.i.d.~inputs $X_1, X_2,\hdots, X_n$ (user $i$ observes input $X_i$), drawn from an unknown discrete distribution $\mathbf{p}\in \Delta_k$, where $\Delta_k=\left\{ \mathbf{p}\in\mathbb{R}^{k}| \sum_{j=1}^{k}p_j=1, p_j\geq 0,\ \forall j\in\left[k\right]\right\}$ denotes the probability simplex over $\mathcal{X}$. The $i$'th user has a random key $U_i$ with $H\left(U_i\right)\leq R$; we assume that $U^{n}=\left[U_1,\ldots,U_n\right]$ are independent random variables, unless otherwise stated. The $i$'th user generates (and publicly shares) an output $Y_i$, using an $\left(\epsilon,R\right)$-LDP mechanism $Q_i$ and her random key $U_i$. The output $Y_i$ has a marginal distribution given by \begin{equation} \mathbf{M}_{i}\left(y|\mathbf{p}\right)=\sum_{x\in\mathcal{X}}Q_i\left(y|x\right)p_x\qquad\forall y\in\mathcal{Y}_i, \end{equation} where $\mathcal{X}$ and $\mathcal{Y}_i$ are the input and output alphabets. We also have $d$ analysts who want to use the users' public outputs $Y^{n}=\left[Y_1,\ldots,Y_n\right]$ to estimate $\mathbf{p}$, each at a different level of privacy $\epsilon_1 > \ldots > \epsilon_d$. The system model is shown in Figure~\ref{Fig1_1}. \textbf{\em Risk Minimization:} For simplicity of exposition, consider for now a single analyst, and let $\mathbf{\hat{p}}=\left[\hat{p}_1,\cdots,\hat{p}_{k}\right]$ denote the analyst's estimator (this is a function $\mathbf{\hat{p}}:\mathcal{Y}^{n}\to \Delta_k$ that maps the outputs $Y^{n}$ to a distribution in the simplex $\Delta_k$)\footnote{Observe that it is sufficient to consider a deterministic estimator $\hat{\mathbf{p}}$, since for any randomized estimator, there exists a deterministic estimator that dominates the performance of the randomized one.}. For given private mechanisms $Q^{n}=\left[ Q_1,\ldots,Q_n\right]$, the estimator $\hat{\mathbf{p}}$ is obtained by solving the problem \begin{equation} r^{\ell}_{\epsilon,R,n,k}\left(Q^{n}\right)=\inf_{\hat{\mathbf{p}}}\sup_{\mathbf{p}\in\Delta_k}\mathbb{E}\left[\ell\left(\hat{\mathbf{p}}\left(Y^{n}\right),\mathbf{p}\right)\right], \end{equation} where $r^{\ell}_{\epsilon,R,n,k}$ is the minimax risk, the expectation is taken over the randomness in the outputs $Y^{n}=\left[Y_1,\ldots,Y_n\right]$ with $Y_i\sim\mathbf{M}_i$, and $\ell:\mathbb{R}^{k}\times\mathbb{R}^{k}\to\mathbb{R}_{+}$ is a loss function that measures the distance between two distributions in $\Delta_k$. Unless otherwise stated, we adopt as loss functions the 1-norm, namely $\ell=\ell_1$, and the squared 2-norm, namely $\ell=\ell_2^{2}$.
Our task is to design private mechanisms $ Q_1,\hdots,Q_n$ that minimize the minimax risk estimation, namely, \begin{equation}~\label{eqn1_1} \begin{aligned} r^{\ell}_{\epsilon,R,n,k}&=\inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace}\ r^{\ell}_{\epsilon,R,n,k}\left(Q^{n}\right)\\ &=\inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \inf_{\hat{\mathbf{p}}}\sup_{\mathbf{p}\in\Delta_k}\mathbb{E}\left[\ell\left(\hat{\mathbf{p}}\left(Y^{n}\right),\mathbf{p}\right)\right], \end{aligned} \end{equation} where $\mathcal{Q}_{\left(\epsilon,R\right)}$ denotes the set of mechanisms that satisfy $\left(\epsilon,R\right)$-LDP. Observe that when $R\to\infty$, the problem~\eqref{eqn1_1} reduces to the standard LDP distribution estimation studied previously in~\cite{duchi2013local, kairouz2016discrete,Ye2018,acharya2018hadamard}. The difference in the formulation in~\eqref{eqn1_1} is the randomness constraint. \textbf{\em LDP heavy hitter estimation:} In heavy hitter estimation, the input samples $X^{n}=[X_1,\hdots,X_n]$ do not have an associated distribution. Furthermore, the analyst is interested in estimating the frequency of each element $x\in\mathcal{X}$ with the infinity norm as the loss function (i.e., $\ell=\ell_{\infty}$). The frequency of each element $x\in\mathcal{X}$ is defined by $f\left(x\right)=\frac{\sum_{i=1}^{n}\mathbbm{1}\left(X_i=x\right)}{n}$. We then want to calculate \begin{equation}~\label{eqn1_1_1} r^{\ell_{\infty}}_{hh,\epsilon,R,n,k}= \inf_{\lbrace Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}\rbrace} \inf_{\hat{\mathbf{p}}}\sup_{X^{n}\in\mathcal{X}^{n}}\mathbb{E}\left[\max_{x\in\mathcal{X}}|\hat{p}_x\left(Y^{n}\right)-f\left(x\right)|\right], \end{equation} where the expectation is taken over the randomness in the outputs $Y^{n}=\left[Y_1,\ldots,Y_n\right]$ and $\hat{\mathbf{p}}$ denotes the estimator of the analyst. Note, again, that in this case we do not make any distributional assumptions on $X_1,\hdots,X_n$. \textbf{\em Multi-level privacy:} Consider now the general case of $d$ analysts, each operating at a different level of privacy $\epsilon_1 > \ldots > \epsilon_d$. All analysts observe the users' public outputs $Y^{n}$; additionally, analyst $j$ may also observe some side information on the user randomness. The question we ask is: what is the minimum amount of randomness per user required to maintain the privacy of each user while achieving the minimum risk estimation for each analyst? \textbf{\em Sequence of distribution (or heavy hitter) estimation:} We assume that each user $i$ has a random key $U_i$ to preserve the privacy of a sequence of $T$ independent samples $X_{i}^{\left(1\right)},\ldots,X_{i}^{\left(T\right)}$, where the $t$'th samples for $t\in\left[T\right]$ at all users are drawn i.i.d.\ from an unknown distribution $\mathbf{p}^{\left(t\right)}$.\footnote{As mentioned earlier, for heavy hitter estimation, the samples $X_{i}^{\left(1\right)},\ldots,X_{i}^{\left(T\right)}$ do not have an associated distribution.} At time $t$, the $i$'th user generates an output $Y_{i}^{(t)}$ that may be a function of the random key $U_i$ and all input samples $\lbrace X_{i}^{\left(m\right)}\rbrace_{m=1}^{t}$. Each of the $d$ analysts uses the outputs $Y_{i}^{(t)}, i\in[n], t\in[T]$ to estimate the $T$ distributions $\mathbf{p}^{\left(1\right)},\ldots,\mathbf{p}^{\left(T\right)}$ (or to estimate the heavy hitters).
A private mechanism $Q$ with a sequence of inputs $X^{T}=\left( X^{\left(1\right)},\ldots,X^{\left(T\right)}\right)$ and a sequence of outputs $Y^{T}=\left( Y^{\left(1\right)},\ldots,Y^{\left(T\right)}\right)$ is said to satisfy $\epsilon$-DP if, for every pair of neighboring databases $\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}^{T}$, we have \begin{equation}\label{eqn1_2} \sup_{\mathbf{y}\in\mathcal{Y}^{T}}\frac{Q\left(\mathbf{y}|\mathbf{x}\right)}{Q\left(\mathbf{y}|\mathbf{x}^{\prime}\right)}\leq \exp\left(\epsilon\right), \end{equation} where $Q\left(\mathbf{y}|\mathbf{x}\right)=\text{Pr}\left[Y^{T}=\mathbf{y}|X^{T}=\mathbf{x}\right]$; and we say that two databases $\mathbf{x}=\left(x^{\left(1\right)},\ldots,x^{\left(T\right)}\right)$ and $\mathbf{x}^{\prime}=\left(x^{\prime\left(1\right)},\ldots,x^{\prime\left(T\right)}\right)\in\mathcal{X}^{T}$ are neighboring if there exists an index $t\in\left[T\right]$ such that $x^{\left(t\right)}\neq x^{\prime\left(t\right)}$ and $x^{\left(l\right)}= x^{\prime\left(l\right)}$ for $l\neq t$. Observe that when $T=1$, the definition of $\epsilon$-DP in~\eqref{eqn1_2} coincides with the definition of $\epsilon$-LDP in~\eqref{eqn1_3}. We are interested in the question: Is there a private mechanism that uses a smaller amount of randomness than $T$ times the amount of randomness used for a single data sample? In other words, can we perhaps reuse the randomness over time while preserving privacy? \section{Private Recoverability (Proofs of Theorem~\ref{Th2_4} and Theorem~\ref{Th2_5})} \label{Recov} In this section, we prove Theorem~\ref{Th2_4} (see page~\pageref{Th2_4}) and Theorem~\ref{Th2_5} (see page~\pageref{Th2_5}). \subsection{Proof of Theorem~\ref{Th2_4}}~\label{Recov-A} This section proves the necessary and sufficient conditions on the random key $U$ and the privatized output $Y$ for designing an $\epsilon$-LDP-Rec mechanism. We first prove that $|\mathcal{Y}|\geq |\mathcal{X}|$ is necessary to recover $X$ from $Y$ and $U$. We then prove that each input $x\in\mathcal{X}$ should be mapped with non-zero probability to every output $y\in\mathcal{Y}$; hence, we get $|\mathcal{U}|\geq |\mathcal{Y}|$, since each input $x\in\mathcal{X}$ can be mapped with non-zero probability to at most $|\mathcal{U}|$ outputs. The main part of our proof is bounding the randomness of the key $U$ in the second condition. We first prove in Lemma~\ref{lemm5_2} that for any $\epsilon$-LDP-Rec mechanism designed using a random key of size greater than the input size, there exists another $\epsilon$-LDP-Rec mechanism designed using a random key of size equal to the input size with the same or a smaller amount of randomness. Thus, we can assume that $|\mathcal{U}|=|\mathcal{X}|$ and minimize the entropy of the random key $U$ over all possible distributions under the $\epsilon$-LDP constraint. Since entropy is a concave function of the distribution, we get a non-convex problem. However, we can obtain an exact solution for the problem, since the privacy constraints form a closed polytope. For the sufficiency part, we prove in Lemma~\ref{lemm5_1} that we can construct an $\epsilon$-LDP-Rec mechanism using the random key $U_{\min}^{s^{*}}$ defined in Theorem~\ref{Th2_4} that satisfies the two necessary conditions. Before proceeding to the proof of Theorem~\ref{Th2_4}, we first present the following two lemmas, whose proofs are given in Appendix~\ref{AppE} and Appendix~\ref{AppF}, respectively.
\begin{lemma}~\label{lemm5_1} Given a random key $U\in\mathcal{U}$ of size $|\mathcal{U}|=k$ with a distribution $\mathbf{q}=\left[q_1,\ldots,q_k\right]$ such that $\frac{q_{\max}}{q_{\min}}\leq e^{\epsilon}$, where $q_{\max}=\max\limits_{j\in\left[k\right]}q_j$ and $q_{\min}=\min\limits_{j\in\left[k\right]}q_j$, there exists an $\epsilon$-LDP-Rec mechanism with input $X\in\left[k\right]$ and output $Y\in\left[k\right]$ designed using $U$. \end{lemma} This lemma shows that we can design an $\epsilon$-LDP-Rec mechanism with output size equal to the input size if we have a random key of size equal to the input size whose distribution satisfies $\frac{q_{\max}}{q_{\min}}\leq e^{\epsilon}$. \begin{lemma}~\label{lemm5_2} Suppose that an $\epsilon$-LDP-Rec mechanism with an input $X\in\left[k\right]$ and an output $Y\in\mathcal{Y}$ is designed using a random key $U\in\mathcal{U}$ with size $|\mathcal{U}|=m> k$. Then there exists an $\epsilon$-LDP-Rec mechanism with an input $X\in\left[k\right]$ and an output $Y\in\left[k\right]$ designed using a random key $U^{\prime}\in\left[k\right]$ such that $H\left(U\right)\geq H\left(U^{\prime}\right)$. \end{lemma} Now, we are ready to prove Theorem~\ref{Th2_4}. We prove the first necessary condition of Theorem~\ref{Th2_4} in two parts: we show $|\mathcal{Y}|\geq |\mathcal{X}|$ using the recoverability constraint and $|\mathcal{U}|\geq |\mathcal{Y}|$ using the privacy constraint. We prove these in Appendix~\ref{AppF-2}. From Lemma~\ref{lemm5_2} and the first necessary condition, we see that the $\epsilon$-LDP-Rec mechanism with the smallest amount of randomness is obtained when $|\mathcal{U}|=|\mathcal{Y}|=|\mathcal{X}|=k$. Hence, we restrict our attention to this case only. Let $U\in\left[k\right]$ be a random key having a distribution $\mathbf{q}=\left[q_1,\ldots,q_k\right]$. Without loss of generality, we assume that $q_1\leq q_2\leq \ldots\leq q_k$. Before we prove the necessity of the second condition, we claim that $q_k/q_1\leq e^{\epsilon}$; we prove this using both the privacy and recoverability constraints in Appendix~\ref{AppF-2}. Now, we are ready to prove the necessity of the second condition. Our objective is to find the minimum entropy of the random key $U$ with size $|\mathcal{U}|=k$ such that the private mechanism is $\epsilon$-LDP and the sample $X$ can be recovered from observing $Y$ and the random key $U$. The problem can be formulated as follows: \begin{align}~\label{eqn5_1} \min\limits_{\mathbf{q}=\left[q_1,\ldots,q_k\right]}&\ H\left(U\right)=-\sum_{j=1}^{k}q_j\log\left(q_j\right)\\ \text{s.t.}&\ 1\leq\frac{q_j}{q_1}\leq e^{\epsilon}\ \forall j\in\left[k\right]~\label{eqn5_2}\\ & \sum_{j=1}^{k}q_j=1, \,\,\, q_j\geq 0\ \forall j\in\left[k\right]~\label{eqn5_3} \end{align} where the constraint~\eqref{eqn5_2} is obtained from the claim proved above. Observe that the constraints~\eqref{eqn5_2}-\eqref{eqn5_3} form a closed polytope. Furthermore, the objective function~\eqref{eqn5_1} is a concave function of $\mathbf{q}$. Since we \textit{minimize} a concave function over a polytope, the global optimum is attained at one of the vertices of the polytope~\cite{rosen1983global}. Since we have a single equality constraint, a vertex has to satisfy at least $k-1$ inequality constraints with equality. Observe that none of the inequalities in~\eqref{eqn5_3} can be satisfied with equality, otherwise the privacy constraints in~\eqref{eqn5_2} would be violated.
Thus, the optimal vertex is of the form $$\mathbf{q}=\left[\underbrace{q_1,\ldots,q_1}_{k-s\ \text{terms}},\underbrace{e^{\epsilon}q_1,\ldots,e^{\epsilon}q_1}_{s\ \text{terms}}\right]$$ such that $s$ of the inequalities $\frac{q_j}{q_1}\leq e^{\epsilon}$ are satisfied with equality and $k-s-1$ of the inequalities $1\leq \frac{q_j}{q_1}$ are satisfied with equality, where $s$ is a variable to be optimized. Hence, the optimal distribution has the form \begin{equation}~\label{eqn5_8} \mathbf{q}^{s}=\left[\underbrace{\frac{1}{se^{\epsilon}+k-s},\ldots,\frac{1}{se^{\epsilon}+k-s}}_{k-s\ \text{terms}},\underbrace{\frac{e^{\epsilon}}{se^{\epsilon}+k-s},\ldots,\frac{e^{\epsilon}}{se^{\epsilon}+k-s}}_{s\ \text{terms}}\right], \end{equation} where $s$ is an integer parameter chosen to minimize the entropy as follows: \begin{equation}~\label{eqn5_4} \begin{aligned} s^{*}=&\arg\min_{s\in\left[k\right]} \sum_{j=1}^{k}q^{s}_{j}\log\left(\frac{1}{q^{s}_{j}}\right) =\arg\min_{s\in\left[k\right]}\ \log\left(s\left(e^{\epsilon}-1\right)+k\right)-\frac{s\epsilon e^{\epsilon} }{ s\left(e^{\epsilon}-1\right)+k}\\ &=\arg\min_{s\in\left[k\right]}\ \log\left(s\left(e^{\epsilon}-1\right)+k\right)+\frac{\epsilon e^{\epsilon} k}{\left(e^{\epsilon}-1\right)\left( s\left(e^{\epsilon}-1\right)+k\right)}-\frac{\epsilon e^{\epsilon}}{e^{\epsilon}-1}. \end{aligned} \end{equation} In order to solve the optimization problem~\eqref{eqn5_4}, we relax it by assuming that $s$ is a real number taking values in $[0,k]$. The optimization problem in~\eqref{eqn5_4} is non-convex for general values of $\epsilon$ and $k$. Thus, we find all local minima by setting the derivative to zero, and check them along with the boundary points $s\in\lbrace 0,k\rbrace$ to obtain the global minimum. At the boundary points $s\in\lbrace 0,k\rbrace$, the objective function is equal to $\log\left(k\right)$, which is the maximum entropy for any random variable with support size $k$; hence, the optimal solution is one of the local minima. We can verify that the objective function has only one local minimum by setting the derivative with respect to $s$ to zero. Thus, we get \begin{equation}~\label{eqn5_5} \tilde{s}=k\frac{e^{\epsilon}\left(\epsilon-1\right)+1 }{\left(e^{\epsilon}-1\right)^{2}}, \end{equation} where $\tilde{s}$ denotes the local minimum. Since the objective in~\eqref{eqn5_4} is a continuous function of the real variable $s$ with a single interior minimum at $\tilde{s}$, the optimal integer point is one of the two integers adjacent to the real value in~\eqref{eqn5_5}. As a result, we get $$H\left(U\right)\geq H\left(U_{\min}^{s^{*}}\right),$$ where $s^{*}=\arg\min\limits_{s\in\lbrace \ceil{l},\floor{l}\rbrace}H\left(U_{\min}^{s}\right)$ for $l=k\frac{e^{\epsilon}\left(\epsilon-1\right)+1 }{\left(e^{\epsilon}-1\right)^{2}}$, and $U_{\min}^{s}$ is a random variable having the distribution $\mathbf{q}^{s}$ given in~\eqref{eqn5_8}. Hence, the proof of the necessity part is completed. The sufficiency part is straightforward: the random key $U_{\min}^{s^{*}}$ defined in Theorem~\ref{Th2_4} satisfies the necessary conditions, and hence, by Lemma~\ref{lemm5_1}, we can construct an $\epsilon$-LDP-Rec mechanism using the random key $U_{\min}^{s^{*}}$. Thus, these conditions are sufficient.
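The closed form above is easy to check numerically. The following is a small sketch in Python (our own, purely illustrative) that evaluates $H\left(U_{\min}^{s^{*}}\right)$ by comparing $s\in\lbrace \ceil{l},\floor{l}\rbrace$, and verifies by random sampling that no feasible key distribution in the polytope \eqref{eqn5_2}-\eqref{eqn5_3} achieves a smaller entropy.
\begin{verbatim}
import numpy as np

def entropy_bits(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def q_vertex(k, eps, s):
    """The vertex distribution q^s of (eqn5_8)."""
    t = s * np.exp(eps) + (k - s)
    return np.concatenate([np.full(k - s, 1.0 / t), np.full(s, np.exp(eps) / t)])

def min_key_entropy(k, eps):
    l = k * (np.exp(eps) * (eps - 1) + 1) / (np.exp(eps) - 1) ** 2
    candidates = {int(np.ceil(l)), int(np.floor(l))} & set(range(1, k + 1))
    return min(entropy_bits(q_vertex(k, eps, s)) for s in candidates)

k, eps = 10, 2.0
h_min = min_key_entropy(k, eps)

# Sample feasible keys: fix q_1 as the minimum entry and draw ratios
# r_j = q_j / q_1 in [1, e^eps], so (eqn5_2) holds after normalization.
rng = np.random.default_rng(0)
for _ in range(10_000):
    r = np.concatenate(([1.0], rng.uniform(1.0, np.exp(eps), k - 1)))
    assert entropy_bits(r / r.sum()) >= h_min - 1e-9

print(f"k={k}, eps={eps}: H(U_min) = {h_min:.4f} bits <= log2(k) = {np.log2(k):.4f}")
\end{verbatim}
In this run, every sampled feasible key has entropy at least $H\left(U_{\min}^{s^{*}}\right)$, consistent with the concave-minimization argument above.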
\subsection{Proof of Theorem~\ref{Th2_5}}~\label{Recov-B} In this section, we show that the Hadamard response (HR) scheme proposed in~\cite{acharya2018hadamard} is, in fact, an $\epsilon$-LDP-Rec mechanism, i.e., it is possible to recover the input $X$ from the output $Y$ and the randomness $U$. Furthermore, we show that it is order-optimal from a randomness perspective\footnote{We mention that the Hadamard mechanism in~\cite{acharya2018hadamard} is symmetric with non-binary outputs, while the Hadamard response in~\cite{acharya2019communication} has only binary outputs.}. We briefly describe the HR mechanism, and then analyze its performance. We refer to~\cite{acharya2018hadamard} for more details. The HR mechanism is parameterized by two parameters: $K$, the support size of the private mechanism output ($\mathcal{Y}=\left[K\right]$), and a positive integer $s\leq K$. For each $x\in\mathcal{X}$, let $\mathcal{C}_x\subseteq\left[K\right]$ be a subset of outputs of size $|\mathcal{C}_x|=s$. The private mechanism for HR is defined by \begin{equation} Q\left(y|x\right)=\left\{\begin{array}{ll} \frac{e^{\epsilon}}{s e^{\epsilon}+K-s}& \text{if}\ y\in\mathcal{C}_x,\\ \frac{1}{s e^{\epsilon}+K-s}& \text{if}\ y\notin \mathcal{C}_x.\\ \end{array} \right. \end{equation} We can easily show that this is a symmetric mechanism, i.e., it can be represented using a private key $U$ of size $K$ that is independent of the mechanism input $X$. Furthermore, the distribution of the private key $U$ is given by $$\mathbf{q}^{\text{HR}}=\left[\underbrace{\frac{1}{se^{\epsilon}+K-s},\ldots,\frac{1}{se^{\epsilon}+K-s}}_{K-s\ \text{terms}},\underbrace{\frac{e^{\epsilon}}{se^{\epsilon}+K-s},\ldots,\frac{e^{\epsilon}}{se^{\epsilon}+K-s}}_{s\ \text{terms}}\right].$$ It remains to choose $K$, $s$, and $\lbrace \mathcal{C}_x\rbrace_{x\in\mathcal{X}}$ for fixed $\epsilon$ and input size $|\mathcal{X}|=k$. In~\cite[Section~$5$]{acharya2018hadamard}, the authors proposed $K=B\times b$ and $s=b/2$, where $B=2^{\ceil{\log_2\left(\min\lbrace e^{\epsilon},2k\rbrace\right)}-1}$, and $b=2^{\ceil{\log_2\left(\frac{k}{B}+1\right)}}$. Furthermore, each set $\mathcal{C}_x$ is a subset of row indices of the Hadamard matrix. These parameters are chosen such that $s$ is close to $\max\lbrace \frac{k}{e^{\epsilon}},1\rbrace$, and $K$ is approximately the smallest power of $2$ greater than $k$. The reason for using values that are powers of $2$ is to exploit the structure of the Hadamard matrix. In~\cite[Theorem~$7$]{acharya2018hadamard}, the authors proved that the minimax risk of HR for the $\ell_{2}^{2}$ loss function is bounded as \begin{equation} r_{\epsilon,n,k}^{\ell_{2}^{2}}\leq\left\{\begin{array}{ll} \mathcal{O}\left(\frac{k}{n\epsilon^{2}}\right)& \text{for}\ \epsilon <1,\\ \mathcal{O}\left(\frac{k}{ne^{\epsilon}}\right)& \text{for}\ 1\leq\epsilon \leq \log\left(k\right),\\ \mathcal{O}\left(\frac{1}{n}\right)& \text{for}\ \epsilon >\log\left(k\right), \end{array}\right. \end{equation} which is order-optimal for all privacy levels. In addition, the authors in~\cite{acharya2019communication} have shown that the HR scheme is order-optimal for heavy hitter estimation in the high privacy regime ($\epsilon=\mathcal{O}\left(1\right)$). In the following, we analyze the performance of HR with respect to the randomness of the private mechanism.
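As a concrete illustration, the following short sketch in Python (ours, not from~\cite{acharya2018hadamard}) computes the parameters $B$, $b$, $K$, and $s$ as above and evaluates the entropy of $\mathbf{q}^{\text{HR}}$ in bits, which can be compared directly with the minimum $H\left(U_{\min}^{s^{*}}\right)$ of Theorem~\ref{Th2_4}.
\begin{verbatim}
import numpy as np

def hr_params(k, eps):
    """Parameter choices of [acharya2018hadamard, Sec. 5]: K = B*b, s = b/2."""
    B = 2 ** (int(np.ceil(np.log2(min(np.exp(eps), 2 * k)))) - 1)
    b = 2 ** int(np.ceil(np.log2(k / B + 1)))
    return B, b, B * b, b // 2

def key_entropy_bits(K, s, eps):
    """Entropy of q^HR: K-s masses 1/t and s masses e^eps/t, t = s e^eps + K - s."""
    t = s * np.exp(eps) + (K - s)
    return (K - s) / t * np.log2(t) + s * np.exp(eps) / t * np.log2(t / np.exp(eps))

for k, eps in [(1000, 0.5), (1000, 1.0), (1000, 5.0)]:
    B, b, K, s = hr_params(k, eps)
    print(f"k={k}, eps={eps}: B={B}, b={b}, K={K}, s={s}, "
          f"H_HR(U)={key_entropy_bits(K, s, eps):.3f} bits, log2(k)={np.log2(k):.3f}")
\end{verbatim}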
Observe that for fixed $\epsilon$ and $k$, the parameters $K$, $B$, and $b$ of HR are bounded by $$\frac{\min\lbrace e^{\epsilon},2k\rbrace}{2}\leq B\leq \min\lbrace e^{\epsilon},2k\rbrace,\quad \frac{k}{\min\lbrace e^{\epsilon},2k\rbrace}\leq b\leq \frac{4k}{\min\lbrace e^{\epsilon},2k\rbrace},\quad k\leq K\leq 4k.$$ Hence, the entropy of the private key used to generate the HR private mechanism is bounded by \begin{equation}~\label{eqn5_6} \begin{aligned} H^{\text{HR}}\left(U\right)&=\log\left(\frac{b}{2} e^{\epsilon}+K-\frac{b}{2}\right)-\frac{\epsilon e^{\epsilon}\frac{b}{2}}{\frac{b}{2}e^{\epsilon}+K-\frac{b}{2}}\\ &\leq\log\left(\frac{2k}{\min\lbrace e^{\epsilon},2k\rbrace}\left(e^{\epsilon}-1\right)+4k\right)-\frac{\epsilon e^{\epsilon}}{e^{\epsilon}-1+2\min\lbrace e^{\epsilon},2k\rbrace}\\ &= \left\{\begin{array}{ll} \log\left(2k\frac{3e^{\epsilon}-1}{e^{\epsilon}}\right)-\frac{\epsilon e^{\epsilon}}{3e^{\epsilon}-1}& \text{if}\ \epsilon\leq\log\left(k\right)+1, \\ \log\left(e^{\epsilon}+4k-1\right)-\frac{\epsilon e^{\epsilon}}{e^{\epsilon}+4k-1}& \text{if}\ \epsilon>\log\left(k\right)+1. \end{array}\right. \end{aligned} \end{equation} The minimum entropy of the private key needed to generate an $\epsilon$-LDP-Rec mechanism is bounded by (Theorem~\ref{Th2_4}) \begin{equation}~\label{eqn5_7} \begin{aligned} H^{\min}\left(U\right)&= \log\left(s^{*}e^{\epsilon}+k-s^{*}\right)-\frac{\epsilon e^{\epsilon}s^{*}}{s^{*}e^{\epsilon}+k-s^{*}}\\ &\geq \left\{\begin{array}{ll} \log\left(k\left(\frac{\epsilon e^{\epsilon}}{e^{\epsilon}-1}\right)\right)-\frac{\epsilon e^{\epsilon}}{e^{\epsilon}+\frac{\left( e^{\epsilon}-1\right)^2}{e^{\epsilon}\left(\epsilon-1\right)+1}-1} &\text{if}\ \epsilon\leq \log\left(k\right),\\ \log\left(e^{\epsilon}+k-1\right)-\frac{\epsilon e^{\epsilon}}{e^{\epsilon}+k-1} &\text{if}\ \epsilon>\log\left(k\right).\\ \end{array}\right. \end{aligned} \end{equation} From~\eqref{eqn5_6} and~\eqref{eqn5_7}, we can verify that HR is randomness-order-optimal for all privacy levels $\epsilon$. \section{Sequence of Distribution Estimation (Proof of Theorem~\ref{Th2_6})} \label{Recov_GDP} In this section, we prove Theorem~\ref{Th2_6} (see page~\pageref{Th2_6}). The main idea of our proof is as follows. The first condition is obtained in a similar manner as in the proof of Theorem~\ref{Th2_4}. For the second condition, we relate the minimum amount of randomness required to preserve the privacy of $T$ samples to the minimum amount of randomness required to preserve the privacy of $T-1$ samples. In particular, we prove that $H\left(U\right)\geq H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right)$, where $H\left(U_{\min,t}\right)$ is the minimum amount of randomness of a key when we have a database of $t$ input samples. \begin{ddd} Let $U\in\mathcal{U}$ be a random key drawn from a discrete distribution $\mathbf{q}=\left[q_{1},\cdots,q_{k^{T}}\right]$ with support size $|\mathcal{U}|=k^{T}$, where $q_{u}=\text{Pr}\left[U=u\right]$. We say that the distribution $\mathbf{q}$ satisfies $\epsilon$-DP if there exists a bijective function $f:\mathcal{X}^{T}\to \left[1:k^{T}\right]$ from the database space $\mathcal{X}^{T}$ to the integers $\left[1:k^{T}\right]$, such that for every pair of neighboring databases $\mathbf{x},\mathbf{x}^{\prime}\in\left[k\right]^{T}$, we have \begin{equation}~\label{eqn6_1} \frac{q_{f\left(\mathbf{x}\right)}}{q_{f\left(\mathbf{x}^{\prime}\right)}}\leq e^{\epsilon}. \end{equation} \end{ddd} We begin our proof with the following lemma, which is a generalized version of Lemma~\ref{lemm5_1}.
We prove it in Appendix~\ref{AppG}. \begin{lemma}~\label{lemm6_1} Consider an input database $\mathbf{x}=\left(x^{\left(1\right)},\ldots,x^{\left(T\right)}\right)\in\left[k\right]^{T}$, and a random key $U\in\mathcal{U}=\lbrace u_1,\cdots,u_{k^{T}}\rbrace$ distributed according to an $\epsilon$-DP distribution $\mathbf{q}=\left[q_{1},\cdots,q_{k^{T}}\right]$. Then, there exists an $\epsilon$-DP-Rec mechanism $Q:\left[k\right]^{T}\to \left[k\right]^{T}$ that uses $U$ to create an output $Y^{T}\in \left[k\right]^{T}$, such that we can recover the input database $X^{T}$ from $(U,Y^{T})$. \end{lemma} We can prove the first necessary condition of Theorem~\ref{Th2_6} (which is to show $|\mathcal{U}|\geq|\mathcal{Y}^{T}|\geq |\mathcal{X}^{T}|$) in the same way as we proved it for Theorem~\ref{Th2_4}. For completeness, we provide a proof of it in Appendix~\ref{AppG}. Now we prove the necessity of the second condition. Consider an arbitrary $\epsilon$-DP-Rec mechanism $Q$ with output $Y^{T}\in\mathcal{Y}^{T}$ using a random key $U\in\mathcal{U}$, where $|\mathcal{Y}^{T}|=m\geq k^{T}$ and $|\mathcal{U}|=l\geq m$. Let $U\sim \mathbf{q}$, where $\mathbf{q}=\left[q_1,\ldots,q_l\right]$ such that $q_u=\text{Pr}\left[U=u\right]$ for $u\in\mathcal{U}$. Let $\mathcal{U}_{\mathbf{y}\mathbf{x}}\subset\mathcal{U}$ be the subset of key values such that the input $X^{T}=\mathbf{x}$ is mapped to $Y^{T}=\mathbf{y}$ when $U\in\mathcal{U}_{\mathbf{y}\mathbf{x}}$. Thus, the private mechanism $Q$ can be represented as \begin{equation} Q\left(\mathbf{y}|\mathbf{x}\right)=\sum_{u\in\mathcal{U}_{\mathbf{y}\mathbf{x}}}q_u. \end{equation} Observe that $\sum_{\mathbf{y}\in\mathcal{Y}^{T}}Q\left(\mathbf{y}|\mathbf{x}\right)=1$, since $Q\left(\mathbf{y}|\mathbf{x}\right)$ is a conditional distribution for any given $\mathbf{x}\in\left[k\right]^{T}$. Since $Q$ is an $\epsilon$-DP-Rec mechanism, it follows from the recoverability constraint that different inputs are mapped to $\mathbf{y}$ using disjoint sets of key values ($\mathcal{U}_{\mathbf{y}\mathbf{x}}\cap\mathcal{U}_{\mathbf{y}\mathbf{x}^{\prime}}=\emptyset$ for $\mathbf{x}\neq\mathbf{x}^{\prime}$). Thus, for each $\mathbf{y}\in\mathcal{Y}^{T}$, we have $s_{\mathbf{y}}=\sum_{\mathbf{x}\in\left[k\right]^{T}}Q\left(\mathbf{y}|\mathbf{x}\right)\leq 1$. Furthermore, we get $\sum_{\mathbf{y}\in\mathcal{Y}^{T}}\sum_{\mathbf{x}\in\left[k\right]^{T}}Q\left(\mathbf{y}|\mathbf{x}\right)=\sum_{\mathbf{y}\in\mathcal{Y}^{T}}s_{\mathbf{y}}=k^{T}$. We sort the $k^{T}$ databases in $\mathcal{X}^{T}$ in lexicographic order: we arrange them in increasing order of $x^{\left(1\right)}$, then arrange the databases that have the same $x^{\left(1\right)}$ in increasing order of $x^{\left(2\right)}$, and so on. For example, the database $\mathbf{x}=\left(x^{\left(1\right)},\ldots,x^{\left(i\right)},x^{\left(i+1\right)},\ldots,x^{\left(T\right)}\right)$ appears before the database $\tilde{\mathbf{x}}=\left(x^{\left(1\right)},\ldots,x^{\left(i\right)},\tilde{x}^{\left(i+1\right)},\ldots,\tilde{x}^{\left(T\right)}\right)$ when $x^{\left(i+1\right)}<\tilde{x}^{\left(i+1\right)}$. Furthermore, we denote by $\mathbf{x}_{i}$ the $i$th database in the lexicographic order for $i\in\left[k^{T}\right]$. Recall that $s_{\mathbf{y}}=\sum_{\mathbf{x}\in\left[k\right]^{T}}Q\left(\mathbf{y}|\mathbf{x}\right)$ for given $\mathbf{y}\in\mathcal{Y}^{T}$.
Thus, the probabilities $\mathbf{P}^{\mathbf{y}}= \left[P^{\mathbf{y}}_1,\ldots,P^{\mathbf{y}}_{k^{T}}\right]$ form a valid distribution with support size $k^{T}$, where $P_{j}^{\mathbf{y}}=\frac{Q\left(\mathbf{y}|\mathbf{x}_j\right)}{s_{\mathbf{y}}}$ for $j\in\left[k^{T}\right]$. Furthermore, for every pair of neighboring databases $\mathbf{x},\mathbf{x}^{\prime}\in\left[k\right]^{T}$, we have \begin{equation}~\label{eqn6_7} \frac{\nicefrac{Q\left(\mathbf{y}|\mathbf{x}\right)}{s_{\mathbf{y}}}}{\nicefrac{Q\left(\mathbf{y}|\mathbf{x}^{\prime}\right)}{s_{\mathbf{y}}}}=\frac{Q\left(\mathbf{y}|\mathbf{x}\right)}{Q\left(\mathbf{y}|\mathbf{x}^{\prime}\right)} \stackrel{\left(a\right)}{\leq} e^{\epsilon}, \end{equation} where step $\left(a\right)$ follows from the fact that $Q$ is an $\epsilon$-DP-Rec mechanism. Hence, the distribution $\mathbf{P}^{\mathbf{y}}$ is an $\epsilon$-DP distribution. The proof of the following lemma is presented in Appendix~\ref{AppH}. \begin{lemma}~\label{lemm6_2} For every output $\mathbf{y}\in\mathcal{Y}^{T}$, we have $H\left(\mathbf{P}^{\mathbf{y}}\right)\geq H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right),$ where $H\left(U_{\min,t}\right)$ denotes the minimum randomness of a private key when we have a database of $t$ samples for $t\in\lbrace 1,\ldots,T\rbrace$. \end{lemma} Using Lemma~\ref{lemm6_2}, we can prove Theorem~\ref{Th2_6} as follows. \begin{align} H\left(U\right)&=\frac{1}{k^{T}}\sum_{\mathbf{x}\in\left[k\right]^{T}}H\left(U\right)\stackrel{\left(a\right)}{\geq} \frac{1}{k^{T}}\sum_{\mathbf{x}\in\left[k\right]^{T}}H\left(Y^{T}|X^{T}=\mathbf{x}\right) \notag\\ &=\frac{1}{k^{T}}\sum_{\mathbf{x}\in\left[k\right]^{T}}\sum_{\mathbf{y}\in\mathcal{Y}^{T}}-Q\left(\mathbf{y}|\mathbf{x}\right)\log\left(Q\left(\mathbf{y}|\mathbf{x}\right)\right) \notag \\ &=\frac{1}{k^{T}}\sum_{\mathbf{y}\in\mathcal{Y}^{T}}\left[s_{\mathbf{y}}\left(\sum_{\mathbf{x}\in\left[k\right]^{T}}-\frac{Q\left(\mathbf{y}|\mathbf{x}\right)}{s_{\mathbf{y}}}\log\left(\frac{Q\left(\mathbf{y}|\mathbf{x}\right)}{s_{\mathbf{y}}}\right)\right)-s_{\mathbf{y}}\log\left(s_{\mathbf{y}}\right)\right] \notag \\ &=\frac{1}{k^{T}}\sum_{\mathbf{y}\in\mathcal{Y}^{T}}\big[s_{\mathbf{y}}H\left(\mathbf{P}^{\mathbf{y}}\right)-s_{\mathbf{y}}\log\left(s_{\mathbf{y}}\right)\big] \notag \\ &\stackrel{\left(b\right)}{\geq} \frac{1}{k^{T}}\sum_{\mathbf{y}\in\mathcal{Y}^{T}} \big[s_{\mathbf{y}}\left(H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right)\right)-s_{\mathbf{y}}\log\left(s_{\mathbf{y}}\right)\big] \notag\\ &\stackrel{\left(c\right)}{\geq} H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right), \label{eqn6_6} \end{align} where step $\left(a\right)$ follows from the fact that $Y^{T}$ is a function of $\left(X^{T},U\right)$, so that $H\left(Y^{T}|X^{T}=\mathbf{x}\right)\leq H\left(U\right)$. Step $\left(b\right)$ follows from Lemma~\ref{lemm6_2}. The inequality $\left(c\right)$ follows from solving the problem \begin{equation}~\label{opt_10} \begin{aligned} \min_{\lbrace s_{\mathbf{y}}\rbrace}&\ \sum_{\mathbf{y}\in\mathcal{Y}^{T}}s_{\mathbf{y}}\left[H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right)\right]-s_{\mathbf{y}}\log\left(s_{\mathbf{y}}\right)\\ s.t.&\ \sum_{\mathbf{y}\in\mathcal{Y}^{T}}s_{\mathbf{y}}=k^{T}\quad \text{ and } 0\leq s_{\mathbf{y}}\leq 1, \ \forall\ \mathbf{y}\in\mathcal{Y}^{T}. \end{aligned} \end{equation} Note that $f\left(x\right)=-x\log\left(x\right)$ is a concave function on $0\leq x\leq 1$. Therefore, the objective function in~\eqref{opt_10} is concave in $\lbrace s_{\mathbf{y}}\rbrace$.
The minimum of a concave function over a polytope is attained at one of its vertices, which here are obtained when the inequality constraints are satisfied with equality. By setting $k^{T}$ of the $s_{\mathbf{y}}$'s to one and the remaining $|\mathcal{Y}^{T}|-k^T$ of the $s_{\mathbf{y}}$'s to zero, the objective value in \eqref{opt_10} becomes $k^{T}\left(H\left(U_{\min,T-1}\right)+H\left(U_{\min,1}\right)\right)$, which gives inequality (c). Now, applying~\eqref{eqn6_6} recursively in $T$, we conclude that $H\left(U\right)\geq TH\left(U_{\min,1}\right)$, where $H\left(U_{\min,1}\right)$ is the minimum amount of randomness required to design an $\epsilon$-LDP-Rec mechanism, given in Theorem~\ref{Th2_4}. This completes the proof of Theorem~\ref{Th2_6}. \section{Single-level Privacy (Proofs of Theorem~\ref{Th2_1} and Theorem~\ref{Th2_2})} \label{LDP} \subsection{Lower Bound on the Minimax Risk Estimation Using Assouad's Method}~\label{LDP_AS} We now prove the lower bound on the minimax risk given in Theorem~\ref{Th2_1} (see page~\pageref{Th2_1}). We first follow similar steps as in~\cite{duchi2018minimax,Ye2018} to reduce the minimax problem into multiple binary testing problems using Assouad's method. We note that~\cite{duchi2018minimax,Ye2018} do not consider a randomness constraint. Hence, we formulate an optimization problem to obtain a lower bound on the minimax risk estimation with a randomness constraint. Finding a tight bound on the solution of this problem is the main step in our proof. We also provide an alternative proof of Theorem~\ref{Th2_1} using Fisher information, which leads to a tight bound for $\ell=\ell_{2}^{2}$ with smaller constant factors (see Appendix~\ref{LDP_F}). Let $|\mathcal{X}|=k$ be the input alphabet size. Let $\lbrace \mathbf{p}^{\nu}\rbrace$ be a set of distributions parameterized by $\nu=\left(\nu_1,\ldots,\nu_{k/2}\right)\in\mathcal{V}=\lbrace -1,1\rbrace^{k/2}$. The distribution $\mathbf{p}^{\nu}=\left(p_{1}^{\nu},\ldots,p_{k}^{\nu}\right)$ is given by: \begin{equation} p_{j}^{\nu}=\left\{\begin{array}{ll} \frac{1}{k}+\delta \nu_j& \text{if}\ j\in\lbrace 1,\ldots,k/2\rbrace,\\ \frac{1}{k}-\delta \nu_{j-k/2} & \text{if}\ j\in\lbrace k/2+1,\ldots,k\rbrace,\\ \end{array} \right. \end{equation} where $0\leq\delta\leq 1/k$ is a parameter that will be chosen later. Let $Y^{n}=\left[Y_1,\ldots,Y_n\right]$ and $\mathcal{Y}^{n}=\mathcal{Y}_1\times\cdots\times\mathcal{Y}_n$. Following~\cite{duchi2018minimax}, for any loss function $\ell\left(\hat{\mathbf{p}},\mathbf{p}\right)=\sum_{j=1}^{k}\phi\left(\hat{p}_j-p_j\right)$, where $\phi:\mathbb{R}\to\mathbb{R}_{+}$ is a symmetric function, we have\footnote{Observe that for loss function $\ell=\ell_{2}^{2}$, we have $\phi\left(x\right)=x^{2}$, and for loss function $\ell=\ell_{1}$, we have $\phi\left(x\right)=|x|$.} \begin{equation} \ell\left(\hat{\mathbf{p}}\left(y^{n}\right),\mathbf{p}^{\nu}\right)=\sum_{j=1}^{k}\phi\left(\hat{p}_{j}\left(y^{n}\right)-p_j^{\nu}\right)\geq\phi\left(\delta\right)\sum_{j=1}^{k/2}\mathbbm{1}\left(\text{sgn}\left(\hat{p}_j\left(y^{n}\right)-\frac{1}{k}\right)\neq \nu_j\right), \end{equation} where $\text{sgn}\left(x\right)=1$ if $x\geq 0$ and $\text{sgn}\left(x\right)=-1$ otherwise. Suppose that user $i$ chooses a private mechanism $Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}$ that generates an output $Y_i\in\mathcal{Y}_i$. Let $\mathbf{M}_{i}^{\nu}$ be the output distribution on $\mathcal{Y}_i$ for an input distribution $\mathbf{p}^{\nu}$ on $\mathcal{X}$, defined by \begin{equation}~\label{eqn3_30} \mathbf{M}_{i}^{\nu}\left(y\right)=\sum_{j=1}^{k}Q_{i}\left(y|X_i=j\right)p_{j}^{\nu}.
\end{equation} Let $\mathbf{M}^{n}_{+j}$ and $\mathbf{M}^{n}_{-j}$ denote the marginal distributions on $\mathcal{Y}^{n}$ conditioned on $\nu_j=+1$ and $\nu_j=-1$, respectively, where \begin{align*} \mathbf{M}^{n}_{+j}\left(y^{n}\right) &= \frac{1}{|\mathcal{V}|}\sum_{\nu:\nu_j=+1}\prod_{i=1}^{n}\mathbf{M}^{\nu}_i\left(y_i\right), \\ \mathbf{M}^{n}_{-j}\left(y^{n}\right) &= \frac{1}{|\mathcal{V}|}\sum_{\nu:\nu_j=-1}\prod_{i=1}^{n}\mathbf{M}^{\nu}_i\left(y_i\right). \end{align*} Thus, the minimax risk can be bounded using the following lemma, whose proof is presented in Appendix~\ref{AppK}. \begin{lemma}~\label{lemm3_4} For the family of distributions $\left\{ \mathbf{p}^{\nu}:\nu\in\mathcal{V}=\lbrace -1,1\rbrace^{k/2}\right\}$ and a loss function $\ell\left(\hat{\mathbf{p}},\mathbf{p}\right)=\sum_{j=1}^{k}\phi\left(\hat{p}_j-p_j\right)$ defined above, we have \begin{equation}~\label{eqn3_10} r^{\ell}_{\epsilon,R,n,k}\geq\phi\left(\delta\right)\frac{k}{2}\left(1-\sqrt{\frac{n}{2}\sup_{j\in\left[k/2\right]}\sup_{i\in\left[n\right]}\sup_{\nu:\nu_j=1}\sup_{Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}} D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)}\right). \end{equation} \end{lemma} Fix arbitrary $i\in\left[n\right]$, $j\in\left[k/2\right]$, and $\nu\in\mathcal{V}$. We have {\allowdisplaybreaks \begin{align} D_{\text{KL}}&\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)\stackrel{\left(a\right)}{\leq} D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)+D_{\text{KL}}\left(\mathbf{M}^{\nu-2e_j}_{i}||\mathbf{M}^{\nu}_{i}\right)\\ &=\sum_{y\in\mathcal{Y}_i} \left( \mathbf{M}^{\nu}_{i}\left(y\right)-\mathbf{M}^{\nu-2e_j}_{i}\left(y\right)\right)\log\left(\frac{\mathbf{M}^{\nu}_{i}\left(y\right)}{\mathbf{M}^{\nu-2e_j}_{i}\left(y\right)}\right)\nonumber\\ &\stackrel{\left(b\right)}{\leq}\sum_{y\in\mathcal{Y}_i} \frac{\left( \mathbf{M}^{\nu}_{i}\left(y\right)-\mathbf{M}^{\nu-2e_j}_{i}\left(y\right)\right)^2}{\mathbf{M}^{\nu-2e_j}_{i}\left(y\right)}\stackrel{\left(c\right)}{=}\sum_{y\in\mathcal{Y}_i} \delta^2\frac{\left( Q_i\left(y|j\right)-Q_i\left(y|j+k/2\right)\right)^2}{\sum_{j^{\prime}=1}^{k}Q_i\left(y|j^{\prime}\right)p^{\nu-2e_j}_{j^{\prime}}}\nonumber\\ &\stackrel{\left(d\right)}{\leq}2\delta^2 e^{\epsilon}\sum_{y\in\mathcal{Y}_i} \frac{\left( Q_i\left(y|j\right)-Q_i\left(y|j+k/2\right)\right)^2}{Q_i\left(y|j\right)+Q_i\left(y|j+k/2\right)}~\label{eqn3_31}, \end{align} } where step $\left(a\right)$ follows from the fact that $D_{\text{KL}}\left(\cdot||\cdot\right)$ is non-negative. Step $\left(b\right)$ follows from the inequality $\log\left(x\right)\leq x-1$. Step $\left(c\right)$ follows from the definition of $\mathbf{M}^{\nu}_{i}$ in~\eqref{eqn3_30}. Step $\left(d\right)$ follows from bounding the denominator as follows: \begin{equation} \begin{aligned} \sum\limits_{j^{\prime}=1}^{k}Q_i\left( y|j^{\prime}\right)p_{j^{\prime}}^{\nu-2e_j}&\geq e^{-\epsilon}\frac{Q_i\left( y|j\right)+Q_i\left( y|j+k/2\right)}{2}\sum_{j^{\prime}=1}^{k}p_{j^{\prime}}^{\nu-2e_j}\\ &=e^{-\epsilon}\frac{Q_i\left( y|j\right)+Q_i\left( y|j+k/2\right)}{2}, \end{aligned} \end{equation} where we use the facts that $Q_i\left( y|j^{\prime}\right)\geq e^{-\epsilon} Q_i\left( y|j\right)$ and $Q_i\left( y|j^{\prime}\right)\geq e^{-\epsilon} Q_i\left( y|j+k/2\right)$ for all $j^{\prime}\in\left[k\right]$.
\begin{lemma}~\label{lemma_1} For any randomized mechanism $Q\in\mathcal{Q}_{\left(\epsilon,R\right)}$ that generates an output $Y\in\mathcal{Y}$, we have \begin{equation}~\label{eqna_1} \begin{aligned} \sup_{Q\in\mathcal{Q}_{\left(\epsilon,R\right)}}\sum_{y\in\mathcal{Y}}\frac{\left(Q\left(y|j\right)-Q\left(y|j+k/2\right)\right)^2}{Q\left(y|j\right)+Q\left(y|j+k/2\right)}\leq\left\{\begin{array}{ll} 2\frac{\left(e^\epsilon-1\right)^2}{\left( e^\epsilon+1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right),\\ 2\frac{p_R^2\left(e^{\epsilon}-1\right)^2}{ e^{2\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right). \end{array} \right. \end{aligned} \end{equation} \end{lemma} This lemma presents an upper bound on~\eqref{eqn3_31} as a function of the randomness $R$ for any private mechanism $Q\in\mathcal{Q}_{\left(\epsilon,R\right)}$. To prove this lemma, we first show that the optimization problem~\eqref{eqna_1} is non-convex due to the randomness constraint. We then prove that the maximum value of the objective in~\eqref{eqna_1} is attained when the output of the mechanism $Q\in\mathcal{Q}_{\left(\epsilon,R\right)}$ is binary. Finally, we obtain a tight bound for the binary-output case. \begin{proof}[Proof of Lemma \ref{lemma_1}] Without loss of generality, assume that $\mathcal{Y}=\lbrace y_1,\ldots,y_m\rbrace$ with $|\mathcal{Y}|=m$. For ease of notation, we write $Q\left(y_l|j\right)=q_{l,j}$ and $Q\left(y_l|j+k/2\right)=q_{l,j+k/2}$. The problem~\eqref{eqna_1} can be formulated as follows \begin{align} \textbf{P1:} \qquad&\max_{\lbrace q_{l,j},q_{l,j+k/2}\rbrace_{l=1}^{m}} \sum_{l=1}^{m}\frac{\left(q_{l,j}-q_{l,j+k/2}\right)^2}{q_{l,j}+q_{l,j+k/2}}~\label{eqnb_1}\\ \text{s.t.}&\quad H\left(\left[q_{1,j},\ldots,q_{m,j}\right]\right)\leq R,\qquad H\left(\left[q_{1,j+k/2},\ldots,q_{m,j+k/2}\right]\right)\leq R~\label{eqnb_4}\\ &\quad e^{-\epsilon}\leq \frac{q_{l,j}}{q_{l,j+k/2}}\leq e^{\epsilon},\hspace{3cm} \forall l\in\left[m\right]\nonumber\\ &\quad q_{l,j}\geq 0,\qquad q_{l,j+k/2}\geq 0,\hspace{2cm} \forall l\in\left[m\right]\nonumber\\ &\quad \sum_{l=1}^{m}q_{l,j}=1,\qquad \sum_{l=1}^{m}q_{l,j+k/2}=1\nonumber ~\label{eqnb_7} \end{align} Note that the objective function~\eqref{eqnb_1} is jointly convex in both $\lbrace q_{l,j} \rbrace_{l=1}^{m}$ and $\lbrace q_{l,j+k/2}\rbrace_{l=1}^{m}$. However, the optimization problem \textbf{P1} is non-convex for two reasons: first, we maximize a convex function, and second, the entropy constraints~\eqref{eqnb_4} are sub-level sets of a concave function and hence non-convex constraints. Nevertheless, we can solve the optimization problem~\textbf{P1} by exploiting the result of Lemma~\ref{lemmb_1} below. \begin{lemma}~\label{lemmb_1} The optimal solution of the non-convex optimization problem \textbf{P1} is obtained when the output size is $m=2$. \end{lemma} The proof of Lemma~\ref{lemmb_1} is presented in Appendix~\ref{AppB}. Since the output alphabet is binary, we can efficiently plot the feasible region of~\textbf{P1} for $m=2$, as depicted in Figure~\ref{Figb_1}. Since we maximize a convex function, the optimal solution is at the boundary of the feasible set. Furthermore, the objective function~\eqref{eqnb_1} is symmetric in $q_{1,j}$ and $q_{1,j+k/2}$ for $m=2$. As a result, the optimal solution is given by \begin{figure*}[t!]
\begin{subfigure}{0.5\linewidth} \centerline{\includegraphics[scale=0.355]{F_Region_1.pdf}} \end{subfigure} \begin{subfigure}{0.49\linewidth} \centerline{\includegraphics[scale=0.37]{F_Region_2.pdf}} \end{subfigure} \caption{The feasible region of the optimization problem~\textbf{P1} for $m=2$. In $\left(a\right)$, we have $R=0.5<H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ for $\epsilon=1$, and hence the optimal point is one of the black points. In $\left(b\right)$, we have $R=0.85>H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ for $\epsilon=1$, and hence the optimal point is one of the black vertices.} ~\label{Figb_1} \end{figure*} \begin{equation}~\label{eqnb_2} q_{1,j}^{*}=\left\{\begin{array}{ll} \frac{e^\epsilon}{e^\epsilon+1}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ p_R& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right.,\qquad q_{1,j+k/2}^{*}=\left\{\begin{array}{ll} \frac{1}{e^\epsilon+1}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ \frac{p_R}{e^{\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right., \end{equation} where $q_{2,j}^{*}=1-q^{*}_{1,j}$ and $q_{2,j+k/2}^{*}=1-q_{1,j+k/2}^{*}$. Substituting from~\eqref{eqnb_2} into the objective function~\eqref{eqnb_1}, we get \begin{equation} \sum_{l=1}^{m}\frac{\left(q_{l,j}-q_{l,j+k/2}\right)^2}{q_{l,j}+q_{l,j+k/2}}\leq \left\{\begin{array}{ll} 2\frac{\left(e^\epsilon-1\right)^2}{\left( e^\epsilon+1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right),\\ 2\frac{p_R^2\left(e^{\epsilon}-1\right)^2}{ e^{2\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right). \end{array} \right. \end{equation} This completes the proof of Lemma~\ref{lemma_1}. \end{proof} Using the bound from Lemma~\ref{lemma_1} in \eqref{eqn3_31} and taking the supremum over all $Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}$, we get \begin{equation}~\label{eqn3_8} \begin{aligned} \sup_{Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}} D_{\text{KL}}\left(\mathbf{M}^{\nu}_{i}||\mathbf{M}^{\nu-2e_j}_{i}\right)&\leq 2\delta^2 e^{\epsilon} \sup_{Q_i\in\mathcal{Q}_{\left(\epsilon,R\right)}} \sum_{y\in\mathcal{Y}_i} \frac{\left( Q_i\left(y|j\right)-Q_i\left(y|j+k/2\right)\right)^2}{Q_i\left(y|j\right)+Q_i\left(y|j+k/2\right)}\\ &\leq 2\delta^2 e^{\epsilon} \left\{\begin{array}{ll} 2\frac{\left(e^\epsilon-1\right)^2}{\left( e^\epsilon+1\right)^2}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right),\\ 2\frac{p_R^2\left(e^{\epsilon}-1\right)^2}{ e^{2\epsilon}}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right). \end{array} \right. \end{aligned} \end{equation} Substituting from~\eqref{eqn3_8} into~\eqref{eqn3_10}, we get \begin{equation} \begin{aligned} r^{\ell}_{\epsilon,R,n,k}&\geq \left\{\begin{array}{ll} \phi\left(\delta\right)\frac{k}{2}\left(1-\sqrt{2\delta^2 ne^{\epsilon} \frac{\left(e^{\epsilon}-1\right)^2}{\left(e^{\epsilon}+1\right)^2}}\right)& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ \phi\left(\delta\right)\frac{k}{2}\left(1-\sqrt{2\delta^2 n \frac{p_R^2\left(e^{\epsilon}-1\right)^2}{e^{\epsilon}}}\right)& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right.
\end{aligned} \end{equation}
By setting $\delta^2=\frac{\left(e^{\epsilon}+1\right)^2}{8ne^{\epsilon}\left(e^{\epsilon}-1\right)^2}$ if $R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)$ and $\delta^2=\frac{e^{\epsilon}}{8np_R^2\left(e^{\epsilon}-1\right)^2}$ if $R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)$, we get \begin{equation} \begin{aligned} r^{\ell}_{\epsilon,R,n,k}&\geq \left\{\begin{array}{ll} \phi\left(\sqrt{\frac{\left(e^{\epsilon}+1\right)^2}{8ne^{\epsilon}\left(e^{\epsilon}-1\right)^2}}\right)\frac{k}{4}& \text{if}\ R\geq H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right)\\ \phi\left(\sqrt{\frac{e^{\epsilon}}{8np_R^2\left(e^{\epsilon}-1\right)^2}}\right)\frac{k}{4}& \text{if}\ R< H_2\left(\frac{e^\epsilon}{e^\epsilon+1}\right) \end{array} \right. \end{aligned} \end{equation}
For the loss function $\ell=\ell_{2}^{2}$, we set $\phi\left(x\right)=x^{2}$, and for $\ell=\ell_{1}$, we set $\phi\left(x\right)=|x|$. This completes the proof of Theorem~\ref{Th2_1}, albeit with a slightly worse constant of $32$ instead of $16$ in the denominator. We provide a different proof of Theorem~\ref{Th2_1} in Appendix~\ref{LDP_F} using Fisher information that gives the exact bound as stated in Theorem~\ref{Th2_1}.
\subsection{Upper Bound on the Minimax Estimation Risk Using Hadamard Response}~\label{LDP-AC}
In this section, we prove Theorem~\ref{Th2_2} (see page~\pageref{Th2_2}) by proposing a private mechanism that adapts the Hadamard response given in~\cite{acharya2019communication}, in which each user answers a yes-no question such that the probability of telling the truth depends on the amount of randomness~$R$. Each user $i\in\left[n\right]$ has a binary output $Y_i\in\lbrace 0,1\rbrace$. The $\left(\epsilon,R\right)$-LDP mechanism of the $i$-th user is defined by \begin{equation}~\label{eqn3_9} Q\left(Y_i=1|X\right)=\left\{\begin{array}{ll} q& \text{if}\ X\in B_i\\ \frac{q}{e^{\epsilon}}& \text{if}\ X\notin B_i\\ \end{array}\right. \end{equation} where $B_i\subset\left[k\right]$ is a subset of inputs, and $q$ is a probability value, to be determined later, such that $H_2\left(q\right)\leq R$. Let $K=2^{\ceil{\log\left(k\right)}}$ denote the smallest power of $2$ that is at least $k$, and let $H_{K}$ be the $K\times K$ Hadamard matrix. In the following, we assume an extended distribution $\overline{\mathbf{p}}$ over the set $\mathcal{X}=\left[K\right]$ with $|\mathcal{X}|=K$ that is obtained by zero-padding the original distribution $\mathbf{p}$ with $\left( K-k\right)$ zeros, i.e., $\overline{\mathbf{p}}=\left[\overline{p}_1,\ldots,\overline{p}_{K}\right]=\left[p_1,\ldots,p_k,0,\ldots,0\right]$. For $j\in\left[K\right]$, let $B^{j}$ be the set of row indices that have $1$ in the $j$-th column of the Hadamard matrix $H_{K}$. For example, when $K=4$, the Hadamard matrix is given by \begin{equation} H_{4}=\begin{bmatrix} 1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\\ \end{bmatrix} \end{equation} Hence, $B^{1}=\lbrace 1,2,3,4\rbrace$, $B^{2}=\lbrace 1,3\rbrace$, $B^{3}=\lbrace 1,2\rbrace$, and $B^{4}=\lbrace 1,4\rbrace$. We divide the users into $K$ sets ($\mathcal{US}_1,\ldots,\mathcal{US}_K$), where each set contains $n/K$ users. For each user $i\in\mathcal{US}_j$, we set $B_i=B^{j}$. Let $p\left(B^{j}\right)=\text{Pr}\left[X\in B^{j}\right]=\sum_{x\in B^{j}}\overline{p}_x$, and $s_j=\text{Pr}\left[Y_i=1\right]$ for $i\in\mathcal{US}_j$.
Then, we can easily see that \begin{equation} \begin{aligned} s_j&=p\left(B^{j}\right)q+\left(1-p\left(B^{j}\right)\right)\frac{q}{e^{\epsilon}}\\ &=p\left(B^{j}\right)q\left(\frac{e^{\epsilon}-1}{e^{\epsilon}}\right)+\frac{q}{e^{\epsilon}} \end{aligned} \end{equation}
Let $\hat{s}_j=\frac{1}{|\mathcal{US}_j|}\sum_{i\in\mathcal{US}_j}\mathbbm{1}\left\{Y_i=1\right\}$ denote the estimate of $s_j$. Then, we can estimate $p\left(B^{j}\right)$ as $\hat{p}\left(B^{j}\right)=\frac{e^{\epsilon}}{q\left(e^{\epsilon}-1\right)}\left(\hat{s}_j-\frac{q}{e^{\epsilon}}\right)$. Observe that the relation between the distribution $\overline{\mathbf{p}}$ and $\mathbf{p}\left(B\right)=\left[p\left(B^{1}\right),\ldots,p\left(B^{K}\right)\right]$ is given by~\cite[Eq.~$13$]{acharya2019communication} \begin{equation} \mathbf{p}\left(B\right)=\frac{H_{K}\overline{\mathbf{p}}+\mathbf{1}_{K}}{2}, \end{equation} where $\mathbf{1}_K$ denotes a vector of $K$ ones. Hence, we can estimate the distribution $\overline{\mathbf{p}}$ as \begin{equation} \hat{\overline{\mathbf{p}}}=H_{K}^{-1}\left(2\hat{\mathbf{p}}\left(B\right)-\mathbf{1}_{K}\right)=\frac{1}{K}H_{K}\left(2\hat{\mathbf{p}}\left(B\right)-\mathbf{1}_{K}\right). \end{equation}
\begin{lemma}~\label{lemm3_3} For arbitrary $\mathbf{p}\in\Delta_k$, we have \begin{equation} \mathbb{E}\left[\|\mathbf{p}-\hat{\mathbf{p}}\|_{2}^{2}\right]\leq \frac{2k e^{2\epsilon}}{n q^{2}\left(e^{\epsilon}-1\right)^2}. \end{equation} \end{lemma}
The proof is exactly the same as that of~\cite[Theorem~$5$]{acharya2019communication}. By setting $q=\frac{e^{\epsilon}}{e^{\epsilon}+1}$ if $R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$ and $q=p_R$ if $R< H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right)$, we get \begin{equation} r_{\epsilon,R,n,k}^{\ell_{2}^{2}}\leq\left\{ \begin{array}{ll} \frac{2k \left(e^{\epsilon}+1\right)^{2}}{n \left(e^{\epsilon}-1\right)^2} & \text{if}\ R\geq H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right),\\ \frac{2k e^{2\epsilon}}{np_R^{2} \left(e^{\epsilon}-1\right)^2} & \text{if}\ R< H_2\left(\frac{e^{\epsilon}}{e^{\epsilon}+1}\right). \end{array} \right. \end{equation}
The difference between our mechanism and that of~\cite{acharya2019communication} is that we design the private mechanism~\eqref{eqn3_9} to cover all values of the randomness $R$. This completes the proof of Theorem~\ref{Th2_2}.
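To make the construction concrete, the following short simulation sketch generates the randomized yes-no responses, applies the estimator $\hat{\overline{\mathbf{p}}}=\frac{1}{K}H_{K}\left(2\hat{\mathbf{p}}\left(B\right)-\mathbf{1}_{K}\right)$, and compares the empirical $\ell_2^2$ error with the bound of Lemma~\ref{lemm3_3}. It is our own illustration of the mechanism described above, not code from the original work; all function and variable names are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def hadamard(K):
    """Sylvester construction of the K x K Hadamard matrix (K a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < K:
        H = np.block([[H, H], [H, -H]])
    return H

def simulate(p, n, eps, q):
    """Estimate p via the Hadamard response with reporting probability q."""
    k = len(p)
    K = 2 ** int(np.ceil(np.log2(k)))
    H = hadamard(K)
    p_bar = np.pad(p, (0, K - k))          # zero-padded distribution
    p_hat_B = np.zeros(K)
    for j in range(K):                     # users in group j use the set B^j
        in_Bj = H[:, j] == 1               # row indices with +1 in column j
        x = rng.choice(K, size=n // K, p=p_bar)
        prob_one = np.where(in_Bj[x], q, q / np.exp(eps))
        y = rng.random(n // K) < prob_one  # randomized binary answers
        s_hat = y.mean()
        p_hat_B[j] = np.exp(eps) / (q * (np.exp(eps) - 1)) \
                     * (s_hat - q / np.exp(eps))
    return (H @ (2 * p_hat_B - 1) / K)[:k] # invert p(B) = (H p_bar + 1)/2

k, n, eps = 8, 200_000, 1.0
q = np.exp(eps) / (np.exp(eps) + 1)        # the high-randomness regime
p = np.ones(k) / k
err = np.sum((p - simulate(p, n, eps, q)) ** 2)
bound = 2 * k * np.exp(2 * eps) / (n * q**2 * (np.exp(eps) - 1) ** 2)
print(f"l2^2 error = {err:.2e}, bound = {bound:.2e}")
\end{verbatim}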
\section{Introduction} These days, blockchain-based decentralized applications are designed for a variety of use cases. Society has embraced the innovation of blockchain technology and turned its potential into ingenious software products. Since one of the main purposes of blockchain technology is to remove the interaction with a central authority or a third party that controls specific information, its potential was widely acknowledged and it was adopted in various domains: finance, government, digital identity, media and entertainment, healthcare and supply chain management.
Rating systems are widely used by many social media platforms to establish specific trends or trending posts, videos, movies, and so on. There are various rating systems, which have different purposes with respect to the rated resources: \emph{like/dislike-based rating systems}, \emph{star-based rating systems} and \emph{review-based rating systems}, which are used to evaluate media content and products. Most of the existing rating systems are used for advertisement, which can provide the one who controls a specific resource with substantial income. An interesting fact is that most of the media platforms directly control the rating systems they are using. Even if the policies of media platforms seem to be transparent, there is still no evidence for this transparency. The users on these platforms have to trust the rating process, without any certain proof that it is indeed transparent.
To the best of our knowledge, there is currently no solution that has the purpose of moving rating systems onto the blockchain. However, we can associate \emph{rating} with \emph{voting}, which is a concept widely discussed~\cite{10.1007/978-3-642-02627-0_5,mci/Khader2012,Ayed2017ACS,10.1007/978-981-10-7605-3_50,8603050}. In this case, the blockchain technology is used to distribute an open voting record among citizens, such that the citizens no longer need to put their trust in central authorities. Considering the similarity between \emph{decentralized voting} and \emph{decentralized rating}, we can expect that a blockchain-based decentralized application for rating could have a major impact, since it allows the users to rate internet resources in a trustworthy manner.
Most of the existing media platforms include a component for rating their content (e.g., posts, videos, songs, movies). For example, both YouTube and IMDb provide rating mechanisms. The rating process on these two platforms is handled by specialised implementations. Even if YouTube informs its users with respect to the number of views, likes or dislikes that a video resource collected, there are other factors that influence these numbers (e.g., watch time, session time or the popularity of a specific channel). Since the number of views collected by the video resources is usually a criterion for producing advertising money, the owners of these resources may be tempted to use fraudulent means to increase their income. On the other hand, YouTube claims that its algorithms handle these kinds of situations, even if there is still a lack of transparency in its actions\footnote{There are plenty of web pages where users complain (e.g., \url{https://www.authorsguilds.com/how-youtube-algorithm-works/}, \url{https://support.google.com/youtube/thread/11131851?hl=en}, \url{https://www.appypie.com/how-youtube-algorithm-works}).
Even if these pages cannot necessarily be trusted, it is hard for YouTube to guarantee/prove that its rating algorithms are transparent and work correctly.}. On the IMDb platform, one of the main problems is the determination of the ranking of a specific movie. The platform does not consider the mean of the ratings, but it uses a complex algorithm which computes the ranking based on several criteria. Even if the rating values are accessible, the users cannot entirely trust the accuracy of the rating process, since its transparency is still under question\footnote{ \url{https://help.imdb.com/article/imdb/track-movies-tv/ratings-faq/G67Y87TFYYP6TWAV\#}.}.
In order to address these issues, we propose a decentralized application for rating. Besides the rating functionalities, which are provided by a specialised \emph{smart contract}, our solution comes with an additional authentication mechanism that guarantees the \emph{uniqueness} of each user who rates resources. In this way, fraudulent tendencies are reduced and the rating process is transparent and trustworthy.
\paragraph{Contributions.} The main contribution of this paper is a blockchain-based decentralized application for rating different kinds of internet resources. The application provides its users with an intuitive UI, which facilitates the rating process. The rating functionalities are implemented by a specialised \emph{smart contract}, which guarantees the protection of the identities of the users and stores rating records in a proper manner.
\paragraph{Paper Organisation.} In Section~\ref{sec:prelim} we present background information on the blockchain and the technologies we use. In Section~\ref{sec:tool} we present our main contribution, that is, a decentralized application for rating. In Section~\ref{sec:tooleval} we present some experiments that we have performed and their execution cost analysis. We conclude in Section~\ref{sec:conclusions}.
\section{Background and tools} \label{sec:prelim} This section recalls several background notions and tools that we use in this paper. We give a brief presentation of each of them, and we point the readers to additional material if necessary.
The \emph{blockchain} is a new technology, which records data across a peer-to-peer network in a distributed shared ledger. The nodes of the network contribute to the creation of this distributed ledger, which is a chain of blocks of transactions. The creation of blocks is a complex process that requires computing power. Specialised nodes, called \emph{miners}, are incentivised to invest computational power in order to create new blocks. Transactions are broadcast in the network, and miners collect them and try to build a block of transactions following strict rules, including a link to the last known block in the chain of blocks. When a miner finishes the creation of a block, the block is sent across the network and consensus algorithms are used to accept the block. No centralised control is involved. Based on the consensus principles, approved blocks are added to the main ledger and stored chronologically. Even if this technology was initially designed for cryptocurrency trading~\cite{bitcoin}, its potential is exploited in a variety of software products.
\emph{Ethereum}~\cite{wood2014ethereum,eth} is a popular blockchain platform, which enables the use of \emph{smart contracts}. A smart contract is just a program that encodes a digital agreement which is executed automatically by the \emph{miners} in the network.
Such programs are deployed in the network as special transactions and the miners need to execute them when creating blocks. Smart contracts can simulate real-world agreements for different kinds of assets. Usually they are written in higher-level languages (e.g., Solidity~\cite{solidity}, Vyper~\cite{vyper}) and compiled into Ethereum bytecode~\cite{wood2014ethereum,eth}. The compiled code is packed in a transaction and sent across the network. Miners use the \emph{Ethereum Virtual Machine} (EVM) to execute compiled contracts. Each bytecode instruction has an associated execution cost. This is yet another incentive for the miners to execute the code. On the other hand, the smart contract programmer needs to balance the benefits with the execution costs.
\emph{Solidity}~\cite{solidity} is a statically-typed, high-level and contract-oriented programming language designed for implementing smart contracts that run on the EVM. Solidity has a wide range of functionalities inspired by existing programming languages (e.g., JavaScript, C++ and Python). This programming language is used on several platforms, including Ethereum and Hyperledger~\cite{hyperledger}.
\emph{Remix}~\cite{remix} is a powerful tool that allows its users to design, deploy and test smart contracts directly in the browser. It provides development environments for several contract-oriented programming languages (e.g., Solidity, Vyper). Since Remix can simulate the deployment process of a smart contract, users can interact with the smart contract functions, emit events, debug or even inspect transactions. Users can choose from a wide range of plugins, which are designed to improve the development process of smart contracts (e.g., vulnerability detection, cost estimation, oracle integration). Remix can connect with Metamask~\cite{metamask} and deploy the smart contracts on various existing test networks (e.g., Ropsten~\cite{ropsten}, Kovan~\cite{kovan}, Rinkeby~\cite{rinkeby}).
\emph{Metamask}~\cite{metamask} is a browser extension, which can be used on various existing browsers (e.g., Chrome, Firefox and Brave). It represents a way to connect normal browsers with the Ethereum blockchain and also to interact with decentralized applications. Metamask is a token wallet, which manages the digital assets of the users and protects the personal data and the access keys. For testing purposes, Metamask can be connected with a local blockchain network by importing the predefined accounts.
\emph{Truffle}~\cite{truffle} is a popular development framework for Ethereum. It provides several tools that facilitate the development process of decentralized applications. Truffle manages smart contract deployments, automates smart contract testing and provides its users with an interactive console, which includes useful commands for smart contract manipulation.
\emph{Ganache}~\cite{ganache} is a tool provided by Truffle. It creates a local Ethereum blockchain network, which can be used for smart contract deployment and testing purposes. The users can inspect the issued transactions or events and explore the debugging information. Ganache contains a set of predefined accounts, which can be imported in Metamask using their private keys and the mnemonic code. The information about each account (e.g., address, balance, number of issued transactions) can also be explored.
\emph{React}~\cite{react} is a JavaScript library used for building the user interface for web applications.
We use React and Material UI~\cite{materialui} to create the UI of our decentralized application for rating.
\emph{Passport}~\cite{passport} is a JavaScript library, which provides a wide range of authentication strategies that can be embedded into a web application. Our solution incorporates an authentication mechanism, which makes use of these strategies.
\section{A decentralized application for rating} \label{sec:tool} \noindent \begin{figure}[t] \centering \includegraphics[scale=.2]{UseCaseDiagram.png} \caption{A use case diagram for the decentralized App for Rating. The user interacts with the smart contract using several components: an authentication mechanism, an input validation component, and a component that sends the rating to the smart contract. A query-like component is used to retrieve data from the smart contract. The order of the steps to perform is given by the numbered labels.} \label{fig:usecasediagram} \end{figure}
In this section we present a general workflow of the proposed decentralized application for rating. The application is divided into several components, which have specific roles (e.g., authentication, validation, storage, error handling and rating). The diagram in Figure~\ref{fig:usecasediagram} illustrates the main use cases of the application and highlights the steps of the rating process using numbered labels. We explain these steps in the next paragraphs.
First, the users need to authenticate (step 1 in Figure~\ref{fig:usecasediagram}) with one of the available providers (e.g., Google, GitHub and Spotify). If the user provides valid credentials, then the authentication process succeeds and an ID of the user is returned (step 2). The IDs are used when users initiate a rating operation. After authentication, the users can rate a variety of internet resources (step 3) using their IDs (step 4). In order to be recorded, the ratings need to be valid. The application contains a component responsible for input validation. Each time a user initiates a rating operation, the validation component checks if a set of conditions (e.g., if the rated resource is truly hosted by a specific provider or if the user has already rated a specific resource) is met (step 5). If the requirements are satisfied, the rating process can continue, otherwise an error is thrown. Finally, the rating component is used to interact with the smart contract (step 6). The smart contract exposes a function that handles the rating, which has to be called only with valid arguments. The function creates a new transaction (step 7), sends it across the network and the state of the smart contract is eventually updated (step 8). There is also a component responsible for retrieving the rating data (steps 9 and 10) from the smart contract, which informs the users with respect to the updated rating history.
\subsection{DApp components} In the current section, we will focus our attention on the main components of the decentralized application: the smart contract, the authentication mechanism and the user interface. We will briefly discuss the role of each component within the application. \begin{enumerate} \item The \emph{smart contract} encapsulates the logic of the rating process. It exposes a public function that is called each time a user initiates a rating operation. This function accurately handles the different use cases and alters the state of the smart contract when a rating operation succeeds. The smart contract includes several data structures that keep track of the rating history.
We do not perform any additional checks in the smart contract code, since this could increase the execution cost. \item The \emph{authentication mechanism} guarantees the fact that users have valid accounts. Since the users are uniquely identified, attempts to rate a specific resource multiple times are easily detected. In this way, fraudulent tendencies are reduced and the users are provided with a trustworthy rating experience. \item The \emph{user interface} provides the users with an intuitive visual experience of the rating process and contains three sections: the authentication section, the rating section and the data visualisation section. The user interface accesses the state of the smart contract and retrieves the rating history in an asynchronous manner. The extracted information can be used not only for the data visualisation section, but also for error handling within the rating section. \end{enumerate}
\subsection{Smart contract for rating} We designed a smart contract which simulates the behaviour of a \emph{like/dislike-based} rating system. We developed the smart contract in a Solidity environment within the Remix platform. The smart contract allows the users to rate positively or negatively different kinds of media contents. The behaviour of the smart contract does not change, regardless of the content provider. The users and the rated resources are uniquely identified and the ratings consist of tuples of the following form: \begin{center} $\langle$ a user identifier, a resource identifier, a boolean value $\rangle$, \end{center} \noindent where the boolean flag represents a positive or a negative rating. The logic of the smart contract was simplified, since a part of the conditions and requirements are handled directly at the application level. This simplification process has no impact on the accuracy of the rating operations, and it considerably reduces the execution costs, which is one of our goals.
\subsubsection{Data structures.} The smart contract contains a set of data structures which record the rating operations. In this section, we will provide a brief description of the data structures used. The identifiers for users are of type {\tt bytes32} and the identifiers for resources are of type {\tt string}. \begin{enumerate} \item {\tt resourceRating} is a {\tt struct} used to monitor the number of likes and dislikes for a specific resource. \item {\tt resourcesInformation} represents a {\tt mapping}. The structure records the number of likes and dislikes for each rated resource. \item {\tt resources} is an {\tt array} data structure. This component stores all the resources that were rated. \item {\tt ratedResources} represents a {\tt mapping}. It maps resource identifiers to boolean values, where the boolean values indicate if a resource was previously rated or not. \item {\tt ratingsInformation} is a {\tt mapping} data structure. It records a tuple of the following form: user, resource, rating value. This data structure is helpful, because it provides information with respect to the rating history. If a user attempts to rate a specific resource multiple times in the same way (e.g., positively or negatively), the rating operation will be rejected, since this data structure contains the proof of the previous rating operation. \item {\tt usersToResources} is similar to the data structure described previously. The purpose of this {\tt mapping} is to record if a resource was previously rated by a specific user.
The application can handle several error cases based on the information stored within this data structure. \end{enumerate} These data structures are used within the rating function exposed by the smart contract. If a rating operation succeeds, these data structures are updated. The state of the smart contract can be accessed externally, since all these data structures are publicly available.
\subsubsection{Smart contract functions.} In this section, we will focus our attention on the functionalities exposed by the smart contract. We will enumerate a set of possible use cases, which are handled by the smart contract. \begin{enumerate} \item The users can rate the resources positively or negatively (e.g., by giving likes or dislikes). \item The users can change their rating options for the previously rated resources. In this case, the number of likes and dislikes of the resources updates according to the current rate (e.g., if the initial rate is positive and the current rate is negative, the number of likes decreases by one and the number of dislikes increases by one). \item If a user intends to rate a resource that was not previously included in the rating history, the resource is recorded and its number of likes or dislikes, initially 0, updates to 1. \item If a user intends to rate a resource that was previously recorded, the number of likes or dislikes increases by one, with respect to the current rate. \end{enumerate}
The {\tt rate} function (Listing 1.1) accurately handles each possible use case of a \emph{like/dislike-based} rating system. Based on the provided arguments ({\tt \_cred} -- the user identifier, {\tt \_res} -- the resource identifier and {\tt \_vote} -- the rate value), this function updates the state of the smart contract each time a new transaction is issued. The resource is plainly recorded within the smart contract, but the identities of the users are protected, since their IDs are hashed beforehand. \begin{center} \lstset{% caption=The implementation for the {\tt rate} function., basicstyle=\ttfamily\scriptsize\bfseries, frame=tb } \begin{lstlisting}
function rate(
    bytes32 _cred,
    string memory _res,
    bool _vote
) public {
    if (usersToResources[_cred][_res] == true) {
        // The user has already rated this resource: handle vote changes.
        if (ratingsInformation[_cred][_res] == true && _vote == false) {
            // Switch from like to dislike.
            ratingsInformation[_cred][_res] = false;
            resourcesInformation[_res].likes -= 1;
            resourcesInformation[_res].dislikes += 1;
        }
        if (ratingsInformation[_cred][_res] == false && _vote == true) {
            // Switch from dislike to like.
            ratingsInformation[_cred][_res] = true;
            resourcesInformation[_res].likes += 1;
            resourcesInformation[_res].dislikes -= 1;
        }
    } else {
        if (ratedResources[_res] == false) {
            // First rating ever for this resource.
            ratedResources[_res] = true;
            resources.push(_res);
            usersToResources[_cred][_res] = true;
            if (_vote == true) {
                resourcesInformation[_res] = resourceRating(1, 0);
                ratingsInformation[_cred][_res] = true;
            } else {
                resourcesInformation[_res] = resourceRating(0, 1);
                ratingsInformation[_cred][_res] = false;
            }
        } else {
            // Known resource, but a new rater.
            usersToResources[_cred][_res] = true;
            // Record the rating value so later vote changes are handled.
            ratingsInformation[_cred][_res] = _vote;
            if (_vote == true) {
                resourcesInformation[_res].likes += 1;
            } else {
                resourcesInformation[_res].dislikes += 1;
            }
        }
    }
}
\end{lstlisting} \end{center}
In addition to the {\tt rate} function, the smart contract exposes several helper functions: \begin{enumerate} \item {\tt getResourceInformation} returns the number of likes and dislikes associated with a specific resource. The resource identifier is given as argument to this function. \item {\tt getNumberOfRatedResources} returns the total number of rated resources.
\item {\tt getRatedResource} returns the identifier of a resource based on its index in the list of rated resources. \end{enumerate} They are called at the application level for retrieving the rating history or other useful information.
\subsection{Authentication mechanism} The decentralized application includes an authentication mechanism. In order to rate media contents, the users need to use the credentials from a specific provider (e.g., YouTube, Spotify, GitHub). Even if the users are required to use their credentials, the ID provided by the authentication mechanism is not plainly stored in the smart contract. The credentials used in the rating process are hashed with {\tt md5} beforehand (a sketch of this step is given below). In this way, one cannot recover the credentials by simply inspecting the blockchain.
This authentication mechanism ensures that users have valid accounts. We are interested in this because we need to make sure that the rating process is trustworthy w.r.t. existing accounts. Our approach is efficient in preventing multiple rating situations. For instance, a smart contract which records the ratings based on the Ethereum addresses of the users is not resistant to multiple rating attempts. A user can generate multiple Ethereum addresses and rate a specific resource using those addresses. In this way, the ratings can be easily tricked. The advantage of our solution is precisely this additional authentication layer. Even so, users can still rate a specific resource multiple times, but they need several active accounts on the platform that shares the resource. This is more difficult to achieve than the generation of Ethereum addresses. We actually reuse the existing algorithms for detecting and removing fake accounts currently implemented by the media platforms. For the authentication mechanism we used Passport, a JavaScript library which provides a variety of authentication strategies. Currently, in our application we provide support for Google, GitHub and Spotify, but other strategies can be easily added.
\subsection{User Interface} The decentralized application offers an intuitive user experience. Besides the capacity of the smart contract to handle the rating process, some operations and conditions are included directly in the application. The user interface is divided into three main sections: authentication, a section for rating and data visualisation. \begin{center} \begin{figure}[t] \centering \includegraphics[scale=.38]{HomePage.png} \caption{Authentication page where the users can select the platform to sign in. In order to connect with one of the existing log-in options, the users should have a valid account on the chosen platform.} \label{fig:homepage} \end{figure} \end{center}
In the authentication section, the users can choose one of the available login strategies (e.g., Google, GitHub and Spotify). Once the authentication mechanism issues the IDs, the users can start to rate resources. The rating section is more complex, because it encapsulates the modules for input validation and error handling. The users need to provide a URL of the resource and to select a rating option (e.g., like or dislike).
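To make the hashing step concrete, the following is a minimal sketch of how a provider-issued ID could be digested before being passed as the {\tt \_cred} argument of the {\tt rate} function. It is written in Python for brevity (the application itself uses JavaScript), and the function name, the example ID and the zero-padding convention are our own assumptions rather than the exact code of the application. \begin{center} \lstset{% caption=A sketch of the credential hashing step., basicstyle=\ttfamily\scriptsize\bfseries, frame=tb } \begin{lstlisting}
import hashlib

def hashed_credential(client_id):
    # md5 produces a 16-byte digest; Solidity's bytes32 expects
    # 32 bytes, so we left-pad the digest with zeros.
    digest = hashlib.md5(client_id.encode("utf-8")).digest()
    return digest.rjust(32, b"\x00")

# The result is sent as _cred, together with the resource URL and
# the boolean rating value, so the raw ID never reaches the chain.
print(hashed_credential("google-oauth2|1234567890").hex())
\end{lstlisting} \end{center}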
Before we call the rating function from the smart contract, we perform two input validation steps: \begin{enumerate} \item We check the provenance of the resource; \item We check the rating history (i.e., if the user rated the resource with the same rating value, then the user cannot rate it again); \end{enumerate} If one of the situations below occurs, the error handling module throws an error: \begin{enumerate} \item If the user intends to rate an invalid resource, then the error handling module returns a notification message with the following content: \emph{Invalid resource.} \item If the user attempts to rate multiple times the same resource, then the error handling module returns a notification message with the following content: \emph{Multiple ratings for the same resource are not allowed.} \end{enumerate}
If all requirements are met, the rating process continues and a Metamask window will inform the user regarding the rating costs. When the user confirms the transaction, the rating function is called and the state of the smart contract will be modified by the miners in the Ethereum network. The URL-based resources are validated using API calls to the services that share the resources. The state of the smart contract is accessed through reading operations and the rating history can be used to determine if a user previously rated a specific resource. The data visualisation section accesses the state of the smart contract and the users can inspect their rating history directly in the application. The interactions with the smart contract are asynchronous and it is a matter of seconds until the updated information is displayed within the application. We designed the decentralized application using React and Material UI (i.e., JavaScript libraries which provide a variety of predefined UI components). \begin{figure}[t] \centering \includegraphics[scale=.39]{RatingPage.png} \caption{The rating page. The users can complete the form with a URL-based resource and rate it positively or negatively by clicking one of the two specialised buttons. The users can inspect their rating history (e.g., the users have access to a table which contains the rated resources and their associated number of likes/dislikes) by clicking the \emph{See ratings} link. } \label{fig:ratingpage} \end{figure}
\section{Evaluation/Experiments} \label{sec:tooleval} In this section we evaluate all the costs required by our decentralized application. There are several categories of costs: the cost to deploy the application, the costs to maintain the smart contract and the costs supported by users for rating the resources. We will present our approaches and their impact on the execution costs.
Our initial attempt was to encapsulate the entire logic within the smart contract. Since our rating system aims to avoid the multiple voting problem and to validate the resource provenance, the smart contract gained in complexity. This led to increased execution costs and made our initial attempt infeasible. Moreover, smart contracts cannot access information outside the network. Since we needed confirmations that the rated resources are shared by specific providers, we initially used an oracle (i.e., a service that verifies real-world data and submits the collected information to the smart contract). We used Provable~\cite{provable} and Chainlink~\cite{chainlink} to validate the arguments for the rating function. This approach led to different kinds of problems.
First of all, the deployment costs and the costs supported by users for the rating operations increased considerably, because the off-chain requests require extra fees. The off-chain requests also involved the usage of custom cryptocurrencies (e.g., LINK for the Chainlink oracle). In order to stimulate the interest for the proposed rating system, we tried to minimize the costs\footnote{The costs involved in the smart contract deployment and the costs for a rating operation with and without off-chain interaction were observed in a testing environment provided by Ganache.}. Based on the observation that handling this logic off-chain is far more suitable, we moved the authentication, input validation and other side logic outside the smart contract. The complexity of the smart contract is now considerably reduced. Except for the minimal information (IDs, resources and their associated ratings) required to keep the ratings in a decentralized manner, the rest of the logic is directly handled at the application level. Thus, the off-chain dependencies were removed from the smart contract.
In Table~\ref{tbl:costs}, we present a brief comparison between the versions of the smart contract that we have developed. The ones that use external providers (oracles) like Provable or Chainlink have an unacceptable execution cost for rating. The simple smart contract with no external providers has significantly lower execution costs per rating operation. This is the one we have currently implemented\footnote{The code is available on GitHub:\url{https://github.com/buterchiandreea/rating-dapp/blob/master/contracts/Rating.sol}}. \begin{center} \begin{table}[htp] \centering \begin{tabular}{ |p{3.5cm}||p{3cm}|p{3cm}|| } \hline \multicolumn{3}{ |c| }{\bf Costs analysis } \\ \hline {\bf Smart Contract version} & {\bf \hfil Deployment} & {\bf \hfil Rating operation} \\ \hline Simple smart contract (no external providers) & \hfil10 \$ & \hfil 0.2 \$ \\ \hline Provable & \hfil10 \$ & \hfil 2 \$ \\ \hline Chainlink & \hfil10 \$ & \hfil 2-8 \$ \\ \hline \end{tabular}\vspace*{2ex} \caption{A comparison between the various smart contract versions that we experimented with. The table shows the costs for deployment and rating operations. Note that the versions of the smart contract that use external providers (oracles) like Provable or Chainlink have an unacceptable execution cost for rating.} \label{tbl:costs} \end{table} \end{center}
The codebase is available at \url{https://github.com/buterchiandreea/rating-dapp}. The Github repository contains the source code and installation instructions. The smart contract code is written in {\tt Rating.sol} under the {\tt contracts} folder, while the UI is mainly in the {\tt client} folder.
\section{Conclusions} \label{sec:conclusions} \begin{figure}[t] \centering \includegraphics[scale=.19]{Authentication.png} \caption{Authentication and rating. The users cannot rate a resource until the authentication process is completed. The ratings are recorded based on the {\tt clientID}, which is returned after the authentication succeeds. In this way, we constrain the number of votes to a maximum of 1 per account.} \label{fig:authentication} \end{figure} The solution we propose in this paper is a decentralized application that stores the ratings in the blockchain using a simple smart contract. In order to rate an internet resource, users need accounts on the platform that shares that resource.
Obviously, their accounts or login information are not stored in the blockchain, but their hashed value is used to limit the number of votes to a maximum of 1 per account. Note that this approach keeps the benefits brought by the algorithms for eliminating or detecting fake accounts currently implemented in the existing media platforms. In this context, our rating application cannot accept more fake ratings than the existing approaches, as shown in Figure~\ref{fig:authentication}. The transparency and decentralisation are guaranteed by the use of the blockchain.
To summarise, our approach provides a decentralized solution to keep ratings for internet resources identified by a URL. The approach is based on the blockchain technology, which provides anonymity and immutability of data besides decentralisation. We use authentication with third parties in order to bound the number of votes a user can cast: the limit is the number of accounts controlled by that user.
\paragraph{Future work.} There are at least two new features that we intend to add to our decentralized application. First, we want to add more extensions, so that users can authenticate with other accounts. Currently we support only Google, Spotify, and GitHub, but our architecture allows any extension that is compatible with Passport. Another improvement is related to the various types of ratings. At this point, our application only allows users to rate resources using a two-choice (positive/negative) option. However, other rating systems allow numeric scores or star-based ratings. Adding reviews is also an interesting feature, but this requires a careful analysis of the costs involved in keeping the reviews on the blockchain. An interesting idea would be not to keep the reviews on the blockchain, but only a hashed value of the review, so that it can be easily checked if a review retrieved from a different resource is indeed the one whose hash is stored in the blockchain (see the sketch below).
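As a rough illustration of this hash-commitment idea (our own sketch, not part of the current implementation; we use SHA-256 here for simplicity, whereas an on-chain version would typically use {\tt keccak256}): \begin{center} \lstset{% caption=A sketch of the review hash-commitment idea., basicstyle=\ttfamily\scriptsize\bfseries, frame=tb } \begin{lstlisting}
import hashlib

def review_commitment(review_text):
    # Only this digest would be stored on the blockchain.
    return hashlib.sha256(review_text.encode("utf-8")).hexdigest()

def verify_review(review_text, on_chain_digest):
    # A review fetched from any off-chain source can be checked
    # against the immutable digest stored in the blockchain.
    return review_commitment(review_text) == on_chain_digest
\end{lstlisting} \end{center} \newpage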
\section{\label{sec:intro}Introduction} Photonic qubits are particularly useful for quantum information transfer over a long distance. There are several different ways to encode qubit information in traveling light fields. Probably the most well-known type uses the horizontal and vertical polarizations of a single photon (PSP), $\ket{H}$ and $\ket{V}$ \cite{Knill01, Dodd03}, which is often called ``dual-rail encoding.'' Another method is to utilize the vacuum and the single-photon (VSP) states, $\ket{0}$ and $\ket{1}$, called ``single-rail encoding,'' with its own merit \cite{Lee00, Lund02}. Not restricted to discrete qubit encodings, one can alternatively utilize continuous-variable-based qubit encodings such as the one with two coherent states of opposite phases, $\ket{\pm\alpha}$, where $\pm\alpha$ are coherent amplitudes. This approach enables one to perform nearly deterministic Bell-state measurements \cite{Jeong01, Jeong01b} and efficient gate operations for quantum computing \cite{Jeong02, Ralph03, Lund08}. Hybrid architectures of these qubit encodings have also been explored to combine their advantages \cite{Park12, Lee13, Kwon13, Sheng13, Jeong14, Jeong15, Andersen15, Kim16, Lim16}.
Recently, Lee \textit{et al.}~suggested multiphoton encoding with the horizontal and vertical polarizations of $N$ photons, $\{ \ket{H}^{\otimes N} = \bigotimes_{i = 1}^N \ket{H}_i, \ket{V}^{\otimes N} = \bigotimes_{i = 1}^N \ket{V}_i \}$, in order to overcome the limitation of Bell-state measurement using linear optics \cite{Lee15}. Using linear optics with single-photon qubits, only two among the four Bell states can be discriminated, and the success probability of Bell measurement is generally limited to $1/2$ \cite{Lutkenhaus99, Calsamiglia01}. This affects the success probabilities of gate operations for linear optics quantum computing \cite{Knill01} that depend on the gate teleportation scheme \cite{Gottesman99}, which is an obstacle to the implementation of scalable optical quantum computation. There are a number of proposals to circumvent this limitation using ancillary states or operations \cite{Grice11, Zaidi13, Ewert14, Kilmer19}, coherent-state qubits \cite{Jeong01}, hybrid qubits \cite{Lee13}, and multiphoton qubits \cite{Lee15}. Among them, the multiphoton encoding achieves a nearly deterministic Bell-state measurement with an average success probability $1- 2^{-N}$, where $N$ is the number of photons per qubit \cite{Lee15}. Recently, it was shown that the multiphoton encoding is particularly advantageous for quantum communication \cite{Lee19}.
A multiphoton qubit is generally in the form of the Greenberger-Horne-Zeilinger~(GHZ) state, i.e.,~$\ket{\psi} = a \ket{H}^{\otimes N} + b \ket{V}^{\otimes N}$. The GHZ-type state is fragile under photon loss \cite{Simon02, Dur02}, and this makes it hard to transmit quantum information over a long distance via the multiphoton qubit. One solution to this problem is to use the parity encoding with quantum error correction that corrects photon loss errors \cite{Munro12, Muralidharan14, Ewert17, Lee19}. However, such a qubit encoding has a complex structure, making it generally hard to generate the desired logical qubit and Bell states (the scheme and its success rate are discussed in Ref.~\cite{Ewert17}). In this paper, we suggest and investigate a teleportation scheme via hybrid entanglement between a multiphoton qubit and another type of optical qubit serving as a loss-tolerant carrier.
Our strategy is to send the loss-tolerant carrier qubit through the noisy environment while keeping the multiphoton qubit as intact as possible. A similar type of approach was used for a loss-tolerant quantum relay for a coherent-state qubit via an asymmetric entangled coherent state~\cite{Neergaard-Nielsen13}. We consider three types of carrier qubits: a coherent-state qubit, a PSP qubit, and a VSP qubit. We investigate quantum fidelities for the output states and success probabilities of quantum teleportation under photon loss. The success probability of the Bell measurement is affected only by photon loss on the multiphoton qubit, while the fidelity is determined by the loss on the carrier qubit. It shall be shown that any choice among the three candidates can improve the fidelity. We mainly consider the photon number for a multiphoton qubit as $N=4$, which was identified as the optimal number for fault-tolerant quantum computing using the multiphoton qubits, the seven-qubit Steane code and the telecorrection protocol \cite{Lee15, Lee15_2}. Remarkably, the VSP qubit in hybrid entanglement serves as a highly efficient carrier, showing about 10 times better tolerance to photon loss than the direct transmission of the multiphoton qubit when the fidelity is larger than 0.9. The coherent-state qubit encoding can be even better than the VSP qubit as the carrier when its amplitude is as small as $\alpha<0.78$. Our study may be useful for designing and building up loss-tolerant quantum communication networks.
\section{\label{sec:loss_env}Photon-loss model} We describe the environment with the photon-loss model by the master equation under the Born-Markov approximation at zero temperature \cite{Phoenix90}: \begin{align}\label{eq:masterequation} \frac{\partial \rho}{\partial \tau} = \gamma \sum_{i=1}^{N} \left( \hat{a}_i \rho \hat{a}_i^\dagger -\frac{1}{2}\hat{a}_i^\dagger \hat{a}_i \rho -\frac{1}{2}\rho \hat{a}_i^\dagger \hat{a}_i \right) \end{align} where $\hat{a}_i$ ($\hat{a}_i^\dagger$) represents the annihilation (creation) operator of mode $i$ and $\gamma$ is the decay constant determined by the coupling strength of the system and the environment. This evolution of a density operator is equivalently described by the beam-splitter model, where each input mode is independently mixed with a vacuum state at a beam splitter with transmittance $t=e^{-\gamma \tau/2}$ and reflectance $r = \sqrt{1-t^2}$ \cite{Leonhardt93}: \begin{align} \label{eq:beamsplitter} \begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix} \rightarrow \begin{pmatrix} \hat{a}' \\ \hat{b}' \end{pmatrix} = \begin{pmatrix} t & - r \\ r & t \end{pmatrix} \begin{pmatrix} \hat{a} \\ \hat{b} \end{pmatrix}, \end{align} where $\hat{a}$ ($\hat{b}$) is the annihilation operator on the system (ancillary) mode. The output state is then obtained by tracing out the ancillary modes. Considering the evolution of the single-photon state $\dyad{1} \rightarrow t^2\dyad{1} + r^2 \dyad{0}$, we refer to the square of the reflectance, $r^2$, as the photon-loss rate $\eta$. \begin{figure} \centering \includegraphics[width=\linewidth]{overall_scheme.pdf} \caption{Schematic of quantum information transmission of a multiphoton qubit. (a) A multiphoton qubit $\ket{\psi_\mathrm{in}}$ is directly transmitted. (b) The qubit encoding is changed to the carrier qubit by teleportation with a hybrid entangled state. The classical information from the Bell-state measurement (BSM) is transmitted via a classical channel.
} \label{fig:overall_shceme} \end{figure}
\section{\label{sec:dir_transmission}Direct transmission} Suppose that we directly transmit a multiphoton qubit of $N$ photons $\ket{\psi_\mathrm{in}}=a \ket{H}^{\otimes N}+b\ket{V}^{\otimes N}$ through a lossy environment. The output qubit of the transmission is obtained using Eq.~(\ref{eq:beamsplitter}) as \begin{align*} \rho_\mathrm{out}(t) =& \abs{a}^2 \qty[t^2 \dyad{H} + (1 - t^2) \dyad{0}]^{\otimes N} \nonumber\\ &+ \abs{b}^2 \qty[t^2 \dyad{V} + (1 - t^2) \dyad{0}]^{\otimes N} \nonumber\\ &+ t^{2N} [ a b^*(\dyad{H}{V})^{\otimes N} + \mathrm{H.c.}] \nonumber\\ =& t^{2N} \dyad{\psi_\mathrm{in}} + (1-t^{2N})\rho^\mathrm{loss}, \end{align*} where \begin{align}\label{eq:dir_loss} \rho^\mathrm{loss} = &\sum_{k=1}^{N} (t^{2})^{N-k}(1-t^2)^{k} \sum_{\mathcal{P}\in \mathrm{Perm}(N,k)}\nonumber\\ &\Big\{\abs{a}^2\mathcal{P}[(\dyad{H})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}]\nonumber\\ &+ \abs{b}^2 \mathcal{P}[(\dyad{V})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}]\Big\} \end{align} is the loss term with one or more photons lost. We denote by $\mathrm{Perm}(N,k)$ the set of permutations of tensor products, with $\genfrac{(}{)}{0pt}{}{N}{k}$ elements, representing the cases in which the photons in $k$ of the total $N$ modes are lost while the photons in the remaining $N-k$ modes stay in the polarization state. It is straightforward to see that $\rho^\mathrm{loss}$ is orthogonal to $\ket{\psi_\mathrm{in}}$. The quality of the output state is measured by the fidelity $F$ between the input and output states, defined as $F(t) = \bra{\psi_\mathrm{in}}\rho_\mathrm{out}(t)\ket{\psi_\mathrm{in}}$. The fidelity for the direct transmission is then obtained as \begin{align*} F^\mathrm{dir} =t^{2N}=(1-\eta)^N. \end{align*} This shows that the multiphoton qubit becomes more fragile as the photon number $N$ per qubit grows. Although the success probability of the Bell-state measurement using multiphoton qubits approaches unity as $N$ gets larger \cite{Lee15}, this fragility may be a weak point of the multiphoton encoding when it is applied to quantum information transfer.
\section{\label{sec:hybird}Teleportation with hybrid entanglement} In our scheme, a hybrid entangled state between a multiphoton qubit and a carrier qubit is used as the quantum channel, where the carrier qubit is loss-tolerant compared to the multiphoton qubit. In what follows, we examine a coherent-state qubit, a PSP qubit and a VSP qubit as candidates for the carrier qubit.
\subsection{\label{subsec:loss}Loss on hybrid entangled states} For the teleportation between two different types of qubits, the sender and the receiver need to share a hybrid entangled state between a multiphoton qubit and a carrier qubit. The entangled state for the quantum channel is expressed as $\ket{\psi_\mathrm{hyb}} = \frac{1}{\sqrt{2}}(\ket{H}^{\otimes N}\ket{C_0} + \ket{V}^{\otimes N}\ket{C_1})$, where $\ket{C_0}$ and $\ket{C_1}$ are the basis states for the carrier qubit.
We consider the three types of hybrid entangled states \begin{align} \label{eq:hyb_entanglement} \ket{\psi_{\mathrm{mc}}} &= \frac{1}{\sqrt{2}}\left(\ket{H}^{\otimes N}\ket{\alpha}+\ket{V}^{\otimes N}\ket{-\alpha}\right), \nonumber\\ \ket{\psi_\mathrm{mp}} &= \frac{1}{\sqrt{2}}\left(\ket{H}^{\otimes N}\ket{H}+\ket{V}^{\otimes N}\ket{V}\right), \nonumber\\ \ket{\psi_\mathrm{ms}} &= \frac{1}{\sqrt{2}}\left(\ket{H}^{\otimes N}\ket{0} + \ket{V}^{\otimes N}\ket{1} \right), \end{align} where the subscripts m, c, p and s denote the multiphoton qubit, the coherent-state qubit, the PSP qubit, and the VSP qubit, respectively. We assume an asymmetric environment in which the transmittance (reflectance) of every mode of the multiphoton qubit is $t_M$ ($r_M$) and that of the carrier qubit is $t_C$ ($r_C$). Using Eq.~(\ref{eq:beamsplitter}), the shared hybrid entangled states are obtained as \begin{align}\label{eq:mc_hyb} &\rho_{\mathrm{mc}}(t_M, t_C) = \frac{t_M^{2N}}{2}\Big\{ (\dyad{H})^{\otimes N}\otimes \dyad{t_C \alpha } \nonumber\\ &~~~~~~~~~~~~~~~~~~+ (\dyad{V})^{\otimes N}\otimes \dyad{- t_C \alpha} \nonumber\\ &~~~~~~~~~~~+ e^{-2 \alpha^2 r_C^2} \big[(\dyad{H}{V})^{\otimes N} \otimes \dyad{t_C \alpha}{-t_C \alpha} +\mathrm{H.c.} \big] \Big\} \nonumber\\ &~~~~~~~~~~~~~~~~~~+ (1-t_M^{2N})\rho^\mathrm{loss}_{\mathrm{mc}}, \end{align} \begin{align}\label{eq:mp_hyb} &\rho_\mathrm{mp}(t_M, t_C) = t_M^{2N} \Big\{t_C^{2} \dyad{\psi_{\mathrm{mp}}} +r_C^2 \big[(\dyad{H})^{\otimes N}\nonumber\\ &~~~~~~~~~~~~+ (\dyad{V})^{\otimes N} \big] \otimes \dyad{0} \Big\} +(1-t_M^{2N})\rho^\mathrm{loss}_{\mathrm{mp}}, \end{align} and \begin{align}\label{eq:ms_hyb} &\rho_\mathrm{ms}(t_M, t_C) = \frac{t_M^{2N}}{2} \Big\{ (\dyad{H})^{\otimes N} \otimes \dyad{0}\nonumber\\ &~~~~~~~~~~~~~~~~~~+(\dyad{V})^{\otimes N} \otimes (t_C^2 \dyad{1} + r_C^2 \dyad{0}) \nonumber\\ &~~~~~~~~~~~~~~~~~~+ t_C \big[ (\dyad{H}{V})^{\otimes N} \otimes \dyad{0}{1} + \mathrm{H.c.} \big] \Big\}\nonumber\\ &~~~~~~~~~~~~~~~~~~+ (1-t_M^{2N})\rho^\mathrm{loss}_\mathrm{ms}, \end{align} where the loss terms $\rho^\mathrm{loss}$ represent the events where one or more photons are lost from the multiphoton qubit. Explicit expressions of the loss terms are \begin{align*} \rho^\mathrm{loss}_{\mathrm{mc}}& = \frac{1}{2}\sum_{k=1}^{N} (t_M^{2})^{N-k}(1-t_M^2)^{k} \sum_{\mathcal{P}\in \mathrm{Perm}(N,k)}\\ &\Big\{\mathcal{P}\big[(\dyad{H})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}\big]\otimes \dyad{t_C \alpha}\\ &+ \mathcal{P}\big[(\dyad{V})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}\big]\otimes \dyad{-t_C \alpha}\Big\}, \end{align*} \begin{align*} &\rho^\mathrm{loss}_{\mathrm{mp}} = \frac{1}{2}\sum_{k=1}^{N} (t_M^{2})^{N-k}(1-t_M^2)^{k} \sum_{\mathcal{P}\in \mathrm{Perm}(N,k)}\\ &~~\Big\{\mathcal{P}\big[(\dyad{H})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}\big]\otimes (t_C^2\dyad{H} + r_C^2\dyad{0})\\ &+ \mathcal{P}\big[(\dyad{V})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}\big]\otimes (t_C^2\dyad{V} + r_C^2\dyad{0})\Big\}, \end{align*} and \begin{align*} &\rho^\mathrm{loss}_{\mathrm{ms}} = \frac{1}{2}\sum_{k=1}^{N} (t_M^{2})^{N-k}(1-t_M^2)^{k} \sum_{\mathcal{P}\in \mathrm{Perm}(N,k)}\\ &~~\Big\{\mathcal{P}\big[(\dyad{H})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}\big]\otimes (t_C^2\dyad{1} + r_C^2\dyad{0})\\ &~~~~~~~~+ \mathcal{P}\big[(\dyad{V})^{\otimes N-k}\otimes (\dyad{0})^{\otimes k}\big]\otimes \dyad{0}\Big\}. \end{align*} None of these terms contains entanglement.
This is attributed to the fact that when a photon from the multiphoton qubit is lost, the resulting multiphoton qubit effectively becomes completely dephased.
\subsection{\label{subsec:entanglement}Amount of entanglement in hybrid entangled states} In this subsection, we investigate the amount of entanglement contained in the hybrid entangled states. Entanglement in any bipartite mixed state can be measured by the negativity $\mathcal{N}(\rho)$ \cite{Vidal02}, which is defined as \begin{align} \mathcal{N}(\rho)\equiv\frac{\left\Vert\rho^{T_A}\right\Vert-1}{2} = \sum_{\lambda_i <0}|\lambda_i| \label{eq:Neg} \end{align} where $\rho^{T_A}$ is the partial transpose of $\rho$ with respect to subsystem $A$, $\left\Vert\cdot\right\Vert$ is the trace norm, and $\{\lambda_i \}$ is the set of eigenvalues of $\rho^{T_A}$. The negativity is an entanglement measure, i.e., it does not increase under local operations and classical communication. Using Eq.~(\ref{eq:Neg}), analytical expressions of the negativity of the hybrid entangled states can be obtained from Eqs.~(\ref{eq:mc_hyb}), (\ref{eq:mp_hyb}) and (\ref{eq:ms_hyb}). Although $|t_C \alpha\rangle$ and $|-t_C \alpha\rangle$ in Eq.~(\ref{eq:mc_hyb}) are not orthogonal, they are two linearly independent state vectors that can be treated in a two-dimensional Hilbert space, as done in Ref.~\cite{Jeong01}. Further, since the loss terms, $\rho^\mathrm{loss}$, are orthogonal to the remaining terms and contain no entanglement, we can consider only the remaining terms in a $2 \otimes 2$ dimensional Hilbert space. The degrees of negativity are then obtained as \begin{align*} &\mathcal{N}(\rho_\mathrm{mc}) = \frac{t^{2N}_M} {4\sqrt{1-e^{-4t_C^2\alpha^2}}} \nonumber \\ &\qquad \times \left[ \sqrt{ 1 - 2\left( 2e^{-4t_C^2\alpha^2} - 1 \right) e^{-2r_C^2\alpha^2} + e^{-4r_C^2\alpha^2} } \right. \nonumber \\ &\qquad\qquad \left.+ e^{-2r_C^2\alpha^2} - 1 \right], \\ &\mathcal{N}(\rho_\mathrm{mp}) = \mathcal{N}(\rho_\mathrm{ms}) = \frac{1}{2} t_M^{2N} t_C^2. \end{align*} Here, the negativities of $\rho_\mathrm{mp}$ and $\rho_\mathrm{ms}$ are the same, because in both cases entanglement disappears when at least one photon is definitely lost. Figure~\ref{fig:negativity} shows the dependence of the negativity on the photon loss rates of both sides, $\eta_M = 1-t_M^2$ and $\eta_C = 1-t_C^2$. The dependence is generally sharper on the loss rate $\eta_M$ of the multiphoton qubit than on that of the carrier qubit, $\eta_C$. This implies the desirable property that entanglement in the hybrid entangled state is more robust to the photon loss on the carrier qubit than to that on the multiphoton qubit. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{negativity.pdf} \caption{ Degrees of entanglement (negativity) against the photon-loss rate for the multiphoton qubit $\eta_M = 1-t_M^2$ and for the carrier qubit $\eta_C=1-t_C^2$ of hybrid entanglement between (a) the multiphoton qubit and the coherent-state qubit $\rho_{\mathrm{mc}}$, (b) the multiphoton qubit and the PSP qubit $\rho_{\mathrm{mp}}$, and the multiphoton qubit and the VSP qubit $\rho_{\mathrm{ms}}$. The number of photons $N$ for the multiphoton qubit is set to be $N=4$. The amplitude of the coherent-state qubit is chosen to be $\alpha = 1.2$.
} \label{fig:negativity} \end{figure}
\subsection{\label{subsec:fidelity}Teleportation fidelities} We now consider quantum teleportation with the hybrid entangled states $\rho_\mathrm{mc}$, $\rho_\mathrm{mp}$, and $\rho_\mathrm{ms}$ as the quantum channel. We employ the Bell-state measurement scheme for the multiphoton qubits proposed in Ref.~\cite{Lee15}. In the multiphoton qubit encoding, the Bell states are defined as \begin{align*} \ket{B_{1,2}^N} & = \frac{1}{\sqrt2}\left(\ket{H}^{\otimes N}\ket{H}^{\otimes N}\pm\ket{V}^{\otimes N}\ket{V}^{\otimes N}\right), \nonumber\\ \ket{B_{3,4}^N} & = \frac{1}{\sqrt2}\left(\ket{H}^{\otimes N}\ket{V}^{\otimes N}\pm\ket{V}^{\otimes N}\ket{H}^{\otimes N}\right), \end{align*} where $\pm$ is chosen in the same order as the two number labels of $\ket{B_i^N}$ in each line. Using only linear optics and on-off photodetectors, $\ket{B_2^N}$ and $\ket{B_4^N}$ are identified unambiguously, while $\ket{B_1^N}$ and $\ket{B_3^N}$ are identified with probability $1-1/2^{N-1}$ \cite{Lee15}. When one or more photons are lost from the multiphoton qubit in the hybrid entangled states, there is a chance that $\ket{B_i^K}$ with $K<N$ is detected. However, even if we accept these events as successes, the fidelity is not improved. We pointed out earlier that $\rho^\mathrm{loss}$ does not contain entanglement due to the dephasing induced by photon loss. The teleportation fidelity between the input and output qubits cannot then exceed the classical limit, which we will discuss further at the end of this section. We thus take only the detection of $N$-photon Bell states as the successful events. Similarly to the standard teleportation scheme, a sender jointly measures the input state $\ket{\psi_\mathrm{in}}=a \ket{H}^{\otimes N} + b \ket{V}^{\otimes N}$ and the multiphoton-qubit part of the hybrid entangled states. After the Bell-state measurement with outcome $i$, the input state $\ket{\psi_\mathrm{in}}$ and the hybrid entangled state under photon loss, $\rho_\mathrm{hyb}(t_M, t_C)$, are projected to \begin{align}\label{eq:postmeasurment} \rho_{\mathrm{out},i}(t_M, t_C) = \frac{\bra{B_i^N} (\dyad{\psi_\mathrm{in}}\otimes \rho_\mathrm{hyb}(t_M, t_C) )\ket{B_i^N}}{\tr [\dyad{B_i^N} (\psi_\mathrm{in} \otimes \rho_\mathrm{hyb}(t_M, t_C))]}. \end{align} With the heralded measurement outcome $i$, the receiver may recover the state $\rho_\mathrm{out} = \rho_\mathrm{out, 1}$ by a proper local unitary based on the outcome $i$. Before proceeding further, we point out that the output state does not depend on the loss $\eta_M$ on the multiphoton-qubit part. The hybrid entangled state can be represented as $\rho_\mathrm{hyb}(t_M, t_C) = t_M^{2N} \sigma_\mathrm{hyb}(t_C)+ \rho^\mathrm{loss}$, where $\sigma_\mathrm{hyb}(t_C)$ corresponds to the state when no photon is lost from the multiphoton qubit. The factor $t_M^{2N}$ indicates that this event happens with a probability of $t_M^{2N} = (1-\eta_M)^N$. Since the loss term $\rho^\mathrm{loss}$ is orthogonal to the qubit basis $\{\ket{H}^{\otimes N}, \ket{V}^{\otimes N}\}$, only $\sigma_\mathrm{hyb}(t_C)$ remains after the projection on $\ket{B_i^N}$. The factor $t_M^{2N}$ in both the numerator and the denominator of Eq.~(\ref{eq:postmeasurment}) then cancels out. Thus, $\rho_\mathrm{out}(t_M, t_C)$ is independent of $t_M$, so that it can be represented as $\rho_\mathrm{out}(t_C)$. We set the target state of the teleportation to be $\ket{\psi_\mathrm{t}} = a\ket{C_0}+b\ket{C_1}$.
The quantum fidelity between the output state $\rho_\mathrm{out}$ and the target state $\ket{\psi_\mathrm{t}}$ is defined as
\begin{align*}
F(t_C)= \expval{\rho_\mathrm{out}(t_C)}{\psi_\mathrm{t}}.
\end{align*}
Now, we examine the candidates for the carrier qubit. First, we consider quantum teleportation from a multiphoton qubit to a coherent-state qubit. When $\ket{B_1^N}$ is detected, we can express the output qubit after the Bell-state measurement by
\begin{align*}
\rho_{\mathrm{out}, 1}^\mathrm{m \rightarrow c} =& M_+ \big[\abs{a}^2 \dyad{t_C \alpha}+\abs{b}^2\dyad{-t_C \alpha} \nonumber\\
&+e^{-2\alpha^2 r_C^2}(ab^* \dyad{t_C\alpha}{-t_C\alpha}+\mathrm{H.c.})\big],
\end{align*}
where $M_+=\big[ 1+e^{-2\alpha^2} (a b^*+a^* b) \big]^{-1}$. When $\ket{B_3^N}$ is detected, the output qubit undergoes a bit flip as $\rho_\mathrm{out, 3}^{\mathrm m \rightarrow c} = X_c \rho_\mathrm{out, 1}^{\mathrm m \rightarrow c} X_c^{\dagger}$ with $X_c : \ket{\pm t_C \alpha} \rightarrow \ket{\mp t_C \alpha}$. This effect can be corrected by applying a $\pi$-phase shifter. However, when $\ket{B_2^N}$ is detected, the output qubit becomes
\begin{align*}
\rho_{\mathrm{out}, 2}^\mathrm{m \rightarrow c}=& M_- \big[\abs{a}^2 \dyad{t_C \alpha}+\abs{b}^2\dyad{-t_C \alpha} \nonumber\\
&-e^{-2\alpha^2 r_C^2}(ab^* \dyad{t_C\alpha}{-t_C\alpha}+\mathrm{H.c.})\big],
\end{align*}
which cannot be corrected to $\rho_\mathrm{out, 1}^\mathrm{m \rightarrow c}$ by applying a unitary operation because of the nonorthogonality of the coherent-state qubit basis. In other words, the required operation $Z_c : \ket{\pm t_C \alpha} \rightarrow \pm \ket{\pm t_C \alpha}$ cannot be performed in a fully deterministic way. There are, however, approximate methods to perform the required $Z_c$ operation using the displacement operation \cite{Jeong01, Jeong02} or the gate teleportation protocol \cite{Ralph03}. We also note that the transformation $\rho_{\mathrm{out}, 4}^\mathrm{m \rightarrow c} \rightarrow \rho_{\mathrm{out}, 2}^\mathrm{m \rightarrow c}$ can be carried out by the $X_c$ gate. Therefore, the output qubit is one of the two mutually non-convertible states, $\rho_{\mathrm{out}, 1}^\mathrm{m \rightarrow c}$ or $\rho_{\mathrm{out}, 2}^\mathrm{m \rightarrow c}$. We denote these two states as $\rho_\mathrm{out, \pm}^\mathrm{m \rightarrow c}$. Nevertheless, the measurement outcome $i$ heralds which of the two output states is obtained, so the output qubit still carries the quantum information of the input qubit. Given the transmittance $t_C$, we take the dynamical qubit basis $\qty{\ket{t_C \alpha}, \ket{-t_C \alpha}}$ as the output qubit basis. In analogy with the input state, we set the two target states as
\begin{align*}
\ket{ \psi_\mathrm{t, \pm}^{m \rightarrow c} } = N_\pm \qty( a\ket{t_C \alpha} \pm b\ket{ - t_C \alpha} ),
\end{align*}
where $N_\pm =\qty{1 \pm (a b^* + a^* b ) \exp (-2t_C^2 \alpha^2)}^{-1/2}$ are the normalization constants.
Then, we obtain the fidelity between $\rho_{\mathrm{out}, \pm}^\mathrm{m \rightarrow c}$ and $\ket{\psi_\mathrm{t, \pm}^\mathrm{m \rightarrow c}}$, respectively:
\begin{align*}
F^{m \rightarrow c}_\pm &(t_C ; a, b) = \ev{\rho_{\mathrm{out}, \pm}^{m \rightarrow c}}{\psi_\mathrm{t, \pm}^\mathrm{m \rightarrow c}} \nonumber\\
=& M_\pm N_\pm^2 \big[ \abs{a}^2 \abs{a\pm bS}^2 + \abs{b}^2 \abs{aS \pm b}^2 \nonumber\\
&\pm 2 e^{-2\alpha^2 r_C^2} \mathrm{Re}\left[ ab^* (a^* \pm b^* S) (a S \pm b)\right] \big],
\end{align*}
where $S=\bra{t_C\alpha}\ket{-t_C\alpha}=e^{-2t_C^2 \alpha^2}$ is the overlap between the output coherent-state qubit basis states. We now compute the average fidelity over all input states. We use the parametrization $a = \cos(\theta/2)\exp(i \phi /2)$ and $b = \sin(\theta/2) \exp(-i \phi /2)$ with uniform sampling on the Bloch sphere. Note that $F^{m \rightarrow c}_+(t_C ; a, b) = F^{m \rightarrow c}_-(t_C ; a, -b)$, so the average fidelities of both cases are equal. Finally, we obtain the following integral:
\begin{align} \label{eq:CS_ave_fid}
&F_\mathrm{ave}^\mathrm{m \rightarrow c}(t_C) = \left< F^{m \rightarrow c}_\pm(t_C ; a, b)\right>_{\theta, \phi} \nonumber\\
&~~~~~~~~~~~~= \frac{1}{4 \pi } \int_0^\pi d \theta \sin \theta \int_0^{2\pi} d\phi F^{m \rightarrow c}_\pm(t_C; \theta, \phi).
\end{align}
\begin{figure} \centering \includegraphics[width=1\linewidth]{fidelity.pdf} \caption{Average fidelities of direct transmission with $N=4$ (black solid) and hybrid architectures with different carrier qubits: coherent-state qubits (denoted by c) for $\alpha = 1.2$ (yellow dot-dashed) and $\alpha = 1.6$ (red dot-dot-dashed), a PSP qubit (p, green dashed), and a VSP qubit (s, blue dotted) against the photon-loss rate for the carrier-qubit part $\eta_C = 1-t_C^2$. The gray horizontal dotted line is the classical limit $F_\mathrm{cl} = 2/3$.} \label{fig:fid_comparison} \end{figure}
The analytic expression for this integral is given in Ref.~\cite{Park12} but is too lengthy to present here. We show the average fidelity for varying amplitude $\alpha$ of the coherent-state qubit in Fig.~\ref{fig:fid_comparison}. The plot shows that, as the mean photon number $\alpha^2$ becomes smaller, the average fidelity approaches unity. However, a small value of $\alpha$ makes the overlap between $\ket{\pm\alpha}$ large, so that its usefulness for quantum information processing (for example, the success probability of the $Z_c$ gate) becomes low. In the case of quantum teleportation from a multiphoton qubit to a PSP qubit, we use the hybrid entangled state in Eq.~(\ref{eq:mp_hyb}). Since all single-qubit operations can be implemented in linear optics \cite{Knill01, Dodd03}, we set the single target state $\ket{\psi_\mathrm{t}^\mathrm{m \rightarrow p}} = a \ket{H} + b\ket{V} $. When the Bell state $\ket{B_1^N}$ is detected, the output state is
\begin{align*}
\rho_{\mathrm{out}, 1}^\mathrm{m \rightarrow p} &= t_C^{2}\big(\abs{a}^2 \dyad{H} + \abs{b}^2 \dyad{V} \nonumber\\
&\quad+ ab^*\dyad{H}{V} + a^* b\dyad{V}{H}\big) + r_C^2\dyad{0} \nonumber\\
&= t_C^2 \dyad{\psi_\mathrm{t}^\mathrm{m \rightarrow p}} + r_C^2 \dyad{0}.
\end{align*}
When the other Bell states are detected, after receiving the measurement outcome, the receiver can recover the target state by a proper single-qubit unitary operation. The fidelity is then readily obtained as
\begin{align*}
F^\mathrm{m \rightarrow p}(t_C) = t_C^2.
\end{align*}
The last case is teleportation from a multiphoton qubit to a VSP qubit with the entangled state in Eq.~(\ref{eq:ms_hyb}). In the case of the VSP qubit, the situation is similar to that of the coherent-state qubit. While the $Z$ operation is deterministic in linear optics, the $X$ operation, $X:\ket{0} \leftrightarrow \ket{1}$, is probabilistic \cite{Lund02}. Therefore, we distinguish the output qubit of the $\ket{B_1^N}$ detection, denoted $\rho_\mathrm{out, +}^\mathrm{m\rightarrow s}$, from that of the $\ket{B_2^N}$ detection, denoted $\rho_\mathrm{out, -}^\mathrm{m\rightarrow s}$. The output qubit when $\ket{B_1^N}$ is detected is obtained similarly as
\begin{align*}
\rho_\mathrm{out, +}^\mathrm{m \rightarrow s} &= (\abs{a}^2 + \abs{b}^2r_C^2) \dyad{0}+ \abs{b}^2 t_C^2 \dyad{1} \nonumber\\
&\quad+ (a b^* t_C \dyad{0}{1} + \mathrm{H. c.}).
\end{align*}
We then obtain the input-dependent fidelity as
\begin{align*}
F^\mathrm{m \rightarrow s} (t_C) = \abs{a}^4 + \abs{a}^2 \abs{b}^2(1+ t_C) + \abs{b}^4 t_C^2.
\end{align*}
In this case, the average fidelity has a simple analytic expression:
\begin{align*}
F_\mathrm{ave}^\mathrm{m \rightarrow s}(t_C) = \frac{1}{3} t_C^2 + \frac{1}{6} t_C + \frac{1}{2}.
\end{align*}
We also need to consider the classical fidelity $F_\mathrm{cl}$, defined as the maximum average fidelity obtained by a teleportation protocol without entanglement. It is well known that $F_\mathrm{cl} = 2/3$ for a qubit with an orthonormal basis \cite{Massar95}. If we use the coherent-state qubit of $\ket{\pm\alpha}$ as the carrier qubit, however, $F_\mathrm{cl}^\mathrm{m \rightarrow c}$ is
\begin{align*}
F_\mathrm{cl}^\mathrm{m \rightarrow c}(t_C) = \frac{S + 3S^2 - (S^4-1)}{4S^3} \sinh ^{-1} \qty[\frac{S}{\sqrt{1-S^2}}],
\end{align*}
where $S=\braket{-t_C\alpha}{t_C \alpha}=e^{-2t_C^2 \alpha^2}$ \cite{Park12}. In this case, the classical limit becomes larger than $2/3$ due to the nonorthogonality $S$. Of course, $F_\mathrm{cl}^\mathrm{m \rightarrow c}$ converges to $2/3$ as $S \rightarrow 0$; in Fig.~\ref{fig:fid_comparison}, it is approximately $2/3$ in the region $\alpha \geq 1.2$ and $\eta_C \leq 0.5$. In Fig.~\ref{fig:fid_comparison}, we present the average fidelities between the output qubit and the target state against the photon-loss rate $\eta_C$ for the different types of the carrier qubit. For the coherent-state qubit, we choose amplitudes of $\alpha = 1.2$ and $1.6$, which are approximately the minimum and optimal amplitudes, respectively, for fault-tolerant quantum computing using the 7-qubit Steane code \cite{Lund08}. Obviously, better fidelities than those of direct transmission can be obtained using the teleportation protocol. Among the carrier qubits, the VSP qubit is better than the PSP qubit. The reason for this can be understood as follows. When photon loss occurs, the PSP qubit leaves the original qubit space because of the vacuum contribution, whereas the VSP qubit remains in the original qubit space even under photon loss. The comparison between the coherent-state qubit and the other types of qubits depends on the amplitude $\alpha$. With small values of $\alpha$, the coherent-state qubit shows a higher average fidelity than the others. We numerically obtain that, when $\alpha<1.23$ ($\alpha<0.78$), the average fidelity of the corresponding coherent-state qubit is higher than that of the PSP qubit (the VSP qubit) for any rate of photon loss; a numerical sketch of this comparison is given below.
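A minimal Python sketch (assuming NumPy and SciPy; the function names are ours) that computes $F_\mathrm{ave}^\mathrm{m \rightarrow c}$ by numerical quadrature of Eq.~(\ref{eq:CS_ave_fid}) and compares it with the PSP and VSP expressions:
\begin{verbatim}
import numpy as np
from scipy import integrate

alpha = 1.2

def F_mc(tC, th, ph):
    # Fidelity F_+^{m->c}(t_C; a, b) from the expressions above.
    a = np.cos(th/2)*np.exp(1j*ph/2)
    b = np.sin(th/2)*np.exp(-1j*ph/2)
    rC2 = 1 - tC**2
    S = np.exp(-2*tC**2*alpha**2)          # overlap <t_C a|-t_C a>
    M = 1/(1 + 2*np.exp(-2*alpha**2)*np.real(a*b.conj()))
    N2 = 1/(1 + 2*np.real(a*b.conj())*S)   # N_+^2
    val = (abs(a)**2*abs(a + b*S)**2 + abs(b)**2*abs(a*S + b)**2
           + 2*np.exp(-2*alpha**2*rC2)
             * np.real(a*b.conj()*(a.conj() + b.conj()*S)*(a*S + b)))
    return M*N2*val

def F_ave_mc(tC):
    # Bloch-sphere average, Eq. (eq:CS_ave_fid).
    f = lambda ph, th: np.sin(th)*F_mc(tC, th, ph)
    val, _ = integrate.dblquad(f, 0, np.pi, 0, 2*np.pi)
    return val/(4*np.pi)

tC = np.sqrt(1 - 0.1)                 # 10% carrier photon loss
print(F_ave_mc(tC))                   # coherent-state carrier
print(tC**2)                          # PSP carrier
print(tC**2/3 + tC/6 + 0.5)           # VSP carrier (closed form)
\end{verbatim}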
However, one should note that the overlap between the two coherent states $\ket{\pm \alpha}$ is $\braket{\alpha}{-\alpha} = \mathrm{exp} (-2\alpha^2) \approx 0.0485$ ($0.296$) for $\alpha = 1.23$ ($0.78$), which could be a negative factor depending on the task to perform.
\begin{table}[b]
\begin{center}
\caption{Maximum photon-loss rates for the carrier qubit, $\eta_C$, required to reach the fidelity of 99.9\%, 99\%, and 90\% with the coherent-state (CS) qubit, the PSP qubit, and the VSP qubit. The direct transmission (DT) of the multiphoton qubit with the photon number $N=4$ is given for comparison under the same photon-loss rate.}
\label{tab:lim_loss}
\begin{tabular}{p{1.3cm} *{5}{>{\centering}p{1cm}} c} \hline \hline \multirow{2}{*}{$F$} & \multirow{2}{*}{DT} & \multicolumn{2}{c}{CS} & \multirow{2}{*}{PSP} & \multirow{2}{*}{VSP}& \\ \cline{3-4} & &1.2 & 1.6 & & & \\ \hline 99.9\% & 0.025 & 0.10 & 0.059 & 0.10 & 0.24 & \multirow{3}{*}{($\times 10^{-2}$)} \\ 99\% & 0.25 & 1.1 & 0.59 & 1.0 & 2.4 &\\ 90\% & 2.6 & 12 & 7.0 & 10 & 24 &\\ \hline \hline \end{tabular}
\end{center}
\end{table}
All-optical quantum computing schemes have tolerable limits of photon-loss rates for fault tolerance \cite{Dawson06, Lund08, Herrera10, Lee13, Li15, Wehner18}. In Table \ref{tab:lim_loss}, we summarize the maximum photon-loss rates for the carrier qubit, $\eta_C$, which can be tolerated while preserving the fidelity at 99.9\%, 99\%, and 90\% within our hybrid architectures. In this high-fidelity regime, the VSP qubit tolerates approximately 10 times larger photon loss than the direct transmission.
\subsection{\label{subsec:sucprob}Success probabilities}
\begin{figure} \includegraphics[width=\linewidth]{success_probability.pdf} \caption{Success probability $P_\mathrm{success}$ of the multiphoton Bell-state measurement against the photon-loss rate for the multiphoton qubit $\eta_M = 1-t_M^2$ for photon numbers of $N =$1, 2, 3, and 4.} \label{fig:suc_comparision} \end{figure}
The Bell-state measurement succeeds only when the input qubits are in the logical qubit basis and the identification between $\ket{B_1^N}$ and $\ket{B_3^N}$ is successful. Let $q_i$ denote the probability of the successful identification of $\ket{B_i^N}$ when $\ket{B_i^N}$ is given. This $q_i$ varies according to the Bell-state measurement scheme; for the multiphoton-qubit Bell-state measurement scheme of Ref.~\cite{Lee15}, which we follow, $q_i= 1-1/2^{N-1}$ for odd $i$ and $q_i = 1$ for even $i$. The success probability of the teleportation with the hybrid entangled state $\rho_\mathrm{hyb}$ is then given as
\begin{align}
P = \sum_i q_i \tr \Big[\ket{B_i^N}_\mathrm{SS'}\bra{B_i^N}\qty(\ket{\psi_\mathrm{in}}_\mathrm{S} \bra{\psi_\mathrm{in}}\otimes(\rho_\mathrm{hyb})_\mathrm{S'R})\Big],
\label{eq:prob-hyb}
\end{align}
where S and S$'$ represent the sender's modes and R the receiver's mode.
Note that the success probability $P$ does not depend on $t_C$ since
\begin{align*}
&\tr \Big[\ket{B_i^N}_\mathrm{SS'}\bra{B_i^N}\qty(\ket{\psi_\mathrm{in}}_\mathrm{S} \bra{\psi_\mathrm{in}}\otimes(\rho_\mathrm{hyb})_\mathrm{S'R})\Big] \\
&= \tr_{\rm SS'} \Big[\ket{B_i^N}_\mathrm{SS'}\bra{B_i^N}\qty(\ket{\psi_\mathrm{in}}_\mathrm{S} \bra{\psi_\mathrm{in}}\otimes (\tr_{\mathrm R}\rho_\mathrm{hyb})_\mathrm{S'})\Big]
\end{align*}
and $\tr_\mathrm{R}\rho_\mathrm{hyb}(t_M, t_C) = \tr_\mathrm{R}(\Phi_{t_M}\otimes I)(\dyad{\psi_\mathrm{hyb}})$ from the trace-preserving property of $\Phi_{t}$, where $\Phi_{t_M}$ is the quantum channel of photon loss with transmittance $t_M$. Now, we examine the success probability for each carrier qubit. For the case of teleportation to a coherent-state qubit with $\rho_\mathrm{mc}$ in Eq.~(\ref{eq:mc_hyb}), the success probability of the teleportation, $P^\mathrm{m \rightarrow c}$, is obtained using Eq.~(\ref{eq:prob-hyb}) as
\begin{align*}
&P^\mathrm{m \rightarrow c} (t_M, N; a, b) \nonumber\\
&~~~~~~~~~~~~= t_M^{2N}\qty[ \qty(1- \frac{1}{2^N}) - \frac{e^{-2\alpha^2}}{2^N}\qty(a b^* + a^* b)].
\end{align*}
The last term stems from the nonorthogonality of the coherent-state qubit. We also obtain the averaged success probability $P_\mathrm{ave}^\mathrm{m\rightarrow c}$ by averaging $P^{m \rightarrow c} (t_M, N; a, b)$ over all possible input states with the same parametrization as in Eq.~(\ref{eq:CS_ave_fid}):
\begin{align*}
P_\mathrm{ave}^\mathrm{m \rightarrow c} (t_M, N) = t_M^{2N}\qty(1- \frac{1}{2^N}).
\end{align*}
For the discrete-variable carrier qubits, we have $\tr_{\mathrm R} (\Phi_{t_M}\otimes I)\dyad{\psi_\mathrm{hyb}} = \Phi_{t_M}(I/2)$. Therefore, without dependence on the carrier qubit, we obtain
\begin{align}\label{eq:Suc_prob}
P(t_M, N) = t_M ^{2N} \left( 1- \frac{1}{2^N} \right).
\end{align}
In Fig.~\ref{fig:suc_comparision}, we plot the success probability $P(t_M, N)$ as a function of the photon-loss rate for the multiphoton qubit, $\eta_M$, for varying photon number $N$ of the multiphoton qubit. The success probability in Eq.~(\ref{eq:Suc_prob}) shows an interesting feature: while the success probability of the Bell-state measurement increases with $N$ for $t_M = 1$, for $t_M < 1$ a larger $N$ actually lowers the success probability. This supports the general belief that a ``macroscopic object'' is fragile under loss, if we regard a larger $N$ as making the qubit more ``macroscopic'' \cite{Frowis18}. It is straightforward to obtain the optimal number of photons per multiphoton qubit, $N_\mathrm{opt} = \lfloor \log_2 (1+1/\eta_M) \rfloor$, that maximizes the success probability $P(t_M, N)$.
\section{\label{sec:gen}Generation of multiphoton hybrid entangled states}
In this section, we discuss how to generate the required hybrid entangled states $\ket{\psi_\mathrm{mc}}$, $\ket{\psi_\mathrm{mp}}$ and $\ket{\psi_\mathrm{ms}}$ in Eq.~(\ref{eq:hyb_entanglement}). We may start with a GHZ state of PSP qubits: $ \ket{\mathrm{GHZ}(N)} =( \ket{H}^{\otimes N} + \ket{V}^{\otimes N} )/ \sqrt{2}$. It is then clear that $\ket{\psi_\mathrm{mp}} = (\ket{H}^{\otimes N}\ket{H}+\ket{V}^{\otimes N}\ket{V})/{\sqrt{2}}$ is simply a GHZ state with $N+1$ modes, $\ket{\mathrm{GHZ}(N+1)}$. In addition to a GHZ state of $N+1$ photons, $\ket{\mathrm{GHZ}(N+1)}$, we need methods to convert one of the polarization qubits in the GHZ state to the desired carrier qubit by a conversion gate $V = \dyad{C_0}{H} + \dyad{C_1}{V}$.
In this way, the desired hybrid entangled states may be obtained. There are a number of proposals for the generation of the GHZ state. A linear optical setup, called the (Type-I) fusion gate, is designed to fuse $\ket{\mathrm{GHZ}(N)}$ and $\ket{\mathrm{GHZ}(2)}$ to generate $\ket{\mathrm{GHZ}(N+1)}$ with a probability of $50\%$ \cite{Browne05}. In a similar method (Supplementary Material of Ref.~\cite{Varnava08}), six single photons are fused by the fusion gate followed by a Bell-state projection to generate $\ket{\mathrm{GHZ}(3)}$. Bell-state measurements on copies of $\ket{\mathrm{GHZ}(3)}$ also provide a probabilistic method to generate a GHZ state with an arbitrarily high photon number \cite{Varnava08, Li15}. Using the Bell-state measurements, this method can be made robust to photon loss \cite{Varnava08}. Alternatively, a method based on a nonlinear interaction, called coherent photon conversion, was proposed to implement a deterministic photon-doubling gate $\dyad{HH}{H} + \dyad{VV}{V}$ \cite{Langford11}. So far, multiphoton GHZ-type entanglement has been experimentally observed with postselection in most experiments (for example, \cite{Zhang16, Zhang15, Huang11, Wang16, Zhong18}), and such postselected states cannot be used as a teleportation channel. Nevertheless, a direct generation of a three-photon GHZ state was experimentally performed \cite{Hamel14}. There are several methods to convert one PSP qubit to the VSP qubit~\cite{Ralph05, Fiurasek17, Drahi19}. The conversion gate $V_\mathrm{p\rightarrow s}=\dyad{0}{H}+\dyad{1}{V}$ was experimentally demonstrated using the teleportation protocol and post-selection~\cite{Drahi19}. In Ref.~\cite{Kwon15}, the authors suggested a method for the conversion operation $V_\mathrm{p\rightarrow c} = \dyad{\alpha}{H}+\dyad{-\alpha}{V}$ using passive linear elements, single-photon detectors, and a superposition of coherent states with an amplitude slightly larger than $\alpha$. We note that a superposition of coherent states with amplitude $\alpha \approx 1.85$ in a traveling field was recently generated \cite{Sychev17}. Experimental attempts to realize the aforementioned proposals for generating multiphoton hybrid entangled states with high fidelities would be challenging due to detector inefficiency, photon loss, and other noise effects. It is, however, beyond the scope of this work to investigate and analyze those details under realistic conditions.
\section{Remarks}
It is important to identify an efficient qubit encoding for a given quantum information task such as quantum communication and computation. The multiphoton encoding enables one to perform a nearly deterministic Bell-state measurement, which is a remarkable advantage for quantum communication and computation. However, a multiphoton qubit is vulnerable to photon loss, and this is a formidable obstacle, particularly for long-distance quantum communication. In order to overcome this problem, we have suggested a teleportation scheme via hybrid entanglement between a multiphoton qubit and another type of optical qubit serving as a loss-tolerant carrier. In our scheme, only the loss-tolerant carrier qubit is sent through a lossy environment; the coherent-state qubit, the PSP qubit, and the VSP qubit were considered as candidates for the carrier qubit.
We have found that the average fidelities of the teleportation with the considered hybrid entangled states are better than those of the direct transmission. The VSP qubit in hybrid entanglement serves as the best carrier, showing about 10 times better tolerance to the photon-loss rate than the direct transmission of the multiphoton qubit for fidelities larger than 0.9. Our numerical analysis further shows that, for small values of $\alpha$, the coherent-state qubit yields a higher average fidelity than the others. When $\alpha<1.23$ ($\alpha<0.78$), the average fidelity of the corresponding coherent-state qubit is higher than that of the PSP qubit (the VSP qubit) for any rate of photon loss. These results would be useful when choosing the proper carrier qubit for the quantum task under consideration. We have also investigated the average success probability of the teleportation. It was shown that the success probability depends only on the loss in the multiphoton-qubit part. Although the Bell-state measurement scheme of the multiphoton qubit is nearly deterministic without loss, photon loss limits the maximum success probability. Our work may be useful for the optical realization of long-distance quantum information processing by exploring hybrid architectures of optical networks.
\section{Acknowledgement}
This work was supported by the National Research Foundation of Korea (NRF) through grants funded by the Ministry of Science and ICT (Grants No. NRF-2019M3E4A1080074 and NRF-2020R1A2C1008609). S.C. was supported by an NRF grant funded by the Korean Government (NRF-2016H1A2A1908381, Global Ph.D. Fellowship Program).
\section{Atomic structure of Twisted Bilayer Graphene}
The atomic structure of twisted bilayer graphene (TBG) consists of two super-imposed graphene layers rotated by an angle $\theta$ and separated by a distance of $d_0 = 0.335$ nm. The axis of rotation is chosen to intersect two vertically aligned carbon atoms when starting from a purely AA-stacked bilayer, i.e. two perfectly overlapping honeycomb lattices. Labelling the upper (lower) monolayer of TBG by $u$($l$), their two-dimensional lattice vectors are
\begin{align}
\begin{split}
\bvec{a}_1^l &= a_0(\text{cos}(\pi/6) , -\text{sin}(\pi/6) ) \\
\bvec{a}_2^l &= a_0(\text{cos}(\pi/6) , \ \ \text{sin}(\pi/6) ) \\
\bvec{a}_i^u &= R(\theta)\bvec{a}_i^l,
\end{split}
\label{lattice_vec}
\end{align}
where $R(\theta)$ represents a rotation by an angle $\theta$ and $a_0 = 0.246 \, \text{nm}$ is the lattice constant of graphene, which should not be confused with the carbon-carbon bond length of $a_{\text{cc}} = a_0/\sqrt{3}$. The primitive cell defined in this way is diatomic and contains two inequivalent sites with basis vectors
\begin{align}
\begin{split}
\bvec{b}_1^{u(l)} &= (0,0) \qquad \qquad \qquad \quad \text{(A site)}, \\
\bvec{b}_2^{u(l)} &= \frac{1}{3} \bvec{a}_1^{u(l)} + \frac{1}{3} \bvec{a}_2^{u(l)} \qquad \, \, \, \text{(B site)}.
\end{split}
\label{lattice_basis}
\end{align}
The choice of basis vectors and the type of rotation around an AA site restrict the symmetry of TBG to the point group $D_3$. The latter contains a threefold in-plane rotation $C_3 = C_{3z}$ around the z-axis and a twofold out-of-plane rotation $C_2 = C_{2y}$. \\ The lattice geometry of TBG, as defined so far, is not periodic in general \cite{PhysRevX.8.031087, moon2013optical, dos2012continuum} since the periods of the two graphene layers are incommensurate for arbitrary twist angles. A finite unit cell can only be constructed for some discrete angles satisfying the condition
\begin{align}
\text{cos}(\theta) = \frac{m^2+4mn+n^2}{2(m^2+mn+n^2)}
\label{theta_moire}
\end{align}
with $(m,n)$ being positive integers. In this case, the twisted bilayer graphene forms a moir\'{e} pattern, see Fig.~\ref{fig:atomic_structure} (a), containing $N(m,n) = 4(m^2+n^2+mn)$ atoms with superlattice vectors
\begin{align}
\begin{split}
\bvec{L}_1 &= m \bvec{a}_1^l + n \bvec{a}_2^l = n \bvec{a}_1^u + m \bvec{a}_2^u \\
\bvec{L}_2 &= R(\pi/3)\bvec{L}_1.
\end{split}
\label{moire_vec}
\end{align}
Magic-angle twisted bilayer graphene has a twist angle of $\theta = 1.05^{\circ}$ corresponding to the integers $(m,n) = (31,32)$. The structure defined in this way contains $N = 11908$ carbon atoms in the moir\'{e} unit cell. When discussing the geometric structure of TBG, atomic relaxation effects play an important role and may modify the low-energy physics of the system significantly. From experiments using transmission electron microscopy (TEM) \cite{PhysRevB.48.17427}, as well as from structural optimization studies using density functional theory \cite{uchida2014}, it is known that the interlayer distance between the two layers varies over the moir\'{e} unit cell. The interlayer spacing takes its maximum value $d_{AA} = 0.360 \, \text{nm}$ in the AA regions and its minimum value $d_{AB} = 0.335 \, \text{nm}$ in the AB regions.
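The commensuration relations above can be checked with a few lines of Python (a minimal sketch assuming NumPy):
\begin{verbatim}
import numpy as np

a0 = 0.246               # nm, graphene lattice constant
m, n = 31, 32            # magic-angle structure

# Commensuration condition, Eq. (theta_moire)
cos_th = (m**2 + 4*m*n + n**2) / (2.0*(m**2 + m*n + n**2))
theta = np.degrees(np.arccos(cos_th))

N_atoms = 4*(m**2 + n**2 + m*n)        # atoms per moire cell
L = a0*np.sqrt(m**2 + n**2 + m*n)      # |L_1| = |L_2|

print(theta, N_atoms, L)  # -> 1.05 deg, 11908, 13.42 nm
\end{verbatim}
which returns the values of $\theta$, $N$ and $L$ quoted in this section.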
Intermediate spacings $d(\bvec{r})$ are obtained by using an interpolation suggested in Refs.~\cite{PhysRevX.8.031087, uchida2014}
\begin{equation}
d(\bvec{r}) = d_0 + 2d_1\sum_{i = 1}^3 \text{cos} \left ( \bvec{G}_i \cdot \bvec{r} \right),
\label{corr}
\end{equation}
where the vector $\bvec{r}$ points to a carbon atom in the moir\'{e} unit cell and the $\bvec{G}_i$ are the reciprocal lattice vectors obtained from Eq. \eqref{moire_vec}. Furthermore, the constants $d_0 = \frac{1}{3}(d_{AA} + 2d_{AB})$ and $d_1= \frac{1}{9}(d_{AA} - d_{AB})$ are defined such that they match the distances in the AA and AB regions. In order to preserve the $D_3$ symmetry of the system, the corrugation must be applied symmetrically to both layers, as depicted in Fig.~\ref{fig:atomic_structure} (c).
\begin{figure*} \includegraphics[width=\textwidth]{S1.png} \centering \caption{\textbf{Atomic structure of twisted bilayer graphene.} \textbf{(a)} Atomic structure of twisted bilayer graphene with twist angle $\theta = 5.09^{\circ}$ corresponding to $(m,n) = (6,7)$. The blue labels indicate characteristic stacking patterns emerging throughout the moir\'{e} unit cell. \textbf{(b)} Downfolding of the (mini-) Brillouin zone of TBG (brown hexagon). \textbf{(c)} Corrugation effects in TBG. \textbf{(d)} Band structure of magic-angle TBG with twist angle $\theta = 1.05^{\circ}$ corresponding to $(m,n) = (31,32)$. The low-energy window around charge neutrality (red line) is modified significantly when corrugation of the two graphene sheets is taken into account (left panel). This is reflected in the formation of four flat bands (two-fold spin degeneracy) around charge neutrality that have a bandwidth of $\approx 15 \, \text{meV}$ and are separated from the rest of the spectrum.} \label{fig:atomic_structure} \end{figure*}
\section{Atomistic Tight-binding Hamiltonian}
The eigenenergies and eigenfunctions of non-interacting TBG are obtained using a single-orbital tight-binding Hamiltonian for the $p_z$-orbitals of the carbon atoms \cite{PhysRevX.8.031087, moon2013optical, sboychakov2015electronic}:
\begin{equation}
H = \sum_{\bvec{R}, \bvec{R'}} \sum_{i,j, \sigma} t(\bvec{R} + \bvec{r}_i - \bvec{R'} - \bvec{r}_j) c_{\bvec{R}, \bvec{r}_i, \sigma}^{\dagger} c_{\bvec{R'}, \bvec{r}_j, \sigma}.
\label{tb_ham}
\end{equation}
For our microscopic approach we account for the full $\pi$-band spectrum of magic-angle TBG, keeping all $N=11908$ bands under consideration. In the following, we label the supercell vector with $\bvec{R}$ and the position vector of site $i$ in the corresponding moir\'{e} unit cell with $\bvec{r}_i$. Hence, the operator $c_{\bvec{R}, \bvec{r}_i, \sigma}^{\dagger}$ creates an electron with spin $\sigma \in \{\uparrow, \downarrow\}$ in the $p_z$-orbital of site $i$, whereas $c_{\bvec{R}, \bvec{r}_i, \sigma}$ destroys an electron with the same quantum numbers. The transfer integral between orbitals at sites $i$ and $j$, separated by the vector $\bvec{d}$, can be written in Slater-Koster form \cite{PhysRevX.8.031087}
\begin{align}
\begin{split}
t(\bvec{d}) &= t_{\parallel}(\bvec{d}) + t_{\bot}(\bvec{d}) \\
t_{\parallel}(\bvec{d}) &= -V_{pp \pi}^0\text{exp} \left(-\frac{d-a_{\text{cc}}}{\delta_0}\right) \left[1 - \left(\frac{d^z}{d} \right)^2 \right] \\
t_{\bot}(\bvec{d}) &= -V_{pp \sigma}^0\text{exp} \left(-\frac{d-d_0}{\delta_0}\right)\left[\frac{d^z}{d} \right]^2.
\end{split}
\label{tb_hop}
\end{align}
Here, $d^z = \bvec{d} \cdot \bvec{e}_z$ is the component of $\bvec{d}$ perpendicular to the graphene sheets and $d_0 = 1.362 \, a_0$ is the vertical spacing of graphite. The term $V_{pp \sigma}^0 = 0.48 \ \text{eV}$ describes the interlayer hopping between atoms in different monolayers of TBG, while $V_{pp \pi}^0 = - 2.7 \ \text{eV}$ models the intralayer hopping amplitude between neighboring atoms in a single graphene sheet. The parameters are fitted to first-principles data to match the dispersion of mono- and bilayer graphene \cite{PhysRevX.8.031087}. This ensures in particular that TBG behaves locally like graphene, with the overall structure being modulated by the moir\'{e} pattern. The parameter $\delta_0 = 0.184 \, a_0$ determines the decay length of the transfer integral and is chosen such that the next-nearest-neighbor intralayer hopping reduces to $0.1 V_{pp\pi}^0$. For numerical calculations it is therefore sufficient to truncate hopping terms for $r_{ij} > 4 \, a_0 $, as $t(\bvec{r}_{ij}) < 10^{-4}$ in this regime. To construct the full non-interacting Hamiltonian of the periodic system, we define the Bloch wave basis by a Fourier transform to mini-Brillouin zone (MBZ) momentum $\bvec{k}$, which is depicted in Fig.~\ref{fig:atomic_structure} (b),
\begin{align}
\begin{split}
c_{\bvec{k}, \bvec{r}} &= \frac{1}{\sqrt{N}} \sum_{\bvec{R}} e^{i \bvec{k} \cdot \bvec{R}} c_{\bvec{R}, \bvec{r}} \\
c_{\bvec{R}, \bvec{r}} &= \frac{1}{\sqrt{N}} \sum_{\bvec{k}} e^{-i \bvec{k} \cdot \bvec{R}} c_{\bvec{k}, \bvec{r}}.
\end{split}
\label{bloch_op}
\end{align}
The spin index is omitted for simplicity. Note that the moir\'{e} Fourier transform as defined above couples momenta $\bvec{k}$ and superlattice vectors $\bvec{R}$. The latter describe the spatial extent of the moir\'{e} unit cell, which in the case of magic-angle TBG is given by $L = |\bvec{L}_{1,2}| = 13.42 \, \text{nm}$. Inserting these expressions into Eq.~\eqref{tb_ham} renders the unperturbed Hamiltonian block-diagonal in momentum space
\begin{align}
\begin{split}
H_0 &= \sum_{\bvec{k}}\sum_{\bvec{R} - \bvec{R'}} \sum_{i,j, \sigma} t(\bvec{R} + \bvec{r}_i - \bvec{R'} - \bvec{r}_j) \\
& \qquad \qquad \qquad \quad \times e^{-i\bvec{k} \cdot(\bvec{R}-\bvec{R}^{\prime})} c_{\bvec{k}, \bvec{r}_i, \sigma}^{\dagger} c_{\bvec{k}, \bvec{r}_j, \sigma}^{\phantom \dagger} \\
&= \sum_{\bvec{k}} \sum_{i,j, \sigma} \left [ H_0(\bvec{k}) \right ]_{ \bvec{r}_i, \bvec{r}_j} c_{\bvec{k}, \bvec{r}_i, \sigma}^{\dagger} c_{\bvec{k}, \bvec{r}_j, \sigma}^{\phantom \dagger}.
\end{split}
\label{BH}
\end{align}
The matrix $ \left [ H_0(\bvec{k}) \right ]_{ \bvec{r}_i, \bvec{r}_j}$ can be diagonalized in orbital space $(i,j)$ for each $\bvec{k}$ to obtain the bandstructure $\epsilon_b(\bvec{k})$ and the orbital-to-band transformation $u_{\bvec{r}}^{b}(\bvec{k})$, $b = 1, \dots, N$:
\begin{align}
H_0 &= \sum_{\bvec{k}, b} \epsilon_b(\bvec{k}) \gamma_{\bvec{k}, b}^{\dagger} \gamma_{\bvec{k}, b}^{\phantom \dagger} \qquad \text{with} \ \ \gamma_{\bvec{k}, b} = \sum_{\bvec{r}} u_{\bvec{r}}^{b*}(\bvec{k})\, c_{\bvec{k}, \bvec{r}}.
\label{o2b}
\end{align}
Since magic-angle TBG contains $N=11908$ atoms in the moir\'{e} unit cell, care must be taken when treating the system numerically.
\section{Magnetic instabilities and Stoner criterion}
\subsection{Stoner Criterion}
In the manuscript, we study magnetic instabilities in TBG described by short-ranged Coulomb interactions.
To this end, we follow Ref.~\cite{klebl2019inherited} and employ a repulsive Hubbard term for electrons with opposite spins $\sigma$ and $\overline{\sigma} = -\sigma$ residing on the same carbon site
\begin{equation}
H_{\text{int}} = \frac{1}{2} U \sum_{\bvec{R},i, \sigma} n_{\bvec{R}, \bvec{r}_i, \sigma} n_{\bvec{R}, \bvec{r}_i, \overline{\sigma}}.
\label{hubbard}
\end{equation}
To treat the interacting system in a perturbative manner, we define the free Matsubara Green's function in orbital-momentum space as
\begin{equation}
g_{\bvec{r}, \bvec{r}^{\prime}} (i\omega, \bvec{k}) = \sum_{b} u_{\bvec{r}}^{b}(\bvec{k}) (i \omega -\epsilon_b(\bvec{k}))^{-1} u_{\mathbf{r^{\prime}}}^{b*}(\bvec{k}).
\label{green}
\end{equation}
We then calculate the renormalized interaction in the spin channel within the random-phase approximation (RPA) to analyze the electronic instabilities mediated by spin-fluctuation exchange between electrons to high order in the bare coupling $U$. Since the initial short-ranged interaction vertex $U$ has no momentum and frequency dependence, the full susceptibility of the system can be approximated by $\hat{\chi}^{\text{RPA}}(q) = \hat{\chi}_0(q)/[1+U \hat{\chi}_0(q)]$. Here, we use the multi-index quantum number $q = (\bvec{q}, i \omega)$ and indicate matrices of dimension $N \times N$ with a hat symbol, e.g. $\hat{\chi} = \chi_{\bvec{r}, \bvec{r}^{\prime}}$. Magnetic instabilities may subsequently be classified according to a generalized Stoner criterion: the effective (RPA) interaction diverges when the smallest eigenvalue $\lambda_0$ of $\hat{\chi}_0(q)$ reaches $-1/U$, marking the onset of magnetic order for all interaction strengths $U \geq U_{\text{crit.}} = -1/\lambda_0$. The corresponding eigenvector $v^{(0)}(q)$ is expected to dominate the spatial structure of the orbital magnetization. In the manuscript, we study magnetic instabilities with emphasis on the static, long-wavelength limit $(\bvec{q}, i \omega \to 0)$ on the moir\'{e} scale. The latter limit proves to contain the relevant physics when starting from a local repulsive interaction. We stress again that the momenta $\bvec{q}$ are related via the moir\'{e} Fourier transform, Eq.~\eqref{bloch_op}, to the superlattice vectors $\bvec{R}$. The RPA susceptibility predicts spin correlations at length scales intermediate between the carbon-carbon bond scale and the moir\'{e} length scale, which are thus described by orderings at $\bvec q=0$: the system shows the same order in all moir\'{e} unit cells, with variable correlations present on the carbon-carbon bond scale.
\subsection{Spin susceptibility}
For analyzing the magnetic properties of the system on the RPA level, it is therefore sufficient to compute the free polarization function $\hat{\chi}_0(q)$ defined as
\begin{equation}
\chi_{0_{\bvec{r}, \bvec{r}^{\prime}}}(\bvec{q}, i\omega) = \frac{1}{N \beta} \sum_{\bvec{k}, \omega^{\prime}} g_{\bvec{r}, \bvec{r}^{\prime}}(i\omega^{\prime}, \bvec{k})g_{\bvec{r}^{\prime}, \bvec{r}}\left ( i(\omega^{\prime}+ \omega), \bvec{k}+ \bvec{q}\right ).
\label{chi0}
\end{equation}
The Matsubara summation occurring in Eq.
\eqref{chi0} can be evaluated analytically, yielding the well-known Lindhard function for multi-orbital systems
\begin{equation}
\begin{split}
\label{chi0_mats}
\chi_{0_{\bvec{r}, \bvec{r}^{\prime}}}(\bvec{q}, i\omega) &= \frac{1}{N} \sum_{\bvec{k}, b, b^{\prime}} \frac{n_F(\epsilon_{b^{\prime }}(\bvec{k})) - n_F(\epsilon_b(\bvec{k}+\bvec{q})) }{i \omega + \epsilon_{b^{\prime}}(\bvec{k}) - \epsilon_b(\bvec{k}+\bvec{q}) } \\
&\times u_{\bvec{r}}^{b^{\prime}}(\bvec{k}) u_{\bvec{r}^{\prime}}^{b^{\prime}*}(\bvec{k}) u_{\bvec{r}}^{b*}(\bvec{k}+\bvec{q}) u_{\bvec{r}^{\prime}}^{b}(\bvec{k}+\bvec{q}),
\end{split}
\end{equation}
where $n_F(\epsilon) = (1+e^{\beta \epsilon})^{-1}$ is the Fermi function. While the analytical evaluation of the Matsubara sum occurring in Eq. \eqref{chi0} is the standard procedure for systems containing only a few atoms in the unit cell, this approach is destined to fail in our atomistic approach as it scales like $\mathcal{O}(N^4)$. For magic-angle TBG with $N = 11908$ atoms in the moir\'{e} unit cell, it is more efficient to compute the Matsubara sum in Eq. \eqref{chi0} numerically over a properly chosen frequency grid and in each step compute Hadamard products of band-summed non-local Green's functions $g_{\bvec{r}, \bvec{r}^{\prime}}(i\omega^{\prime}, \bvec{k})\, g^{T}_{\bvec{r}, \bvec{r}^{\prime}}\left ( i(\omega^{\prime}+ \omega), \bvec{k}+ \bvec{q}\right )$. The expression then scales like $\mathcal{O}(N^3 N_{\omega})$ with $N_{\omega}$ being the number of fermionic frequencies needed to achieve proper convergence. To this end, non-linear mixing schemes \cite{ozaki2007} have proven to outperform any linear summation, such that we only need to sum over $N_{\omega} \approx 1000 $ frequencies when accessing temperatures down to $T = 0.03 \, \text{meV}$. The momentum sum occurring in Eq. \eqref{chi0} is evaluated over 24 $\bvec{k}$ points in the MBZ using a momentum meshing proposed by Cunningham \cite{cunningham1974}. In particular, we checked that the results are sufficiently converged by comparing with a denser mesh.
\begin{figure*} \includegraphics[width=\textwidth]{S2.png} \centering \caption{\textbf{Spin correlations in magic-angle TBG.} \textbf{(a)} Magnetic RPA phase diagram showing the critical onsite interaction strength $U_{\text{crit.}}$ vs. chemical potential $\mu$ in the four flat bands of TBG at $T = 0.03 \, \text{meV}$. The vertical lines indicate the integer fillings $\pm 3, \pm 2, \pm 1$ that show an increased magnetic ordering tendency towards a moir\'{e}-modulated ferromagnetic state, while away from integer fillings weaker antiferromagnetic tendencies dominate. \textbf{(b)} Spatial distribution of the leading eigenvector of the RPA analysis on the carbon-carbon bond scale in the moir\'{e} unit cell. For simplicity, only the lower layer of TBG is shown as well as a linecut through the unit cell. The leading eigenvector of the DAFM instabilities is staggered throughout the moir\'{e} unit cell with strongest weight in the AA regions as depicted in the linecut in (b). \textbf{(c)} Effective spin-fluctuation mediated pairing vertex $\Gamma_2(q=0)$ close to a DAFM (upper panel) and FM (lower panel) magnetic instability. In the former case, the pairing vertex is staggered in real space with strong on-site repulsion and nearest-neighbor attraction. Real-space profiles are shown starting from an atom located in the AA, AB or DW region of TBG, respectively.
} \label{fig:correlation} \end{figure*}
\subsection{Leading Instabilities}
In the manuscript, we classify the different leading eigenvectors of the RPA analysis according to their real-space profile in the moir\'{e} unit cell, following the nomenclature introduced in Ref.~\cite{klebl2019inherited}. The three potential ground states of the interacting system at $T = 0.03 \, \text{meV}$ are depicted in Fig.~\ref{fig:correlation} (b): (i) AFM: moir\'{e}-modulated antiferromagnetic phase on the carbon-carbon bond scale ("\AA ngstr\"om"-scale) with increased weight in the AA regions, (ii) DAFM: moir\'{e}-modulated antiferromagnetic phase with opposite signs between the AA and AB regions that becomes visible as a node in the absolute value of the order parameter, (iii) FM: moir\'{e}-modulated ferromagnetic phase that exhibits the same overall sign in the moir\'{e} unit cell. The temperature $T = 0.03 \, \text{meV}$ is chosen to resolve the flat energy bands in TBG, as extensively discussed in Ref.~\cite{klebl2019inherited}. As long as $T \gg \mathcal{O}(1 \, \text{meV})$, the flat-band physics is not resolved and the system inherits magnetic order from purely AA- and AB-stacked bilayer graphene. For $T \approx \mathcal{O}(1 \, \text{meV}) \approx 10 \, \text{K}$, significant deviations due to the flat bands occur, leading to the plethora of magnetic phases described above.
\section{Self-consistent Bogoliubov de-Gennes equations for TBG}
\subsection{Fluctuation-Exchange approximation}
For interaction values $U < U_{\text{crit}}$ the system is in the paramagnetic regime and the magnetic instabilities predicted by the RPA analysis do not actually occur. In this regime, spin and charge fluctuations contained in the transverse and longitudinal spin channels can give rise to an effective interaction between electrons that may lead to the formation of Cooper pairs. The leading RPA contributions to the irreducible singlet particle-particle scattering vertex $\hat{\Gamma}_2(\bvec{q},\nu)$ are captured within the fluctuation-exchange approximation (FLEX) \cite{berk1966effect, romer2012local}
\begin{equation}
\hat{\Gamma}_2(\bvec{q}) = \hat{U} - \frac{U^2\hat{\chi}_0(\bvec{q})}{1+U\hat{\chi}_0(\bvec{q})} + \frac{U^3\hat{\chi}_0^2(\bvec{q})}{1-U^2\hat{\chi}_0^2(\bvec{q})}.
\label{eq_veff}
\end{equation}
As in the previous section, we only consider the static long-wavelength limit $(\bvec{q}, i \omega \to 0)$ and thus focus on the pairing structure on the carbon-carbon bonds within the moir\'{e} unit cell. The real-space profile of the effective interaction $\hat{\Gamma}_2$ for different chemical potentials is shown in Fig.~\ref{fig:correlation} (c). As mentioned in the manuscript, the interaction vertex is staggered through the moir\'{e} unit cell close to a DAFM/AFM instability, opening the door for unconventional singlet Cooper pairs in the two graphene sheets of TBG. In particular, the interlayer interaction strength is an order of magnitude smaller than comparable intralayer terms. This indicates that the pairing will mainly create in-plane Cooper pairs. In the manuscript, we thus only visualize the projection of the superconducting order parameters on the in-plane form factor basis of each graphene sheet, i.e., a layer-resolved representation.
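Once $\hat{\chi}_0(q=0)$ is available, Eq.~\eqref{eq_veff} amounts to simple matrix algebra. A minimal Python sketch (assuming NumPy; a simplified stand-in for the actual implementation):
\begin{verbatim}
import numpy as np

def flex_vertex(chi0, U):
    # Static singlet pairing vertex Gamma_2(q=0), Eq. (eq_veff).
    # chi0 : (N, N) static free susceptibility chi_0(q=0, iw=0)
    # U    : on-site Hubbard interaction
    I = np.eye(chi0.shape[0])
    chi2 = chi0 @ chi0
    spin = U**2 * np.linalg.solve(I + U*chi0, chi0)
    charge = U**3 * np.linalg.solve(I - U**2*chi2, chi2)
    return U*I - spin + charge
\end{verbatim}
Here the bare vertex $\hat{U}$ is $U$ times the identity matrix, and the matrix fractions are evaluated with linear solves, which is possible because all terms are rational functions of the same matrix $\hat{\chi}_0$.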
\subsection{Self-consistent BdG Formalism}
In the next step, we analyze the effective particle-particle scattering vertex Eq.~\eqref{eq_veff} using a mean-field decoupling to extract the pairing symmetries and spatial distribution of the superconducting order parameter. In the static long-wavelength limit $(\bvec{q}, i \omega \to 0)$, we may hence neglect the momentum dependence of the gap parameter and effectively solve a one-unit-cell system with periodic boundary conditions. While this approach does not take correlations between different moir\'{e} unit cells into account, it allows for all pairing contributions from within the moir\'{e} unit cell. Due to the proximity to the antiferromagnetically ordered state, we restrict the mean-field decoupling to spin-singlet configurations that are symmetric under the exchange of spatial indices
\begin{equation}
\begin{split}
\Delta_{ij}= - \frac{1}{2} \left [ \Gamma_2 (q=0) \right]_{ij} \langle c_{i \uparrow} c_{j \downarrow} - c_{i \downarrow} c_{j \uparrow} \rangle_{\text{\tiny MF}}.
\end{split}
\label{singlet_gap}
\end{equation}
The expectation value $\langle \cdot \rangle_{\text{\tiny MF}}$ can be calculated by diagonalizing the resulting mean-field Hamiltonian $H_{\text{MF}}$ in Nambu space using a Bogoliubov de-Gennes transformation \cite{zhu2016bogoliubov}
\begin{equation}
\centering
\begin{split}
H_{\text{MF}} &= \psi^{\dagger} \begin{pmatrix} \hat{H}_0 & \hat{\Delta}\\ \hat{\Delta}^{\dagger} & -\hat{H}_0 \\\end{pmatrix} \psi \\
\langle c_{i \uparrow} c_{j \downarrow} - c_{i \downarrow} c_{j \uparrow} \rangle_{\text{\tiny MF}} &= \sum_n \left (u_i^n v_j^{n*} + u_j^n v_i^{n*} \right)\text{tanh} \left(\frac{E_n}{2T}\right ).
\end{split}
\label{bdg}
\end{equation}
Here, $u_i^n \, (v_i^n)$ are the particle (hole) amplitudes of the BdG quasi-particles resulting from the diagonalization of the Hamiltonian in Eq. \eqref{bdg}
\begin{equation}
\begin{split}
H_{\text{MF}} &= \left ( \hat{U} \psi \right )^{\dagger} \begin{pmatrix} \hat{E} &0 \\ 0 & -\hat{E} \end{pmatrix} \left ( \hat{U} \psi \right )^{\phantom \dagger} \\
\hat{U} &= \begin{pmatrix} \hat{u} & \hat{v} \\ -\hat{v}^*& \hat{u} \\ \end{pmatrix},
\end{split}
\label{bdg_particle_hole}
\end{equation}
and $\psi = (c_{1 \uparrow}^{\phantom \dagger}, ... , c_{N \uparrow}^{\phantom \dagger}, c_{1 \downarrow}^{\dagger}, ... , c_{N \downarrow}^{\dagger})^{T}$ is the $2N$-component Nambu vector. $\sum_n$ denotes a sum over the positive quasi-particle energies $E_n>0$ and $\hat{E}$ is the corresponding diagonal matrix. To solve this set of self-consistent equations we start with an initial guess for $\Delta_{ij}$ and iterate until convergence is achieved, using a linear mixing scheme to avoid oscillations between two candidate solutions. Since the atomic arrangement is highly inhomogeneous in the moir\'{e} unit cell, we track the free energy during each self-consistency cycle and for different initial configurations to ensure proper convergence of the algorithm to the actual global minimum. The free energy of the system in the low-temperature regime reads
\begin{equation}
F = E-TS \approx E_g - \sum_{n} E_n - \sum_{ij} \frac{|\Delta_{ij}|^2}{\Gamma_{2, ij}}
\label{free_energy}
\end{equation}
where $E_g = 2 \sum_n E_n n_F(E_n)$ is the excitation energy of the quasi-particles. The different initial configurations are chosen to transform according to the irreducible representations of the $D_{6h}$ point group of the honeycomb lattice.
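Schematically, the self-consistency cycle described above takes the following form (a minimal Python sketch assuming NumPy, a real $\hat{H}_0$, and energies in eV; simplified relative to the actual implementation, e.g. without free-energy tracking over initial configurations):
\begin{verbatim}
import numpy as np

def bdg_selfconsistency(H0, Gamma2, T=0.03e-3, mix=0.5,
                        tol=1e-8, maxiter=500):
    # Iterate Eqs. (singlet_gap) and (bdg) until the gap converges.
    # H0     : (N, N) single-particle Hamiltonian (one moire cell)
    # Gamma2 : (N, N) static pairing vertex, cf. flex_vertex()
    N = H0.shape[0]
    rng = np.random.default_rng(0)
    Delta = 1e-3*(rng.random((N, N)) - 0.5)
    Delta = 0.5*(Delta + Delta.T)       # singlet: symmetric in (i, j)
    for _ in range(maxiter):
        HBdG = np.block([[H0, Delta], [Delta.conj().T, -H0]])
        E, W = np.linalg.eigh(HBdG)
        pos = E > 0                     # positive quasi-particle branch
        u, v, th = W[:N, pos], W[N:, pos], np.tanh(E[pos]/(2*T))
        A = (u*th) @ v.conj().T         # sum_n u_i v_j* tanh(E_n/2T)
        pair = A + A.T                  # <c_up c_down - c_down c_up>
        Delta_new = -0.5*Gamma2*pair    # element-wise, Eq. (singlet_gap)
        if np.max(np.abs(Delta_new - Delta)) < tol:
            break
        Delta = (1 - mix)*Delta + mix*Delta_new  # linear mixing
    return Delta
\end{verbatim}
In practice, the iteration is repeated for initial gaps transforming according to the different irreducible representations, and the converged solution with the lowest free energy, Eq.~\eqref{free_energy}, is selected.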
This procedure aligns with the insight from the previous paragraph that the spin-fluctuation mediated pairing vertex Eq.~\eqref{eq_veff} will create in-plane Cooper pairs with the strongest pairing amplitude living on the nearest-neighbor bonds of the two graphene sheets. The phase factors of the different nearest-neighbor pairing channels are shown in Fig.~\ref{fig:form_factor}.
\begin{figure} \includegraphics[width=0.3\textwidth]{S3.png} \centering \caption{Form factors for different nearest-neighbor (singlet) pairing channels on the honeycomb lattice. The complex linear combination $d\pm id$ is characterized by the phase factor $w = \text{exp}(\pm i\, 2 \pi / 3)$. } \label{fig:form_factor} \end{figure}
\subsection{Supercurrent and magnetic field}
To characterize the different superconducting phases of the system, we compute the layer-resolved quasi-particle bond current in TBG \cite{zhu2016bogoliubov}
\begin{equation}
\boldsymbol{J}_{nm} = \frac{e}{i \hbar} \langle c_{n}^{\dagger}t_{nm}c_m - c_m^{\dagger}t_{mn}c_n \rangle \hat{\boldsymbol{e}}_{nm}.
\label{eq_supercurrent}
\end{equation}
In the atomistic approach presented here, the quasi-particle current $\bvec{J}_{nm}$ is only defined between two carbon atoms residing at sites $\bvec{r}_{n(m)}$ in the moir\'{e} unit cell. Therefore, we take an amplitude-weighted average over neighboring bonds to arrive at a vector field representation $\bvec{J}$, as shown in Fig. 3 in the manuscript,
\begin{equation}
\bvec{J}(\bvec{r}_n) = \frac{1}{3} \sum_{\langle m \rangle } \bvec{J}_{nm}.
\label{current_averge}
\end{equation}
Here, $\hat{\bvec{e}}_{nm}$ points from the atom at position $\bvec{r}_n$ to each of its three nearest neighbors $\bvec{r}_m$ within the same graphene sheet. In particular, the current amplitude is negligible at distances beyond nearest-neighbor atoms, and thus an average over nearest neighbors is sufficient. The spontaneously flowing currents of quasi-particles induce a magnetic field that can be calculated by applying the Biot-Savart law
\begin{equation}
\bvec{B}(\bvec{r}) = \frac{\mu_0}{4 \pi} \int \bvec{J}(\bvec{r}^{\prime}) \times \frac{\bvec{r}-\bvec{r}^{\prime}}{|\bvec{r}-\bvec{r}^{\prime}|^3} \, d^3\bvec{r}^{\prime},
\label{biot_savart}
\end{equation}
where $\mu_0$ is the vacuum permeability. Since the current co-propagates in the two graphene sheets of TBG in the "chiral" phase, the magnetic fields induced by the supercurrents add constructively, making this particular feature of the TRS-breaking phase measurable in experiment.
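A minimal Python sketch of the discretized Biot-Savart sum corresponding to Eq.~\eqref{biot_savart} (assuming NumPy; the area element \texttt{dA} converts the sheet current density into discrete current elements):
\begin{verbatim}
import numpy as np

MU0 = 4e-7*np.pi   # vacuum permeability (SI units)

def biot_savart(points, J, r, dA):
    # points : (M, 3) positions of the bond-averaged currents J(r_n)
    # J      : (M, 3) current vectors at those positions
    # r      : (3,)   field point where B is evaluated
    # dA     : area element per site (currents live on 2D sheets)
    d = r - points                         # separation vectors
    dist3 = np.linalg.norm(d, axis=1)**3
    dB = np.cross(J, d)/dist3[:, None]     # J(r') x (r-r')/|r-r'|^3
    return MU0/(4*np.pi)*dA*dB.sum(axis=0)
\end{verbatim}
Summing the contributions of both layers then reflects the constructive addition of the induced fields in the "chiral" phase.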
\chapter{Setup for the Complexes}
\section{Introduction}
We will study how local and global geometries affect heat kernels on a set of metric spaces called Euclidean polyhedral complexes. Euclidean complexes are formed by taking a collection of $n$ dimensional convex polytopes and joining them along $n-1$ dimensional faces. Within each polytope, we will have the same metric structure as $R^n$. When we join them, we will glue the faces of two polytopes together so that points on one face are identified with points on the other face, and the metrics on those faces are preserved. We will require that these structures have a countable number of polytopes, are locally finite, and have a lower bound on the interior angles and edge lengths. The complex formed by looking at $k$ dimensional faces is called the $k$-skeleton. For instance, the 0-skeleton is the set of vertices. A 1-skeleton is a graph where the space includes both vertices and points on the edges; sometimes this is called a metric graph \cite{Kuc}. Note that we can triangulate any convex polytope to obtain a collection of simplices, and so this structure is essentially equivalent to looking at a simplicial complex.
\begin{figure}[h] \centering \includegraphics[angle=0,width=1in]{2skelCropDot.eps} \hspace{.5in} \includegraphics[angle=0,width=1in]{1skelCrop.eps} \hspace{.5in} \includegraphics[angle=0,width=1in]{2skelCropDotTriang.eps} \caption{Example of a 2 dimensional Euclidean Complex (left), its 1-skeleton (center), one possible triangulation (right).} \end{figure}
Let $h_t^k(x,y)$ be the heat kernel on the $k$-skeleton. This is the fundamental solution to the heat equation $\partial_t u - \Delta u =0$ on the $k$-skeleton. It can be used to describe the probability that we travel from $x$ to $y$ in time $t$ when our movement is restricted to the $k$-skeleton.
\begin{theorem} Let $X$ be a uniformly locally finite Euclidean complex of dimension $n$ whose interior angles and edge lengths are bounded below, and let $0 \le k \le n$. Fix $T \in (0,\infty)$. There exist $c, C\in (0,\infty)$ such that for any $x \in X^{(k)}$ and $t < T$ we have:
\begin{eqnarray*}
\frac{c}{t^{k/2}} \le h_t^k(x,x) \le \frac{C}{t^{k/2}}.
\end{eqnarray*}
\end{theorem}
Note that this claims that the heat kernel on the $k$-skeleton behaves, up to a constant that is independent of where in $X$ we are, like the heat kernel on $R^k$ asymptotically when $t \rightarrow 0$; recall that on $R^k$ one has $h_t(x,x) = (4\pi t)^{-k/2}$. The local behavior reflects the local geometry and structure of our space. Theorems in Sturm \cite{SturmDiffHK} can be applied to Euclidean complexes to show that on any compact subset of $X^{(k)}$, the heat kernel is locally like the one on $R^k$, with constants that depend on the choice of compact subset. The essential difference in our theorem is that the constants are uniform throughout the entire complex. An interesting example of these complexes comes from biology. In a paper by Billera, Holmes, and Vogtmann \cite{BilleraHolmesVogtmann}, the authors describe a way of defining distances between phylogenetic trees, which are trees that describe the evolution of species. One can form an Euclidean complex where each of the faces corresponds to a different tree, and one moves through the points in the face by changing the edge lengths in the tree. One can then consider probability distributions on this space to determine likely genetic ancestry. Euclidean complexes are also examples of fractal ``blow-ups'', which are infinite fractals that are locally well behaved but globally have a repeating structure.
See Kigami \cite{KigamiFrac} for a description of these fractals. In this setting, our small time asymptotic estimates apply. Note that these examples need not be compact. In \cite{BarlowKumagai}, Barlow and Kumagai studied the small time asymptotics of heat kernels for compact self-similar sets. Another collection of examples can be found by considering metric spaces, $X$, which are acted upon by a finitely generated group, $G$, of isometries. When we take the space and mod out by that group, we obtain a compact set $Y = X/G$. When $Y$ can be expressed as a finite Euclidean complex, then $X$ is an Euclidean complex as well. Note that the $k$-skeleton of $Y$ will be the ($k$-skeleton of $X$)$/G$. A simple example of this is $X=\mbox{\bf R}^2$, $Y=$ the unit square, and $G=\mbox{\bf Z} ^2$. A more interesting example occurs when $G$ is the free group; there the space is globally hyperbolic, but locally Euclidean. With this added group structure, we can describe the large time behavior of the heat kernel. We write the heat kernel on a group as $p_t(\cdot,\cdot)$.
\begin{proposition} Let $X$ be a locally finite countable Euclidean complex of dimension $n$, and let $G$ be a finitely generated group acting on $X$ by isometries. If $X/G$ is a complex comprised of a finite number of polytopes, each with a Euclidean metric, we have:
\begin{eqnarray*}
p_t(x,x) \simeq h_t(x,x) \mbox{ as } t \rightarrow \infty .
\end{eqnarray*}
\end{proposition}
Our main result says that, up to a constant, the heat kernel will behave the same asymptotically as $t \rightarrow \infty$ on both the group and the complex. By transitivity, it will behave the same asymptotically regardless of which $k$-skeleton we consider. This theorem relates to a paper of Pittet and Saloff-Coste \cite{LSCP}. They show that a manifold $M$ which has a finitely generated group of isometries $G$ satisfies $\sup_x h^M_t(x,x) \simeq h^G_t(e,e)$ for large $t$. In chapter one, we describe our set-up. We provide definitions for the complex and skeletons, and then define an energy form and a Laplacian on them. In chapter two, we will prove the initial theorem by first showing that a series of Poincar\'{e} inequalities hold, starting with one for balls where the radius of the ball depends on the center, and generalizing until the result is uniform in space. In chapter three, we apply these inequalities to a result of Sturm \cite{Sturm} to yield a small time on-diagonal heat kernel asymptotic with a uniform constant. We also provide off-diagonal estimates with constants that depend on $d(x,y)$, but not on where $x$ and $y$ are located. We give several examples of heat kernels. In chapter four, we consider complexes with underlying group structure. We describe how to compare metrics on the complex with those on the underlying group, as well as how to switch from a function on a group to one on a complex and vice versa. We then use the metric comparison as well as our small time Poincar\'{e} inequality to compare norms of functions on complexes and their group counterparts. In chapter five, we consider heat kernels on the group and the complex. We split into two cases: nonamenable groups, which have exponentially fast heat kernel decay, and amenable groups. For the amenable groups, we look at heat kernels restricted to subsets of our space, and then take a F\o lner sequence to obtain a bound on the heat kernels themselves in the limit. In this way, we prove the second theorem.
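As a toy illustration of the small time estimate in the theorem above, one can discretize a single edge of a 1-skeleton (so $k=1$) and compare the on-diagonal heat kernel with $(4\pi t)^{-1/2}$. The following Python sketch (assuming NumPy and SciPy; it plays no role in the proofs) does this with a finite-difference Laplacian:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

M, dx = 400, 0.01     # grid points and spacing on one edge
L = (2*np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1))/dx**2
L[0, 0] = L[-1, -1] = 1/dx**2      # Neumann ends

for t in [1e-3, 2e-3, 4e-3]:
    H = expm(-t*L)                  # heat semigroup e^{-tL}
    h_xx = H[M//2, M//2]/dx         # kernel density at the midpoint
    print(t, h_xx, (4*np.pi*t)**-0.5)  # h_xx ~ (4 pi t)^{-1/2}
\end{verbatim}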
\section{Geometry of the Complexes}
We will take our definitions of polytopes and polyhedral sets from Gr\"{u}nbaum's Convex Polytopes \cite{Grunbaum}.
\begin{definition} A polyhedron $K$ is a subset of $R^n$ formed by intersecting a finite family of closed half spaces of $R^n$. Note that this can be an unbounded set, but it will be convex. \end{definition}
\begin{definition} A set $F$ is a face of $K$ if $F=\emptyset$, $F=K$, or if $F=H\cap K$ where $H$ is a supporting hyperplane of $K$. $H$ is a supporting hyperplane of $K$ if $H\cap K \ne \emptyset$ and $H$ does not cut $K$ into two pieces. \end{definition}
\begin{definition} A point $x\in K$ is an extreme point of a set $K$ if the only $y,z \in K$ which are solutions to $x = \lambda y + (1-\lambda) z$ for some $\lambda \in (0,1)$ are $x=y=z$. That is, $x$ cannot be expressed as a convex combination of points in $K - \{x\}$. Note that the extreme points of $K$ are faces of $K$. \end{definition}
\begin{definition} A polytope is a compact convex subset of $R^n$ which has a finite set of extreme points. This is equivalent to saying it is a bounded polyhedron. \end{definition}
\begin{definition} A polyhedral complex $X$ is the union of a collection, $\mathcal{X}$, of convex polyhedra which are joined along lower dimensional faces. By this we mean that for any two distinct polyhedra $P_1, P_2 \in \mathcal{X}$,
\begin{itemize}
\item $P_1 \cap P_2$ is a polyhedron whose dimension satisfies \\ $\dim(P_1 \cap P_2) < \max(\dim(P_1),\dim(P_2))$ and
\item $P_1 \cap P_2$ is a face of both $P_1$ and $P_2$. We allow this face to be the empty set.
\end{itemize}
\end{definition}
We do not have a specific embedding for the complex, $X$; however, we require each polyhedron to have a metric which is consistent with that of its faces. Note that this definition implies that $P_1\cap P_2$ is a connected set. This rules out expressing a circle as two edges whose ends are joined, but it allows us to write it as a triangle of three edges. This is not very restrictive, as we can triangulate the polyhedra in order to form a complex which avoids the overlap. Simplicial complexes are an example of a polyhedral complex; the difference here is that we allow greater numbers of sides. Note that we allow infinite polyhedra, not just finite polytopes.
\begin{definition} Define a $p$-skeleton, $X^{(p)}$, for $0\le p\le \dim X$ to be the union of all faces of dimension $p$ or smaller. Note that this is also a polyhedral complex. \end{definition}
\begin{definition} A maximal polyhedron is a polyhedron that is not a proper face of any other polyhedron. The set of maximal polyhedra of $X$ is denoted $\mathcal{X}_{MAX}$. We say $X$ is dimensionally homogeneous if all of its maximal polyhedra have dimension $n$. Note that in the combinatorics literature this is called pure. \end{definition}
\begin{definition} $X$ is locally (n-1)-chainable if for every connected open set $U \subset X$, $U-X^{(n-2)}$ is also connected. For a dimensionally homogeneous complex $X$ this is equivalent to the property that any two $n$ dimensional polyhedra that share a lower dimensional face can be joined by a chain of contiguous $(n-1)$ or $n$ dimensional polyhedra containing that face. \end{definition}
\begin{definition} We call $X$ admissible if it is both dimensionally homogeneous and in some triangulation $X$ is locally (n-1)-chainable.
\end{definition} \begin{figure}[h] \centering \includegraphics[angle=0,width=1in]{DimensionallyInhomogeneous.eps} \hspace{.5in} \includegraphics[angle=0,width=1in]{NotChainable.eps} \hspace{.5in} \includegraphics[angle=0,width=1in]{Admissible.eps} \caption{Examples of a complex which is not dimensionally homogeneous (left), one which is not 1-chainable (center), and one which is admissible (right).} \end{figure} We will be working with connected admissible complexes, and for our purposes, we'd like to consider polyhedra that have a Euclidean metric. Let $X$ be an $n$-dimensional complex. When two polyhedra share a face, we require these metrics to coincide. For points $x$ and $y$ in different polyhedra, we define the distance as follows. \begin{definition} Consider the set of paths connecting $x$ to $y$ which consist of a finite number of line segments. We can label each of these by the points it crosses in the $(n-1)$-skeleton. Set $\gamma =\{x=x_0,x_1,x_2,\ldots,x_k=y\}$ where $x_i$ lies in the $(n-1)$-skeleton for $i=1,\ldots,k-1$, and $x_i$, $x_{i+1}$ are both in the closure of the same maximal polyhedron. Then set $L(\gamma) = \sum_{i=1}^k d(x_{i-1},x_i)$. We define the distance between points in different polyhedra to be $d(x,y) = \inf_{\gamma} L(\gamma)$. \end{definition} Essentially, we are splitting the path into pieces, and letting the lengths of those pieces inside the polyhedra be the standard lengths in $R^n$. Since our complex is created using closed polyhedra, if the geometry of the polyhedra is bounded, the infimum will be realized. This gives us a length space, i.e., one in which the distance between two points is the infimum of the lengths of paths joining them; here the distances are in fact realized by geodesics in the space. Discussions of length spaces and other metric measure spaces can be found in Heinonen \cite{Hein} and Burago, Burago, and Ivanov \cite{BuragoI}. \begin{definition} Let $X=\cup_i P_i$, where the $P_i$ are the maximal polyhedra. We will set the measure of $A$, a Borel subset of $X$, to be $\mu(A) =\sum_i \mu_i(A \cap P_i)$ where $\mu_i$ is the Lebesgue measure on $P_i$. \end{definition} Notice that the measure within the interior of maximal polyhedra is the same as Lebesgue measure on $R^n$. This means that locally we will have all of the structure of $R^n$; in particular, we will have volume doubling for balls contained in the interior of the maximal polyhedra. Since our complex is locally finite, volume doubling will hold locally for all points in the complex. \begin{definition} An admissible polyhedral complex, $X$, equipped with distance $d(\cdot,\cdot)$ and measure $\mu$ is called a Euclidean polyhedral complex. \end{definition} For brevity, we will often call this a Euclidean complex. A book which describes these complexes is Harmonic Maps Between Riemannian Polyhedra \cite{EellF}. In it, the authors define these structures with a Riemannian metric and provide analytic results on both the complexes and functions whose domain and range are both complexes. \section{Analysis on the Complexes} \subsection{The Dirichlet Form} Now that we've defined the space geometrically, we will define a Dirichlet form whose core consists of compactly supported Lipschitz functions. \begin{definition} A function $f$ on a metric space $X$ is called $L$-Lipschitz (alternately, Lipschitz) if there exists a constant $L \ge 0$ so that $d(f(x),f(y)) \le L d_X(x,y)$ for all $x$ and $y$ in $X$. The space of Lipschitz functions is denoted $\operatorname{Lip}(X)$. The space of compactly supported Lipschitz functions is denoted $C_0^{\operatorname{Lip}}(X)$.
\end{definition} Note that Lipschitz functions are continuous. By Theorem 4 in Section 5.8 of \cite{Evans}, for each $B_{\epsilon}(x) \subset X-X^{(n-1)}$ and $f \in C_0^{\operatorname{Lip}}\left(X\right)$, $f$ restricted to $B_{\epsilon}(x)$ is in the Sobolev space $W^{1,\infty}\left(B_{\epsilon}(x)\right)$. This tells us that $f$ has a gradient almost everywhere in $X-X^{(n-1)}$. Since $\mu(X^{(n-1)})=0$, $f$ has a gradient for almost every $x$ in $X$. We would like an energy form that acts like $E(u,v) =\int_X \langle \nabla u,\nabla v \rangle d\mu$ with domain $F$ to define our operator $\Delta$ with domain $\operatorname{Dom}(\Delta)$. We can define this in a very general manner which does not depend on the local Euclidean structure by following a paper of Sturm \cite{SturmDif}. We can also define it in a more straightforward manner which uses the geometry of $X$. We do both, and then show that they coincide. Sturm assumes that the space $(X,d)$ is a locally compact separable metric space, $\mu$ is a Radon measure on $X$, and that $\mu(U)>0$ for every nonempty open set $U \subset X$. These assumptions hold both in our space, $X$, and on the skeletons, $X^{(k)}$. We begin by approximating $E$ with a form $E^r$ defined to be: \begin{eqnarray*} E^r(u,v) := \int_X \int_{B(x,r)-\{x\}} \frac{(u(x)-u(y))(v(x)-v(y))}{d^2(x,y)} \frac{2 N d\mu(y)d\mu(x)}{\mu(B_r(x)) + \mu(B_r(y))} \end{eqnarray*} for $u,v \in \operatorname{Lip}(X)$, where $N$ is the local dimension. Note that whenever $x$ is in a region locally like $R^n$, we have \begin{eqnarray*} \mathop{\lim}_{r\rightarrow 0} \frac{N}{\mu(B_r(x))} \int_{B(x,r)-\{x\}} \frac{(u(x)-u(y))^2}{d^2(x,y)} d\mu(y) = |\nabla u(x)|^2 , \end{eqnarray*} and so this form looks very similar to $E(u,u)=\int_X|\nabla u|^2 dx$. This form with domain $C_0^{\operatorname{Lip}}(X)$ is closable and symmetric on $L^2(X)$, and its closure has core $C_0^{\operatorname{Lip}}(X)$. See Lemma 3.1 in \cite{SturmDif}. One can take limits of these operators in the following way. The $\Gamma$-limit of the $E^{r_n}$ is defined to be the limit that occurs when the following lim sup and lim inf are equal for all $u\in L^2(X,m)$. See Dal Maso \cite{DalMaso} for a thorough introduction. \begin{align*} \Gamma-\mathop{\lim \sup}_{n \rightarrow \infty} E^{r_n}(u,u) &:= \mathop{\lim}_{\alpha \rightarrow 0} \mathop{\lim \sup}_{n \rightarrow \infty} \mathop{\mathop{\inf}_{v \in L^2(X)}}_{||u-v||\le \alpha} E^{r_n}(v,v) \\ \Gamma-\mathop{\lim \inf}_{n \rightarrow \infty} E^{r_n}(u,u) &:= \mathop{\lim}_{\alpha \rightarrow 0} \mathop{\lim \inf}_{n \rightarrow \infty} \mathop{\mathop{\inf}_{v \in L^2(X)}}_{||u-v||\le \alpha} E^{r_n}(v,v). \end{align*} For any sequence $\{E^{r_n}\}$ of these operators with $r_n \rightarrow 0$, there is a subsequence $\{r_{n'}\}$ so that the $\Gamma$-limit of $E^{r_{n'}}$ exists by Lemma 4.4 in \cite{SturmDif}. These lemmas are put together into a theorem (Theorem 5.5 in \cite{SturmDif}) that tells us that this limit, $E^0$, with domain $C_0^{\operatorname{Lip}}(X)$ is a closable and symmetric form, and its closure, $(E,F)$, is a strongly local regular Dirichlet form on $L^2(X,m)$ with core $C_0^{\operatorname{Lip}}(X)$. Alternately, we can define the energy form using the structure of the space. We set $\mathcal{E}(\cdot,\cdot)$ to the following for $f \in C_0^{\operatorname{Lip}}(X)$: \begin{eqnarray*} \mathcal{E}(f,f) = \sum_{X_M \in \mathcal{X}_{MAX}} \int_{X_M} |\nabla f|^2 d\mu(x). \end{eqnarray*} \begin{lemma} $\mathcal{E}(\cdot,\cdot)$ is a closable form.
That is, for any sequence \\ $\{ f_n \}_{n=1}^{\infty} \subset C_0^{\operatorname{Lip}}\left(X\right)$ that converges to 0 in $L^2(X)$ and is Cauchy in $|| \cdot ||_2 + \mathcal{E}(\cdot,\cdot)$ we have $\lim_{n \rightarrow \infty} \mathcal{E}(f_n,f_n) =0$. \end{lemma} \begin{proof} To show this, we will first look at what happens on one fixed polyhedron, and then look at what happens on a complex. Let $X_M$ be a maximal polyhedron. Since $\{ f_n \}_{n=1}^{\infty}$ is Cauchy in the norm, we have \begin{eqnarray*} \lim_{m,n \rightarrow \infty} \left( \int_{X_M} (f_n -f_m)^2 d\mu \right)^{\frac{1}{2}} + \left( \int_{X_M} (\nabla f_n - \nabla f_m)^2 d\mu \right)^{\frac{1}{2}} = 0. \end{eqnarray*} This gives us two functions, $f$ and $F$, which are the limits of $f_n$ and $\nabla f_n$ respectively. We have $f=0$ by assumption. We need to show that $F=0$. For almost every $x,y \in X_M$ and line $\gamma_{x \sim y}$ in $X_M$ we have \begin{eqnarray*} \int_{\gamma_{x \sim y}} \nabla f_n \cdot ds = f_n(y) -f_n(x). \end{eqnarray*} Then we can take the limit as $n$ goes to infinity to get \begin{eqnarray*} \lim_{n \rightarrow \infty} \int_{\gamma_{x \sim y}} \nabla f_n \cdot ds =\lim_{n \rightarrow \infty} f_n(y) -f_n(x) =0. \end{eqnarray*} This gives us $\lim_{n \rightarrow \infty} \nabla f_n(x) =0$ for almost every $x \in X_M$. Since the choice of $X_M$ was arbitrary, this shows $\lim_{n \rightarrow \infty} \nabla f_n(x) =0$ for almost every $x \in X$. Showing $L^2$ convergence is a bit trickier, as we need to show that we can interchange the limit with the sum over the maximal polyhedra. We can do this for $|\nabla f_n - \nabla f_m|$ by Fatou's Lemma. \begin{eqnarray*} \lim_{n \rightarrow \infty} \sum_{X_M \in \mathcal{X}_{MAX}} \int_{X_M} |\nabla f_n|^2 d\mu &=& \lim_{n \rightarrow \infty} \sum_{X_M \in \mathcal{X}_{MAX}} \int_{X_M} |\nabla f_n -\lim_{m \rightarrow \infty} \nabla f_m|^2 d\mu \\ &=& \lim_{n \rightarrow \infty} \sum_{X_M \in \mathcal{X}_{MAX}} \int_{X_M} \lim_{m \rightarrow \infty} |\nabla f_n - \nabla f_m|^2 d\mu \\ &\le& \lim_{n \rightarrow \infty} \lim_{m \rightarrow \infty} \sum_{X_M \in \mathcal{X}_{MAX}} \int_{X_M} |\nabla f_n - \nabla f_m|^2 d\mu \\ &=&0. \end{eqnarray*} This tells us that the form is closable. \end{proof} We will show that the two energy forms are the same. To do this, we show that they are the same on the core $C_0^{\operatorname{Lip}}\left(X\right)$; this gives equality on the domain. \begin{lemma} Each function $f \in C_0^{\operatorname{Lip}}\left(X\right)$ satisfies $E(f,f)=\mathcal{E}(f,f)$. \end{lemma} \begin{proof} We can write $X$ as $(X-X^{(n-1)}) \cup X^{(n-1)}$; this is a collection of maximal polyhedra and a set of measure 0. The interior of the maximal polyhedra is a Riemannian manifold without boundary. $X$ is also a locally compact length space, and so it satisfies the conditions of example 4G in \cite{SturmDiffHK}. This implies it has the strong measure contraction property with an exceptional set. Corollary 5.7 in \cite{SturmDiffHK} then tells us that $E(f,f) =\mathcal{E}(f,f)$ for each $f \in C_0^{\operatorname{Lip}}(X)$. The equality is shown by approximating the forms using an increasing sequence of open subsets which limit to $X-X^{(n-1)}$. As $C_0^{\operatorname{Lip}}(X)$ is a core for both $E$ and $\mathcal{E}$, the Dirichlet forms are the same. \end{proof} We will explain more clearly where the domain of this form lies. The domain is the closure of $C_0^{\operatorname{Lip}}(X)$ in the $W^{1,2}(X)$ norm.
This domain is a subset of the set of functions which are in $W^{1,2}$ of the interiors of the maximal polyhedra. \begin{lemma} For $\mathcal{E}$, \begin{align*} \overline{C_0^{\operatorname{Lip}}(X)} \subset \overline{C(X) \cap \left(\bigoplus_{X_M \in \mathcal{X}_{MAX}} W^{1,2}(X_M^o) \right)} \end{align*} where the closure is taken with respect to the $W^{1,2}$ norm, $||\cdot||_2 + \mathcal{E}(\cdot,\cdot)$. $X_M^o$ denotes the interior of $X_M$. \end{lemma} \begin{proof} First note that $C_0^{\operatorname{Lip}}(X) \subset C(X)$. For any $f \in C_0^{\operatorname{Lip}}(X)$, we have the compact subset $Y= \operatorname{supp}(f)$. Then $f$ restricted to $X_M^o$ will be in $W^{1,2}(X_M^o)$, since $||\nabla f||_{2,X_M^o} \le ||\nabla f||_{\infty,X_M}\, \mu(X_M^o \cap \operatorname{supp}(f))^{1/2}$. This tells us \begin{eqnarray*} f \in \bigoplus_{X_M \in \mathcal{X}_{MAX}} W^{1,2}(X_M^o). \end{eqnarray*} We now have a containment without the closures: \begin{align*} C_0^{\operatorname{Lip}}(X) \subset C(X) \cap \left( \bigoplus_{X_M \in \mathcal{X}_{MAX}} W^{1,2}(X_M^o) \right). \end{align*} As we then close both sides with respect to the same norm, we have: \begin{align*} \overline{C_0^{\operatorname{Lip}}(X)} \subset \overline{C(X) \cap \left( \bigoplus_{X_M \in \mathcal{X}_{MAX}} W^{1,2}(X_M^o) \right)}. \end{align*} \end{proof} \subsection{The Laplacian} The Dirichlet form uniquely determines a positive self-adjoint operator \\ $\{ \Delta, \operatorname{Dom}(\Delta) \}$ on $L^2$ where $F=\operatorname{Dom}(\Delta^{\frac{1}{2}})$ and $E(u,v) = (u,\Delta v)$ for all $u\in F$ and $v \in \operatorname{Dom}(\Delta)$. This is done by defining a collection of quadratic forms, \\ $E_{\alpha}(u,v):=E(u,v) + \alpha(u,v)$ for $\alpha >0$. Then, by the Riesz representation theorem, there will be an operator $G_{\alpha}$ so that $E_{\alpha}(G_{\alpha}u,v) = (u,v)$ for any $u,v$ in $\operatorname{Dom}(E)$. The set of these operators forms a $C_0$ resolvent. One can look at inverses, $G_{\alpha}^{-1}$, on the image of $G_{\alpha}$. We can then consider $\Delta = G_{\alpha}^{-1}- \alpha$ on the space $G_{\alpha}(\operatorname{Dom}(E))$. One can show that this definition is independent of $\alpha$. The domain of the operator is $\operatorname{Dom}(\Delta)=G_{\alpha}(\operatorname{Dom}(E))$. It is difficult to explicitly state exactly which functions are in $\operatorname{Dom}(\Delta)$, but the domain is dense in $L^2(X)$. See Fukushima et al.\ \cite{FOT} for the full argument; a fine summary of this is done in Todd Kemp's lecture notes \cite{Kemp}. Note that this set-up will work on each of the skeletons, and so we can use it to define a different Laplacian on each of them. When we define the $E^r$ on a $k$-skeleton, $X^{(k)}$, we'll set $N=k$, integrate over $X^{(k)}$, and let $m$ be a $k$-dimensional measure. This technique will define $\Delta_k$ on a dense subset of $L^2(X^{(k)})$. In the one dimensional case, this Laplacian gives us a structure called a quantum graph. Here, the functions in the domain of the Laplacian should be continuous, and at each vertex $v$ the inward pointing derivatives should sum to zero: $\sum_{e \ni v} \frac{\partial f}{\partial e}(v) = 0$, where $\frac{\partial f}{\partial e}$ denotes the derivative along the edge $e$ in the direction away from $v$. This is known as a Kirchhoff condition. A nice introduction to these graphs and their spectra as well as a wide variety of references to the literature on them can be found in Kuchment \cite{Kuc}. In the two dimensional case, this operator is related to results in a paper of Brin and Kifer \cite{BrinK} which constructs Brownian motion on two dimensional Euclidean complexes.
Bouziane \cite{Bouz} constructs and proves the existence of Brownian motion on admissible Riemannian complexes of any dimension. It would be interesting to determine whether these constructions define the same operator; however, that is not our focus. \chapter{Local Poincar\'{e} Inequalities on $X$} In this chapter we will show that a uniform local Poincar\'{e} inequality holds for a certain class of admissible complexes. Local Poincar\'{e} inequalities have appeared in \cite{White} and \cite{EellF} for finite complexes or for compact subsets of complexes. In White's article \cite{White}, a global Poincar\'{e} inequality was shown for Lipschitz functions on an admissible complex made up of a finite number of polyhedra. The constant in this proof was linear in the number of polyhedra involved, and so it does not extend to an infinite complex. A uniform weak local inequality for Lipschitz functions was also shown on this finite complex. This too differs from our inequality in its dependence on a finite complex. In Eells and Fuglede's book \cite{EellF}, they show that for any relatively compact subset of an admissible complex, a local Poincar\'{e} inequality will hold for locally Lipschitz functions. The larger complex itself can be infinite, but the constant in the inequality depends on the particular choice of compact subset. We will show the following for $f \in \operatorname{Lip}(X)$, under some assumptions on the geometry of $X$: \begin{eqnarray*} \norm{f-f_B}_{p,B} \le p P_0 r \norm{\nabla f}_{p,B} \end{eqnarray*} where $f_B$ is the average of $f$ over $B$, $B=B(z,r)$, and $r<R_0$. Here $R_0$ and $P_0$ are constants depending only on the space $X$. Our result shows that a uniform local Poincar\'{e} inequality will hold for Lipschitz continuous functions on any ball of radius less than $R_0$, where $R_0$ is fixed and depends only on the complex itself, not on the specific choice of ball. We require our complex to be admissible. Our complex can be infinite, but we bound below the angles of the polyhedra and the distance between any two vertices. We also bound above the number of polyhedra that join at a vertex. In both White and Eells and Fuglede these assumptions hold because their sets are either finite or relatively compact. Connections between Poincar\'{e} inequalities and other analytic inequalities can be found in Sobolev met Poincar\'{e} by Haj{\l}asz and Koskela \cite{HajKosk}. \section{Weak Poincar\'{e} Inequalities} We would like to prove a local Poincar\'{e} inequality for an admissible Euclidean polytopal complex. If we look at a convex subset of Euclidean space, this is a well-known statement. We will show it first in a convex space, and then we will generalize it to our locally nonconvex space. A note on our notation: often we will abbreviate $d\mu(x)$ by $dx$. Similarly, we will write the average integral of $f$ over a set $A$ as $\Xint-_A f dx$. \begin{lemma}\label{ConvexSimplify} Let $\Omega$ be a connected convex set with Euclidean distance and structure and $\Omega_1, \Omega_2$ be convex subsets of $\Omega$. For $f \in \operatorname{Lip}(\Omega) \cap L^1(\Omega)$, the following holds: \begin{eqnarray*} \int_{\Omega_2} \int_{\Omega_1} \abs{f(z)-f(y)}dz dy \le 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n} (\mu(\Omega_1)+\mu(\Omega_2)) \int_{\Omega} \abs{\nabla f(y)} dy. \end{eqnarray*} \end{lemma} \begin{proof} The type of argument used here can be found in Aspects of Sobolev-Type Inequalities \cite{LSC}.
Let $\gamma$ be a path from $z$ to $y$. The fundamental theorem of calculus applied along $\gamma$ gives us: \begin{eqnarray*} \abs{f(z)-f(y)} \le \int_{\gamma}\abs{ \nabla f(s)} ds . \end{eqnarray*} Note that if we are in a 1-dimensional space, a convex subset is a line segment. The desired inequality follows from expanding $\gamma$ to $\Omega$, and then noting that integrating over $z$ and $y$ has the effect of multiplying the right hand side by \\ $\mu(\Omega_1)\mu(\Omega_2) \le \operatorname{diam}(\Omega)(\mu(\Omega_1) +\mu(\Omega_2))$. Because $z$ and $y$ are in the same convex region $\Omega$ with a Euclidean distance, we can let the path $\gamma$ be a straight line: \begin{eqnarray*} \abs{f(z)-f(y)} \le \int_0^{\abs{y-z}} \abs{\nabla f\left(z + \rho \frac{y-z}{|y-z|}\right)} d\rho . \end{eqnarray*} We integrate this over $z \in \Omega_1,y \in \Omega_2$. To get a nice bound, we will use a trick from Korevaar and Schoen \cite{KorevaarS}. We split the path into two halves. For each half, we switch into and out of polar coordinates in a way that avoids integrating $\frac{1}{s}$ near $s=0$. This allows us to have a bound which depends on the volumes of $\Omega_1$ and $\Omega_2$ rather than that of $\Omega$. First, we consider the half of the path which is closer to $y \in \Omega_2$. Here $I_{\Omega}(\cdot)$ is the indicator function for $\Omega$. \begin{eqnarray*} \int_{\Omega_1} \int_{\Omega_2} \int_{\frac{|y-z|}{2}}^{|y-z|} \abs{\nabla f(z + \rho\frac{y-z}{|y-z|})}I_{\Omega}(z + \rho\frac{y-z}{|y-z|}) d\rho dy dz . \end{eqnarray*} We change variables so that $y-z = s \theta$. That is, $|y-z| = s$ and $\frac{y-z}{|y-z|} = \theta$. Note that $\operatorname{diam}(\Omega)$ is an upper bound on the distance between $y$ and $z$. \begin{eqnarray*} ...= \int_{\Omega_1}\int_{S^{n-1}} \int_0^{\operatorname{diam}(\Omega)} \int_{s/2}^{s} \abs{\nabla f(z + \rho\theta)}I_{\Omega}(z + \rho\theta) s^{n-1}d\rho ds d\theta dz. \end{eqnarray*} We switch the order of integration. Now, $\rho$ will be between $0$ and $\operatorname{diam}(\Omega)$ and $s$ will be between $\rho$ and $\min(2\rho,\operatorname{diam}(\Omega))$. This allows us to integrate with respect to $s$. \begin{eqnarray*} ... &=& \int_{\Omega_1} \int_{S^{n-1}} \int_0^{\operatorname{diam}(\Omega)} \int_{\rho}^{ \min(2 \rho,\operatorname{diam}(\Omega))} \abs{\nabla f(z + \rho\theta)}I_{\Omega}(z + \rho\theta) s^{n-1} ds d\rho d\theta dz \\ &=& \int_{\Omega_1} \int_{S^{n-1}} \int_0^{\operatorname{diam}(\Omega)} \abs{\nabla f(z + \rho\theta)}I_{\Omega}(z + \rho\theta) \frac{(\min(2 \rho,\operatorname{diam}(\Omega)))^n -\rho^n}{n}d\rho d\theta dz. \end{eqnarray*} Now we reverse the change of variables to set $y = z + \rho \theta$. Since our integral includes an indicator function at $z + \rho \theta$, we have $y \in \Omega$. \begin{eqnarray*} \int_{\Omega_1} \int_{\Omega} \abs{\nabla f(y)} \frac{(\min( 2 |y-z|,\operatorname{diam}(\Omega)))^n - |y-z|^n}{n |y-z|^{n-1}} dy dz. \end{eqnarray*} Let's consider the possible values of $\frac{(\min( 2 |y-z|,\operatorname{diam}(\Omega)))^n - |y-z|^n}{n|y-z|^{n-1}}$. If $|y-z| < \frac{\operatorname{diam}(\Omega)}{2}$, then $\min( 2 |y-z|,\operatorname{diam}(\Omega)) = 2 |y-z|$. This gives us: \begin{eqnarray*} \frac{(\min( 2 |y-z|,\operatorname{diam}(\Omega)))^n - |y-z|^n}{n|y-z|^{n-1}} &=& \frac{ 2^n |y-z|^n - |y-z|^n}{n|y-z|^{n-1}} \\ &=& \frac{2^n-1}{n} |y-z| \\ &\le& \frac{\operatorname{diam}(\Omega)(2^n-1)}{2n}.
\end{eqnarray*} Otherwise, if $|y-z| \ge \frac{\operatorname{diam}(\Omega)}{2}$, then $\min( 2 |y-z|,\operatorname{diam}(\Omega)) =\operatorname{diam}(\Omega) $. This gives us: \begin{eqnarray*} \frac{(\min( 2 |y-z|,\operatorname{diam}(\Omega)))^n - |y-z|^n}{n|y-z|^{n-1}} &=& \frac{ \operatorname{diam}(\Omega)^n - |y-z|^n}{n|y-z|^{n-1}} \\ &\le& 2^{n-1} \frac{ \operatorname{diam}(\Omega)^n - |y-z|^n}{n \operatorname{diam}(\Omega)^{n-1}} \\ &\le& 2^{n-1} \frac{ \operatorname{diam}(\Omega)^n}{n \operatorname{diam}(\Omega)^{n-1}}\\ &=& 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n}. \end{eqnarray*} Both cases are dominated by $ 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n}$. We place this into the original integral: \begin{equation*} \begin{split} \int_{\Omega_1} \int_{\Omega} \abs{\nabla f(y)} & \frac{(\min( 2 |y-z|,\operatorname{diam}(\Omega)))^n - |y-z|^n}{n |y-z|^{n-1}} dy dz \\ &\le \int_{\Omega_1} \int_{\Omega} \abs{\nabla f(y)} 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n} dy dz \\ &= 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n} \mu(\Omega_1) \int_{\Omega} \abs{\nabla f(y)} dy. \end{split} \end{equation*} This is an upper bound for \begin{eqnarray*} \int_{\Omega_1} \int_{\Omega_2} \int_{\frac{|y-z|}{2}}^{|y-z|} \abs{\nabla f(z + \rho\frac{y-z}{|y-z|})}I_{\Omega}(z + \rho\frac{y-z}{|y-z|}) d\rho dy dz . \end{eqnarray*} We can apply the same argument to the half of the geodesic closest to $z \in \Omega_1$, after first substituting $\rho'=|y-z| -\rho$: \begin{equation*} \begin{split} \int_{\Omega_1} \int_{\Omega_2} \int_0^{\frac{|y-z|}{2}} & \abs{\nabla f(z + \rho\frac{y-z}{|y-z|})} I_{\Omega}(z + \rho\frac{y-z}{|y-z|}) d\rho dy dz \\ &= \int_{\Omega_2} \int_{\Omega_1} \int_{\frac{|y-z|}{2}}^{|y-z|} \abs{\nabla f(y + \rho'\frac{z-y}{|z-y|})} I_{\Omega}(y + \rho'\frac{z-y}{|z-y|}) d\rho' dz dy \\ &\le 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n} \mu(\Omega_2) \int_{\Omega} \abs{\nabla f(y)} dy. \end{split} \end{equation*} Combining these with the original inequality, we have \begin{eqnarray*} \int_{\Omega_2} \int_{\Omega_1} \abs{f(z)-f(y)}dz dy \le 2^{n-1} \frac{ \operatorname{diam}(\Omega)}{n} (\mu(\Omega_1)+\mu(\Omega_2)) \int_{\Omega} \abs{\nabla f(y)} dy. \end{eqnarray*} \end{proof} \begin{notation} Let $X$ be an admissible Euclidean polytopal complex of dimension $n$. \end{notation} \begin{definition} Let $B$ be a ball of radius $r$ whose center is on a $D$-dimensional face with the property that $B$ intersects no other $D$-dimensional faces. We define wedges $W_k$ of $B$ to be the closures of each of the connected components of \\ $B-X^{(n-1)}$. \end{definition} Note that for any point $z$ in $X$, a ball $B(z,r)$ satisfying the above criteria exists: for each dimension $D$, we can take any point $z \in X^{(D)}-X^{(D-1)}$ and any $r < d(z,X^{(D-1)})$ and create $B= B(z,r) \subset X$. Then $B$ is a ball of radius $r$ whose center is on a $D$-dimensional face, and $B$ intersects no other $D$-dimensional faces. In essence, the wedges, $W_k$, are formed when the $(n-1)$-skeleton slices the ball $B$ into pieces. This construction tells us that each $W_k$ has diameter at most $2r$, as each of the points in $W_k$ is within distance $r$ of $z$, and $z$ is included in $W_k$.
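To make the wedges concrete, here is a small worked example in dimension two (an illustration only; it is not used in the proofs). Suppose $z$ is a vertex at which $m$ two dimensional faces meet, with the $k$-th face having interior angle $\theta_k$ at $z$, and take $r$ smaller than the distance from $z$ to the faces not containing it. Each wedge is then a circular sector, so that \begin{eqnarray*} \mu(W_k) = \frac{\theta_k r^2}{2} \qquad \mbox{and} \qquad \mu(B(z,r)) = \sum_{k=1}^{m} \frac{\theta_k r^2}{2}. \end{eqnarray*} In particular, the ratios $\mu(B)/\mu(W_k)$ appearing in the estimates below are controlled entirely by the angles $\theta_k$.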
\begin{example} \begin{figure}[h] \centering \includegraphics[angle=0,width=1in]{SkelWithWedges.eps} \hspace{.5in} \includegraphics[angle=0,width=1in]{SkelWedgesInPieces.eps} \caption{Complex with shaded ball B (left); the three wedges for B (right).} \label{SkelWithWedges} \end{figure} In figure \ref{SkelWithWedges} we have an example of a 2 dimensional complex with a shaded ball centered at a vertex. This ball has three wedges; one for each of the two dimensional faces that share the vertex. Note that each wedge is a sector of a disk. \end{example} \begin{definition} We say that $X$ has degree bounded by $M$ if $M$ is the maximal number of edges in $X^{(1)}$ that can share a vertex in $X^{(0)}$. \end{definition} \begin{definition} We say that $X$ has edge lengths bounded below by $\ell$ if \begin{equation*} 0 < \ell \le \inf_{v,w \in X^{(0)}} d(v,w). \end{equation*} \end{definition} Note that having degree bounded by $M$ implies that the maximum number of $k$-dimensional faces that can share a lower dimensional face is also $M$. This tells us that sufficiently small balls will be split into at most $M$ wedges. Note that if $X$ has degree bounded by $M$, then $X^{(k)}$ will as well. When $X$ has degree bounded by $M$, volume doubling will occur locally with a uniform constant. In particular, when the edge lengths are bounded below by $\ell$ the strong statement: \begin{eqnarray*} \mu(B(x,c r)) \le M c^N \mu(B(x,r)) \end{eqnarray*} will hold whenever $c r \le \ell$. For balls in $X$, $N$ will equal $n$, the dimension of $X$. If we restrict to balls in $X^{(k)}$, then this holds with $N=k$. To show a local Poincar\'{e} inequality on $X$, we will split the balls, which are not necessarily convex, up into smaller overlapping pieces which are. We will do this using the wedges. We can use a chaining argument in order to move through $B$ from one of the $W_k$ to another. Note that this uses the fact that our space $X$ is admissible. We will say $W_k$ and $W_j$ are adjacent if they share an $(n-1)$-dimensional face, and let $N(j)$ be the list of indices of the wedges adjacent to $W_j$, including $j$ itself. In order to create paths which we can integrate over, we need an overlapping region between adjacent wedges. For $k \in N(j)$, let $W_{k,j}=W_{j,k}$ be the largest subset of $W_k \cup W_j$ which has the property that $W_k \cup W_{k,j}$ and $W_j \cup W_{k,j}$ are both convex. Then, for each $x$ in $W_{k,j}$ there is a way of describing the rays between $x$ and $W_k$ in a distance preserving manner as one would have in $R^n$. This will justify our use of the $\rho$ in the calculation below. \begin{example} \begin{figure}[h] \centering \includegraphics[angle=0,width=1in]{NonConvexWedgeSetup.eps} \hspace{.5in} \includegraphics[angle=0,width=1in]{NonConvexWedges.eps} \caption{Complex with shaded ball B (left); the two wedges for B and a region which overlaps both of them (right).} \label{NonConvexWedges} \end{figure} In figure \ref{NonConvexWedges} we have a complex and ball with two adjacent wedges. The union of the wedges, $W_1$ and $W_2$, is not convex, so we form the region $W_{1,2}$. In this example, both $W_1 \cup W_{1,2}$ and $W_2 \cup W_{1,2}$ are half disks. \end{example} \begin{theorem}\label{PoincareP1} Let $X$ be an admissible Euclidean polytopal complex of dimension $n$ with degree bounded by $M$.
For each $z \in X$ there exists $r>0$ so that for $B=B(z,r)$ and its corresponding wedges, $W_{i,j}$, the following holds for \\ $f \in \operatorname{Lip}(X) \cap L^1(B)$: \begin{eqnarray*} ||f - f_B||_{1,B} \le 2M \max_{k,j \in N(k)} \left( \frac{\mu(B)}{\mu(W_k)} +2 \right) \frac{2^n r (\mu(W_k) + \mu(W_{j,k}))}{n\mu(W_{j,k})} ||\nabla f||_{1,B} . \end{eqnarray*} \end{theorem} \begin{proof} For a given $z\in X$, let $D$ be the dimension such that $z \in X^{(D)}-X^{(D-1)}$. Pick $r < d(z,X^{(D-1)})$. Let $B=B(z,r)$. For $x$ in $B$ we have: \begin{eqnarray*} |f(x)-f_B| &=& \abs{\int_B \frac{1}{\mu(B)} f(x)dy -\int_B \frac{1}{\mu(B)} f(y)dy} \\ &\le& \frac{1}{\mu(B)} \int_B |f(x) - f(y)|dy. \end{eqnarray*} We would like to apply Lemma \ref{ConvexSimplify} to this; however, $B$ is not necessarily convex. We will construct a path from $x$ to $y$ using a finite number of straight lines, where each of the line segments is contained in a convex region. For simplicity, we will consider $x \in W_i$ and $y \in W_k$. It is quite possible that these two wedges are not contained in a convex subset of $B$. We need to use the fact that our space is locally $(n-1)$-chainable by looking at a chain in $B-\{z\}$ starting at $W_i$ and ending at $W_k$. The pieces of the chain will move us from a point in $W_j$ into a connecting point in $W_{j,l}$, and then from that connecting point in $W_{j,l}$ into a point in $W_l$. Formulated more precisely, there is a sequence of indices, $\sigma(1)=i,\ldots,\sigma(l)=k$, that corresponds to this chain of $W$'s, so that for each $j$, $W_{\sigma(j)}$ and $W_{\sigma(j+1)}$ are adjacent, and none of the indices repeat. We can take points in these regions: $z_1 \in W_{\sigma(1)}$, $z_2 \in W_{\sigma(1),\sigma(2)}$, \ldots, $z_{2j-1} \in W_{\sigma(j)}$ and $z_{2j} \in W_{\sigma(j),\sigma(j+1)}$. Note that each pair in this sequence is located in a convex region: either $W_{\sigma(j)} \cup W_{\sigma(j),\sigma(j+1)}$ or $W_{\sigma(j+1)} \cup W_{\sigma(j),\sigma(j+1)}$. The line segments between these points will define our path $\gamma$ from $x$ to $y$. \begin{eqnarray*} |f(x)-f(y)| &=& |f(x)-f(z_1) + f(z_1) -\cdots+f(z_{2l}) -f(y)| \\ &\le& |f(x)-f(z_1)| + |f(z_1) -f(z_2)|+\cdots+|f(z_{2l}) -f(y)| \\ &=& |f(x)-f(z_1)| + \sum_{j=1}^{l-1} \left( |f(z_{2j})-f(z_{2j-1})|+|f(z_{2j})-f(z_{2j+1})| \right) \\ & & \mbox{} + |f(z_{2l}) -f(y)|. \end{eqnarray*} Since it didn't matter which $z$'s we chose, as long as they were in the proper sets, we can average the pieces over all of the possible $z$'s. \begin{eqnarray*} |f(x)-f(y)| &\le & \lefteqn{\Xint-_{W_{i,\sigma(1)}} |f(x)-f(z_1)| dz_1} \\ & & + \sum_{j=1}^{l-1} \left( \Xint-_{W_{\sigma(j)}} \Xint-_{W_{\sigma(j),\sigma(j+1)}} |f(z_{2j})-f(z_{2j-1})| dz_{2j} dz_{2j-1} \right. \\ & &+ \left. \Xint-_{W_{\sigma(j+1)}} \Xint-_{W_{\sigma(j),\sigma(j+1)}} |f(z_{2j})-f(z_{2j+1})| dz_{2j} dz_{2j+1} \right) \\ & &+ \Xint-_{W_{\sigma(l),k}} |f(z_{2l})-f(y)| dz_{2l} . \end{eqnarray*} We will not want to keep track of the exact path between every pair of regions, although in specific examples one may want to do that in order to achieve a tighter bound. Rather, it is useful simply to integrate over all pairs of neighboring wedges, as this will include everything in our path. \begin{eqnarray*} |f(x)-f(y)| &\le & \lefteqn{\sum_{l \in N(i)} \Xint-_{W_{i,l}} |f(x)-f(z)| dz} \\ & &+ \sum_{j} \sum_{l \ne i, l \in N(j)} \Xint-_{W_l} \Xint-_{W_{j,l}} |f(z)-f(w)| dz dw \\ & &+ \sum_{j \in N(k)} \Xint-_{W_{j,k}} |f(z)-f(y)| dz.
\end{eqnarray*} This new inequality will hold for $x$ and $y$ in any pair of $W_i$ and $W_k$ with $k \ne i$. If we expand our notation so that $W_{i,i}=W_i$, then this will hold when $x$ and $y$ are in the same set $W_k=W_i$. To integrate over all $y \in B$, we can split the integral into two parts: one where $x$ and $y$ are both in $W_i$, and a second where $y$ is in one of the $W_k \ne W_i$. Similarly, we can integrate over $x$ in $W_i$ and then sum over $i$. \begin{equation*} \begin{split} \frac{1}{\mu(B)} \int_B \int_B & |f(x) - f(y)|dy dx\\ \le & \lefteqn{\frac{1}{\mu(B)} \left( \sum_{i,k} \sum_{l \in N(i)} \int_{W_i} \int_{W_k} \Xint-_{W_{i,l}} |f(x)-f(z)|dz dy dx \right.} \\ & +\sum_{i,k,j} \sum_{l \in N(j)} \int_{W_i} \int_{W_k} \Xint-_{W_{j,l}} \Xint-_{W_l} |f(z)-f(w)|dw dz dy dx \\ & + \left. \sum_{i,k} \sum_{j \in N(k)}\int_{W_i} \int_{W_k} \Xint-_{W_{j,k}} |f(z)-f(y)|dz dy dx \right) \\ = & \lefteqn{ \sum_i \sum_{l \in N(i)} \int_{W_i} \Xint-_{W_{i,l}} |f(x)-f(z)|dz dx}\\ &+\mu(B) \sum_{j} \sum_{l \in N(j)} \Xint-_{W_{j,l}} \Xint-_{W_l} |f(z)-f(w)|dw dz \\ &+ \sum_{k} \sum_{j \in N(k)} \int_{W_k} \Xint-_{W_{j,k}} |f(z)-f(y)|dz dy. \end{split} \end{equation*} We can combine this into one double sum by renaming the integration variables (both $x$ and $y$ become $w$) and reindexing so that $i=j$ and $l=k$. \begin{eqnarray*} ... \le \sum_{k} \sum_{j \in N(k)} \left( \frac{\mu(B)}{\mu(W_k)} +2 \right) \int_{W_k} \Xint-_{W_{j,k}} |f(z)-f(w)| dz dw . \end{eqnarray*} Applying Lemma \ref{ConvexSimplify} with $\Omega=W_k \cup W_{j,k}$, $\Omega_1=W_{j,k}$, $\Omega_2=W_k$, and $\operatorname{diam}(\Omega) \le 2r$ to each of the pieces we find: \begin{eqnarray*} ... \le \sum_{k} \sum_{j \in N(k)} \left( \frac{\mu(B)}{\mu(W_k)} +2 \right) \frac{2^n r (\mu(W_k) + \mu(W_{j,k}))}{n \mu(W_{j,k})} \int_{W_k \cup W_{j,k}} |\nabla f(y)| dy. \end{eqnarray*} Note that points in the sets $W_k \cup W_{j,k}$ are counted at most $2M$ times, since each of the $W_k$ has at most $M$ neighbors. This allows us to combine the sums to find: \begin{equation*} \begin{split} \frac{1}{\mu(B)} \int_B \int_B &|f(x) - f(y)|dy dx \\ &\le 2M \max_{k,j \in N(k)} \left( \frac{\mu(B)}{\mu(W_k)} +2 \right) \frac{2^n r (\mu(W_k) + \mu(W_{j,k}))}{n \mu(W_{j,k})} \int_{B} |\nabla f(y)| dy . \end{split} \end{equation*} This is the desired result. \end{proof} Now that we have the inequality when $p=1$, we can use a trick to extend it to other values of $p$. \begin{lemma}\label{PoincareP} If for any $f \in \operatorname{Lip}(X)$ we have: \begin{eqnarray*} ||f-f_B||_{1,B} \le C r||\nabla f||_{1,B} \end{eqnarray*} for $B=B(z,r)$, then \begin{eqnarray*} \inf_{c \in (-\infty,\infty)} ||f - c||_{p,B} &\le& p C r||\nabla f||_{p,B} \text{ and} \\ ||f-f_B||_{p,B} &\le& 2p C r||\nabla f||_{p,B} \end{eqnarray*} hold for $1 \le p < \infty$. \end{lemma} \begin{proof} Let $g(x)=|f(x)-c_f|^p \operatorname{sign}(f(x)-c_f)$. Note that $g$ is in $\operatorname{Lip}(X)$. Then if $\nabla f$ is the gradient of $f$, we have that $p |f(x)-c_f|^{p-1} |\nabla f(x)|$ is the length of the gradient of $g$. Pick a value of $c_f$ so that $g_B=\int_B g(x) dx =0$. (One will exist; we consider $g_B$ as a function of $c_f$ and apply the intermediate value theorem.) Applying our assumption to $g$, we have: \begin{eqnarray*} \int_B |g(x)-0| dx &\le& C r \int_B |\nabla g(x)| dx \\ &=& C r \int_B p |f(x)-c_f|^{p-1} |\nabla f(x)| dx .
\end{eqnarray*} Now we use H\"{o}lder's inequality to find: \begin{eqnarray*} \int_B |f(x)-c_f|^{p-1} |\nabla f(x)| dx \le \left(\int_B (|f(x)-c_f|^{p-1})^q dx\right)^{1/q} \left(\int_B |\nabla f(x)|^p dx\right)^{1/p} . \end{eqnarray*} Since $\frac{1}{p} +\frac{1}{q} =1$, we have $(p-1)q=p$. Combining this with the above inequality gives us: \begin{eqnarray*} \left(\int_B |f(x)-c_f|^p dx\right)^{1/p} \le p C r \left(\int_B |\nabla f(x)|^p dx\right)^{1/p} . \end{eqnarray*} When we take an infimum, we find that \begin{eqnarray*} \inf_c \left(\int_B |f(x)-c|^p dx\right)^{1/p} \le \left(\int_B |f(x)-c_f|^p dx\right)^{1/p}. \end{eqnarray*} When we combine these, we have \begin{eqnarray*} \inf_{c} ||f - c||_{p,B} \le p C r||\nabla f||_{p,B}. \end{eqnarray*} In the case where $p=2$, it is easy to compute the infimum exactly. Consider \begin{eqnarray*} h(c)&=& \int_B |f(x)-c|^2 dx \\ &=& \int_B f(x)^2 dx -2c \int_B f(x) dx +c^2 \mu(B) . \end{eqnarray*} This is a parabola whose minimum occurs at $c=\frac{1}{\mu(B)} \int_B f(x) dx = f_B$. Its minimum is the same as that of $\sqrt{h(c)}$, and so this gives us \begin{eqnarray*} ||f-f_B||_{2,B} \le 2 C r||\nabla f||_{2,B}. \end{eqnarray*} When $p \ne 2$, we can use Jensen's inequality to get the average. We do this by noticing: \begin{eqnarray*} \Xint-_B |f_B -c|^p dx &=& \abs{\Xint-_B (f-c)dx }^p \\ & \le& \Xint-_B |f-c|^p dx. \end{eqnarray*} This tells us that for any $c$, \begin{eqnarray*} ||f -f_B||_{p,B} &\le& ||f -c||_{p,B} + ||f_B -c||_{p,B} \\ &\le& 2 ||f -c||_{p,B}; \end{eqnarray*} taking the infimum over $c$ then gives \begin{eqnarray*} ||f -f_B||_{p,B} \le 2 \inf_c ||f -c||_{p,B} \le 2 p C r||\nabla f||_{p,B}. \end{eqnarray*} \end{proof} \begin{definition} We say that $X$ has solid angle bound $\alpha$ if for each \\ $z \in X^{(D)}-X^{(D-1)}$ and $r <d(z,X^{(D-1)})$ the wedges of the ball $B(z,r)$ satisfy \begin{eqnarray*} \alpha \le \frac{\mu(W_k)}{\mu(r^n S^{(n-1)})} \le 1. \end{eqnarray*} \end{definition} Note that the right hand side of the inequality reflects the fact that each of the $W_k$ is a subset of a Euclidean ball. If we have a uniform bound on the solid angles formed, then the constant in Theorem \ref{PoincareP1} will simplify. \begin{corrolary}\label{VolPPoincare} Suppose $X$ is an admissible $n$-dimensional Euclidean polytopal complex with solid angle bound $\alpha$, and $f\in \operatorname{Lip}(X)$. For each $z \in X$ there exists $r>0$ so that for $B=B(z,r)$ we have \begin{eqnarray*} ||f - f_B||_{p,B} \le C_{X} p r ||\nabla f||_{p,B} \end{eqnarray*} where the constant $C_{X} = \frac{2^{3n+3} M^2}{\alpha n}$ depends only on the space $X$. \end{corrolary} \begin{proof} We need to bound $ \max_{k,j \in N(k)} \left( \frac{\mu(B)}{\mu(W_k)} +2 \right) \frac{\mu(W_k) + \mu(W_{j,k})}{\mu(W_{j,k})}$ from Theorem \ref{PoincareP1}. Since we will want to bound the $\mu(W_{j,k})$, we need a way to compare its size to the volume of the other $W_j$. We can subdivide the space initially by cutting each piece in half in each of the $n$ dimensions, so that there are at most $M'= 2^n M$ pieces. When the $W'_k$ and $W'_j$ are adjacent, this tells us that $W'_{j,k}$ has a volume which is larger than $\min(\mu(W'_j),\mu(W'_k))$. Thus $\frac{\mu(W'_k) + \mu(W'_{j,k})}{\mu(W'_{j,k})} \le 2$. To bound $\frac{\mu(B)}{\mu(W'_k)}$ we will need the solid angle bound. Combining the solid angle bound with the factor of $2^{-n}$ decrease in wedge size gives us the modified inequality: \begin{eqnarray*} \mu(W'_k) \le 2^{-n} \mu(r^n S^{(n-1)}) \le \frac{\mu(W'_k)}{\alpha}.
\end{eqnarray*} Summing the left hand side of the inequality over $k$ tells us that \begin{eqnarray*} \mu(B) \le M 2^n 2^{-n} \mu(r^n S^{(n-1)}). \end{eqnarray*} If we multiply the right hand side of the inequality by $M2^n$, we have \begin{eqnarray*} M \mu(r^n S^{(n-1)}) \le \frac{M 2^n \mu(W'_k)}{\alpha}. \end{eqnarray*} Combining these two inequalities, we find that: \begin{eqnarray*} \frac{\mu(B)}{\mu(W'_k)} \le \frac{M2^n}{\alpha}. \end{eqnarray*} We can substitute these into our constant to get: \begin{eqnarray*} 2M' \max_{k,j \in N(k)} \left( \frac{\mu(B)}{\mu(W'_k)} +2\right) \frac{2^n r(\mu(W'_k)+\mu(W'_{j,k}))}{n\mu(W'_{j,k})} \le 2 M2^n \left(\frac{M2^n}{\alpha} +2\right) \frac{2^{n+1}r}{n} . \end{eqnarray*} This combined with Theorem \ref{PoincareP1} and Lemma \ref{PoincareP} gives us that: \begin{eqnarray*} ||f - f_B||_{p,B} \le \frac{2^{3n+3} M^2}{\alpha n} p r ||\nabla f||_{p,B}. \end{eqnarray*} \end{proof} We would like to extend these theorems so that the radius is not dependent on the center of the ball. To do so, we will first show a weaker Poincar\'{e} inequality, and then we will extend it via a Whitney type covering to a stronger version. \begin{theorem}\label{weakp1} Suppose $X$ is an admissible $n$-dimensional Euclidean polytopal complex with solid angle bound $\alpha$ and edge lengths bounded below by $\ell$, and \\ $f\in \operatorname{Lip}(X)$. Set $\kappa = 6\left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n$ and $R_0 := \frac{\ell}{\kappa} \le \frac{\inf_{v,w\in X^{(0)}}d(v,w)}{\kappa}$. Then the following inequality holds for each $z\in X$ and each $0\le r \le R_0$: \begin{eqnarray*} \int_{B(z,r)} |f(x)-f_{B(z,r)}| dx \le r C_{Weak} \int_{B(z,\kappa r)} |\nabla f(x)|dx \end{eqnarray*} where $C_{Weak}= \frac{2^{3n+3} M^3 \kappa^{N+1}}{\alpha n}$. \end{theorem} \begin{proof} Let $z \in X$ and $r\le R_0$ be given; in particular, $r \le \frac{\inf_{v,w\in X^{(0)}} d(v,w)}{6\left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n }$. If $d(z,X^{(n-1)}) > r$, then the result follows as a weaker version of Corollary \ref{VolPPoincare}. Otherwise, we will need to find a point $v_k$ which has the property that it is on a $k$-skeleton, and there are no other faces in the $k$-skeleton that are intersected by $B(v_k,d(v_k,z)+r)$. We will do this by descending down the skeletons. If there is a point within $r$ of $z$ with this property, we will use it. If not, set $r_0=3r$. Then there is a $k$ such that the lowest dimensional skeleton that is intersected by $B(z,r_0)$ is $X^{(k)}$, and $X^{(k)}$ is intersected by $B(z,r_0)$ in at least two points $v_k$ and $w_k$ on two different faces. If these faces did not intersect, then they would be at least distance $\inf_{v,w\in X^{(0)}} d(v,w)$ from one another. This would imply that $\inf_{v,w\in X^{(0)}} d(v,w) \le 2r_0 = 6r$, a contradiction. Thus those two faces intersect in a smaller $j$-dimensional face. Call $v_j$ the point on the $j$-dimensional face which minimizes $\min(d(v_j,v_k),d(v_j,w_k))$. These three points form a triangle with angle $v_k v_j w_k \ge \alpha$, where $\alpha$ is the smallest interior angle in $X$. Note that this angle is bounded by the assumption $\alpha \le \frac{\mu(W_k)}{\mu(r^n S^{(n-1)})}$. The triangle that would maximize the minimum distance to this new point, $\min(d(v_j,v_k),d(v_j,w_k))$, is an isosceles one with angle $v_k v_j w_k = \alpha$.
The law of cosines tells us that for the isosceles triangle, $d(v_k,w_k)^2 = 2 d(v_k,v_j)^2(1-\cos(\alpha))$, and so for a general triangle, $d(v_k,v_j) \le \frac{d(v_k,w_k)}{\sqrt{2(1-\cos(\alpha))}} \le \frac{2r_0}{\sqrt{2(1-\cos(\alpha))}}$. If this $v_j$ works, we're done. Otherwise, we will have at least two points, $v_j$ and $w_j$, within $\left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)r_0$ of $z$. We will repeat the process by taking new $r$'s of the form $r_{i+1}= \left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)r_i$ until we have a point which works. Note that each time we repeat it, we find a point on a lower dimensional skeleton. The worst case scenario will have us repeat this $n$ times until we're left with at least one point on $X^{(0)}$. The largest radius that we could require is $R= \left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n 3 r$. Using this $R$, we can show that $B(v,R)$ does not intersect two vertices. The condition $r \le \frac{\inf_{v,w\in X^{(0)}} d(v,w)}{6\left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n }$ tells us that $R \le \frac{1}{2}\inf_{v,w\in X^{(0)}}d(v,w)$. As two vertices cannot be closer than the closest pair, $B(v,R)$ contains at most one vertex. This construction gives us a center, $v$, on a $k$-dimensional face, and a radius, $R \le \left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n 3 r $, so that $B(v,R)$ intersects only the $k$-dimensional face that $v$ is on. This allows us first to recenter our ball around $v$ and then to apply Corollary \ref{VolPPoincare} to $f$ on $B(v,R)$. Then, as $\kappa = 6\left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n$, we find $B(v,R) \subset B(z,\kappa r)$. \begin{eqnarray*} \int_{B(z,r)} |f(x)-f_{B(z,r)}| dx &\le& \frac{1}{\mu(B(z,r))}\int_{B(z,r)} \int_{B(z,r)}|f(x)-f(y)| dx dy \\ &\le& \frac{1}{\mu(B(z,r))}\int_{B(v,R)} \int_{B(v,R)}|f(x)-f(y)| dx dy \\ &\le& \frac{\mu(B(v,R))}{\mu(B(z,r))} \int_{B(v,R)} |f(x)-f_{B(v,R)}| dx \\ &\le& \frac{\mu(B(v,R))}{\mu(B(z,r))} \frac{2^{3n+3} M^2}{\alpha n} R \int_{B(v,R)} |\nabla f(x)|dx \\ &\le& \frac{\mu(B(z,\kappa r))}{\mu(B(z,r))}\frac{2^{3n+3} M^2}{\alpha n} \kappa r \int_{B(z,\kappa r)} |\nabla f(x)|dx\\ &\le& M \kappa^N \frac{2^{3n+3} M^2}{\alpha n} \kappa r \int_{B(z,\kappa r)} |\nabla f(x)|dx . \end{eqnarray*} \end{proof} \section{Whitney Covers} We would like to strengthen the weak version of the Poincar\'{e} inequality. We'll do this by using a Whitney type covering of the ball, $E=B(z,r)$. Once we have this cover, we can use a chaining argument to allow us to replace \\ $\kappa=6\left(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1\right)^n$ with 1. Given our set, we will consider a collection $\mathcal{F}$ of balls such that \begin{description} \item(1) The balls $B \in \mathcal{F}$ are disjoint. \item(2) If we expand the balls to ones with twice the radius, we cover all of $E$: $\cup_{B\in \mathcal{F}} 2B =E$. \item(3) For any $B \in \mathcal{F}$, its radius is $r_B = 10^{-3}\kappa^{-1} d(B,\partial E)$. This implies $10^{3}\kappa B \subset E$. Note that this also tells us that the distance from the center of $B$ to the boundary of $E$ is $(10^{-3}\kappa^{-1} +1)d(B, \partial E) = (10^3\kappa +1)r_B$. \item(4) $\sup _{x \in E} |\{B\in \mathcal{F} | x \in 36 \kappa B \}| \le K$. \end{description} Note that the constant $\kappa$ depends on $X$ but not on the specific choice of $E$. We will show in the following lemma that $K$ is independent of $E$ as well. \begin{lemma} Property 4 is satisfied for $K \le C_{vol}^{\log_2(8(10^3\kappa +1))}$, where $C_{vol}$ is the local volume doubling constant of $X$.
When $E$ is a ball in $X$ which intersects only one vertex, we have $K \le M (8(1+10^{3}\kappa))^N$. \end{lemma} \begin{proof} Let a point $x\in E$ be given. Let $B(y,r_y) \in \mathcal{F}$ be a ball centered at $y$ with the property that $x \in 36\kappa B(y,r_y)$. Then: \begin{eqnarray*} d(x,y) &\le& 36 \kappa r_y = 36 \kappa \cdot 10^{-3}\kappa^{-1} d(y, \partial E) \\ &\le& .036 (d(x,y) +d(x,\partial E)). \end{eqnarray*} When we solve for $d(x,y)$, we have: \begin{eqnarray*} d(x,y) \le \frac{.036}{1-.036}d(x,\partial E) . \end{eqnarray*} The triangle inequality tells us that \begin{eqnarray*} d(x, \partial E) -d(x,y) \le d(y,\partial E ) \le d(y,x) + d(x, \partial E). \end{eqnarray*} We use this to bound $d(y,\partial E )$. \begin{eqnarray*} \frac{1-.072}{1-.036}d(x,\partial E) \le d(y,\partial E) \le \frac{1}{1-.036}d(x,\partial E). \end{eqnarray*} Because $r_y=(10^3\kappa +1)^{-1}d(y,\partial E)$, this tells us that the radius $r_y$ is bounded by \begin{eqnarray*} (1+10^{3}\kappa)^{-1} \frac{1-.072}{1-.036}d(x,\partial E) \le r_y \le \frac{(1+10^{3}\kappa)^{-1}}{1-.036}d(x,\partial E). \end{eqnarray*} These inequalities hold for any $B(y,r_y) \in \mathcal{F}$ with $x \in 36\kappa B(y,r_y)$. Each of the balls, $B_y$, will be contained in $B_x:=B(x,r_1d(x,\partial E))$ where $r_1=\frac{(1+10^{3}\kappa)^{-1} + .036}{1-.036 }$. The $B_y$ have radius at least $r_2d(x,\partial E)$ where $r_2=(1+10^{3}\kappa)^{-1}\frac{1-.072}{1-.036}$. We also know that the $B_y$ are disjoint. This tells us that: \begin{eqnarray*} |\{B\in \mathcal{F} | x \in 36 \kappa B \}| \min_{B_y \in \mathcal{F}\cap B_x} \mu(B_y) \le \sum_{B_y \in \mathcal{F} \cap B_x} \mu(B_y) \le \mu(B_x). \end{eqnarray*} We can use volume doubling to compare the sizes of $\min_{B_y \in \mathcal{F}\cap B_x} \mu(B_y)$ and $\mu(B_x)$. Note that $r_1 < 2$ and $\frac{1}{2(1+10^{3}\kappa)}<r_2$. This tells us that \\ $B(x,r_1 d(x, \partial E)) \subset B(y, 8(1+10^3 \kappa)r_2 d(x,\partial E))$. \begin{eqnarray*} \mu(B_x) \le C_{vol}^{\log_2(8(1+10^{3}\kappa))} \mu(B_y). \end{eqnarray*} Combining these inequalities and taking the supremum over $x \in E$ gives us: \begin{eqnarray*} \sup _{x \in E} | \{B\in \mathcal{F} | x \in 36 \kappa B \} | \le C_{vol}^{\log_2(8(1+10^{3}\kappa))}. \end{eqnarray*} Note that this only depends on $\kappa$ and $C_{vol}$. When $E$ intersects only one vertex we have: \begin{eqnarray*} \mu(B_x) \le M 2^{2N} (2(1+10^{3}\kappa))^N \mu(B_y). \end{eqnarray*} This gives us a more refined estimate. Here $N$ is the dimension of $X$: \begin{eqnarray*} \sup _{x \in E} | \{B\in \mathcal{F} | x \in 36 \kappa B \} | \le M (8(1+10^{3}\kappa))^N . \end{eqnarray*} \end{proof} We will first describe properties of this collection, and then we will use them to show a Poincar\'{e} inequality. This is a modified version of the argument found in \cite{LSC}. We can also use this technique to take a Poincar\'{e} inequality on a small ball and extend it to one on a larger ball whenever we have volume doubling. This increases the constant involved, so it cannot be done indefinitely, but given a fixed radius we will be able to have inequalities that hold up to balls of that size. We begin with a bit of notation. Let $B_z \in \mathcal{F}$ be a ball such that $z\in 2B_z$. Note that there may be more than one; we will pick one arbitrarily. As $z$ is the center of $E$, we will call $B_z$ the central ball. For a ball, $B$, call the center $x_B$, and fix $\gamma_B$, a distance minimizing curve from $z$ to $x_B$.
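Before turning to the chaining lemmas, we record a simple consequence of property (3) which we will use without further comment: the dilated balls appearing in properties (3) and (4) stay inside $E$. Indeed, for $B = B(x_B,r_B) \in \mathcal{F}$ the distance from $x_B$ to $\partial E$ is $(10^3\kappa +1)r_B$, and $36\kappa < 10^3\kappa +1$, so \begin{eqnarray*} 36 \kappa B = B(x_B,36\kappa r_B) \subset B(x_B,(10^3\kappa +1)r_B) \subset E, \end{eqnarray*} and in particular $6\kappa B \subset E$ as well.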
\begin{lemma}\label{536} For any $B \in \mathcal{F}$ we have \begin{eqnarray*} d(\gamma_B,\partial E) \ge \frac{1}{2} d(B,\partial E) = \frac{1}{2}\kappa 10^3 r_B. \end{eqnarray*} If $B'\in \mathcal{F}$ has the property $2B' \cap \gamma_B \ne \emptyset$, then $r_{B'}\ge \frac{1}{4}r_B$. \end{lemma} \begin{proof} The first claim will follow from multiple applications of the triangle inequality. Let $\alpha$ be the point in $\gamma_B$ which is closest to the boundary: \\ $d(\gamma_B,\partial E) = d(\alpha, \partial E)$. Then we can bound $r_E$: \begin{eqnarray*} d(z,\alpha) + d(\alpha,\partial E) \ge d(z, \partial E) = r_E \end{eqnarray*} and we can bound the distance from $B$ to $\partial E$: \begin{eqnarray*} d(x_B,\alpha) + d(\alpha,\partial E) \ge d(x_B,\partial E) \ge d(B,\partial E). \end{eqnarray*} Summing them, we find that: \begin{eqnarray*} d(z,\alpha) + d(x_B,\alpha) + 2d(\alpha,\partial E)\ge r_E +d(B,\partial E). \end{eqnarray*} As $\alpha$ lies on the distance minimizing curve $\gamma_B$ from $z$ to $x_B$, we have \\ $d(z,\alpha) + d(\alpha,x_B)=d(z,x_B) \le r_E$. Putting this into the inequality, we find: \begin{eqnarray*} r_{E} + 2d(\alpha,\partial E) &\ge& r_{E}+d(B,\partial E) \\ d(\alpha,\partial E) &\ge& \frac{1}{2}d(B,\partial E). \end{eqnarray*} The second part follows from the fact that \begin{eqnarray*} \frac{1}{2}d(B,\partial E) \le d(\gamma_B,\partial E) \le d(\gamma_B \cap 2B',\partial E). \end{eqnarray*} If $\alpha'$ is the point in $\gamma_B \cap 2\bar{B'}$ which is closest to $\partial E$ and $\beta'$ is the point in $\bar{B'}$ that realizes the distance to the boundary $\partial E$, then we have \begin{eqnarray*} d(\gamma_B \cap 2B',\partial E) = d(\alpha',\partial E) &\le& d(\alpha',x_{B'}) + d(x_{B'},\beta') + d(B',\partial E) \\ &\le& 2r_{B'} + r_{B'} + d(B',\partial E) . \end{eqnarray*} Combining this with property (3), we have: \begin{eqnarray*} \frac{1}{2} 10^3\kappa r_{B} &\le& 3r_{B'} +10^3\kappa r_{B'} \\ \frac{1}{4}r_{B} &\le& \frac{10^3\kappa}{2(3+10^3\kappa)} r_{B} \le r_{B'}. \end{eqnarray*} \end{proof} For each ball $B$ in $\mathcal{F}$, we would like to define a string of balls, $\mathcal{F}(B)$, that takes $B_z$ to $B$. Set $B_0=B_z$. Then, for the first point on $\gamma_B$ that is not contained in $2B_i$, take a ball $B_{i+1}$ in $\mathcal{F}$ such that that point is contained in $2B_{i+1}$. As $2B_{i+1}$ is open, this guarantees that $2B_i\cap 2B_{i+1} \ne \emptyset$. We continue in this manner until $2B_{\ell -1}\cap 2B \ne \emptyset$. Then we set $B_{\ell} =B$. We label $\mathcal{F}(B)= \{ B_i \}_{i=0}^{\ell}$. Note that due to volume doubling, the chain will be finite. This chain will allow us to move from the central ball to any other ball in the cover of $E$. It is useful because neighboring balls are of comparable radii and volume. \begin{lemma} \label{537} For any $B\in \mathcal{F}$ and any $B_i,B_{i+1} \in \mathcal{F}(B)$ we can compare the radii, where $r_j=r_{B_j}$, in the following manner: \begin{eqnarray*} (1 + 10^{-2}\kappa^{-1})^{-1}r_i \le r_{i+1} \le (1 + 10^{-2}\kappa^{-1}) r_i. \end{eqnarray*} We also have $B_{i+1}\subset 6B_{i}$ and $B_{i}\subset 6B_{i+1}$, and so \begin{eqnarray*} \mu(6B_i \cap 6B_{i+1}) \ge \max\{\mu(B_i),\mu(B_{i+1})\}. \end{eqnarray*} \end{lemma} \begin{proof} Let $x_i$ and $x_{i+1}$ be the centers of $B_i$ and $B_{i+1}$ respectively. By our construction, $2B_i \cap 2B_{i+1} \ne \emptyset$. This tells us that $d(x_i,x_{i+1}) \le 2r_i + 2r_{i+1}$.
\begin{eqnarray*} d(x_{i+1},\partial E) &\le& d(x_{i+1},x_i) + d(x_i,\partial E) \\ r_{i+1} + d(B_{i+1},\partial E) &\le& (2r_i + 2r_{i+1}) + (r_i + d(B_i,\partial E)) \\ r_{i+1} +10^3 \kappa r_{i+1} &\le& 2r_{i+1} + 3r_i + 10^3 \kappa r_{i} \\ r_{i+1} &\le& \frac{3+10^3 \kappa}{10^3 \kappa-1} r_i = \left(1+\frac{4}{10^3 \kappa-1}\right) r_i . \end{eqnarray*} This tells us that $r_{i+1} \le(1+10^{-2}\kappa^{-1}) r_i$. By a symmetric argument, we also get the lower bound. To show set inclusions, we use the fact that \begin{eqnarray*} d(x_i,x_{i+1}) \le 2r_i + 2r_{i+1} \le 2r_i +2(1+10^{-2}\kappa^{-1}) r_i . \end{eqnarray*} Since any point in $B_{i+1}$ is within distance $r_{i+1}$ of $x_{i+1}$, the triangle inequality tells us that it is within distance $2r_i +2(1+10^{-2}\kappa^{-1}) r_i + r_{i+1} \le 6r_i$ of $x_i$. This gives us the inclusion $B_{i+1}\subset 6B_{i}$. The reverse holds by a symmetric argument, and so $\mu(6B_i \cap 6B_{i+1}) \ge \max\{\mu(B_i),\mu(B_{i+1})\}$ follows. \end{proof} \begin{lemma}\label{538} Given $B\in \mathcal{F}$ and $A \in \mathcal{F}(B)$ we have $B \subset (10^3\kappa +9)A$. \end{lemma} \begin{proof} Let $x_B$ and $x_A$ be the centers of $B$ and $A$ respectively, and let $\alpha$ be a point in $2A\cap \gamma_B$. By Lemma \ref{536}, we know that $4 r_A \ge r_B$. Note that $\alpha$ occurs on the distance minimizing curve $\gamma_B$ between $x_B$, the center of $B$, and $z$, the center of the large ball $E$. This tells us that $d(z,x_B) = d(z,\alpha) +d(\alpha,x_B)$. Then \begin{eqnarray*} d(x_A,x_B) &\le& d(x_A,\alpha) + d(\alpha,x_B) \\ &=& d(x_A,\alpha) + (d(z,x_B) - d(z,\alpha)) \\ &\le& 2r_A +d(z,\partial E) - d(z,\alpha) \\ &\le& 2r_A +(d(z,\alpha) +d(\alpha,\partial E)) - d(z,\alpha) \\ &\le& 2r_A +d(\alpha, x_A) + d(x_A,\partial E) \\ &\le& 4r_A + (r_A + 10^3\kappa r_A) = (10^3\kappa +5)r_A . \end{eqnarray*} Because all points in $B$ are within $r_B \le 4 r_A$ of $x_B$, they will be within \\ $(10^3\kappa +5)r_A + 4r_A = (10^3\kappa +9)r_A$ of $x_A$. Thus, $B \subset (10^3\kappa +9)A$ holds. \end{proof} We now have a number of lemmas that describe the geometry of the covering. We can use these to get an extension of our Poincar\'{e} inequality. We will do this by using this chain of balls to get a chain of inequalities. Our first step is to compare the averages of neighboring balls in the chain. \begin{lemma}\label{539} For $B_i$ and $B_{i+1}$ neighboring balls in a chain $\mathcal{F}(B)$, we have \begin{eqnarray*} |f_{6B_i} - f_{6B_{i+1}}| \le C_{Weak} 18 \kappa \frac{r_i}{\mu(B_i)} \int_{36\kappa B_i} |\nabla f(x)| d\mu(x) \end{eqnarray*} whenever $f$ satisfies \begin{eqnarray*} ||f-f_{6B_i}||_{1,6B_i} \le 6 C_{Weak} \kappa r_i ||\nabla f||_{1,6\kappa B_i} \end{eqnarray*} for all $B_i \in \mathcal{F}(B)$. \end{lemma} \begin{proof} We can write: \begin{equation*} \begin{split} \mu(6B_i \cap 6B_{i+1}) & |f_{6B_i} - f_{6B_{i+1}}| \\ & = \int_{6B_i \cap 6B_{i+1}} |f_{6B_i} - f_{6B_{i+1}}| d\mu(x) \\ & \le \int_{6B_i \cap 6B_{i+1}} |f(x)-f_{6B_i}|+|f(x)- f_{6B_{i+1}}| d\mu(x) \\ & \le \int_{6B_i} |f(x)-f_{6B_i}| d\mu(x) +\int_{6B_{i+1}}|f(x) - f_{6B_{i+1}}| d\mu(x) \\ & \le C_{Weak} 6 \kappa \left(r_i \int_{6\kappa B_i}|\nabla f(x)| d\mu(x) + r_{i+1} \int_{6\kappa B_{i+1}}|\nabla f(x)| d\mu(x) \right)\\ & \le C_{Weak} 6 \kappa \left(r_i \int_{6\kappa B_i}|\nabla f(x)| d\mu(x) + 2 r_{i} \int_{36\kappa B_{i}}|\nabla f(x)| d\mu(x) \right)\\ & \le C_{Weak} 18 \kappa r_i \int_{36\kappa B_{i}}|\nabla f(x)| d\mu(x) .
\end{split} \end{equation*} This string of inequalities holds by the triangle inequality, set inclusion, the weak Poincar\'{e} inequality (Theorem \ref{weakp1}), and the comparisons in Lemma \ref{537}. By Lemma \ref{537} we know that $\mu(B_i) \le \mu(6B_i \cap 6B_{i+1})$, and so we can rewrite this to get: \begin{eqnarray*} \mu(B_i)|f_{6B_i} - f_{6B_{i+1}}| \le 18 \kappa r_i C_{Weak} \int_{36\kappa B_i} |\nabla f(x)| d\mu(x). \end{eqnarray*} \end{proof} Recall that we have shown $C_{Weak}=\frac{2^{3n+3} M^3 \kappa^{n+1}}{\alpha n}$, as in Theorem \ref{weakp1}, for our small balls containing only one vertex. We are now in a position to prove our main theorem. \begin{theorem}\label{UnifPoincare1General} Let $E$ be a set whose subsets satisfy volume doubling with constant $C_{vol}$. Suppose $\mathcal{F}$ is a Whitney type cover of $E$ and that $f$ satisfies \begin{eqnarray*} ||f-f_{6B_i}||_{1,6B_i} \le 6 C_{Weak} \kappa r_i ||\nabla f||_{1,6\kappa B_i} \end{eqnarray*} for all $B_i \in \mathcal{F}$. Then \begin{eqnarray*} \int_E |f(x)-f_E| d\mu(x) \le P_0 r \int_E |\nabla f(x)| d\mu(x) \end{eqnarray*} holds where $ P_0=\left(1+3 C_{vol}^{1+\log_2(10^3 \kappa +9)}\right) K C_{Weak} 12 \kappa 10^{-3} $. \end{theorem} \begin{proof} We want to bound $|f-f_E|$. In order to do this, we will split this quantity into two essentially similar pieces, $|f-f_{6B_z}|$ and $|f_E-f_{6B_z}|$. At the end of the proof, we will show that $|f_E-f_{6B_z}|$ can be bounded by $|f-f_{6B_z}|$. Because of this, we only need to consider $|f-f_{6B_z}|$. We will take $f$ minus its average on the central ball and put it into a form where we can take advantage of the covering. This will involve splitting this further into chains of sufficiently small balls, and then applying the weak Poincar\'{e} inequality to them. After a bit of work, this will give us the desired inequality. First, we will use the fact that $\cup_{B \in \mathcal{F}} 2B$ covers all of $E$ to split the integral up into pieces. \begin{eqnarray*} \int_E |f(x)-f_{6B_z}| d\mu(x) &\le& \sum_{B \in \mathcal{F}} \int_{2B} |f(x)-f_{6B_z}| d\mu(x) \\ &\le& \sum_{B \in \mathcal{F}} \int_{2B} |f(x)-f_{6B}| d\mu(x) + \int_{2B} |f_{6B}-f_{6B_z}| d\mu(x) . \end{eqnarray*} The first piece can be bounded nicely using the weak Poincar\'{e} inequality (Theorem \ref{weakp1}). \begin{eqnarray*} \sum_{B \in \mathcal{F}} \int_{2B} |f(x)-f_{6B}| d\mu(x) &\le& \sum_{B \in \mathcal{F}} \int_{6B} |f(x)-f_{6B}| d\mu(x) \\ &\le& \sum_{B \in \mathcal{F}} C_{Weak} 6 \kappa r_B \int_{6 \kappa B} |\nabla f(x)| d\mu(x)\\ &\le& K C_{Weak} 6 \kappa 10^{-3} r_{E} \int_{E} |\nabla f(x)| d\mu(x) . \end{eqnarray*} The last part of the inequality follows from the fact that $6 \kappa B \subset E$ (by Lemma \ref{536}), and at most $K$ of the dilated balls $6 \kappa B$, $B \in \mathcal{F}$, contain any given point in $E$. The second piece can be rewritten as: \begin{eqnarray*} \sum_{B \in \mathcal{F}}\int_{2B} |f_{6B}-f_{6B_z}| d\mu(x) &=&\sum_{B \in \mathcal{F}}\mu(2B) |f_{6B}-f_{6B_z}|\\ &\le& \sum_{B \in \mathcal{F}}C_{vol} \mu(B) |f_{6B}-f_{6B_z}|. \end{eqnarray*} Now let us consider what happens when we fix $B$. We have a chain, $\mathcal{F}(B)$, connecting $B$ to the central ball; we can use this and Lemma \ref{539} to find: \begin{eqnarray*} |f_{6B}-f_{6B_z}| &\le& \sum_{i=0}^{\ell -1} |f_{6B_i}-f_{6B_{i+1}}| \\ &\le& \sum_{i=0}^{\ell -1} C_{Weak} 18 \kappa \frac{r_i}{\mu(B_i)} \int_{36\kappa B_i} |\nabla f(x)| d\mu(x) \\ &\le& \sum_{A \in \mathcal{F}(B)} C_{Weak} 18 \kappa \frac{r_A}{\mu(A)} \int_{36\kappa A} |\nabla f(x)| d\mu(x).
\end{eqnarray*} By Lemma \ref{538} we know that $B \subset (10^3 \kappa +9) A$ for any $A\in \mathcal{F}(B)$, and so we have $\chi_B = \chi_B \chi_{(10^3 \kappa +9) A}$. Multiplying the previous inequality by this, summing over the $B$, and then integrating over $E$ gives us: \begin{equation*} \begin{split} \int_E &\sum_{B \in \mathcal{F}} |f_{6B}-f_{6B_z}|\chi_B(y) d\mu(y) \\ &\le \int_E \sum_{B \in \mathcal{F}}\sum_{A \in \mathcal{F}(B)} C_{Weak} 18 \kappa \frac{r_A}{\mu(A)} \int_{36\kappa A} |\nabla f(x)| d\mu(x) \chi_B(y) \chi_{(10^3 \kappa +9) A}(y) d\mu(y). \end{split} \end{equation*} Since the $B$ are disjoint, we have $\sum_{B \in \mathcal{F}} \chi_B(y) \le 1$. This allows us to simplify the right hand side. We can then integrate. \begin{eqnarray*} ... &\le& \int_E \sum_{A \in \mathcal{F}} C_{Weak} 18 \kappa \frac{r_A}{\mu(A)} \int_{36\kappa A} |\nabla f(x)| d\mu(x) \chi_{(10^3 \kappa +9) A}(y) d\mu(y) \\ &=& \sum_{A \in \mathcal{F}} C_{Weak} 18 \kappa \frac{r_A\mu((10^3 \kappa +9)A)}{\mu(A)} \int_{36\kappa A} |\nabla f(x)| d\mu(x). \end{eqnarray*} Volume doubling gives us: \begin{eqnarray*} &\le& \sum_{A \in \mathcal{F}} C_{Weak} 18 \kappa r_{A} C_{vol}^{\log_2(10^3 \kappa +9)} \int_{36\kappa A} |\nabla f(x)| d\mu(x). \end{eqnarray*} We then use the bound from (4) to see: \begin{eqnarray*} &\le& C_{Weak} 18 \kappa 10^{-3} r_{E} C_{vol}^{\log_2(10^3 \kappa +9)} K \int_{E} |\nabla f(x)| d\mu(x). \end{eqnarray*} Putting all of this together and factoring, our original inequality becomes: \begin{equation*} \begin{split} \int_E |f(x)-f_{6B_z}| &d\mu(x) \\ &\le \left(1+3C_{vol}^{1+\log_2(10^3 \kappa +9)} \right) K C_{Weak} 6 \kappa 10^{-3} r_{E} \int_{E} |\nabla f(x)| d\mu(x) . \end{split} \end{equation*} Let $\frac{1}{2}P_0=\left(1+3 C_{vol}^{1+\log_2(10^3 \kappa +9)}\right) K C_{Weak} 6 \kappa 10^{-3}$. Then we can rewrite the inequality as: \begin{eqnarray*} \int_E |f(x)-f_{6B_z}| d\mu(x) \le \frac{1}{2}P_0 r_{E} \int_{E} |\nabla f(x)| d\mu(x) . \end{eqnarray*} All that remains is to switch from $f_{6B_z}$ to the average on the entire set, $f_E$. \begin{eqnarray*} \int_E |f_E-f_{6B_z}| d\mu(x) &=& \mu(E) |f_E-f_{6B_z}|\\ &=& \mu(E) \abs{\frac{1}{\mu(E)} \int_E f(x) -f_{6B_z} d\mu(x) }\\ &\le& \int_E |f(x) -f_{6B_z}| d\mu(x) \\ &\le& \frac{1}{2}P_0 r_{E} \int_{E} |\nabla f(x)| d\mu(x). \end{eqnarray*} Thus, the Poincar\'{e} inequality holds on the ball $E=B(z,r)$: \begin{eqnarray*} \int_E |f(x)-f_E| d\mu(x) &\le& \int_E |f(x)-f_{6B_z}| d\mu(x) +\int_E |f_{6B_z}-f_{E}| d\mu(x) \\ &\le& P_0 r_{E} \int_{E} |\nabla f(x)| d\mu(x) . \end{eqnarray*} \end{proof} \begin{corollary}\label{UnifPoincare1} Let $X$ be an admissible $n$-dimensional Euclidean complex with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below by $\ell$. Let $E=B(z,r)$ where $r<R_0:=\frac{\ell}{\kappa}$. Then \begin{eqnarray*} \int_E |f(x)-f_E| d\mu(x) \le P_0 r \int_E |\nabla f(x)| d\mu(x) \end{eqnarray*} holds for $f \in \operatorname{Lip}(X) \cap L^1(E)$ where $\kappa=6(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1)^n$ and \\ $P_0=(1+3 M^3 2^n (10^3 \kappa +9)^n)M (8(1+10^{3}\kappa))^n C_{Weak} 6 \kappa$. \end{corollary} \begin{proof} Apply Theorem \ref{UnifPoincare1General} with $K=M(8(1+10^{3}\kappa))^n$ and $C_{vol} = M2^n$. \end{proof} \begin{corollary}\label{UnifPoincareP} Let $X$ be an admissible $n$-dimensional Euclidean complex with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below by $\ell$.
For $f \in \operatorname{Lip}(X) \cap L^p(E)$ and $r<R_0$ we have \begin{eqnarray*} \inf_c ||f-c||_{p,E} \le p P_0 r ||\nabla f||_{p,E} \end{eqnarray*} where $E$ is a ball of radius $r$ and $1\le p < \infty$. Note that this implies: \begin{eqnarray*} ||f-f_E||_{p,E} \le 2 p P_0 r ||\nabla f||_{p,E}. \end{eqnarray*} Here $P_0=(1+3 M^3 2^n (10^3 \kappa +9)^n)M (8(1+10^{3}\kappa))^n C_{Weak} 6 \kappa$, $C_{Weak}=\frac{2^{3n+3} M^3 \kappa^{n+1}}{\alpha n}$, and $\kappa=6(\frac{2}{\sqrt{2(1-\cos(\alpha))}}+1)^n$. \end{corollary} \begin{proof} Apply Lemma \ref{PoincareP} to Corollary \ref{UnifPoincare1}. \end{proof} \begin{corollary}\label{PoincareExtend1} Assume the $p=1$ Poincar\'{e} inequality $||f-f_B||_{1,B} \le P_0 r ||\nabla f||_{1,B}$ holds for $f \in \operatorname{Lip}(X) \cap L^p(E)$ on balls $B= B(x,r)$ with $r \le R$. Assume volume doubling holds with constant $C_{vol}$ for balls with radius less than $C_0 R$. Then \begin{eqnarray*} ||f-f_E||_{p,E} \le p 2 P_0 \left( 6 (1+3C_{vol}^{11}) C_{vol}^{13} 10^{-3} \right)^{\ceil{\log_{\frac{10^3}{6}}(C_0)}} r_E ||\nabla f||_{p,E} \end{eqnarray*} also holds for balls $E$ with radius less than $C_0 R$ and $1 \le p < \infty$. \end{corollary} \begin{proof} Note that if $E=B(x,\frac{1}{6} 10^3r)$, then the $p=1$ Poincar\'{e} inequality holds for all balls in the Whitney cover dilated by a factor of 6 with $\kappa =1$. We can apply Theorem \ref{UnifPoincare1General} which gives us \begin{eqnarray*} ||f-f_E||_{1,E} \le 6 (1+3C_{vol}^{11}) C_{vol}^{13} 10^{-3} P_0 r_E ||\nabla f||_{1,E}. \end{eqnarray*} In particular, we can repeat this to show that the $p=1$ Poincar\'{e} inequality holds for balls up to radius $C_0 R$ with constant $( 6 (1+3C_{vol}^{11}) C_{vol}^{13} 10^{-3})^{\ceil{\log_{\frac{10^3}{6}}(C_0)}} P_0$. To get the $p$ Poincar\'{e} inequality, apply Lemma \ref{PoincareP}. \end{proof} Note that bounds on the degree $M$, the angles $\alpha$, and the edge lengths $\ell$ give us uniform local volume doubling on our complex, $X$. For any fixed $R$ we can then apply Lemma \ref{PoincareP} to Corollary \ref{PoincareExtend1} to get the standard $L^2$ Poincar\'{e} inequality for balls of radius up to $R$. In general, this cannot be extended to $R = \infty$; note that the constant in the new Poincar\'{e} inequality goes to infinity as $C_0$ goes to infinity. \chapter{Small Time Heat Kernel Estimates for X} The heat kernel, $h_t(x,y)$, is the fundamental solution to the heat equation \begin{eqnarray*} \partial_t u = \Delta u. \end{eqnarray*} Note that our formulation does not have factors of $-1$ or $\frac{1}{2}$, which appear in some of the literature. This type of differential equation is parabolic; one way of obtaining information about it is through parabolic Harnack inequalities. Sturm \cite{Sturm} shows that local volume doubling and Poincar\'{e} inequalities on a subset of a complete metric space imply a local parabolic Harnack inequality on that subset. He then uses this to find Gaussian estimates on the heat kernel. The equivalence of the parabolic Harnack inequality with Poincar\'{e} and volume doubling had previously been established in the Riemannian manifold case by Grigor'yan \cite{Grigor} and Saloff-Coste \cite{LSCNoteOn}. \section{Small time Heat Kernel Asymptotics} We have shown a uniform local Poincar\'{e} inequality, and our complex is both complete and locally satisfies volume doubling. This tells us that we've satisfied the hypotheses of the following theorem of Sturm \cite{Sturm} which gives a lower bound on the diagonal.
\begin{theorem}[Sturm]\label{Sturm1} Assume $Y$ is an open subset of a complete space $X$ that admits a Poincar\'{e} inequality with constant $C_P$ and volume doubling with constant $2^N$. Then there exists a constant $C=C(C_P,N)$ such that \begin{eqnarray*} h_t(x,x) \ge \frac{1}{C \mu(B(x,\sqrt{t}))} \end{eqnarray*} for all $x \in Y$ and all $t$ such that $0<t<\rho^2(x,X-Y)$. \end{theorem} Here, $\rho$ refers to the intrinsic distance. In our complex, this will always satisfy $\rho(x,y) \ge d(x,y)$. Also note that since we have a uniform local Poincar\'{e} inequality and a uniform local volume doubling constant we have the following corollary: \begin{corollary}\label{Corge} Let $X$ be an admissible $n$-dimensional Euclidean complex with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below. For any $R_0 > 0$ there is a corresponding constant $C=C(X,R_0)$ so that \begin{eqnarray*} h_t(x,x) \ge \frac{1}{C M \mu(S^{(n-1)}) t^{n/2}} \end{eqnarray*} for all $x \in X$ and all $t$ such that $0<t<R_0^2$. \end{corollary} \begin{proof} For each $x\in X$ apply Theorem \ref{Sturm1} to $X$ with $Y=B(x,R_0)$. The distance compares easily: $\rho(x,X-Y) \ge d(x,X-Y) = R_0$. Because the constant $C$ in Theorem \ref{Sturm1} depends only on $C_P$ and $N$, we can use the fact that our constants $C_P$ and $N$ depend only on the radius of the ball to obtain a universal constant, $C$. \end{proof} Sturm \cite{Sturm} also proves an upper bound for the heat kernel; this bound is especially useful near the diagonal. \begin{theorem}[Sturm]\label{Sturm2} Assume $Y$ is an open subset of a complete space $X$ that admits a Poincar\'{e} inequality with constant $C_P$ and volume doubling with constant $2^N$. Then there exists a constant $C=C(C_P,N)$ such that for every $x,y\in Y$ \begin{eqnarray*} h_t(x,y) \le \frac{Ce^{-\frac{\rho^2(x,y)}{4t}}}{\sqrt{\mu(B(x,\sqrt{T})) \mu(B(y,\sqrt{T}))}} \left(1+\frac{\rho^2(x,y)}{t}\right)^{N/2} e^{-\lambda t}(1+\lambda t)^{1+N/2} \end{eqnarray*} where $\rho$ is the intrinsic distance, $R=\inf(\rho(x,X-Y),\rho(y,X-Y))$, $T=\min(t,R^2)$ and $\lambda$ is the bottom of the spectrum of the self-adjoint operator $-L$ on $L^2(X,\mu)$. \end{theorem} In general, we can replace $\lambda$ with $0$, which increases the value of the right hand side. In our setting, we can simplify this a bit more. \begin{corollary}\label{Corle} Let $X$ be an admissible $n$-dimensional Euclidean complex with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below. Then for any $R_0$ there exists a constant $C = C(X,R_0)$ so that for any $x \in X$ and $t>0$ we have: \begin{eqnarray*} h_t(x,x) \le \frac{C}{M \mu(S^{(n-1)}) (\min(t,R_0^2))^{n/2}} . \end{eqnarray*} \end{corollary} \begin{proof} For each $x\in X$ apply Theorem \ref{Sturm2} to $X$ with $Y=B(x,R_0)$. The distance compares easily: $\rho(x,X-Y) \ge d(x,X-Y) = R_0$. The $\rho(x,x)$ terms drop out, as do the $\lambda$ terms. Because the constant $C$ in Theorem \ref{Sturm2} depends only on $C_P$ and $N$, we can use the fact that our constants $C_P$ and $N$ do not depend on our specific choice of ball to obtain a universal constant, $C$. \end{proof} \begin{corollary}\label{offdiagonalHeat} Let $X$ be an admissible $n$-dimensional Euclidean complex with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below.
For any $R_0$ there exists a $C=C(X,R_0)$ so that for any $x,y \in X$ and $t>0$ we have: \begin{eqnarray*} h_t(x,y) \le \frac{C}{M \mu(S^{(n-1)}) (\min(t,R_0^2))^{n/2}} e^{-\frac{d^2(x,y)}{4t}}\left(1+\frac{d^2(x,y)}{t}\right)^{N/2} . \end{eqnarray*} \end{corollary} \begin{proof} For each $x,y\in X$ apply Theorem \ref{Sturm2} to $X$ with $Y=B(x,R_0) \cup B(y,R_0)$. The distance compares easily: $\rho(x,X-Y) \ge d(x,X-Y) \ge R_0$. Because the constant $C$ in Theorem \ref{Sturm2} depends only on $C_P$ and $N$, we can use the fact that our constants $C_P$ and $N$ do not depend on our specific choice of ball to obtain a universal constant, $C$. \end{proof} Note that we can rewrite this as a bound of the following form for some constants $C,c$: \begin{eqnarray*} h_t(x,y) \le \frac{C}{(\min(t,R_0^2))^{n/2}} e^{-c\frac{d^2(x,y)}{t}}. \end{eqnarray*} \begin{corollary} \label{ComplexIsJustLikeRn} For an admissible $n$-dimensional Euclidean complex $X$ with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below, on $X^{(k)}$ we have \begin{eqnarray*} \frac{1}{C_k t^{k/2}} \le h_t^k(x,x) \le \frac{C_k}{t^{k/2}} . \end{eqnarray*} This holds for all $t<R_0^2$ and $x\in X^{(k)}$, where $C_k$ depends on $R_0$, $\alpha$, $M$, $k$, and $\inf_{v,w \in X^{(0)}} d(v,w)$. In particular, we can take $C= \max_{k=1..n} C_k$ to have a uniform constant for each $X^{(k)}$. \end{corollary} \begin{proof} Apply Corollaries \ref{Corge} and \ref{Corle} to $X^{(k)}$. This holds because $X^{(k)}$ is also an admissible complex satisfying the same bounds as $X$. $C_k$ varies slightly in each dimension due to the effect of dimension on volume doubling, and hence on the Poincar\'{e} constant. \end{proof} Off diagonal, the lower bound is more complicated. \begin{theorem}[Sturm]\label{Sturm3} Assume $Y$ is an open subset of a complete space $X$ that admits a Poincar\'{e} inequality with constant $C_P$ and volume doubling with constant $2^N$. Then there exists a constant $C=C(C_P,N)$ such that for every $x,y\in Y$ which are joined by a curve $\gamma$ of length $\rho(x,y)$ \begin{eqnarray*} h_t(x,y) \ge \frac{1}{C \mu(B(x,\sqrt{T})) } e^{-C\frac{\rho^2(x,y)}{t}} e^{-\frac{Ct}{R^2}} \end{eqnarray*} where $\rho$ is the intrinsic distance, $R=\inf_{0\le s \le 1}(\rho(\gamma(s),X-Y))$, and $T=\min(t,R^2)$. \end{theorem} In our setting, we can find a near-diagonal lower bound for any complex. \begin{corollary}\label{offdiagonalHeatlower} Let $X$ be an admissible $n$-dimensional Euclidean complex with degree bounded above by $M$, solid angle bounded by $\alpha$, and edge lengths bounded below. For any $R_0>0$ there exists a $C=C(X,R_0)$ so that for any $x,y \in X$ with $d(x,y)<R_0$ and $t>0$ we have: \begin{eqnarray*} h_t(x,y) \ge \frac{1}{C \mu(B(x,\sqrt{\min(t,R_0^2)})) } e^{-C\frac{d^2(x,y)}{t}} e^{-\frac{Ct}{R_0^2}}. \end{eqnarray*} If $X$ is volume doubling and has a global Poincar\'{e} inequality, we can set $R_0 =\infty$ to get: \begin{eqnarray*} h_t(x,y) \ge \frac{1}{C \mu(B(x,\sqrt{t}))} e^{-C\frac{d^2(x,y)}{t}}. \end{eqnarray*} \end{corollary} \begin{proof} For each $x,y\in X$ apply Theorem \ref{Sturm3} to $X$ with $Y=B(x,2R_0)$. Since we have a length space, the distance compares easily: $\rho(\gamma(s),X-Y) \ge d(\gamma(s),X-Y) \ge R_0$. Because the constant $C$ in Theorem \ref{Sturm3} depends only on $C_P$ and $N$, we can use the fact that our constants $C_P$ and $N$ do not depend on our specific choice of ball to obtain a universal constant, $C$.
\end{proof} \section{Examples} \begin{example} In $R^1$, the heat kernel $h_t(x,y)$ is the density for the transition probability of Brownian motion. \begin{eqnarray*} h_t(x,y) = \frac{1}{\sqrt{4\pi t}} e^{-\frac{|x-y|^2}{4t}}. \end{eqnarray*} This is a normal density for $y$ with expectation $x$ and variance $2t$. In the probability literature, it is common for the heat equation to be written with the Laplacian multiplied by an extra factor of $1/2$ so that the variance is $t$. See for example Feller Volume 2 \cite{Feller2}. We can think of $R^1$ as a Euclidean complex. This matches our on-diagonal bound exactly, but it is slightly nicer (by a factor of $\sqrt{1 + d^2(x,y)/t}$) than our off-diagonal bound. \end{example} \begin{example} In $R^n$, the heat equation can be solved using either a scaling argument or Fourier series. Alternatively, it can be thought of as an $n$-dimensional version of Brownian motion. In the PDE literature, the heat kernel is also called the Gauss kernel or the fundamental solution to the heat equation. See Evans \cite{Evans} for a derivation. \begin{eqnarray*} h_t(x,y) = \frac{1}{(4\pi t)^{n/2}} e^{-\frac{|x-y|^2}{4t}}. \end{eqnarray*} Note that $R^n$ is also a Euclidean complex, and that this kernel is consistent with our asymptotics. \end{example} \begin{example} We can think of a circle of length $1$ as a complex consisting of three edges of length $1/3$ joined in a triangle shape. One can calculate the heat kernel in terms of a sum using Fourier series; see Dym and McKean \cite{Dym}. The heat kernel here is \begin{eqnarray*} h_t(x,y) = \frac{1}{\sqrt{4\pi t}} \sum_{n=-\infty}^{\infty} e^{-\frac{|x-y-n|^2}{4t}}. \end{eqnarray*} When $x=y$, this simplifies to: \begin{eqnarray*} h_t(x,x) = \frac{1}{\sqrt{4\pi t}} + \frac{2}{\sqrt{4\pi t}} \sum_{n=1}^{\infty} e^{-\frac{n^2}{4t}}. \end{eqnarray*} For small values of $t$, the dominant term is $\frac{1}{\sqrt{4\pi t}}$. This is the same behavior as our small time prediction. Note that once $t =1/4$, the ball of radius $\sqrt{t}$ will have measure 1; it is the whole circle. Past this point in time, the asymptotic ceases to be useful. \end{example} \begin{example} We will look at the heat kernel on a star-shaped graph, $X$, which has a central vertex with $n$ edges attached to it. \begin{figure}[h] \centering \includegraphics[angle=0,width=2in]{OneStarUnlabelled.eps} \caption{Example of a star; here $n=8$.} \end{figure} When we compute functions on $X$ for $x \in e(a,b)$, we will let $x$ represent $d(x,a)$. For example, $a$ would be 0, the midpoint would be $\frac{1}{2}$, and $b$ would be 1. All functions on $X$ are of the form $f(x,j) = \sum_i f_i(x) I_{j=i} + f(0)I_{x=0}$ where $x \in (0,1]$ and $j=1,2,\ldots,n$. $f_i(x)$ represents the value of the function along the $i$th leg, and $f(0)$ is the value at the center of the star. The measure on our star is $d\mu(x,j) = dx$. When we write the derivative $\frac{df}{dx}$ we will mean the usual derivative with respect to Lebesgue measure. $\frac{df}{d\mu}(0) = \sum_i \frac{df_i}{dx}(0)$ for functions $f$ such that each $f_i$ is differentiable on $(0,1]$. We'll look at a domain where our functions have zero derivative at the outer vertices and at the center. \\ $\operatorname{Dom}(\Delta) = \{f \in C(X): f_i \in C^1((0,1]) \text{, and }\frac{df}{d\mu}(0)=0, \frac{df_i}{d\mu}(1)=0 \}$; we will be using an $L^2$ norm on this space.
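Before computing the eigenfunctions, here is a minimal numerical sketch of this setup in Python; the test function, the grid, and the one-sided finite differences are illustrative assumptions, not part of the construction.
\begin{verbatim}
import numpy as np

# A minimal sketch of functions on the star, assuming each leg is
# discretized by the same grid on [0,1]; `legs` holds the f_i, and
# the shared center value sits at index 0 of every leg.
n = 8
x = np.linspace(0.0, 1.0, 10001)
h = x[1] - x[0]

# f_i(x) = cos(pi x) on every leg: continuous at the center since
# all legs agree at x = 0.
legs = [np.cos(np.pi * x) for _ in range(n)]

# df/dmu(0) is the sum of the one-sided leg derivatives at x = 0.
df_dmu_0 = sum((f[1] - f[0]) / h for f in legs)
print(abs(df_dmu_0) < 1e-2)        # cos(pi x) has slope 0 at x = 0

# Boundary condition df_i/dmu(1) = 0 at the outer vertices.
print(max(abs(f[-1] - f[-2]) / h for f in legs) < 1e-2)
\end{verbatim}
This function lies in $\operatorname{Dom}(\Delta)$; it is, up to normalization, the $k=1$ symmetric eigenfunction computed next.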
The symmetric eigenfunctions are cosines on every leg: \begin{eqnarray*} \Phi_k(x) =\sqrt{\frac{2}{n}}\cos(k \pi x) \mbox{ for }x \in e_i,\mbox{ } i=1..n, \end{eqnarray*} or, when $k=0$, they are a constant on every leg: \begin{eqnarray*} \Phi_0(x) =\frac{1}{\sqrt{n}} \mbox{ for }x \in e_i,\mbox{ } i=1..n. \end{eqnarray*} These functions have derivative zero at each vertex and are continuous at the central vertex. The coefficients are chosen so that they have an $L^2$ norm of 1. Note that if we look at the product of these for points $x$ and $y$ on the star (regardless of which leg they occur on), we have \begin{eqnarray*} \Phi_k(x)\Phi_k(y) &=& \frac{2}{n}\cos(k \pi x)\cos(k \pi y) \\ &=& \frac{1}{n} \left(\cos(k \pi (x-y)) + \cos(k \pi (x+y))\right). \end{eqnarray*} If we sum $e^{-\lambda^2 t}\Phi(x)\Phi(y)$ over the symmetric eigenfunctions, where $\lambda^2$ is the corresponding eigenvalue, we find: \begin{equation*} \begin{split} \frac{1}{n} + \frac{1}{n}\sum_{k=1}^{\infty} e^{-(k\pi)^2t} & \left(\cos(k \pi (x-y)) + \cos(k \pi (x+y))\right) \\ &= \frac{1}{2n } \sum_{k=-\infty}^{\infty} e^{-(k\pi)^2t} \left(\cos(k \pi (x-y)) + \cos(k \pi (x+y))\right). \end{split} \end{equation*} In order to simplify this, we will use Jacobi's identity (see Dym and McKean \cite{Dym} for a derivation): \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{-\frac{(x-k)^2}{2s}} = \sqrt{2 \pi s}\sum_{k=-\infty}^{\infty}e^{-2 \pi^2 k^2 s}e^{2 \pi i k x}. \end{eqnarray*} Because this sums to a real number, we can rewrite it as: \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{-\frac{(x-k)^2}{2s}} = \sqrt{2 \pi s}\sum_{k=-\infty}^{\infty}e^{-2 \pi^2 k^2 s} \cos(2\pi k x). \end{eqnarray*} This gives us: \begin{equation*} \begin{split} \frac{1}{2n } \sum_{k=-\infty}^{\infty} e^{-(k\pi)^2t} & \left(\cos(k \pi (x-y)) + \cos(k \pi (x+y))\right) \\ &= \frac{1}{2n\sqrt{\pi t}} \sum_{k=-\infty}^{\infty} \left( e^{-\frac{(x-y -2k)^2}{4t}} + e^{-\frac{(x+y -2k)^2}{4t}}\right). \end{split} \end{equation*} For each frequency, we will also have eigenfunctions that span an $(n-1)$-dimensional space supported on the legs. These are the ``odd'' eigenfunctions. They will be either sine or $0$ along the legs. Since $\sin(0)=0$, they will be continuous at the center. We require the derivatives at the vertices to be 0, and so the possible sine functions are $\sin \left(\frac{(2k+1) \pi}{2} x\right)$. Note that they must be normalized according to an $L^2$ norm; this means that $\sum_{i=1}^n \frac{1}{2} b_i^2 =1$, where the $b_i$ are the coefficients. Since they will need to have derivative zero at the central vertex, we will need $\sum_{i=1}^n b_i =0$. The combination of these two restrictions, along with the fact that these eigenfunctions must be orthogonal to one another, determines the coefficients. For $i=1...\lfloor \frac{n}{2} \rfloor$ we have eigenfunctions of the form: \begin{eqnarray*} \tilde{\Phi}_{k,i}(x) = \left\{ \begin{array}{ll} \sin \left(\frac{(2k+1) \pi}{2} x\right) & \text{ for } x \in e_{2i-1} \\ -\sin \left(\frac{(2k+1) \pi}{2} x\right) & \text{ for } x \in e_{2i} \\ 0 & \text{ otherwise.} \end{array} \right. \end{eqnarray*} These functions are trivially orthogonal to one another. The factors of $\pm 1$ give us derivative 0 at the center. For $i=1...\lfloor \frac{n}{2} \rfloor -1$ we have: \begin{eqnarray*} \tilde{\Phi}_{k,\lfloor \frac{n}{2} \rfloor +i}(x) = \left\{ \begin{array}{ll} \frac{1}{\sqrt{i(i+1)}} \sin\left(\frac{(2k+1) \pi}{2} x\right) & \text{ for } x \in e_j, j=1..2i\\ - \frac{i}{\sqrt{i(i+1)}} \sin\left(\frac{(2k+1) \pi}{2} x\right) & \text{ for } x \in e_j, j=2i+1, 2i+2 \\ 0 & \text{ otherwise.} \end{array} \right.
\end{eqnarray*} In the case where there is an odd number of legs we have: \begin{eqnarray*} \tilde{\Phi}_{k,n-1}(x) = \left\{ \begin{array}{ll} \frac{\sqrt{2}}{\sqrt{n(n-1)}} \sin\left(\frac{(2k+1) \pi}{2} x\right) & \text{ for } x \in e_j, j=1..n-1 \\ - (n-1)\frac{\sqrt{2}}{\sqrt{n(n-1)}} \sin\left(\frac{(2k+1) \pi}{2} x\right) & \text{ for } x \in e_n. \end{array} \right. \end{eqnarray*} Note that edges $2i$ and $2i-1$ have constants with the same sign, which forces the functions to be orthogonal to the first set. The pattern of + and - allows them to be orthogonal to one another. The other factors guarantee that the derivative at the center is zero. If we look at the product of the $\tilde{\Phi}_k$ for points $x$ and $y$ on the star, we have \begin{eqnarray*} \tilde{\Phi}_k(x)\tilde{\Phi}_k(y) &=& c \sin\left(\frac{(2k+1) \pi}{2} x\right) \sin\left(\frac{(2k+1) \pi}{2} y\right) \\ &=& \frac{c}{2}\left(\cos\left((2k+1)\pi \frac{x-y}{2}\right) -\cos\left((2k+1) \pi\frac{x+y}{2}\right)\right). \end{eqnarray*} If we sum these ``odd'' eigenfunctions, multiplied by $e^{-\frac{(2k+1)^2 \pi^2}{4}t}$, we have: \begin{eqnarray*} \sum_{k=0}^{\infty} e^{-\frac{(2k+1)^2 \pi^2}{4}t}\frac{c}{2} \left(\cos\left((2k+1)\pi \frac{x-y}{2}\right) -\cos\left((2k+1) \pi\frac{x+y}{2}\right)\right). \end{eqnarray*} We can rewrite the $x-y$ terms as follows; a similar calculation will work for the $x+y$ terms. First we add and subtract the terms with even index $2k$. We note that cosine is an even function, and so we can extend this to negative $k$. We can also include the zero term, since it cancels between the two sums. \begin{equation*} \begin{split} \sum_{k=0}^{\infty} &e^{-((2k+1)\pi)^2\frac{t}{4}}\frac{c}{2} \cos\left((2k+1)\pi \frac{x-y}{2}\right) \\ +&\sum_{k=1}^{\infty} e^{-(2k\pi)^2\frac{t}{4}}\frac{c}{2} \cos\left(2k\pi \frac{x-y}{2}\right) -\sum_{k=1}^{\infty} e^{-(2k\pi)^2\frac{t}{4}}\frac{c}{2} \cos\left(2k\pi \frac{x-y}{2}\right) \\ &=\sum_{k=1}^{\infty} e^{-(k\pi)^2\frac{t}{4}}\frac{c}{2} \cos\left(k\pi \frac{x-y}{2}\right) -\sum_{k=1}^{\infty} e^{-(k\pi)^2t}\frac{c}{2} \cos\left(k\pi(x-y)\right) \\ &=\sum_{k=-\infty}^{\infty} e^{-(k\pi)^2\frac{t}{4}}\frac{c}{4} \cos\left(k\pi \frac{x-y}{2}\right) -\sum_{k=-\infty}^{\infty} e^{-(k\pi)^2t}\frac{c}{4} \cos\left(k\pi(x-y)\right). \end{split} \end{equation*} We will apply Jacobi's identity to each of the sums. The first sum yields: \begin{eqnarray*} \frac{c}{4}\sum_{k=-\infty}^{\infty} e^{-(k\pi)^2\frac{t}{4}} \cos\left(2k\pi \frac{x-y}{4}\right) &=& \frac{c}{4\sqrt{2\pi\frac{t}{8}}}\sum_{k=-\infty}^{\infty} e^{-\frac{(\frac{x-y}{4} -k)^2}{\frac{2t}{8}}} \\ &=&\frac{c}{2\sqrt{\pi t}}\sum_{k=-\infty}^{\infty} e^{-\frac{(x-y -4k)^2}{4t}} . \end{eqnarray*} The second sum gives us: \begin{eqnarray*} -\frac{c}{4}\sum_{k=-\infty}^{\infty} e^{-(k\pi)^2t} \cos\left(2k\pi \frac{x-y}{2}\right) =\frac{c}{4\sqrt{\pi t}}\sum_{k=-\infty}^{\infty} -e^{-\frac{(x-y -2k)^2}{4t}}. \end{eqnarray*} Similarly, for the $x+y$ terms we have: \begin{equation*} \begin{split} -\sum_{k=0}^{\infty} e^{-\frac{(2k+1)^2 \pi^2}{4}t} &\frac{c}{2} \cos\left((2k+1) \pi\frac{x+y}{2}\right) \\ &= -\frac{c}{2\sqrt{\pi t}}\sum_{k=-\infty}^{\infty} e^{-\frac{(x+y -4k)^2}{4t}} + \frac{c}{4\sqrt{\pi t}}\sum_{k=-\infty}^{\infty} e^{-\frac{(x+y -2k)^2}{4t}}.
\end{split} \end{equation*} To get the heat kernel, we sum $e^{-\lambda^2 t}\Phi(x)\Phi(y)$ over all of the eigenfunctions: \begin{eqnarray*} h_t(x,y)&=&\frac{1}{2\sqrt{\pi t} n } \sum_{k=-\infty}^{\infty}\left( e^{-\frac{(x-y -2k)^2}{4t}} + e^{-\frac{(x+y -2k)^2}{4t}}\right) \\ & &+ \frac{c}{2\sqrt{\pi t}}\sum_{k=-\infty}^{\infty} \left(e^{-\frac{(x-y -4k)^2}{4t}} -e^{-\frac{(x+y -4k)^2}{4t}}\right) \\ & &+ \frac{c}{4\sqrt{\pi t}}\sum_{k=-\infty}^{\infty} \left(e^{-\frac{(x+y -2k)^2}{4t}} -e^{-\frac{(x-y -2k)^2}{4t}}\right). \end{eqnarray*} Note that the value of $c$ depends on which edges $x$ and $y$ are on. If they are on the same edge, $c=2\left(1-\frac{1}{n}\right)$. If $x$ and $y$ are on different edges, then $c=-\frac{2}{n}$. We are now in a position to see what happens on the star near $t=0$. On the diagonal, the heat kernel will limit to infinity as $t$ goes to zero with the following asymptotics: \begin{eqnarray*} h_t(0,0) & \approx& \frac{1}{n\sqrt{\pi t} }, \\ h_t(1,1) & \approx& \frac{1}{\sqrt{\pi t}}, \text{ and} \\ h_t(x,x) & \approx& \frac{1}{2\sqrt{\pi t}} \text{ for }x \ne 0,1. \end{eqnarray*} To determine what happens for $t$ near zero when we have two different points, we need to consider the relative positions of $x$ and $y$. We know that the heat kernel will limit to zero as $t$ goes to zero. Without loss of generality, we will assume $d(x,0) < d(y,0)$. When $x=0$ and $y \ne 1$, the $k=0$ terms will dominate. They give us: \begin{eqnarray*} h_t(0,y) \approx \frac{1}{n\sqrt{\pi t}} e^{-\frac{y^2}{4t}} . \end{eqnarray*} When $x=0$ and $y=1$, we will only have terms involving $e^{-\frac{1}{4t}}$ contributing. After cancellation, this gives us: \begin{eqnarray*} h_t(0,1) \approx \frac{2}{n\sqrt{\pi t}} e^{-\frac{1}{4t}}. \end{eqnarray*} When $x\ne 0$, the dominant terms will involve $e^{-\frac{(x-y)^2}{4t}}$. If the coefficient for those terms is zero, we then have $e^{-\frac{(x+y)^2}{4t}}$ instead. The relevant terms are: \begin{eqnarray*} h_t(x,1) &\approx & \left(\frac{2}{n}-c\right)\frac{1}{2\sqrt{\pi t}} e^{-\frac{(x+1)^2}{4t}} + \left(\frac{2}{n}+c\right)\frac{1}{2\sqrt{\pi t}} e^{-\frac{(x-1)^2}{4t}}, \mbox{ and} \\ h_t(x,y) &\approx & \left(\frac{2}{n}-c\right)\frac{1}{4\sqrt{\pi t}} e^{-\frac{(x+y)^2}{4t}} + \left(\frac{2}{n}+c\right)\frac{1}{4\sqrt{\pi t}} e^{-\frac{(x-y)^2}{4t}} \mbox{ for }x \ne y. \end{eqnarray*} When $x$ and $y$ are on different edges, $c=-\frac{2}{n}$, and so all of the terms with coefficient $\frac{2}{n}+c$ disappear. Note that the notation gives us $d(x,y)=x+y$: \begin{eqnarray*} h_t(x,1) &\approx& \frac{2}{n\sqrt{\pi t}} e^{-\frac{(x+1)^2}{4t}} \text{ for }x \ne 1 \text{, and} \\ h_t(x,y) &\approx& \frac{1}{n\sqrt{\pi t}} e^{-\frac{(x+y)^2}{4t}} \text{ for }x \ne y, y \ne 1. \end{eqnarray*} When $x$ and $y$ are on the same edge, $\frac{2}{n} + c = \frac{2}{n} + 2\left(1-\frac{1}{n}\right) =2$. This gives us the same asymptotic as in $R^1$. \begin{eqnarray*} h_t(x,1) &\approx & \frac{1}{\sqrt{\pi t}} e^{-\frac{(x-1)^2}{4t}}, \mbox{ and} \\ h_t(x,y) &\approx & \frac{1}{2\sqrt{\pi t}} e^{-\frac{(x-y)^2}{4t}} \mbox{ for }x \ne y. \end{eqnarray*}
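The following is a minimal numerical check of these asymptotics in Python, assuming the illustrative values $n=8$, $t=0.01$, and two points on different legs; the helper names and the truncation of the sums are assumptions, justified by the rapid decay of the terms.
\begin{verbatim}
import math

# Check the star heat kernel series against the small-time
# asymptotics above, with x and y on different legs.
n, t = 8, 0.01
c_diff = -2.0 / n

def theta(u, period, t):
    return sum(math.exp(-(u - period * k) ** 2 / (4 * t))
               for k in range(-30, 31))

def h(x, y, c, t):
    s = (theta(x - y, 2, t) + theta(x + y, 2, t)) / (2 * n)
    s += (c / 2) * (theta(x - y, 4, t) - theta(x + y, 4, t))
    s += (c / 4) * (theta(x + y, 2, t) - theta(x - y, 2, t))
    return s / math.sqrt(math.pi * t)

x, y = 0.3, 0.5
exact = h(x, y, c_diff, t)
approx = math.exp(-(x + y) ** 2 / (4 * t)) / (n * math.sqrt(math.pi * t))
print(exact, approx, exact / approx)   # ratio tends to 1 as t -> 0
\end{verbatim}
The $e^{-(x-y)^2/4t}$ contributions cancel exactly for points on different legs, leaving the $e^{-(x+y)^2/4t}$ term, as the formula predicts. \end{example} \begin{example} Consider the graph consisting of two central vertices joined by one edge, each with additional edges coming off of it. We will calculate the behavior of the heat kernel between the two central vertices for small times.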
\begin{figure}[h] \centering \includegraphics[angle=0,width=2in]{TwoStarsLabeledW.eps} \caption{Example of two joined stars with labels; here $n=7$ and $m=3$.} \end{figure} Consider two star-like graphs joined together by a central edge, $e(v_1,v_2)$. The star centered at $v_1$ connects to $n$ edges in addition to the central edge. These are labeled $e(v_1,w_i)$ $i=1..n$. For the star centered at $v_2$, there are $m$ such edges which are labeled $e(v_2,w_i')$ $i=1..m$. This graph has three different types of eigenfunctions. When we compute these functions for $x \in e(a,b)$, we will let $x$ represent $d(x,a)$. So $a$ would be 0, the midpoint would be $\frac{1}{2}$, and $b$ would be 1. The first kind correspond to eigenvalues of the form $\frac{(2k+1)^2 \pi^2}{4}$. They form an $(m+n-2)$-dimensional space. The corresponding eigenfunctions are of the form $c(e(v,w)) \sin(\frac{(2k+1) \pi}{2} x)$ on the edges $e(v_1,w_i)$ and $e(v_2,w_i')$ and are 0 on the central edge, $e(v_1,v_2)$. Since each eigenfunction is zero on the central edge, these eigenfunctions will not contribute when we compute $h_t(x,y)$ for $x \in e(v_1,v_2)$. The second kind correspond to eigenvalues of the form $(k\pi)^2$. When $k\ne 0$, these are $\pm \frac{\sqrt{2}}{\sqrt{m+n+1}}\cos(k\pi x)$ along each edge in the graph, with sign chosen to preserve continuity. We write them: \begin{eqnarray*} \Phi_k(x) = \left\{ \begin{array}{l} \frac{\sqrt{2}}{\sqrt{m+n+1}}\cos(k\pi x) \mbox{ for }x \in e(v_1,v_2)\mbox{ or }e(v_1,w_i)\mbox{ }i=1..n \\ \operatorname{sgn}(\cos(k \pi)) \frac{\sqrt{2}}{\sqrt{m+n+1}}\cos(k\pi x) \mbox{ for }x \in e(v_2,w_i')\mbox{ }i=1..m. \end{array} \right. \end{eqnarray*} The product of this function with itself at the vertices $v_1$ and $v_2$ is \begin{eqnarray*} \Phi_k(v_1)\Phi_k(v_2) = \frac{2}{m+n+1}\cos(k\pi). \end{eqnarray*} When $k=0$, we have $\Phi_0(v_1)\Phi_0(v_2) = \frac{1}{m+n+1}$. If we look at the sum of $\Phi_k(v_1)\Phi_k(v_2)e^{-k^2\pi^2 t}$, we can write it as follows: \begin{eqnarray*} \sum_{k=0}^{\infty} \Phi_k(v_1)\Phi_k(v_2)e^{-k^2\pi^2 t} &=& \frac{1}{m+n+1} +\sum_{k=1}^{\infty} \frac{2}{m+n+1}\cos(k\pi) e^{-k^2\pi^2 t}\\ &=& \frac{1}{m+n+1} \sum_{k=-\infty}^{\infty}\cos(k\pi) e^{-k^2\pi^2 t}. \end{eqnarray*} The third kind are more complicated. They are of the form \begin{eqnarray*} f(x) = c_1(e)\left(\sin\left(\sqrt{\lambda}(1-x)\right) + c_2(e)\sin\left(\sqrt{\lambda}x\right)\right) \end{eqnarray*} along each edge, where $c_1(e)$ and $c_2(e)$ will depend on the edge $e$; see \cite{Kuc}. For these to be in the domain, they must be continuous and have zero derivative at each vertex. We have $2(m+n+1)$ constants $c_i(e)$ to determine, as well as the possible values of $\lambda$. To guarantee continuity, we need the function to have the same value at $v_1$ regardless of which edge we're considering. To get this, we set \begin{align*} c_1(e(v_1,w_i)) = c_1(e(v_1,v_2)). \end{align*} Similarly, for continuity at $v_2$, we need \begin{align*} c_1(e(v_2,w_i')) = c_1(e(v_1,v_2))c_2(e(v_1,v_2)). \end{align*} This brings us down to $m+n+2$ different $c_i(e)$ that we need to determine. Now we need zero derivative at the vertices. Note that for an edge $e$ \begin{eqnarray*} f'(x) = \sqrt{\lambda} c_1(e)\left(-\cos\left(\sqrt{\lambda}(1-x)\right) +c_2(e)\cos\left(\sqrt{\lambda}x\right)\right). \end{eqnarray*} For edges of the form $e(v,w)$, we find that setting $c_2(e(v,w)) =\frac{1}{\cos\left(\sqrt{\lambda}\right)}$ will allow $f$ to satisfy $f'(w)=0$.
For $f'(v_1)=0$, we need $c_2(e(v_1,v_2)) =\frac{(n+1)\cos^2\left(\sqrt{\lambda}\right)-n} {\cos\left(\sqrt{\lambda}\right)}$. We now have only two things that can be used to determine our function: $c_1(e(v_1,v_2))$ and $\lambda$. Varying the value of $c_1(e(v_1,v_2))$ will multiply the entire function by a constant. We'll need to determine $\lambda$ in order to have $f'(v_2)=0$. For this, we need to solve \begin{eqnarray*} \sqrt{\lambda} c_1(e(v_1,v_2)) \left( m\frac{(n+1)\cos^2\left(\sqrt{\lambda}\right)-n} {\cos\left(\sqrt{\lambda}\right)} \left( -\cos\left(\sqrt{\lambda}\right) +\frac{1}{\cos\left(\sqrt{\lambda}\right)} \right) \right.& \\ + \left. (-1) \left( -1 +\frac{(n+1)\cos^2\left(\sqrt{\lambda}\right)-n} {\cos\left(\sqrt{\lambda}\right)} \cos\left(\sqrt{\lambda}\right) \right) \right) &=0. \end{eqnarray*} The $(-1)$ comes from the fact that the inward pointing derivatives along the edges must sum to zero. We can simplify this to: \begin{eqnarray*} m \left( - (n+1)\cos^2\left(\sqrt{\lambda}\right) +2n+1 -\frac{n}{\cos^2\left(\sqrt{\lambda}\right)} \right) + \left( n+1 - (n+1)\cos^2\left(\sqrt{\lambda}\right) \right) \\ =0. \end{eqnarray*} This can be written as a quadratic equation in $\cos^2\left(\sqrt{\lambda}\right)$; when solved, the only possibility for $\cos^2\left(\sqrt{\lambda}\right)$ in $[0,1)$ is $\cos^2\left(\sqrt{\lambda}\right) = \frac{mn}{(m+1)(n+1)}$. (The other root of the quadratic, $\cos^2\left(\sqrt{\lambda}\right)=1$, forces $\sin\left(\sqrt{\lambda}\right)=0$ and returns the second kind of eigenfunctions.) Finally, we set $c_1(e(v_1,v_2))$ so that $||f||_2=1$. This holds when \\ $c_1(e(v_1,v_2))= \frac{\sqrt{m(m+1)}}{m+n+1}$. Putting all of this information together, we find that the third type of eigenfunction has the form: \begin{eqnarray*} \tilde{\Phi}_k(x) = \left\{ \begin{array}{l} \frac{\sqrt{m(m+1)}}{m+n+1} \left(\sin\left(\sqrt{\lambda_k}(1-x)\right)+ \frac{(n+1)\cos^2\left(\sqrt{\lambda_k}\right)-n} {\cos\left(\sqrt{\lambda_k}\right)} \sin\left(\sqrt{\lambda_k}x\right)\right) \\ \mbox{ for } x \in e(v_1,v_2) \\ \frac{\sqrt{m(m+1)}}{m+n+1} \left(\sin\left(\sqrt{\lambda_k}(1-x)\right)+ \frac{1}{\cos\left(\sqrt{\lambda_k}\right)} \sin\left(\sqrt{\lambda_k}x\right)\right) \\ \mbox{ for } x \in e(v_1,w_i), i=1..n\\ \frac{\sqrt{m(m+1)}((n+1)\cos^2\left(\sqrt{\lambda_k}\right)-n)} {\cos\left(\sqrt{\lambda_k}\right)(m+n+1)} \left(\sin\left(\sqrt{\lambda_k}(1-x)\right)+ \frac{1}{\cos\left(\sqrt{\lambda_k}\right)} \sin\left(\sqrt{\lambda_k}x\right)\right) \\ \mbox{ for } x \in e(v_2,w_i'), i=1..m. \end{array} \right. \end{eqnarray*} These eigenfunctions correspond to eigenvalues $\lambda_k$, where \\ $\cos^2\left(\sqrt{\lambda_k}\right) = \frac{mn}{(m+1)(n+1)}$. Let $\sqrt{\lambda_0}$ be the square root of the eigenvalue in $(0,\frac{\pi}{2})$. All other $\sqrt{\lambda}$ are of the form $\sqrt{\lambda_0} +k\pi$ and $-\sqrt{\lambda_0} +k\pi$, where $k$ is a positive integer. When $k$ is even, the cosine is positive; for $k$ odd, cosine is negative. When we look at the product of $\tilde{\Phi}_k(v_1)$ and $\tilde{\Phi}_k(v_2)$, this expression simplifies greatly. \begin{eqnarray*} \tilde{\Phi}_k(v_1)\tilde{\Phi}_k(v_2) =-\operatorname{sgn}\left(\cos\left(\sqrt{\lambda_k}\right)\right) \frac{\sqrt{mn}}{ \sqrt{(m+1)(n+1)}(m+n+1) } .
\end{eqnarray*} We then know that the sum of $\tilde{\Phi}_k(v_1)\tilde{\Phi}_k(v_2)e^{-\lambda_k t}$ is $\frac{\sqrt{mn}}{\sqrt{(m+1)(n+1)}(m+n+1)}$ multiplied by the following: \begin{equation*} \begin{split} \sum_{k=0}^{\infty} & -\operatorname{sgn}\left(\cos\left(\sqrt{\lambda_k}\right)\right) e^{-\lambda_k t} \\ &= \sum_{k=0}^{\infty} e^{-((2k+1)\pi-\sqrt{\lambda_0})^2 t} - e^{-((2k+2)\pi-\sqrt{\lambda_0})^2 t} - e^{-(2k\pi +\sqrt{\lambda_0})^2 t} + e^{-((2k+1)\pi+\sqrt{\lambda_0})^2 t} \\ &= \sum_{k=0}^{\infty} e^{-(\sqrt{\lambda_0}-(2k+1)\pi)^2 t} - e^{-(\sqrt{\lambda_0}-(2k+2)\pi)^2 t} - e^{-(\sqrt{\lambda_0}+2k\pi)^2 t} + e^{-(\sqrt{\lambda_0}+(2k+1)\pi)^2 t} \\ &= \sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0}-(2k+1)\pi)^2 t} -e^{-(\sqrt{\lambda_0}-2k\pi)^2 t}. \end{split} \end{equation*} We can sum $\Phi(v_1)\Phi(v_2)e^{-\lambda t}$, where $\lambda$ is the corresponding eigenvalue, over all of the eigenfunctions to find an explicit expression for the heat kernel at $(v_1,v_2)$. \begin{equation*} \begin{split} h_t(v_1,v_2) =& \frac{1}{m+n+1} \left( \sum_{k=-\infty}^{\infty} e^{- k^2 \pi^2 t} \cos(k \pi) \right. \\ & + \left. \sqrt{\frac{mn}{(m+1)(n+1)}} \left( \sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0}-(2k+1)\pi)^2 t} -\sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0} -2k\pi)^2 t} \right) \right) . \end{split} \end{equation*} We can rewrite this using Jacobi's identity (see Dym \cite{Dym}). The identity is: \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{-\frac{(x-k)^2}{2s}} = \sqrt{2 \pi s}\sum_{k=-\infty}^{\infty}e^{-2 \pi^2 k^2 s}e^{2 \pi i k x}. \end{eqnarray*} Because the left hand side is a real number, we can rewrite the $e^{2 \pi i k x}$ on the right hand side to obtain: \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{-\frac{(x-k)^2}{2s}} = \sqrt{2 \pi s}\sum_{k=-\infty}^{\infty}e^{-2 \pi^2 k^2 s} \cos(2\pi k x). \end{eqnarray*} We can apply this identity to the three series. For the first, we use the reverse of the equality with $x=\frac{1}{2}$ and $s=\frac{t}{2}$ in the right hand side. This gives us: \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{- k^2 \pi^2 t} \cos(k \pi) =\frac{1}{\sqrt{\pi t}} \sum_{k=-\infty}^{\infty} e^{-\frac{(1-2k)^2}{4t}}. \end{eqnarray*} In the second, let $x=\frac{\sqrt{\lambda_0}+\pi}{2 \pi}$ and $s=\frac{1}{8\pi^2 t}$. This gives us \\ $ \frac{(x-k)^2}{2s} = \frac{(\sqrt{\lambda_0}+\pi -2\pi k)^2}{(2\pi)^2 2 \frac{1}{8 \pi^2 t}} = (\sqrt{\lambda_0} + (1-2k)\pi)^2 t$. When we put that into the identity, we have: \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0} + (1-2k)\pi)^2 t} = \frac{1}{2\sqrt{\pi t}}\sum_{k=-\infty}^{\infty}e^{-\frac{k^2}{4t}} \cos\left(k\left(\sqrt{\lambda_0}+\pi\right)\right). \end{eqnarray*} Note that $\sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0}-(2k+1)\pi)^2 t} =\sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0} + (1-2k)\pi)^2 t}$ by reindexing, and so this is a way of rewriting our original sum. For the third, we use $x= \frac{\sqrt{\lambda_0}}{2 \pi}$ and $s=\frac{1}{8 \pi^2 t}$ which, by a similar computation, gives us: \begin{eqnarray*} \sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0} -2k\pi)^2 t} =\frac{1}{2\sqrt{\pi t}} \sum_{k=-\infty}^{\infty}e^{-\frac{k^2}{4t}} \cos\left(k \sqrt{\lambda_0}\right). \end{eqnarray*} We can combine these three to rewrite $h_t(v_1,v_2)$ as follows: \begin{equation*} \begin{split} h_t(v_1,v_2)=& \frac{1}{m+n+1} \frac{1}{\sqrt{\pi t}}\left( \sum_{k=-\infty}^{\infty} e^{-\frac{(1-2k)^2}{4t}} \right. \\ +& \left.
\sqrt{\frac{mn }{(m+1)(n+1)}} \frac{1}{2} \sum_{k=-\infty}^{\infty}e^{-\frac{k^2}{4t}} \left( \cos\left(k\sqrt{\lambda_0}+k \pi \right) - \cos\left(k \sqrt{\lambda_0}\right) \right) \right). \end{split} \end{equation*} We can use the angle sum formula for cosines to see that \begin{eqnarray*} \cos\left(k\sqrt{\lambda_0}+k \pi \right) &=& \cos\left(k\sqrt{\lambda_0}\right)\cos\left(k\pi\right) - \sin\left(k\sqrt{\lambda_0}\right)\sin\left(k\pi\right)\\ &=&\cos\left(k\sqrt{\lambda_0}\right)\cos\left(k\pi\right). \end{eqnarray*} As $\cos\left(k\pi\right) = 1$ for $k$ even and $-1$ for $k$ odd, the term $\cos\left(k \sqrt{\lambda_0}+k \pi \right) -\cos\left(k \sqrt{\lambda_0}\right)$ reduces to $0$ for even $k$ and $-2\cos\left(k \sqrt{\lambda_0}\right)$ for odd $k$. This allows us to write: \begin{eqnarray*} h_t(v_1,v_2) &=& \frac{1}{m+n+1} \frac{1}{\sqrt{\pi t}} \left( \sum_{k=-\infty}^{\infty} e^{-\frac{(2k+1)^2}{4t}} \right. \\ &&\left. + \sqrt{\frac{mn}{(m+1)(n+1)}} \frac{1}{2} \sum_{k=-\infty}^{\infty}e^{-\frac{(2k+1)^2}{4t}} (-2)\cos\left((2k+1) \sqrt{\lambda_0}\right) \right) \\&=& \frac{1}{m+n+1} \frac{1}{\sqrt{\pi t}} \sum_{k=-\infty}^{\infty} e^{-\frac{(2k+1)^2}{4t}} \left(1-\frac{\sqrt{mn}\cos\left((2k+1)\sqrt{\lambda_0}\right)} {\sqrt{(m+1)(n+1)}}\right). \end{eqnarray*} When we consider the limit as $t$ approaches $0$, the $k=0$ and $k=-1$ terms will dominate. By symmetry, these terms are equal. This gives us the approximation: \begin{eqnarray*} h_t(v_1,v_2) & \approx & \frac{1}{m+n+1} \frac{2}{\sqrt{\pi t}} e^{-\frac{1}{4t}} \left(1-\frac{\sqrt{mn}\cos\left(\sqrt{\lambda_0}\right)} {\sqrt{(m+1)(n+1)}}\right) \\&=& \frac{1}{m+n+1} \frac{2}{\sqrt{\pi t}} e^{-\frac{1}{4t}} \left(1-\frac{\sqrt{mn} \sqrt{ \frac{mn}{(m+1)(n+1)}} } {\sqrt{(m+1)(n+1)}}\right) \\&=& \frac{1}{(m+1)(n+1)} \frac{2}{\sqrt{\pi t}} e^{-\frac{1}{4t}}. \end{eqnarray*} Note that when $m=n=0$ we have an interval of length 1; this is the correct asymptotic there. Similarly, when $m=n=1$, we have an interval of length 3. This works out there too. We can also find the on-diagonal asymptotic at $v_1$, one of the central vertices. Here we have \begin{eqnarray*} \Phi_k^2(v_1)=\frac{2}{m+n+1} \end{eqnarray*} and \begin{eqnarray*} \tilde{\Phi}_k^2(v_1) &=& \left(\frac{\sqrt{m(m+1)}}{m+n+1} \right)^2 \sin^2\left(\sqrt{\lambda_k}\right) = \frac{m(m+1)}{(m+n+1)^2} \cdot \frac{m+n+1}{(m+1)(n+1)} \\ &=& \frac{m}{(m+n+1)(n+1)}, \end{eqnarray*} where we used $\sin^2\left(\sqrt{\lambda_k}\right) = 1-\frac{mn}{(m+1)(n+1)} = \frac{m+n+1}{(m+1)(n+1)}$. Using the same style of manipulations as before, we find that $h_t(v_1,v_1)$ is \begin{eqnarray*} h_t(v_1,v_1) = \frac{1}{m+n+1} \sum_{k=-\infty}^{\infty} e^{-k^2 \pi^2 t} + \frac{m}{(m+n+1)(n+1)} \sum_{k=-\infty}^{\infty} e^{-(\sqrt{\lambda_0} -k \pi)^2 t}. \end{eqnarray*} We can use a Jacobi transform here with $x=0$ and $s=\frac{t}{2}$ in the first sum and $x=\frac{\sqrt{\lambda_0}}{\pi}$ and $s=\frac{1}{2 \pi^2 t}$ in the second to obtain: \begin{eqnarray*} h_t(v_1,v_1) = \frac{1}{(m+n+1)\sqrt{\pi t}} \sum_{k=-\infty}^{\infty} \left(1 + \frac{m}{n+1} \cos(2k\sqrt{\lambda_0})\right) e^{-\frac{k^2}{t}}. \end{eqnarray*} As $t$ approaches zero, the $k\ne 0$ terms also approach zero. The dominant $k=0$ term is \begin{eqnarray*} \frac{1}{(m+n+1)\sqrt{\pi t}} \left(1 + \frac{m}{n+1} \right) = \frac{1}{(n+1)\sqrt{\pi t}}. \end{eqnarray*} This is the same asymptotic as we have on a single star whose center connects to $n+1$ edges. This is not surprising, since small values of $t$ correspond to the local geometry of a space.
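As a check on this computation, the following is a minimal numerical comparison of the exact series with the small-time approximation, assuming the illustrative values $m=3$, $n=7$, $t=0.02$; the truncation of the sum is an assumption justified by the rapid decay of the terms.
\begin{verbatim}
import math

# Compare the series for h_t(v_1, v_2) with its small-time
# approximation; m, n, and t are illustrative choices.
m, n, t = 3, 7, 0.02
c2 = m * n / ((m + 1) * (n + 1))     # cos^2(sqrt(lambda_0))
lam0 = math.acos(math.sqrt(c2))      # sqrt(lambda_0), in (0, pi/2)

exact = sum(
    math.exp(-(2 * k + 1) ** 2 / (4 * t))
    * (1 - math.sqrt(c2) * math.cos((2 * k + 1) * lam0))
    for k in range(-50, 50)
) / ((m + n + 1) * math.sqrt(math.pi * t))

approx = (2.0 / ((m + 1) * (n + 1) * math.sqrt(math.pi * t))
          * math.exp(-1 / (4 * t)))

print(exact, approx, exact / approx)  # ratio tends to 1 as t -> 0
\end{verbatim}
\end{example}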
\begin{example} The heat kernel on an interval is messier than that on a line. Dym \cite{Dym} calculates it for $\frac{\partial}{\partial t} u -\frac{1}{2}\Delta u=0$ on the interval $[0,1]$ to be: \begin{eqnarray*} p_t(x,y) = 1 + 2 \sum_{n=1}^{\infty} e^{-n^2 \pi^2 t/2} \cos(n \pi x) \cos(n \pi y) \\ = \frac{1}{\sqrt{2 \pi t}} \sum_{n=-\infty}^{\infty} e^{-\frac{(x-y-2n)^2}{2t}} + e^{-\frac{(x+y-2n)^2}{2t}}. \end{eqnarray*} We would like to have the heat kernel for $\frac{\partial}{\partial t} \tilde{u} -\Delta \tilde{u}=0$ on $[0,L]$. We can find it by noting that if $u(t,x)$ is a solution to $\frac{\partial}{\partial t} u -\frac{1}{2}\Delta u=0$ on the interval $[0,1]$, then $\tilde{u}(t,x) = c u(a t,b x)$ has derivatives \begin{eqnarray*} \left(\frac{\partial}{\partial t} \tilde{u}\right)(t,x) = \left(c a \frac{\partial}{\partial t} u\right)(a t,b x) \end{eqnarray*} and \begin{eqnarray*} (\Delta \tilde{u})(t,x) = (c b^2 \Delta u)(a t, b x) = \left( 2 c b^2 \frac{\partial}{\partial t} u \right)(a t,b x). \end{eqnarray*} Then $(\frac{\partial}{\partial t} \tilde{u})(t,x) - (\Delta \tilde{u})(t,x) = 0$ when $a=2b^2$. We change the interval length by setting $b=\frac{1}{L}$. This gives us $\tilde{p}_t(x,y) = c p_{\frac{2t}{L^2}}(\frac{x}{L},\frac{y}{L})$. The constant $c$ is used to normalize so that $\int_0^L \tilde{p}_t(x,y) dx=1$ holds for each $t>0$. Since this integral is $\int_0^L c p_{\frac{2t}{L^2}}(\frac{x}{L},\frac{y}{L}) dx = \int_0^1 c p_{\frac{2t}{L^2}}(z,\frac{y}{L}) Ldz = cL$, we have $c=\frac{1}{L}$. We can write this as \begin{eqnarray*} \tilde{p}_t(x,y) = \frac{1}{\sqrt{4 \pi t}} \sum_{n=-\infty}^{\infty} e^{-\frac{(x-y-2nL)^2}{4t}} + e^{-\frac{(x+y-2nL)^2}{4t}}. \end{eqnarray*} This description allows one to look at terms with $n$ near 0 to find the small time asymptotic. Note that for $L=3$, $x=1$ and $y=2$ we have: \begin{eqnarray*} \tilde{p}_t(1,2) = \frac{1}{\sqrt{4 \pi t}} \sum_{n=-\infty}^{\infty} e^{-\frac{(-1-6n)^2}{4t}} + e^{-\frac{(3-6n)^2}{4t}}. \end{eqnarray*} When $t$ is small, this behaves like the first $n=0$ term: \begin{eqnarray*} \tilde{p}_t(1,2) \approx \frac{1}{2\sqrt{\pi t}} e^{-\frac{1}{4t}} . \end{eqnarray*} This is consistent with the asymptotic for the two star case when $m=n=1$. Note also that when $L=1$, $x=0$, and $y=1$, this gives $\frac{2}{\sqrt{\pi t}} e^{- \frac{1}{4t}}$ which is consistent with the star with one edge. \end{example} \begin{example} \begin{figure}[h] \centering \includegraphics[angle=0,width=1in]{GridBlackAndGrey3D.eps} \caption{A subset of the three dimensional grid.}\label{picZ3} \end{figure} Complexes can have an underlying group structure. For example, $Z^3$, the group consisting of triplets of integers, can be used to create a three dimensional complex built from copies of the cube $[0,1]^3$. Connecting each pair of triples which differ by one in a single coordinate with a line segment produces a grid. This grid is the 1-skeleton, and the points in the group form the 0-skeleton. We can use the 1-skeleton to create a space that looks like a bunch of empty boxes by filling in the faces formed by loops of four edges. This is the 2-skeleton. If we then fill in the boxes in the 2-skeleton, we'll have the 3-skeleton, which is $R^3$. Locally, we've shown that if $h_t$ is the heat kernel on the $k$-skeleton, \begin{eqnarray*} \frac{1}{{C} t^{k/2}} \le h_t(x,x) \le \frac{C}{(\min(t,R_0^2))^{k/2}} \end{eqnarray*} for all $x \in X^{(k)}$ and all $t$ such that $0<t<R_0^2$. We prove in chapter 5 that the heat kernel on each of these $k$-skeletons globally behaves like the heat kernel on $R^3$.
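As a small sanity check on this Euclidean behavior, assuming unit edge lengths, one can count the vertices of the grid within graph distance $r$ of the origin; the count grows like $r^3$, which is the volume growth behind the $t^{-3/2}$ on-diagonal behavior. The sketch below is illustrative only.
\begin{verbatim}
from itertools import product

# Count Z^3 lattice points within graph distance r of the origin;
# on the grid, the graph distance between lattice points is the
# l^1 distance.  The ratio to r^3 stabilizes, showing ~r^3 growth.
def ball_volume(r):
    rng = range(-r, r + 1)
    return sum(1 for p in product(rng, repeat=3)
               if abs(p[0]) + abs(p[1]) + abs(p[2]) <= r)

for r in (2, 4, 8, 16):
    print(r, ball_volume(r), round(ball_volume(r) / r**3, 3))
\end{verbatim}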
\end{example} \begin{example} There are also complexes with underlying group structure whose geometry is not globally Euclidean. Let $G$ be the free group on two elements; this is a group of words formed by letters $a$ and $b$ and their inverses $a^{-1}$ and $b^{-1}$ where the only cancellations are $aa^{-1} =a^{-1}a =1$ and $bb^{-1}=b^{-1}b=1$, and $a$ and $b$ don't commute. Let $Y$ be the complex formed by three squares joined into an L shape (see Fig. \ref{picN}). We have a larger structure $X$ which has copies of $Y$ connected to each other via the group $G$. That is, each copy of $Y$ will be connected to four other copies of $Y$; the top of the L connects to the bottom edge of the lower left square of the L, and the right of the L connects to the left edge of the bottom square. \begin{figure}[h] \centering \includegraphics[angle=0,width=1in]{NSameCrop.eps} \hspace{1in} \includegraphics[angle=0,width=2in]{NtogetherCrop.eps} \caption{Y (left); Y (darker shading) and its four surrounding copies (lighter shading) (right). Each of the edges in these pictures should be interpreted as having length 1.}\label{picN} \end{figure} Note that we cannot isometrically embed $X$ into $R^2$. The stretching of the edges in Fig. \ref{picN} is to allow you to see distinct edges and vertices. Globally, this structure acts like a hyperbolic space, but locally it is Euclidean. For a small ball with $R<.5$, we have a two dimensional disk (possibly missing a wedge) whose volume is $\approx \pi R^2$. For $t <.5$, Corollary \ref{ComplexIsJustLikeRn} tells us: \begin{eqnarray*} h_t(x,x) \approx \frac{C}{4\pi t}. \end{eqnarray*} For a given copy of $Y$, there are four neighbors, each of which has three additional neighbors. For a ball of radius $R>1$, there will be approximately $1 + 4 + 4(3) + \cdots + 4(3^R) \approx 2 (3^{R+1})$ copies of $Y$. This tells us that for large $R$, we have exponential volume growth. In particular, this group is nonamenable. We define this in chapter 5 and show that the large time behavior of the heat kernel is \begin{eqnarray*} \sup_{x \in X} h_t(x,x) \approx C_0 e^{-t/C_1}. \end{eqnarray*} \end{example} \chapter{Setup for Groups} A finite product of elements from a set $S$ is called a word. If a word is written $s_1s_2...s_k$, we say it has length $k$. A finitely generated group is a group with a generating set, $S$, where every element in the group can be written as a finite word using elements of $S$. Although for a given $g \in G$ it is computationally difficult to determine which word is the smallest one representing $g$, such a word (or words) exists. If this word has length $k$, then we write $|g|=k$. We define the volume of a subset of $G$ to be the number of elements of $G$ contained in that subset. We write $|B_r|$ to denote the volume of a ball of radius $r$, $B_r := \{g \in G : |g| \le r\}$. For groups, volume is translation invariant, and so we do not lose any generality by having it centered at the identity. For a function $f$ which maps elements of a group to the reals, we define the Dirichlet form on $\ell^2(G)$ to be $E(f,f) = \frac{1}{|S|}\sum_{g \in G}\sum_{s \in S} |f(g) -f(gs)|^2$. Although we'd need to specify directions if we were to define a gradient, we can define an object which behaves like the length of the gradient of $f$ on $G$. We write this as $|\nabla f(x)| =\sqrt{\frac{1}{|S|}\sum_{s \in S} |f(x) -f(xs)|^2}$. Notationally, this means that $E(f,f) = \sum_{g \in G} |\nabla f(g)|^2$.
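To make these definitions concrete, here is a minimal sketch in Python for $G=Z$ with $S=\{+1,-1\}$; the tent function and the summation window are illustrative assumptions.
\begin{verbatim}
# A minimal sketch of |grad f| and E(f,f), assuming G = Z with the
# symmetric generating set S = {+1, -1}; the group operation g*s is
# addition.  The tent function below is supported in [-3, 3].
S = (1, -1)

def f(g):
    return max(0, 3 - abs(g))

def grad_len(g):
    """|grad f(g)| = sqrt((1/|S|) * sum_s |f(g) - f(g+s)|^2)."""
    return (sum((f(g) - f(g + s)) ** 2 for s in S) / len(S)) ** 0.5

# E(f,f) = sum_g |grad f(g)|^2; the window [-5, 5] contains every
# nonzero term since f is supported in [-3, 3].
E = sum(grad_len(g) ** 2 for g in range(-5, 6))
print(E)   # prints 6.0 for this tent function
\end{verbatim}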
Discrete $L^p$ norms restricted to a subset $A \subset G$ are written as \\ $||f||_{p,A} = \left(\sum_{x\in A} |f(x)|^p \right)^{1/p}$. When $A=G$, we will write $||f||_{p,G}$. One can show a Poincar\'{e}-type inequality on a volume doubling finitely generated group. The arguments used here can be found in \cite{CLsc1993}. \begin{lemma}\label{groupVDpoincare} Let $G$ be a finitely generated group with generating set $S$. For any $f : G \rightarrow R$, the following inequality holds on balls $B_r$: \begin{eqnarray*} \norm{f-f_{B_r}}_{1,B_r} \le \frac{|B_{2r}|}{|B_r|} 2r \sqrt{|S|} \norm{\nabla f}_{1,B_{3r}}. \end{eqnarray*} If the group is volume doubling, this is a weak Poincar\'{e} inequality on balls for $p=1$: \begin{eqnarray*} \norm{f-f_{B_r}}_{1,B_r} \le 2r C_{Doubling}\sqrt{|S|} \norm{\nabla f}_{1,B_{3r}}. \end{eqnarray*} \end{lemma} \begin{proof} Let $G$ be a finitely generated group with a symmetric set of generators, $S$. Let $B_r$ be a ball of radius $r$; for brevity, we will not explicitly write the center. We can write the norm of $f$ minus its average as follows. \begin{eqnarray*} \norm{f-f_{B_r}}_{1,B_r} &= &\sum_{x \in B_r} |f(x) - \frac{1}{|B_r|} \sum_{y \in B_r} f(y)| \\ &\le & \frac{1}{|B_r|}\sum_{x \in B_r} \sum_{y \in B_r}| f(x)-f(y)|. \end{eqnarray*} For each $y \in B_r$, there exists a $g\in G$ with $|g| \le 2r$ such that $y =x g$. We make this substitution and sum over all $g\in G$ with $|g| \le 2r$. \begin{eqnarray*} \frac{1}{|B_r|}\sum_{x \in B_r} \sum_{y \in B_r}| f(x)-f(y)| &\le &\frac{1}{|B_r|}\sum_{x \in B_r} \sum_{g: |g|\le 2r} | f(x)-f(x g)| \\ &=& \frac{1}{|B_r|}\sum_{g: |g|\le 2r}\sum_{x \in B_r} | f(x)-f(x g)|. \end{eqnarray*} We will begin with the innermost quantity, and then simplify the sums. We can write $g = s_1...s_k$ as a reduced word with $k \le 2r$. We rewrite the difference of $f$ at $x$ and $xg$ by splitting the path between them into pieces. \begin{eqnarray*} |f(x)-f(x g)| \le \sum_{i=1}^{|g|} |f(xs_1...s_{i-1})- f(xs_1...s_i)|. \end{eqnarray*} We fix $g$ and sum over all $x \in B_r$. \begin{eqnarray*} \sum_{x \in B_r} |f(x)-f(xg)| & \le &\sum_{x \in B_r} \sum_{i=1}^{|g|} |f(xs_1...s_{i-1})- f(xs_1...s_i)| \\ &=& \sum_{i=1}^{|g|} \sum_{x \in B_r} |f(xs_1...s_{i-1})- f(xs_1...s_i)|. \end{eqnarray*} We can change variables by letting $z= xs_1...s_{i-1}$. Note that $|x s_1...s_{i-1}| < 3r$ as $i\le 2r$. Then $z \in B_{3r}$. \begin{eqnarray*} \sum_{i=1}^{|g|} \sum_{x \in B_r} |f(xs_1...s_{i-1})- f(xs_1...s_i)| \le \sum_{i=1}^{|g|} \sum_{z \in B_{3r}} |f(z)- f(zs_i)| . \end{eqnarray*} Since $s_i \in S$, we can sum over all $s \in S$ instead of the $s_i$ in $g$. To do this, we must account for the multiplicity of the $s_i$. We could have at most $|g|$ copies of any generator; $|g| \le 2r$, and so we will multiply by $2r$. \begin{eqnarray*} \sum_{i=1}^{|g|} \sum_{z \in B_{3r}} |f(z)- f(zs_i)| \le 2r \sum_{s \in S} \sum_{z \in B_{3r}} |f(z)- f(zs)|. \end{eqnarray*} Jensen's inequality allows us to rewrite this in terms of the gradient. \begin{eqnarray*} 2r \sum_{s \in S} \sum_{z \in B_{3r}} |f(z)- f(zs)| & \le& 2r \sum_{z \in B_{3r}} \sqrt{|S| \frac{1}{|S|} \sum_{s \in S} |f(z)- f(zs)|^2} \\ & =& 2r \sum_{z \in B_{3r}} \sqrt{|S|} |\nabla f(z)|. \end{eqnarray*} We'll use this calculation to get the desired inequality. We now have \begin{eqnarray*} \sum_{x \in B_r} |f(x)-f(xg)| \le 2r \sum_{z \in B_{3r}} \sqrt{|S|} |\nabla f(z)|.
\end{eqnarray*} Dividing by $|B_r|$ and summing over the $g$ gives us \begin{eqnarray*} \frac{1}{|B_r|}\sum_{g: |g|\le 2r}\sum_{x \in B_r} | f(x)-f(x g)| &\le& \frac{1}{|B_r|}\sum_{g: |g|\le 2r} \sum_{z \in B_{3r}} 2r \sqrt{|S|} |\nabla f(z)| \\ &=& \frac{|B_{2r}|}{|B_r|} 2r \sqrt{|S|} \sum_{z \in B_{3r}} |\nabla f(z)|. \end{eqnarray*} This reduces to \begin{eqnarray*} \norm{f-f_{B_r}}_{1,B_r} \le \frac{|B_{2r}|}{|B_r|} 2r \sqrt{|S|} \norm{\nabla f}_{1,B_{3r}}. \end{eqnarray*} Note that in general, $\frac{|B_{2r}|}{|B_r|}$ will depend on the radius, $r$. If the group is volume doubling, this gives us a weak Poincar\'{e} inequality. \begin{eqnarray*} \norm{f-f_{B_r}}_{1,B_r} \le C_{Doubling} 2r \sqrt{|S|} \norm{\nabla f}_{1,B_{3r}}. \end{eqnarray*} \end{proof} \section{Comparing distances in X and G} Let $X$ be a complex, and let $G$ be a finitely generated group acting on the complex by isomorphisms such that $X/G =Y$ is an admissible complex consisting of a finite number of polytopes. One example of this type of complex is a Cayley graph; this is the graph where each vertex corresponds to a group element, and two vertices are connected by an edge if they differ by an element of the generating set. In this case, $Y$ is the unit interval. We would like to be able to compare functions defined on the group, $G$, with functions defined on the complex, $X$. To do this, we will look at ways to transfer a function defined on $G$ to a function defined on $X$ that roughly preserves the norm of both the function and its energy form. We seek to do the reverse as well. We will use a technique that originated with Kanai \cite{Kanai} and additionally was used by Coulhon and Saloff-Coste \cite{CoulhonLsc}. We also want a way of changing real valued functions defined on $X$ into ones defined on $G$. We will do this by taking a copy of $Y$ and splitting it into many smaller pieces. Given $\delta \le \operatorname{diam}(Y)$, we can find a finite covering of $Y$ by balls of radius $\delta$ such that balls of radius $\delta /2$ are disjoint in $Y$. As $Y$ is a finite polytopal complex, volume doubling on $Y$ implies that at most a finite number of balls of radius $\delta$ will overlap. $X$ can be built by taking a copy of $Y$ for each element of $G$, and so this cover can be expanded to a cover of $X$. Note that once we have a copy of the cover of $Y$ for each element of $G$, balls of radius $\delta /2$ in this larger cover may overlap. For $X$, the overlap is also finite; call the number of overlapping balls $C_{N,S}$. Call the centers of the balls covering $Y$ $\{ \gamma_i\}_{i=1}^N$, and the centers of the balls covering $X$ $\{ g\gamma_i\}_{i=1..N;\, g \in G}$. Note that each $x\in X$ is within $\delta$ of at least one of the $g\gamma_i$. As we are frequently switching between $X$ and $G$, we will use $B_X$ for balls in $X$ and $B_G$ for balls in $G$. \begin{example} \begin{figure}[h] \centering \includegraphics[angle=0,width=1.5in]{R2tiled.eps} \hspace{.5in} \includegraphics[angle=0,width=1.5in]{Ycovered.eps} \hspace{.5in} \includegraphics[angle=0,width=1.5in]{TwoGplacedcopies.eps} \caption{X split into copies of Y (left); a copy of Y covered by 5 balls of radius .6 (center); the cover for Y, shifted by the elements of G, covers everything -- the black lines represent four copies that overlap exactly; the fifth copy is gray (right).} \end{figure} Let $X=R^2$, $G =Z^2$ and $Y =[0,1]^2$.
For $\delta =.6$, balls of radius $\delta/2 =.3$ centered at the corners of $Y$ are disjoint, but not all points in the plane are covered. We can introduce another copy of $G$ that's shifted by $(.5,.5)$. All points in $Y$ are then covered by some ball of radius $\delta$, but the balls of radius $.3$ will not overlap. We can check this by comparing the distance along the diagonal from $(0,0)$ to $(1,1)$ with the length covered by the radii along the same diagonal. We have $d((0,0),(1,1))= \sqrt{2} \approx 1.4$. For the balls, there's the radius of the one centered at $(0,0)$, the radius of the one centered at $(1,1)$, and the diameter of the one centered at $(.5,.5)$. This sums to $2\delta =1.2$. The number of overlapping balls in $X$ is $C_{N,S}=10$. This happens at $(0,.5)$ where there are four coincident balls centered at $(0,0)$ (one from the cover of each copy of $Y$ meeting at that corner), one centered at $(.5,.5)$, one centered at $(-.5,.5)$, and four centered at $(0,1)$. In this example, $\gamma_1=(0,0)$, $\gamma_2=(0,1)$, $\gamma_3=(1,0)$, $\gamma_4=(1,1)$, and $\gamma_5=(.5,.5)$. \end{example} We define \begin{eqnarray*} \group{f(g,i)} = \frac{1}{\mu\left(B_X(g\gamma_i,\delta)\right)} \int_{B_X(g\gamma_i,\delta)} f(x) dx = \Xint-_{B_X(g\gamma_i,\delta)} f(x) dx . \end{eqnarray*} We can view $\group{f}$ as a collection of $N$ functions defined on $G$. For any fixed $i$, we can treat $\group{f(\cdot,i)}$ as a function on the group, and so the norm of $\group{f}$ can be found by summing these over $i$. It's important to note that the sets $\{g : g\in B_X(r) \cap G\}$ and $B_G(r)$ are potentially different. We can describe this by comparing distances with some explicit constants. \begin{lemma}\label{CompareDistance} We can compare distances in $G$ and $X$ in the following manner. There exist constants $C_{XG}$ and $C_0$ so that for any $g,h\in G$ we have: \begin{eqnarray*} \frac{1}{C_0} d_X(g,h) \le d_G(g,h) \le C_{XG} d_X(g,h). \end{eqnarray*} This tells us that balls centered at points in $G$ compare as: \begin{eqnarray*} G \cap B_X(\frac{r}{C_{XG}}) \subset B_G(r) \subset G \cap B_X(C_0 r). \end{eqnarray*} Here, $C_0 = \max_{v_1,v_2 \in Y^{(0)}} d_{Y^{(1)}}(v_1,v_2)$ and \\ $C_{XG} = \frac{1}{\min_{v_1,v_2 \in Y^{(0)}} d_{X^{(1)}}(v_1,v_2)} \prod_{i=1}^n \max\left(\sqrt{\frac{2}{1-\cos(\alpha)}},\frac{\max_{y_1,y_2 \in Y^{(i-1)}} d_{X^{(i-1)}}(y_1,y_2)}{\min_{v_1,v_2 \in Y^{(0)}} d_{X^{(1)}}(v_1,v_2)} \right)$ where $\alpha$ is the smallest interior angle in $X$. \end{lemma} \begin{proof} We'll start with the easy direction. Any path in $X^{(1)}$ is also a path in $X$, and so $d_X(g,h) \le d_{X^{(1)}}(g,h)$. To compare this with distances in $G$, which count the number of generators in a word, we use the lengths of the edges between group elements. \begin{eqnarray*} d_{X^{(1)}}(g,h) \le \max_{v_1,v_2 \in Y^{(0)}} d_{Y^{(1)}}(v_1,v_2) d_G(g,h). \end{eqnarray*} Set $C_0 = \max_{v_1,v_2 \in Y^{(0)}} d_{Y^{(1)}}(v_1,v_2)$. Since whenever $ C_0 d_{G}(g,h)\le r$ we also have $d_X(g,h)\le r$, we know that any point in $B_G(\frac{r}{C_0})$ is also in $B_X(r)$. This tells us that $B_G(r) \subset G \cap B_X(C_0 r)$. We can use the fact that $X$ can be subdivided into copies of $Y$ to relate distances in the other direction as well. We will compare distances in the different skeletons of $X$. In order to simplify notation, $d_k(x,y)$ will refer to the distance between $x$ and $y$ when we restrict to paths in $X^{(k)}$. Let $g,h \in G$ be given. Then there is a shortest path in $X$ between them. If there are multiple such paths, pick one.
Label it $\{x_0=g,x_1,x_2,..,x_k=h\}$ where $x_i \in X^{(n-1)}$ for $i=1..k-1$, and $x_i$, $x_{i+1}$ are both in the boundary of the same maximal polytope, although they are on different faces. In particular, both are in $X^{(n-1)}$. Then we know that the length of this path is $\sum_{i=1}^k d_{n}(x_{i-1},x_i)$. We will compare $d_{n}(x_{i-1},x_i)$ with $d_{n-1}(x_{i-1},x_i)$, and use this to relate the distances between skeletons that differ by one dimension. This will allow us to work our way from $n$ dimensions down to $X^{(0)}$. Then we can compare $X^{(0)}$ with $G$. Look at $x_i$ and $x_{i+1}$ which are in $(n-1)$ dimensional faces $F_i$ and $F_{i+1}$. Either $F_i \cap F_{i+1}$ is nonempty and so they share a lower dimensional point, or else it is empty and they do not. If they do not share a point, then: \begin{eqnarray*} \min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2) \le d_{n}(x_i,x_{i+1}). \end{eqnarray*} We get this bound because the distance between two disjoint faces of a polytope of $Y$ is bounded below by the length of the smallest edge of that polytope. We can also bound the $(n-1)$ distance: \begin{eqnarray*} d_{n-1}(x_i,x_{i+1}) \le \max_{y_1,y_2 \in Y^{(n-1)}} d_{n-1}(y_1,y_2). \end{eqnarray*} Putting this together, we get: \begin{eqnarray*} d_{n-1}(x_i,x_{i+1}) \le \frac{\max_{y_1,y_2 \in Y^{(n-1)}} d_{n-1}(y_1,y_2)}{\min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2)} d_{n}(x_i,x_{i+1}). \end{eqnarray*} Otherwise, if $F_i$ and $F_{i+1}$ intersect in a lower dimensional face, we will call $v$ the point on the intersection which minimizes $d_{n-1}(v,x_i)+d_{n-1}(v,x_{i+1})$. These three points form a triangle with angle $x_i v x_{i+1} = \theta \ge \alpha$, where $\alpha$ is the smallest interior angle in $Y$ as well as in $X$. Note that this angle is bounded away from zero because $Y$ is made up of a finite number of polytopes. We would like to determine a relationship between $d_{n}(x_i,x_{i+1})$ and $\inf_{v \in F_i \cap F_{i+1}} d_{n-1}(v,x_i)+d_{n-1}(v,x_{i+1})$. To find this, we will use a simple derivation. For positive numbers $a$ and $b$ we have \begin{eqnarray*} (a-b)^2 &\ge& - \cos(\alpha)(a-b)^2 \\ a^2 + b^2 -2ab\cos(\alpha) &\ge& 2ab -a^2\cos(\alpha) -b^2\cos(\alpha) \\ 2a^2 + 2b^2 -4ab\cos(\alpha) &\ge& a^2 + b^2 + 2ab -2ab\cos(\alpha) -a^2\cos(\alpha) -b^2\cos(\alpha) \\ a^2 + b^2 -2ab\cos(\alpha) &\ge& (a+b)^2\left(\frac{1-\cos(\alpha)}{2}\right). \end{eqnarray*} This is helpful because when we apply the law of cosines to the triangle we have: \begin{eqnarray*} d_{n}^2(x_i,x_{i+1}) = d_{n-1}^2(x_i,v) +d_{n-1}^2(v,x_{i+1}) -2 d_{n-1}(x_i,v)d_{n-1}(v,x_{i+1})\cos(\theta). \end{eqnarray*} We can form an inequality by replacing $\cos(\theta)$ with the larger $\cos(\alpha)$: \begin{eqnarray*} d_{n}^2(x_i,x_{i+1}) \ge d_{n-1}^2(x_i,v) +d_{n-1}^2(v,x_{i+1}) -2 d_{n-1}(x_i,v)d_{n-1}(v,x_{i+1})\cos(\alpha). \end{eqnarray*} Then we can apply our fact with $a=d_{n-1}(x_i,v)$ and $b=d_{n-1}(v,x_{i+1})$. \begin{eqnarray*} d_{n}^2(x_i,x_{i+1}) \ge (d_{n-1}(x_i,v)+ d_{n-1}(v,x_{i+1}))^2\left(\frac{1-\cos(\alpha)}{2}\right). \end{eqnarray*} This leads us to the conclusion that \begin{eqnarray*} d_{n}(x_i,x_{i+1}) &\ge& \left(d_{n-1}(x_i,v)+ d_{n-1}(v,x_{i+1})\right)\sqrt{\frac{1-\cos(\alpha)}{2}} \\ &\ge& d_{n-1}(x_i,x_{i+1})\sqrt{\frac{1-\cos(\alpha)}{2}}.
\end{eqnarray*} When we combine the cases where faces intersect with the case where they do not, we get the following inequality: \begin{eqnarray*} d_{n-1}(x_i,x_{i+1}) \le \max\left(\sqrt{\frac{2}{1-\cos(\alpha)}},\frac{\max_{y_1,y_2 \in Y^{(n-1)}} d_{n-1}(y_1,y_2)}{\min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2)} \right) d_{n}(x_i,x_{i+1}). \end{eqnarray*} We can sum and use the fact that we had a distance minimizing path in $X^{(n)}$ to get \begin{eqnarray*} d_{n-1}(g,h) \le \max\left(\sqrt{\frac{2}{1-\cos(\alpha)}},\frac{\max_{y_1,y_2 \in Y^{(n-1)}} d_{n-1}(y_1,y_2)}{\min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2)} \right) d_{n}(g,h). \end{eqnarray*} We can repeat this argument for the lower dimensions (down to dimension 1) to get: \begin{eqnarray*} d_{1}(g,h) \le \prod_{i=1}^n \max\left(\sqrt{\frac{2}{1-\cos(\alpha)}},\frac{\max_{y_1,y_2 \in Y^{(i-1)}} d_{i-1}(y_1,y_2)}{\min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2)} \right) d_{n}(g,h). \end{eqnarray*} To compare with the distance in $G$, we see that \begin{eqnarray*} \min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2) d_G(g,h) \le d_{1}(g,h). \end{eqnarray*} We will define $C_{XG}$ to be \begin{eqnarray*} C_{XG} = \frac{1}{\min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2)} \prod_{i=1}^n \max\left(\sqrt{\frac{2}{1-\cos(\alpha)}},\frac{\max_{y_1,y_2 \in Y^{(i-1)}} d_{i-1}(y_1,y_2)}{\min_{v_1,v_2 \in Y^{(0)}} d_{1}(v_1,v_2)} \right). \end{eqnarray*} This gives us the inequality: \begin{eqnarray*} d_G(g,h) \le C_{XG} d_{X}(g,h). \end{eqnarray*} The inequality implies the containment $G \cap B_X(\frac{r}{C_{XG}}) \subset B_G(r)$ by the argument from the start of this proof. \end{proof} \section{Comparing functions on $X$ with corresponding ones on $G$} We can compare the norm of $f$ with the norm of $\group{f}$, as well as the norm of $\nabla f$ with that of its analogue. Note that given a radius, $R$, Corollary \ref{PoincareExtend1} tells us that we have a uniform Poincar\'{e} inequality for $f$ on balls of radius at most $R$. This will be helpful for our comparison. In particular, we will use this where $C_{P}$ is the constant associated to the Poincar\'{e} inequality for balls of radius up to $3 \operatorname{diam}(Y)$. Note that if we took $\delta =\operatorname{diam}(Y)$, we could cover $Y$ with exactly one ball. \begin{lemma}\label{fToGroupf} Let $B_X(r) := B_X(g',r)$ be a ball in $X$ centered at $g' \in G$. For any $c \in R$, we can compare $f:X\rightarrow R$ with $\group{f}: G\times\{1,..,N\} \rightarrow R$ in the following manner: \begin{eqnarray*} ||f-c||_{p,B_X(r)}^p \le C \left(\delta^p ||\nabla f||_{p,B_X(r+2\delta)}^p + ||\group{f} -c||_{p,B_G(C_{XG}(r+\delta+\operatorname{diam}(Y)))}^p\right). \end{eqnarray*} When $r=\infty$, this says that: \begin{eqnarray*} ||f-c||_{p,X}^p \le C \left(\delta^p ||\nabla f||_{p,X}^p +||\group{f} -c||_{p,G}^p \right). \end{eqnarray*} The constant $C$ depends on $X$, $p$, and $\delta$. \end{lemma} \begin{proof} We begin by rewriting the norm using the fact that the balls of radius $\delta$ centered at those $g\gamma_i$ which lie in $B_X(r+\delta)$ cover $B_X(r)$. \begin{eqnarray*} ||f-c||_{p,B_X(r)}^p &=& \int_{B_X(r)} |f(x) -c|^p dx \\ &\le& \sum_i \sum_{g\gamma_i \in B_X(r+ \delta)} \int_{B_X(g\gamma_i,\delta)} |f(x) -c|^p dx . \end{eqnarray*} We'll use the fact that $|f(x) -c|^p \le 2^p |f(x) -\group{f(g,i)}|^p + 2^p |\group{f(g,i)} -c|^p$ to split this into two pieces. In the first piece, we can simplify using the local Poincar\'{e} inequality in $X$.
\begin{equation*} \begin{split} \sum_i \sum_{g\gamma_i \in B_X(r + \delta)} & 2^p \int_{B_X(g\gamma_i,\delta)} |f(x) -\group{f(g,i)}|^p dx \\ &\le \sum_i \sum_{g\gamma_i \in B_X(r + \delta)} 2^p \delta^p C_{P} \int_{B_X(g\gamma_i,\delta)} |\nabla f(x)|^p dx \\ &\le 2^p \delta^p C_{P} C_{N,S} \int_{B_X(r + 2\delta)} |\nabla f(x)|^p dx . \end{split} \end{equation*} In the second, we first note that there is no $x$ dependence in the integrand. We integrate to get the volume of the ball. This will be dominated by the largest such volume. \begin{equation*} \begin{split} \sum_i &\sum_{g\gamma_i \in B_X(r + \delta)} 2^p \int_{B_X(g\gamma_i,\delta)} |\group{f(g,i)} -c|^p dx \\ &= \sum_i \sum_{g\gamma_i \in B_X(r + \delta)} 2^p \mu(B_X(g\gamma_i,\delta)) |\group{f(g,i)} -c|^p \\ &\le 2^p \left( \max_{g\gamma_i \in B_X(r+ \delta)} \mu(B_X(g\gamma_i,\delta)) \right) \sum_i \sum_{g\gamma_i \in B_X(r + \delta)} |\group{f(g,i)} -c|^p \end{split} \end{equation*} The remaining sum is the $p$th power of the norm of $\group{f}-c$ over the centers lying in $B_X(r+\delta)$. We switch to the norm in $G$ using the distance comparisons from Lemma \ref{CompareDistance}. \begin{equation*} \begin{split} ... &= 2^p \max_{g\gamma_i \in B_X(r+ \delta)} \mu(B_X(g\gamma_i,\delta)) ||\group{f} -c||_{p,B_X(r + \delta)}^p \\ &\le 2^p \max_{g\gamma_i \in B_X(r+ \delta)} \mu(B_X(g\gamma_i,\delta)) ||\group{f} -c||_{p,B_G(C_{XG}(r+\delta+\operatorname{diam}(Y)))}^p . \end{split} \end{equation*} When we put these together we have for some constant $C$: \begin{eqnarray*} ||f-c||_{p,B_X(r)}^p \le C \left(\delta^p ||\nabla f||_{p,B_X(r+2\delta)}^p + ||\group{f} -c||_{p,B_G(C_{XG}(r+\delta+\operatorname{diam}(Y)))}^p\right). \end{eqnarray*} Note that the uniformity of $X$ tells us that $\mu(B_X(g\gamma_i,\delta))$ can be bounded by a constant. In particular, we use the fact that $\mu(B_X(g\gamma_i,\delta))=\mu(B_X(h\gamma_i,\delta))$ for any $g,h \in G$. \end{proof} We can also bound the gradients of $f$ and $\group{f}$ in their respective norms. \begin{lemma}\label{gradGroupftogradf} Let $f\in \operatorname{Lip}(X)$ and $B_G(r)$ be given. For $1 \le p <\infty$, we have \begin{eqnarray*} ||\nabla \group{f} ||_{p,B_G(r)}^p \le C(\delta) ||\nabla f||_{p,B_X(C_0 r+2 \operatorname{diam}(Y))}^p. \end{eqnarray*} When $r = \infty$, this is: \begin{eqnarray*} ||\nabla \group{f} ||_{p,G}^p \le C(\delta) ||\nabla f||_{p,X}^p. \end{eqnarray*} $C(\delta) = \max_{\gamma_i} \frac{\mu(B_X(g\gamma_i,R))}{\mu(B_X(g\gamma_i,\delta))^2} N \max_{x\in X} \# \{B_X(g\gamma_i,R) \,|\, x \in B_X(g\gamma_i,R)\} C_{P} R^p$. \\ Note that this constant depends on $X$, $p$, $N$, and $\delta$. \end{lemma} \begin{proof} Here, $R$ is a large enough radius so that for any $g$ and $\gamma_i$ both $B_X(g\gamma_i,\delta)$ and $B_X(gs\gamma_i,\delta)$ are covered by $B_X(g\gamma_i,R)$. Note that $R = \operatorname{diam}(Y)+\delta$ will work, but to remove the dependence on $\delta$, we can take $R = 2\operatorname{diam}(Y)$. We start by explicitly writing out the gradient and then moving the power $p/2$ inside the sum via Jensen. \begin{eqnarray*} \lefteqn{||\nabla \group{f} ||_{p,B_G(r)}^p}\\ & = & \sum_i \sum_{g\in B_G(r)} \left( \frac{1}{|S|}\sum_{s \in S} |\group{f(g,i)} -\group{f(gs,i)}|^2 \right)^{p/2}\\ &\le& \sum_i \sum_{g\in B_G(r)} \frac{1}{|S|} \sum_{s \in S} |\group{f(g,i)} -\group{f(gs,i)}|^{p}\\ &= &\sum_i \sum_{g \in B_G(r)} \frac{1}{|S|} \sum_{s \in S} | \Xint-_{B_X(g\gamma_i,\delta)} f(x) dx - \Xint-_{B_X(gs\gamma_i,\delta)} f(y) dy|^p. \end{eqnarray*} We apply Jensen again; this time to the absolute value.
\begin{eqnarray*} ...&\le& \sum_i \sum_{g \in B_G(r)} \frac{1}{|S|}\sum_{s \in S} \Xint-_{B_X(g\gamma_i,\delta)} \Xint-_{B_X(gs\gamma_i,\delta)} |f(x) - f(y)|^p dx dy . \end{eqnarray*} The regularity of the space and the cover tell us $\mu(B_X(g\gamma_i,\delta)) =\mu(B_X(g s \gamma_i,\delta))$. \begin{eqnarray*} ...&=& \sum_i \sum_{g \in B_G(r)} \frac{1}{|S|}\sum_{s \in S} \int_{B_X(g\gamma_i,\delta)} \int_{B_X(gs\gamma_i,\delta)} |f(x) - f(y)|^p \frac{dx dy}{\mu(B_X(g\gamma_i,\delta))^2}. \end{eqnarray*} We expand the sets we are integrating over to $B_X(g\gamma_i,R)$. This larger set contains both $B_X(g\gamma_i,\delta)$ and $B_X(gs\gamma_i,\delta)$ by construction. We then rewrite the sum over $S$, and change one integral to an average integral. \begin{eqnarray*} ...&\le &\frac{1}{|S|} \sum_i \sum_{g \in B_G(r)} \sum_{s \in S} \frac{1}{\mu(B_X(g\gamma_i,\delta))^2} \int_{B_X(g\gamma_i,R)} \int_{B_X(g\gamma_i,R)} |f(x) - f(y)|^p dx dy \\ & = & \frac{1}{|S|} \sum_i \sum_{g \in B_G(r)} |S| \frac{\mu(B_X(g\gamma_i,\delta))}{\mu(B_X(g\gamma_i,\delta))^2} \int_{B_X(g\gamma_i,R)} \Xint-_{B_X(g\gamma_i,R)} |f(x) - f(y)|^p dx dy . \end{eqnarray*} We now apply a local $p$ Poincar\'{e} inequality on $X$ to $f$. The constant for this is $C_{P}$. \begin{eqnarray*} ...&\le & \sum_i \sum_{g \in B_G(r)} C_{P} R^p \frac{\mu(B_X(g\gamma_i,R))}{\mu(B_X(g\gamma_i,\delta))^2} \int_{B_X(g\gamma_i,R)} |\nabla f(x)|^p dx. \end{eqnarray*} We combine the sums and integral into a single integral. All of the $B_X(g\gamma_i,R)$ for $g\in B_G(r)$ are contained in $B_X(C_0 r+R)$ by our distance comparison between $G$ and $X$. We multiply this integral by the number of overlapping balls in our sum. $C_M$ is $N$ times the maximum number of balls $B_X(g\gamma_i,R)$ which overlap at a point in $X$. \begin{eqnarray*} ...&\le& \left(\max_{\gamma_i} \frac{\mu(B_X(g\gamma_i,R))}{\mu(B_X(g\gamma_i,\delta))^2}\right) C_{M} C_{P} R^p \int_{B_X(C_0 r+R)} |\nabla f(x)|^p dx \\ &=& C(\delta) ||\nabla f||_{p,B_X(C_0 r+R)}^p. \end{eqnarray*} \end{proof} \section{Poincar\'{e} inequalities on X with underlying group structure} The bounds in the previous section can be used to transfer inequalities between $X$ and $G$. We can combine them with the weak Poincar\'{e} inequality on $G$ to get an inequality on $X$. \begin{theorem}\label{WeakGtoX} Let $X$ be a volume doubling Euclidean complex, and let $G$ be a finitely generated group with $X/G=Y$ a finite admissible polytopal complex. Then $X$ admits a Poincar\'{e} inequality with a uniform constant at all scales. Let $f\in \operatorname{Lip}(X)$. For $1 \le p < \infty$, we have: \begin{eqnarray*} \inf_c||f-c||_{p,B_X(r)} \le C r ||\nabla f||_{p,B_X(r)}. \end{eqnarray*} Note that this implies: \begin{eqnarray*} ||f-f_{B_X(r)}||_{p,B_X(r)} \le 2C r ||\nabla f||_{p,B_X(r)}. \end{eqnarray*} Here the balls can be centered at any point in $X$. \end{theorem} \begin{proof} Note that we chose $C_{P}$ so that the Poincar\'{e} inequality holds for balls of radius up to $3\operatorname{diam}(Y)$. We need to show that it also holds for balls of radius greater than $3\operatorname{diam}(Y)$. Let $r\ge 3\operatorname{diam}(Y)$ be given. To start, we will assume that the center of $B_X(r)$ is in $G$. Pick $\delta=\operatorname{diam}(Y)$; this will force $N=1$. This will allow us to split things up in such a way that we can use the weak Poincar\'{e} inequality on $G$. If we had several indices $i$, that is, several copies of $G$, we wouldn't necessarily have the same average on each of them.
By choosing a value of $c$, we obtain something at least as large as the infimum: \begin{eqnarray*} \inf_c ||f-c||_{1,B_X(r)} \le ||f-\group{f}_{B_G(2C_{XG} r)}||_{1,B_X(r)}. \end{eqnarray*} Then we can use our first bound to get \begin{eqnarray*} \lefteqn{||f-\group{f}_{B_G(2C_{XG} r)}||_{1,B_X(r)}} \\ &\le& C \left(\delta ||\nabla f||_{1,B_X(r+\delta)} + ||\group{f} -\group{f}_{B_G(2C_{XG} r)} ||_{1,B_G(C_{XG}(r+\delta+\operatorname{diam}(Y)))}\right)\\ &\le &C \left(\delta ||\nabla f||_{1,B_X(1.5r)} + ||\group{f} -\group{f}_{B_G(2C_{XG} r)}||_{1,B_G(2C_{XG} r)}\right). \end{eqnarray*} Happily, we can apply the weak Poincar\'{e} inequality on groups (Lemma \ref{groupVDpoincare}) to the second term: \begin{eqnarray*} ||\group{f} -\group{f}_{B_G(2C_{XG} r)}||_{1,B_G(2C_{XG} r)} \le 4 C_{XG} C_{Doubling} r \sqrt{|S|} ||\nabla \group{f}||_{1,B_G(6C_{XG} r)}. \end{eqnarray*} Then, we can use the bound we have on the gradients to get an inequality on $\nabla f$. Setting $R=2\operatorname{diam}(Y) < r$ tells us that $B_X((6r+R)C_0 C_{XG}) \subset B_X(7C_0 C_{XG} r)$. \begin{eqnarray*} ||\nabla \group{f} ||_{1,B_G(6C_{XG} r)} \le C(\delta) ||\nabla f||_{1,B_X(7C_0 C_{XG} r)}. \end{eqnarray*} Combining these, we have: \begin{eqnarray*} \inf_c ||f-c||_{1,B_X(r)} &\le& C \left(\delta ||\nabla f||_{1,B_X(1.5r)} + 4 C_{XG} C_{Doubling} r \sqrt{|S|} C(\delta) ||\nabla f||_{1,B_X(7C_0 C_{XG} r)}\right) \\ &\le& C' r ||\nabla f||_{1,B_X(7C_0 C_{XG} r)}. \end{eqnarray*} As in Lemma \ref{PoincareP}, we have: \begin{eqnarray*} ||f-f_{B_X(r)}||_{1,B_X(r)} \le 2 \inf_c ||f-c||_{1,B_X(r)}. \end{eqnarray*} If the center, $x$, were not in $G$, there is some $g' \in G$ such that the center is within $\operatorname{diam}(Y)$ of $g'$. That is, $d_X(x,g') \le \operatorname{diam}(Y)$. By inclusions of balls, we know that: \begin{eqnarray*} \inf_c ||f-c||_{1,B_X(x,r)} \le \inf_c ||f-c||_{1,B_X(g',r+\operatorname{diam}(Y))}. \end{eqnarray*} As $r+\operatorname{diam}(Y) \le 1.5r$ and $B_X(g',7 C_0 C_{XG} 1.5 r) \subset B_X(x,12 C_0 C_{XG} r)$, we can switch centers by increasing the radius: \begin{eqnarray*} ||\nabla f||_{1,B_X(g',7 C_0 C_{XG} 1.5 r)} \le ||\nabla f||_{1,B_X(x, 12C_0 C_{XG} r)}. \end{eqnarray*} This tells us that any complex $X$ with the underlying group structure admits a weak $p=1$ Poincar\'{e} inequality. $X$ is volume doubling, and so this weak inequality can be turned into a strong $p$ Poincar\'{e} inequality via repeated application of a Whitney cover, using Corollary \ref{PoincareExtend1}. \end{proof} In \cite{Varo} Varopoulos showed that groups with polynomial growth of degree $d$ have on diagonal behavior $p_{2n}(e,e) \approx n^{-d/2}$. We show that a similar result holds for volume doubling complexes with underlying group structure. \begin{theorem} \label{XuniformPoincarefromG} Assume $X$ is a volume doubling Euclidean complex and $G$ is a finitely generated group with $X/G=Y$, where $Y$ is a finite admissible polytopal complex. Then $X$ satisfies the on diagonal heat kernel estimates: \begin{eqnarray*} \frac{1}{C \mu(B(x,\sqrt{t}))} \le h_t(x,x) \le \frac{C}{\mu(B(x,\sqrt{t}))} \end{eqnarray*} $X$ also satisfies the off diagonal heat kernel lower bound: \begin{eqnarray*} \frac{1}{C \mu(B(x,\sqrt{t}))} \exp\left(-C \frac{d_X^2(x,y)}{t}\right) \le h_t(x,y), \end{eqnarray*} as well as the upper bound: \begin{eqnarray*} h_t(x,y) \le \frac{C}{\sqrt{\mu(B(x,\sqrt{t}))\mu(B(y,\sqrt{t}))}} \exp\left(-\frac{d_X^2(x,y)}{4t}\right) \left(1 + \frac{d_X^2(x,y)}{t}\right)^{n/2}. \end{eqnarray*} \end{theorem} \begin{proof} To get the heat kernel bounds, apply Theorems \ref{Sturm1} and \ref{Sturm2} of Sturm \cite{Sturm}, noting that we have satisfied both volume doubling and a Poincar\'{e} inequality uniformly at all scales. \end{proof}
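As a quick sanity check of these estimates (illustrative only; nothing later depends on it), take the earlier example $X = R^2$, $G = Z^2$, $Y = [0,1]^2$. There the heat kernel is the classical Gaussian kernel, so on the diagonal \begin{eqnarray*} h_t(x,x) = \frac{1}{4\pi t} \quad \text{ and } \quad \mu(B(x,\sqrt{t})) = \pi t, \end{eqnarray*} and the two sides of the on diagonal estimate indeed agree up to the constant $C$.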
\section{Mapping functions on $G$ to $X$} Now we will look at how to take functions on $G$ to smooth versions on $X$. Let $f$ be a function mapping $G$ to the reals. We'll look at a partition of unity on the complex, $X$, which is created by translating a smooth function $\chi$ by $g\in G$. Then $\sum_{g \in G} \chi_g(x) =1$. We require the following: \begin{itemize} \item $\chi_g(x) = 1$ if $d_X(x,g) \le \frac{1}{4}$ \item $\chi_g(x) = 0$ if $d_X(x,g) \ge C_{sup}$ \item $|\nabla \chi_g(x)| \le C_g$ \end{itemize} We know that $|\{g \in G : d_X(g,e) \le C_{sup} \}|$ is finite; when $Y$ is nice and $C_{sup}=1$ this will be $|S|$. We also know that for any $x\in X$, $|\{g \in G : \chi_g(x) \ne 0\}|$ is finite. In particular, there is a uniform bound, $C_{Over}$. This allows us to define a nice smooth function, $\comp{f(x)}$, mapping $X$ to the reals: \begin{eqnarray*} \comp{f(x)} = \sum_{g \in G} f(g) \chi_g(x). \end{eqnarray*}
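For instance (an illustrative sketch only; none of the specific constants here are used later), on the Cayley graph of $Z$ with $S=\{+1,-1\}$ we may take $\chi_g$ to be the piecewise linear bump which is $1$ on $B_X(g,\frac{1}{4})$, $0$ outside $B_X(g,\frac{3}{4})$, and linear in between; a smooth version is obtained by rounding the corners. Along the edge from $g$ to $g+1$, writing $x = g+u$ with $\frac{1}{4} \le u \le \frac{3}{4}$, \begin{eqnarray*} \chi_g(x) + \chi_{g+1}(x) = \left(\frac{3}{2} - 2u\right) + \left(2u - \frac{1}{2}\right) = 1, \end{eqnarray*} so these translates do form a partition of unity, here with $C_{sup} = \frac{3}{4}$ and $C_g = 2$.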
Its $L^p$ norm is comparable to that of $f$. \begin{theorem}\label{Compcompare} Let $f: G\rightarrow R$ be given. If we limit ourselves to a ball, $B_G(r)$, with radius at least 1, we can compare $L^p$ norms in the following way: \begin{eqnarray*} C_1 ||f-c||_{p,B_G(\frac{r-.25}{C_0})} \le ||\comp{f}-c||_{p,B_X(r)} \le C_2 ||f-c||_{p,B_G(C_{XG}(r+C_{sup}))} \end{eqnarray*} This holds for any $c\in R$. Note that when $r=\infty$ and $c=0$ we have a nice bound on the norms: \begin{eqnarray*} C_1 ||f||_{p,G} \le ||\comp{f}||_{p,X} \le C_2 ||f||_{p,G}. \end{eqnarray*} For both of these inequalities, $C_1=\mu(B_X(e,\frac{1}{4}))^{\frac{1}{p}}$ and $C_2=C_{Over}^{(p-1)/p} ||\chi_e||_{p,X}$. \end{theorem} \begin{proof} If we limit ourselves to a ball, $B_G(r)$, with radius at least 1, we have a comparison. We first write out the definition of the norm, and then we use the fact that for every $x$, $\sum_{g \in G} \chi_g(x) =1$. \begin{eqnarray*} ||\comp{f} -c||_{p,B_X(r)} &=& \left(\int_{B_X(r)} |\sum_{g \in G} f(g) \chi_g(x)-c|^p dx\right)^{\frac{1}{p}} \\ &=& \left(\int_{B_X(r)} |\sum_{g \in G} (f(g)-c) \chi_g(x)|^p dx\right)^{\frac{1}{p}}. \end{eqnarray*} For each $x$, at most $C_{Over}$ of the $\chi_g(x)$ are nonzero. This allows us to apply a discrete version of Jensen to move the exponent into the sum. \begin{eqnarray*} ...&\le& \left(\int_{B_X(r)}C_{Over}^{p-1} \sum_{g \in G} |f(g)-c|^p \chi_g(x)^p dx\right)^{\frac{1}{p}}. \end{eqnarray*} The only $g$ with a nonzero $\chi_g(x)$ will be those within $X$ distance $C_{sup}$ of a point in $B_X(r)$. We can therefore sum over $g \in G\cap B_X(r+C_{sup})$, and switch the finite sum and the integral. \begin{eqnarray*} ...&\le& C_{Over}^{(p-1)/p} \left(\sum_{g \in G\cap B_X(r+C_{sup})}|f(g)-c|^p \int_{B_X(r)} \chi_g(x)^p dx\right)^{\frac{1}{p}}. \end{eqnarray*} The quantity $\int_{B_X(r)} \chi_g(x)^p dx$ will be bounded above by $\int_{X} \chi_g(x)^p dx = \int_{X} \chi_e(x)^p dx$. \begin{eqnarray*} ...&\le& C_{Over}^{(p-1)/p} ||\chi_e||_{p,X} \left(\sum_{g \in G\cap B_X(r+C_{sup})}|f(g)-c|^p \right)^{\frac{1}{p}}. \end{eqnarray*} We then use the distance comparisons from Lemma \ref{CompareDistance} to get a norm with respect to distance in $G$. \begin{eqnarray*} ...&\le& C_{Over}^{(p-1)/p} ||\chi_e||_{p,X} ||f-c||_{p,B_G(C_{XG}(r+C_{sup}))}. \end{eqnarray*} Now we will show the other inequality. By definition, we can write the norm in $G$ as: \begin{eqnarray*} ||f-c||_{p,B_G(r)} &=& \left(\sum_{g \in B_G(r)} |f(g)-c|^p\right)^{\frac{1}{p}} . \end{eqnarray*} We introduce $\chi_g$ by noting $\chi_g(x) = 1$ for $x$ in $B_X(g,\frac{1}{4})$, and integrating over this set. Due to the regularity of $X$, $\mu(B_X(g,\frac{1}{4}))$ does not depend on $g$, and so we write it as $\mu(B_X(e,\frac{1}{4}))$. \begin{eqnarray*} ...&=& \left(\sum_{g \in B_G(r)} \frac{1}{\mu(B_X(e,\frac{1}{4}))} \left(\int_{B_X(g,\frac{1}{4})} |f(g)-c|^p \chi_g(x) dx \right)\right)^{\frac{1}{p}}. \end{eqnarray*} We now will switch the integral and the sum. We are integrating only over $x$ in balls centered at points in $B_G(r)$ of radius $1/4$. This set can be written $\cup_{h \in B_G(r)}B_X(h,\frac{1}{4})$. \begin{eqnarray*} ...&\le& \left(\frac{1}{\mu(B_X(e,\frac{1}{4}))} \int_{\cup_{h \in B_G(r)}B_X(h,\frac{1}{4}) } \sum_{g \in G} |f(g)-c|^p \chi_g(x) dx \right)^{\frac{1}{p}}. \end{eqnarray*} For $x$ in $\cup_{h \in B_G(r)}B_X(h,\frac{1}{4})$, $\chi_g(x)=1$ for exactly one $g \in G$, and it is zero otherwise. This tells us $\sum_{g \in G} |f(g)-c|^p \chi_g(x) = |\sum_{g \in G}(f(g)-c) \chi_g(x)|^p$. We then can write the above as \begin{eqnarray*} ...&=& \left(\frac{1}{\mu(B_X(e,\frac{1}{4}))} \int_{\cup_{h \in B_G(r)}B_X(h,\frac{1}{4}) } |\sum_{g \in G} (f(g)-c) \chi_g(x)|^p dx \right)^{\frac{1}{p}}. \end{eqnarray*} We use the distance comparisons from Lemma \ref{CompareDistance} to see that $\cup_{h \in B_G(r)}B_X(h,\frac{1}{4}) \subset B_X(C_0 r+.25)$. \begin{eqnarray*} ...&\le& \left(\frac{1}{\mu(B_X(e,\frac{1}{4}))} \int_{B_X(C_0 r+.25) } |\sum_{g \in G} (f(g)-c) \chi_g(x)|^p dx \right)^{\frac{1}{p}}. \end{eqnarray*} Now we rewrite this using the fact that $\sum_{g \in G}\chi_g(x) =1$. \begin{eqnarray*} ...&=& \left(\frac{1}{\mu(B_X(e,\frac{1}{4}))} \int_{B_X(C_0 r+.25) } |\sum_{g \in G} f(g) \chi_g(x)-c|^p dx \right)^{\frac{1}{p}} \\ &=& \left(\frac{1}{\mu(B_X(e,\frac{1}{4}))}\right)^{\frac{1}{p}} ||\comp{f}-c||_{p,B_X(C_0 r+.25)}. \end{eqnarray*} \end{proof} We'd also like to compare the norms of the gradients. To do this, we want to write the gradient in such a way that we can compare it with the one on $G$. We first note that: \begin{eqnarray*} \nabla\left(\sum_{g \in G} \chi_g(x)\right) = \nabla 1 =0. \end{eqnarray*} This allows us to write the gradient of $\comp{f(x)}$, for any fixed $h \in G$, as \begin{eqnarray*} \nabla \comp{f(x)} = \sum_{g \in G} f(g) \nabla \chi_g(x) = \sum_{g \in G} (f(g) -f(h))\nabla \chi_g(x). \end{eqnarray*} \begin{lemma}\label{CompGradcompare} Let $f: G\rightarrow R$ be given. Then for any $r$ we have: \begin{eqnarray*} ||\nabla \comp{f(x)}||_{p,B_X(r)}^p \le C ||\nabla f||^p_{p,B_G(C_{XG}(r+3C_{sup}))} \end{eqnarray*} where $C = C_g^p \mu(B_X(e,C_{sup})) \operatorname{Vol}_G(G \cap B_X(e,2C_{sup}))^p |S|^p$. If we have $G \cap B_X(e,2C_{sup}) = S$, the generating set, then this is: \begin{eqnarray*} ||\nabla \comp{f(x)}||_{p,B_X(r)}^p \le C ||\nabla f||^p_{p,B_G(C_{XG}(r+C_{sup}))} \end{eqnarray*} where $C = C_g^p \mu(B_X(e,C_{sup})) |S|^p $. When $r=\infty$, this is: \begin{eqnarray*} ||\nabla \comp{f(x)}||_{p,X}^p \le C ||\nabla f||^p_{p,G}. \end{eqnarray*} \end{lemma} \begin{proof} We can cover $X$ with balls of radius $C_{sup}$.
This lets us rewrite the norm as follows: \begin{eqnarray*} ||\nabla \comp{f(x)}||_{p,B_X(r)}^p & = &\int_{B_X(r)} |\nabla \comp{f(x)}|^p dx \\ & \le& \sum_{h \in G\cap B_X(r+C_{sup})} \int_{B_X(h,C_{sup})} |\nabla \comp{f(x)}|^p dx \\ & =& \sum_{h \in G\cap B_X(r+C_{sup})} \int_{B_X(h,C_{sup})} \abs{\sum_{g \in G} (f(g) -f(h))\nabla \chi_g(x)}^p dx \end{eqnarray*} From its definition, we know that $\nabla \chi_g(x)$ will be nonzero only when $d_X(x,g) < C_{sup}$. As we're integrating over $x$ with $d_X(x,h) \le C_{sup}$, we can restrict our possible $g$ to those with $d_X(g,h) < 2C_{sup}$. Then we use the fact that $|\nabla \chi_g(x)| \le C_g$. \begin{equation*} \begin{split} \sum_{h \in G\cap B_X(r+C_{sup})}& \int_{B_X(h,C_{sup})} \abs{\sum_{g \in G \cap B_X(h,2C_{sup})} (f(g) -f(h))\nabla \chi_g(x)}^p dx \\ &\le C_g^p \sum_{h \in G\cap B_X(r+C_{sup})} \abs{ \sum_{g \in G \cap B_X(h,2C_{sup})} (f(g) -f(h)) }^p \mu(B_X(h,C_{sup})) . \end{split} \end{equation*} Note that by invariance, $\mu(B_X(h,C_{sup})) =\mu(B_X(e,C_{sup}))$. At this point, if we had $G \cap B_X(e,2C_{sup}) = S$, the generating set, we could proceed as follows. Otherwise, we'll need to expand things a little bit more. \begin{equation*} \begin{split} C_g^p &\mu(B_X(e,C_{sup})) |S|^p \sum_{h \in G\cap B_X(r+C_{sup})} \abs{ \sum_{s\in S} \frac{1}{|S|}(f(hs) -f(h)) }^p \\ &\le C_g^p \mu(B_X(e,C_{sup})) |S|^p \sum_{h \in G\cap B_X(r+C_{sup})} \left( \abs{\sum_{s\in S} \frac{1}{|S|}(f(hs) -f(h) )^2}\right)^{p/2} \\ &= C_g^p \mu(B_X(e,C_{sup})) |S|^p ||\nabla f||^p_{p,G \cap B_X(r+C_{sup})} \\ &\le C_g^p \mu(B_X(e,C_{sup})) |S|^p ||\nabla f||^p_{p,B_G(C_{XG}(r+C_{sup}))}. \end{split} \end{equation*} If $G \cap B_X(e,2C_{sup}) \ne S$, we could modify this by writing each $h^{-1}g$ as a shortest word $s_1..s_k$ and telescoping: \begin{eqnarray*} &&\lefteqn{\sum_{h \in G\cap B_X(r+C_{sup})} \abs{ \sum_{g \in G\cap B_X(h,2C_{sup})} (f(g) -f(h)) }^p} \\&& = \sum_{h \in G\cap B_X(r+C_{sup})} \abs{ \sum_{g \in G\cap B_X(h,2C_{sup})} \sum_{i=1}^{k} (f(hs_1..s_i) -f(hs_1..s_{i-1})) }^p \\ &&\le \sum_{h \in G\cap B_X(r+3C_{sup})} \operatorname{Vol}_G(G \cap B_X(e,2C_{sup}))^p \abs{\sum_{s \in S} (f(hs) -f(h)) }^p. \end{eqnarray*} This will yield the inequality: \begin{eqnarray*} ||\nabla \comp{f(x)}||_{p,B_X(r)}^p \le C ||\nabla f||^p_{p,B_G(C_{XG}(r+3C_{sup}))} \end{eqnarray*} for $C =C_g^p \mu(B_X(e,C_{sup})) \operatorname{Vol}_G(G \cap B_X(e,2C_{sup}))^p |S|^p $. Note that in these, $C_g$ is the bound on the gradient of $\chi_g$. \end{proof} \section{Poincar\'{e} inequality for volume doubling finitely generated groups} We can use these estimates along with our knowledge of complexes in order to show that volume doubling finitely generated groups admit a strong Poincar\'{e} Inequality. This is not a new fact, but it is a cute proof. \begin{theorem} Let $G$ be a finitely generated volume doubling group. Let $f: G\rightarrow R$ and $B_G(r) \subset G$ be given. Then \begin{eqnarray*} ||f-f_{B_G(r)}||_{1,B_G(r)} \le C r ||\nabla f||_{1,B_G(r)}. \end{eqnarray*} Here $C=8 C_P C_g |S|C_{sup} $ where $C_P$ is the constant in the global Poincar\'{e} inequality for $X$. \end{theorem} \begin{proof} Take any such group, and let $X$ be its Cayley graph. Theorem \ref{WeakGtoX} showed that strong Poincar\'{e} inequalities hold on $X$. We happily note that both $C_0$ and $C_{XG}$ are $1$ on a Cayley graph, and so we can omit them from our calculation. We form a chain of inequalities as follows.
From Theorem \ref{Compcompare}, we can set $c=(\comp{f})_{B_X(r+.25)}$ to get: \begin{equation*} \begin{split} ||f-(\comp{f})_{B_X(r+.25)}&||_{1,B_G(r)} \\ & \le \frac{1}{\mu(B_X(g,\frac{1}{4}))} ||\comp{f}-(\comp{f})_{B_X(r+.25)}||_{1,B_X(r+.25)}. \end{split} \end{equation*} Note that for every $g \in G$, $\mu(B_X(g,\frac{1}{4}))= \frac{|S|}{4}$. From Theorem \ref{WeakGtoX}, we know that: \begin{eqnarray*} ||\comp{f}-(\comp{f})_{B_X(r+.25)}||_{1,B_X(r+.25)} \le C_P r ||\nabla \comp{f}||_{1,B_X(r+.25)}. \end{eqnarray*} Then we transfer back, using the fact that $G \cap B_X(e,2C_{sup}) =S$. \begin{eqnarray*} ||\nabla \comp{f(x)}||_{1,B_X(r+.25)} \le C_g \mu(B_X(e,C_{sup})) |S| ||\nabla f||_{1,B_G(r+.25+C_{sup})}. \end{eqnarray*} We can evaluate this as $X$ is a Cayley graph: $\mu(B_X(e,C_{sup})) = |S|C_{sup}$. Since $X$ is a graph whose edges have unit length, we may take $C_{sup} <1$; in particular, we can pick $C_{sup}=.74$. Since our original ball, $B_G(r)$, is on the group, without loss of generality we know that $r$ is an integer. Then $B_G(r + C_{sup} +.25) = B_G(r+.99) = B_G(r)$ on the group. Combining this, we have: \begin{eqnarray*} ||f-\comp{f}_{B_X(r+.25)}||_{1,B_G(r)} \le \frac{1}{.25|S|} C_P r C_g |S|C_{sup} |S| ||\nabla f||_{1,B_G(r)}. \end{eqnarray*} We can get the desired left hand side, at the cost of a factor of $2$ (as in Lemma \ref{PoincareP}), from \begin{eqnarray*} ||f-f_{B_G(r)}||_{1,B_G(r)} \le 2 ||f-\comp{f}_{B_X(r+.25)}||_{1,B_G(r)}. \end{eqnarray*} We can use the graph structure to reduce this to: \begin{eqnarray*} ||f-f_{B_G(r)}||_{1,B_G(r)} \le 8 C_P C_g |S|C_{sup} r ||\nabla f||_{1,B_G(r)}. \end{eqnarray*} \end{proof} \chapter{Comparing Heat Kernels on X and G} The main goal of this chapter is to show that for large times, the heat kernel on the group is comparable to the heat kernel on the complex. The comparison was shown for groups and manifolds by Saloff-Coste and Pittet \cite{LSCP}. \begin{notation} To simplify notation, we use $p_t$ for the heat kernel on the group, and $h_t$ when it is on the complex. \end{notation} On a finitely generated group, the heat kernel can be used to describe a symmetric random walk. This is a walk where from a point $g\in G$, the probability of moving to $gs$ in one step is $\frac{1}{|S|}$ for each generator $s \in S$. The value of the heat kernel on the diagonal, $p_{2n}(e,e)$, gives us the probability of returning to the same point after $2n$ steps. We are interested in this for even numbers of steps because this avoids parity issues. The set-up for these walks can be found in \cite{Kesten}. \begin{definition} We say $f(t) \approx g(t)$ if there exist positive finite constants $C_1,C_2,C_3$, and $C_4$ so that \begin{eqnarray*} C_1 f(C_2 t) \le g(t) \le C_3 f(C_4 t). \end{eqnarray*} \end{definition} We will show that the following holds when $t \ge 1$: \begin{eqnarray*} p_{2\floor{t}}(e,e) \approx \sup_{x \in X} h_t(x,x). \end{eqnarray*} Note that it doesn't make sense to compare them for small times, since $p_t$ is only defined for integer values of $t$. An important notion in this proof is that of amenability. \begin{definition} A {\bf F\o lner sequence} is a sequence of finite subsets, $F(i)$, with the following properties: \\ (1) For any $g \in G$ there exists $i$ such that $g \in F(i)$, \\ (2) $F(i) \subset F(i+1)$, and \\ (3) For any finite subset $Q \subset G$, $\lim_{i \rightarrow \infty} \frac{\#(QF(i))}{\# F(i)} =1$. \\ Here, $QF(i)$ refers to the set $\{g : g = qf \text{ with }q \in Q, f \in F(i)\} $. \end{definition} \begin{definition} $G$ is {\bf amenable} if and only if $G$ admits a F\o lner sequence. \end{definition} \begin{example} The group of integers, $Z$, is amenable. Here, the sets $F(i) = \{-i,\dots,i\}$ form a F\o lner sequence. \end{example}
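To see condition (3) of the F\o lner property in action in this example (a quick check; it is not needed later), take $Q = \{-1,0,1\}$. Then $QF(i) = \{-i-1,\dots,i+1\}$, so \begin{eqnarray*} \frac{\#(QF(i))}{\# F(i)} = \frac{2i+3}{2i+1} \rightarrow 1 \qquad \text{as } i \rightarrow \infty. \end{eqnarray*}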
In order to show the comparison of the heat kernels, we will split it into two cases. In the first, we look at when $G$ is nonamenable. Here, $p_{2\floor{t}}(e,e) \approx e^{-t}$. Then we will look at when $G$ is amenable. We will first show $p_t$ is approximately less than or equal to $h_t$, and then we will show the reverse. \section{Heat kernels in the nonamenable case} We now look at the behavior of the heat kernel on $X$ and $G$ when $G$ is nonamenable. We write $H_t$ for the semigroup form of the heat kernel on $X$. It is related to $h_t(x,y)$ by $H_t f(x) = \int_X f(y) h_t(x,y) dy$. It is also written as $H_t = e^{-t\Delta}$. Alternatively, $h_t$ is called the transition function for $H_t$. Estimates on norms of functions and their derivatives can give us estimates on $||H_t||_{2\rightarrow 2}$. \begin{lemma}\label{fnormvsHt} \begin{eqnarray*} ||f||_2^2 \le C ||\nabla f||_2^2 \end{eqnarray*} will be true for all $f\in \operatorname{Dom}(\Delta)$ if and only if for all $t>0$, \begin{eqnarray*} ||H_t||_{2\rightarrow 2} \le e^{-t/C}. \end{eqnarray*} \end{lemma} \begin{proof} We will sketch the proof. We can show the forward implication by using integration by parts: \begin{eqnarray*} ||\nabla f||^2_2 = \int |\nabla f|^2 = \int f \Delta f = \int \sqrt{\Delta}f \sqrt{\Delta} f = ||\sqrt{\Delta} f||^2_2. \end{eqnarray*} This tells us that for any non-zero $f\in \operatorname{Dom}(\Delta)$, we have: \begin{eqnarray*} \frac{||\sqrt{\Delta} f||^2_2}{||f||^2_2} \ge \frac{1}{C}. \end{eqnarray*} We can take a square root and then an infimum to get: \begin{eqnarray*} \inf_{f \ne 0} \frac{||\sqrt{\Delta} f||_2}{||f||_2} \ge \frac{1}{\sqrt{C}}. \end{eqnarray*} This tells us that $\frac{1}{\sqrt{C}}$ is a lower bound on the spectrum of $\sqrt{\Delta}$. Spectral theory tells us that $\frac{1}{C}$ is a lower bound on the spectrum of $\Delta$, and $e^{-\frac{t}{C}}$ is an upper bound on the spectrum of $H_t = e^{-t\Delta}$. This yields \begin{eqnarray*} ||H_t||_{2\rightarrow 2} \le e^{-t/C}. \end{eqnarray*} For the reverse, consider the fact that \begin{eqnarray*} E(f,f) = \lim_{t \rightarrow 0} \frac{\langle(H_t - I)f,f \rangle }{t} = -\langle \nabla f ,\nabla f \rangle. \end{eqnarray*} We can use our bound to get: \begin{eqnarray*} \lim_{t \rightarrow 0} \frac{\langle(H_t - I)f,f\rangle}{t} \le \lim_{t \rightarrow 0} \frac{\langle(e^{-t/C} - 1)f,f\rangle}{t} = \langle(-1/C)f,f\rangle. \end{eqnarray*} Then since \begin{eqnarray*} E(f,f) = -\langle \nabla f ,\nabla f\rangle, \end{eqnarray*} we have \begin{eqnarray*} -\langle\nabla f ,\nabla f\rangle \le \langle(-1/C)f,f\rangle \end{eqnarray*} which gives us \begin{eqnarray*} ||f||_2^2 \le C||\nabla f||_2^2. \end{eqnarray*} \end{proof} We can transfer between estimates on $||H_t||_{2\rightarrow 2}$ and $h_t(x,y)$. Since the bound on the norm of $f$ will hold for nonamenable groups, we will combine Lemmas \ref{fnormvsHt} and \ref{Httopt} to get our heat kernel estimates. \begin{lemma} If $||H_t||_{2 \rightarrow 2}^2 \le e^{-2t/C}$, then for all $z\in X$ and $t \ge t'$: \begin{eqnarray*} h_t(z,z) \le h_{t'}(z,z)e^{-(t-t')/C}. \end{eqnarray*} \label{Httopt} \end{lemma} \begin{proof} Apply $H_t$ to $f(y) = \frac{h_s(y,z)}{||h_s(\cdot,z)||_2}$.
This gives us \begin{eqnarray*} H_t\frac{h_s(x,z)}{||h_s(\cdot,z)||_2} = \int_X \frac{h_s(y,z)}{||h_s(\cdot,z)||_2} h_t(x,y) dy = \frac{h_{t+s}(x,z)}{||h_s(\cdot,z)||_2}. \end{eqnarray*} Our estimate then tells us: \begin{eqnarray*} ||H_t||_{2 \rightarrow 2}^2 \ge \int_X \frac{h_{t+s}(x,z)^2}{||h_s(\cdot,z)||_2^2} dx = \int_X \frac{h_{t+s}(x,z)h_{t+s}(z,x)}{||h_s(\cdot,z)||_2^2} dx =\frac{h_{2t+2s}(z,z)}{||h_s(\cdot,z)||_2^2}. \end{eqnarray*} Note that \begin{eqnarray*} ||h_s(\cdot,z)||_2^2 = \int_X h_s(y,z)^2 dy = h_{2s}(z,z). \end{eqnarray*} When we combine this with the inequality for $H_t$, we have: \begin{eqnarray*} \frac{h_{2t+2s}(z,z)}{h_{2s}(z,z)} \le e^{-2t/C}. \end{eqnarray*} Fix $z$ and let $u(t)= h_{t}(z,z)$. This can be written as: \begin{eqnarray*} u(t+s) \le u(s)e^{-t/C}. \end{eqnarray*} This is equivalent to: \begin{eqnarray*} \frac{u(t+s)-u(s)}{t} \le u(s)\frac{e^{-t/C}-1}{t}. \end{eqnarray*} Taking the limit as $t \rightarrow 0^{+}$ gives us: \begin{eqnarray*} u'(s) \le (-1/C)u(s). \end{eqnarray*} This gives us the estimate that $u(t) \le u(t_0) e^{-(t-t_0)/C}$ for any $t\ge t_0$. Rewriting this, we have the long time decay for all $z\in X$ and $t \ge t'$: \begin{eqnarray*} h_t(z,z) \le h_{t'}(z,z)e^{-(t-t')/C}. \end{eqnarray*} \end{proof} Note that the converse is essentially true as well. If $h_t(z,z) \le h_{t'}(z,z)e^{-(t-t')/C}$ for $t \ge t'$, then we can construct an upper bound for $||H_t||_{2 \rightarrow 2}^2$ whenever $t \ge t'$. \begin{eqnarray*} ||H_t||_{2 \rightarrow 2}^2 &=& \sup_{||f||_2=1} \int_X \left(\int_X f(y) h_t(x,y) dy \right)^2 dx. \end{eqnarray*} Note that $\int_X f(y)h_t(x,y) dy \le ||f||_2 ||h_t(x,\cdot)||_2$ holds by H\"older. \begin{eqnarray*} ...&\le& \sup_{||f||_2=1} \int_X ||f||_2^2 ||h_t(x,\cdot)||_2^2 dx \\ &=& \int_X ||h_t(x,\cdot)||_2^2 dx \\ &=& \int_X \int_X h_t(x,y)h_t(y,x) dy dx \\ &=& \int_X h_{2t}(x,x) dx \\ &\le& \left(\int_X h_{t'}(x,x) dx \right) e^{-(2t-t')/C}. \end{eqnarray*} This gives us $||H_t||_{2 \rightarrow 2} \le C' e^{-t/C}$ for $t \ge t'$ where $C' = \left(e^{t'/C}\int_X h_{t'}(x,x) dx\right)^{1/2}$ depends only on $X$ and $t'$. In the case where $G$ is not amenable, it is well known that the heat kernel decays exponentially. This result was shown by Kesten \cite{Kesten}. In particular, for any $f \in \operatorname{Dom}(E)$, we know that: \begin{eqnarray*} ||f||_{2,G} \le C_G || \nabla f||_{2,G}. \end{eqnarray*} We can use averaging to show that this will hold on $X$ as well.
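To see what this inequality looks like in a concrete case (an illustrative sketch only; the specific value is not needed later), recall the free group on two generators from the earlier example. For the free group $F_k$ with its standard symmetric generating set, Kesten \cite{Kesten} computed the norm of the averaging operator $(Kf)(g) = \frac{1}{|S|}\sum_{s \in S} f(gs)$ to be $||K||_{2\rightarrow 2} = \frac{\sqrt{2k-1}}{k} < 1$. With the normalization of $E$ from the setup chapter, $E(f,f) = 2\langle (I-K)f,f \rangle$, so \begin{eqnarray*} ||\nabla f||_{2,G}^2 = E(f,f) \ge 2\left(1 - \frac{\sqrt{2k-1}}{k}\right) ||f||_{2,G}^2. \end{eqnarray*} For $k=2$ this gives the above inequality with $C_G = \left(2 - \sqrt{3}\right)^{-1/2} \approx 1.93$.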
\begin{lemma} \label{NonAmenableht} If $G$ is not amenable and $X/G=Y$, then for any $t'>0$ there exist constants $C_0 = e^{t'/C_1}\sup_{y \in Y}h_{t'}(y,y)$ and $C_1 = C (\delta^2 + C_G^2 C(\delta))$ so that for all $x,y \in X$ \begin{eqnarray*} h_t(x,y) \le C_0 e^{-t/C_1} \end{eqnarray*} holds for all $t \ge t'$. Note that $C, C(\delta)$ are as in Lemmas \ref{fToGroupf} and \ref{gradGroupftogradf}. \end{lemma} \begin{proof} Applying Lemma \ref{fToGroupf} with $p=2$, $r=\infty$ gives us: \begin{eqnarray*} ||f||_{2,X}^2 \le C \left(\delta^2 ||\nabla f||_{2,X}^2 + ||\group{f}||_{2,G}^2\right). \end{eqnarray*} The inequality for groups then tells us this is less than \begin{eqnarray*} ||f||_{2,X}^2 \le C \left(\delta^2 ||\nabla f||_{2,X}^2 + C_G^2 ||\nabla(\group{f})||_{2,G}^2\right). \end{eqnarray*} We can then bound the gradient in $G$ by the gradient in $X$ using Lemma \ref{gradGroupftogradf} with $p=2$, $r=\infty$: \begin{eqnarray*} ||f||_{2,X}^2 \le C \left(\delta^2 ||\nabla f||_{2,X}^2 + C_G^2 C(\delta) ||\nabla f||_{2,X}^2\right). \end{eqnarray*} Putting this together, we have: \begin{eqnarray*} ||f||_{2,X}^2 \le C (\delta^2 + C_G^2 C(\delta)) ||\nabla f||_{2,X}^2. \end{eqnarray*} We can apply Lemma \ref{fnormvsHt} with $C_1 = C (\delta^2 + C_G^2 C(\delta))$ to get $||H_t||_{2\rightarrow 2} \le e^{-t/C_1}$ on our complex, $X$. Then, apply Lemma \ref{Httopt} to get the on-diagonal heat kernel bound for any fixed $z$. \begin{eqnarray*} h_t(z,z) \le h_{t'}(z,z)e^{-(t-t')/C_1}. \end{eqnarray*} Because $X/G=Y$, we can shift $z$ by elements of $G$, and it won't affect our heat kernel. Specifically, this means $h_t(z,z)=h_t(gz,gz)$ for any $g \in G$. This allows us to consider only values of $h_t(y,y)$ for points $y \in Y$. This tells us that the supremum in $Y$ dominates: $\sup_{y \in Y} h_{t}(y,y) \ge h_t(z,z).$ Set $C_0 = e^{t'/C_1}\sup_{y \in Y} h_{t'}(y,y)$. Because $Y$ is compact and $h_{t'}(y,y)$ is continuous in $y$, for fixed $t'>0$ we will have $C_0<\infty$. As $\sup_{x,y} h_{t}(x,y) = \sup_y h_{t}(y,y)$, this will give us our overall bound. \end{proof} \begin{corollary}\label{nonam} If $G$ is not amenable and $X/G=Y$, then for $t\ge 1$ \begin{eqnarray*} \sup_{x \in X} h_t(x,x) \approx p_{2\floor{t}}(e,e). \end{eqnarray*} \end{corollary} \begin{proof} Kesten \cite{Kesten} showed that nonamenable groups have heat kernel behavior $p_{2\floor{t}}(e,e) \approx e^{-t/C}$. By Lemma \ref{NonAmenableht}, we have $h_{t}(x,x) \le c e^{-t/C}$. Since $\sup_{x \in X} h_{t}(x,x) \ge c' e^{-t/C'}$, we have the equivalence. \end{proof} \section{Heat Kernels in the Amenable Case} This is a modified version of the argument in the paper of Saloff-Coste and Pittet \cite{LSCP} which shows that the on diagonal heat kernel on a group is bounded above (in some sense) by the one on a manifold. The basic argument involves comparing eigenvalues and traces of the heat equation restricted to a finite set. We iterate through these sets using F\o lner sequences, and then we compare the heat kernels themselves. \subsection{Bounding those on G above by those on X} \begin{theorem}\label{plessh} Let $G$ be an amenable group and $X$ the associated complex. For times $t > 1$, we have constants $C, C_0$ so that \begin{eqnarray*} p_{\ceil{Ct}}(e,e) \le C_0 \sup_{x \in X} h_t(x,x). \end{eqnarray*} Here $C=2C_1C_2$ where $C_1$ and $C_2$ are the constants in Theorem \ref{Compcompare} and Lemma \ref{CompGradcompare}, and \\ $C_0= 2|S|^{C_{XG} R_0 / \min_{g\ne h} d_G(g,h)}$. \end{theorem} \begin{proof} Let $A$ be a finite subset of $G$, and let $A_0$ be the set of points in $X$ which surround it. That is, $A_0 := \{x \in X | d(x,A) <R_0 \}$. Because $R_0 \ge C_{sup}$, functions $f: G\rightarrow R$ which are supported in $A$ map to functions $\comp f: X \rightarrow R$ which are supported in $A_0$. Using Theorem \ref{Compcompare} and Lemma \ref{CompGradcompare}, we know that $||f||_2^2 \le C_1 ||\comp{f}||_2^2$ and $|| \nabla \comp{f}||_2^2 \le C_2 E(f,f)$. Combining these, we get: \begin{eqnarray*} \frac{|| \nabla \comp{f}||_2^2}{||\comp{f}||_2^2} &\le& C_1C_2 \frac{ E(f,f)}{||f||_2^2} \\ &=& C_1C_2 \frac{ \langle (I-K_A) f,f \rangle}{||f||_2^2} \\ &=& C_1C_2 \left(1 - \frac{ ||K^{1/2}_A f||_2^2}{||f||_2^2}\right). \end{eqnarray*} Here, we used the fact that $K_A$ is self-adjoint. We can apply the min-max principle in order to compare eigenvalues. Let $\lambda_{A_0}(i)$ be the $i$th eigenvalue of the Dirichlet Laplacian on $A_0 \subset X$ (whose semigroup is denoted $H_t^{A_0}$) and $\beta_{A}(i)$ the $i$th eigenvalue of $K$ restricted to $A \subset G$ (denoted $K_A$).
For eigenvalues $1..|A|$, we have: \begin{eqnarray*} \lambda_{A_0}(i) \le C_1C_2 (1 -\beta_{A}(i)). \end{eqnarray*} We can rewrite this as: \begin{eqnarray*} \beta_{A}(i) \le 1 - \frac{1}{C_1C_2} \lambda_{A_0}(i). \end{eqnarray*} As $\lambda_{A_0}(i)$ will be bounded below by 0, we can use $1-x \le e^{-x}$ to get: \begin{eqnarray*} \beta_{A}(i) \le e^{- \frac{1}{C_1C_2} \lambda_{A_0}(i)}. \end{eqnarray*} We can use this to compare the traces. Recall \begin{eqnarray*} \operatorname{Tr}(H_t^{A_0}) &=& \sum_i e^{-t\lambda_{A_0}(i)} \\ \operatorname{Tr}(K_A^n) &=& \sum_i \beta_{A}^n(i). \end{eqnarray*} When $\beta_{A}(i)\ge 0$, we have: \begin{eqnarray*} \beta_{A}^{2n}(i) \le e^{- \frac{1}{C_1C_2} \lambda_{A_0}(i)2n}. \end{eqnarray*} We will compare the negative $\beta_{A}(i)$ terms with the positive ones. We know that $0 \le \operatorname{Tr}(K_A^{2n+1})$. This means we can split the sum into two pieces and subtract the part with negative eigenvalues from both sides: \begin{eqnarray*} \sum_{\beta_{A}(i) <0} \abs{\beta_{A}^{2n+1}(i)} \le \sum_{\beta_{A}(i) >0} \beta_{A}^{2n+1}(i). \end{eqnarray*} Since all of the eigenvalues are between -1 and 1, we have: \begin{eqnarray*} \sum_{\beta_{A}(i) <0} \abs{\beta_{A}^{2n+2}(i)} \le \sum_{\beta_{A}(i) <0} \abs{\beta_{A}^{2n+1}(i)} \le \sum_{\beta_{A}(i) >0} \beta_{A}^{2n+1}(i) \le \sum_{\beta_{A}(i) >0} \beta_{A}^{2n}(i) . \end{eqnarray*} This tells us that \begin{eqnarray*} \operatorname{Tr}(K_A^{2n+2}) \le \sum_{\beta_{A}(i)} \abs{\beta_{A}^{2n+2}(i)} \le 2 \sum_{\beta_{A}(i) >0} \beta_{A}^{2n}(i) \le 2 \sum_i \beta_{A}^{2n}(i). \end{eqnarray*} We can compare the first $|A|$ terms in the two sums, and the extra terms in $\operatorname{Tr}(H_t^{A_0})$ will only help us: \begin{eqnarray*} \operatorname{Tr}(K_A^{2n+2}) \le 2\operatorname{Tr}(H_{\frac{2n}{C_1C_2}}^{A_0}). \end{eqnarray*} We are now in a good spot. We will compare the heat kernels with the respective traces. Fix $n$. Let $F(i)$ be a F\o lner sequence in $G$, and recall $S^n$ is the set of words in $G$ of length at most $n$. For each $i$ we will have a set \\ $A = S^nF(i)= \{ g : g = f u, f \in F(i), u \in S^n \}$. In \cite{LSCP}, Saloff-Coste and Pittet showed that for an amenable group $G$ we have the comparison: \begin{eqnarray*} p_{2n+2}(e,e) \le \frac{1}{|F(i)|} \operatorname{Tr}(K_A^{2n+2}). \end{eqnarray*} By the definition of the trace, we know that on the complex we have: \begin{eqnarray*} \operatorname{Tr}(H_t^{A_0}) &=& \sum_i e^{-t\lambda_{A_0}(i)} \\ &=& \int_{A_0} h^{A_0}_t(x,x) dx \\ &\le& \mu(A_0) \sup_{x\in A_0} h^{A_0}_t(x,x) \\ &\le& \mu(A_0) \sup_{x\in X} h_t(x,x). \end{eqnarray*} When we combine these, we find that: \begin{eqnarray*} p_{C_1C_2(2n+2)}(e,e) \le \frac{2\mu(A_0)}{|F(i)|}\sup_{x\in X} h_{2n}(x,x). \end{eqnarray*} We can compare $\mu(A_0)$ with $\operatorname{Vol}_G(A)$. Since $A_0 := \{x \in X | d_X(x,A) <R_0 \}$, each element in $A$ can expand to at most $|S|^{C_{XG} R_0 / \min_{g\ne h} d_G(g,h)}$ new elements in $A_0$. This tells us: \begin{eqnarray*} p_{C_1C_2(2n+2)}(e,e) \le 2|S|^{C_{XG} R_0 / \min_{g\ne h} d_G(g,h)} \frac{|A|}{|F(i)|}\sup_{x\in X} h_{2n}(x,x). \end{eqnarray*} We can now let $i$ go to infinity; since we have a F\o lner sequence, $\frac{|A|}{|F(i)|} =\frac{|S^n F(i)|}{|F(i)|} $ will tend to $1$. This leaves us with: \begin{eqnarray*} p_{C_1C_2(2n+2)}(e,e) \le 2|S|^{C_{XG} R_0 / \min_{g\ne h} d_G(g,h)} \sup_{x\in X} h_{2n}(x,x). \end{eqnarray*} \end{proof} \subsection{Bounding those on X above by those on G} We'd like to show the reverse inequality.
We will do this using a chain of comparisons. First, we will compare $h_t(x,x)$ with $h_t^W(x,x)$, where $h_t^W$ is the heat kernel for the diffusion killed upon exiting an open subset $W \subset X$. Then we will compare eigenvalues of $h_t^W(x,x)$ and $p_t^{W'}(e,e)$, where $p_t^{W'}(e,e)$ represents the probability of a random walk restricted to a set $W' \subset G$ returning to the identity, using our bounds on norms and minimax inequalities. Lastly, we use a comparison for $p_t^{W'}(e,e)$ and $p_t(e,e)$. At this point, we will remove some of the dependence on $W$, and limit away other factors to get the final result. We would like to look at what happens to diffusions in an open subset $W \subset X$. Let $\tau$ be the exit time for this set: $\tau = \inf \{t \ge 0 : X_t \notin W \}$. Then by the strong Markov property we have a restricted heat kernel: \begin{eqnarray*} h_t^W(x,y) =h_t(x,y) -E^x(h_{t-\tau}(X_{\tau},y)1_{\tau \le t}). \end{eqnarray*} Here, $X_{t}$ is the diffusion; at time $t=\tau$, $X_{\tau}$ is the point on $\partial W$ where the diffusion exits $W$. The term $h_{t-\tau}(X_{\tau},y)$ represents going from the point on the boundary to $y$ in the time $t-\tau$ which is left after exiting $W$. We take the expected value of this where $X_0 = x$. We can bound the expected value above by the maximum value. Since $E^x(h_{t-\tau}(X_{\tau},y)1_{\tau \le t}) \le \sup_{0<s<t}\sup_{z \in \partial W} h_s(z,y)$, we have \begin{eqnarray*} h_t^W(x,y) \ge h_t(x,y) - \sup_{0<s<t}\sup_{z \in \partial W} h_s(z,y). \end{eqnarray*} We can use this to bound $h_t^W(x,x)$ below for $x$ sufficiently far from $\partial W$. \begin{lemma}\label{htWbysupht} There exists a constant $C_H$ so that for all $\varepsilon_1 > 0$ there exists $a>0$ so that for all open subsets $W \subset X$ and for all $t \ge 6r_1^2$ we know that \begin{eqnarray*} h_t^W(x,x) \ge C_H^{-1} \sup_{y \in X} h_{t-3r_1^2}(y,y) -\varepsilon_1. \end{eqnarray*} for all $x \in \{x \in W : d(x,\partial W)> at^{1/2} \}$. Here, $r_1=\operatorname{diam}(Y)$. \end{lemma} \begin{proof} By Corollary \ref{offdiagonalHeat} we know that there are constants $C_1$ and $C_2$ so that \begin{eqnarray*} h_t(x,y) \le \frac{C_1}{\min(t,1)^{d/2}} e^{-C_2\frac{d^2(x,y)}{t}}. \end{eqnarray*} This estimate allows us to bound $\sup_{0<s<t}\sup_{z \in \partial W} h_s(z,y)$ whenever $y$ is at a distance at least $at^{1/2}$ away from the boundary of $W$. If $s \le 1$, then \\ $h_s(z,y) \le \frac{C_1}{s^{d/2}} e^{-C_2\frac{a^2t}{s}}$. This has a maximum at $s=\frac{2a^2 C_2 t}{d}$ which tells us: \begin{eqnarray*} h_s(z,y) \le C_1 \left(\frac{d}{2a^2 C_2 t}\right)^{d/2} e^{-d/2}. \end{eqnarray*} For $t \ge 6r_1^2$, this is maximized at $t=6r_1^2$. We have $C_1 (\frac{d}{12r_1^2 C_2 e})^{d/2} a^{-d} < \varepsilon_1$ when $a > C_1^{1/d} (\frac{d}{12r_1^2 C_2 e})^{1/2} \varepsilon_1^{-1/d}$. If $t>s>1$, then the maximum occurs when $s=t$: \begin{eqnarray*} h_s(z,y) \le C_1 e^{-C_2\frac{a^2 t}{s}} \le C_1 e^{-C_2 a^2}. \end{eqnarray*} We know that $C_1 e^{-C_2 a^2} < \varepsilon_1$ whenever $a > \sqrt{\frac{1}{C_2} \ln(\frac{C_1}{\varepsilon_1})}$. Thus, whenever $a > \max\left(C_1^{1/d} (\frac{d}{12r_1^2 C_2e})^{1/2} \varepsilon_1^{-1/d}, \sqrt{\frac{1}{C_2} \ln(\frac{C_1}{\varepsilon_1})}\right)$ we have \begin{eqnarray*} \sup_{0<s<t}\sup_{z \in \partial W} h_s(z,y) < \varepsilon_1. \end{eqnarray*} We can bound $h_t(x,x)$ below using a parabolic Harnack inequality.
Theorem 3.5 in Sturm \cite{Sturm} uses techniques in Moser \cite{Moser} to show that Poincar\'{e} and volume doubling locally imply a parabolic Harnack inequality. In our situation, we have uniformly bounded constants for both local Poincar\'{e} and volume doubling, and so the constant $C_H$ in the Harnack inequality will also be uniform. In the language of Sturm: \\ For all $K\ge 1$ and all $\alpha, \beta, \gamma, \delta$ with $0 < \alpha < \beta < \gamma < \delta$ and $0 < \varepsilon < 2$ there exists a constant $C_H = C_H(Y_1)$ such that for balls $B_{2r}(x) \subset Y_1$ and all $T$, \begin{eqnarray*} \sup_{(s,y) \in Q^-} u(s,y) \le C_H \inf_{(s,y) \in Q^+} u(s,y) \end{eqnarray*} whenever $L_T$ is a uniformly parabolic operator whose associated Dirichlet form is comparable by a factor of $K$ with the original Dirichlet form, and $u$ is a nonnegative local solution of the parabolic equation $(L_T - \frac{\partial}{\partial T}) u =0$ on $Q = (T-\delta r^2,T) \times B_{2r}(x)$. Here $Q^- = (T-\gamma r^2,T-\beta r^2) \times B_{\varepsilon r}(x)$ and $Q^+ = (T-\alpha r^2,T) \times B_{\varepsilon r}(x)$. We can translate this language to our situation. For us, $L_T = \Delta$, and so there is no $T$ dependence in the operator. This means the Dirichlet form condition will be trivially satisfied when $K=1$. We also will take $\varepsilon =1$. We will set $Y_1 = B_{3r_1}$. This is a ball which is large enough so that every equivalence class of $x \in Y$ has a representative in $Y_1$, as well as an associated copy of $Y$ in $Y_1$. When $t > 6r_1^2$, we can set $T = t + \frac{r^2}{2}$, $\alpha =1 $, $\beta =2$, $\gamma = 4$, and $\delta =5$. Then the time interval of $Q^+$ is $(t-\frac{r^2}{2}, t+\frac{r^2}{2})$, which contains $t$; the time interval of $Q^-$ is $(t-\frac{7r^2}{2}, t-\frac{3r^2}{2})$, which contains $t-3r^2$; and the time interval of $Q$ is $(t-\frac{9r^2}{2}, t+\frac{r^2}{2})$. Applying Sturm here gives us \begin{eqnarray*} \sup_{y \in B_r(x)} h_{t-3r^2}(y,y) \le C_H \inf_{y \in B_r(x)} h_t(y,y) \le C_H h_t(x,x). \end{eqnarray*} Due to the symmetry of the space $X$, $h_{s}(y,y)$ is unchanged when $y$ is translated by an element of $G$. For $r = \operatorname{diam}(Y)$, we have a copy of \\ $Y \subset B_r(x) \subset B_{2r}(x) \subset Y_1$ for every $x \in Y$. This tells us that \begin{eqnarray*} \sup_{y \in B_r(x)} h_{s}(y,y) =\sup_{y \in X} h_{s}(y,y). \end{eqnarray*} \end{proof} We can bound the integral of $h_t^W(x,x)$ above by an analogue of Lemma 5.3 in Saloff-Coste and Pittet \cite{LSCP}. \begin{lemma}\label{htWbylambdale1} For subsets $W \subset X$, $B >0$, and $t\ge 1$, \begin{eqnarray*} \int_W h_t^W(x,x) dx \le \sum_{\lambda_W(i) \le 1/B} e^{-t \lambda_W(i)} + C_1 2^{d/2} \mu(W)e^{-t/(2B)}. \end{eqnarray*} Here, $C_1$ and $d$ are defined as in Corollary \ref{offdiagonalHeat}, and $\lambda_W$ are the eigenvalues of $h_t^W$. \end{lemma} \begin{proof} When $a,b \ge 1$ we have the inequality $ab \ge a/2 +b/2$. Let $a=t$ and $b=B\lambda_W(i)$. Then for $t \ge 1$ and $\lambda_W(i) \ge 1/B$ we have \begin{eqnarray*} tB\lambda_W(i) \ge t/2 + B\lambda_W(i)/2. \end{eqnarray*} If we divide through by $B$, multiply by $-1$, and exponentiate we find \begin{eqnarray*} e^{-t \lambda_W(i)} \le e^{-t/(2B) - \lambda_W(i)/2}. \end{eqnarray*} This allows us to bound the sum over the larger eigenvalues: \begin{eqnarray*} \sum_{\lambda_W(i) \ge 1/B} e^{-t \lambda_W(i)} &\le& \sum_{\lambda_W(i) \ge 1/B} e^{-t/(2B) - \lambda_W(i)/2} \\ &\le& e^{-t/(2B)} \sum_{\lambda_W(i)} e^{- \lambda_W(i)/2} \\ &=& e^{-t/(2B)}\int_W h_{1/2}^W(x,x) dx \\ &\le& e^{-t/(2B)} C_1 2^{d/2} \mu(W).
\end{eqnarray*} In the last step, we used the bound in Corollary \ref{offdiagonalHeat}, which tells us $h_{1/2}^W(x,x) \le C_1 2^{d/2}$. Using the eigenvalue expansion, we can compare the integral of the heat kernel at times greater than one with the sum over small eigenvalues plus our bound on the sum over larger eigenvalues: \begin{eqnarray*} \int_W h_t^W(x,x) dx &=& \sum_{\lambda_W(i)} e^{-t \lambda_W(i)} \\ &\le& \sum_{\lambda_W(i) \le 1/B} e^{-t \lambda_W(i)} + C_1 2^{d/2} \mu(W)e^{-t/(2B)}. \end{eqnarray*} \end{proof} Let us consider what it means to have a Laplacian $\Delta^{\Omega}$ defined for functions restricted to a set $\Omega$ with polygonal boundary. Let the domain of $\Delta^{\Omega}$ be the closure of the intersection of $\operatorname{Dom}(\Delta)$ and the continuous functions which are compactly supported on $\Omega$; that is, $\operatorname{Dom}(\Delta^{\Omega}) = \overline{\operatorname{Dom}(\Delta) \cap C^C_0(\Omega)}$. Note that since $\operatorname{Dom}(\Delta) \cap C^C_0(\Omega) \subset \operatorname{Dom}(\Delta)$ and $\operatorname{Dom}(\Delta)$ is closed, we know that \\ $\operatorname{Dom}(\Delta^{\Omega}) \subset \operatorname{Dom}(\Delta)$. For functions $f \in \operatorname{Dom}(\Delta^{\Omega})$, we set $\Delta^{\Omega} f = \Delta f$. $\Delta^{\Omega}$ inherits many properties from $\Delta$. It is self-adjoint with a discrete spectrum, and as we will see in the following lemma, for the $\Omega$ that we are interested in there will be only finitely many eigenvalues which are close to $0$. We can show this by comparing operators restricted to subsets of $X$ to operators restricted to subsets of $G$. Let $A \subset G$ be given. Let $\Omega = U(A)$ be a subset of $X$ with polygonal boundary so that any function $f$ whose support is in $U(A)$ has an associated function $\group{f}$ whose support is in $A \times \{1,\dots,N\}$. In particular, we would like $U(A)$ to be close in size to $A$. Since $\group f(g,i) = \Xint-_{B_X(g\gamma_i,\delta)} f(x) dx $ averages over neighborhoods of points in $X$, we can guarantee a set with the volume estimate: \begin{eqnarray*} \min_{y \in Y} \mu(B(y, \delta)) N \# A \le \mu(U(A)) \le \mu(Y) N \#A. \end{eqnarray*} The following lemma will give us a comparison for small eigenvalues on $h_t^{U(A)}$. \begin{lemma}\label{htvsKAN} Let $A \subset G$ and $U(A) \subset X$ be given as above. Eigenvalues of $h_t^{U(A)}(x,x)$ and $p_n(e,e)$ are comparable in the following manner: \begin{eqnarray*} \sum_{i :0 \le \lambda_{U(A)}(i) \le 1/B} e^{-2 n \lambda_{U(A)}(i)} \le \#A N p_{2\floor{n/(B(2-\sqrt{2}))}}(e,e). \end{eqnarray*} Here $B=4 C(\sqrt{\frac{1}{2C_{grad}}})/(2-\sqrt{2})$, where $C_{grad}$ is the constant in Lemma \ref{fToGroupf} and $C(\cdot)$ is the constant from Lemma \ref{gradGroupftogradf}. $N$ depends on $\delta = \sqrt{\frac{1}{2 C_{grad}}}$. \end{lemma} \begin{proof} Suppose $u$ is a solution to $\Delta^{\Omega} u = \lambda u$ on a set $\Omega \subset X$ with polygonal boundary, and $u=0$ on $\partial \Omega$. Set $u=0$ outside of $\Omega$. For $u\in \operatorname{Dom}(\Delta^{\Omega})$ and $\lambda \ne 0$, a formal argument using integration by parts tells us: \begin{eqnarray*} \langle u,u \rangle &=& \frac{1}{\lambda} \langle \lambda u, u \rangle \\ &=& \frac{1}{\lambda} \langle \Delta^{\Omega} u, u \rangle \\ &=& \frac{1}{\lambda} \left( \langle \nabla u, \nabla u \rangle + \langle \nabla u, u \rangle |_{\partial \Omega} \right) \\ &=& \frac{1}{\lambda} \langle \nabla u, \nabla u \rangle . 
\end{eqnarray*} The boundary term vanishes because $u = 0$ on $\partial \Omega$. This gives us $||u||_2^2 = \frac{1}{|\lambda|}\, ||\nabla u||_2^2$. We know that such eigenfunctions exist because $\Delta^{\Omega}$ is self-adjoint. We will combine this with the inequality in Lemma \ref{fToGroupf} for eigenfunctions $f$ on the set $U(A)$: \begin{eqnarray*} ||f||_{2,X}^2 &\le& C_{grad}\,\delta^2 ||\nabla f||_{2,X}^2 +||\group{f}||_{2,G}^2 \\ & =& C_{grad}\, \delta^2 |\lambda|\, ||f||_{2,X}^2 +||\group{f}||_{2,G}^2. \end{eqnarray*} This tells us: \begin{eqnarray*} (1- C_{grad} \delta^2 |\lambda|)||f||_{2,X}^2 \le ||\group{f}||_{2,G}^2. \end{eqnarray*} If $\delta$ is less than $\sqrt{\frac{1}{|\lambda| C_{grad}}}$, we have a nice bound for that $\lambda$. In particular, $\delta = \sqrt{\frac{1}{2C_{grad}}}$ gives us a simple bound for all $|\lambda| \le 1$ because $1- (1/2) |\lambda| \ge 1/2$. \begin{eqnarray*} ||f||_{2,X}^2 \le 2 ||\group{f}||_{2,G}^2. \end{eqnarray*} Lemma \ref{gradGroupftogradf} tells us \begin{eqnarray*} ||\nabla \group{f} ||_{2,G}^2 \le C(\delta) ||\nabla f||_{2,X}^2. \end{eqnarray*} We have that for $C' = 2 C(\sqrt{\frac{1}{2C_{grad}}})$: \begin{eqnarray*} \frac{||\nabla \group{f} ||_{2,G}^2}{||\group{f}||_{2,G}^2} \le C' \frac{||\nabla f||_{2,X}^2}{||f||_{2,X}^2}. \end{eqnarray*} We can rewrite $\nabla \group{f}$ in terms of $K_{A,N}^{1/2}$, using $||\nabla \group{f}||_{2,G}^2 = ||\group{f}||_{2,G}^2 - ||K_{A,N}^{1/2} \group{f}||_{2,G}^2$: \begin{eqnarray*} 1 - \frac{||K_{A,N}^{1/2} \group{f}||_{2,G}^2}{||\group{f}||_{2,G}^2} \le C' \frac{||\nabla f||_{2,X}^2}{||f||_{2,X}^2}. \end{eqnarray*} This will allow us to compare the first $k$ eigenvalues of $h^{U(A)}_t$ with the absolute values of those for $K_{A,N}$, where $k = \min(\#A N, \#\{\lambda_{U(A)}(i) \in [0,1]\})$. The min-max definition will give us these eigenvalue comparisons. For simplicity, we will use $\lambda_{U(A)}(i)$ to refer to the $i$th smallest eigenvalue of $h^{U(A)}_t$, and $|\beta_A(i)|$ to refer to the $i$th largest absolute value of an eigenvalue of $K_{A,N}$. We have \begin{eqnarray*} 1- |\beta_{A}(i)| &\le& C' \lambda_{U(A)}(i) \text{ which can be written as}\\ 1- C' \lambda_{U(A)}(i) &\le& |\beta_{A}(i)|. \end{eqnarray*} When $1/2 \le x \le 1$, we know $x \ge e^{-2(1-x)}$. Applying that to $x= 1-C'\lambda_{U(A)}(i)$, we have \begin{eqnarray*} e^{-2C'\lambda_{U(A)}(i)} \le |\beta_{A}(i)| \end{eqnarray*} for $i \le k$ with $0 \le \lambda_{U(A)}(i) \le 1/(2C')$. Raising both sides to the power $n/C'$ gives: \begin{eqnarray*} e^{-2 n \lambda_{U(A)}(i)} \le |\beta_{A}(i)|^{n/C'}. \end{eqnarray*} We will have this bound for all of the $\lambda_{U(A)} \in [0,(2-\sqrt{2})/(2C')]$ provided we can show that we have an $i$ with $C'\lambda_{U(A)}(i) > (2-\sqrt{2})/2$. If we knew that $(2-\sqrt{2})/2 \le 1- |\beta_{A}(i)|$ for some $i$, then this would be shown. This means we want to have $|\beta_{A}(i)|^2 \le 1/2$ for some $i$. We know that $K_{A,N}$ is an $\#AN$ by $\#AN$ matrix whose entries are either $1/|S|$ or $0$ and that there are $|S|$ nonzero entries per row. When we look at its square, we have another $\#AN$ by $\#AN$ matrix whose entries are at most $|S|/|S|^2 = 1/|S|$ and at least $0$. $K_{A,N}^2$ has eigenvalues $|\beta_A(i)|^2$. This means that the largest $\operatorname{Tr}(K_{A,N}^2)$ could possibly be is $\#AN/|S|$, and so $\sum_{i=1}^{\#AN} |\beta_A(i)|^2 \le \#AN/|S|$. The average value of an eigenvalue $|\beta_{A}|^2$ is therefore at most $1/|S|$. Since $|\beta_A(i)|^2 \in [0,1]$, we must have at least one $|\beta_A|^2$ which is at most $1/|S|$. This tells us that there is some $i$ with $|\beta_A(i)| \le 1/\sqrt{|S|} \le 1/\sqrt{2}$. 
In this way, we have guaranteed the bound for all $\lambda_{U(A)} \in [0,(2-\sqrt{2})/(2C')]$. Note that this also shows that there are at most $\#AN$ such eigenvalues. Summing over $\lambda_{U(A)}(i) \in [0,(2-\sqrt{2})/(2C')]$ gives us: \begin{eqnarray*} \sum_{i :0 \le \lambda_{U(A)}(i) \le (2-\sqrt{2})/(2C')} e^{-2 n \lambda_{U(A)}(i)} \le \sum_{i :0 \le \lambda_{U(A)}(i) \le (2-\sqrt{2})/(2C')} |\beta_{A}(i)|^{n/C'}. \end{eqnarray*} Since $|\beta_{A}(i)| \le 1$ and $n/C' \ge 2\floor{n/(2C')}$, each term satisfies $|\beta_{A}(i)|^{n/C'} \le \beta_{A}(i)^{2\floor{n/(2C')}}$, and the even power lets us compare with the trace of $K^{2\floor{n/(2C')}}_{A,N}$: \begin{eqnarray*} \sum_{i : 0 \le \lambda_{U(A)}(i) \le (2-\sqrt{2})/(2C')} |\beta_{A}(i)|^{n/C'} \le \operatorname{Tr} (K_{A,N}^{2\floor{n/(2C')}}). \end{eqnarray*} Combining these yields: \begin{eqnarray*} \sum_{i :0 \le \lambda_{U(A)}(i) \le (2-\sqrt{2})/(2C')} e^{-2 n \lambda_{U(A)}(i)} \le \operatorname{Tr} (K_{A,N}^{2\floor{n/(2C')}}). \end{eqnarray*} We know that by its definition \begin{eqnarray*} \operatorname{Tr} (K_{A,N}^{2\floor{n/(2C')}}) &=& \sum_{g \in A,\, j = 1,\dots,N} p_{2\floor{n/(2C')}}(g,g) \\ &\le& \#A N p_{2\floor{n/(2C')}}(e,e). \end{eqnarray*} This gives us the result: \begin{eqnarray*} \sum_{i :0 \le \lambda_{U(A)}(i) \le (2-\sqrt{2})/(2C')} e^{-2 n \lambda_{U(A)}(i)} \le \#A N p_{2\floor{n/(2C')}}(e,e). \end{eqnarray*} If we want to simplify the notation on the left, we may set $B= 2C'/(2-\sqrt{2})$. This means $n/(2C') = n/(B(2-\sqrt{2}))$. Hence: \begin{eqnarray*} \sum_{i :0 \le \lambda_{U(A)}(i) \le 1/B} e^{-2 n \lambda_{U(A)}(i)} \le \#A N p_{2\floor{n/(B(2-\sqrt{2}))}}(e,e). \end{eqnarray*} \end{proof} \begin{theorem}\label{hlessp} For $t> 6r_1^2$ we get: \begin{eqnarray*} \sup_{y \in X} h_{t-3r_1^2}(y,y) \le C p_{2\floor{\frac{t}{2B \log |S|}}}(e,e) \end{eqnarray*} where $C = C_H \left(\frac{1}{\min_{y \in Y} \mu(B(y, \delta))} + \frac{\mu(Y)}{\min_{y \in Y}\mu(B(y, \delta))} C_1 2^{d/2}\right)$. \end{theorem} \begin{proof} We'll use these lemmas and F\o lner sequences to build this inequality. Recall Lemma \ref{htWbysupht} told us: \begin{eqnarray*} h_t^W(x,x) \ge C_H^{-1} \sup_{y \in X} h_{t-3r_1^2}(y,y) -\varepsilon_1 \end{eqnarray*} for all $x \in \{x \in W : d(x,\partial W)> at^{1/2} \}$ when $t > 6r_1^2$. \\ Set $T = \{g \in G : d_X(e,g) \le \sqrt{t} a + 10 R_0 \}$. \\ Then $AT = \{g \in G : g=hk$ for $h \in A, k \in T \}$. We'll apply this to $W=U(AT)$. \\ Note that $U(A) \subset \{x \in U(AT) : d(x,\partial U(AT))> at^{1/2} \}$. When we take the average over $U(A)$ we have: \begin{eqnarray*} \sup_{y \in X} h_{t-3r_1^2}(y,y) \le C_H \left(\Xint-_{U(A)} h_t^{U(AT)}(x,x)dx + \varepsilon_1 \right) \end{eqnarray*} From Lemma \ref{htWbylambdale1} we know how to bound the integral in terms of the $\lambda_W \le 1/B$: \begin{eqnarray*} \int_{U(A)} h_t^{U(AT)}(x,x) dx &\le& \int_{U(AT)} h_t^{U(AT)}(x,x) dx \\ &\le& \sum_{\lambda_{U(AT)}(i) \le 1/B} e^{-t \lambda_{U(AT)}(i)} + C_1 2^{d/2} \mu({U(AT)})e^{-t/(2B)}. \end{eqnarray*} Putting them together gives us: \begin{equation*} \begin{split} \sup_{y \in X} & \text{ }h_{t-3r_1^2}(y,y) \\ &\le C_H \left(\frac{1}{\mu({U(A)})} \sum_{\lambda_{U(AT)}(i) \le 1/B} e^{-t \lambda_{U(AT)}(i)} + \frac{\mu(U(AT))}{\mu(U(A))} C_1 2^{d/2} e^{-t/(2B)} + \varepsilon_1 \right). \end{split} \end{equation*} By Lemma \ref{htvsKAN} we have: \begin{eqnarray*} \sum_{i :0 \le \lambda_{U(AT)}(i) \le 1/B} e^{-2 n \lambda_{U(AT)}(i)} \le \#(AT) N p_{2\floor{\frac{n}{B(2-\sqrt{2})}}}(e,e). 
\end{eqnarray*} When we set $n=t/2$, this gives us: \begin{equation*} \begin{split} \sup_{y \in X} & \text{ } h_{t-3r_1^2}(y,y) \\ & \le C_H \left(\frac{\# (AT) N}{\mu(U(A))} p_{2\floor{\frac{t}{2B(2-\sqrt{2})}}}(e,e) + \frac{\mu(U(AT))}{\mu(U(A))} C_1 2^{d/2} e^{-t/(2B)} + \varepsilon_1 \right). \end{split} \end{equation*} On $G$, we can bound the probability of returning to the start from below: because $S=S^{-1}$, after moving $n$ steps, we have at least a $\frac{1}{|S|^n}$ chance of exactly retracing our path. \begin{eqnarray*} p_{2n}(e,e) \ge \frac{1}{|S|^n} = e^{-n\log |S|}. \end{eqnarray*} A more convenient time gives us \begin{eqnarray*} p_{2\floor{\frac{n}{2B \log |S|}}}(e,e) \ge e^{-n /(2B)}. \end{eqnarray*} When we place this into the inequality, we have: \begin{equation*} \begin{split} &\sup_{y \in X} h_{t-3r_1^2}(y,y) \\ &\le C_H \left(\frac{\#(AT) N}{\mu({U(A)})} p_{2\floor{\frac{t}{2B(2-\sqrt{2})}}}(e,e) + \frac{\mu(U(AT))}{\mu(U(A))}C_1 2^{d/2} p_{2\floor{\frac{t}{2B \log|S|}}}(e,e) + \varepsilon_1 \right) . \end{split} \end{equation*} We can use the fact that $p_t(e,e) \le p_s(e,e)$ whenever $t>s$, noting that, since $\log|S| \ge \log 2 > 2-\sqrt{2}$, both $2\floor{\frac{t}{2B(2-\sqrt{2})}}$ and $2\floor{\frac{t}{2 B \log|S|}}$ are at least $2\floor{\frac{t}{2B \log |S|}}$. \begin{eqnarray*} ...\le C_H \left( \left(\frac{\#(AT) N}{\mu({U(A)})} + \frac{\mu(U(AT))}{\mu(U(A))} C_1 2^{d/2}\right) p_{2\floor{\frac{t}{2B \log |S|}}}(e,e) + \varepsilon_1\right) . \end{eqnarray*} We take a F\o lner sequence for $G$, and set $A=F(i)$. We can use our volume estimates to find: \begin{eqnarray*} \frac{\#(AT) N}{\mu({U(A)})} \le \frac{\#(AT) N}{\min_{y \in Y} \mu(B(y, \delta)) N \# A} = \frac{\#(AT)}{\# A \min_{y \in Y} \mu(B(y, \delta))} \end{eqnarray*} and \begin{eqnarray*} \frac{\mu(U(AT))}{\mu(U(A))} \le \frac{\mu(Y) N \#(AT)}{\min_{y \in Y} \mu(B(y, \delta)) N \# A} = \frac{\#(AT) \mu(Y)}{\# A \min_{y \in Y} \mu(B(y, \delta))}. \end{eqnarray*} When we take the limit of $\frac{\#(AT)}{\# A} = \frac{\#(F(i)T)}{\#F(i)}$ as $i \rightarrow \infty$, the F\o lner property tells us it is $1$. This gives us: \begin{eqnarray*} \sup_{y \in X} h_{t-3r_1^2}(y,y) \le C_H \frac{1 + \mu(Y) C_1 2^{d/2}}{\min_{y \in Y} \mu(B(y, \delta))} p_{2\floor{\frac{t}{2B \log |S|}}}(e,e) + C_H \varepsilon_1. \end{eqnarray*} Now let $\varepsilon_1$ go to zero. This yields the comparison. \end{proof} We can combine these three results into a single theorem. \begin{theorem} Let $G$ be a finitely generated group and $X$ the associated complex. For times $t > 1$, we have the comparison \begin{eqnarray*} p_{2\ceil{t}}(e,e) \approx \sup_{x \in X} h_t(x,x). \end{eqnarray*} Note that by transitivity, this holds for the heat kernels on the skeletons as well. \end{theorem} \begin{proof} If $G$ is amenable, apply Theorems \ref{plessh} and \ref{hlessp}. If $G$ is nonamenable, apply Corollary \ref{nonam}. \end{proof} This theorem gives a comparison of heat kernel behavior at large times. It does not, however, tell you what that behavior is for a given group. Even though the proof tells you the asymptotic for nonamenable groups, it is not easy to determine amenability. For example, it is unknown whether Thompson's group $F$ is amenable or not. (See Belk \cite{Belk}.)
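As a sanity check in a case where everything is explicit, consider $G = \mathbb{Z}$ with $S = \{\pm 1\}$ and $X = \mathbb{R}$, where $p_{2n}(e,e) = \binom{2n}{n}2^{-2n} \sim (\pi n)^{-1/2}$ and the standard heat kernel satisfies $h_t(x,x) = (4\pi t)^{-1/2}$. The following Python sketch (our illustration with these standard normalizations, not part of the proof) computes both quantities and shows their ratio approaching the constant $2$, as the comparison $p_{2\ceil{t}}(e,e) \approx \sup_x h_t(x,x)$ predicts. \begin{verbatim}
import numpy as np

def p_return(n):
    # p_n(e,e) for the simple random walk on Z with S = {+1, -1},
    # computed by convolving the one-step distribution n times.
    dist = np.zeros(2 * n + 1)
    dist[n] = 1.0                      # start at the identity
    step = np.array([0.5, 0.0, 0.5])   # move +1 or -1 with probability 1/2
    for _ in range(n):
        dist = np.convolve(dist, step, mode="same")
    return dist[n]

for t in [10, 100, 1000]:
    p = p_return(2 * t)                # p_{2t}(e,e)
    h = 1.0 / np.sqrt(4 * np.pi * t)   # h_t(x,x) on the real line
    print(t, p / h)                    # ratio tends to the constant 2
\end{verbatim}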
\section{Introduction} \begin{figure*}[t] \includegraphics[width=\linewidth]{./figs/summarize_fig_2.pdf} \vspace{-.5em} \caption{\textbf{Realistic video simulation via geometry-aware composition for self-driving.} We propose a novel data-driven image manipulation approach that inserts dynamic objects into existing videos. The resulting synthetic video footage is \sm{highly realistic,} layout-aware, and geometrically consistent, allowing image simulation to scale to complex use cases.} \label{figure:pipeline} \vspace{-.5em} \end{figure*} Walking along an empty pavement on a silent Sunday morning, one can easily imagine how busy it could look during rush hour on a weekday, or how a parked car might look when driven \sm{on} a different street. % Humans are capable of recreating the experience of visually perceiving objects and scenes to generate new visual data in their minds. Such an ability allows us to formulate novel scenarios and synthesize events in our heads without \sm{experiencing them directly.} Researchers have devoted significant effort towards enhancing computers with the capability of creating pictures by replicating visual content~\cite{tewari_state_2020}. This brings immense value to many industries, such as film making, robot simulation, augmented reality, and teleconferencing. In the literature, two main paradigms exist: \textit{computer graphics} approaches and \textit{image editing} methods. Computer graphics models the image generation % process \sm{with physics, by first } creating a virtual 3D world and then \sm{mimicking} how light is transmitted within the world to produce a realistic scene rendering. To produce visually appealing results, physics-based rendering requires a significant amount of computing resources, costly manual asset creation, and physical modeling \cite{corona}. Images produced by existing real-time rendering engines~\cite{unreal, martinez2017beyond, dosovitskiy_carla_2017} still have a significant realism gap, reducing their impact in robot simulation and data augmentation for training. {Data-driven image editing} methods such as {image composition}~\cite{karsch2011rendering, lalonde2007photo, cong2020dovenet, dwibedi2017cut, bhattad2020cut} and generative image synthesis \cite{wang2016generative,yang_high-resolution_2016,wang_high-resolution_2017,chen_photographic_2017,johnson2018image,park2019semantic} have received significant attention over the past few years. % They focus on pushing realism through generative models trained from large-scale visual data. However, most of these efforts do not correspond to an underlying realistic 3D world, and as a consequence, the generated 2D content is not directly useful for applications such as 3D gaming and robot simulation. In this paper, we propose {\it GeoSim}, a realistic image manipulation framework that inserts dynamic objects into existing videos. GeoSim exploits a combination of both data-driven approaches and computer graphics to \sm{generate assets inexpensively while maintaining high visual quality through} physically grounded simulation. In particular, by leveraging low-cost bounding box annotations and sensor data captured by a self-driving fleet driving \sm{around} multiple U.S. cities, GeoSim builds a fully-textured, large-scale 3D asset bank. 
While self-driving data % is widely available ~\cite{kitti,argoverse, nutonomy, waymo}, it is non-trivial to automatically build these assets due to the sparsity of the 3D observations, occlusions, shadows, limited viewpoints, and lighting changes. Our asset reconstruction is robust to these challenges, as we ensure consistency \sm{across} % multiple observations in time and learn a strong shape prior to regularize our assets. GeoSim then exploits the 3D scene layout (from high-definition (HD) maps and LiDAR data) to add vehicles in plausible locations and make them behave realistically by considering the full scene. Finally, using this new 3D scene, GeoSim performs image-based rendering to properly handle occlusions, and neural network-based image in-painting to ensure the inserted object seamlessly blends in by filling holes, adjusting color inconsistencies due to lighting changes, and removing sharp boundaries. Using GeoSim, the resulting synthetic images and video footage are \sm{realistic,} dynamically plausible, and geometrically consistent. We showcase two important applications: long-range realistic video simulation across multiple camera sensors and synthetic labeled data generation for {training} self-driving perception algorithms. Our approach outperforms prior work \sm{on} both qualitative and quantitative realism metrics. We also see significant gains in perception performance when leveraging GeoSim images. These experiments suggest the potential of GeoSim for a plethora of applications, such as realistic safety verification, data augmentation, Sim2Real, augmented reality, and automatic video editing. \section{Related Work} \input{fig_architecture} \paragraph{Simulation for Robot Learning:} Sensor simulation has received wide attention in the literature~\cite{ros2016synthia,dosovitskiy_carla_2017,richter_playing_2016,alhaija_augmented_2017,savva2017minos,savva2019habitat,bousmalis2018using,coumans2016pybullet,xia_gibson_2018,kolve2017ai2,savva2019habitat,li2019putting} for its applications in training and evaluating robotic agents. Sensor simulation systems should be efficient and scalable in order to enable such applications. Many automatic approaches \cite{savva2017minos,savva2019habitat,coumans2016pybullet,xia_gibson_2018,kolve2017ai2} have been proposed to generate indoor environments. Unconstrained outdoor scenes such as the urban driving setting tackled here bring additional challenges due to the scale of the scene, weather, lighting, presence of fast moving objects, and large viewpoint changes arising from sensor motion. In the context of autonomous driving, simulation engines \cite{dosovitskiy_carla_2017,ros2016synthia,richter_playing_2016} based on rendered 3D models allow the combinatorial generation of scenarios with varying configurations of vehicle attributes, traffic, and weather conditions. However, these methods often have limited diversity in scene content due to the manual design of 3D assets and still have a Real2Sim gap. Data-driven sensor simulation offers a scalable alternative that can capture the complexity of the real world. Many methods \cite{li_aads_2019,fang_augmented_2019,Tallavajhula-2018-106471,yang_surfelgan_2020,amini_learning_2020,siva2020lidarsim} have been proposed to directly leverage real-world data for sensor simulation in the autonomous driving domain, typically by augmenting existing recorded data to generate corresponding sensor measurements for novel scene configurations. 
However, previous works either focus on LiDAR \cite{siva2020lidarsim,fang_augmented_2019,Tallavajhula-2018-106471}, rely on CAD model registration, constraining the set of dynamic objects that can be simulated \cite{li_aads_2019}, or require additional effort to scale to high-resolution images \cite{yang_surfelgan_2020}. In contrast, we combine data-driven simulation techniques with the image-based rendering techniques in simulation engines. This enables us to construct a scalable, geometrically consistent, and \sm{realistic} camera simulation system. \paragraph{Image Synthesis and Manipulation:} Image synthesis and manipulation methods offer another route to sensor simulation. % Existing work mainly focused on generating 2D images from intermediate representations including scene graphs~\cite{johnson2018image,hong2018inferring}, surface normal maps~\cite{wang2016generative}, semantic segmentations~\cite{isola2017image,zhu2017unpaired,chen_photographic_2017,wang_high-resolution_2017,qi_semi-parametric_2018,park2019semantic,mo2018instagan}, and images with different styles~\cite{karras2019style}. These methods create high-resolution images but with noticeable artifacts in texture and object shape. Rather than generating the full image in one shot, \cite{lin_st-gan_2018,hong_learning_2018,lee2018context} utilize a conditional image generator for scene manipulation. In particular, \cite{lin_st-gan_2018} proposed a spatial-transformer GAN that overlays the target objects on top of existing scene layouts by iteratively adjusting 2D affine transformations. \cite{hong_learning_2018} introduced a hierarchical image generation pipeline that is capable of inserting and removing one object at a time. {This improves realism, but a purely network-based image synthesis approach has difficulty handling complex physics such as lighting changes. \cite{bhattad2020cut} attempts to combine data-driven approaches with graphics knowledge, using an image-based neural renderer and image decomposition to improve the synthetic result.} {Our work builds on this direction of leveraging graphics with real world data.} {We perform image-based rendering and neural in-painting to adjust for differences between the original image and the image texture of the inserted actor.} Furthermore, GeoSim is \sm{3D-layout-aware, allowing for controllable and realistic scene modification.} \paragraph{\shivam{Video Synthesis and Manipulation:}} \sm{Image simulation alone is insufficient for generating new scenarios with realistic video. One way prior \shivam{works} \shivam{have} extended image synthesis approaches to video generation is by including the past and adding regularization to ensure temporal consistency for realistic object \yun{motion}. Conditional video generation methods \cite{wang2018vid2vid,mallya2020world, chan2019everybody, gafni2019vid2game} take \shivam{sequential} semantic masks, \shivam{depth maps or trajectory pose data as inputs}, which can then be semantically modified to generate the current video frame. \cite{ehrhardt2020relate} performs 2D-aware image composition via generative modeling of objects and learned dynamics. Automatic video manipulation approaches \cite{lee2019inserting, ibrahim2020inserting} \shivam{insert foreground objects or object videos into existing videos} in a seamless manner. Unlike most prior work, our image-composition approach is {3D-layout-aware} and handles occlusions. 
Thus, by combining our image composition with automatic trajectory generation methods \cite{schulz_interaction-aware_2018,suo2021trafficsim}, we easily extend to automatic and scalable controllable video simulation with high realism.} \paragraph{3D Reconstruction and View Synthesis:} Our neural network-based 3D asset creation step reconstructs 3D shape from camera imagery and LiDAR in order to synthesize novel views of dynamic objects. View synthesis and 3D reconstruction are well-studied open problems \cite{tewari_state_2020}, with varying approaches to the relationship between geometry and appearance and to possible geometric representations. Image-based rendering methods~\cite{DeepBlending2018} focus on combining 2D textures to directly render novel views. Appearance flow-based approaches~\cite{zhou_view_2016,park_transformation-grounded_2017,ferrari_multi-view_2018} seek to learn unconstrained pixel-level displacements, whereas~\cite{chen_monocular_2019,zhu_visual_2018} encode geometric information in latent representations and~\cite{kanazawa_learning_2018} takes advantage of strong shape priors. Recently, advancements in differentiable rendering~\cite{liu_soft_2019,kato_neural_2017} and open-source libraries have enabled classical graphics rendering to serve as an optimizable module, allowing for better learning of 3D and visual representations~\cite{chen2019dibrender, zhang2020image}. \section{Geometry-Aware Image Simulation} \label{sec:sim} Here we describe our image simulation by composition approach that places novel objects into an existing 3D scene and generates a \sm{high-quality} video sequence of the composition. Our approach takes as input camera video footage, LiDAR point clouds, and an HD map in the form of a lane graph, and automatically outputs a video with novel objects inserted into the scene. Note that the input sensory data and HD maps we employ are readily available in most self-driving platforms, the application domain we tackle in this paper. Importantly, our simulation takes into account geometric occlusions by other actors and the background, the plausibility of locations and motions, and the interactions with other dynamic agents, thus avoiding collisions for the newly inserted objects. Towards this goal, we first infer the location of all objects in the scene by performing 3D object detection and tracking \cite{liang2020pnpnet}. % For each new object to be inserted we select where to place it as well as which asset to use based on the HD map and the existing detected traffic. We then utilize an intelligent traffic model for our newly placed object such that its motion is realistic, takes into account the interactions with other actors, and avoids collisions. The output of this process defines the new scenario to be rendered. We then use a novel-view rendering with 3D occlusion reasoning w.r.t.~all elements in the scene to create the appearance of the novel objects in the new image. Finally, we utilize a neural network to fill in the boundary of the inserted objects, create any missing texture, and handle inconsistent lighting. Fig.~\ref{figure:pipeline} outlines our approach. 
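To make the above flow concrete, the following is a minimal sketch of one simulation step in Python-style pseudocode. All names accessed through \texttt{m} (e.g., \texttt{detect\_and\_track}, \texttt{sample\_placement}) are hypothetical placeholders for the modules described in Secs.~\ref{sec:placement} and \ref{sec:warping}, not a released API. \begin{verbatim}
# Illustrative sketch of one GeoSim simulation step; every attribute of
# `m` is a hypothetical module standing in for a component in the text.
def simulate_frame(image, lidar, hd_map, asset_bank, m):
    # 1. Infer the current scene layout from the LiDAR sweep.
    actors = m.detect_and_track(lidar)        # 3D boxes of existing traffic

    # 2. Sample a placement (x, y, theta) on the lane graph, rejecting
    #    samples that collide or fall outside the camera field of view.
    pose = m.sample_placement(hd_map, actors)

    # 3. Retrieve an asset observed from a similar viewpoint and distance,
    #    then advance its state with the fitted intelligent driver model.
    asset = m.retrieve_asset(asset_bank, pose)
    pose = m.idm_update(pose, actors, hd_map)

    # 4. Render the asset at the target pose, resolve occlusions against a
    #    completed depth map, and blend with the synthesis network.
    patch, mask = m.novel_view_render(asset, pose)
    depth = m.depth_completion(image, lidar)
    patch, mask = m.occlusion_reasoning(patch, mask, depth)
    return m.synthesis_network(image, patch, mask)
\end{verbatim}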
\begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figs/sections/occlusion_inpainting.pdf} \\ \vspace{-.5em} \caption{\textbf{Geometry-aware composition with occlusion reasoning followed by an image synthesis module.}} \label{fig:occlusion_inpainting} \vspace{-.5em} \end{figure*} \subsection{Scenario Generation} \label{sec:placement} We want to place new objects in existing images such that they are plausible in terms of their scale, location, orientation and motion. Towards this goal, we design a 3D sampling procedure, which takes advantage of the priors we have about how vehicles behave in our cities. Note that \sm{it is difficult to achieve similar levels of realism with 2D object insertion.} We thus exploit HD maps that contain the location of the lanes in bird's eye view (BEV), and parameterize the object placement as a tuple $(x,y,\theta)$ defining the object center and orientation in BEV, which we later convert to a 6DoF pose using the local ground elevation. Note that our object samples should have realistic physical interactions with existing objects, respect the flow of traffic, and be visible in the camera's field of view. To achieve this, we randomly sample a placement $(x,y)$ from the lane regions lying within the camera's field of view and retrieve the orientation from the lane. We reject all samples that result in collision with other actors or background objects. The aforementioned process provides the placement of the object in the initial frame. To simulate plausible placements over time for video simulation, we use the Intelligent Driver Model (IDM) \sm{\cite{schulz_interaction-aware_2018, suo2021trafficsim}} fitted to a kinematic model following \cite{Gonzales2016AutonomousDW} to update the simulated object's state for realistic interactions with surrounding traffic. Fig.~\ref{fig:fig_placement} depicts the full procedure of placement and kinematics simulation. So far we have selected where to place an object and how it is going to move, but we still need to select which object to place. In order to minimize the artifacts when rendering our assets, we propose to retrieve objects from the asset bank that were viewed with similar viewpoints and distance to the camera in the original footage. The former avoids the need to deal with large unseen object regions, while the latter avoids utilizing assets that have been captured at lower resolution. % Please refer to the \shivam{supp.} for the specific scoring criteria. Objects are then sampled (as opposed to a hard max) according to a categorical distribution weighted by their inverse score. Once a segment is retrieved for a desired placement, we perform collision checking using the retrieved object shape to ensure that the placement is valid. % \subsection{Occlusion-Aware Neural Rendering} \label{sec:warping} Now that we have selected a source object and its corresponding camera image based on the segment retrieval mechanism defined above, we proceed to render this source object into the target scene. Since the object's target pose might differ from the original observed poses, we cannot simply paste the image segment from the source to the target. Thus we propose to utilize the asset's 3D shape to warp the source to the novel target view. \paragraph{Novel-view Warping:} Let $\mathbf{M}$ be the selected object's 3D mesh, $\mathbf{I}_{\textrm{s}}$ be the source object's camera image, and $\mathbf{P_{\textrm{s}}}/\mathbf{P_{\textrm{t}}}$ be the source/target camera matrices. 
We first render $\mathbf{M}$ at the selected target viewpoint to generate the corresponding target depth map, $\mathbf{D_{\textrm{t}}}$. Then using the rendered depth map and source camera image $\mathbf{I}_{\textrm{s}}$, we generate the object's 2D texture map using the inverse warping operation \cite{lsiTulsiani18,jaderberg_spatial_2016} as: \begin{align*} \mathbf{I_{\textrm{t}}} &= \mathbf{I_{\textrm{s}}}(\pi(\pi^{-1}(\mathbf{D_{\textrm{t}}}, \mathbf{P_{\textrm{t}}}), \mathbf{P}_{\textrm{s}})) \text{\ , where \ \ \ } \mathbf{D_{\textrm{t}}} = \psi(\mathbf{M}, \mathbf{P_{\textrm{t}}}), \end{align*} $\psi$ is a differentiable neural renderer~\cite{chen2019dibrender} that produces a depth image given the 3D mesh $\mathbf{M}$ and camera matrix $\mathbf{P}$; $\pi$ is the perspective projection and $\pi^{-1}$ is the inverse projection that takes the depth image and camera matrix as input and outputs the 3D points. \paragraph{Shadow Generation:} Inserting an object into a scene will not \sm{only change} the pixels where the object is present, but can also change the rest of the scene (i.e. shadows and ambient occlusion). We improve the perceptual quality of the image by approximating these effects with image based rendering. \yun{While recent works~\cite{wang_people_2020,zhang2019shadowgan} learn shadow synthesis from scene context with a neural network, we render shadows with a graphics engine as geometry is available.} To estimate the shadow cast by each inserted object, we construct a virtual scene with the inserted object and a ground plane and exploit image-based rendering~\cite{debevec2008rendering}, where the environment light comes from a real-world HDRI. We render the scene with and without the inserted objects, and add the shadow by blending in the background image intensities with the ratio of the two rendered images' intensities. \yun{As lighting estimation by manually waving a shadow-casting stick~\cite{chuang2003shadow} is not applicable,} we select a cloudy HDRI to cast shadows. In practice, we find this produces reasonable results. \yun{Please refer to the \shivam{supp.} for illustration. } \paragraph{Occlusion Reasoning:} An inserted object must respect occlusions from the existing scene elements. Vegetation, fences, and other dynamic objects, for example, may have irregular or thin boundaries, complicating occlusion reasoning. We employ a simple strategy to determine occlusions of the inserted objects and their shadow in the target scene by comparing their depth w.r.t.~the depth map of the existing 3D scene \yun{ (see Fig.~\ref{fig:occlusion_inpainting})}. To achieve this, we first estimate the target image's dense depth map through a depth completion network~\cite{Chen_2019_ICCV}. The input is the RGB image and a sparse depth map acquired by projecting the LiDAR sweep onto the image. Using the rendered depth of the object, the occlusion mask is then computed by checking, for each object pixel, whether the target image's depth is smaller than the corresponding object pixel's depth. \paragraph{Post-Composition Synthesis:} After occlusion reasoning, the rendered image may still look unrealistic, as the inserted segment may have inconsistent illumination and color-balancing w.r.t.~the target scene, discrepancies at the boundaries, and missing regions that were not available in the source view. To solve these issues, we leverage an image synthesis network % to naturally blend the source segment to the target scene (see Fig.~\ref{fig:occlusion_inpainting}). 
Our network takes the target background image $\mathbf{B}_t$, the \yun{rendered target object} $\mathbf{I}_t$, as well as the object binary silhouette $\mathbf{S}_t$ as input, and outputs the final image that naturally composites the background and rendered object. Our synthesis network architecture is similar to \cite{yu_free-form_2019}, which is a generative image in-painting network % except that we take the rendered object mask as an additional input. % Our network is trained using images with instance segmentation masks inferred by \cite{kirillov2020pointrend} in the target scene. Data augmentation, including random occlusion, color jittering, and random contrast and saturation, is applied to mimic the differences among real-world images. Two loss functions are adopted: a perceptual loss \cite{DBLP:journals/corr/JohnsonAL16} to ensure the generated output's fidelity, and a GAN loss to boost the realism of the in-painted region as well as the lighting consistency. Please refer to \shivam{supp.} for more details. \section{Realistic 3D Assets Creation} \label{sec:assets} In this paper we propose a novel image manipulation approach that inserts dynamic objects into existing video footage and generates a \sm{high-quality} video of the augmented scene that is geometrically and semantically consistent with the scene. Key to the success of such \yun{an} approach is the availability of realistic 3D assets that contain accurate pose, shape and texture. Here we argue that rather than using artists to create these assets, we can leverage data captured by self-driving vehicles to reconstruct the objects around us. This is a much more scalable approach, as many self-driving datasets are available \cite{argoverse,waymo,kitti}% , each containing many thousands of unique assets that could potentially be reconstructed. { In Sec.~\ref{sec:multi_sensor_assets} we first describe how we leverage both LiDAR and camera sensor data from multiple viewpoints to generate realistic 3D vehicle assets using an asset reconstruction network. Sec.~\ref{sec:assets_learning} describes our self-supervised learning procedure to train the network.} \subsection{Multi-Sensor 3D Asset Reconstruction} \label{sec:multi_sensor_assets} Reconstructing 3D dynamic objects in the wild % is challenging: dynamic objects are often partially observed due to the sparsity of the sensor observations and occlusions, they are seen from a {limited} set of viewpoints, and they appear distorted due to lighting and shadows. To tackle these challenges, we propose a novel, learning-based, multi-view, multi-sensor reconstruction approach for 3D dynamic objects that does not require any ground-truth 3D shape for training. Instead, we exploit weak annotations in the form of 3D bounding boxes, which % are readily available in most self-driving datasets. \begin{figure*}[t] \centering \includegraphics[width=\textwidth,trim={0 8cm 0 5cm},clip]{figs/sections/imagesim_sample_retrieval.pdf} \\ \caption{ {\textbf{3D-aware object placement, segment retrieval, and temporal simulation.}} } \label{fig:fig_placement} \vspace{-.5em} \end{figure*} More formally, let $\{\mathbf{B}_{i,j}\}_{\forall j}$ be the set of 3D bounding boxes where the $i$-th object is visible over $j$ views in the recorded snippet. 
Let $\{\mathbf{I}_{i,j}\}_{\forall j}$ be the {associated} set of {cropped} images, {and } $\{\mathbf{X}_{i,j}\}_{\forall j}$ be the associated set of LiDAR points recorded inside $\{\mathbf{B}_{i,j}\}$, transformed to a single canonical frame, and let $\mathbf{X}_{i}$ be the set of {aggregated} LiDAR points across all views. Our 3D reconstruction network then processes the LiDAR points and image inputs in two streams that are later fused to produce the shape of the object. We refer the reader to Fig.~\ref{fig:architecture_assets} for an illustration. {We represent the shape as} a 3D mesh $\mathbf{M}_i = \{ \mathbf{V}_i, \mathbf{F}_i \}$ where $\mathbf{V}_i$ and $\mathbf{F}_i$ are the vertices and faces of the mesh, respectively. In addition, we also store $\{\mathbf{I}_{i,j}, \mathbf S_{i,j}\}_{\forall j}$ to encode object appearance, where $\mathbf S_{i,j}$ is the extracted object's silhouette obtained from a pre-trained instance segmentation model~\cite{kirillov2020pointrend}. We use this later on to perform novel-view warping. \paragraph{Network Architecture:} Our backbone consists of two submodules. A convolutional network takes each cropped camera image as input and outputs a corresponding feature map. The feature maps from multiple cameras % are then aggregated into a one-dimensional latent representation using max-pooling. A similar latent representation is extracted from the input LiDAR point cloud using a PointNet network \cite{qi2016pointnet}. The LiDAR and camera features are then passed through an MLP to output the final shape. \yun{Instead of employing a learned PCA shape space from CAD models to predict the shape of cars \cite{KunduLR18},} we \yun{take inspiration from \cite{kanazawa_learning_2018} and} parameterize the 3D shape as a category-specific \sm{canonical} mean shape \sm{with per-vertex deformations.} {This parameterization} encodes a categorical prior and ensures the completeness of the shape under partial observations. \subsection{Self-Supervised Learning} \label{sec:assets_learning} Note that we do not have supervision of the shape. We thus train our approach end-to-end in a {self-supervised} manner to obtain the parameters of the reconstruction network and the mean shape. % Our training objective encodes the agreement between the 3D shape and the camera and LiDAR observations, as well as a regularization term. % \begin{align*} \ell_{\textrm{total}} = \sum_i \{ \ell_{\textrm{sil}}(\mathbf{M}_i; \mathbf{P}_i, \mathbf{I}_i) \;+\; \ell_{\textrm{lidar}}(\mathbf{M}_i; \mathbf{X}_i) \;+\;\ell_{\textrm{reg}} (\mathbf{M}_i) \} \end{align*} where $i$ {ranges over all the training objects. } The \emph{silhouette loss} measures the consistency between the 2D silhouette (automatically generated using the state-of-the-art object segmentation method PointRend \cite{kirillov2020pointrend}) and the silhouette of the rendered 3D shape. \[\ell_{\textrm{sil}}(\mathbf{M}_i; \mathbf{P}_i, \mathbf{I}_i) = \sum_j\norm{\mathbf{S}_{i,j} - \tau(\mathbf{M}_{i}, \mathbf{P}_{i,j}) }_{2}^2\] where $\mathbf{S_{i,j}} \in \mathbb{R}^{ H \times W}$ is the 2D silhouette{, $j$ ranges over multiple views}, and $\tau(\mathbf{M}, \mathbf{P})$ is a differentiable neural rendering operator \cite{chen2019dibrender} that renders a differentiable mask on the camera image given a projection matrix $\mathbf{P}$. 
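As a concrete illustration, this term can be written in a few lines given any differentiable silhouette renderer. The sketch below is illustrative only: it assumes a hypothetical \texttt{render\_silhouette} callable (e.g., in the style of \cite{chen2019dibrender}) and PyTorch tensors. \begin{verbatim}
import torch

def silhouette_loss(mesh, projections, target_masks, render_silhouette):
    """Sketch of the multi-view silhouette consistency term (assumed API).

    mesh:              predicted 3D mesh M_i (vertices, faces)
    projections:       camera matrices P_{i,j}, one per view j
    target_masks:      H x W silhouettes S_{i,j} from instance segmentation
    render_silhouette: differentiable renderer tau(M, P) -> H x W mask
    """
    loss = torch.zeros(())
    for P, S in zip(projections, target_masks):
        rendered = render_silhouette(mesh, P)         # differentiable mask
        loss = loss + torch.sum((S - rendered) ** 2)  # squared L2 per view
    return loss
\end{verbatim}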
% The \emph{LiDAR loss} represents the consistency between the {accumulated} LiDAR point cloud and the mesh vertices, defined as the asymmetric Chamfer distance \[\ell_{\textrm{lidar}}(\mathbf{M_i}, \mathbf{X_i}) = \sum_{\mathbf{x} \in \mathbf{X}_i} \min_{\mathbf{v} \in \mathbf{V}_i} \norm{\mathbf{x} - \mathbf{v}}_{2}^2\] In addition, we minimize a set of \emph{regularizers} to enforce prior knowledge over the {predicted} 3D shape, namely local smoothness on the vertices as well as normals. This includes 1) a Laplacian regularization which preserves local geometry and prevents intersecting faces; 2) mesh normal regularization which enforces smoothness of local surfaces; 3) edge regularization which penalizes long edges. Please refer to \shivam{supp.} material for details. \section{Experimental Evaluation} In this section we first introduce our experimental setting. We then compare GeoSim against a comprehensive set of image simulation baselines in visual realism through perceptual quality scores and human A/B tests, and in downstream tasks such as semantic segmentation. We also showcase our method generating multi-camera and temporally consistent video simulation. While our method can be adapted to handle most rigid objects, in our experiments we showcase vehicles, the most relevant objects in self-driving. \subsection{Experimental Details} \label{sec:exp_setting} {We utilize two large-scale self-driving datasets (UrbanData and Argoverse \cite{argoverse}) to showcase GeoSim. \paragraph{UrbanData:} We collected a real-world dataset by having a fleet of self-driving cars drive in two major cities in North America. % We labeled 16,500 snippets, where each snippet contains 25 seconds (\textasciitilde{}250 frames, sampled at 10Hz) of video with 7 cameras, a 64-beam LiDAR sensor, and HD maps. We use 4500 for reconstruction and synthesis network training, 7000 for depth completion training, and 5000 for perceptual quality and downstream evaluation. Please see supp. for the full breakdown. \paragraph{Argoverse:} {We also evaluate on the Argoverse training split, which contains 65 snippets from 2 different cities. We use the provided HD maps for vehicle placement. We directly adopt the 3D assets built from UrbanData, as Argoverse is too small for diverse asset creation. We train our image synthesis network on Argoverse, where \sm{80k} frames \yun{are} sampled for training and \sm{16k} are \yun{sampled} for evaluation. \paragraph{Asset Bank Creation:} We automatically created a large object bank of \textasciitilde{}8000 vehicles from camera images, LiDAR data, and 3D bounding boxes using our 3D reconstruction network on UrbanData. Each successfully reconstructed object is registered in our 3D asset bank, with its 1) 3D mesh; 2) images; and 3) object pose in ego-vehicle-centric coordinates. We use a pre-trained instance segmentation model to get the inferred instance mask~\cite{kirillov2020pointrend}, a LiDAR detector \cite{liang2020pnpnet} to acquire other actors' bounding boxes for collision avoidance (Sec. \ref{sec:placement}), and a depth completion network~\cite{Chen_2019_ICCV} to get dense depth for occlusion reasoning (Sec. \ref{sec:warping}). % \paragraph{Baselines:} \label{sec:exp_baselines} We compare our method against several deep-learning-based end-to-end 2D image synthesis and augmentation baselines~\cite{lee2018context,wang_high-resolution_2017,hong_learning_2018, park2019semantic}. 
Unlike GeoSim, these methods cannot perform placement directly and require an input mask based on the \yun{object's} shape and pose that \yun{denotes} the area to synthesize. We therefore use \cite{lee2018context} to insert object instances at the semantic level in a background semantic image. We then generate high-resolution images from this augmented scene representation with three different approaches: (1) Holistic image generation (\textbf{``SPADE''}): we use the state-of-the-art conditional image generation model SPADE~\cite{park2019semantic} to generate the entire image given the semantic mask. (2) Retrieval-based generation (\textbf{``Cut-Paste''}): given the new object's 2D mask, we retrieve the most similar example from a bank of 2D object images. The similarity is defined using semantic mask IOU. The rest of the background comes from the corresponding real image; and (3) Guided semantic image editing (\textbf{``Guided-Editing''}): we use \cite{hong_learning_2018} to in-paint the tight bounding box region of the added object. Additionally, we compare against a graphics-based CAD model insertion baseline (\textbf{``CAD''}), in the spirit of \cite{alhaija2018augmented} with the following differences: 1) we use our 3D placement in order to produce more realistic layout-aware insertion; 2) unlike the original work, we do not have environment lighting \sm{maps} and \sm{instead} use an HDRI captured on a cloudy day. } \subsection{Perceptual Quality Evaluation} \label{sec:exp_realism} \input{figure_baselines_argo.tex} \paragraph{Human Study:} To verify the realism of our approach, we conduct a human A/B test, where we show a pair of images generated from different approaches on the same background image, one from GeoSim and another one from a competing algorithm. We then ask the human judges to click the one they believe is more realistic. In total, 13 human judges participated and labeled \textasciitilde{} 1500 image pairs. % Tab.~\ref{tab:baseline} shows the human preference score for each algorithm, which measures the percentage of participants who prefer our GeoSim results over each baseline method. Results on Argoverse are presented in Tab.~\ref{tab:argo}. The A/B test confirms that our method produces drastically more realistic images than the baselines. \yun{The maximum $p$-value in the A/B test is 1.64e-18, demonstrating statistical significance.} % Please see the detailed A/B test interface and instructions in supp. % \paragraph{Perceptual Quality Score:} We further use the Fr\'echet Inception Distance (FID)~\cite{FID} between the synthesized images and the ground-truth images as an automatic measure of image quality. We report the FID on the full image {for GeoSim and the baselines} in Tab.~\ref{tab:baseline}. Our method significantly outperforms all competing methods on FID. \paragraph{Qualitative Comparison:} Fig.~\ref{fig:qual-baselines} \shivam{compares} simulated images. Note that GeoSim is significantly more realistic than the baselines. While one can easily and quickly detect the added object in other methods due to unrealistic generation \shivam{with} smeared cars ({``SPADE''}, {``Guided-Editing''}), or geometrically invalid results ({``Cut-Paste''}), or unrealistic appearance ({``CAD''}), one must look closely at {GeoSim} images to distinguish the added objects from the real ones. 
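For reference, the FID reported above is the Fr\'echet distance between Gaussians fitted to Inception activations of the real and simulated image sets. The snippet below is a generic reference sketch of that computation (our illustration; it is not the exact evaluation code used for the numbers in this paper). \begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    # Frechet Inception Distance between two sets of Inception
    # activations, each an (n_images, feature_dim) array.
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b).real   # matrix square root
    return np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2 * covmean)
\end{verbatim}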
\shivam{In Fig.~\ref{fig:qual-baselines-argo}, we show qualitative examples on Argoverse, where GeoSim obtains very high visual quality.} This demonstrates GeoSim's potential to generalize across datasets. \paragraph{Effect of Rendering Approach:} We evaluate the importance of using the hybrid rendering module proposed in our method, compared to using solely physics-based rendering or 2D synthesis (with 3D placement held constant across all approaches). As shown in Tab.~\ref{tab:ablation}, our proposed \yun{geometry-aware} synthesis significantly outperforms all other approaches on human scores. Additionally, enhancing hybrid-rendering with shadows significantly boosts the realism for humans, but such improvements are not reflected in the FID score. This suggests there still exists a gap between computational perceptual quality measurements and humans' criteria. \sm{Please see supp. for ablation of other GeoSim components.} \paragraph{Video Simulation:} We showcase in the supp. video GeoSim's ability to simulate \sm{highly realistic and} temporally consistent video for multiple cameras. \paragraph{Failure Cases:} \yun{ While most GeoSim-simulated images \sm{are high-quality, there is room for improvement.} We find four major failure cases: (1) incorrect occlusion relationships in a complex scene, (2) irregular reconstructed mesh, (3) inaccurate object pose, usually caused by map error, and (4) illumination failure \shivam{due to} illumination differences between the rendered segment and the target scene. We also notice blank pixel artifacts in long-range video simulation, which \shivam{are caused by inverse warping textures from source viewpoints which are \yun{far} from the target viewpoint.} Please refer to the supp. for qualitative examples. } \begin{table}[] \centering \begin{tabular}{ccc} \hline \specialrule{.2em}{.1em}{.1em} Method & Human Score \shivam{(\%)} & FID \\ \hline SPADE~\cite{park2019semantic} & 99.3 & 43.2 \\ Guided Editing~\cite{hong_learning_2018} & 94.3 & 20.3 \\ Cut-Paste~\cite{dwibedi2017cut} & 98.5 & 22.1 \\ CAD~\cite{alhaija_augmented_2017} & 94.3 & 17.3 \\ GeoSim & \textbf{-} & \textbf{14.3} \\\hline \end{tabular} \vspace{-0.25em} \caption{\textbf{Perceptual quality evaluation.} Human score: \shivam{\%} of participants who \shivam{prefer} our GeoSim results over baseline.} \label{tab:baseline} \end{table} \begin{table}[] \scalebox{1}{ \centering \begin{tabular}{ccccc} \hline \specialrule{.2em}{.1em}{.1em} Approach & Shadow & Human Score \shivam{(\%)} & FID \\ \hline Physics & Yes & 94.2 & 17.3 \\ 2D Synthesis & - & 75.7 & \textbf{13.7} \\ Geo Synthesis & No & 71.9 & \textbf{13.7} \\ Geo Synthesis & Yes & \textbf{-} & 14.3 \\ \hline \end{tabular}} \label{tab:ablation_table} \vspace{-0.25em} \caption{\textbf{Ablation on rendering options for GeoSim.} Human score: \shivam{\%} of participants who \shivam{prefer} our GeoSim results over baseline. \label{tab:ablation}} \vspace{-.25em} \end{table} \begin{table}[] \centering \begin{tabular}{ccc} \hline \specialrule{.2em}{.1em}{.1em} Method & Human Score \yun{(\%)} & FID \\ \hline CAD & 84.0 & 28.3 \\ GeoSim & -& \textbf{24.5} \\ \hline \end{tabular} \vspace{-.25em} \caption{\textbf{Results on Argoverse}. Human score: \% of participants who prefer our GeoSim results over baseline. 
} \label{tab:argo} \end{table} \begin{table}[] \centering \begin{tabular}{ccccc} \hline \specialrule{.2em}{.1em}{.1em} Method & \multicolumn{2}{c}{PSPNet~\cite{zhao2017pspnet}} & \multicolumn{2}{c}{DeepLabv3~\cite{chen2017rethinking}} \\ & mIOU & carIOU & mIOU & carIOU \\ \hline Real & 93.5 & 87.8 & 94.0 & 88.7 \\ Real+GeoSim & \textbf{95.3} & \textbf{91.2} & \textbf{94.2} & \textbf{89.2} \\ \hline \end{tabular} \vspace{-.25em} \caption{\textbf{Sim2Real on semantic segmentation.} } \label{tab:sim2real} \end{table} \input{figure_dataaug.tex} \subsection{Downstream Perception Task} \label{sec:exp_real2sim} We now investigate data augmentation, where we use labeled real data combined with GeoSim to get performance gains, without the cost of large-scale annotations (as seen in Fig.~\ref{fig:dataug}).} We first train a segmentation model on labeled real data with around 2000 images. We then use GeoSim to augment these images with inserted vehicles, obtaining 9879 additional training examples in total. We re-train the segmentation model on both real and augmented data for the same number of iterations. We evaluate the performance on real data and report the results in Tab.~\ref{tab:sim2real}. With these additional training images, we can further boost perception performance by $3.4\%$ (or $0.4\%$ on DeepLabv3~\cite{chen2017rethinking}) for the car category and $1.8\%$ (or $0.2\%$ on DeepLabv3~\cite{chen2017rethinking}) for overall mIOU on PSPNet~\cite{zhao2017pspnet}. Importantly, we show consistent improvements across two segmentation models. % \section{Conclusion} In this work we presented a novel geometry-guided simulation procedure for flexible generation and rendering of synthetic scenes. Not only is our approach the first of its kind to fully take into consideration physical realism for dynamic object placement into images, it also bypasses the need for manual 3D asset creation and achieves greater visual realism than competing alternatives. % Moreover, we demonstrated improvements in downstream tasks through applications of our technique to \yun{semantic segmentation}. There are many exciting follow-up directions opened up by this work such as sim2real, autonomous system evaluation, video editing, {etc}. % and we look forward to future extensions of \textsc{GeoSim}{}. \subsection{A/B interface and instructions} As discussed in Section 5.2 of the main paper, we performed a human study to demonstrate that GeoSim images appear more realistic compared to other baseline methods. For interface simplicity, ease of annotation, and to mitigate user bias (all the baselines we compare against use the same object proposal method), we performed pairwise comparisons instead of ranking. An example interface is shown in Fig. \ref{fig:ab_interface}. Each user would be provided two images, one generated with GeoSim and one generated with one of the baseline methods. Each image would be assigned with equal probability to the top or bottom location. Both the baseline and GeoSim would receive the same input real image and semantic segmentation. Based on the method and placement procedure, an object would be added to the scene, not necessarily in the same location (as seen in Fig. \ref{fig:ab_interface}). We asked 16 users who are familiar with self-driving but who had limited or no knowledge of image simulation or of our method to select which image they prefer. On average, users annotated approximately 120 images, for a total of $\sim$1900 images. 
Here are the detailed instructions we provided along with each query: \begin{quote} For each pair of images, please select the more realistic of the two by selecting either {TOP or BOTTOM} as the image label. Make sure to consider all relevant aspects of realism including but not limited to: visual appearance, lighting, shadows, relationship between elements within the image (ie occlusions), consistency in color, weather conditions, positioning on the road, and traffic regulations. We recommend clicking either on the images or on the top left blue arrow button to resize both images to fit the window. Click "next task" to move on. Note you will not be able to go back to a previous image, and the "Finish" button has no effect until all examples have been labeled, so there is no risk in accidentally clicking it. You can use the keyboard shortcuts 1,2 for TOP, BOTTOM, respectively. \end{quote} { We compute the success rate, i.e., how often our method is preferred over a specific baseline, as: $\frac{\textrm{\# of times GeoSim selected}}{\textrm{\# of pairs with GeoSim and baseline}}$. {We perform one-tailed binomial testing with the null hypothesis that GeoSim is not better than the baseline in over 50\% of cases. The maximum $p$-value across all human evaluations} % is 1.64e-18 (when GeoSim was preferred in 211/279 pairs in the 2D synthesis ablation), indicating statistical significance. } \begin{figure*}[t] \includegraphics[width=\linewidth,trim={0 3cm 0 0},clip]{./figs/ab_interface.jpg} \vspace{-1em} \caption{\textbf{A/B test user interface}. Users must select which image (``TOP'' or ``BOTTOM'') they consider most realistic. \label{fig:ab_interface}} \end{figure*} \subsection{Ablation Analysis} We also conduct a qualitative ablation on three key components in post-processing: occlusion reasoning, shadow generation, and the synthesis network (synnet). We compare GeoSim results against those with the selected component removed. As seen in Fig. \ref{fig:qual-abl}, with any one of these components removed, the realism of the synthesis results drops significantly. Without occlusion reasoning, the synthesized vehicles fail to conform to the existing scene elements. Without shadows, the synthesized vehicles seem to float off the ground. Without the synthesis network, the synthesized vehicles show inconsistent illumination and color balancing w.r.t.~the target scene, as well as discrepancies at the boundaries. Please zoom in to see the details. { In addition, we also show lane map and synthesis network ablations. Specifically, Fig.~\ref{fig:lanemap} shows results of GeoSim using random sampling instead of the lane map, where we uniformly sample empty ground locations according to the LiDAR sensor data and draw uniform orientations. Random sampling generates some interesting and useful edge cases, but it is not temporally consistent, making it not amenable to video simulation. The A/B test result shows that humans prefer our GeoSim 95.5\% of the time. As for human scores % for GeoSim without the image synthesis network, users overwhelmingly (95.0\%) prefer the full GeoSim. } \section{Qualitative Results} \subsection{Visual Comparisons} In addition to the qualitative results shown in Fig.~5 in the paper, we further showcase more qualitative comparisons among the previously-discussed image simulation baselines in Fig.~\ref{fig:qual-baselines-supp}. 
As can be seen from Fig.~\ref{fig:qual-baselines-supp}, GeoSim produces much more realistic and 3D-aware simulated images, compared to the visually significant failures (e.g., blurred textures, implausible placements, distorted shapes, boundary artifacts) produced by the other simulation methods. We also showcase more qualitative comparisons on the public Argoverse dataset in Fig.~\ref{fig:argo-qual-results-supp}. \subsection{Failure Cases} \input{figure_failures_supp} While GeoSim manages to produce realistic results in many cases, there is still room for improvement. In Fig.~\ref{fig:qual-failures-supp}, we highlight four major failure cases: (1) incorrect occlusion relationships in complicated scenes, (2) irregular reconstructed meshes, (3) inaccurate object poses, usually caused by map errors, and (4) illumination failures, caused by a distinct illumination difference between the rendered segment and the target scene. \begin{figure*}[!ht] \centering \includegraphics[width=1\textwidth]{figs_supp/abl.jpg} \\ \caption{\textbf{Qualitative ablation on the composition.}} \label{fig:qual-abl} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figs/lane_qual.jpg} \caption{\textbf{GeoSim results without lane map.}} \label{fig:lanemap} \vspace{-2mm} \end{figure*} \subsection{Segment Retrieval Details} \paragraph{Single-view Segment Retrieval:} To retrieve object views for rendering in the target view, we first eliminate candidates from significantly different viewpoints (more than 10\si{\degree} of view change relative to the target view). Then we rank the existing objects by considering their similarity in relative view angle $\theta$ and distance $d$ from the camera. \[ \textrm{score}(\textrm{object}_\textrm{tgt}, \textrm{object}_\textrm{src}) = \abs{\theta_\textrm{tgt}-\theta_\textrm{src}} + 5\cdot\max(d_\textrm{tgt}-d_\textrm{src},0) \] Objects are then sampled (as opposed to taking a hard max) according to a categorical distribution weighted by their inverse score. \paragraph{Multi-view Segment Retrieval:} To retrieve objects with multiple views for rendering in videos and multi-camera settings, we consider the view range of each source object. For every object, we first calculate the view-range overlap with the target object, $\Delta \Theta$, and filter out source objects with small overlap ($\Delta \Theta<20 \si{\degree}$). Then we rank the existing objects by considering their overlap and minimum distance $d_\textrm{src}$ from the camera. \[ \textrm{score}(\textrm{object}_\textrm{tgt}, \textrm{object}_\textrm{src}) = 2\cdot { \Delta \Theta} + 5\cdot\max( \min{ d_\textrm{tgt}}- \min{d_\textrm{src}},0) \] Objects are then sampled (as opposed to taking a hard max) according to a categorical distribution weighted by their inverse score. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{figs/meshnet_arch_compressed.pdf} \\ \caption{\textbf{3D reconstruction network architecture.} Left: Image feature extraction backbone; Right: Multi-view image fusion block. \label{fig:meshnet_arch}} \end{figure*} \subsection{Shadow Generation Details} As discussed in Sec.~4.2 of the paper, Fig.~\ref{fig:shadow-gen-supp} shows the procedure that we apply to generate shadows. More qualitative comparisons showing the difference between results with and without shadow generation are presented in Fig.~\ref{fig:qual-baselines-supp}. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{figs_supp/shadow/shadow_schematics.jpg} \caption{\textbf{Schematics of shadow generation}.
From left to right: result without shadow; schematics of the virtual scene; shadow weight (ratio of intensity between the rendered image with and without the inserted object); result with shadow.} \label{fig:shadow-gen-supp} \end{figure*} \subsection{UrbanData Dataset breakdown} In UrbanData, we have roughly 16.5K labeled snippets. Out of these, 12K snippets have manually annotated 2D segmentation maps (one frame per snippet). As shown in Tab.~\ref{tab:urban-dataset-breakdown}, we divide these 16.5K snippets into multiple splits for training and testing the various components of the GeoSim pipeline as well as the baselines. Tab.~\ref{tab:urban-dataset-experiment-breakdown} maps each task to its corresponding training/testing dataset split. Note that GeoSim does not need any ground-truth labeled data; all labeled data are used for training and evaluation purposes. \begin{figure}[t] \begin{minipage}[c]{0.3\textwidth} \resizebox{0.95\textwidth}{!}{% \setlength{\tabcolsep}{3pt} \begin{tabular}{lcccc} \specialrule{.2em}{.1em}{.1em} Dataset Split & \#Logs & \#Frames used \\ \hline Split A & 5K & 1.2M \\ Split B & 7K & 7K \\ Split C & 2.8K & 2.8K \\ Split D & 2K & 2K \\ \specialrule{.2em}{.1em}{.1em} \end{tabular} } \captionof{table}{\textbf{UrbanData dataset splits.}} \label{tab:urban-dataset-breakdown} \end{minipage} \hfill \begin{minipage}[c]{0.65\textwidth} \resizebox{0.95\textwidth}{!}{% \setlength{\tabcolsep}{3pt} \begin{tabular}{lcccc} \specialrule{.2em}{.1em}{.1em} Task & SubTask & Dataset Split & Label Used\\ \specialrule{.1em}{.05em}{.05em} GeoSim & Mesh Reconstruction training & Split A & 3D Box and mask from \cite{kirillov2020pointrend} \\ GeoSim & Post-Composition training & Split A & mask from \cite{kirillov2020pointrend} \\ GeoSim & Depth Completion training & Split B & Aggregated LiDAR \\ GeoSim & Whole pipeline & Split C & 3D Box from \cite{liang2020pnpnet} \\ \hline Baselines & Training & Split B & GT Semantic Mask \\ Baselines & Test & Split C & GT Semantic Mask \\ Downstream & Sim2Real Training & Split C & GT Semantic Mask and GeoSim Mask \\ Downstream & Sim2Real Evaluation & Split D & GT Semantic Mask\\ \specialrule{.1em}{.05em}{.05em} \end{tabular} } \captionof{table}{\textbf{UrbanData experimental setting.}} \label{tab:urban-dataset-experiment-breakdown} \end{minipage} \end{figure} \subsection{Depth Completion} To realistically place the simulated object in the new scene, we need to infer the occlusion relations between the simulated object and the existing scene elements. To do that, we compare the rendered depth of each simulated object's pixel with the corresponding pixel of the background scene. Since the partial LiDAR sweep belonging to the background scene is not dense enough, we first perform depth completion to generate a dense depth map for the corresponding scene. In this section, we first provide the ground-truth dense depth dataset preparation details, followed by the architectural details of the depth completion model. \paragraph{Training Data Preparation:} The UrbanData dataset has long trajectories of LiDAR and camera sensor data with manually annotated detection labels. We first generate a dense LiDAR point cloud by aggregating the multi-sweep LiDAR sensor data. For the static background scene, we aggregate the multi-sweep data by compensating for the ego-motion of the SDV. For all the objects (extracted using the detection labels), we aggregate the multi-sweep data by transforming each of them into the corresponding object coordinate system.
For the dynamic objects, we additionally perform color-based ICP to better register the multi-sweep data. We further densify all the aggregated ``vehicle'' point clouds by converting each of them to a dense watertight mesh using a pre-trained implicit surface reconstruction model, DeepSDF \cite{park2019deepsdf}. The ``non-vehicle'' categories were densified by splatting each LiDAR point onto a triangular surfel disk. The aggregated point cloud is then rendered into the corresponding camera images to generate the dense depth maps. We first render the background scene aggregated points, followed by (instance-segmentation-map aware) rendering of all the detected objects in ascending order of the objects' median depth. We used manually annotated ground-truth instance-segmentation maps for depth map refinement. \paragraph{Architecture Details:} We use the generated dense-depth dataset to train a depth completion model. For the model, we use the same network architecture as DeepLabV3 \cite{chen2017rethinking}, except for the first and last convolution layers. The input to the model is a concatenated array of the camera image, the projected sparse depth image, the projected sparse depth mask, and the dilated (dilated to the 9 neighbouring pixels) sparse depth map. We initialize the DeepLabV3 architecture with the pre-trained COCO weights. \section{Additional Technical Details} \subsection{3D Reconstruction Network} Our 3D reconstruction network takes cropped images and LiDAR sweeps from multiple viewpoints. All cropped images are padded to have the same height and width and are then resized to 256 $\times$ 256. A small fully convolutional network (as seen in Fig.~\ref{fig:meshnet_arch}) is used to extract image features. Note that in the figure, $Conv(K,S,C)$ refers to a convolution layer with kernel size $K$, stride $S$ and output channel $C$. Padding is adjusted to make sure the output size is the same as the input. GroupNorm~\cite{wu2018group} with 32 channels per group is used after each convolution. ReLU is used as the non-linear activation. The final output is flattened to a single feature vector for each image. To fuse the image features from multiple views, we design a fuse block (as shown on the right of Fig.~\ref{fig:meshnet_arch}). In the block, multi-view features are max-pooled to a single vector, which is then concatenated with each input. The augmented feature vector for each view is processed by a two-layer MLP. The dimensions of all hidden layers and output layers of the MLP are 1024. We adopt 4 fuse blocks followed by max-pooling to generate the final image-based feature vector. We adopt a standard PointNet~\cite{qi2016pointnet} as the LiDAR feature extractor, which produces a feature vector of size 1024. Two linear layers consume the final concatenated LiDAR and image features to produce the mean-shape mesh deformation $\delta V$. The mean-shape mesh is initialized from an icosphere with 2562 vertices and 5120 triangle faces. As defined in the paper, we supervise the mesh reconstruction pipeline using a silhouette consistency loss, a LiDAR consistency loss and the mesh regularization terms. The \emph{regularizers} are applied to enforce prior knowledge over the resultant 3D shape, namely local smoothness of the vertices as well as the normals.
$$\ell_{\textrm{regularization}}(\mathbf{M}_i) = \alpha \ell_{\textrm{edge}}(\mathbf{M}_i) \;+\; \beta\ell_{\textrm{normal}}(\mathbf{M}_i) \;+\; \gamma \ell_{\textrm{laplacian}}(\mathbf{M}_i)$$ The \emph{edge regularization} term penalizes long edges, thereby preventing isolated vertices: $ \ell_{\textrm{edge}}(\mathbf{M}_i) = {\sum_{\mathbf{v} \in \mathbf{V_i}} \sum_{\mathbf{v}^\prime \in \mathbf{N_v}} \norm{\mathbf{v}-\mathbf{v}^\prime}_2^2} $, with $\mathbf{N_v}$ the first-ring neighbour vertices of a given vertex $\mathbf{v}$. The \emph{Laplacian regularization} \cite{mesh_laplacian} preserves local geometry and prevents intersecting mesh faces by encouraging the centroid of the neighbouring vertices to be close to the vertex: $ \ell_{\textrm{laplacian}}(\mathbf{M}_i) = {\sum_{v \in \mathbf{V_i}} \norm{ \sum_{\mathbf{v}^\prime\in \mathbf{N_v}} (\mathbf{v}-\mathbf{v}^\prime)}_2^2} $. The \emph{normal regularization} enforces smoothness of the local surface normals, i.e., neighbouring faces are expected to have similar normal directions: $ \ell_{\textrm{normal}}(\mathbf{M}_i) = {\sum_{(i, j) \in \mathbf{N}_\mathbf{F}} (1 - \langle \mathbf{n}_{i}, \mathbf{n}_{j}\rangle)} $, with $\mathbf{N}_\mathbf{F}$ the set of all neighbouring face index pairs, and $\mathbf{n}_i$ the surface normal of a given face $\mathbf{f}_i$. We set $\alpha, \beta, \gamma$ to 10.0 in our experiments. The model is trained for 200 epochs with 4 input views and batch size 64 on 16 GPUs. We train it using the Adam optimizer with an initial learning rate of 0.001, decayed by a factor of 0.1 at epochs 150 and 180, respectively. Training takes about 6 hours. \subsection{Post-composition Synthesis Details} \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{figs/synnet_data_aug_compressed.pdf} \\ \caption{\textbf{Input data preparation for training the synthesis network}. From left to right: scene image $I$, object segment $S$ and mask $M$, and three random data augmentations, including color jitter, segment-boundary erosion-expansion, and random masking at the boundary. \label{fig:data_aug}} \end{figure} \paragraph{Training Data Preparation:} Our synthesis network is trained on dynamic object images with per-pixel instance labels inferred by \cite{kirillov2020pointrend} in the target scene. Given a scene image $I$, we first sample a vehicle binary mask $M$ in that scene, as well as its corresponding RGB segment $S=I\cdot M$. Then we apply data augmentation on the segment and mask to mimic the noisy input at the inference stage, with color inconsistency, missing texture and imperfect boundaries. Specifically, we apply 1) random color jitter on the scene and segment separately, which randomly changes the brightness, contrast and saturation to mimic color inconsistency between the foreground and the target scene; 2) random erosion of the segment boundary by 3 to 20 pixels and random dilation of the mask $M$ by 3 to 40 pixels to blend the object boundary naturally; and 3) a random drop on the segment $S$ of 0.1\%--1\% of the total boundary pixels, with random dilation applied to those samples, to mimic missing textures of the inserted virtual object. Please refer to Fig.~\ref{fig:data_aug} for an illustration of this data augmentation process. \paragraph{Architecture Details:} Our synthesis network architecture is inspired by \cite{yu_free-form_2019}. One difference is that our network takes the instance segment mask as an additional input. Thus, the network takes as input the object segment, the target scene and the mask region.
We crop a 512$\times$512 region centered around the object center from the full scene. \paragraph{Loss Functions:} We apply a perceptual loss \cite{DBLP:journals/corr/JohnsonAL16} between the generated image $I'$ and the real image $I$, using the \texttt{conv3\_3} feature activations in a pretrained VGG16 network $F_v$. \[ L_G^{\textrm{perc}} = \sum \norm {F_v(I) - F_v(G(I\cdot (1-M^A), M^A, S^A))}_1 \] A GAN loss is also applied to optimize the synthesis network. \[ L_G^{\textrm{gan}} = - \mathbb{E}_{z\sim P_z(z)}[D(G(I\cdot (1-M^A), M^A, S^A))] \] For the discriminator, we adopt the same loss as \cite{yu_free-form_2019}. \paragraph{Inference:} At the inference stage during simulation, our network takes the raw composition image from novel-view warping and occlusion reasoning as input and produces naturally blended results. Specifically, we crop a square region with the visible rendered segment in the centre, which is twice the size of the longer side of the rendered vehicle ($size$). The cropped size is set to be $128\times 128$ at least and $1024\times 1024$ at most. Invisible pixels due to inverse warping are filled with zeros. Then we erode the rendered segment by $size/64$ pixels and dilate the mask by $size/32$ pixels. These inputs are fed into the synthesis network and the final outputs are pasted back into the original location. \subsection{Video Simulation Details} In order to simulate realistic video footage with new dynamic objects inserted, we first select a subset of snippets from our dataset of real-world logs, each consisting of 50 consecutive frames sampled at 10\si{\hertz} (for a total duration of 5s), to augment with up to 5 simulated objects using our approach. For these relatively short sequences, we sample the object placement within the initial camera field-of-view in the first frame of the sequence and equip the object with a realistic path using the local lane graph, as illustrated in Figure 3 of the main text. To retrieve an object for insertion from the 3D asset bank, we follow the multi-view segment retrieval procedure described above: we calculate the view-range overlap $\Delta \Theta$ with the target object, filter out source objects with small overlap ($\Delta \Theta<20 \si{\degree}$), rank the remaining objects by their overlap and minimum distance from the camera, and sample according to a categorical distribution weighted by their inverse score. We now describe how we use a set of heuristic behavior and lateral/longitudinal driving models to refine the path into a timestep-by-timestep \emph{trajectory} of \emph{kinematic states} comprising location, orientation, velocity, and acceleration. This realistic trajectory simulation helps the added vehicle achieve realistic motion in the simulated video, such as braking and acceleration. After executing segment retrieval and performing a final collision check, the object and its refined trajectory are then inserted into the log. Additional objects can be added in a similar fashion. We use a cost-based heuristic behavior model for determining lane change actions, a heuristic lateral model for the side-to-side motion of the object, and the Intelligent Driver Model~\cite{schulz_interaction-aware_2018} for the longitudinal movement.
We define the following notation: \begin{align*} p_x &= \textrm{longitudinal position} \\ v_x &= \textrm{longitudinal velocity} \\ p_y &= \textrm{lateral position} \\ v_y &= \textrm{lateral velocity}. \end{align*} All models use a frequency, or reaction time, of 10\si{\hertz}. \subparagraph{Behavior Model Details:} Two actors are considered to be colliding if the inter-vehicle distance between them is less than 2\si{m}. Let $d_h,d_f$ denote the distances between the given vehicle and the headway (nearest front) vehicle or following (nearest back) vehicle, respectively. Similarly, define $c_h,c_f$ to be the headway and following costs. The cost functions we use are: \begin{align*} c_h &= \begin{cases} 10^8 & d_h < 2 \\ \tfrac{10^4}{d_h} & \textrm{otherwise} \end{cases} \\ c_f &= \begin{cases} 10^8 & d_f < 2 \\ \tfrac{10^2}{d_f} & \textrm{otherwise}. \end{cases} \end{align*} The cost of making a lane change is $10^3$, and the cost $c_l$ for being close to the end of a lane, at distance $d_l$ from the lane end, is \[ c_l = \tfrac{10^5}{d_l}. \] \subparagraph{Lateral Model Details:} We use a simple heuristic for the target lateral speed that seeks to return the vehicle to the lane centerline, bounded by a function of the longitudinal (forward-backward) movement. Specifically, \[ v_y = \min(-p_y, 0.1 v_x), \] clipped so that the maximum acceleration magnitude is 3\si{ms^{-2}}. \subparagraph{Longitudinal Model Details:} We use the following parameters: minimum and maximum target speeds of 15\si{ms^{-1}} and 25\si{ms^{-1}} respectively, an acceleration exponent of 4 (dimensionless), maximum acceleration and deceleration magnitudes of 5\si{ms^{-2}}, a minimum gap of 2\si{m}, a headway time of $1.5$\si{s}, and a default vehicle length of $4.5$\si{m}.
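For concreteness, the sketch below shows one possible 10\si{\hertz} longitudinal update using the standard form of the Intelligent Driver Model with the parameters listed above. This is an illustrative reconstruction under the assumption that the cited model takes its usual textbook form; it is not an excerpt of our simulator, and all names are ours.
\begin{verbatim}
import math

A_MAX = 5.0    # maximum acceleration magnitude [m/s^2]
B_MAX = 5.0    # maximum deceleration magnitude [m/s^2]
DELTA = 4      # acceleration exponent (dimensionless)
S0 = 2.0       # minimum gap [m]
T_HEAD = 1.5   # headway time [s]
DT = 0.1       # 10 Hz reaction time [s]

def idm_accel(v, v0, gap, dv):
    """Standard IDM acceleration for current speed v, target speed v0
    (drawn from [15, 25] m/s), bumper-to-bumper gap (> 0) to the headway
    vehicle, and closing speed dv = v - v_headway."""
    s_star = S0 + max(0.0, v * T_HEAD
                      + v * dv / (2.0 * math.sqrt(A_MAX * B_MAX)))
    a = A_MAX * (1.0 - (v / v0) ** DELTA - (s_star / gap) ** 2)
    return max(-B_MAX, min(A_MAX, a))

def longitudinal_step(p_x, v_x, v0, gap, dv):
    """One explicit Euler update of longitudinal position and speed."""
    a = idm_accel(v_x, v0, gap, dv)
    return p_x + v_x * DT, max(0.0, v_x + a * DT)
\end{verbatim}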
\section{Introduction} A charged drop of radius $a$ suspended in a medium with electrical permittivity $\epsilon_e$ undergoes an instability when the total charge on the drop exceeds a critical value $Q_c=8 \pi \sqrt{\gamma a^3 \epsilon_e}$, where $\gamma$ is the surface tension of the drop \cite{rayleigh1882}. This is termed the Rayleigh instability, which is believed to be responsible for the breakup of raindrops in thunderstorms, the formation of sub-nanometer droplets in electrosprays and the generation of ions in ion-mass spectrometry \cite{rosell1994,fenn1989}. The instability occurs when the repulsive Coulombic force overcomes the restoring surface tension force. An infinitesimal quadrupolar shape perturbation (the $2^{nd}$ Legendre mode) on a spherical drop charged beyond $Q_c$ is known to be the most unstable mode \cite{thaokar10}. Although the Rayleigh limit predicts the point of onset of instability, it leaves the details of the breakup pathway completely unspecified. The inherent complexity of the breakup mechanisms was demonstrated only recently \cite{duft03,giglio08} through systematic experiments on a levitated charged drop in a quadrupolar trap. These experiments indicated that above its Rayleigh limit, a charged drop gradually deforms into the shape of a prolate spheroid, elongating further to form sharp conical tips, from which a jet emerges within a very short time. These jets further disintegrate into a cloud of smaller daughter droplets which eventually take away a significant fraction (20-50\%) of the original charge, although the associated mass loss is small (0.1-1\%) \cite{doyle64, abbas67, roulleau72, richardson89, taflin89, duft03}. The sizes and charges of the daughter droplets thus formed are important since they determine whether the progenies can undergo further breakup or not. In this letter, we focus on providing a model for explaining the observed breakup pathway and on estimating the size and charge of the daughter droplets. The Rayleigh fission process of an isolated charged drop is generally modeled under the assumption of a perfectly conducting (PC) liquid drop, in which the charges redistribute instantaneously so that the surface remains equipotential. The flow equations are solved numerically using the boundary element method (BEM) either in the viscous flow limit \cite{betelu06} or in the potential flow limit \cite{burton11}. Both these studies show that the charged drop deforms initially into the shape of a spheroid, progressively deforming into an elongated object with sharp conical tips, whereafter the simulations encounter a numerical singularity. The results in the viscous flow limit indicate that the capillary stresses at the sharp tips become subdominant, and a balance of the viscous and the electric stresses leads to the formation of a dynamic cone angle of about $25^{o}$. In contrast, the simulations in the potential flow limit yield a cone angle of about $49.3^{o}$, coincidentally close to the classical equilibrium angle of the Taylor cone derived from static considerations. Actual experimental images show a cone angle of $30^{o}$, indicating significant viscous effects \cite{giglio08}. Furthermore, the PC model was also used to predict a fractional charge loss of about 39\% \cite{gawande2017}, assuming negligible mass loss. To proceed beyond the singularity within the framework of the PC model and predict ejection, \citet{garzon14} performed BEM simulations coupled with a level set technique for inviscid drops.
Although the model could predict daughter droplets, the ejection occurred from protrusions whose lengths were far smaller (1/5th of the droplet diameter) than the long jets (3 times the drop diameter) observed in experiments \cite{duft03, giglio08}. Moreover, in view of the absence of viscosity, these protrusions were likely to be artefacts of inertial excursions rather than the result of the sustained tangential stresses necessarily required for jet formation. Considering this, the PC model fails to predict jet formation, and one needs to look for alternative mechanisms to explain the complex pathway of breakup seen in experiments. \citet{collins08, collins13} observed that charge dynamics and viscous stresses are necessary for jet formation in their study of the breakup of uncharged, weakly conducting oil drops {\em under strong applied electric fields}. Taking a cue from this, we apply the idea to the case of Rayleigh breakup of a charged drop ({\em in the absence of any external electric field}) with high but finite conductivity, which involves faster dynamics and high electric stresses in the viscous limit. In this spirit, the problem is solved numerically by considering an electrically charged drop of a conducting liquid of viscosity ${\mu}_i$, density ${\rho}_i$ and conductivity $\sigma_i$ suspended in a perfectly dielectric Newtonian fluid medium of viscosity $\mu_e$ and permittivity ${\epsilon}_e$. The external medium is considered to be a gas or air. The electric potential on the surface of the drop due to the presence of surface charge is denoted by $\tilde {\phi}$ and the electric field is expressed as $\tilde{{\bf E}}=-\tilde{{\bf \nabla}} \tilde {\phi}$, with $\tilde{\nabla}^2 \tilde{\phi}=0$. We consider the hydrodynamics in the Stokes flow limit, such that the Ohnesorge number, $Oh=\mu_i/\sqrt{\rho_i a \gamma}$, is large. Dimensional quantities are indicated by a tilde and nondimensional quantities are written without it. The nondimensionalization used in this problem is as follows: lengths are scaled by the initial radius $a$ of the spherical drop, time by the hydrodynamic timescale $t_h=\mu_i a/\gamma$, velocity by $\gamma/\mu_i$, the surface charge density $\tilde q$ by $\sqrt{\frac{\gamma \epsilon_e}{a}}$, and the electric field by $\sqrt{\frac{\gamma}{a \epsilon_e}}$. The total surface charge is nondimensionalized by $\sqrt{\gamma a^3 \epsilon_e}$, such that the nondimensional Rayleigh charge is $8\pi$. The electrostatics and the Stokes equations are solved using the axisymmetric boundary integral method, following well-established methodologies \cite{deshmukh12,lanauze2015,gawande2017}. For a finitely conducting (FC) charged drop, the electric boundary conditions at the interface in the scaled variables can be written as $E_{n_e}-S E_{n_i}=q$ and $E_{t_e}=E_{t_i}$.
Thus the boundary integral equation for the electric field calculation is given by, \begin{multline} \frac{(S+1)}{2S} E_{n_e}({ \bf r}_s)+\frac{1}{4\pi}(\frac{S-1}{S})\int {\bf{n}}\cdot { \bf {\nabla}} {\bf{G}}^e({ \bf r},{ \bf r}_s) E_{n_e}({ \bf r}) dA({ \bf r}) \\ =\frac{1}{2S}q({ \bf r})-\frac{1}{4\pi S}\int {\bf{n}}\cdot{ \bf {\nabla}} {\bf{G}}^e({ \bf r},{ \bf r}_s) q({ \bf r})dA({ \bf r}) \label{eqn:BIforE} \end{multline} and for the electrostatic potential $\phi({ \bf r}_s)$, \begin{equation} \phi({ \bf r}_s)=\frac{1}{4\pi} \int {\bf{G}}^e({ \bf r},{ \bf r}_s) (E_{n_e}({ \bf r})-E_{n_i}({ \bf r}))dA({ \bf r}) \label{eqn:BIforpot} \end{equation} where ${\bf{G}}^e ({ \bf r},{ \bf r}_s)=\frac{1}{ |{\bf r-r_s}|}$, ${\bf{r}}$ and ${\bf {r}_s}$ are position vectors on the surface of the drop with area $A$, and $E_{n_e}={\bf E}_e\cdot {\bf n}$, where ${\bf n}$ is the outward unit normal. The conservation of the total surface charge is ensured through the charge dynamics equation, which on nondimensionalisation reduces to \begin{equation} \frac{\partial q}{\partial t}=\frac{S}{Sa}E_{n_i}- \left(\frac{1}{r}\Bigr{[}\frac{\partial}{\partial s}(q r v_t)\Bigr{]}+q({ \bf {\nabla}}_s \cdot \bf{n})(\bf{v}\cdot \bf{n}) \right) \label{eqn:charge_conservation} \end{equation} where $S=\epsilon_i/\epsilon_e$ is the ratio of the permittivities of the drop and the external medium, and $Sa=t_e/t_h$ is the nondimensional Saville number, with $t_e=\epsilon_i/\sigma_i$ the charge relaxation time and $t_h$ the hydrodynamic timescale. ${ \bf {\nabla}}_s=(\bf{I}-\bf{n}\bf{n})\cdot { \bf {\nabla}}$ represents the surface gradient \cite{stone90}. The first term on the right-hand side of equation \ref{eqn:charge_conservation} accounts for charges brought to the surface by conduction, while the second and third terms are convection terms. The second term represents the meridional advection of charges, while the third term is a source-like term which accounts for the local change in the charge density due to dilation of the drop surface. Here, the outside electric field does not appear since the conductivity of the external fluid medium, $\sigma_e$, is considered to be zero. The force density responsible for drop deformation is then given by $\triangle{\bf{f}}= ({ \bf {\nabla}} \cdot {\bf{n}})\,{\bf{n}}-[{\bf{\tau}^e}]$, where $[{\bf{\tau}^e}]$ is the nondimensional jump in the electric traction across the interface and is given by, \begin{equation} [{\bf{\tau}^e}]=\frac{1}{2}[(E_{n_e}^2-SE_{n_i}^2)+(S-1)E_{t_e}^2]{\bf n}+q E_{t_e} \bf{t} \label{eqn:taue} \end{equation} To initiate the evolution process, the drop is deformed slightly into the shape of a prolate spheroid and a total charge of $8.1\pi$ (slightly above the Rayleigh limit) is then distributed uniformly on the deformed drop. In the present simulations it is ensured that the total surface charge is conserved to an accuracy of 1\%. Numerically, adaptive meshing is used to ensure that the local grid size $\triangle s_{min}$ is always smaller than the minimum neck radius. The time steps are adapted using the criterion $\triangle t= C \triangle s_{min}/v_{n_{max}}$, where $C$ denotes the CFL number, which is kept constant at 0.01, and $v_{n_{max}}$ is the maximum velocity with which the grid points move in the given timestep (see SM for details).
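The adaptive spatial and temporal resolution controls are simple to state in code; the following is a minimal sketch (illustrative Python only, with our own variable names):
\begin{verbatim}
C_CFL = 0.01  # CFL number, kept constant in the simulations

def adaptive_timestep(ds_min, v_n_max):
    """Time step criterion dt = C * ds_min / v_n_max, with ds_min the
    smallest local grid spacing and v_n_max the maximum speed at which
    any grid point moves in the current step."""
    return C_CFL * ds_min / v_n_max

def mesh_ok(ds_min, neck_radius):
    """Adaptive meshing keeps the local grid size below the minimum
    neck radius; remesh when this check fails."""
    return ds_min < neck_radius
\end{verbatim}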
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig1_shapes_new.jpg} \caption{Comparison of the temporal evolution of the drop shapes for the two cases of the PC (dotted line) and FC (solid line) drop models. The PC drop model forms sharp conical ends at $t=23.8$ and exhibits a numerical singularity, while the FC drop model continues to deform further and a jet is ejected, with a small progeny at the tip of the drop. The inset at the bottom right corner is an experimental image of drop breakup presented by \citet{duft03} (reprinted with permission).} \label{fig:sequence} \end{figure} The effect of conductivity is introduced through $Sa$. For example, a methanol drop of radius 50 $\mu m$, with the conductivity ($\sigma_i=4\times10^{-4} S/m$) taken at room temperature, has $Sa=0.55$. This indicates that, when the length scales are of the order of the droplet radius $a$, the charge relaxation is faster than the characteristic timescales used in the simulations. Thus it appears that the PC drop model may suffice to predict the Rayleigh fission process. For the PC drop, the charge distribution is instantaneous and the inside electric field is zero due to the assumption of an equipotential surface. Thus, for the PC drop, equation \ref{eqn:BIforpot} is used to calculate the external electric field $E_{n_e}$ by setting $E_{n_i}=0$. The unknown potential $\phi({\bf r_s})$ is constant on the surface of the drop, and is determined by the condition of conservation of charge, $\int E_{n_e} ({\bf r}) dA({\bf r})=Q$, where $Q$ is the initial charge on the drop surface, which is conserved during the shape evolution. Thus the nondimensional jump in the electric stresses in the PC drop model is given by $[{\bf{\tau}}^e]=\frac{1}{2} E_{n_e}^2 {\bf n}$. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig2_t_qandk.jpg} \caption{Comparison of the temporal evolution of the curvature and charge density at the tip of the drop for the two cases of the PC (filled symbols) and FC (open symbols) drop models.} \label{fig:tvskandq} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{fig3_final_svsq.jpg} \caption{The distribution of charge density on the drop surface as a function of normalised arclength ($s$) at various times, and drop shapes indicating the point of maximum charge density (red dot) at the corresponding times, for the (a) PC and (b) FC drop models. For the FC drop, the maximum charge density shifts from the poles of the drop towards the equator with time. Insets show magnified views near the poles for better clarity.} \label{fig:chargedensity} \end{figure} The typical drop deformation sequences with time in the PC and FC drop models are shown in figure \ref{fig:sequence}. At $t=23.8$, the PC drop model exhibits a shape singularity owing to its limitation of instantaneous charge transport and the absence of tangential electric stresses. Precisely this limitation is overcome by the FC model, in which the finite time taken by the flow of charge to the regions of high curvature limits the build-up of charge at the tip of the drop to a finite value, since by then the capillary stresses relax the tip curvature ($\kappa_{tip}$), and the simulations can be continued further. A temporal analysis, shown in figure \ref{fig:tvskandq}, indicates that the charge density at the poles reduces earlier in time (at $t$=22.5) than the curvature (maximum at $t$=23.2), suggesting that the reduction in curvature at the poles is a consequence of the reduction in charge density.
At this stage the electric potential near the tip of the FC drop reduces and the equipotential assumption is no longer valid (see the supplementary material for the potential distribution). The spatial variation of the charge density and the curvature, as shown in figure \ref{fig:chargedensity}(b), indicates that these variables are now extremized at a location below the poles, unlike in the PC drop, where the charge density and curvature remain maximum at the tip of the drop (figure \ref{fig:chargedensity}(a)). \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig4_pc_fc_stress.jpg} \caption{Electric stress distributions and velocity profiles for the (a), (b) PC and (c), (d) FC drop models at time $t=23.8$. The electric stress is purely normal in the PC drop model, with the maximum stress acting at the tip of the drop, while the stress distribution is modified by the presence of weak tangential stresses in the FC drop model. The velocity profiles show the reversal of flow due to the modified stress distribution in the FC drop model.} \label{fig:pcfc_stress} \end{figure} The reason for the deviation from the equipotential state is indeed the finite charge dynamics admitted by the FC model. As the dynamics accelerates after the formation of conical ends, the length-scale-independent charge relaxation time ($t_e$) becomes comparable to the size ($l$) dependent hydrodynamic timescale ($t_{hl}=\mu_i l/\gamma$). While in a slightly deformed drop (up to the formation of conical ends) the length scale can be assumed to be of the order of the size of the drop, subsequently the radius of curvature at the poles becomes the more relevant length scale. The slow charge dynamics relative to the hydrodynamics now means that the charges cannot reach the newly created surface instantaneously, resulting in a violation of the equipotential assumption. The variation of charge density and potential along the surface of the drop leads to a tangential field and thereby tangential electric stresses. Unlike the normal electric stresses, which can be balanced by the capillary forces, the tangential electric stresses lead to tangential fluid flow in the system. Thus a hyperboloidal tip is formed in the FC drop, from which a jet emerges (figure \ref{fig:sequence}). At the time of formation of the conical tips, the normal electric stresses are maximum at the poles in the PC model (figure \ref{fig:pcfc_stress}(a)). On the other hand, in the FC model, the normal electric stress increases with time up to the formation of conical ends, but subsequently shows a dramatic reduction at the poles. The tangential stress, on the other hand, while nearly negligible up to the cone formation, builds up with time. In the PC model the pressure (see SM for the pressure distribution) at the poles is negative due to the large normal tensile electric stress, thereby leading to a parabolic axial velocity profile, as shown in figure \ref{fig:pcfc_stress}(b). In the FC model, the pressure at the poles and in the neck region can be positive and high. The tangential stresses and the pressure distribution then lead to an axial velocity profile which is maximum at the drop interface in the jet region (figure~\ref{fig:pcfc_stress}(d)). This causes a flow reversal inside the droplet. Thus the modification of the normal electric stress distribution due to the presence of tangential electric stress leads to the emergence and subsequent fattening of a jet from the conical ends of the droplet.
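For reference, the traction jump of equation~\ref{eqn:taue}, whose normal and tangential parts drive the dynamics discussed above, can be evaluated pointwise as in the following sketch (illustrative Python, not the actual BEM code):
\begin{verbatim}
def traction_jump(E_ne, E_ni, E_te, q, S):
    """Nondimensional jump in electric traction across the interface:
    normal part 0.5*[(E_ne^2 - S*E_ni^2) + (S-1)*E_te^2],
    tangential part q*E_te.  Returns (f_n, f_t)."""
    f_n = 0.5 * ((E_ne**2 - S * E_ni**2) + (S - 1.0) * E_te**2)
    f_t = q * E_te
    return f_n, f_t

# Perfectly conducting (PC) limit: E_ni = E_te = 0, so the jump
# reduces to the purely normal stress 0.5 * E_ne**2.
\end{verbatim}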
The role of tangential stresses is confirmed by switching them off in the force balance, whereupon jet formation is not observed. The charge dynamics has three contributions: conduction from the bulk, convection along the surface, and the change in charge density due to surface dilation. Our analysis indicates that the charge density and jet formation are most affected by the surface dilation term. The strong stretching of the interface due to normal forces at the poles leads to a depletion of charges, thereby kick-starting the formation of a jet and its subsequent necking (see SM for more details). \begin{figure} \includegraphics[width=0.49\textwidth]{fig5_effectSa.pdf} \caption{Effect of $Sa$ on the drop shapes formed before breakup in the FC drop case: (a) size of the progeny, (b) length of the jet, (c) the ratio of the charge carried by the progeny droplet to its Rayleigh limit, indicating that the progeny droplets are unstable at the time of their formation, and (d) deformed drop shapes at the onset of progeny pinch-off as a function of $Sa$.} \label{fig:rdqdscale} \end{figure} Figure \ref{fig:rdqdscale}(a) shows the effect of conductivity, in terms of $Sa$, on the size of the progeny formed during the Rayleigh breakup. It is observed that the radius of the progeny ($r_d$) is smaller for lower $Sa$, which implies that a liquid drop of higher conductivity will form smaller progenies. This is in agreement with previous studies \cite{burton11,collins13}. A naive scaling balance of the electric time scale $t_e$ and the hydrodynamic time scale $t_{hl}$ leads to $l/a\sim Sa$. On the other hand, if we consider that the jet is issued after the conical tips approach the singularity, we find that the radius of the jet ($r_j$) is equivalent to the reciprocal of the curvature ($1/\kappa$) at the tip of the drop, which scales as $(t_o-t)^{1/2}$ (see \cite{gawande2017}). In dimensional terms, this suggests that $ r_j/a\sim [(\tilde{t}_o-\tilde{t})/(\mu_i a/\gamma)]^{1/2}$. Realising that the charge loss occurs over the electric time scale, $(\tilde{t}_o-\tilde{t})\sim t_e$, leads to $r_j/a\sim Sa^{1/2}$. Thus, over the length scale $l$, the jet has a lower charge, and thereby the surface tension forces become dominant in the jet region. This leads to jet breakup by the Rayleigh-Plateau instability and forms progeny droplets of size equivalent to the radius of the jet. This qualitatively explains the progeny droplet size $r_d\sim Sa^{1/2}$ observed in the simulations. Similarly, the scalings for the dimensional charge present on the progeny (shown in figure \ref{fig:rdqdscale}(c)) can be explained by the singular scaling of the charge density at the incipience of a jet, which is given by $q_d\sim [(t_o-t)/(\mu_i a/\gamma)]^{-1/2}$ \cite{gawande2017}. Thus the total charge on the daughter droplet, $Q_d\sim q_d {r_d}^2$, implies that $Q_d\sim Sa^{1/2}$ over the electric timescale $t_e$. This, when presented in terms of the fraction of the Rayleigh charge ($Q_c$) of the progeny, results in $\tilde{Q}_d\sim Q_c Sa^{-1/4}$ (figure \ref{fig:rdqdscale}(c)). This indicates that the Rayleigh fission of a charged droplet with high conductivity will produce marginally stable progeny droplets. This result is in agreement with the results predicted by potential flow analysis \cite{burton11}. The asymptotic results at high $Sa$ are in agreement with perfect dielectric calculations (not discussed here), which are independent of the conductivity of the droplet.
Thus, the weak scaling of $r_d$ ($\sim Sa^{0.1}$) at higher $Sa$ can be attributed to strong dielectric effects. It is also observed that the jet length increases with decreasing conductivity, reaching a maximum value for $Sa=1.1$, but decreases for higher $Sa$ values (figure \ref{fig:rdqdscale}(b)). The drop shapes at the onset of breakup shown in figure \ref{fig:rdqdscale}(d) indicate that drops with higher conductivity form a distinct jet before a progeny detaches from the tip of the drop by pinch-off. Our numerical analysis indicates that, while the jet incipience occurs over a fast time scale of $Sa$, the jet elongates over a slower time scale of $Sa^{1/2}$. Since the jet velocity scales as $Sa^{-1/3}$, this leads to the jet length scaling as $Sa^{1/6}$. However, at lower conductivities the dominance of capillary stresses occurs much earlier than the formation of a sustained jet, and the droplet breaks by pinch-off. The high-$Sa$ regime is not of much practical relevance in studies of Rayleigh breakup and electrosprays, wherein salts are often added to increase the conductivity of the liquids. \\ We have investigated the formation of daughter droplets due to the highly nonlinear breakup of a charged jet in the viscous limit. The analysis is valid when $Oh\gg1$. Thus the present analysis could be considered for the breakup of droplets of sizes of the order of their viscous length scale ($\mu_i^2/(\rho \gamma)$) or smaller. For example, the results presented in this work will hold for the case of $4 \mu m$ droplets of 1-octanol, $6 \mu m$ droplets of n-decanol, or $32 \mu m$ droplets of triethylene glycol. We propose that, since droplets in processes such as electrospray or ionisation in ion-mass spectrometry eventually undergo Rayleigh fission at the smallest length scales, the viscous analysis does become relevant in these processes at late stages, and could actually explain the nanometer-sized daughter droplets formed in some experiments on electrospray \cite{chen1995,singh2016}. Thus, while the analysis of \citet{collins13} will indeed hold for the prediction of the droplet size emerging from a Taylor cone in an electrospray experiment under an applied electric field, the final size distribution could be governed by the viscous scaling suggested in this work.
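As an illustration of these estimates, the sketch below evaluates $Sa=t_e/t_h$ and the progeny-size trend for the methanol example quoted earlier; the relative permittivity, viscosity and surface tension values used here are nominal room-temperature numbers assumed for illustration only.
\begin{verbatim}
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def saville(eps_r, sigma, mu, a, gamma):
    """Sa = t_e/t_h, with t_e = eps_i/sigma_i the charge relaxation
    time and t_h = mu_i*a/gamma the hydrodynamic timescale."""
    t_e = eps_r * EPS0 / sigma
    t_h = mu * a / gamma
    return t_e / t_h

# Methanol drop, radius 50 um, sigma_i = 4e-4 S/m (as in the text);
# eps_r, mu, gamma are assumed nominal values for methanol:
Sa = saville(eps_r=32.7, sigma=4e-4, mu=5.9e-4, a=50e-6, gamma=22.6e-3)
print(f"Sa = {Sa:.2f}")  # ~0.55, consistent with the value quoted above

# The scalings derived above then suggest, up to undetermined
# prefactors, r_d/a ~ Sa**0.5 for the progeny radius at moderate Sa.
\end{verbatim}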
\section{Introduction} Expanders are sparse graphs with high connectivity properties. The explicit construction of expander graphs has potential applications in computer science and is an area of active research. One of the most significant recent results on expanders is that Cayley graphs of finite simple groups are expanders; see \cite{KasLubNik06},\cite{BreGreTao}. More precisely, there exist $k$ and $\epsilon>0$ such that every non-abelian finite simple group $G$ has a set $S$ of $k$ generators for which the Cayley graph $X(G; S)$ is an $\epsilon$-expander. The size of $k$ is estimated at around $1000$. The present paper is a byproduct of the investigation in \cite{BloHof2009b, BloHof2009c} of Curtis-Tits structures and the associated groups. A Curtis-Tits (CT) structure over $\mathbb{F}_q$ with (simply laced) Dynkin diagram $\Gamma$ over a finite set $I$ is an amalgam ${\mathcal A}=\{G_i,G_{i,j}\mid i,j\in I\}$ whose rank-$1$ groups $G_i$ are isomorphic to $\SL_2(\mathbb{F}_q)$, where $G_{i,j}=\langle G_i,G_j\rangle$, and in which $G_i$ and $G_j$ commute if $\{i,j\}$ is a non-edge in $\Gamma$ and are embedded naturally in $G_{i,j}\cong \SL_3(\mathbb{F}_q)$ if $\{i,j\}$ is an edge in $\Gamma$. It was shown in \cite{BloHof2009b} that such structures are determined up to isomorphism by group homomorphisms from the fundamental group of the graph $\Gamma$ to the group $\Aut(\mathbb{F}_q)\times \mathbb{Z}_2\le \Aut(\SL_2(\mathbb{F}_q))$. Moreover, in the case when the diagram is in fact a cycle, all such structures have non-collapsing completions, which are described in \cite{BloHof2009c}. It turns out that such groups can be described as fixed subgroups of certain automorphisms of Kac-Moody groups. This is an important point since they will turn out to have Kazhdan's property (T) and hence will give rise to families of expanders. Many of these groups will be Kac-Moody groups themselves, but some will not. In particular, again in the case where the diagram is a cycle, we obtain a new group which turns out to be a lattice in $\SL_{2n}(K)$ for some local field $K$, and so it will have property (T). Moreover, the group in question will have finite unitary groups as quotients, giving a more concrete result for unitary groups than~\cite{KasLubNik06}. In particular we have \begin{mainthm}\label{maintheorem1} For any $n>1$ and prime power $q$, there exists an $\epsilon>0$ such that for any positive integer $s$, there exists a symmetric set $S_{n,q^s}$ of five generators for $\SU_{2n}(q^s)$ so that the family of Cayley graphs $X(\SU_{2n}(q^s), S_{n,q^s})$ forms an $\epsilon$-expanding family. \end{mainthm} One can view this result as a generalization of results that derive expander graphs from Kac-Moody groups. Indeed, it is known (cf.~\cite{DymJan2002}) that if $q>\frac{(1764)^n}{25}$, then the automorphism group $G(\mathbb{F}_q)$ of a Moufang twin-building over $\mathbb{F}_q$ has Property (T), and so, by Margulis' theorem (Theorem~\ref{margulis}), if $G(\mathbb{F}_q)$ admits infinitely many finite quotients, their Cayley graphs form a family of expanders. In fact, by~\cite{CaRe2006}, non-affine Kac-Moody groups of rank $n<q$ over $\mathbb{F}_q$ are almost simple, so they do not have finite quotients. Therefore the above result only applies to locally finite Moufang twin-buildings of affine type. In this case, it is known that if $G$ is a connected almost ${\mathbb K}$-simple algebraic group over a local field ${\mathbb K}$ with rank $\ge 2$, then $G({\mathbb K})$ has property (T) (cf. Theorem 1.6.1 in~\cite{Bek08}).
Again this allows one to create a family of expanders for each group $G({\mathbb K})$ and each characteristic. Our methods were introduced in \cite{BOU00, GraHorMuh2011, GraMuh08} in a slightly more general setting. Theorem~\ref{maintheorem1} is weaker than the types of results in \cite{KasLubNik06} and \cite{BreGreTao} in the sense that the rank and the characteristic of the groups need to be fixed. However, our construction is very explicit and the generating set involved is very small compared to theirs. \paragraph{Acknowledgement} The second and third authors would like to thank the Isaac Newton Institute in Cambridge, where part of this work was done. \section{The groups} Let $V$ be a free $\mathbb{F}_q[t,t^{-1}]$-module of rank $2n$ with basis $\{e_i,f_i\mid i\in I\}$. Here $I=\{1,\ldots,n\}$ and $\mathbb{F}_q[t,t^{-1}]$ denotes the ring of commutative Laurent polynomials in the variable $t$ over a finite field $\mathbb{F}_q$. Recall that a $\sigma$-sesquilinear form $\beta$ on $V$ is a map $\beta:W\times V \to \mathbb{F}_q[t,t^{-1}]$ with $W=V$, such that $\beta$ is linear in the first coordinate and $\beta(u, \lambda v+ w)=\sigma(\lambda)\beta(u,v)+\beta(u,w)$ for all $u,v,w\in V$ and $\lambda\in \mathbb{F}_q[t,t^{-1}]$. Such a form is determined by its values on basis elements. Let $\beta$ be such that, for all $i,j\in I$, $\beta(e_{i},e_{j})=\beta(f_{i},f_{j})=0, \beta(e_{i}, f_{j})=t\delta_{ij}$ and $\beta(f_{i},e_{j})=\delta_{ij}$, where $\sigma\in \Aut(\mathbb{F}_q[t,t^{-1}])$ fixes each element of $\mathbb{F}_q$ and interchanges $t$ and $t^{-1}$. We consider the group of isometries of $\beta$; more precisely, \[G^\tau:=\{ g \in \SL_{2n}(\mathbb{F}_q[t,t^{-1}])\mid \forall x,y \in V,\ \beta(gx,{g}y)=\beta(x,y)\}.\] In \cite{BloHof2009c} it was proved that $G^\tau$ is a universal ``non-orientable'' Curtis-Tits group. It turns out that the group $G^\tau$ has some very interesting natural quotients and that its action on certain Clifford-like algebras is related to phenomena in quantum physics. The aim of this paper is to prove that the group $G^{\tau}$ has Kazhdan's property (T). This implies that the finite quotients of this group form families of expanders. Before doing this we record the following lemma. \begin{lem}\label{lem:5-generation} The group $G^\tau$ can be generated by a symmetric set $S$ of size at most $5$. \end{lem} \noindent{\bf Proof}\hspace{7pt} Consider the element $s\in \SL_{2n}(\mathbb{F}_q[t,t^{-1}])$ transforming the basis above as follows: for each $i=1, \dots, n-1$, $e_i^s=e_{i+1}$ and $f_i^s=f_{i+1}$, while $e_n^s = f_1$ and $f_n^s=t^{-1}e_{1}$. It is not too hard to see that $s$ is in fact an element of $G^\tau$. Moreover, consider the subgroup $$L_0=\{\diag(A, I_{n-2}, {}^tA^{-1}, I_{n-2})\mid A\in \SL_2(\mathbb{F}_q)\}.$$ It follows from~\cite{BloHof2009c} that $G^\tau$ is generated by $s$ and $L_0$. Now, since $\mathbb{F}_q$ is finite, $L_0$ is generated by an involution $x$ and another element $y$. Hence we can take $S=\{x,y,y^{-1}, s, s^{-1}\}$. \qed \ \medskip \newline Let $\bar{\mathbb{F}}_q$ denote the algebraic closure of $\mathbb{F}_q$. For any $a\in \bar{\mathbb{F}}_q^*$, consider the specialization map $\epsilon_a\colon\mathbb{F}_q[t,t^{-1}]\rightarrow \bar{\mathbb{F}}_q$ given by $\epsilon_a(f)=f(a)$. The map induces a homomorphism $\epsilon_a\colon\SL_{2n}(\mathbb{F}_q[t,t^{-1}]) \to \SL_{2n}(\mathbb{F}_q(a))$. In some instances the map commutes with the automorphism $\sigma$, so that one can define a map $\epsilon_a\colon G^\tau \to \SL_{2n}(\bar{\mathbb{F}}_q)$.
The most important specialization maps are given by $a=\pm1$ or $a=\zeta$, a $(q^s+1)$-st root of $1$, where $s$ is a positive integer. In case $a=\pm 1$, the induced automorphism $\sigma$ is trivial and, if $q>2$, the specialization maps yield surjections of $G^\tau$ onto $\Sp_{2n}(\mathbb{F}_q)$ and $\Omega^{+}_{2n}(\mathbb{F}_q)$. \begin{lem}\label{lem:Gtau onto SU} Suppose that $a\in \bar{\mathbb{F}}_q$ is a primitive $(q^{s}+1)$-st root of $1$, for some positive integer $s$. Then $\epsilon_a(G^\tau)=\SU_{2n}(q^s)$. \end{lem} \noindent{\bf Proof}\hspace{7pt} Define $\tilde V= V\otimes_{\mathbb{F}_q[t,t^{-1}]}\mathbb{F}_q(a)$ and let $\tilde \beta$ be the corresponding evaluation of $\beta$. We shall also denote by $\bar \lambda$ the image of $\lambda$ under the Galois automorphism given by $a \mapsto a^{-1}$. Define the transvection map $T_{v}(\lambda): \tilde V \to \tilde V$ by $T_{v}(\lambda)(x)=x+\lambda\tilde \beta (x,v) v$. Note that the group $\SU_{2n}(q^{s})$ is generated by the set $${\mathcal T}=\{T_{v}(\lambda) \mid \lambda\in \mathbb{F}_q(a)\mbox{ with }\bar\lambda + a\lambda =0, \mbox{ and } v \in \{\tilde e_1, \ldots, \tilde f_{n} \}\}$$ since the elements in $\{\langle T_{e_{i}}, T_{f_{i}}\rangle\mid i\in I\}$ generate a weak Phan system (see~\cite{BSh} for details). Therefore, if we can lift each map in ${\mathcal T}$ to $G^{\tau}$, Lemma~\ref{lem:Gtau onto SU} will be proved. We propose that for each $v \in \{\tilde e_1, \ldots, \tilde f_{n} \}$, the lift of $T_{v}(\lambda)$ be given by a ``transvection'' map $\Phi_{v}(x)=x+F\beta(x, v)v$, where $F\in \mathbb{F}_{q}[t,t^{-1}]$ satisfies $F(a)=\lambda$. This map is obviously in $\SL_{2n}(\mathbb{F}_{q}[t,t^{-1}])$, so the only thing one needs to check is that it leaves $\beta$ invariant. An immediate computation gives \begin{align*}\beta(x,y)-\beta(\Phi_{v}(x), \Phi_{v}(y)) &=\sigma( F)\beta(x,v)\sigma({\beta(y,v)})+ F\beta(x,v)\beta(v,y)\\ & = (\sigma(F) +t F)\beta(x,v)\sigma({\beta(y,v) })\end{align*} and so sufficient conditions are $F(a)=\lambda$ and $\sigma(F)+tF=0$. To find $F$ we shall need the following. Let $f_a\in \mathbb{F}_q[t]$ be the minimal (monic) polynomial for $a$. Then $\sigma(f_a)=t^{-2s}f_a$. Namely, note that $a$ and $a^{-1}$ are conjugate. Moreover, if $b$ is another root of $f_a$, then $b$ is a root of $x^{q^s+1}-1$, so it is a power of $a$; in particular $b^{-1}$ is also a root of $f_a$, and of course $b\ne b^{-1}$, since otherwise $f_a$ would not be irreducible. In conclusion, the roots of $f_a$ come in pairs $b, b^{-1}$. This means that $f_a(0)=1$. Now $\sigma( f_a)= f_a(t^{-1})=t^{-2s}f_a'$, where $f_a'$ is a monic irreducible polynomial that has the same roots as $f_a$, so $\sigma(f_a)=t^{-2s}f_a$. We now find $F$. Pick a polynomial $P\in \mathbb{F}_q[t]$ so that $P(a)=\lambda$. Since $\bar\lambda + a\lambda=0$, $a$ is a root of $\sigma(P)+tP$ and so $\sigma(P)+tP=f_aG$ for some $G\in \mathbb{F}_q[t, t^{-1}]$. Applying $\sigma$ shows that $\sigma(f_a)\sigma(G)=t^{-1}f_aG$, and we get the condition $\sigma(G)=t^{2s-1}G$. Writing $G=\sum _{i=-r}^{l}a_it^i$, the condition above gives that $r= 2s-1+l$ and $a_{-r+i}=a_{l-i}$ for each $i=1, \dots, l+r$. We now propose to find an element $H \in \mathbb{F}_q[t, t^{-1}]$ so that $\sigma(f_aH)+ tf_aH= f_aG$. Then $F= P-f_aH$ will have the property that $F(a)=\lambda$ and $\sigma(F)+tF=0$. The condition on $H$ is that $\sigma (H)t^{-2s}+tH=G$.
The conditions on $G$ imply that one choice for $H$ is $$H=t^{-l-2s}+ t^{-l-2s+1}+\dots + t^{-s-1}+ (a_{-s+1}-1)t^{-s}+ \dots + (a_l-1)t^{l-1}.$$ \qed \ \medskip \newline It follows from the next result that, in order to conclude that $G^\tau$ has property (T), it suffices to show that $G^\tau$ is a lattice in $\SL_{2n}(\mathbb{F}_q((t)))$. \begin{thm} \begin{enumerate} \item {\rm (Theorem 1.4.15 in~\cite{Bek08})} Let $K$ be a local field. The group $\SL_n (K)$ has Property (T) for any integer $n \ge 3$. \item {\rm (Theorem 1.7.1 in~\cite{Bek08})} If $G$ is a locally compact group and $H$ is a lattice in $G$, then $H$ has property (T) if and only if $G$ does. \end{enumerate} \eth \noindent To do this we use the methods of \cite{BOU00,GraHorMuh2011,GraMuh08}. The more general argument is briefly described in Remark 7.11 of \cite{GraHorMuh2011}. For convenience we state Lemmas 6.22 and 6.23 from \cite{BloHof2009c}: \begin{lem}\label{lem:equal distances} Suppose that $c_\varepsilon\in \Delta$ satisfies $\delta_*(c_\varepsilon,c_\varepsilon^\tau)=w$, let $i\in I$, and suppose that $\pi$ is the $i$-panel on $c_\varepsilon$. Then, \begin{enumerate} \item\label{lem:equal distances part a} There exists a word $u\in W$ such that $u(u^{-1})^\tau$ is a reduced expression for $w$. \item\label{lem:equal distances part b} If $l(s_iw)>l(w)$, then all chambers $d_\varepsilon\in \pi-\{c_\varepsilon\}$ except one satisfy $\delta_*(d_\varepsilon,d_\varepsilon^\tau)=w$. The remaining chamber $\check{c}_\varepsilon$ satisfies $\delta_*(\check{c}_\varepsilon,(\check{c}_\varepsilon)^\tau)=s_iws_{\tau(i)}$. \item\label{lem:equal distances part c} If $l(s_i w)<l(w)$, then all chambers $d_\varepsilon\in \pi-\{c_\varepsilon\}$ satisfy $\delta_*(d_\varepsilon,d_\varepsilon^\tau)=s_i w s_{\tau(i)}$. \end{enumerate} \end{lem} For each $w\in W$ with $w^{-1}=w^\tau$, let $C_w=\{ c \in \Delta_+ \mid \delta_*(c,c^\tau)=w\}$. We now have the following strong version of Theorem 6.16 of~\cite{BloHof2009c}. \begin{lem}\label{lem:transtive on Cw} The group $G^\tau$ is transitive on $C_w$ for each $w\in W$ with $w^{-1}=w^\tau$. \end{lem} \noindent{\bf Proof}\hspace{7pt} For $w=1$, this is Theorem 6.16 from~\cite{BloHof2009c}. We now use induction on the length of $w$. To prove the induction step, let $c,d\in C_w$ and let $i\in I$ be such that $l(s_iw)<l(w)$. Let $c',d'\in \Delta_+$ be $i$-adjacent to $c$ and $d$ respectively. Then, by Lemma~\ref{lem:equal distances} part~\ref{lem:equal distances part c}, $c',d'\in C_{s_iws_{\tau(i)}}$ and $l(s_iws_{\tau(i)})<l(w)$. By induction there is a $g\in G^\tau$ with $g(c')=d'$. By Lemma~\ref{lem:equal distances} part~\ref{lem:equal distances part b}, $g(c)=d$. \qed \begin{cor}\label{cor gtau has T} For $n>1$, $G^\tau$ is a non-uniform lattice in $\SL_{2n}(\mathbb{F}_q((t)))$ with property (T). \end{cor} \noindent{\bf Proof}\hspace{7pt} We apply Lemma 1.4.2 of \cite{BOU00} to conclude that $G^{\tau}$ is a lattice. Since $\det\colon M_n(\mathbb{F}_q((t)))\to \mathbb{F}_q((t))$ is a continuous map between locally compact Hausdorff spaces, $\SL_n(\mathbb{F}_q((t)))$ is locally compact. The group $\SL_n(\mathbb{F}_q((t)))$ acts cocompactly on $\Delta$ because it is chamber transitive, and it acts properly discontinuously because the residues of $\Delta$ are finite. Moreover, $G^\tau$ is discrete because $G^\tau\cap U_\varepsilon=1$, where $U_\varepsilon$ is the unipotent radical of the Borel group of $\SL_{2n}(\mathbb{F}_q[t,t^{-1}])$ for its action on $\Delta_\varepsilon$ ($\varepsilon=+,-$).
By Lemma~\ref{lem:transtive on Cw}, $G^\tau$ acts transitively on the sets $C_w$, and these partition $\Delta_+$. For each $u\in W$, pick an element $c_u$ with $\delta_*(c_u,c_u^\tau)=u(u^{-1})^\tau$ to parametrise the orbits of $G^\tau$ on $\Delta_+$. It now follows from Lemma 1.4.2 of \cite{BOU00} that $G^{\tau}$ is a lattice if and only if the series $\sum_{u\in W} |\Stab_{G^\tau}(c_u)|^{-1}$ converges. By Lemma~\ref{lem:equal distances} there are exactly $q^{l(u)}$ elements of $C_{1}$ at distance $u$ from $c_u$, and for each of these $q^{l(u)}$ chambers $d$, $c_u$ is the unique chamber $c\in C_{u(u^{-1})^\tau}$ such that $\delta(c,d)=u$. Therefore the group $\Stab_{G^\tau}(c_u)$ acts transitively on these $q^{l(u)}$ chambers and $|\Stab_{G^\tau}(c_u)|^{-1}\le (q^{-1})^{l(u)}$. Thus it suffices to show that the Poincar\'e series of $W$, defined as $W(x)=\sum_{u\in W} x^{l(u)}$, converges for $x=q^{-1}$. It follows from a result of Bott~\cite{Bot1956} that $W(x)=\frac{1-x^{n+1}}{(1-x)^{n+1}}$, which clearly converges for small $x$. That $G^\tau$ is non-uniform follows from~\cite[1.5.8]{BasLub2001}, since $\SL_{2n}(\mathbb{F}_q((t)))$ is transitive on $\Delta_+$ and $G^\tau$ has infinitely many orbits.\qed \section{The expanders} \begin{defn} Let $X=(V,E)$ be a finite $k$-regular graph with $n$ vertices. We say that $X$ is an $(n,k,c)$ expander if for any subset $A\subset V$, $|\partial A|\ge c(1- \frac{|A|}{n})|A|$. Here $\partial A =\{v \in V \mid d(v, A)=1\}$. \end{defn} \begin{thm}\label{margulis}{\rm (Margulis \cite{Mar73})} Let $\Gamma$ be a finitely generated group that has property (T). Let ${\mathcal L}$ be a family of finite index normal subgroups of $\Gamma$ and let $S=S^{-1}$ be a finite symmetric set of generators for $\Gamma$. Then the family $\{X(\Gamma/N, S)\mid N \in {\mathcal L}\}$ of Cayley graphs of the finite quotients of $\Gamma$ with respect to the image of $S$ is a family of $(n,k,c)$ expanders for $n=|\Gamma/N|$, $k=|S|$ and some fixed $c>0$. \eth \smallskip\noindent Lemma~\ref{lem:Gtau onto SU} therefore has the following consequence. \begin{cor} \label{cor:expanders} Let $n>1$. If $S$ is a symmetric generating set for $G^\tau$, then the family of Cayley graphs $\{X(\SU_{2n}(q^s), S)\mid s\ge 1\}$ is an expanding family. \end{cor} \begin{prop}\label{prop:almost always non-trivial images} The image of any non-trivial $g\in G^\tau$ in $\SU_{2n}(q^s)$ is non-trivial for all but finitely many $s$. \end{prop} \noindent{\bf Proof}\hspace{7pt} Suppose that the image of $g\in G^\tau$ in $\SU_{2n}(q^s)$ is trivial for infinitely many $s$. Then $\epsilon_{a}(g)=I_{2n}\in \SU_{2n}(\mathbb{F}_q(a))$ for infinitely many $a$. In particular, if $g_{ij}(t)$ is any entry of $g$, then $g_{ij}(a)=\delta_{ij}$ for infinitely many $a$. But $g_{ij}\in \mathbb{F}_q[t,t^{-1}]$, so $g=I_{2n}$. \qed \ \medskip \newline Finally, Lemma~\ref{lem:5-generation}, Lemma~\ref{lem:Gtau onto SU}, and Corollary~\ref{cor:expanders} prove Theorem~\ref{maintheorem1}.
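The combinatorial expansion condition in the definition above can be verified by brute force on small examples. The following sketch (illustrative Python only, and feasible only for tiny graphs) builds the Cayley graph of a cyclic group with a symmetric generating set and computes the best constant $c$ such that $|\partial A|\ge c(1-\frac{|A|}{n})|A|$ for all subsets $A$; it merely illustrates the definition, since a single small graph is of course not an expanding family.
\begin{verbatim}
from itertools import combinations

def cayley_cyclic(n, gens):
    """Adjacency sets of the Cayley graph X(Z_n, gens), gens symmetric."""
    return {v: {(v + g) % n for g in gens} for v in range(n)}

def expansion_constant(adj):
    """Largest c with |dA| >= c(1 - |A|/n)|A| for every nonempty proper
    subset A, where dA = {v outside A at distance 1 from A}.
    Exponential in n: tiny graphs only."""
    verts = list(adj)
    n = len(verts)
    best = float("inf")
    for k in range(1, n):
        for A in combinations(verts, k):
            A = set(A)
            dA = {v for v in verts if v not in A and adj[v] & A}
            best = min(best, len(dA) / ((1 - k / n) * k))
    return best

# Example: the 4-regular Cayley graph X(Z_13, {+-1, +-5}).
adj = cayley_cyclic(13, [1, -1, 5, -5])
print(expansion_constant(adj))
\end{verbatim}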
\section{Introduction}\label{sec:intro} In the current paradigm of structure formation, the concordance $\Lambda$CDM scenario, the dark matter halos that host galaxy systems are assembled hierarchically, through the merger and accretion of smaller subunits. One relic of this process is the presence of {\it substructure}, which consists of the self-bound cores of accreted subsystems that have so far escaped full disruption in the tidal field of the main halo (Klypin et al 1999; Moore et al 1999). Although substructure halos (referred to hereafter as ``subhalos'', for short) typically make up only a small fraction ($5$ to $10\%$) of the total mass of the system, they chart the innermost regions of accreted subsystems, and are thus appealing tracers of the location and kinematics of the {\it galaxies} that they may have hosted. Substructure is thus a valuable tool for studying galaxies embedded in the potential of a much larger system, such as satellite galaxies orbiting a primary, or individual galaxies orbiting within a group or cluster of galaxies. This realization has prompted a number of studies over the past few years, both analytical and numerical, aimed at characterizing the main properties of subhalos, such as their mass function, spatial distribution, and kinematics (e.g. Ghigna et al 1998, 1999; Moore et al 1999; Taylor \& Babul 2005a,b; Benson 2005; Gao et al 2004; Diemand et al 2007a,b). Consensus has been slowly but steadily emerging on these issues. For example, the {\it mass function} of subhalos has been found to be rather steep, $dN/dM \propto M_{\rm sub}^{-1.9}$ or steeper, implying that the subhalo population is dominated in number by low-mass systems but that most of the substructure mass resides with the few most massive subhalos (Springel et al 2001; Helmi, White \& Springel 2002; Gao et al 2004). Confirmation of this comes from the fact that the total fraction of mass in subhalos is rather low (typically below 10\%) even in the highest resolution simulations published so far (although see Diemand et al 2007a,b for a differing view). Subhalos have also been found to be {\it spatially biased} relative to the smooth dark matter halo where they are embedded, avoiding in general the innermost regions. Furthermore, the number density profile of the subhalo population also differs markedly from that of galaxies in clusters, and possibly from the radial distribution of luminous satellites around the Milky Way (Kravtsov et al 2004; Willman et al 2004; Madau et al 2008). This precludes identifying directly the population of ``surviving'' subhalos with galaxies in clusters, and highlights the need either for more sophisticated numerical modeling techniques, or for pairing up the N-body results with semi-analytic modeling in order to trace more faithfully the galaxy population (Springel et al 2001; De Lucia et al 2004; Gao et al 2004; Croton et al 2006). One intriguing result of all these studies has been the remarkably weak dependence of the properties of substructure on subhalo mass. Gao et al (2004) and Diemand, Moore \& Stadel (2004), for example, find that the radial distribution of subhalos is largely independent of their self-bound mass. This is surprising given the strong mass dependence expected for the processes that dictate the evolution of subhalos within the main halo, such as dynamical friction and tidal stripping.
Although efficient mixing within the potential of the main halo is a possibility, an alternative explanation has been advanced by Kravtsov, Gnedin \& Klypin (2004). These authors argue that the {\it present-day} mass of a subhalo may be a poor indicator of the {\it original} mass of the system, which may have been substantially larger at the time of accretion, and use this idea to explain how the faintest dwarf companions of the Milky Way were able to build up a sizable stellar mass through several episodes of star formation despite their shallow present-day potential wells. The same idea was also adopted by Libeskind et al (2007) as a possible reason for the peculiar spatial alignment of satellites around the Milky Way (Lynden-Bell 1976, 1982; Kunkel \& Demers 1976; Kroupa, Thies \& Boily 2005). We revisit these issues here with the aid of a suite of high-resolution N-body simulations of galaxy-sized halos. We extend prior work by carefully tracking the orbits of surviving subhalos back in time. This allows us to select a complete set of subhalos {\it physically associated} with the main halo, rather than only the ones that happen to be within the virial radius at a particular time. As we discuss below, a large fraction of the associated subhalo population are on unorthodox orbits that take them well beyond the virial radius, a result with important implications for studies of satellite galaxies and of halos clustered around massive systems. The plan of this paper is as follows. We introduce briefly the numerical simulations in \S~\ref{sec:numexp} and describe our subhalo detection algorithm and tracking method in \S~\ref{sec:anal}. Our main results are presented in \S~\ref{sec:res}: we begin by exploring the subhalo spatial distribution and kinematics, as well as their dependence on mass, and discuss the consequences of our findings for the subhalo mass function. We end with a brief summary and discussion of possible implications and future work in \S~\ref{sec:conc}. \section{The Numerical Simulations} \label{sec:numexp} \subsection{The Cosmological Model} All simulations reported here adopt the concordance $\Lambda$CDM model, with parameters chosen to match the combined analysis of the first-year WMAP data release (Spergel et al 2003) and the 2dF Galaxy Redshift Survey (Colless et al 2001). The chosen cosmological parameters are $\Omega_{\rm m}=\Omega_{\rm dm}+\Omega_{\rm b}=0.25$, $\Omega_{\rm b}=0.045$, $h=0.73$, $\Omega_{\rm \Lambda} = 0.75$, $n=1$, and $\sigma_8=0.9$. Here $\Omega$ denotes the present-day contribution of each component to the matter-energy density of the Universe, expressed in units of the critical density for closure, $\rho_{\rm crit}=3H^2/8\pi G$; $n$ is the spectral index of the primordial density fluctuations, and $\sigma_8$ is the rms linear mass fluctuation in spheres of radius $8 \, h^{-1}$ Mpc at $z=0$. Hubble's ``constant'' is given by $H(z)$ and parameterized at $z=0$ by $H(z=0)=H_0=100\, h$ km s$^{-1}$ Mpc$^{-1}$. \subsection{The Runs} Our analysis is based on a suite of $5$ high-resolution simulations of the formation of galaxy-sized $\Lambda$CDM halos. The simulations target halos of virial mass\footnote{We define the virial mass of a halo, $M_{200}$, as that contained within a sphere of mean density $200\times \rho_{\rm crit}$. The virial mass implicitly defines the virial radius, $r_{200}$, and virial velocity, $V_{200}=\sqrt{GM_{200}/r_{200}}$, of a halo.
We note that other definitions of ``virial radius'' have been used in the literature; the most popular of the alternatives adopts a density contrast (relative to critical) of $\Delta\approx 178 \, \Omega_{\rm m}^{0.45}\sim 100$ (for our adopted cosmological parameters, see Eke et al 1996). We shall refer to these alternative choices, where appropriate, with a subscript indicating the value of $\Delta$; i.e., $r_{100}$ is the virial radius obtained assuming $\Delta=100$.}, $M_{200} \sim 10^{12}\, h^{-1} \, M_{\odot}$, and have at $z=0$ between $3$ and $5$ million particles within the virial radius, $r_{200}$. Each halo was selected at random from a list of candidates compiled from a cosmological N-body simulation of a large ($100\, h^{-1}$ Mpc) periodic box and resimulated individually at higher resolution using the technique described in detail by Power et al (2003). We imposed a mild isolation criterion (that no neighbors with mass exceeding $5\times 10^{11} h^{-1} M_\odot$ be found within $1 h^{-1}$ Mpc at $z=0$) in order to exclude systems formed in the periphery of much larger groups or clusters. The simulations were run with {\tt Gadget2}, a massively-parallel cosmological N-body code (Springel 2005). Particle pairwise interactions were softened using the ``optimal'' gravitational softening length scale suggested by Power et al (2003); i.e., a spline lengthscale $h_{\rm s}=1.4 \epsilon_{\rm G} \approx 4\, r_{200}/\sqrt{N_{200}}$, kept fixed in comoving coordinates. Numerical details of each run are listed in Table~\ref{tab:numexp}. \section{The Analysis} \label{sec:anal} \subsection{Substructure Finding} \label{ssec:subfind} We use {\tt SUBFIND} (Springel et al 2001) in order to identify self-bound structures in N-body simulations. {\tt SUBFIND} finds substructure within friends-of-friends (FOF; Davis et al 1985) associations by locating overdense regions within each FOF halo and identifying the bound subset of particles associated with each overdensity. {\tt SUBFIND} also works recursively and its output readily identifies ``subhalos within subhalos'', thus characterizing fully the various levels of the hierarchy of substructure present within a given FOF halo. We retain for our catalogue all {\tt SUBFIND} subhalos with more than $20$ particles. The main output of {\tt SUBFIND} is a list of subhalos within each FOF halo, together with their structural properties. For the purposes of this paper, we shall focus on: (i) the subhalo self-bound mass, $M_{\rm sub}$; (ii) the peak of its circular velocity profile (characterized by $r_{\rm max}$ and $V_{\rm max}$); and (iii) the position of the subhalo center, which we identify with the particle with minimum gravitational potential energy. We have run {\tt SUBFIND} on all $100$ snapshots (equally spaced in scale factor, $a$) of each of our runs, and are therefore able to track in detail the evolution of individual subsystems and their particle members. \subsection{Substructure Tracking} \label{ssec:subtrack} Our analysis focuses on all {\it surviving} subhalos at $z=0$ and relies heavily on tracking accurately their accretion history. To this aim, we trace each subhalo backwards in time by identifying the central particle at $z=0$ and searching for the group it belongs to in the immediately preceding snapshot. A new central particle is then selected and the procedure is iterated backwards in time until $z=9$, the earliest time we consider in the analysis. 
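\smallskip\noindent A minimal sketch of this backward-tracking loop is given below (purely illustrative; the two lookup tables and all names are ours, not {\tt SUBFIND} output). It assumes the snapshot catalogues have been reorganized into per-snapshot dictionaries mapping particle IDs to subhalo IDs, and subhalo IDs to their most-bound (central) particle.
\begin{verbatim}
def track_subhalo(pid0, part_to_sub, sub_to_centre):
    # pid0            : central particle of a z=0 subhalo
    # part_to_sub[s]  : dict particle id -> subhalo id at snapshot s,
    #                   with s = 0, 1, 2, ... ordered from z=0 backwards
    # sub_to_centre[s]: dict subhalo id -> id of its most-bound particle
    # Returns [(snapshot, subhalo id), ...] along the recovered track.
    track, pid = [], pid0
    for s in range(len(part_to_sub)):
        sub = part_to_sub[s].get(pid)
        if sub is None:
            # Subhalo temporarily missing from the catalogue (e.g. it
            # fell below the 20-particle limit, or its density contrast
            # is too low near the centre of a more massive system):
            # keep the same tracer particle and search earlier snapshots.
            continue
        track.append((s, sub))
        pid = sub_to_centre[s][sub]   # re-centre on the progenitor
    return track
\end{verbatim}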
This procedure leads in general to a well-defined evolutionary track for each subhalo identified at $z=0$. When no subhalo is found to contain a subhalo's central particle in the immediately preceding snapshot, the search is continued at earlier times until either a progenitor subhalo is found or $z=9$ is reached. This is necessary because a subhalo may temporarily disappear from the catalogue, typically at times when it falls below the minimum particle number or else when it is passing close to the center of a more massive system and its density contrast is too low to be recognized by {\tt SUBFIND}. Our procedure overcomes this difficulty and in most cases recovers the subhalo at an earlier time. We note that these complications are a fairly common occurrence in the analysis procedure, and we have gone to great lengths to make sure that these instances are properly identified and dealt with when constructing our subhalo catalogue and their accretion histories. The tracking procedure described above defines a unique trajectory for each subhalo identified at $z=0$. This trajectory may be used to verify whether a subhalo has, at any time in the past, been accreted within the (evolving) virial radius of the main halo. If this is the case, we record the time it first crosses $r_{200}(z)$ as the ``accretion redshift'', $z_{\rm acc}$, and label the subhalo as {\it associated} with the main system. Analogously, we identify a set of {\it associated dark matter} particles by compiling a list of all particles that were at some time within the virial radius of the main halo but are not attached to any substructure at $z=0$. On the other hand, halos that have {\it never} been inside the virial radius of the main halo will be referred to as ``field'' or ``infalling'' halos. Using the subhalo trajectories, we compute and record a few further quantities of interest for each subhalo; namely, \begin{itemize} \item its ``turnaround'' distance, $r_{\rm ta}$, defined as the {\it maximum separation} between a subhalo and the center of the main progenitor before $z=z_{\rm acc}$ (for associated subhalos) or before $z=0$ (for field subhalos); \item the structural properties of associated subhalos at $z=0$ and at accretion time, such as mass and peak circular velocity; \item an apocentric distance, $r_{\rm apo}$, defined as the apocenter of its orbit computed at $z=0$ using the subhalo's instantaneous kinetic energy and orbital angular momentum, together with the potential of the main halo.\footnote{We have checked that our results are insensitive to the triaxial nature of the halo by recomputing $r_{\rm apo}$ using the potential along each of the principal axes of the halo's mass distribution rather than the spherical average. This leads to typical variations of less than $\sim 20\%$ in $r_{\rm apo}$.} \end{itemize} Subhalo quantities measured at accretion time will be referred to by using the sub/superscript ``{\rm acc}''; for example, $V_{\rm max}^{\rm acc}$ refers to the peak circular velocity of a subhalo at $z=z_{\rm acc}$. Quantities quoted without superscript are assumed to be measured at $z=0$ unless otherwise specified; e.g., $V_{\rm max}=V_{\rm max}(z=0)$. \begin{figure} [ht] \begin{center} \epsfig{file=./f1.ps,height=16.5 cm,width=8.5 cm,angle=0} \caption[] {\footnotesize{{{\it Upper panel:} Radial velocity versus distance to the main halo center for all subhalos within $5\times r_{200}$ in our simulations.
Velocities and distances are normalized to the virial velocity, $V_{200}$, and virial radius, $r_{200}$, of each host. All ``associated'' halos are shown in color; subhalos on first infall are shown in black. Different colors are used according to the peak circular velocity of the subhalo at the time of accretion. Blue denotes the quartile with smallest $V_{\rm max}^{\rm acc}$, red those with largest $V_{\rm max}^{\rm acc}$. Green denotes the rest of the associated subhalo population. Upward vertical arrows of matching color indicate the half-number radius for the various subhalo populations. A shorter black arrow marks the half-number radius for ``associated'' dark matter particles. We find that $65\%$ of subhalos in the range $r_{200} < r < 2 \, r_{200}$ are actually ``associated'' and have thus already been within the host virial radius in the past. Roughly one third of subhalos in the range $2 \, r_{200} < r < 3 \, r_{200}$ are also physically ``associated'' with the main halo. The upper and lower bounding curves denote the escape velocity for each of the five simulated halos. {\it Lower panel:} Radial distribution of subhalos. Color key is the same as in the upper panel.}}} \label{fig:VradVersusRad} \end{center} \end{figure} \section{Results} \label{sec:res} The basic properties of our simulated halos at $z=0$ are presented in Table~\ref{tab:numexp}. Here, $\epsilon_{\rm G}(=h_{\rm s}/1.4)$ is the {\tt Gadget} gravitational softening input parameter, and $M_{200}$, $r_{200}$, and $N_{200}$ are, respectively, the halo virial mass, radius, and number of particles within $r_{200}$. Table~\ref{tab:numexp} also lists the peak of the circular velocity of the main halo, $V_{\rm max}$, and its location, $r_{\rm max}$; the total number of ``associated'' subhalos, as well as the number of those found within various characteristic radii at $z=0$. \begin{figure} [ht] \begin{center} \epsfig{file=./f2.ps,height=8.5 cm,width=8.5 cm,angle=0} \caption[] {\footnotesize{{Turnaround radius versus apocentric distance at $z=0$ for all {\it associated} subhalos in our simulations. The turnaround distance is the maximum distance from the main halo before accretion. Subhalos on ``traditional'' orbits are expected to have $r_{\rm apo} < r_{\rm ta}$ and, thus, to be to the left of the $1$:$1$ curve in this plot. Subhalos near the $1$:$1$ line have $r_{\rm apo} \approx r_{\rm ta}$ and are therefore on orbits which have not been decelerated substantially since turnaround. Subhalos with $r_{\rm apo} > r_{\rm ta}$ are on unorthodox orbits and they have {\it gained} orbital energy during or after accretion. The blue symbols in the panel highlight subhalos on extreme orbits that will take them more than $\sim 2.5$ times farther than their turnaround radius. The fraction of associated subhalos and associated dark matter particles in each region of the plot is given in the legends.}}} \label{fig:RapoRturn} \end{center} \end{figure} \subsection{Subhalos beyond the virial radius} \label{ssec:outersubh} One surprise in Table~\ref{tab:numexp} is that the number of ``associated'' subhalos exceeds by a factor of $\sim 2$ the total number of subhalos identified within $r_{200}$. This result is also illustrated in Fig.~\ref{fig:VradVersusRad}, where we show, at $z=0$, the distance from the main halo center versus radial velocity for all subhalos identified in our simulations. Distances and velocities have been scaled to the virial quantities of each primary halo.
Colored dots are used to denote ``associated'' subhalos, black symbols for ``field'' halos. Different colors correspond to different subhalo masses, as measured by the peak circular velocity at accretion time (in units of the present-day primary halo virial velocity, $V_{200}$): red is used for subhalos with $V_{\rm max}^{\rm acc}\ge 0.72 \, V_{200}$, blue for those with $V_{\rm max}^{\rm acc}\le 0.038 \, V_{200}$, green for the rest. Note that the distribution of associated subhalos extends well past $\sim 3\, r_{200}$; indeed, a few associated subsystems are found at $r\sim 4\, r_{200}$ moving outwards with radial velocity of order $V_r \sim V_{200}$. A careful search shows that there are actually several associated subhalos presently at distances larger than $\sim 5 \, r_{200}$. This result is unexpected in simple formation scenarios, such as the {\it spherical secondary infall model} (SSIM, for short). SSIM identifies at any time three distinct regions around a halo: (i) an inner ``virialized'' region where accreted mass shells have had time to cross their orbital paths; (ii) a surrounding ``infall'' region, where shells are still on first approach and have not yet crossed; and (iii) a still expanding outer envelope beyond the current turnaround radius (Fillmore \& Goldreich 1984; Bertschinger 1985; White et al 1993; Navarro \& White 1993). One of the premises of the secondary infall model is that the energy of a mass shell accreted into the halo is gradually reduced after its first pericentric passage until it reaches equilibrium. During this process, the apocentric distance of the shell is constantly reduced; for example, taking as a guide the SSIM self-similar solutions of Bertschinger (1985), the {\it second} apocenter of an accreted shell (the first would be its ``turnaround'' radius) is roughly $90\%$ of its turnaround distance, $r_{\rm ta}$, and the shell gradually settles to equilibrium, approaching a periodic orbit with $r_{\rm apo}\sim 0.8 \, r_{\rm ta}$. Thus, according to the SSIM, few, if any, associated subhalos are expected to populate the region outside $\sim 0.8 \, r_{\rm ta}\approx 1.6 \, r_{200}$. This is clearly at odds with the results shown in Fig.~\ref{fig:VradVersusRad} and Table~\ref{tab:numexp}: more than a {\it quarter} of all associated subhalos are found beyond $1.6 \ r_{200}$ at $z=0$! \begin{figure*} [ht] \begin{center} \epsfig{file=./f3.ps,height=15. cm,width=15. cm,angle=0} \caption[] {\footnotesize{{ Orbital trajectories of selected subhalos. Upper panels show the distance to the center of the main progenitor as a function of expansion factor. The top-left panel shows the trajectories of $4$ subhalos on ``extreme'' orbits (blue points in Fig.~\ref{fig:RapoRturn}). Note that all of these systems gain energy during their first pericentric approach to the main halo. The top-right panel illustrates that interactions occurring during the tidal dissociation of bound groups of subhalos are responsible for propelling some satellites onto extreme orbits. At pericentric approach, the tidal field of the main halo breaks apart the group, and redistributes each member onto orbits of varying energy. The most affected are, on average, the least massive members of the group, some of which are pushed onto orbits with extremely large apocenters. The dashed curve shows the growth of the virial radius of the most massive progenitor of the main halo.
Bottom panels show the radial velocity of the subhalos shown in the upper panels.}}} \label{fig:orbits} \end{center} \end{figure*} \begin{figure} [] \begin{center} \includegraphics[width=\linewidth,clip]{./f4.ps} \caption[] {\footnotesize{{Black dots show the position, at $z=0$, of particles belonging, at accretion time, to the most massive subhalo of the group shown in the right-hand panels of Fig.~\ref{fig:orbits}. This subhalo has been fully disrupted in the potential of the main halo. Large circles show the position of the center of mass of the other (surviving) subhalos in the group. Curves show the evolution of each of these subgroups since accretion. Note how the surviving subhalos align themselves with tidal streams stripped from the main subhalo during the disruption process. The ``ejection'' of subhalos is thus due to the same mechanism that leads to the formation of outgoing tidal tails in a merger event and should occur naturally during the tidal dissociation of any bound group of subhalos.}}} \label{fig:PhaseAcc} \end{center} \end{figure} \subsection{The orbits of associated subhalos} \label{ssec:subhorbs} The discrepancy between the simulation results and the naive expectation of the SSIM was pointed out by Balogh, Navarro \& Morris (2000), and confirmed by subsequent studies (Mamon et al 2004; Gill et al 2005; Diemand et al 2007), but its physical origin has not yet been conclusively pinned down. Associated subhalos found today beyond their turnaround radius have clearly evolved differently from the SSIM prescription, and it is instructive to study the way in which the difference arises. One possibility is that deviations from spherical symmetry during accretion might be responsible for the outlying associated subhalos. Accretion through the filamentary structure of the cosmic web surrounding the halo, for example, might result in a number of subhalos on orbits of large impact parameter that simply ``graze'' the main halo and are therefore not decelerated significantly after their first pericentric approach, as assumed in the secondary infall model. These subhalos would lose little orbital energy, and should presumably be today on orbits with apocentric distances of the order of their original turnaround radii. According to the analytic calculation of Mamon et al (2004), systems on such orbits may reach distances as far as $r_{\rm apo} \sim 2.3\, r_{200}$. We explore this in Fig.~\ref{fig:RapoRturn}, where we show the turnaround radius of each associated subhalo versus its apocentric distance estimated at $z=0$, both normalized to the virial radius of the main halo. Subhalos that have followed the traditional orbits expected from the SSIM should lie to the left of the $1$:$1$ curve in this panel. These, indeed, make up the bulk ($\sim 62\%$) of the associated population. Note as well that there are a number of subhalos near the $1$:$1$ line, whose orbits have not been decelerated since accretion into the main halo. These are objects that either are on their way to first pericentric passage or, as discussed in the paragraph above, have somehow evaded significant braking during accretion. More intriguingly, Fig.~\ref{fig:RapoRturn} also shows that there are a significant number of subhalos on decidedly unorthodox orbits, with apocenters exceeding their SSIM theoretical ``maximum''; i.e., $r_{\rm apo} > r_{\rm ta}$.
Indeed, $\sim 38\%$ of associated subhalos are on such orbits, and $\sim 1\%$ are on orbits so extreme that their apocenters exceed their original turnaround distance by more than a factor of $\sim 2.5$ (the latter are highlighted in blue in Fig.~\ref{fig:RapoRturn} if, in addition, $r_{\rm apo} > 2\, r_{200}$). The large fraction of systems in such peculiar orbits, where the subhalo has {\it gained} orbital energy since turnaround, indicates that deviations from spherical symmetry play a minor role in pushing subhalos beyond $r_{200}$, and suggests that another mechanism is at work. \subsection{Subhalo mass dependence of unorthodox orbits} \label{ssec:subhmdep} One clue to the mechanism responsible for pushing some subhalos onto highly energetic orbits is the dependence of the effect on the mass of the subhalo. This is illustrated in the bottom panel of Fig.~\ref{fig:VradVersusRad}, which shows that low-mass subhalos are the ones being preferentially pushed to the outskirts of the halo. Further clues result from inspecting individually the trajectories of some of the subhalos on extreme orbits. This is shown in the top-left panel of Fig.~\ref{fig:orbits}, where we show the orbits of a few of the associated subhalos with $r_{\rm apo} > 2.5 \, r_{\rm ta}$. Interestingly, all of these subhalos have very low mass at accretion ($V_{\rm max}^{\rm acc} \simless 0.08 V_{200}$) and acquire their ``boost'' in orbital energy during their {\it first} pericentric passage. The ``wiggles'' in their trajectories prior to pericenter betray the fact that they actually belong to a bound system of multiple subhalos accreted as a single unit (Sales et al 2007a; Li \& Helmi 2007). This is shown more clearly in the top-right panel of Fig.~\ref{fig:orbits}, where we show the trajectories of $5$ subhalos belonging to one such group. The mass of the group is concentrated in the most massive member (see legends in the figure), which is surrounded (prior to accretion) by $4$ bound satellites. The group contributes about $5\%$ of the main halo's mass at accretion time, $a_{\rm acc}=(1+z_{\rm acc})^{-1} \approx 0.65$. The group as a whole turns around at $a_{\rm ta}\approx 0.35$ and accretes on a ($r_{\rm per}$:$r_{\rm apo})=(1$:$10)$ orbit that reaches $r_{\rm per} \sim 0.25\, r_{200}$ at $a_{\rm per}\sim 0.69$. Adding to this evidence, we find that $\sim 95\%$ of subhalos with $r_{\rm apo} > 2\ r_{200}$ and $r_{\rm apo} > 2.5 \ r_{\rm ta}$ were each, at accretion, members of an FoF group with multiple subhalos. During pericentric passage, the group is dissociated by the tidal field of the main halo, and its $5$ members are flung onto orbits of widely different energy. The most massive object (single dot-dashed curve in the right panels of Fig.~\ref{fig:orbits}) follows a ``traditional'' orbit, rebounding to a second apocenter which is only $\sim 30\%$ of its turnaround distance. The rest evolve differently; the least massive subhalos, in particular, tend to {\it gain} energy during the disruption of the group and recede to a second apocenter well beyond the original turnaround. As anticipated by the work of Sales et al (2007b), this is clearly the result of energy re-distribution during the tidal dissociation of the group. The bottom panels in Fig.~\ref{fig:orbits}, which show the evolution of the radial velocity of each subhalo, confirm this suggestion. The least massive member of the group is, in this case, the least bound as well, judging from its excursions about the group's center of mass.
This subhalo (solid black line in the right panels of Fig.~\ref{fig:orbits}) happens to be approaching the group's orbital pericenter at about the same time as the group as a whole approaches the pericenter of its orbit. This coincidence in orbital phase allows the subhalo to draw energy from the interaction; the subhalo is thus propelled onto an orbit that will take it beyond three times its turnaround distance, or $\sim 6 \, r_{200}$. Although technically still bound, for all practical purposes this subhalo has been physically ejected from the system and might be easily mistaken for a system that has evolved in isolation. There are similarities between this ejection process and the findings of early N-body simulations, which showed that a small but sizable fraction of particles are generally ejected during the collapse of a dynamically cold N-body system (see, e.g., van Albada 1982). The latter occur as small inhomogeneities are amplified by the collapse, allowing for substantial energy re-distribution between particles as the inhomogeneities are erased during the virialization of the system. In a similar manner, the tidal dissociation of bound groups of subhalos leads to the ejection of some of the group members. The main difference is that, in this case, no major fluctuations in the gravitational potential of the main system occur. Indeed, in the case shown in the right-hand panels of Fig.~\ref{fig:orbits} the main halo accretes only $\sim 5\%$ of its current mass and its potential changes little in the process. A more intuitive illustration is perhaps provided by Fig.~\ref{fig:PhaseAcc}, where we show, in the $r$-$V_{\rm rad}$ plane and at $z=0$, the location of the same accreted group of subhalos. Black dots indicate particles belonging to the main subhalo at the time of accretion. Large circles mark the location of the center of mass of each surviving subhalo, and the curves delineate their past evolution in the $r$-$V_{\rm rad}$ plane. The three outermost subhalos track closely a stream of particles formerly belonging to the main subhalo: the ``ejected'' subhalos are clearly part of the outgoing ``tidal tail'' stripped from the system during first approach. The origin of subhalos on extreme orbits is thus the same as that of particles at the tip of the outgoing tidal tails during a merger, and it should therefore be a common occurrence during the accretion of any bound group of subhalos. \begin{figure} [] \begin{center} \includegraphics[width=\linewidth,clip]{./f5.ps} \caption[] {\footnotesize{{The ratio of apocentric radius (estimated at $z=0$) to turnaround distance as a function of the peak circular velocity, $V_{\rm max}$, of a subhalo. Two estimates of $V_{\rm max}$ are used for each subhalo, one measured at accretion time and another at $z=0$. Symbols correspond to the median of the distribution, shaded areas encompass 25\% of the distribution around the median, and the extremes of the error bars correspond to the 25th and 75th centiles. Note that only fairly massive associated subhalos are today on orbits substantially more bound than when they turned around. The median apocentric radius of low-mass subhalos is of the order of the virial radius, indicating that about {\it half} of all associated subhalos spend a substantial fraction of their orbital period outside $r_{200}$.
Note that the effect depends only weakly on $V_{\rm max}$ below a certain threshold; this presumably indicates that, below a certain mass limit, subhalos behave like test particles in the potential of the main halo.}}} \label{fig:RaRtVmax} \end{center} \end{figure} It is also important to point out that not all low-mass subhalos are affected equally. For example, despite being of comparable mass to the ejected object, one of the low mass members of the group ends up on an orbit almost as tightly bound as the main subhalo (red triple dot-dashed curve in Fig.~\ref{fig:orbits}). This shows that the orbital fate of a subhalo is mainly determined by its orbital phase within the accreting group at the time of accretion. Depending on this, subhalos may either {\it lose} or {\it gain} orbital energy during the interaction. Low mass halos are, however, the ones preferentially ``ejected'' or placed on high-energy orbits through this process (see Fig.~\ref{fig:VradVersusRad} and Fig.~\ref{fig:NrhoFit}). This is because low-mass members of accreting groups will have orbits of larger amplitude about the group's center of mass, enhancing the probability of capturing orbital energy when their orbits within the group are in phase with the orbit of the group within the main halo. In turn, this enhances the survival probability of low mass systems by placing them on orbits where they spend extended periods in the periphery of the main halo, outside the region where tidal fields may effectively strip and disrupt them. The combination of these two effects (energy gain and enhanced survival likelihood) leads to a strong mass dependence of the orbital properties of associated subhalos at $z=0$. This is illustrated in Fig.~\ref{fig:RaRtVmax}, where we show the ratio of apocenter (estimated at $z=0$) to turnaround distance as a function of subhalo peak circular velocity. This figure shows clearly that the most massive subhalos are found today in orbits with apocentric distances much smaller than their turnaround distances: halos with $V_{\rm max}^{\rm acc}\sim 0.4 \, V_{200}$ (which corresponds to roughly $M_{\rm sub}^{\rm acc} \sim 0.1\, M_{200}$) have median apocenters of order half their turnaround distance. On the other hand, the median apocenter of associated subhalos with $V_{\rm max}^{\rm acc}\simless 0.1 V_{200}$ is of the order of the turnaround radius. Note as well that the $V_{\rm max}$ dependence is quite pronounced at the high-mass end but rather weak for low-mass subhalos. This presumably reflects the fact that, once a subhalo is small enough, it behaves more or less like a test particle in the potential of the main system. Finally, note that the mass dependence is less pronounced when the {\it present-day} subhalo $V_{\rm max}$ is used. This is because tidal stripping has a more pronounced effect on systems that orbit nearer the center of the main halo. The more massive the subhalo at accretion, the closer to the center it is drawn and the more substantial its mass loss, weakening the mass-dependent bias illustrated in Fig.~\ref{fig:RaRtVmax}. We shall see below that the mass dependence becomes even weaker when expressed in terms of the {\it present-day} subhalo mass. \begin{figure*} [ht] \begin{center} \includegraphics[width=\linewidth,clip]{./f6.ps} \caption[] {\footnotesize{{Number density profile of associated subhalos, after stacking the results of all $5$ simulations in our series.
The black solid symbols show the result for all subhalos; the other symbols correspond to various subsamples obtained after splitting by either $V_{\rm max}^{\rm acc}$ (left panel) or subhalo mass at $z=0$ (right-hand panel). Details on the velocity and mass range for each subsample are given in the legend. Solid lines through each curve correspond to the best fits obtained with eq.~\ref{eq:rhoalpha}. The parameters of each fit are listed in Table~\ref{tab:nrho}. Lines without symbols show the dark matter density profile. Note that the spatial distribution of subhalos depends sensitively on subhalo mass when measured by $V_{\rm max}^{\rm acc}$, but that, in agreement with prior work, the mass bias essentially disappears when adopting $M_{\rm sub}$ to split the sample. See text for further discussion.}}} \label{fig:NrhoFit} \end{center} \end{figure*} \begin{figure*} [ht] \begin{center} \includegraphics[width=\linewidth,clip]{./f7.ps} \caption[] {\footnotesize{{Radial velocity dispersion and anisotropy profiles for dark matter (thin black lines) and associated subhalos (thick colored lines). Symbols are described in the legend and are the same as in Fig.~\ref{fig:NrhoFit}. Note that the mass-dependent bias shown in Fig.~\ref{fig:NrhoFit} is also reflected in the subhalo kinematics: low mass subhalos tend to have higher velocity dispersions than their high-mass counterparts. This bias is clearer when measuring subhalo mass by the peak circular velocity at accretion time, $V_{\rm max}^{\rm acc}$, rather than by the self-bound mass at $z=0$, $M_{\rm sub}$. Note as well that subhalos tend to be on orbits less radially biased than the dark matter, especially near the center. This is presumably because subhalos on tangentially-biased orbits avoid the innermost regions of the main halo, thus enhancing their survival probability.}}} \label{fig:VdispProf} \end{center} \end{figure*} \subsection{Subhalo spatial distribution} \label{ssec:subhrdist} The number density profile of all associated subhalos is shown by the solid (black) curve in Fig.~\ref{fig:NrhoFit}. The profile may be approximated rather accurately by the same empirical formula introduced by Navarro et al (2004) to describe the mass profile of CDM halos. This profile is characterized by a power-law dependence on radius of the logarithmic slope of the density, $d\log\rho/d\log r \propto r^{\alpha}$, which implies a density profile of the form, \begin{equation} \ln (n(r)/n_{-2}) = -(2/\alpha) [(r/r_{-2})^\alpha -1]. \label{eq:rhoalpha} \end{equation} This density law was first introduced by Einasto (1965), who used it to describe the distribution of old stars within the Milky Way. For convenience, we will refer to it as the Einasto profile. The scaling parameters $n_{-2}$ and $r_{-2}$ may also be expressed in terms of the {\it central} value of the density, $n_0=n(r=0)=\exp (2/\alpha)\, n_{-2}$, and of the radius containing {\it half} of the associated subhalos, $r_{\rm h}$. We list in Table~\ref{tab:nrho} the parameters obtained by fitting eq.~\ref{eq:rhoalpha} to the subhalo number density profiles. (Note that the units used for $n_0$ are arbitrary, but they are consistent, in a relative sense, for the various subhalo populations.) As discussed by Navarro et al (2004), Merritt et al (2005, 2006), and more recently by Gao et al (2007), $\Lambda$CDM halo density profiles are well described by $\alpha_{\rm DM}$ in the range $\sim 0.15-0.3$.
This is in sharp contrast with the much larger values obtained for the subhalo number density profile ($\alpha_{\rm sub}\sim 1.0$; i.e., the 3D radial distribution of subhalos is approximately ``exponential''), and quantifies the well-established spatial bias between the subhalo population and the dark matter mass profile of the main halo. The larger values of $\alpha$ characterizing the subhalo spatial distribution imply a large, nearly constant-density ``core'' in their profile, in contrast with the ``cuspy'' density profile of the dark halo, shown as a solid line (without symbols) in Fig.~\ref{fig:NrhoFit}. The left panel in Fig.~\ref{fig:NrhoFit} shows that the subhalo spatial distribution depends sensitively on subhalo mass, as measured by the peak circular velocity at accretion, $V_{\rm max}^{\rm acc}$ (see also Nagai \& Kravtsov 2005; Faltenbacher \& Diemand 2006; Kuhlen et al 2007). The various colored profiles in this panel correspond to splitting the sample of subhalos into four groups, according to the value of $V_{\rm max}^{\rm acc}$ (normalized to $V_{200}$, the virial velocity of the main halo at $z=0$). The concentration increases systematically with $V_{\rm max}^{\rm acc}$; for example, half of the $\sim 150$ (surviving) subhalos with $V_{\rm max}^{\rm acc} > 0.17\, V_{200}$ are contained within $\sim 0.7 \, r_{200}$ at $z=0$. The corresponding radius for subhalos with $0.04<V_{\rm max}^{\rm acc}/V_{200} < 0.05$ is $\sim 1.1 \, r_{200}$ (see details in Table~\ref{tab:nrho}). Interestingly, the mass dependence of the subhalo number density profile essentially disappears when the {\it present-day} subhalo mass is used to split the sample. This is illustrated in the right-hand panels of Fig.~\ref{fig:NrhoFit}, which shows that the shape of the density profile of subhalos differing by up to two decades in mass is basically the {\it same}. This is in agreement with the earlier results of Gao et al (2004) and Diemand et al (2004), but indicates that the apparent mass-independence of the subhalo spatial distribution is {\it not} the result of efficient mixing within the main halo, but rather a somewhat fortuitous result of the cancellation of the prevailing trend by dynamical friction and tidal stripping. It is conceivable that numerical artifacts may also help to erase the dependence of $n_{\rm sub}(r)$ on present-day subhalo mass. Indeed, {\tt SUBFIND} (like every subhalo finder) will tend to assign masses to subhalos which depend slightly, but systematically, on their location within the main halo. The mass of subhalos near the center is more likely to be underestimated, and some subhalos may even be missed altogether if close enough to the central cusp. Splitting the sample by $V_{\rm max}^{\rm acc}$ minimizes such effects and allows for the subhalo mass bias to be properly established. \subsection{Velocity anisotropies} \label{ssec:subhvelan} The mass dependence of the subhalo spatial distribution discussed in the previous subsection is significant, but not very large, and thus is less clearly apparent in the subhalo kinematics, as shown in Fig.~\ref{fig:VdispProf}. The top panels of this figure show the radial velocity dispersion profile, $\sigma_r=\langle v_r^2 \rangle^{1/2}$, computed in spherical shells for the same subsamples discussed in Fig.~\ref{fig:NrhoFit}. The bottom panels show the anisotropy profile, defined as $\beta\equiv 1-(\sigma_{\theta}^2+\sigma_{\phi}^2)/2\sigma_r^2$.
The mean values of the velocity dispersion for each component are listed in Table~\ref{tab:nrho}. The solid lines without symbols in Fig.~\ref{fig:VdispProf} correspond to dark matter particles of the main halo, randomly sampled in order to match the total number of subhalos. As expected, the dark matter velocity distribution is mildly anisotropic, with a radial bias that increases outward and reaches a maximum near the virial radius. The radial velocity dispersion profile of the subhalo population follows closely that of the dark matter, although, as a whole, the subhalo population is kinematically biased relative to the dark halo. The effect, however, is barely detectable; we find $\sigma_r^{\rm sub}/\sigma_r^{\rm DM} \sim 0.98$. Our results thus confirm the earlier conclusions of Ghigna et al (1998), Gao et al (2004), and Diemand et al (2004) about the presence of a slight kinematic bias between subhalos and dark matter. Unlike the conclusions of Diemand et al, however, we find a significant discrepancy between the anisotropy profiles of the subhalo population and of the dark halo. As shown in the lower panels of Fig.~\ref{fig:VdispProf}, subhalos are on orbits less dominated by radial motions than the dark matter and, indeed, have a pronounced {\it tangential} bias near the center (i.e., for $r\simless 0.3\, r_{200}$). With hindsight, this is not entirely unexpected, since subhalos nearer the center are more likely to survive if they are on tangentially-biased orbits that keep them away from the innermost regions of the halo, where tidal effects are strongest. \begin{figure*} [ht] \begin{center} \includegraphics[width=\linewidth,clip]{./f8.ps} \caption[] {\footnotesize{{Associated subhalo mass ($M_{\rm sub}$) and peak circular velocity ($V_{\rm max}$) cumulative distributions (both quantities measured at $z=0$). Black lines correspond to all associated subhalos; red lines to subhalos identified within $r_{200}$. Thick lines in each panel denote the average of our $5$ simulations. Note that the number of associated subhalos exceeds by about a factor of $\sim 2$ the number of subhalos found within $r_{200}$. The residuals are computed relative to the subhalo population within the virial radius. }}} \label{fig:MsVmf} \end{center} \end{figure*} \subsection{Subhalo mass function} \label{ssec:subhmf} The large number of associated subhalos on high-energy orbits discussed above implies that subhalos within the virial radius are just a fraction of all subhalos physically influenced by the main halo. This is illustrated quantitatively in Fig.~\ref{fig:MsVmf}, where we show the cumulative peak circular velocity and mass functions of the associated subhalo population, as well as of subhalos identified within $r_{200}$. The thin red lines in this figure correspond to subhalos identified within $r_{200}$; black to the full sample of associated subhalos. Thick lines show the average results for the $5$ simulated halos considered here. The residuals shown in the small panels are computed relative to the average for the associated subhalo population, and show that, on average, the total number of associated subhalos exceeds that within $r_{200}$ by a factor of $\sim 2$. Fig.~\ref{fig:MsVmf} illustrates a number of interesting results. One is that, at the low mass end, the shape of the subhalo mass and velocity function is insensitive to the radius adopted for selection. Indeed, there is no obvious systematic trend with $V_{\rm max}$ or $M_{\rm sub}$ for $V_{\rm max}/V_{\rm max}^{\rm host} \simless 0.2$.
Below a certain threshold, low mass subhalos behave as ``test particles'' in the potential of the main halo and their radial distribution becomes independent of mass. This implies that attempts to determine the asymptotic slope of the subhalo mass function are unlikely to be compromised by selecting for analysis only halos within the virial radius, as is traditionally done. On the other hand, the subhalo mass function {\it shape} is substantially affected at the opposite end; although about half of all associated subhalos with $V_{\rm max} \simless 0.15 \, V_{\rm max}^{\rm host}$ are missing from within $r_{200}$, this fraction declines to one quarter for $V_{\rm max} \sim 0.28 \, V_{\rm max}^{\rm host}$, and to zero for $V_{\rm max}> 0.31 \, V_{\rm max}^{\rm host}$. As a result, in that mass range, the mass function of subhalos identified within $r_{200}$ is shallower than that of associated systems. This should have interesting consequences for semianalytic models of the luminosity function in galaxy groups and clusters, which traditionally assume that all accreted subhalos remain within the virial radius of the main system. \section{Summary and Discussion} \label{sec:conc} We have used a suite of cosmological N-body simulations to study the orbital properties of substructure halos (subhalos) in galaxy-sized cold dark matter halos. We extend prior work on the subject by considering the whole population of {\it associated} subhalos, defined as those that (i) survive as self-bound entities to $z=0$, and (ii) have at some time in the past been within the virial radius of the halo's main progenitor. Our main findings may be summarized as follows. \begin{itemize} \item The population of {\it associated} subhalos extends well beyond three times the virial radius, $r_{200}$, and contains a number of objects on extreme orbits, including a few with velocities approaching the nominal escape speed from the system. These are typically the low-mass members of accreted groups which are propelled onto high energy orbits during the tidal dissociation of the group in the potential of the main halo. \item The net result of this effect is to push low-mass subhalos to the periphery of the system, creating a well-defined mass-dependent bias in the spatial distribution of associated subhalos. For example, only $\sim 29\%$ of subhalos that, at accretion time, had peak circular velocities of order $3\%$ of the present-day virial velocity ($V_{\rm max}^{\rm acc} \sim 0.03 \, V_{200}$) are found today within $r_{200}$. This fraction climbs to $\sim 61\%$ and to $\sim 78\%$ for subhalos with $V_{\rm max}^{\rm acc}\sim 0.1\, V_{200}$ and $\sim 0.3 \, V_{200}$, respectively. \item The strength of the bias is much weaker when expressed in terms of the subhalo {\it present-day} mass, due to the increased effect of dynamical friction and tidal stripping on the most massive subsystems. \item The spatial distribution, kinematics, and velocity anisotropy of the subhalo population are distinct from the properties of the dark matter. Subhalos are less centrally concentrated, have a mild velocity bias, and are, near the center, on more tangential orbits than the dark matter. \end{itemize} The unorthodox orbits of substructure halos that result from the complex history of accretion in hierarchical formation scenarios have a number of interesting implications for theoretical and observational studies of substructure and of the general halo population.
One implication is that subhalos identified within the virial radius represent a rather incomplete census of the substructures physically related to (and affected by) a massive halo. This affects, for example, the interpretation of galaxy properties in the periphery of galaxy clusters, and confirms earlier suggestions that evolutionary effects normally associated with passage through the innermost regions of a massive halo, such as tidal truncation or ram-pressure stripping, should be detectable well outside the traditional virial boundaries of a group or cluster (Balogh, Navarro \& Morris 2000). Furthermore, associated subhalos pushed well outside the virial radius of their main halo might be erroneously identified as separate, isolated structures in studies that do not follow in detail the orbital trajectories of each system. This effect would be most prevalent at low masses, and is likely to have a significant impact on the internal properties of halos in the vicinity of massive systems. We expect, for example, halos in the periphery of groups/clusters to show evidence of truncation and stripping, such as higher concentrations and/or sharp cutoffs in their outer mass profiles. The same effect may also introduce a substantial environmental dependence in the formation-time dependence of halo clustering reported in recent studies (Gao et al 2005; Zhu et al 2006; Jing et al 2007; see also Diemand et al 2007b). In particular, at fixed mass, early collapsing halos might be more clustered because they are physically associated with a more massive system from which they were expelled. A proposal along these lines has recently been advanced by Wang, Mo \& Jing (2007) (see also Hahn et al 2008), who argue that such environmental effects might be fully responsible for the age-dependence of halo clustering. Our physical interpretation, however, differs in detail from theirs. Whereas Wang et al argue for the suppression of mass accretion onto ``old'' halos by ``heating by large-scale tidal fields'' as responsible for their enhanced clustering, our results suggest that the real culprit is the orbital energy gain associated with the tidal dissociation of bound groups of subhalos, which allows ``old'' low-mass halos to evade merging and to survive in the vicinity of massive systems until the present. A further implication of our results concerns the spatial bias of the most massive substructures discussed in \S~\ref{ssec:subhrdist}. If, for example, luminous substructures in the Local Group trace the most massive associated subhalos at the time of accretion, they may actually be significantly more concentrated and kinematically biased relative to the dark matter, a result that ought to be taken into account when using satellite dynamics to place constraints on the mass of the halos of the Milky Way and M31. Finally, as already pointed out by Sales et al (2007a,b), gravitational interactions during accretion may also be responsible for the presence of dynamical outliers in the Local Group, such as Leo I and And XII. Further work is needed to assess whether the exceptional orbits of such systems could indeed have originated in the tidal dissociation of groups recently accreted into the Local Group.
Since the latest proper motion studies of the Magellanic Clouds seem to suggest that the Clouds are on their first pericentric passage (Kallivayalil et al 2006; Piatek et al 2007), this is a possibility to consider seriously when trying to puzzle out the significance of the motion of the satellites of the Local Group. \vskip 2.5cm We thank Simon White and Vincent Eke for useful discussions, and an anonymous referee for a constructive report. ADL would like to thank Jorge Pe\~{n}arrubia and Scott Chapman for many useful discussions, which have improved this work. The simulations reported here were run on the Llaima Cluster at the University of Victoria, and on the McKenzie Cluster at the Canadian Institute for Theoretical Astrophysics. This work has been supported by various grants to JFN and a post-graduate scholarship to ADL from Canada's NSERC. AH gratefully acknowledges financial support from NOVA and NWO. \begin{table*} \center \caption{Properties of simulated halos used in this study.} \begin{tabular}{l c c c c c c c c c c c c c } \hline \hline Halo & $\epsilon_{\rm G}$& $M_{200}$ & $M_{\rm DM}^{\rm assoc}$ &$r_{200}$ & $V_{\rm max}$ & $r_{\rm max}$ & $N_{200}$ &$M_{\rm sub}^{\rm assoc}$ & $N_{\rm sub}$ & $N_{\rm sub}$ & $N_{\rm sub}$ & $N_{\rm sub}$ & $N_{\rm sub}$ \\ & [kpc/h] & [$M_{\odot}$/h]&[$M_{\odot}$/h] & [kpc/h] & [km/s] & [kpc/h] & & [$M_{\odot}$/h] & [assoc] &($r<r_{200}$) & ($r<r_{100}$) & ($r<r_{50}$) & $r_{\rm apo}>2.5r_{\rm ta}$\\ \hline 9-1-53 & 0.250 & 9.17$\times10^{11}$& 12.0$\times10^{11}$& 158.0 & 184.8 & 36.0 & 3.24e6 & 0.89$\times10^{11}$& 904 & 513 & 742 & 1017 & 3 \\ 9-12-46 & 0.181 & 6.44$\times10^{11}$& 9.79$\times10^{11}$& 140.4 & 159.9 & 34.9 & 4.82e6 & 0.47$\times10^{11}$ & 2020 & 865 & 1314& 1828 & 14 \\ 9-13-74 & 0.220 & 8.76$\times10^{11}$& 12.4$\times10^{11}$& 155.6 & 175.9 & 32.0 & 4.18e6 & 1.04$\times10^{11}$& 1683 & 831 & 1232& 1645 & 15 \\ 9-14-39 & 0.186 & 12.6$\times10^{11}$& 17.9$\times10^{11}$& 175.7 & 203.5 & 37.8 & 3.30e6 & 1.27$\times10^{11}$ & 1416 & 594 & 1050& 1581 & 4 \\ 9-14-56 & 0.275 & 8.57$\times10^{11}$& 12.2$\times10^{11}$& 154.4 & 178.6 & 33.7 & 2.61e6 &1.00$\times10^{11}$ & 1160 & 469 & 646 & 930 & 12 \\ \hline \end{tabular}\label{tab:numexp} \end{table*} \begin{table*} \center \caption{Dynamical properties of the subhalo population.} \begin{tabular}{c| c c c c c c c}\hline Sample &$N_{\rm sub}$ & $n_{0}$ & $r_{-2}$ &$r_{\rm h}$&$\alpha$&$\langle \beta \rangle$&$\langle \sigma_r \rangle$ \\ &[5 sims] & [arb.units] &$[r_{200}]$&$[r_{200}]$& & & $[V_{200}]$ \\\hline\hline Assoc. dark matter & & 5422 & 0.112 & 0.558 & 0.159 & 0.270 & 0.734 \\ & & & & & & \\ Assoc. subhalos & 7183 & 4691.0 & 1.08 & 1.09 & 0.842 & 0.053 & 0.712 \\ & & & & & & \\ ($X=V_{\rm max}^{\rm acc}/V_{200}$) & & & & & & & \\ $X> 0.170$ & 150 & 42848.5 & 0.70 & 0.70 & 0.352 & 0.059 & 0.699 \\ $0.063 < X < 0.17$ & 2110 & 3709.2 & 0.80 & 0.87 & 0.754 & 0.091 & 0.712 \\ $0.040 < X < 0.063$ & 2110 & 875.9 & 1.09 & 1.10 & 0.991 & 0.018 & 0.699 \\ $0.025 < X < 0.040$ & 750 & 222.0 & 1.26 & 1.39 & 1.001 & 0.034 & 0.762 \\ & & & & & & \\ ($Y=\log_{10}M_{\rm sub}/M_{200}$) & & & & & & \\ $Y> -3.3$ & 150 & 28.3 & 1.12 & 1.14 & 1.536 & 0.207 & 0.769 \\ $-4.6 < Y < -3.3$ & 1918 & 1181.3 & 0.99 & 1.06 & 0.886 & -0.001 & 0.723 \\ $-5.1 < Y < -4.6$ & 1918 & 1275.5 & 1.11 & 1.08 & 0.897 & 0.021 & 0.735 \\ $-5.42 < Y < -5.1$ & 750 & 341.6 & 1.09 & 1.13 & 0.939 & -0.028 & 0.708 \\ \hline \end{tabular}\label{tab:nrho} \end{table*}
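\medskip\noindent For readers wishing to reuse the fits in Table~\ref{tab:nrho}, the following short Python sketch (ours, purely illustrative; units of $n$ are arbitrary, as in the table) evaluates the Einasto profile of eq.~\ref{eq:rhoalpha} with the all-subhalo parameters.
\begin{verbatim}
import math

def einasto_n(r, alpha, r2, n2):
    # eq. (1): ln(n/n_-2) = -(2/alpha) * ((r/r_-2)**alpha - 1)
    return n2 * math.exp(-(2.0 / alpha) * ((r / r2) ** alpha - 1.0))

# "Assoc. subhalos" row of Table 2: alpha = 0.842, r_-2 = 1.08 r200,
# central density n_0 = 4691.0 (arbitrary units), n_-2 = n_0 exp(-2/alpha)
alpha, r2, n0 = 0.842, 1.08, 4691.0
n2 = n0 * math.exp(-2.0 / alpha)
for r in (0.1, 0.5, 1.0, 2.0):   # radii in units of r200
    print(r, einasto_n(r, alpha, r2, n2))
\end{verbatim}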
\section{Introduction}\label{sec:Intro} The problem of ranking a set of $n$ items is of significant recent interest due to its popularity in various machine learning problems, such as recommendation systems \citep{RecommendationSystem2010}, web search \citep{WebSearch2001}, crowd sourcing \citep{CrowdSourcing2013}, and social choices \citep{SocialChoice2011Lu,SocialChoice2013Caragiannis,SocialChoice2005Conitzer}. The goal of these ranking problems is to recover a total or partial ranking from noisy queries (aka samples) about users' preferences. A query presents a user with a set of $l$ items, such as products, movies, pages, and candidates, and asks him/her to select the most favored item. An interesting area of study is active ranking, where the learner can actively select the items to be queried based on past query results in order to reduce the number of queries needed (sample complexity). The work in \citet{CrowdSourcing2013} shows that adaptive ranking algorithms can achieve almost the same accuracy as non-adaptive ones using only about 3\% of the samples. In this paper, we focus on listwise ranking (i.e., each time, the learner can query more than two items) instead of querying only two items at a time, i.e., pairwise ranking. There are several motivations to study listwise ranking, which is a relatively unexplored area compared to pairwise ranking. Results on listwise ranking can be directly applied to pairwise ranking, which is a special case of listwise ranking, and numerical results in this paper indicate that traditional algorithms designed for pairwise ranking problems do not work well in listwise settings. More importantly, in many applications such as web search and online shopping, presenting more than two items to users is more typical and can provide a better user experience. In these applications, using adaptive listwise ranking algorithms, the server can adaptively choose items to present and learn the users' preferences in a shorter time. There are mainly two classes of ranking problems: one aims to find the $k$ most preferred items, and the other aims to recover the total ranking (or full ranking). This paper studies both problems. Instead of exact ranking, this paper focuses on the PAC ranking \citep{OnlineRankingElicitation2015,MaxingAndRanking2017,Falahatgar2017}, in which an $\epsilon$-bounded error on the preference scores does not influence the correctness. See Section~\ref{sec:PF} for the detailed definition of PAC ranking. We also consider the exact ranking problem based on our results on PAC ranking. \section{Problem Formulation}\label{sec:PF} Under the multinomial logit (MNL) model, each item is associated with a preference score represented by a real number. A more preferred item has a larger preference score. The items are ranked based on their preference scores. A query over a set $S=\{i_1,i_2,...,i_l\}$ will return $i_m$ as the most favored item with probability \begin{equation} \mathbb{P}(i_m\mid S) = \frac{\exp(\gamma_{i_m})}{\sum_{j=1}^l\exp(\gamma_{i_j})}, \end{equation} where $\gamma_{i_m}$ is the preference score of item $i_m$. The MNL model was introduced by \citet{Bradley1952}, and has been widely adopted in many areas \citep{MMMethod2004}. We also assume that the queries are independent across time and items.
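As a concrete illustration of this query model, the following Python sketch (ours, purely illustrative) samples the winner of a single $l$-wise query under the MNL model:
\begin{verbatim}
import random
from math import exp, log

def mnl_query(S, gamma):
    # Sample the winner of an l-wise query over the item set S:
    # item i wins with probability exp(gamma[i]) / sum_j exp(gamma[j]).
    weights = [exp(gamma[i]) for i in S]
    return random.choices(S, weights=weights, k=1)[0]

# Example: 4 items with theta = (1.0, 0.9, 0.89, 0.87),
# i.e. gamma_i = log(theta_i); item indices are 0-based here.
gamma = [log(t) for t in (1.0, 0.9, 0.89, 0.87)]
winner = mnl_query([0, 1, 2, 3], gamma)
\end{verbatim}
Repeated queries of this kind, and the win counts they produce, are exactly the hidden information that a ranking algorithm exploits.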
Mathematically speaking, if we query $t$ sets (some of them can be the same) $S_1,S_2,...,S_t$, where $S_\tau=\{i_{\tau,1},i_{\tau,2},...,i_{\tau,l}\}$, then we will get query result $(i_{a_1},i_{a_2},...,i_{a_t})$ with probability $\prod_{\tau=1}^t{\mathbb{P}(i_{a_\tau}|S_\tau)}=\prod_{\tau=1}^t{\frac{\exp(\gamma_{i_{a_\tau}})}{\sum_{b=1}^l{\exp(\gamma_{i_{\tau,b}})}}}$. Items with larger preference scores are more likely to win a query (i.e., be the query result), and thus, the items are ranked through this hidden information. When $l=2$, the MNL model reduces to the Bradley-Terry-Luce (BTL) model \citep{Bradley1952}, which is equivalent to the Plackett-Luce (PL) model. Under the PL model, item $i$ will be the result of a query over $i$ and $j$ with probability $\frac{\theta_i}{\theta_i+\theta_j}$, where $\theta_i = \exp(\gamma_i)$ and $\theta_j=\exp(\gamma_j)$. To simplify notation, in this paper, we let $\theta_i = \exp(\gamma_i)$ for all items $i$, and we only use the $\theta_i$'s instead of the $\gamma_i$'s to avoid ambiguity. Now, assume that there are a total of $n$ items indexed by $1,2,...,n$, and we use $[n]:=\{1,2,...,n\}$ to denote the set of all items. Since only the ratios of the $\theta_i$'s matter, in this paper we normalize $\max_{i\in[n]}{\theta_i} = 1$. Define $\theta_{[i]}$ as the $i$-th largest preference score of all items. For $\epsilon\in(0,1)$, an item is said to be $(\epsilon, k)$-optimal if its preference score is no less than $\theta_{[k]}-\epsilon$. For $\epsilon\in(0,1)$, we define \begin{equation}\label{Ukepsilon} U_{k,\epsilon} := \{i\in[n]:\theta_i\geq \theta_{[k]}-\epsilon \}, \end{equation} i.e., $U_{k,\epsilon}$ is the set of all $(\epsilon, k)$-optimal items. A set $R$ is said to be $\epsilon$-top-$k$ if $|R|=k$ and $R\subset U_{k,\epsilon}$, i.e., all items in it are $(\epsilon,k)$-optimal. Here we give a simple example. Let $[n]=\{1,2,3,4\}$, and let the preference scores be 1.0, 0.9, 0.89, and 0.87, respectively. Let $k=2$ and $\epsilon=0.02$. We have $\theta_{[k]} = 0.9$, $U_{k,\epsilon} = \{1,2,3\}$, and every 2-sized subset of $U_{k,\epsilon}$ is $\epsilon$-top-$k$. Now we define the PAC top-$k$ item selection problem. \begin{problem}\label{problem1} [PAC Top-$k$ Item Selection ($k$-IS)] Given a set of $n$ items, $k\in\{1,2,3,...,\left\lfloor n/2 \right\rfloor\}$, $l\in\{2,3,4,...,n\}$, $\delta \in (0,1)$, and $\epsilon\in(0,1)$, we want to find a correct $\epsilon$-top-$k$ subset with error probability no more than $\delta$, and use as few $l$-wise queries as possible. \end{problem} Beyond Problem~\ref{problem1}, we further explore the PAC total ranking problem. A function $\Pi$ is said to be a permutation of $[n]$ if it is a bijection from $[n]$ to $[n]$, where $\Pi(i)=j$ means item $i$ ranks $j$-th. Given $\epsilon\in(0,1)$, a permutation $\Pi$ is said to be an $\epsilon$-ranking if for all $i$ and $j$ in $[n]$, $\Pi(i) < \Pi(j)$ (i.e., $i$ ranks higher than $j$) implies $\theta_i \geq \theta_j - \epsilon$. In other words, an $\epsilon$-ranking is a correct ranking except that incorrect orderings among items whose preference scores differ by no more than $\epsilon$ are allowed. Now we define the PAC total ranking problem. \begin{problem}\label{problem2} [PAC Total Ranking (TR)] Given a set of $n$ items, $l\in\{2,3,4,...,n\}$, $\delta \in (0,1)$, and $\epsilon\in(0,1)$, we want to find a correct $\epsilon$-ranking with error probability no more than $\delta$, and use as few $l$-wise queries as possible.
\end{problem} In this paper, we add two constraints to our problems. The first one is that we can only perform $l$-wise queries. As has been explained in Section~\ref{sec:Intro}, this constraint is reasonable and of interest. The second constraint is that the ratios of preference scores between items are upper bounded by a constant. In this paper, this is referred to as the RBC (ratios bounded by a constant) condition. The RBC condition means that there exists some constant $C$ such that $\sup_{i,j\in[n]}{\theta_i}/{\theta_j} \leq C$. Under the RBC condition, the least preferred item has a lower bounded probability to win against the most preferred item. The RBC condition has been adopted by many previous works \citep{SpectralMLE2015,RankCentrarity2016,LimitedRounds2017,ListwisePL2017,BothOptimal2017}. The rationale behind the RBC condition is as follows. First, the RBC condition is a good model of situations where noise is not insignificant, and the least preferred items still have a chance to win against the most preferred ones. Second, \citet{Chen2018} showed that if the RBC condition does not hold, one can get an $l$-reduction for Problem~\ref{problem1}. However, when the RBC condition holds, they showed that the sample complexity is lower bounded by $\Omega(n)$, and their algorithms' sample complexity is $O(n\log^{14}{n})$ under default parameters, far higher than the lower bound. Thus, we are interested in whether their results can be improved when the RBC condition holds. \section{Related Work}\label{sec:RW} To the best of our knowledge, the first (and, to date, the only) paper that focuses on listwise active ranking under the MNL model is \citep{Chen2018}, which proposed an algorithm that finds the top-$k$ out of $n$ items with high probability using $O(n \log^{14}{n})$\footnote{All $\log$ in this paper, unless explicitly noted, are natural log.} $l$-wise comparisons when using default parameters, and can obtain up to an $l$-reduction if the preference scores vary significantly. However, when the RBC condition holds, \citet{Chen2018} showed that $\Omega(n)$ queries are necessary, and their algorithms suffer from a $\log^{14}{n}$ factor, which can be large. Motivated by their work, we investigate whether we can tighten the lower bound or remove the $\log^{14}{n}$ factor when the RBC condition holds. When $l=2$, the MNL model reduces to the PL model \citep{BTLModel2012individual}. Under this model, active ranking has been studied extensively. For the top-$k$ ranking problem, to our knowledge, the best asymptotic result was given by \citep{LimitedRounds2017}. Given $\Delta_k$, the minimal difference between the preference scores of the $k$-th preferred item and the others, its top-$k$ ranking algorithm returns a correct solution with error probability no greater than $\delta$ using $O(\Delta_k^{-2} n \log{(k/\delta)})$ comparisons, which meets the lower and upper bounds proved in this paper. However, a weakness of the algorithm in \citep{LimitedRounds2017} is that one needs to know the $\Delta_k$ value a priori. \citet{LimitedRounds2017} also proved an $\Omega(\Delta_k^{-2} n \log{(1/\delta)})$ lower bound on sample complexity. Further, they showed that any algorithm able to solve the (PAC) full exploration multi-armed bandit (FEMAB) problem can solve the (PAC) top-$k$ ranking problem by transforming the latter to the former. Thus, FEMAB algorithms such as those in \citep{Halving2010,LowerBound2012,Top-kBernoulli2015,PureExploration2016} also meet the lower bound proved in this paper.
The algorithm in \citep{LimitedRounds2017} also relies on this kind of transformation. However, numerical results in this paper show that such a transformation performs poorly when $l$ is large. For the pairwise total ranking problem, to our knowledge, the best theoretical result so far was given by \citet{OnlineRankingElicitation2015}, who proposed a total ranking algorithm PLPAC-AMPR using $O\left(\epsilon^{-2} n \log{n} \log(n \delta^{-1}\epsilon^{-1})\right)$ comparisons. This result is looser than our upper and lower bounds by a $\log{n}$ factor. Though not stated, it can be proved that Borda Ranking in \citep{MaxingAndRanking2017} solves the pairwise total ranking problem with sample complexity $O\left(\epsilon^{-2} n \log(n \delta^{-1})\right)$. However, numerical results in this paper show that it does not work well in listwise settings, especially when $l$ is large. The works in \citep{SpectralMLE2015,RankCentrarity2016,BothOptimal2017} proposed non-adaptive top-$k$ ranking algorithms by using special properties of the PL model, and \citet{LimitedRounds2017,BothOptimal2017} showed that these algorithms are sample-complexity optimal in the non-adaptive setting (i.e., with sample complexity $O(n\log{n})$). \citet{MultiwiseSpectral2017} studied listwise top-$k$ ranking, but under a different model. The works in \citep{NoisyComputing1994,Ailon2012active,Top-kSelection2013,MaxingAndRanking2017,Falahatgar2017,RankingLimits2018,mohajer2016active} explored maxing and ranking under different settings, and proposed optimal or nearly optimal algorithms. For the Borda-score model, \citet{ActiveRanking2016,CoarseRanking2018} proposed partition (or coarse ranking) algorithms that solve the top-$k$ item selection problem with sample complexity $O(n\log{n})$, and \citet{MaxingAndRanking2017} explored maxing and total ranking. \section{Lower Bound Analysis}\label{sec:LBA} In this section, we establish the sample complexity (number of queries needed) lower bounds for the two problems defined above. Both lower bounds in this paper are for the worst case. We do not consider average-case lower bounds in this paper, since they necessitate assumptions on a prior distribution of the preference scores, which is beyond the scope of this paper. For instance, when deriving the $\Omega(n\log{n})$ lower bound for sorting, people normally assume that all numbers are distinct and each permutation has the same prior probability. There are instances, such as the one where all numbers take values in $\{0,1\}$, for which $O(n)$ time is sufficient for sorting. Also, due to the PAC setting, there are instances (e.g., where the preference scores of all items are within $\epsilon$ of each other) whose ranking can be recovered even by a constant number of queries. We note that the lower bounds derived in this paper are not restricted to the problems defined in this paper, and can also be applied to others \citep{OnlineRankingElicitation2015,ActiveRanking2016,LimitedRounds2017,Chen2018}. We will provide more detailed discussions after presenting the lower bounds for the PAC problems. \subsection{Lower Bound for the $k$-IS Problem} First, we establish the worst case lower bound for the $k$-IS problem (Problem~\ref{problem1}), stated in Theorem~\ref{LB-k-IS}.
\begin{theorem}[Lower bound of the $k$-IS problem]\label{LB-k-IS} Given $\epsilon \in (0, \sqrt{1/32}]$, $\delta \in (0, 1/4)$, $6 \leq k \leq n/2$, and $2\leq l \leq n$, there is an instance such that to find an $\epsilon$-top-$k$ subset with error probability no more than $\delta$, any algorithm must conduct $\Omega( \frac{n}{\epsilon^2}\log{\frac{k}{\delta}} )$ $l$-wise queries in expectation. \end{theorem} \begin{proof} We prove that any algorithm able to solve the $k$-IS problem can be transformed to solve the PAC top-$k$ arm selection ($k$-AS) problem with Bernoulli rewards defined in \citep{LowerBound2012}. The sample complexity lower bound of the latter is $\Omega(\frac{n}{\epsilon^2}\log\frac{k}{\delta})$, which completes the proof. See Section~\ref{section9} for details. \end{proof} The reduction mentioned above also implies that we can use ranking algorithms to solve the corresponding FEMAB problems, and it builds a bridge between ranking and FEMAB problems. We note that this bound can also be applied to the exact top-$k$ subset selection problem \citep{SpectralMLE2015,RankCentrarity2016,LimitedRounds2017,ListwisePL2017,BothOptimal2017}. Let $\Delta_k:=\theta_{[k]}-\theta_{[k+1]}$. When $\epsilon < \Delta_k$, the unique $\epsilon$-top-$k$ subset is exactly the top-$k$ subset. Thus, for the exact top-$k$ item selection problem, the worst case lower bound is $\Omega(\frac{n}{\Delta_k^2}\log\frac{k}{\delta})$. This bound is higher than the $\Omega(\frac{n}{\Delta_k^2}\log\frac{1}{\delta})$ one derived by \citet{LimitedRounds2017}. \begin{corollary}[Lower bound of identifying the exact top-$k$ items]\label{LB-k-IS2} Define $\Delta_k:=\theta_{[k]}-\theta_{[k+1]}$. Given $\Delta_k \in (0, \sqrt{1/32}]$, $\delta \in (0,1/4)$, $6 \leq k \leq n/2$, and $2\leq l \leq n$, there is an instance such that to find the exact top-$k$ subset with error probability no more than $\delta$, any algorithm must conduct $\Omega( \frac{n}{\Delta_k^2}\log{\frac{k}{\delta}} )$ $l$-wise queries in expectation. \end{corollary} To the best of our knowledge, Theorem~\ref{LB-k-IS} is the first known $\Omega( \frac{n}{\epsilon^2}\log{\frac{k}{\delta}} )$ lower bound for PAC top-$k$ ranking under the MNL model and the PL model, and Corollary~\ref{LB-k-IS2} is the first known $\Omega(\frac{n}{\Delta_k^2}\log{\frac{k}{\delta}})$ lower bound for exact top-$k$ ranking. Later, in Theorem~\ref{TP-TopK2}, we will show that our lower bounds are tight if $l=2$ (i.e., the pairwise case) or $l=O(poly(k))$. It remains an open problem whether these lower bounds are tight for general $l>2$. \subsection{Lower Bound for the TR Problem} Next, we establish the worst case lower bound for the TR problem (Problem~\ref{problem2}). Recall from the definition of $\epsilon$-ranking that the $k$ highest ranked items in an $\epsilon$-ranking form an $\epsilon$-top-$k$ subset. Thus, the lower bound of the TR problem is no lower than that of PAC top-$(n/2)$ selection, i.e., $\Omega(\frac{n}{\epsilon^2}\log\frac{n}{\delta})$. The result is presented in Theorem~\ref{LB-TR}. Later, in Theorem~\ref{TP-TR}, we will show that this lower bound is tight. \begin{theorem}[Lower bound for the total ranking problem]\label{LB-TR} Given $\epsilon \in (0, \sqrt{1/32}]$, $\delta \in (0, 1/4)$, and $2\leq l\leq n$, there is an instance such that to find a correct $\epsilon$-ranking with error probability no more than $\delta$, any algorithm must conduct $\Omega\left( \frac{n}{\epsilon^2}\log{\frac{n}{\delta}} \right)$ $l$-wise queries in expectation.
\end{theorem} \begin{proof} If one finds an $\epsilon$-ranking of $A$, then the top $\lfloor\frac{n}{2}\rfloor$ items form an $\epsilon$-top-$\lfloor\frac{n}{2}\rfloor$ subset of $A$. The lower bound for the latter is $\Omega\left( \frac{n}{\epsilon^2}\log{\frac{n}{\delta}} \right)$, and thus, the desired result follows. \end{proof} For the exact total ranking problem, when $n$ increases, the minimal gap of preference scores between the items is of the order $O(1/n)$. To distinguish two items whose preference scores' difference is $O(1/n)$ with probability $3/4$, at least $\Omega(n^2)$ queries are required by Corollary~\ref{LB-k-IS2}. Thus, for any $n$-sized instance, exact total ranking takes at least $\Omega(n^2)$ queries. The worst case lower bound is $\Omega(n^3\log{n})$ (consider the instance where the preference scores' differences between consecutive items are all $O(1/n)$). The lower bounds of these two problems do not depend on the value of $l$. This indicates that, with listwise queries, one can only obtain constant-factor reductions in sample complexity. However, in practice, numerical results provided in Figure~\ref{fig:lComparison} suggest that when $l$ increases, the number of queries decreases. Furthermore, as noted in Section~\ref{sec:Intro}, for many applications such as web search and online shopping, listwise queries are more typical and common. When users use these applications, the server, by adaptively presenting items in a listwise manner, can learn the users' preferences in a shorter time than with random presentation. \section{Algorithms for the PAC Total Ranking}\label{sec:TR} In this section, we present our algorithm called PairwiseDefeatingTotalRanking (PDTR) for Problem~\ref{problem2} (Algorithm~\ref{AL-TR}) and its analysis. Its theoretical performance is stated in Theorem~\ref{TP-TR}. The key idea of this algorithm is to first bound the probability that $j$ wins a query given that $i$ or $j$ wins the query (see Lemma~\ref{pi_ij}), and then use this bound to establish a UCB (upper confidence bound)-like method that bounds the probability that an unwanted item is added to the result. The key difference between our algorithm and the UCB-like algorithms for FEMAB problems (see \citep{LowerBound2012} as an example) is that they bound the empirical means of the bandit arms' rewards, while in our algorithm, we bound the ratio of win counts (i.e., $w_j/(w_i+w_j)$) for each pair of items. Our contribution lies in extending the upper confidence bounds on the arms' empirical mean rewards to bounds on the ratios between items' wins. \begin{lemma}\label{pi_ij} In Algorithm~\ref{AL-TR} with input $\alpha \leq \frac{l-1}{4(l+C-1)}$, for any $i,j$ in $R$ with $\theta_i>\theta_j$, the probability that $i$ wins a query given that $i$ or $j$ wins the query is at least $\frac{1}{2}+\alpha(\theta_i-\theta_j)$. \end{lemma} \begin{proof} We will show that for each set $S$ containing $i$, there is a one-to-one corresponding set $S'$ such that $\frac{Pr\{i\mbox{ wins the query over } S\}}{Pr\{i\mbox{ wins the query over } S\}+Pr\{j\mbox{ wins the query over } S'\}} \geq \frac{1}{2}+\alpha(\theta_i-\theta_j)$, and derive the desired result. See Section~\ref{section10} for details.
\end{proof} \begin{algorithm}[bht] \caption{PairwiseDefeatingTotalRanking$(A, \delta, \epsilon, \alpha)$}\label{AL-TR} \hspace*{\algorithmicindent} \textbf{Input:} $A$ the $n$-sized set to be ranked, $\delta$ a desired error probability bound, $\epsilon$ the error tolerance, and $\alpha$ a parameter balancing correct probability and sample complexity.\\ \hspace*{\algorithmicindent} \textbf{Output:} An $\epsilon$-ranking that is correct w.p. $\geq 1-\delta$.\\ \hspace*{\algorithmicindent} \textbf{Initialize:} $\delta^*_1 \gets \frac{\delta}{n(n-1)+1}$; $lo\gets 1$; $hi\gets n$; $R\gets A$; $\forall i\in A$, $w_i\gets 0$; $\Pi\gets$ empty map; \Comment{$lo$ and $hi$ are pointers; $R$ stores the remaining items; $w_i$ records the wins of item $i$;} \begin{algorithmic}[1] \Repeat \If{$|R| \geq l$} $S \gets$ a random $l$-sized subset of $R$; \Else \ \ $S \gets R\ \cup$\ \{last $l-|R|$ items removed from $R$\}; \EndIf \State Query $S$ once; Let $q$ denote the winner; \State $w_q \gets w_q+1$; \If{$w_q \geq \frac{1}{4 \alpha^2 \epsilon^2}\log{\frac{1}{\delta^*_1}}$} \State $\forall j\in R-\{q\}$, mark "$q$ \textsl{defeats} $j$"; \EndIf \For{$j \in R$ such that $j$ does not \textsl{defeat} $q$} \If{$\frac{w_q}{w_q+w_j} \geq b_{w_q+w_j}$} mark "$q$ \textsl{defeats} $j$"; \indent \Comment{Def $b_{w_q\!+\!w_j}\!:=\!\frac{1}{2}\!-\!\alpha\epsilon\!+\!\sqrt{\frac{1}{2(w_q+w_j)}\!\log\!{\frac{\pi^2 (w_q\!+\!w_j)^2}{6 \delta^*_1}}}$} \EndIf \EndFor \If{$q$ \textsl{defeats} every other element of $R$} \State $\Pi(q)\gets lo$; $R\gets R - \{q\}$; $lo\gets lo+1$; \EndIf \For{$i\in R$} \If{$i$ is \textsl{defeated} by every other element of $R$} \State $\Pi(i)\gets hi$; $R\gets R - \{i\}$; $hi\gets hi-1$; \EndIf \EndFor \Until{$lo \geq hi$}\\ \Return $\Pi$ \end{algorithmic} \end{algorithm} Here we explain the main idea of PDTR. In this algorithm, upper confidence bounds on $\frac{w_j}{w_i+w_j}$ are established to make sure the following event happens with probability at least $1-n(n-1)\delta^*_1$: for all $i,j$ with $\theta_i > \theta_j+\epsilon$, during the time they are in $R$, (i) it always holds that $\frac{w_j}{w_i+w_j} < b_{w_i+w_j}$, and (ii) $w_j$ does not reach $\frac{1}{4\alpha^2\epsilon^2}\log\frac{1}{\delta^*_1}$ before $w_i$. It can be seen from the algorithm that if the above event happens, for all $i,j\in A$ with $\theta_i > \theta_j+\epsilon$, $i$ ranks higher than $j$ in $\Pi$, and thus, the returned value $\Pi$ is correct. We say a query is \textsl{useful} if its query result (i.e., the winner) is in $R$ at the time when the result is revealed, and is \textsl{useless} otherwise. By removing an item as soon as it wins $\frac{1}{4\alpha^2\epsilon^2}\log\frac{1}{\delta^*_1}$ queries, we can bound the number of \textsl{useful} queries by $O(\frac{n}{\alpha^2\epsilon^2}\log\frac{1}{\delta^*_1})$. The number of \textsl{useless} queries is upper bounded by $O(\frac{l}{\alpha^2\epsilon^2}\log\frac{1}{\delta^*_1})$ with probability $1-\delta^*_1$. Thus, the sample complexity is $O(\frac{n}{\alpha^2\epsilon^2}\log\frac{1}{\delta^*_1})$. Based on this intuition, we characterize the theoretical performance of PDTR in Theorem~\ref{TP-TR}. See Section~\ref{section11} for complete proof. \begin{theorem}[Theoretical performance of PDTR]\label{TP-TR} With probability at least $1-\delta$, PDTR terminates after $O(\frac{n}{\alpha^2\epsilon^2}\log\frac{n}{\delta})$ $l$-wise queries, and, if $\alpha \leq \frac{l-1}{4(l+C-1)}$, returns a correct $\epsilon$-ranking. 
\end{theorem} As $\frac{l-1}{4(l+C-1)} = \Omega(1)$, we can let $\alpha=\Omega(1)$, and PDTR's sample complexity upper bound is $O(\frac{n}{\epsilon^2}\log\frac{n}{\delta})$. Here, we add an $\alpha$ parameter to the input in order to balance the trade-off between error probability and sample complexity in practice. We note that $\alpha \leq \frac{l-1}{4(l+C-1)}$ is a sufficient condition for PDTR to achieve an error probability no greater than $\delta$ for any input instance. However, in practice, most cases do not need a small $\alpha$ value to achieve the target success probability. The same holds for Algorithms~\ref{AL-TopK1} and~\ref{AL-TopK2}. By Theorem~\ref{LB-TR}, we can see that PDTR is order-optimal in sample complexity. We note that \citet{MaxingAndRanking2017} proposed algorithms with the same upper bound for the special case $l=2$. However, when $l>2$, they have no theoretical guarantees, and numerical results in Section~\ref{sec:NR} indicate that our algorithm outperforms theirs. \section{Algorithms for the PAC Top-$\mathbf{k}$ Item Selection}\label{sec:kIS} In this section, we provide the algorithm TournamentKSelection (TNKS) for Problem~\ref{problem1} (Algorithm~\ref{AL-TopK2}). This algorithm is inspired by "Halving" proposed in \citep{Halving2010}. "Halving" divides $\delta$ and $\epsilon$ into $\delta_r$'s and $\epsilon_r$'s, and eliminates half of the remaining items in each round while guaranteeing $(\delta_r,\epsilon_r)$-correctness of that round. We first modify PDTR to establish the PairwiseDefeatingKSelection (PDKS) algorithm (Algorithm~\ref{AL-TopK1}). PDKS has a special property (stated in Lemma~\ref{kappaCorrectness}) that will be used in the establishment of TNKS. Then, we use similar ideas to those of Halving to design TNKS, which solves the $k$-IS problem with $O(\frac{n}{\epsilon^2}\log\frac{k+l}{\delta})$ sample complexity. We first present PDKS, which is similar to PDTR, with the difference being that the former returns immediately after $k$ $(\epsilon,k)$-optimal items are found. The sample complexity of PDKS is still $O(\frac{n}{\epsilon^2}\log\frac{n}{\delta})$, but the constant factor is smaller as it can be viewed as an early-stopped version of PDTR.
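To make the \textsl{defeat} rule shared by PDTR and PDKS concrete, here is a minimal Python sketch of the two defeat conditions (our own illustration with our own function names; it is not part of either algorithm):
\begin{verbatim}
import math

def defeat_threshold(t, alpha, eps, delta_star):
    """b_t = 1/2 - alpha*eps
             + sqrt(log(pi^2 t^2 / (6 delta_star)) / (2 t))."""
    return (0.5 - alpha * eps
            + math.sqrt(math.log(math.pi**2 * t**2 / (6 * delta_star))
                        / (2 * t)))

def q_defeats_j(w_q, w_j, alpha, eps, delta_star):
    """q defeats j if q's win count reaches the cap, or if q's share
    of the pair's wins exceeds the threshold b_{w_q + w_j}."""
    if w_q >= math.log(1 / delta_star) / (4 * alpha**2 * eps**2):
        return True
    t = w_q + w_j
    return t > 0 and w_q / t >= defeat_threshold(t, alpha, eps, delta_star)
\end{verbatim}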
\begin{algorithm}[bht] \caption{PairwiseDefeatingKSelection$(A, k, \delta,\epsilon, \alpha)$}\label{AL-TopK1} \hspace*{\algorithmicindent} \textbf{Input:} $A$ the $n$-sized set to be ranked, $k$ the number of top items to be selected, $\delta$ a desired error probability bound, $\epsilon$ the error tolerance, and $\alpha$ a parameter balancing success probability and sample complexity.\\ \hspace*{\algorithmicindent} \textbf{Initialize:} $\delta^*_2\gets \frac{\delta}{2k(n-1)+1}$; $Ans\gets \emptyset$; $R \gets A$; $\forall i\in A$, $w_i\gets 0$; \Comment{$w_i$ stores item $i$'s number of wins;} \begin{algorithmic}[1] \Repeat \If{$|R| \geq l$} $S \gets$ a random $l$-sized subset of $R$; \Else \ \ $S \gets R\ \cup$\ \{last $l-|R|$ items removed from $R$\}; \EndIf \State Query $S$ once; Let $q$ be the winner; \State $w_q\gets w_q+1$; \If{$w_q \geq \frac{1}{4 \alpha^2 \epsilon^2}\log{\frac{1}{\delta^*_2}}$} \State $\forall j\in R-\{q\}$, mark "$q$ \textsl{defeats} $j$"; \EndIf \For{$j \in R$ such that $j$ does not \textsl{defeat} $q$} \If{$\frac{w_q}{w_q+w_j} \geq b_{w_q+w_j}$} mark "$q$ \textsl{defeats} $j$"; \indent \Comment{Def $b_{w_q\!+\!w_j}\!:=\!\frac{1}{2}\!-\!\alpha\epsilon\!+\!\sqrt{\frac{1}{2(w_q+w_j)}\!\log\!{\frac{\pi^2 (w_q\!+\!w_j)^2}{6 \delta^*_2}}}$} \EndIf \EndFor \If{$q$ \textsl{defeats} every other element of $R$} \State $Ans\gets Ans\cup \{q\}$; $R\gets R - \{q\}$; \EndIf \For{$i\in R$} \If{$i$ is \textsl{defeated} by every other element of $R$} \State $R\gets R - \{i\}$;\Comment{Discard $i$} \EndIf \EndFor \Until{$|Ans|=k$}\\ \Return $Ans$ \end{algorithmic} \end{algorithm} Lemma~\ref{kappaCorrectness} is a property of PDKS, which will be used to establish TNKS. The theoretical performance of PDKS is stated in Theorem~\ref{TP-TopK1}. \begin{lemma}\label{kappaCorrectness} Let $\kappa\in\{1,2,...,k\}$ be arbitrary. With probability at least $1-\frac{2\kappa(n-1)\delta}{2k(n-1)+1}$, the returned value of PDKS contains at least $\kappa$ items whose preference scores are no less than $\theta'_{[\kappa]}-\epsilon$, where $\theta'_{[\kappa]}$ is the $\kappa$-th largest preference score among all items in $A$. \end{lemma} \begin{proof} The proof is almost the same as that of Theorem~\ref{TP-TR}. See Section~\ref{section12} for details. \end{proof} \begin{theorem}[Theoretical performance of PDKS]\label{TP-TopK1} With probability at least $1-\delta$, PDKS terminates after $O(\frac{n}{\alpha^2\epsilon^2}\log\frac{n}{\delta})$ $l$-wise queries, and, if $\alpha\leq \frac{l-1}{4(l+C-1)}$, returns a correct $\epsilon$-top-$k$ subset of $A$. \end{theorem} \begin{proof} First, fix $\alpha\leq \frac{l-1}{4(l+C-1)}$. Letting $\kappa=k$ in Lemma~\ref{kappaCorrectness}, we have that with probability at least $1-2k(n-1)\delta^*_2$, the returned value is correct. As for the sample complexity, we consider all positive $\alpha$ values. The number of \textsl{useful} queries is obviously upper bounded by $O(\frac{n}{\alpha^2\epsilon^2}\log\frac{1}{\delta^*_2}) = O(\frac{n}{\alpha^2\epsilon^2}\log\frac{n}{\delta})$ (recall that $\delta^*_2=\Theta({\delta}/{n})$). By the Chernoff bound and some computation, we can prove that the number of \textsl{useless} queries is at most $O(\frac{l}{\alpha^2\epsilon^2}\log\frac{n}{\delta})$ with probability $1-\delta^*_2$. The desired result follows. See Section~\ref{section13} for details. \end{proof} Based on PDKS, we design TNKS. TNKS runs like a tournament.
At each round $r$ (i.e., the $r$-th repetition of lines 2 to 7), it divides the remaining items $R$ into groups of size $m\geq 2k$. Then, items within each group compete and only $k$ of them survive. After each round, only half (with at most $k$ more) of the remaining items will survive. Precisely speaking, \begin{equation} \label{T_rSize} |T_r| \leq \left\lceil {|T_{r-1}|}/{m} \right\rceil k \leq \left\lceil {\left\lceil \frac{n}{k} \right\rceil}{2^{-r}} \right\rceil k, \end{equation} where $T_r$ is the set of remaining items after round $r$. Rounds are repeated until only $k$ items remain. Thus, TNKS terminates after at most $\left\lceil \log_2{n} \right\rceil$ rounds. \begin{algorithm}[bht] \caption{TournamentKSelection$(A,k, \delta,\epsilon, \alpha)$}\label{AL-TopK2} \hspace*{\algorithmicindent} \textbf{Input:} $A$ the $n$-sized set to be ranked, $k$ the number of top items to be selected, $\delta$ a desired error probability bound, $\epsilon$ the error tolerance, and $\alpha$ a parameter balancing success probability and sample complexity.\\ \hspace*{\algorithmicindent} \textbf{Output:} An $\epsilon$-top-$k$ subset correct w.p. $\geq 1-\delta$.\\ \hspace*{\algorithmicindent} \textbf{Initialize:} $m \gets \min \{n, \max\{2k, k+l-1\}\}$; $T_0 \gets A$; $r \gets 0$; $\delta_r \gets \frac{6 \delta}{r^2 \pi^2}$, and $\epsilon_r \gets \frac{\epsilon}{4}(\frac{4}{5})^r$ for $r\in \mathbbm{Z}^+$; \begin{algorithmic}[1] \Repeat \State $r\gets r+1$; $T_r\gets \emptyset$; $R\gets T_{r-1}$; \Repeat \If{$|R|\geq m$} $B$ $\gets$ \{$m$ random items in $R$\}; \Else \ $B\gets R\ \cup$ \{$(m-|R|)$ random items\}; \EndIf \State $D \gets$ PDKS$(B,k,\delta_r,\epsilon_r, \alpha)$ \State $T_r \gets T_r \cup D$, $R\gets R-B$ \Until{$R=\emptyset$} \Until{$|T_r|=k$} \\ \Return $T_r$ \end{algorithmic} \end{algorithm} By using Lemma~\ref{kappaCorrectness}, we can prove the theoretical performance of TNKS, which is stated in Theorem~\ref{TP-TopK2}. \begin{theorem}[Theoretical performance of TNKS]\label{TP-TopK2} With probability at least $1-\delta$, TNKS terminates after $O(\frac{n}{\alpha^2\epsilon^2}\log\frac{k+l}{\delta})$ $l$-wise queries, and, if $\alpha\leq \frac{l-1}{4(l+C-1)}$, returns a correct $\epsilon$-top-$k$ subset of $A$. \end{theorem} \begin{proof} We can prove that when $|T_{s-1}\cap U_{k,\sum_{r=1}^{s-1}\epsilon_r}|\geq k$ ($U_{k,\sum_{r=1}^{s-1}\epsilon_r}$ is defined in (\ref{Ukepsilon})), at round $s$, with probability at least $1-\delta_s$, TNKS takes at most $O(\frac{|T_{s-1}|}{\epsilon^2}\log\frac{m}{\delta_s})$ queries, and $|T_{s}\cap U_{k,\sum_{r=1}^{s}\epsilon_r}|\geq k$. The desired result then follows from the choices of $\delta_r$ and $\epsilon_r$ in TNKS. See Section~\ref{section14} for details. \end{proof} Fixing $\alpha = \Omega(1)$, the sample complexity is $O(\frac{n}{\epsilon^2}\log\frac{k+l}{\delta})$. Clearly, under the PL model (i.e., $l=2$), our algorithm has order-optimal sample complexity in the worst case. When $l>2$, if $l=O(poly(k))$, our algorithm is still order-optimal in the worst case. When $l$ increases, the theoretical upper bound of TNKS' sample complexity increases. However, it can be seen later from the numerical results in Figure~\ref{fig:lComparison} that as $l$ increases, the actual number of queries decreases. This can be explained as follows: the largest admissible $\alpha$ value ($\frac{l-1}{4(C+l-1)}$) increases as $l$ increases, and since the sample complexity upper bound scales as $O(\alpha^{-2})$, a larger $l$ permits a larger $\alpha$ and hence fewer queries.
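To illustrate the tournament structure of TNKS, here is a schematic Python sketch (our own illustration; \texttt{pdks} stands in for Algorithm~\ref{AL-TopK1}, and the padding of undersized groups with random items is omitted):
\begin{verbatim}
import math
import random

def tnks(items, k, l, C, delta, eps, pdks):
    """Tournament structure of TNKS.  `pdks` must have the signature
    pdks(group, k, delta_r, eps_r, alpha) -> list of k surviving items."""
    m = min(len(items), max(2 * k, k + l - 1))
    alpha = (l - 1) / (4 * (l + C - 1))
    survivors, r = list(items), 0
    while len(survivors) > k:
        r += 1
        delta_r = 6 * delta / (r**2 * math.pi**2)
        eps_r = (eps / 4) * (4 / 5)**r
        random.shuffle(survivors)
        next_round = []
        for i in range(0, len(survivors), m):
            group = survivors[i:i + m]  # padding of small groups omitted
            next_round.extend(pdks(group, k, delta_r, eps_r, alpha))
        survivors = next_round
    return survivors
\end{verbatim}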
\section{Numerical Results}\label{sec:NR} In this section, we compare our algorithms with the state-of-the-art by running simulations on synthetic data as well as real-world data. \subsection{Synthetic Data} In this subsection, we perform comparisons using synthetic data. In the datasets, all items' preference scores are independently generated uniformly at random in $[1/C,1]$, and then rescaled so that the maximal preference score is $1$. All algorithms are tested on the same datasets for fair comparisons. Every point in the figures is averaged over 100 trials. In this part, we fix $n=10$, $\epsilon=0.05$, $\delta=0.05$, and $C=10$, and vary the other parameters to perform the comparisons. We first compare TNKS (Algorithm~\ref{AL-TopK2}) for the top-$k$ ranking problems with state-of-the-art algorithms including Spectral MLE \citep{SpectralMLE2015}, AlgPairwise (and AlgMultiwise, its listwise version) \citep{Chen2018}, and Halving \citep{Halving2010}. Spectral MLE has $O(n\log{n})$ sample complexity in the pairwise case. AlgMultiwise's sample complexity is $O(n\log^{14}{n})$ under default parameters. Halving is an example of FEMAB algorithms, with sample complexity $O(\frac{n}{\epsilon^2}\log\frac{k}{\delta})$ for $l=2$. In the implementations of these algorithms, we vary the parameters (e.g., $\alpha$ of TNKS) to balance the trade-off between success rate and sample complexity. For Spectral MLE, we fix $L$ and vary the input parameter $p$. For AlgMultiwise and AlgPairwise, we vary $\kappa$. For Halving, we vary the input parameter $\delta$, the desired error probability bound. \begin{figure}[bht] \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKPW_Synthetic_1.eps} \caption{$l=2,k=1$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKPW_Synthetic_2.eps} \caption{$l=2,k=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKPW_Synthetic_3.eps} \caption{$l=2,k=5$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKLW_Synthetic_1.eps} \caption{$l=3,k=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKLW_Synthetic_2.eps} \caption{$l=5,k=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKLW_Synthetic_3.eps} \caption{$l=10,k=2$.} \end{subfigure} \caption{Comparisons of top-$k$ ranking algorithms.}\label{fig:TKSyn} \end{figure} We begin with the pairwise case (i.e., $l=2$). The results are shown in Figure~\ref{fig:TKSyn} (a)-(c). We can see that when $k=1$ or $k=2$, TNKS outperforms the other algorithms. When $k=5$, the performances of all algorithms are close. This is because the sample complexity of TNKS is $O(\frac{n}{\epsilon^2}\log{\frac{k}{\delta}})$, while that of Spectral MLE and AlgPairwise is $O(n \cdot poly(\log{n}))$, so TNKS performs better when $k$ is small. The results indicate that the advantage of TNKS is greater when $k$ is small, consistent with our theoretical results. Next, we compare these algorithms in the listwise case (i.e., $l>2$). The results are illustrated in Figure~\ref{fig:TKSyn} (d)-(f). Spectral MLE works only for pairwise ranking, so it is not included in this comparison. As we can see, TNKS' performance is better than AlgMultiwise's overall, consistent with our theoretical results. Also, TNKS is better than Halving. Further, it can be seen that when $l$ increases, the gap between TNKS and Halving increases.
This indicates that the approach of transforming listwise ranking problems to FEMAB ones does not work well for large $l$. Secondly, we compare PDTR (Algorithm~\ref{AL-TR}) with total ranking algorithms including PLPAC-AMPR \citep{OnlineRankingElicitation2015} and Borda Ranking \citep{MaxingAndRanking2017}. PLPAC-AMPR only works in the pairwise case, so it is not comparable in the listwise case. In the implementations, we vary the $\delta$ value of PLPAC-AMPR and Borda Ranking to balance the trade-off between sample complexity and success rate. The results are illustrated in Figure~\ref{fig:TRSyn}. \begin{figure}[bht] \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TR_Synthetic_1.eps} \caption{$l=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TR_Synthetic_2.eps} \caption{$l=5$.} \end{subfigure} \caption{Comparisons of total ranking algorithms.}\label{fig:TRSyn} \end{figure} According to the figures, the performance of PLPAC-AMPR is much worse than that of PDTR, which is consistent with the theoretical result that PLPAC-AMPR's sample complexity is $O(n\log^2{n})$. We can also see that when $l=2$, Borda Ranking is slightly better than PDTR. An explanation is that when $l=2$, Borda Ranking has sample complexity $O(\frac{n}{\epsilon^2}\log\frac{n}{\delta})$, the same as PDTR, and may have a smaller constant factor. However, when $l=5$, PDTR outperforms Borda Ranking significantly. This again indicates that traditional pairwise ranking algorithms do not work well in listwise cases. Thirdly, we test the performance of TNKS and PDTR under different $l$ values. We show that although their sample complexity upper bounds increase as $l$ increases, their actual performance is better for larger $l$ values. One possible explanation is that as $l$ increases, the largest admissible $\alpha$ value is larger, and the sample complexity upper bound scales as $O(\alpha^{-2})$. The results are illustrated in Figure~\ref{fig:lComparison}. \begin{figure}[bht] \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{lComparisonK2.eps} \caption{TNKS, $k=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{lComparisonTR.eps} \caption{PDTR.} \end{subfigure} \caption{Performance of the proposed algorithms under different $l$ values.}\label{fig:lComparison} \end{figure} \subsection{Real-World Data} In this subsection, we compare the algorithms on real-world data. We use datasets "ED-00004-00000189", "ED-00004-00000190", and "ED-00004-00000198" from PrefLib \citep{PrefLib2007} to conduct real-world data experiments. Each dataset contains several hundred entries. Each entry provides a strict order of four movies annotated by a user. Here we present four entries of "ED-00004-00000189" to help the readers understand the datasets: "90,1,3,2,4", "45,1,2,3,4", "35,1,3,4,2", and "29,2,3,4,1". The entry "90,1,3,2,4" means that there are 90 users who prefer movie 1 the most, then movie 3, then movie 2, and movie 4 the least. In the implementations of the algorithms, we generate the query results by the empirical marginals, that is, $\mathbb{P}(i|S)$ is the empirical frequency with which $i$ is preferred over all other items of $S$ in the dataset (a sketch of this computation is given below). We use the corresponding pairwise preference data to compute the maximum-likelihood preference scores via the MM method \citep{MMMethod2004}, and use them to generate the correct ranking. All algorithms are tested on these three datasets.
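For clarity, here is a minimal Python sketch of the empirical-marginal computation described above (our own illustration, under the PrefLib strict-order reading of the entries):
\begin{verbatim}
def empirical_marginals(entries, S):
    """entries: (count, ranking) pairs, where ranking lists the movies
    in preference order, e.g. (90, [1, 3, 2, 4]).  Returns P(i|S): the
    weighted fraction of users ranking i above all other items of S."""
    wins = {i: 0 for i in S}
    total = 0
    for count, ranking in entries:
        best = min(S, key=ranking.index)  # highest-ranked member of S
        wins[best] += count
        total += count
    return {i: wins[i] / total for i in S}
\end{verbatim}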
For each dataset, we perform 100 trials for each point, and then average over the three datasets. Here, we take parameters $\epsilon=0.05$ and $\delta=0.05$. \begin{figure}[bht] \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKRW_1.eps} \caption{$l=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TKRW_2.eps} \caption{$l=4$.} \end{subfigure} \caption{Comparisons of the top-$k$ ranking algorithms on real-world data.}\label{fig:TKRW} \end{figure} First, we compare the top-$k$ ranking algorithms; the results are shown in Figure~\ref{fig:TKRW}. We can see that TNKS still outperforms the other algorithms, and the gaps are larger when $l=4$. The results are consistent with our theoretical and numerical results on synthetic data. Next, we compare the total ranking algorithms; the results are shown in Figure~\ref{fig:TRRW}. We do not test PLPAC-AMPR, since \citet{OnlineRankingElicitation2015} showed that it does not fit well for some real-world data, especially for data whose empirical marginals are far from the PL model. We ran PLPAC-AMPR on a computer with an Intel Core i7-6700 CPU (8 logical cores), but it did not return within a reasonable amount of time. \begin{figure}[bht] \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TRRW_1.eps} \caption{$l=2$.} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \includegraphics[scale=0.5]{TRRW_2.eps} \caption{$l=4$.} \end{subfigure} \caption{Comparisons of the total ranking algorithms on real-world data.}\label{fig:TRRW} \end{figure} According to the results, when $l=2$, the performances of these two algorithms are close. However, when $l=4$, PDTR clearly outperforms Borda Ranking. The results on real-world data are consistent with our theoretical and numerical results on synthetic data. \section{Conclusion}\label{sec:Conclusion} In this paper, we studied the PAC top-$k$ ranking problem and the PAC total ranking problem, both under the MNL model. For the first problem, we derived a lower bound on the sample complexity, and proposed an algorithm that is optimal up to a $\log(k+l)/\log{k}$ factor. When $l=2$ (i.e., pairwise) or $l=O(poly(k))$, our result is order-optimal. For the second problem, we derived a tight lower bound, and proposed an algorithm that matches the lower bound. Numerical experiments on synthetic data and real-world data confirmed the improvements for both pairwise and listwise ranking.
\subsection*{Methods} \textbf{\emph{Green Bank Telescope Observations.}} Both NANOGrav and targeted observations were conducted using the Green Bank Ultimate Pulsar Processing Instrument (GUPPI, \cite{dup08}). Observations at 1500 MHz were acquired with 800 MHz of bandwidth split into 512 frequency channels (which were summed to 64 channels before analysis), sampling every 0.64\,$\mu$s. At an observing frequency of 820 MHz, 200 MHz of bandwidth over 128 channels was acquired with an identical sampling rate (and later also summed to 64 channels). These dual-polarization observations at both frequencies were coherently dedispersed at the known DM of 15.0\,pc\,cm$^{-3}$. Data were processed using NANOGrav pipelines for consistency with the existing four-year-long NANOGrav J0740+6620 data set (see \cite{arz15} for a thorough description of NANOGrav observing procedures, and \cite{dem18} for a description of NANOGrav's main data processing pipeline, \texttt{nanopipe}).\\ \textbf{\emph{Generation of TOAs and the Timing Model.}} The measurement and modeling of pulse times of arrival (TOAs) closely mirror the procedure described by Arzoumanian et al.~2018 \cite{arz18}. We provide a summary of the analysis procedure in this section. During offline processing, total-intensity profile data were integrated over $\sim$20--30 minute intervals to yield one or two TOAs per downsampled frequency interval for a normal NANOGrav observation, and over $\sim$10 minute intervals for the long scans near or during conjunction. We extracted TOAs from each of the 64 integrated channels over the entire observing bandwidth through cross correlation between the data and a smoothed profile template using the software package \texttt{PSRCHIVE} (source code in \cite{van11}; see \url{http://psrchive.sourceforge.net}). We used standard pulsar-timing analysis tools, namely \texttt{TEMPO} (\url{http://tempo.sourceforge.net}) and \texttt{TEMPO2} (source code in \cite{hob12}; see \url{https://www.atnf.csiro.au/research/pulsar/tempo2}), for modeling TOA variation in terms of many physical mechanisms. \texttt{TEMPO} and \texttt{TEMPO2}, while not fully independent timing packages, yield consistent results. For J0740+6620, fitted parameters include: celestial (ecliptic) coordinates; proper motion; spin frequency and its first derivative; and binary orbital parameters (see Table 1, which lists best-fit values for these parameters as determined with \texttt{TEMPO}). We used the DE436 (\url{https://naif.jpl.nasa.gov/pub/naif/JUNO/kernels/spk/de436s.bsp.lbl}) Solar System ephemeris, maintained by the NASA Jet Propulsion Laboratory, for correction to the barycentric reference frame. The time standard used was BIPM2017. The overall RMS timing residual value for the timing model presented in this work is 1.5\,$\mu$s. The $\chi^2$ of our fit is 7314.35 with 7334 degrees of freedom, yielding a reduced-$\chi^2$ value of 0.997; note that the noise modeling (see Assessment of Timing Noise) will always yield a reduced $\chi^2$ of $\sim$1. We employed the ELL1 binary timing model \cite{lan01} in describing the nearly-circular orbital dynamics of the J0740+6620 system.
Parameters of the ELL1 binary model consist of the projected semi-major axis, orbital period, epoch of passage through the ascending orbital node, and two ``Laplace-Lagrange parameters'' ($\epsilon_1$ and $\epsilon_2$; the orbital eccentricity multiplied by the sine and cosine of periastron longitude, respectively; \cite{lan01}) that quantify departures from perfectly circular orbits.\\ \textbf{\emph{Assessment of Timing Noise.}} MSP rotation often exhibits a limit in achievable precision due to the presence of stochastic processes that act as noise in timing measurements. Examples of timing noise include systematic errors from cross-correlation template matching and ``spin noise'' due to irregular rotation of the neutron star. We use a noise model similar to those developed in the NANOGrav 9-year and 11-year data releases in order to quantify these noise terms in the J0740+6620 data set. The noise model consists of white-noise components that combine to form additive Gaussian noise. For each of the two frontend receivers used in this work, we use three parameters to describe the white-noise contribution to timing noise: a scaling factor applied to all raw TOA uncertainties (``EFAC''); a term added in quadrature to the TOA uncertainties (``EQUAD''); and a noise term that quantifies TOA correlations purely across observing frequency (``ECORR''). We used the \texttt{Enterprise} (\url{https://enterprise.readthedocs.io/en/latest}) modeling suite for estimation of the white components of the noise model using a Markov chain Monte Carlo (MCMC)-based algorithm. \texttt{Enterprise} uses the \texttt{TEMPO(2)} fit as the maximum-likelihood fit for the timing parameters and the basis of the fit for the red noise parameters, should they be found to be significant. In our \texttt{TEMPO(2)} fits, we include an EFAC of 1.036 for L-band (1500-MHz) TOAs and 1.013 for 820-MHz TOAs. EQUAD for L-band is 0.00610\,$\mu$s, and 0.18310\,$\mu$s for 820 MHz. ECORR values for L-band and 820-MHz TOAs are 0.00511\,$\mu$s and 0.00871\,$\mu$s, respectively. Bayesian model selection via an \texttt{Enterprise} MCMC run disfavors the inclusion of red noise; therefore, the noise model includes only white noise components.\\ \textbf{\emph{Dispersion Measure Modeling.}} The complexity of modeling DM variations arising from a dynamic interstellar medium has been discussed at length in previous works (see, for example, Lam et al.~2016 and Jones et al.~2017 \cite{lam15,jon17}). We have adopted the standard NANOGrav piecewise-constant model for DM trends wherein each epoch of data is fit with a constant ``DMX'' value; in other words, each of these parameters is a deviation from some nominal DM and is fixed over a single epoch. The observation that J0740+6620's DM behavior is somewhat smooth over the duration of our data set (see Figure 2) led us to alternatively attempt to model the entire data set by fitting only the first and second derivatives of the DM. In theory, this approach could be advantageous given the ability of DMX to absorb Shapiro delay signals (owing to the similar durations of conjunction and a DMX epoch). While this strategy does reduce the formal parameter uncertainties from the fit, both an F-test and an Akaike information criterion test strongly favor the DMX model over the quadratic DM fit.
This indicates that the DM variation is not fully characterized by a quadratic model, and parameter values (including the pulsar mass) derived from this model are likely to have systematic biases not reflected in their formal uncertainties.\\ \\ \textbf{\emph{Simulations.}} Analysis of the NANOGrav 12.5-year data set without supplemental data yielded $m_{\rm p}$ = $2.00 \pm 0.20\,$M$_{\odot}$. After the initial 6-hour supplemental observation, we measured the mass of J0740+6620 to be $2.18 \pm 0.15\,$M$_{\odot}$. We conducted simulations of future observations both to predict the constraining power of a concentrated Director's Discretionary Time campaign and to determine how our mass measurement may improve with additional observations going forward. For these simulations, we first generated an arbitrary array of TOAs that mirror the desired observing cadence, starting date, etc. The TOAs were then fit (with pulsar timing software such as \texttt{TEMPO} or \texttt{PINT}; \url{https://github.com/nanograv/PINT}) using the known parameters for J0740+6620. Residuals from this fit were then subtracted from the original TOAs to create ``perfect'' TOAs, to which stochastic noise was then added. Two notable types of simulations were conducted. The first was an estimation of the improvement in our measurement of $m_{\rm p}$ given random orbital sampling (the ``NANOGrav-only observation'' scenario); this solidified our conclusion that the concentrated GBT campaigns were necessary. The second served to optimize our observing strategy during a targeted orbital phase campaign by trying various permutations of orbital phase, number of observing sessions, and observing session lengths. The results of this simulation informed our GBT Director's Discretionary Time request for five hours over conjunction and five hours in one of the Shapiro ``troughs'' (we were awarded time in the first trough --- around orbital phase 0.15 --- in addition to conjunction). In order to ensure that obtaining data in this asymmetric fashion would not bias our mass measurement, we ran 10,000 simulations of a five-hour conjunction observation plus five hours in either the first or second Shapiro trough. The averages of the 10,000 mass measurements obtained from each of these troughs were consistent within 1\%, implying that our orbital sampling is not biasing our results (as one would expect, given that the Shapiro delay response curve is symmetric about superior conjunction).\\ \\ \textbf{\emph{Data Availability.}} PSR J0740+6620 TOAs from both the 12.5-year data set and from the two supplemental Green Bank Telescope observations will be available at \url{https://data.nanograv.org} upon publication of this manuscript. \\ \\ \textbf{\emph{Code Availability.}} All code mentioned in this work is open source and available at the links provided in the manuscript.
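As a schematic illustration of the Monte Carlo recipe described under \emph{Simulations} (``perfect'' TOAs plus stochastic noise), consider the following Python sketch; the \texttt{fit} callable stands in for a \texttt{TEMPO} or \texttt{PINT} run and is a hypothetical placeholder, not a real API:
\begin{verbatim}
import numpy as np

def simulated_mass_spread(toas, toa_errs, fit, n_trials=10000, rng=None):
    """fit(toas) is assumed to return an object exposing .residuals
    (seconds) and .mass (solar masses); both attributes are hypothetical."""
    rng = rng or np.random.default_rng()
    perfect = toas - fit(toas).residuals          # "perfect" TOAs
    masses = [fit(perfect + rng.normal(0.0, toa_errs)).mass
              for _ in range(n_trials)]
    return np.mean(masses), np.std(masses)
\end{verbatim}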
\section{Introduction} \label{sec:intro} Single-hidden layer neural networks are superpositions of \emph{ridge functions}. A ridge function is any multivariate function mapping $\R^d \to \R$ of the form \begin{equation} \vec{x} \mapsto \rho(\vec{w}^\T \vec{x}), \label{eq:ridge-function} \end{equation} where $\rho: \R \to \R$ is a univariate real-valued function and $\vec{w} \in \R^d\setminus \curly{\vec{0}}$. Single-hidden layer neural networks, in particular, are superpositions of the form \begin{equation} \vec{x} \mapsto \sum_{k=1}^K v_k \, \rho(\vec{w}_k^\T \vec{x} - b_k), \label{eq:nn-as-superposition} \end{equation} where $\rho: \R \to \R$ is the \emph{activation function}, $K$ is the \emph{width} of the network, and for $k = 1, \ldots, K$, $v_k \in \R$ and $\vec{w}_k \in \R^d \setminus \curly{\vec{0}}$ are the \emph{weights} of the neural network and $b_k \in \R$ are the \emph{biases} or \emph{offsets}. This paper focuses on the \emph{practical problem} of fitting a finite-width neural network to finite-dimensional data, with an eye towards characterizing the properties of the resulting functions. We view this problem as a function recovery problem, where we wish to recover an \emph{unknown function} from \emph{linear measurements}. We deviate from the usual finite-dimensional recovery paradigm and pose the problem in the continuous domain, allowing us to use techniques from the theory of variational methods. We show that continuous-domain linear inverse problems with total variation regularization in the Radon domain admit sparse atomic solutions, with the atoms being the familiar neurons of a neural network. \subsection{Contributions} Let $\native$ be a topological vector space of multivariate functions, $\sensing: \native \to \R^N$ a continuous linear \emph{sensing} or \emph{measurement} operator ($N$ can be viewed as the number of measurements or data)\footnote{For example, $\sensing f = (f(\vec{x}_1), \ldots, f(\vec{x}_N)) \in \R^N$, for some data $\curly{\vec{x}_n}_{n=1}^N \subset \R^d$.}, and let $f: \R^d \to \R$ be a multivariate function such that $f \in \native$. Consider the continuous-domain inverse problem \begin{equation} \min_{f \in \native} \: \datafit(\sensing f) + \norm{f}, \label{eq:generic-inverse-problem} \end{equation} where $\norm{\dummy}: \native \to \R_{\geq 0}$ is a (semi)norm or \emph{regularizer} and $\datafit: \R^N \to \R$ is a convex \emph{data fitting} term. We summarize the contributions of this paper below. \begin{enumerate} \item Our main result is the development of a family of seminorms $\norm{\dummy}_{(m)}$, where $m \geq 2$ is an integer, so that the solutions to the problem \cref{eq:generic-inverse-problem} with $\norm{\dummy} \coloneqq \norm{\dummy}_{(m)}$ take the form \begin{equation} \vec{x} \mapsto \sum_{k=1}^K v_k \, \rho_m(\vec{w}_k^\T \vec{x} - b_k) + c(\vec{x}), \label{eq:generic-solution-inverse-problem} \end{equation} where $\rho_m = \max\curly{0, \dummy}^{m-1} / (m - 1)!$ is the $m$th-order \emph{truncated power function}, $c(\dummy)$ is a polynomial of degree strictly less than $m$, and $K \leq N$. These seminorms are inspired by the seminorm proposed in~\cite{function-space-relu}, which is equivalent to $\norm{\dummy}_{(m)}$ with $m=2$.
Specifically, the seminorm $\norm{f}_{(m)}$ is the total variation (TV) norm (in the sense of measures) of $\partial_t^m\ramp^{d-1} \RadonOp f$, where $\RadonOp$ is the Radon transform, $\ramp^{d-1}$ is a ``ramp'' filter, and $\partial_t^m$ is the $m$th partial derivative with respect to the ``offset'' variable of the Radon domain. In other words, our main result is the derivation of a \emph{neural network representer theorem}. Our result says that single-hidden layer neural networks are solutions to continuous-domain linear inverse problems with TV regularization in the Radon domain. When $m = 2$, the solutions correspond to ReLU networks. \item We propose the notion of a \emph{ridge spline} by noticing that our problem formulation in \cref{eq:generic-inverse-problem} is similar to those studied in variational spline theory~\citep{splines-variational, splines-sobolev-seminorm, L-splines}, with the key twist being that our family of seminorms is defined in the Radon domain. Thus, we refer to the solutions in \cref{eq:generic-solution-inverse-problem} with our family of seminorms as \emph{$m$th-order polynomial ridge splines} to emphasize that the solutions are superpositions of ridge functions. We view our notion of a ridge spline as a kind of spline in-between a univariate spline and a traditional multivariate spline. Unlike polyharmonic splines, the usual multivariate analogue of univariate polynomial splines, ridge splines are multivariate piecewise polynomial functions. Moreover, by specializing our result to the univariate case, our notion of a ridge spline exactly coincides with the notion of a univariate polynomial spline. \item By specializing our main result to the setting in which $\sensing$ corresponds to \emph{ideal sampling}, i.e., point evaluations, the generality of \cref{eq:generic-inverse-problem} allows us to consider the machine learning problem of approximating the scattered data $\curly{(\vec{x}_n, y_n)}_{n=1}^N \subset \R^d \times \R$ with both \emph{interpolation constraints} in the case of noise-free data and \emph{regularized problems} with soft constraints in the case of noisy data. Thus, a direct consequence of our representer theorem is that the infinite-dimensional problem in \cref{eq:generic-inverse-problem} can be recast as a \emph{finite-dimensional neural network training problem} with various regularizers that are related to weight decay~\citep{weight-decay} and path-norm~\citep{path-norm} regularizers, which are used in practice. In other words, a neural network trained to fit data with an appropriate regularizer is ``optimal'' in the sense of the seminorm $\norm{\dummy}_{(m)}$, characterizing a key property of the learned function. We also note that in these neural network training problems, it is sufficient that the width $K$ of the network be $N$, the size of the data. \item Specializing our results to the supervised learning problem of binary classification shows that neural network solutions with small seminorm make good predictions on new data. Binary classification corresponds to the ideal sampling setting, restricting ourselves to $y_n \in \curly{-1, +1}$, $n = 1, \ldots, N$, and predicting these by the sign of the function that solves \cref{eq:generic-inverse-problem} (this can be done with an appropriate data fitting term). We derive statistical \emph{generalization bounds} for the class of neural networks with uniformly bounded seminorm $\norm{\dummy}_{(m)}$.
In particular, we show that the seminorm bounds the \emph{Rademacher complexity} of these neural networks and use standard results from machine learning theory to relate this to the generalization error. This says that a small seminorm implies good generalization properties. \end{enumerate} \subsection{Related work} Ridge functions are ubiquitous in mathematics and engineering, especially due to the popularity of neural networks, and we refer to the book of~\cite{ridge-functions-book} and the survey of~\cite{ridge-functions-survey} for a fairly up-to-date treatment of the current state of research on ridge functions. One of the most popular areas of research has been approximation theory with superpositions of ridge functions (i.e., single-hidden layer neural networks). Variants of the well-known universal approximation theorem state that \emph{any} continuous function can be approximated arbitrarily well by a superposition of the form in \cref{eq:nn-as-superposition}, under extremely mild conditions on the activation function~\citep{uat1, uat2, uat3, uat4, nn-approx-non-poly}. There are also many papers establishing optimal or near-optimal approximation rates for various function spaces~\citep{approx-Lp,approx-ridge-splines, dimension-independent-approx-bounds}. Another, less popular (though practically more interesting), research area studies what happens when one fits data with a single-hidden layer neural network. This question has been viewed from both a statistical perspective, where risk bounds are established~\citep{risk-bounds-ridge}, and more recently, in the univariate case, from a functional analytic perspective, where connections to variational spline theory are established~\citep{relu-linear-spline,gradient-dynamics-shallow,min-norm-nn-splines}. We remark that these questions have also been studied in the context of deep neural networks. See, for example,~\cite{deep-approx1,deep-approx2} for approximation theory,~\cite{statistical-deep} for statistical properties, and~\cite{balestriero2018spline,representer-deep, convex-duality-deep} for connections to splines. Although the term \emph{ridge function} is rather modern, it is important to note that such functions have been studied for many years under the name \emph{plane waves}. Much of the early work with plane waves revolves around representing solutions to partial differential equations (PDEs), e.g., the wave equation, as superpositions of plane waves. We refer the reader to the classic book of~\cite{plane-waves-pdes} for a full treatment of this subject. The key analysis tool used in these PDE problems is the \emph{Radon transform}. Since a ridge function as in \cref{eq:ridge-function} is constant along the hyperplanes $\vec{w}^\T \vec{x} = c$, $c \in \R$, analysis of such functions becomes convenient in the \emph{Radon domain}. More modern applications of ridge functions arise in computerized tomography following the seminal paper of~\cite{computerized-tomography}, where they coined the term ``ridge function'', and the development of ridgelets in the 1990s, a wavelet-like system inspired by neural networks, independently proposed by~\cite{murata-ridgelets},~\cite{rubin-ridgelets}, and~\cite{candes-phd, candes-ridgelets}. Many refinements to the ridgelet transform have been made recently~\citep{ridgelet-transform-distributions,ridgelet-uat}. As one might expect, the main analysis tool used in these applications is the Radon transform.
Thus, we see that ridge functions and the Radon transform are intrinsically connected. Recent work from the machine learning community has used this connection to understand what kinds of functions can be represented by \emph{infinite-width} (continuum-width) single-hidden layer neural networks with Rectified Linear Unit (ReLU) activation functions, where the ``norm'' of the network weights is bounded~\citep{function-space-relu}. They show that a TV seminorm in the Radon domain exactly captures the Euclidean norm of the network weights, but do not address the optimization problem of fitting neural networks to data. Inspired by this seminorm, we develop and study a family of TV seminorms in the Radon domain and consider the problem of scattered data approximation. We show that single-hidden layer neural networks, with fewer neurons than data points, are solutions to the problem of minimizing these seminorms over the space of all functions for which the seminorms are well-defined, subject to data fitting constraints. A side effect of our analysis is an understanding of the topological structure, specifically a non-Hilbertian Banach space structure, of the spaces defined by these seminorms. Although our main result might seem obvious on a surface level, actually proving it is quite delicate. The problem of learning from a continuous dictionary of atoms with TV-like regularization has been studied before, both in the context of splines~\citep{fisher-jerome,locally-adaptive-regression-splines} and machine learning~\citep{l1-prob,convex-nn}. It is extremely important to note that all of these prior works make the assumption that the relevant spaces are compact. This allows appealing to standard arguments which are useful for proving, e.g., that minimizers of their problems even exist. We also remark that some of these prior works simply assume, without proof, existence of minimizers. Since the Radon domain is an unbounded domain, we cannot appeal to these types of arguments for the problem we study. Thus, a very important question we ask, and subsequently answer, concerns the existence of solutions to \cref{eq:generic-inverse-problem} with our family of seminorms. To this end, we draw on techniques from the recently developed variational framework of $\Ell$-splines~\citep{L-splines}. We also remark that we cannot directly apply the results from this framework since the fundamental assumption about splines is that spline atoms are translates of a single function. Meanwhile, neural network atoms as in \cref{eq:nn-as-superposition} are parameterized by both a direction $\vec{w}_k$ and a translation $b_k$. We also draw on recent results from variational methods~\citep{sparsity-variational-inverse}. Thus, the results of this paper provide a general variational framework as well as novel insights into understanding the properties of functions learned by neural networks fit to data. \subsection{Roadmap} In \cref{sec:main-results} we state our main results and highlight some of the technical challenges and novelties in proving our results. In \cref{sec:prelim} we introduce the notation and mathematical formulation used throughout the paper. In \cref{sec:rep-thm} we prove our main result, the representer theorem. In \cref{sec:splines} we discuss connections between ridge splines and classical polynomial splines.
In \cref{sec:nn-training} we discuss applications of the representer theorem to neural network training, regularization, and generalization. \section{Main Results} \label{sec:main-results} Our main contribution is a representer theorem for problems of the form in \cref{eq:generic-inverse-problem} with our proposed family of seminorms. Our other contributions are (rather straightforward) corollaries to this result. In this section we will state the main results of this paper along with relevant historical remarks. \subsection{The representer theorem} \label{subsec:rep-thm} The notion of a \emph{representer theorem} is a fundamental result regarding kernel methods~\citep{spline-rep-thm, generalized-rep-thm, learning-with-kernels}. In particular, let $(\mathcal{H}, \norm{\dummy}_{\mathcal{H}})$ be any real-valued Hilbert space on $\R^d$ and consider the scattered data $\curly{(\vec{x}_n, y_n)}_{n=1}^N \subset \R^d \times \R$. The classical representer theorem considers the variational problem \begin{equation} \bar{f} = \argmin_{f \in \mathcal{H}} \sum_{n=1}^N \ell(f(\vec{x}_n), y_n) + \lambda \norm{f}_\mathcal{H}^2, \label{eq:kernel-opt} \end{equation} where $\ell(\cdot, \cdot)$ is a convex loss function and $\lambda > 0$ is an adjustable regularization parameter. The representer theorem then states that the solution $\bar{f}$ is unique and $\bar{f} \in \spn\curly{k(\dummy, \vec{x}_n)}_{n=1}^N$, where $k(\dummy, \dummy)$ is the \emph{reproducing kernel} of $\mathcal{H}$. Kernel methods (even before the term ``kernel methods'' was coined) have enjoyed much success dating all the way back to the 1960s, especially due to the tight connections between kernels, reproducing kernel Hilbert spaces, and splines~\citep{splines-minimum, scattered-data, spline-models-observational}. Recently, the term ``representer theorem'' has started being used for general problems of convex regularization~\citep{L-splines, rep-thm-convex-reg,unifying-representer} as a way to designate a parametric formulation of solutions to a variational optimization problem, ideally as a linear combination of atoms from some dictionary. This has allowed much more general problems to be considered than ones like \cref{eq:kernel-opt}, which are restricted to regularizers that are Hilbertian (semi)norms. In particular, some of the recent theory is able to consider problems where the search space is a locally convex topological vector space and the regularizer is a seminorm defined on that space~\citep{sparsity-variational-inverse}. The main utility of these more general representer theorems arises in understanding \emph{sparsity-promoting} regularizers such as the $\ell^1$-norm or its continuous-domain analogue, the $\M$-norm (the total variation norm in the sense of measures), for which the structural properties of the solutions are still not completely understood, though a theory is beginning to emerge. The generality of these kinds of representer theorems has been especially useful in some of the recent development of the notion of \emph{reproducing kernel Banach spaces}~\citep{rkbs,rkbs-book} and of an infinite-dimensional theory of compressed sensing~\citep{infinite-dim-cs1, infinite-dim-cs2} as well as other inverse problems set in the continuous-domain~\citep{inv-prob-space-measures}.
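Before moving on, we record a minimal computational sketch of the classical Hilbertian setting in \cref{eq:kernel-opt}: for the squared-error loss, the expansion coefficients of $\bar{f}$ solve a finite linear system. The snippet below (Python with a Gaussian kernel; all identifiers are ours and purely illustrative) is a sketch of this special case, not of any method studied in this paper.
\begin{verbatim}
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise kernel matrix k(x, z) = exp(-||x - z||^2 / (2 sigma^2));
    # X has shape (N, d) and Z has shape (M, d).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_kernel_ridge(X, y, lam=0.1, sigma=1.0):
    # For squared-error loss, the classical representer theorem reduces
    # the variational problem to the linear system (K + lam I) alpha = y,
    # and the unique solution is f(x) = sum_n alpha_n k(x, x_n).
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return lambda X_new: gaussian_kernel(X_new, X, sigma) @ alpha
\end{verbatim}
No such closed-form linear system is available for the non-Hilbertian, sparsity-promoting regularizers discussed above; there, the more general representer theorems are the appropriate tool.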
We build off of these recent results and propose a family of seminorms (indexed by an integer $m \geq 2$) \begin{equation} \norm{f}_{(m)} \coloneqq c_d \norm{\partial_t^m \ramp^{d-1} \RadonOp f}_{\M(\cyl)}, \label{eq:seminorms} \end{equation} where $\RadonOp$ is the Radon transform defined in \cref{eq:radon-transform}, $\ramp^{d-1}$ is a ramp filter in the Radon domain defined in \cref{eq:ramp-filter}, $\partial_t^m$ is the $m$th partial derivative with respect to $t$, the offset variable in the Radon domain discussed in \cref{subsec:radon-transform}, $c_d$ is a dimension-dependent constant defined in \cref{eq:cd}, and $\norm{\dummy}_{\M(\cyl)}$ denotes the total variation norm (in the sense of measures) on the Radon domain. We remark that the $\M$-norm can be viewed as a ``generalization'' of the $L^1$-norm, with the key property that we can apply the $\M$-norm to distributions that are also ``absolutely integrable'' such as the Dirac impulse (i.e., distributions that can be associated with a finite Radon measure). The space $\cyl$ denotes the Radon domain; in particular, the Radon transform computes integrals over hyperplanes in $\R^d$. Since every hyperplane can be written as $\curly{\vec{x} \in \R^d \st \vec{\gamma}^\T\vec{x} = t}$ for $\vec{\gamma} \in \Sph^{d-1}$, the surface of the $\ell^2$-sphere in $\R^d$, and $t \in \R$, the Radon domain is $\cyl$. Finally, the space $\M(X)$ is the Banach space of finite Radon measures on $X$. The family of seminorms in \cref{eq:seminorms} is thus exactly a family of total variation seminorms in the Radon domain. For brevity, we will write \[ \ROp_m \coloneqq c_d\,\partial_t^m \ramp^{d-1} \RadonOp. \] Before stating our representer theorem, we remark that our result requires that the null space of the operator $\ROp_m$ is small, i.e., finite-dimensional. As discussed in~\cite{L-splines} and in the $L^2$-theory of radial basis functions and polyharmonic splines~\cite[Chapter~10]{scattered-data-approx}, constructing operators acting on multivariate functions with finite-dimensional null spaces is nearly impossible\footnote{For example, consider $\Delta$, the Laplacian operator in $\R^d$. Its null space is the space of harmonic functions, which is infinite-dimensional for $d \geq 2$. On the other hand, the univariate Laplacian operator, $\d^2/\d x^2$, has a finite-dimensional null space which is simply $\spn\curly{1, x}$.}. To bypass this technicality, we use a common technique from variational spline theory (see, e.g.,~\cite{L-splines}) and impose a \emph{growth restriction} on the functions of interest via the weighted Lebesgue space $L^{\infty, n_0}(\R^d)$ (not to be confused with the Lorentz spaces), defined via the weighted $L^\infty$-norm \[ \norm{f}_{\infty, n_0} \coloneqq \esssup_{\vec{x} \in \R^d}\: \abs{f(\vec{x})}\paren{1 + \norm{\vec{x}}_2}^{-n_0}, \] where $n_0 \in \mathbb{Z}$ is the \emph{algebraic growth rate}. In other words, the space $L^{\infty, n_0}(\R^d)$ is the space of functions mapping $\R^d \to \R$ with algebraic growth rate $n_0$. We will later see in \cref{eq:growth-restriction} that the appropriate choice of algebraic growth rate for the operator $\ROp_m$ is $n_0 \coloneqq m - 1$.
This allows us to define the (growth restricted) \emph{null space} of $\ROp_m$ as \begin{equation} \N_m \coloneqq \curly{q \in L^{\infty, m - 1}(\R^d) \st \ROp_m q = 0} \label{eq:null-space} \end{equation} and the (growth restricted) \emph{native space} of $\ROp_m$ as \begin{equation} \F_m \coloneqq \curly{f \in L^{\infty, m - 1}(\R^d) \st \ROp_m f \in \M(\cyl)}. \label{eq:native-space} \end{equation} We prove in \cref{lemma:finite-dim-null-space} that $\N_m$ is indeed finite-dimensional. We now state our representer theorem, in which we show that there exists a sparse solution to the inverse problem in \cref{eq:generic-inverse-problem} when the seminorm takes the form in \cref{eq:seminorms} and the search space is $\F_m$. In particular, we show that the sparse solution takes the form of the sum of a single-hidden layer neural network as in \cref{eq:nn-as-superposition} and a low-degree polynomial. In this context, sparse means that the width of the neural network and the degree of the polynomial are, a priori, bounded from above. \begin{theorem} \label{thm:rep-thm} Assume the following: \begin{enumerate}[label=\arabic*.] \item The function $G: \R^N \to \R$ is strictly convex, coercive, and lower semi-continuous. \item The operator $\sensing: \F_m \to \R^N$ is continuous\footnote{In order to define continuity, $\F_m$ needs to be a \emph{topological} vector space. We prove in \cref{thm:banach-space} that $\F_m$ is a Banach space, which provides it with a topology, allowing continuity to be defined.}, linear, and surjective. \item The inverse problem is well-posed over the null space $\N_m$ of $\ROp_m$, i.e., $\sensing q_1 = \sensing q_2$ if and only if $q_1 = q_2$, for any $q_1, q_2 \in \N_m$. \label{item:well-posed-null-space} \end{enumerate} Then, there exists a sparse minimizer to the variational problem \begin{equation} \min_{f \in \native_m} \: \datafit(\sensing f) + \norm{\ROp_m f}_{\M(\cyl)} \label{eq:inverse-problem} \end{equation} that takes the form \begin{equation} s(\vec{x}) = \sum_{k=1}^K v_k \, \rho_m(\vec{w}_k^\T \vec{x} - b_k) + c(\vec{x}), \label{eq:ridge-spline} \end{equation} where $K \leq N - \dim \N_m$, $\rho_m = \max\curly{0, \dummy}^{m-1} / (m - 1)!$, $\vec{w}_k \in \Sph^{d-1}$, $v_k \in \R$, $b_k \in \R$, and $c(\dummy)$ is a polynomial of degree strictly less than $m$. \end{theorem} Proving \cref{thm:rep-thm} hinges on several technical results, the most important being the topological structure of the native space $\native_m$. In order to do any kind of analysis (e.g., proving that minimizers of \cref{eq:inverse-problem} even exist), we require the native space $\native_m$ to have some ``nice'' topological structure. We prove in \cref{thm:banach-space} that $\native_m$, when equipped with a proper direct-sum topology, is a Banach space. This key result hinges on being able to construct a stable right inverse of the operator $\ROp_m$, which we outline in \cref{lemma:right-inverse}. We remark that exhibiting a Banach space structure of the native space of an operator is common in variational inverse problems, e.g., in the theory of $\Ell$-splines~\citep{L-splines, native-banach}. We remark, however, that our result is, to the best of our knowledge, the first to exhibit this structure on a non-Euclidean domain, which introduces some nuances compared to the prior work of \citet{L-splines, native-banach}. \Cref{thm:rep-thm} shows that while the problem is posed in the continuum, it admits \emph{parametric} solutions in terms of a finite number of parameters.
This demonstrates the sparsifying effect of the $\M$-norm, similar to its discrete analogue, the $\ell^1$-norm. We also remark that although the problem in \cref{thm:rep-thm} admits a sparse solution, it is important to note that the solution may not be unique and there may also exist non-sparse solutions. \begin{remark} The polynomial term $c(\vec{x})$ that appears in \cref{eq:ridge-spline} corresponds to a term in the null space $\N_m$. When $m = 2$, the network in \cref{eq:ridge-spline} is a ReLU network and $c(\vec{x})$ takes the form \[ c(\vec{x}) = \vec{u}^\T\vec{x} + s, \] where $\vec{u} \in \R^d$ and $s \in \R$. Thus, when $m = 2$, \cref{eq:ridge-spline} corresponds to a ReLU network with a \emph{skip connection}~\citep{skip-connections}. \end{remark} \begin{remark} \label[Remark]{rem:rescale} The fact that $\vec{w}_k \in \Sph^{d-1}$ does not restrict the single-hidden layer neural network due to the homogeneity of the truncated power functions. Indeed, given any single-hidden layer neural network with $\vec{w}_k \in \R^d \setminus \curly{\vec{0}}$, we can use the fact that $\rho_m$ is homogeneous of degree $m - 1$ to rewrite the network as \[ \vec{x} \mapsto \sum_{k=1}^K v_k \norm{\vec{w}_k}_2^{m - 1} \rho_m(\tilde{\vec{w}}_k^\T \vec{x} - \tilde{b}_k) + c(\vec{x}), \] where $\tilde{\vec{w}}_k \coloneqq \vec{w}_k / \norm{\vec{w}_k}_2 \in \Sph^{d-1}$ and $\tilde{b}_k \coloneqq b_k / \norm{\vec{w}_k}_2 \in \R$. We use this fact to prove \cref{prop:equiv-opts}, which recasts the variational problem in \cref{eq:inverse-problem} as a finite-dimensional neural network training problem (with no constraints on the input layer weights). \end{remark} The proof of \cref{thm:rep-thm} appears in \cref{sec:rep-thm}. \subsection{Ridge splines} Splines and variational problems are tightly connected~\citep{splines-sobolev-seminorm, splines-variational, L-splines}. In the framework of $\Ell$-splines~\citep{L-splines}, a pseudodifferential operator, $\Ell: \Sch'(\R^d) \to \Sch'(\R^d)$, where $\Sch'(\R^d)$ denotes the space of tempered distributions on $\R^d$, is associated with a spline, and variational problems of the form \begin{equation} \min_{f \in \native_m} \: \datafit(\sensing f) + \norm{\Ell f}_{\M(\R^d)} \label{eq:L-spline-problem} \end{equation} are studied, where $\datafit$ is a data fitting term, $\sensing$ is a measurement operator, $\native_m$ is the native space of $\Ell$, and $\M(\R^d)$ is the space of finite Radon measures on $\R^d$, which forms a Banach space when equipped with $\norm{\dummy}_{\M(\R^d)}$, the total variation norm in the sense of measures. The key result from~\cite{L-splines} is a representer theorem for the above problem, stating that there exists a sparse solution that is a so-called $\Ell$-spline. By associating an operator with a spline, we have a simple way to characterize which functions are splines, via the following definition. \begin{definition}[nonuniform $\Ell$-spline {\cite[Definition~2]{L-splines}}] \label[Definition]{defn:L-spline} A function $s: \mathbb{R}^d \to \mathbb{R}$ (of slow growth) is said to be a \emph{nonuniform $\Ell$-spline} if \[ \Ell\curly{s} = \sum_{k=1}^K v_k \, \delta_{\R^d}(\dummy - \vec{x}_k), \] where $\delta_{\R^d}$ denotes the Dirac impulse on $\R^d$, $\curly{v_k}_{k=1}^K$ is a sequence of weights, and the locations of the Dirac impulses are at the spline knots $\curly{\vec{x}_k}_{k=1}^K$.
\end{definition} Given the similarity between the variational problem in \cref{eq:L-spline-problem} and our variational problem in \cref{eq:inverse-problem}, we can analogously define the notion of a (polynomial) \emph{ridge spline}. Before stating this definition, we remark that in this paper we will be working with Dirac impulses on different domains. For clarity, we will subscript the ``$\delta$'' with the appropriate domain, e.g., $\delta_{\R^d}$ denotes the Dirac impulse on $\R^d$ and $\delta_{\cyl}$ denotes the Dirac impulse on $\cyl$. \begin{definition}[nonuniform polynomial ridge spline] \label[Definition]{defn:ridge-spline} A function $s: \mathbb{R}^d \to \mathbb{R}$ (of slow growth) is said to be a \emph{nonuniform polynomial ridge spline} of order $m$ if \begin{equation} \ROp_m\curly{s} = \sum_{k=1}^K v_k \, \sq{\frac{\delta_\cyl(\dummy - \vec{z}_k) + (-1)^m \delta_\cyl(\dummy + \vec{z}_k)}{2}}, \label{eq:ridge-spline-innovation} \end{equation} where $\curly{v_k}_{k=1}^K$ is a sequence of weights and the locations of the Dirac impulses are at $\vec{z}_k = (\vec{w}_k, b_k) \in \cyl$. The collection $\curly{\vec{z}_k}_{k=1}^K$ can be viewed as a collection of Radon domain spline knots. \end{definition} \begin{remark} \label[Remark]{rem:radon-impulse} The reason \begin{equation} \frac{\delta_\cyl(\dummy - \vec{z}_k) + (-1)^m \delta_\cyl(\dummy + \vec{z}_k)}{2} \label{eq:radon-impulse} \end{equation} appears in \cref{eq:ridge-spline-innovation} rather than $\delta_\cyl(\dummy - \vec{z}_k)$ is that the operator $\ROp_m$ maps functions $f \in \F_m$ to even (respectively odd) elements of $\M(\cyl)$ when $m$ is even (respectively odd). Thus, \cref{eq:radon-impulse} can be viewed as an even or odd version of the usual translated Dirac impulse in the sense that, when acting on even or odd test functions defined on $\cyl$, it is the point evaluation operator. \end{remark} \begin{remark} \label[Remark]{rem:nn-atoms-sparsified} When $s$ is a neural network as in \cref{eq:ridge-spline}, we have that \cref{eq:ridge-spline-innovation} holds. The way to understand this is that the neurons in \cref{eq:ridge-function} are ``sparsified'' by $\ROp_m$ in the sense that \[ \ROp_m r_{(\vec{w}, b)}^{(m)} = \frac{\delta_\cyl(\dummy - \vec{z}) + (-1)^m \delta_\cyl(\dummy + \vec{z})}{2}, \] where $\vec{z} = (\vec{w}, b) \in \cyl$ and $r_{(\vec{w}, b)}^{(m)}(\vec{x}) \coloneqq \rho_m(\vec{w}^\T\vec{x} - b)$. We show that this is true in \cref{lemma:nn-atoms-sparsified}. In other words, $r_{(\vec{w}, b)}^{(m)}$ can be viewed as a translated \emph{Green's function} of $\ROp_m$, where the translation is in the Radon domain. \end{remark} \begin{figure}[htb!] \centering \begin{tikzpicture} \node (a) at (0, 0) { \includegraphics[scale=0.4]{fig/cubic-spline.pdf} }; \node (b) at (8, 0) { \includegraphics[scale=0.4]{fig/cubic-spline-innovation.pdf} }; \draw[gray, -{Latex[width=2mm]}] (a) -- (b) node[midway, above]{\footnotesize$\D^4$}; \end{tikzpicture} \caption{In the left plot we have a cubic spline with 7 knots. After applying $\D^4$, the fourth derivative operator, we are left with 7 Dirac impulses as seen in the right plot.} \label{fig:cubic-spline} \end{figure} \begin{figure}[htb!]
\centering \begin{tikzpicture} \node (a) at (0, 0) { \includegraphics[scale=0.4]{fig/cubic-ridge-spline.pdf} }; \node (c) at (8, 0) { \includegraphics[scale=0.4]{fig/cubic-ridge-spline-innovation.pdf} }; \draw[gray, -{Latex[width=2mm]}] (a) -- (c) node[midway, above]{\footnotesize$\ROp_4$}; \end{tikzpicture} \caption{In the left plot we have a two-dimensional cubic ridge spline with 7 neurons. After applying $\ROp_4$, we are left with 7 Dirac impulses in the Radon domain, which are designated by the dots in the right plot. We have parameterized the directions in the Radon domain by $\theta \in [0, \pi)$. This parameterization of the two-dimensional Radon domain is known as a \emph{sinogram}. It eliminates the two impulses per neuron we see in \cref{eq:ridge-spline-innovation}, as $\theta \in [0, \pi)$ covers only ``half'' of the unit circle $\Sph^1$.} \label{fig:cubic-ridge-spline} \end{figure} We illustrate the sparsifying effect of the operator $\Ell$ in the case of cubic splines, i.e., $\Ell = \D^4$, the fourth derivative operator, in \cref{fig:cubic-spline}. We also illustrate the sparsifying effect of the operator $\ROp_m$ in the case of cubic ridge splines, i.e., $m = 4$, in \cref{fig:cubic-ridge-spline}. We also remark that in the univariate case ($d = 1$), our notion of a polynomial ridge spline of order $m$ exactly coincides with the classical notion of a univariate polynomial spline of order $m$. We show this in \cref{subsec:1D-splines}. We finally remark that when $m$ is even, we have the equality \[ \ROp_m = c_d\,\ramp^{d-1} \RadonOp \Delta^{m/2}, \] by the intertwining relations of the Radon transform and the Laplacian, which we later discuss in \cref{eq:radon-intertwining}. This provides another way to understand how $\ROp_m$ sparsifies ridge splines. We illustrate this in the $m = 2$ (i.e., ReLU network) case in \cref{fig:linear-ridge-spline}. \begin{figure}[htb!] \centering \begin{tikzpicture} \node (a) at (0, 0) { \includegraphics[scale=0.33]{fig/linear-ridge-spline.pdf} }; \node (b) at (5, 0) { \includegraphics[scale=0.33]{fig/linear-ridge-spline-laplacian.pdf} }; \node (c) at (10, 0) { \includegraphics[scale=0.33]{fig/linear-ridge-spline-innovation.pdf} }; \draw[gray, -{Latex[width=2mm]}] (a) -- (b) node[midway, above]{\footnotesize$\Delta$}; \draw[gray, -{Latex[width=2mm]}] (b) -- (c) node[midway, above]{\footnotesize$c_d \ramp \RadonOp$}; \end{tikzpicture} \caption{In the left plot we have a two-dimensional linear ridge spline (single-hidden layer ReLU network) with 7 neurons. After applying $\Delta$, we get an ``impulse sheet'', i.e., a mapping of the form $\vec{x} \mapsto \delta_\R(\vec{w}_k^\T\vec{x} - b_k)$, for each neuron, designated by the black lines in the top-down view of the linear ridge spline in the middle plot. Then, after applying the Radon transform and ramp filter to the middle plot, we arrive at 7 Dirac impulses in the Radon domain, which are designated by the dots in the right plot.
Just as in \cref{fig:cubic-ridge-spline}, we have parameterized the directions in the Radon domain by $\theta \in [0, \pi)$, eliminating the two impulses per neuron we see in \cref{eq:ridge-spline-innovation}.} \label{fig:linear-ridge-spline} \end{figure} \subsection{Scattered data approximation and neural network training} Since \cref{thm:rep-thm} says that a single-hidden layer neural network as in \cref{eq:ridge-spline} is a solution to the continuous-domain inverse problem in \cref{eq:inverse-problem}, we can recast the continuous-domain problem in \cref{eq:inverse-problem} as the \emph{finite-dimensional neural network training} problem \begin{equation} \min_{\vec{\theta} \in \Theta} \: \datafit(\sensing f_\vec{\theta}) + \norm{\ROp_m f_\vec{\theta}}_{\M(\cyl)}, \label{eq:nn-problem} \end{equation} so long as the number of neurons $K$ is large enough\footnote{We characterize what large enough means in \cref{prop:equiv-opts}.} ($K \geq N$ suffices, giving insight into the efficacy of overparameterization in neural network models), where \[ f_\vec{\theta}(\vec{x}) \coloneqq \sum_{k=1}^K v_k \, \rho_m(\vec{w}_k^\T \vec{x} - b_k) + c(\vec{x}). \] Here, $\vec{\theta} = (\vec{w}_1, \ldots, \vec{w}_K, v_1, \ldots, v_K, b_1, \ldots, b_K, c)$ contains the neural network parameters, $\Theta$ is the collection of all $\vec{\theta}$ such that $v_k \in \R$, $\vec{w}_k \in \R^d$, and $b_k \in \R$ for $k = 1, \ldots, K$, and $c$ is a polynomial of degree strictly less than $m$. We show in \cref{lemma:nn-norm} that \[ \norm{\ROp_m f_\vec{\theta}}_{\M(\cyl)} = \sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m-1}, \] and then use this fact to show that \cref{eq:nn-problem} is equivalent to two neural network training problems, with variants of well-known neural network regularizers, in the following proposition. \begin{proposition} \label{prop:equiv-opts} The solutions to the finite-dimensional optimization in \cref{eq:nn-problem} are solutions to the optimization in \cref{eq:inverse-problem} so long as $K \geq N - \dim \N_m$. Additionally, the optimization in \cref{eq:nn-problem} is equivalent to \begin{equation} \min_{\vec{\theta} \in \Theta} \: \datafit(\sensing f_\vec{\theta}) + {\sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m - 1}}, \label{eq:nn-training-with-pathnorm} \end{equation} for any $K \in \mathbb{N}$. Furthermore, the solutions to \begin{equation} \min_{\vec{\theta} \in \Theta} \: \datafit(\sensing f_\vec{\theta}) + {\frac{1}{2}\sum_{k=1}^K \paren{\abs{v_k}^2 + \norm{\vec{w}_k}_2^{2m - 2}}} \label{eq:nn-training-with-weight-decay} \end{equation} are also solutions to \cref{eq:nn-training-with-pathnorm} for any $K \in \mathbb{N}$. Finally, for both problems in \cref{eq:nn-training-with-pathnorm,eq:nn-training-with-weight-decay}, for $K_1$ and $K_2$ such that $K_1 > K_2$, a global minimizer when $K = K_2$ will always be a global minimizer when $K = K_1$. \end{proposition} When $m = 2$, which coincides with neural networks with ReLU activation functions, \cref{eq:nn-training-with-pathnorm,eq:nn-training-with-weight-decay} correspond to previously studied training problems. The regularizer in \cref{eq:nn-training-with-pathnorm} coincides with the notion of \emph{$\ell^1$-path-norm} regularization as proposed in~\cite{path-norm} and the regularizer in \cref{eq:nn-training-with-weight-decay} coincides with the notion of training a neural network with \emph{weight decay} as proposed in~\cite{weight-decay}.
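The relationship between \cref{eq:nn-training-with-pathnorm,eq:nn-training-with-weight-decay} can be understood through a short, standard rescaling argument (a sketch): by the AM--GM inequality, \[ \frac{1}{2}\paren{\abs{v_k}^2 + \norm{\vec{w}_k}_2^{2m-2}} \geq \abs{v_k}\,\norm{\vec{w}_k}_2^{m-1}, \] with equality if and only if $\abs{v_k} = \norm{\vec{w}_k}_2^{m-1}$, and, by the homogeneity of $\rho_m$ discussed in \cref{rem:rescale}, each neuron can be rescaled to achieve this balance without changing the function $f_\vec{\theta}$.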
Thus, our result shows that these notions of regularization are intrinsically tied to the ReLU activation function, and, perhaps, variants such as the regularizers that appear in \cref{eq:nn-training-with-pathnorm,eq:nn-training-with-weight-decay} should be used in practice for non-ReLU activation functions, where $m - 1$ could correspond to the algebraic growth rate of the activation function. In machine learning, the measurement model is usually \emph{ideal sampling}, i.e., the measurement operator $\sensing$ acts on a function $f: \R^d \to \R$ via \begin{equation} \sensing: \F_m \ni f \mapsto \begin{bmatrix} \ang{\delta_{\R^d}(\dummy - \vec{x}_1), f} \\ \vdots \\ \ang{\delta_{\R^d}(\dummy - \vec{x}_N), f} \end{bmatrix} = \begin{bmatrix} f(\vec{x}_1) \\ \vdots \\ f(\vec{x}_N) \end{bmatrix} \in \R^N, \label{eq:ideal-sampling} \end{equation} so the problem is to approximate the scattered data $\curly{(\vec{x}_n, y_n)}_{n=1}^N \subset \R^d \times \R$. For the above $\sensing$ to be a valid measurement operator for \cref{thm:rep-thm}, it must be continuous. \begin{lemma} \label[Lemma]{lemma:ideal-sampling-continuous} The operator $\sensing: \F_m \to \R^N$ defined in \cref{eq:ideal-sampling} is continuous. \end{lemma} The proof of \cref{lemma:ideal-sampling-continuous} appears in \cref{app:aux-proofs}. \Cref{lemma:ideal-sampling-continuous} also says that $\F_m$ is a \emph{reproducing kernel Banach space}. By choosing an appropriate data fitting term $G$, the generality of our main result in \cref{thm:rep-thm} says that the solutions to problems with interpolation constraints \[ \min_{f \in \native_m} \: \norm{\ROp_m f}_{\M(\cyl)} \quad\subj\quad f(\vec{x}_n) = y_n, \: n = 1, \ldots, N \] and to regularized problems with soft constraints in the case of noisy data \begin{equation} \min_{f \in \native_m} \: \sum_{n=1}^N \ell(f(\vec{x}_n), y_n) + \lambda \norm{\ROp_m f}_{\M(\cyl)}, \label{eq:regularized-problem} \end{equation} where $\lambda > 0$ is an adjustable regularization parameter and $\ell(\dummy, \dummy)$ is an appropriate loss function, e.g., the squared error loss, are single-hidden layer neural networks. We can then invoke \cref{prop:equiv-opts} to recast the problem in \cref{eq:regularized-problem} as either of the equivalent finite-dimensional neural network training problems: \begin{align*} &\min_{\vec{\theta} \in \Theta} \: \sum_{n=1}^N \ell(f_\vec{\theta}(\vec{x}_n), y_n) + \lambda \sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m - 1} \numberthis \label{eq:tikhonov-problem} \\ &\min_{\vec{\theta} \in \Theta} \: \sum_{n=1}^N \ell(f_\vec{\theta}(\vec{x}_n), y_n) + \frac{\lambda}{2}\sum_{k=1}^K \paren{\abs{v_k}^2 + \norm{\vec{w}_k}_2^{2m - 2}}, \end{align*} so long as the number of neurons $K$ is large enough as stated in \cref{prop:equiv-opts}. The two problems in the above display correspond to how neural network training problems are actually set up. \subsection{Statistical generalization bounds} Neural networks are widely used for pattern classification. In the ideal sampling scenario, the generality of \cref{thm:rep-thm} allows us to consider optimizations of the form \begin{equation} \min_{f \in \F_m} \: \sum_{n=1}^N \ell\big(y_n f(\vec{x}_n)\big) \quad\subj\quad \norm{\ROp_m f}_{\M(\cyl)} \leq B, \label{eq:class-opt} \end{equation} for some constant $B < \infty$, where $\ell(\dummy)$ is an appropriate $L$-Lipschitz loss function of the product $y_n f(\vec{x}_n)$.
If we assume that $\curly{(\vec{x}_n,y_n)}_{n=1}^N$ are drawn independently and identically from some unknown underlying probability distribution, $y_n \in \curly{-1,+1}$, $n=1,\dots,N$, and the loss function assigns positive losses when $\sgn(f(\vec{x}_n)) \neq y_n$ (or equivalently when $y_n f(\vec{x}_n) < 0$), this is the \emph{binary classification} setting. Given this setup, it is natural to examine whether solutions to \cref{eq:class-opt} predict well on new random examples $(\vec{x},y)$ drawn independently from the same underlying distribution. We can invoke \cref{prop:equiv-opts} and consider optimization over neural network parameters by considering the recast version of \cref{eq:class-opt} \begin{equation} \min_{\vec{\theta} \in \Theta} \: \sum_{n=1}^N \ell\big(y_n f_\vec{\theta}(\vec{x}_n)\big) \quad\subj\quad \sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m-1} \leq B, \label{eq:nn-class-opt} \end{equation} where \[ f_\vec{\theta}(\vec{x}) \coloneqq \sum_{k=1}^K v_k \rho_m(\vec{w}_k^\T \vec{x} - b_k) + c(\vec{x}). \] In particular, the solution to \cref{eq:nn-class-opt} is known as an \emph{Ivanov estimator}~\citep{estimators}, which is equivalent to the solution to \cref{eq:tikhonov-problem} for a particular choice of $\lambda$ that depends on $B$ and the data through the data fitting term. In this section we provide a \emph{generalization bound} for the Ivanov estimator. Let $\bar{f}$ be a minimizer of the optimization in \cref{eq:nn-class-opt}. We show that $B$ directly controls the error probability of $\bar{f}$, i.e., ${\mathbb{P}}\big(y \bar{f}(\vec{x})<0\big)$, where $(\vec{x},y)$ is an independent sample from the underlying distribution. This is referred to as the \emph{generalization error} in machine learning parlance. We follow the standard approach based on Rademacher complexity \citep{bartlett2002rademacher,shalev2014understanding}. Let $\F$ be a hypothesis space. For every $f \in \F$, define its \emph{risk} and \emph{empirical risk} \[ R(f) \coloneqq \E\left[\ell\big(y f(\vec{x})\big)\right] \quad\text{and}\quad \hat{R}_N(f) \coloneqq \frac{1}{N} \sum_{n=1}^N \ell\big(y_n f(\vec{x}_n)\big), \] and assume the loss function satisfies $0 \leq \ell\big(y_n f(\vec{x}_n)\big) \leq C_0$ almost surely, for $n=1,\ldots,N$ and some constant $C_0 < \infty$. Then, for every $f \in \F$, we have the following generalization bound: with probability at least $1 - \delta$, \[ R(f) \leq \hat{R}_N(f) + L\, \Rad(\F) + C_0 \sqrt{\frac{\log(1 / \delta)}{2N}}, \] where we use the fact that the loss $\ell$ is $L$-Lipschitz and where $\Rad(\F)$ is the \emph{Rademacher complexity} of the class $\F$ defined via \begin{equation} \Rad(\F) \coloneqq 2 \E\left[\sup_{f\in\F} \frac{1}{N} \sum_{n=1}^N\sigma_n f(\vec{x}_n) \right], \label{eq:rad} \end{equation} where $\curly{\sigma_n}_{n=1}^N$ are independent and identically distributed Rademacher random variables. In particular, if the expected loss is an upper bound on the probability of error (e.g., squared error, $(1-yf(\vec{x}))^2$, or hinge loss, $\max\{0,1-yf(\vec{x})\}$), then we may use this to bound the probability of error: $\P\left(y \bar{f}(\vec{x}) < 0\right) \leq R(\bar{f})$.
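For intuition, the expectation in \cref{eq:rad} can be estimated by Monte-Carlo when the supremum is restricted to a finite subfamily of the hypothesis space (a simplification we make only for this illustration); a minimal sketch in Python (identifiers ours):
\begin{verbatim}
import numpy as np

def empirical_rademacher(preds, n_draws=2000, seed=0):
    # Monte-Carlo estimate of the Rademacher complexity for a *finite*
    # class. preds has shape (M, N); row j holds (f_j(x_1),...,f_j(x_N)).
    rng = np.random.default_rng(seed)
    M, N = preds.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=N)  # Rademacher signs
        total += np.max(preds @ sigma) / N       # sup over the class
    return 2.0 * total / n_draws
\end{verbatim}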
To provide a generalization bound for the minimizer $\bar{f}$ of \cref{eq:class-opt}, we assume that the empirical data satisfies $\norm{\vec{x}_n}_2 \leq C/2$ almost surely for $n=1,\dots,N$ and some constant $C < \infty$ and consider the hypothesis space \[ \F_{\Theta} \coloneqq \curly{f_\vec{\theta} \st \vec{\theta} \in \Theta, \:\: \sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m-1} \leq B, \:\: \abs{b_k} \leq \frac{C}{2}, k = 1, \ldots, K, \:\: K \geq 0}. \] The reason we impose that $\abs{b_k} \leq C/2$, $k = 1, \ldots, K$, is that we will later show in \cref{lemma:bias-bound} that all solutions to \cref{eq:class-opt} satisfy \[ \abs{b_k} \leq \max_{n=1, \ldots, N} \norm{\vec{x}_n}_2 \] for every $k = 1, \ldots, K$. We bound the Rademacher complexity of $\F_\Theta$ in the following theorem. \begin{theorem} Assume that $\norm{\vec{x}_n}_2 \leq C/2$ almost surely for $n=1,\dots,N$ and some constant $C < \infty$. Then, \[ \Rad(\F_\Theta) \leq \frac{2B C^{m-1}}{\sqrt{N} (m-1)!} + \Rad(c), \] where $\Rad(c)$ denotes the Rademacher complexity of the polynomial terms $c(\vec{x})$ that appear in the solutions to the optimization in \cref{eq:nn-class-opt}. \label{thm:rad} \end{theorem} \begin{remark} \Cref{thm:rad} shows that the Rademacher complexity, and hence the generalization error, is controlled by bounding the seminorm $\norm{\ROp_m f}_{\M(\cyl)} \leq B$. In practice, neural networks are typically implemented without the polynomial term $c(\vec{x})$, in which case the same bound holds without $\Rad(c)$. \end{remark} \section{Preliminaries \& Notation} \label{sec:prelim} In this section we will introduce the mathematical formulation and notation used in the remainder of the paper. \subsection{Spaces of functions and distributions} \label{sec:dist} Let $\Sch(\R^d)$ be the Schwartz space of smooth and rapidly decaying test functions on $\R^d$. Its continuous dual, $\Sch'(\R^d)$, is the space of tempered distributions on $\R^d$. Since we are interested in the Radon domain, we are also interested in these spaces on $\cyl$. We say $\psi \in \Sch(\cyl)$ when $\psi$ is smooth and satisfies the decay condition~\cite[Chapter~6]{fourier} \[ \sup_{\substack{\vec{\gamma} \in \Sph^{d-1} \\ t \in \R}} \abs{\paren{1 + \abs{t}^k} \de[^\ell]{t^\ell} (\D\psi)(\vec{\gamma}, t)} < \infty \] for all integers $k, \ell \geq 0$ and for all differential operators $\D$ in $\vec{\gamma}$. Since the Schwartz spaces are nuclear, it follows that the above definition is equivalent to saying $\Sch(\cyl) = \mathcal{D}(\Sph^{d-1}) \,\hat{\otimes}\, \Sch(\R)$, where $\mathcal{D}(\Sph^{d-1})$ is the space of smooth functions on $\Sph^{d-1}$ and $\hat{\otimes}$ is the \emph{topological} tensor product~\cite[Chapter~III]{tvs}. We can then define the space of tempered distributions on $\cyl$ as its continuous dual, $\Sch'(\cyl)$. We will later see in \cref{subsec:radon-transform} that, in order to define the Radon transform of distributions, we will be interested in the \emph{Lizorkin test functions} $\Sch_0(\R^d)$ of highly time-frequency localized functions over $\R^d$~\citep{wavelets-lizorkin}. This is a closed subspace of $\Sch(\R^d)$ consisting of functions with all moments equal to $0$, i.e., $\varphi \in \Sch_0(\R^d)$ when $\varphi \in \Sch(\R^d)$ and \[ \int_{\R^d} \vec{x}^\vec{\alpha} \varphi(\vec{x}) \dd \vec{x} = 0, \] for every multi-index $\vec{\alpha}$. We can then define the space of \emph{Lizorkin distributions}, $\Sch_0'(\R^d)$, as the continuous dual of the Lizorkin test functions.
The space of Lizorkin distributions can be viewed as topologically isomorphic to the quotient space of tempered distributions by the space of polynomials, i.e., if $\mathcal{P}(\R^d)$ is the space of polynomials on $\R^d$, then $\Sch_0'(\R^d) \cong \Sch'(\R^d) / \mathcal{P}(\R^d)$~\cite[Chapter~8]{lizorkin-triebel}. Just as above, we can define the Lizorkin test functions on $\cyl$ as $\Sch_0(\cyl) = \mathcal{D}(\Sph^{d-1}) \,\hat{\otimes}\, \Sch_0(\R)$ and the space of Lizorkin distributions on $\cyl$ as its continuous dual, $\Sch_0'(\cyl)$. Let $X$ be a locally compact Hausdorff space. The Riesz--Markov--Kakutani representation theorem says that $\M(X)$, the space of finite Radon measures on $X$, is the continuous dual of $C_0(X)$, the space of continuous functions vanishing at infinity~\cite[Chapter~7]{folland}. Since $C_0(X)$ is a Banach space when equipped with the uniform norm, we have \begin{equation} \norm{u}_{\M(X)} \coloneqq \sup_{\substack{\varphi \in C_0(X) \\ \norm{\varphi}_\infty = 1}} \ang{u, \varphi}. \label{eq:M-norm} \end{equation} The norm $\norm{\dummy}_{\M(X)}$ is exactly the \emph{total variation} norm (in the sense of measures). As $\Sch_0(X)$ is dense in $C_0(X)$ \cite[cf.][]{denseness-lizorkin}, we can associate every measure in $\M(X)$ with a Lizorkin distribution and view $\M(X) \subset \Sch_0'(X)$, providing the description \[ \M(X) \coloneqq \curly{u \in \Sch_0'(X) \st \norm{u}_{\M(X)} < \infty}, \] and so the duality pairing $\ang{\dummy, \dummy}$ in \cref{eq:M-norm} can be viewed, formally, as the integral \[ \ang{u, \varphi} = \int_{X} \varphi(\vec{x}) u(\vec{x}) \dd \vec{x}, \] where $u$ is viewed as an element of $\Sch_0'(X)$. In this paper, we will mostly be working with $X = \cyl$, the Radon domain. As we will later see in \cref{subsec:radon-transform}, the Radon transform of a function is necessarily even, so we will be interested in the Banach spaces of odd and even finite Radon measures on $\cyl$. Viewing $\M(X) \subset \Sch_0'(X)$, put \begin{align*} \Me(\cyl) &\coloneqq \curly{u \in \M(\cyl) \st u(\vec{\gamma}, t) = u(-\vec{\gamma}, -t)} \\ \Mo(\cyl) &\coloneqq \curly{u \in \M(\cyl) \st u(\vec{\gamma}, t) = -u(-\vec{\gamma}, -t)}. \end{align*} One can then verify that the predual of $\Mo(\cyl)$ is the subspace of odd functions in $C_0(\cyl)$ and the predual of $\Me(\cyl)$ is the subspace of even functions in $C_0(\cyl)$, and so the associated norms of $\Mo(\cyl)$ and $\Me(\cyl)$ can be defined accordingly. Finally, $\Me(\cyl)$ can equivalently be viewed as $\M(\P^d)$, where $\P^d$ denotes the manifold of hyperplanes in $\R^d$. This follows from the fact that every hyperplane in $\R^d$ takes the form $h_{(\vec{\gamma}, t)} \coloneqq \curly{\vec{x} \in \R^d \st \vec{\gamma}^\T\vec{x} = t}$ for some $(\vec{\gamma}, t) \in \cyl$ and $h_{(\vec{\gamma}, t)} = h_{(-\vec{\gamma}, -t)}$. It will sometimes be convenient to work with $\M(\P^d)$ instead of $\Me(\cyl)$ as $\P^d$ is a locally compact Hausdorff space. \subsection{The Fourier transform} The Fourier transform $\FourierOp$ of $f: \R^d \to \mathbb{C}$ and inverse Fourier transform $\FourierOp^{-1}$ of $F: \R^d \to \mathbb{C}$ are given by \begin{align*} \Fourier{f}(\vec{\xi}) &\coloneqq \int_{\R^d} f(\vec{x}) e^{-\imag \vec{x}^\T\vec{\xi}} \dd \vec{x}, \quad \vec{\xi} \in \R^d \\ \InvFourier{F}(\vec{x}) &\coloneqq \frac{1}{(2\pi)^d}\int_{\R^d} e^{\,\imag \vec{x}^\T \vec{\xi}} F(\vec{\xi}) \dd\vec{\xi}, \quad \vec{x} \in \R^d, \end{align*} where $\imag^2 = -1$.
We will usually write $\hat{\dummy}$ for $\Fourier{\dummy}$. The Fourier transform and its inverse can be applied to functions in $\Sch(\R^d)$, resulting in functions in $\Sch(\R^d)$. These transforms can be extended to act on $\Sch'(\R^d)$ by duality. \subsection{The Hilbert transform} The Hilbert transform $\HilbertOp$ of $f: \R \to \mathbb{C}$ is given by \[ \Hilbert{f}(x) \coloneqq \frac{\imag}{\pi} \pv \int_{-\infty}^\infty \frac{f(y)}{x - y} \dd y, \quad x \in \R, \] where $\pv$ denotes understanding the integral in the Cauchy principal value sense. The prefactor was chosen so that \[ \hat{\Hilbert{f}}(\omega) = \paren{\sgn \omega} \hat{f}(\omega) \quad\text{and}\quad \HilbertOp \HilbertOp f = f. \] The Hilbert transform can be applied to functions in the Lizorkin space $\Sch_0(\R)$, resulting in functions in $\Sch_0(\R)$; since its frequency response $\sgn(\omega)$ is discontinuous at $\omega = 0$, it does not map the Schwartz space into itself. This transform can be extended to act on $\Sch_0'(\R)$ by duality. \subsection{The Radon transform} \label{subsec:radon-transform} The Radon transform $\RadonOp$ of $f: \R^d \to \R$ and the dual Radon transform $\RadonOp^*$ of $\Phi: \cyl \to \R$ are given by \begin{align*} \Radon{f}(\vec{\gamma}, t) &\coloneqq \int_{\curly{\vec{x}: \vec{\gamma}^\T \vec{x} = t}} f(\vec{x}) \dd s(\vec{x}), \quad (\vec{\gamma},t) \in \cyl\numberthis \label{eq:radon-transform} \\ \DualRadon{\Phi}(\vec{x}) &\coloneqq \int_{\Sph^{d-1}} \Phi(\vec{\gamma}, \vec{\gamma}^\T \vec{x}) \dd\sigma(\vec{\gamma}), \quad \vec{x} \in \R^d, \end{align*} where $s$ denotes the surface measure on the plane $\curly{\vec{x} \st \vec{\gamma}^\T \vec{x} = t}$, and $\sigma$ denotes the surface measure on $\Sph^{d-1}$. We will sometimes write $\vec{z} = (\vec{\gamma}, t)$ as the variable in the Radon domain. We discuss the spaces on which we can apply the Radon transform and its dual in the sequel. We also remark that the Radon transform of a function is always even, i.e., $\Radon{f}(\vec{\gamma}, t) = \Radon{f}(-\vec{\gamma}, -t)$. Another way to view the Radon transform and its dual is to consider, formally, the integrals \begin{align} \Radon{f}(\vec{\gamma}, t) &= \int_{\R^d} f(\vec{x}) \delta_\R(\vec{\gamma}^\T\vec{x} - t) \dd \vec{x}, \quad (\vec{\gamma}, t) \in \cyl \label{eq:formal-radon-transform} \\ \DualRadon{\Phi}(\vec{x}) &= \int_{\cyl} \delta_\R(\vec{\gamma}^\T\vec{x} - t) \Phi(\vec{\gamma}, t) \dd(\sigma \times \lambda)(\vec{\gamma}, t), \quad \vec{x} \in \R^d, \label{eq:formal-dual-radon-transform} \end{align} where $\lambda$ denotes the univariate Lebesgue measure. The fundamental result of the Radon transform is the \emph{Radon inversion formula}, which states that for any $f \in \Sch(\R^d)$ \begin{equation} 2(2\pi)^{d-1} f = \RadonOp^* \Lambda^{d - 1} \RadonOp f, \label{eq:radon-inversion} \end{equation} where the \emph{ramp filter} $\Lambda^d$ of a function $\Phi(\vec{\gamma}, t)$ is given by \begin{equation} \Lambda^d\curly{\Phi}(\vec{\gamma}, t) \coloneqq \begin{cases} \partial_t^d \Phi(\vec{\gamma}, t), & \text{$d$ even} \\[0.5ex] \HilbertOp_t \partial_t^d \Phi(\vec{\gamma}, t), & \text{$d$ odd}, \end{cases} \label{eq:ramp-filter} \end{equation} where $\HilbertOp_t$ is the Hilbert transform (in the variable $t$) and $\partial_t$ is the partial derivative with respect to $t$. It is easier to see that $\Lambda^d$ is indeed a ramp filter by looking at its frequency response with respect to the $t$ variable. We have \[ \hat{\Lambda^d \Phi}(\vec{\gamma}, \omega) = \imag^d \abs{\omega}^d \hat{\Phi}(\vec{\gamma}, \omega). \] Some care has to be taken to understand the Radon transforms of distributions.
Just as the Fourier and Hilbert transforms can be extended to distributions via duality, we do the same with the Radon transform. In particular, the test functions must be chosen carefully and cannot be the space of Schwartz functions. It is easy to verify that if $\varphi \in \Sch(\R^d)$, then $\Radon{\varphi} \in \Sch(\cyl)$. The same is not true of the dual transform. Indeed, if $\psi \in \Sch(\cyl)$, then it may not be true that $\DualRadon{\psi} \in \Sch(\R^d)$. Due to recent developments in ridgelet analysis~\citep{ridgelet-transform-distributions}, specifically regarding the continuity of the Radon transform of Lizorkin test functions, we have the following result. \begin{proposition}[{\citet[][Corollary~6.1]{ridgelet-transform-distributions}}] \label[Proposition]{thm:radon-bijections} The transforms \begin{align*} \RadonOp: \Sch_0(\R^d) \to \Sch_0(\P^d) \\ \RadonOp^*: \Sch_0(\P^d) \to \Sch_0(\R^d) \end{align*} are continuous bijections, where $\Sch_0(\P^d) \subset \Sch_0(\cyl)$ denotes the subspace of even Lizorkin test functions. \end{proposition} \Cref{thm:radon-bijections} allows us to define the Radon transform and dual Radon transform of distributions by duality, choosing our test functions to be Lizorkin test functions: the action of the Radon transform of $f \in \Sch_0'(\R^d)$ on $\psi \in \Sch_0(\P^d)$ is defined to be $\ang{\RadonOp f, \psi} \coloneqq \ang{f, \RadonOp^* \psi}$, and the action of the dual Radon transform of $\Phi \in \Sch_0'(\P^d)$ on $\varphi \in \Sch_0(\R^d)$ is defined to be $\ang{\RadonOp^* \Phi, \varphi} \coloneqq \ang{\Phi, \RadonOp \varphi}$. This means we have the following corollary to \cref{thm:radon-bijections}. \begin{corollary} \label[Corollary]{cor:radon-bijections} The transforms \begin{align*} \RadonOp: \Sch_0'(\R^d) \to \Sch_0'(\P^d) \\ \RadonOp^*: \Sch_0'(\P^d) \to \Sch_0'(\R^d) \end{align*} are continuous bijections. \end{corollary} We also have the following inversion formula for the dual Radon transform~\cite[Theorem~3.7]{integral-geometry-radon-transforms}: for any $\Phi \in \Sch_0(\P^d)$, \begin{equation} 2(2\pi)^{d-1} \Phi = \Lambda^{d-1} \RadonOp \RadonOp^* \Phi. \label{eq:dual-radon-inversion} \end{equation} The inversion formulas in \cref{eq:radon-inversion,eq:dual-radon-inversion} can be rewritten in many ways using the \emph{intertwining relations} of the Radon transform and its dual with the Laplacian operator~\cite[Lemma~2.1]{integral-geometry-radon-transforms}. We have \begin{equation} (-\Delta)^{\frac{d-1}{2}} \RadonOp^* = \RadonOp^* \Lambda^{d - 1} \qquad\text{and}\qquad \RadonOp (-\Delta)^{\frac{d-1}{2}} = \Lambda^{d - 1} \RadonOp. \label{eq:radon-intertwining} \end{equation} As the constant $2(2\pi)^{d-1}$ arises often when working with the Radon transform, we put \begin{equation} c_d \coloneqq \frac{1}{2(2\pi)^{d-1}}. \label{eq:cd} \end{equation} \begin{remark} We warn the reader that, here and in the rest of the paper, we use the pairing $\ang{\dummy, \dummy}$ to generically denote the duality pairing between a space and its continuous dual. We will not use different notation for different pairings to reduce clutter. The exact pairings should be clear from context. \end{remark} With these definitions in hand, we see that the seminorms in \cref{eq:seminorms} studied in this paper are well-defined. Recall from \cref{eq:seminorms} that \[ \norm{f}_{(m)} = \norm{\ROp_m f}_{\M(\cyl)} = c_d \norm{\partial_t^m \ramp^{d-1} \RadonOp f}_{\M(\cyl)}, \] where $f \in \F_m$.
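For intuition, consider the univariate case $d = 1$ (cf.\ \cref{subsec:1D-splines}): $\Sph^0 = \curly{-1, +1}$, the ``hyperplane'' $\curly{x \st \gamma x = t}$ is the single point $\gamma t$, so that $\Radon{f}(\gamma, t) = f(\gamma t)$, the ramp filter $\ramp^0$ is the identity, and $c_1 = 1/2$. A short (formal) computation then gives \[ \norm{f}_{(m)} = \frac{1}{2} \sum_{\gamma \in \curly{-1, +1}} \norm{\partial_t^m f(\gamma\, \dummy)}_{\M(\R)} = \norm{f^{(m)}}_{\M(\R)}, \] the total variation norm (in the sense of measures) of the $m$th derivative, which is precisely the classical regularizer underlying univariate polynomial splines.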
We will later show in \cref{lemma:finite-dim-null-space} that the null space $\N_m$ of $\ROp_m$ is the space of polynomials of degree strictly less than $m$. Thus, to see that the seminorms studied in this paper are well-defined, we can view $f \in \Sch_0'(\R^d)$. From \cref{cor:radon-bijections}, it follows that $\RadonOp f \in \Sch_0'(\P^d) \subset \Sch_0'(\cyl) \subset \Sch'(\cyl)$. Since $\partial_t^m \ramp^{d-1}$ is a Fourier multiplier, it follows from the definition of $\F_m$ that $\partial_t^m \ramp^{d-1} \RadonOp f \in \M(\cyl)$, and so the seminorms are well-defined. \section{The Representer Theorem} \label{sec:rep-thm} In this section we will prove \cref{thm:rep-thm}, our representer theorem. The general strategy will be to reduce the problem in \cref{eq:inverse-problem} to one that is similar to the classical problem of \emph{Radon measure recovery}, which has been studied since as early as the 1930s~\citep{radon-measure-recovery1,radon-measure-recovery2}. Let $\Omega \subset \R^d$ be a bounded domain. The prototypical Radon measure recovery problem studies optimizations of the form \begin{equation} \min_{u \in \M(\Omega)} \: \norm{u}_{\M(\Omega)} \quad\subj\quad \sensing u = \vec{y}, \label{eq:radon-measure-recovery} \end{equation} where $\sensing: \M(\Omega) \to \R^N$ is a continuous linear operator and $\vec{y} \in \R^N$. The first ``representer theorem'' for \cref{eq:radon-measure-recovery} is from~\cite{rep-thm-radon-measure-recovery}. This representer theorem essentially states that there exists a sparse solution to \cref{eq:radon-measure-recovery} of the form \[ \sum_{k = 1}^K v_k \, \delta_{\R^d}(\dummy - \vec{x}_k), \] with $K \leq N$. Refinements to this result have been made over the years, e.g.,~\cite[Theorem~1]{fisher-jerome}, including very modern results, e.g.,~\cite[Theorem~7]{L-splines},~\cite[Section~4.2.3]{rep-thm-convex-reg}, and~\cite[Theorem~4.2]{sparsity-variational-inverse}. For our problem, we reduce \cref{eq:inverse-problem} to one of the following Radon measure recovery problems in \cref{prop:radon-measure-recovery-even,prop:radon-measure-recovery-odd}, depending on whether $m$ is even or odd. Reducing our problem to one of these Radon measure recovery problems requires several steps. The first step is to understand what functions are sparsified by $\ROp_m$. The next step is the construction of a stable right-inverse of $\ROp_m$. The final step is to understand the topological structure of the native space $\F_m$. These steps are outlined in the remainder of this section, before we finally prove \cref{thm:rep-thm} at the end. \begin{proposition}[{Based on~\citet[Theorem~4.2]{sparsity-variational-inverse}}] \label[Proposition]{prop:radon-measure-recovery-even} Assume the following: \begin{enumerate}[label=\arabic*.] \item The function $\datafit: \R^N \to \R$ is strictly convex, coercive, and lower semi-continuous. \item The operator $\sensing: \M(\P^d) \to \R^N$ is continuous, linear, and surjective, where we recall that $\P^d$ is the manifold of hyperplanes in $\R^d$. \end{enumerate} Then, there exists a sparse minimizer to the Radon measure recovery problem \[ \min_{u \in \M(\P^d)} \: \datafit(\sensing u) + \norm{u}_{\M(\P^d)} \] of the form \[ \bar{u} = \sum_{k=1}^K v_k\, \delta_{\P^d}(\dummy - \vec{z}_k) = \sum_{k=1}^K v_k\, \sq{\frac{\delta_\cyl(\dummy - \vec{z}_k) + \delta_\cyl(\dummy + \vec{z}_k)}{2}} \] with $K \leq N$, $v_k \in \R \setminus \curly{0}$, and $\vec{z}_k = (\vec{w}_k, b_k) \in \cyl$, $k = 1, \ldots, K$.
\end{proposition} Although \citet[Theorem~4.2]{sparsity-variational-inverse} considers an open, bounded subset of $\R^d$ rather than $\P^d$, their proof is general enough to apply to any locally compact Hausdorff space and, in particular, to $\P^d$. Analogously, we immediately have the following result for the Radon measure recovery problem posed over $\Mo(\cyl)$. \begin{proposition} \label[Proposition]{prop:radon-measure-recovery-odd} Assume the following: \begin{enumerate}[label=\arabic*.] \item The function $\datafit: \R^N \to \R$ is strictly convex, coercive, and lower semi-continuous. \item The operator $\sensing: \Mo(\cyl) \to \R^N$ is continuous, linear, and surjective. \end{enumerate} Then, there exists a sparse minimizer to the Radon measure recovery problem \[ \min_{u \in \Mo(\cyl)} \: \datafit(\sensing u) + \norm{u}_{\Mo(\cyl)} \] of the form \[ \bar{u} = \sum_{k=1}^K v_k\, \sq{\frac{\delta_\cyl(\dummy - \vec{z}_k) - \delta_\cyl(\dummy + \vec{z}_k)}{2}} \] with $K \leq N$, $v_k \in \R \setminus \curly{0}$, and $\vec{z}_k = (\vec{w}_k, b_k) \in \cyl$, $k = 1, \ldots, K$. \end{proposition} In order to reduce \cref{eq:inverse-problem} to one of the problems in \cref{prop:radon-measure-recovery-even,prop:radon-measure-recovery-odd}, we need to understand which functions are sparsified by $\ROp_m$. As claimed in \cref{rem:nn-atoms-sparsified}, these are exactly the neurons in \cref{eq:ridge-spline}. \begin{lemma} \label[Lemma]{lemma:nn-atoms-sparsified} The atoms of the single-hidden layer neural network as in \cref{eq:ridge-function} are ``sparsified'' by $\ROp_m$ in the sense that \[ \ROp_m r_{(\vec{w}, b)}^{(m)} = \frac{\delta_\cyl(\dummy - \vec{z}) + (-1)^m \delta_\cyl(\dummy + \vec{z})}{2}, \] where $\vec{z} = (\vec{w}, b) \in \cyl$ and $r_{(\vec{w}, b)}^{(m)}(\vec{x}) \coloneqq \rho_m(\vec{w}^\T\vec{x} - b)$. \end{lemma} Before proving \cref{lemma:nn-atoms-sparsified}, we first prove the following intermediary result, which may be of independent interest. \begin{lemma} \label[Lemma]{lemma:radon-integrator} Let $r_{(\vec{\gamma}, t)}^{(m)}(\vec{x}) \coloneqq \rho_m(\vec{\gamma}^\T\vec{x} - t)$. For all $\varphi \in \Sch_0(\R^d)$ and any $m \in \mathbb{N}$ we have \[ \ang{r_{(\vec{\gamma}, t)}^{(m)}, \varphi} = (-1)^m(\partial_t^{-m}) \Radon{\varphi}(\vec{\gamma}, t), \] where $\partial_t^{-m}$ is a stable left-inverse of $\partial_t^m$, i.e., an $m$-fold integrator in the $t$ variable of the Radon domain. In particular, by the intertwining relations of the Radon transform with the Laplacian operator in \cref{eq:radon-intertwining}, this says that for even $m$ \[ \ang{\Delta^{m/2}r_{(\vec{\gamma}, t)}^{(m)}, \varphi} = \Radon{\varphi}(\vec{\gamma}, t), \] for all $\varphi \in \Sch_0(\R^d)$. \end{lemma} \begin{proof} The proof is a direct computation. For all $\varphi \in \Sch_0(\R^d)$ and any $m \in \mathbb{N}$ we have \begin{align*} \ang{r_{(\vec{\gamma}, t)}^{(m)}, \varphi} &= \int_{\R^d} \rho_m(\vec{\gamma}^\T\vec{x} - t) \varphi(\vec{x}) \dd\vec{x} \\ &= \int_{\R^d} (-1)^m(\partial_t^{-m})\delta_\R(\vec{\gamma}^\T\vec{x} - t) \varphi(\vec{x}) \dd\vec{x} \\ &= (-1)^m(\partial_t^{-m}) \int_{\R^d} \delta_\R(\vec{\gamma}^\T\vec{x} - t) \varphi(\vec{x}) \dd\vec{x} \\ &= (-1)^m(\partial_t^{-m}) \Radon{\varphi}(\vec{\gamma}, t). \end{align*} In particular, the stability of $\partial_t^{-m}$ ensures that the third line is well-defined\footnote{One choice of $\partial_t^{-m}$ appears in \citet[pg.
781]{L-splines}, where the (Schwartz) kernel of $\partial_t^{-m}$ is compactly supported and bounded, which justifies changing the order of the operator and the integral.}. The last line follows from \cref{eq:formal-radon-transform}. \end{proof} \begin{proof}[Proof of \cref{lemma:nn-atoms-sparsified}] One may verify that when $m$ is even, $\ROp_m f$ is even, and when $m$ is odd, $\ROp_m f$ is odd. Next, as discussed in \cref{rem:radon-impulse}, if we consider the subspace of even or odd Lizorkin distributions, then the point evaluation functionals (i.e., Dirac impulses) are \[ \frac{\delta_\cyl(\dummy - \vec{z}_k) + \delta_\cyl(\dummy + \vec{z}_k)}{2} \] for the even subspace and \[ \frac{\delta_\cyl(\dummy - \vec{z}_k) - \delta_\cyl(\dummy + \vec{z}_k)}{2} \] for the odd subspace. Thus, it suffices to check that \begin{equation} c_d\ang{\partial_t^m \ramp^{d-1} \RadonOp r_{(\vec{w}, b)}^{(m)}, \psi} = \psi(\vec{w}, b) \label{eq:radon-measure-recovery-verify} \end{equation} for either all even Lizorkin test functions $\psi \in \Sch_0(\cyl)$ if $m$ is even or all odd Lizorkin test functions $\psi \in \Sch_0(\cyl)$ if $m$ is odd. We remark that for the even (respectively odd) subspace, the pairing $\ang{\dummy, \dummy}$ in the above display is of even (respectively odd) Lizorkin test functions and even (respectively odd) Lizorkin distributions. We now verify \cref{eq:radon-measure-recovery-verify}. Suppose that $m$ is even. For all even $\psi \in \Sch_0(\cyl)$ we have \begin{align*} c_d\ang{\partial_t^m \ramp^{d-1} \RadonOp r_{(\vec{w}, b)}^{(m)}, \psi} &= c_d \ang{r_{(\vec{w}, b)}^{(m)}, \RadonOp^*\ramp^{d-1}\curly{(-1)^m \partial_t^m \psi}} \\ &= \sq{(-1)^m \partial_t^{-m} c_d \RadonOp \RadonOp^*\ramp^{d-1}\curly{(-1)^m \partial_t^m \psi}}(\vec{w}, b) \\ &= \sq{(-1)^m \partial_t^{-m} (-1)^m \partial_t^m \psi}(\vec{w}, b) \\ &= \psi(\vec{w}, b), \end{align*} where the first line holds since the formal adjoint of $\partial_t^m$ is $(-1)^m \partial_t^m$, the second line holds via \cref{lemma:radon-integrator}, the third line holds by the dual Radon transform inversion formula \cref{eq:dual-radon-inversion} and the intertwining relations \cref{eq:radon-intertwining}, combined with the fact that since $\psi$ is even and $\psi \in \Sch_0(\cyl)$ we have that $\partial_t^m \psi$ is also even and $\partial_t^m \psi \in \Sch_0(\cyl)$, and the fourth line holds by the left-inverse property of our construction of $\partial_t^{-m}$. The case when $m$ is odd is analogous, with the key fact that if $\psi$ is odd and $\psi \in \Sch_0(\cyl)$, then $\partial_t^m \psi$ is \emph{even} and $\partial_t^m \psi \in \Sch_0(\cyl)$, which justifies the use of the dual Radon transform inversion formula. \end{proof} Since $\ROp_m$ sparsifies functions of the form $r_{(\vec{w}, b)}^{(m)}(\vec{x}) = \rho_m(\vec{w}^\T\vec{x} - b)$, we choose to define the growth restriction previously discussed in \cref{subsec:rep-thm} for the null space and native space of $\ROp_m$ via the algebraic growth rate \begin{equation} n_0 \coloneqq \inf \curly{n \in \mathbb{N} \st r_{(\vec{w}, b)}^{(m)} \in L^{\infty, n}(\R^d)} = m - 1. \label{eq:growth-restriction} \end{equation} In \cref{eq:inverse-problem}, we are optimizing over the native space $\native_m$. Although $\native_m$ is defined by the seminorm $\norm{\ROp_m f}_{\M(\cyl)}$, we show in \cref{thm:banach-space} that, if we equip $\F_m$ with the proper direct-sum topology, it forms a bona fide Banach space.
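As a quick check on the value $n_0 = m - 1$ in \cref{eq:growth-restriction}: since $\norm{\vec{w}}_2 = 1$, for any $(\vec{w}, b) \in \cyl$ we have \[ \abs{r_{(\vec{w}, b)}^{(m)}(\vec{x})} = \frac{\max\curly{0, \vec{w}^\T\vec{x} - b}^{m-1}}{(m-1)!} \leq \frac{\paren{\norm{\vec{x}}_2 + \abs{b}}^{m-1}}{(m-1)!}, \] so $r_{(\vec{w}, b)}^{(m)} \in L^{\infty, m-1}(\R^d)$, while along the ray $\vec{x} = \tau\vec{w}$, $\tau \to \infty$, the neuron grows like $\tau^{m-1}$, so no smaller growth rate suffices; e.g., a ReLU neuron ($m = 2$) grows linearly and $n_0 = 1$.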
In order to prove that $\F_m$ is a Banach space, we require two intermediary results. \begin{lemma} \label[Lemma]{lemma:finite-dim-null-space} The null space $\N_m$ of $\ROp_m$ defined in \cref{eq:null-space} is finite-dimensional. In particular, it is the space of polynomials of degree strictly less than $m$. \end{lemma} \Cref{lemma:finite-dim-null-space} says, in particular, that we can find a \emph{biorthogonal system} for $\N_m$. \begin{definition} \label[Definition]{defn:biorthogonal-system} Consider a finite-dimensional space $\N$ with $N_0 \coloneqq \dim \N$. The pair $(\vec{\phi}, \vec{p}) = \curly{(\phi_n, p_n)}_{n=1}^{N_0}$ is called a \emph{biorthogonal system} for $\mathcal{N}$ if $\curly{p_n}_{n=1}^{N_0}$ is a basis of $\N$ and the vector of ``boundary'' functionals $\vec{\phi} = (\phi_1, \ldots, \phi_{N_0})$ with $\phi_n \in \N'$ (the continuous dual of $\N$) satisfies the biorthogonality condition $\ang{\phi_k, p_n} = \delta[k - n]$, $k, n = 1, \ldots, N_0$, where $\delta[\dummy]$ is the Kronecker impulse. \end{definition} Put $N_0 \coloneqq \dim \N_m$ and let $(\vec{\phi}, \vec{p})$ be a biorthogonal system for $\N_m$. \Cref{defn:biorthogonal-system} says that any $q \in \N_m$ has the \emph{unique representation} \[ q = \sum_{n = 1}^{N_0} \ang{\phi_n, q} p_n. \] We will sometimes write $\vec{\phi}(f)$ to denote the vector $(\ang{\phi_1, f}, \ldots, \ang{\phi_{N_0}, f}) \in \R^{N_0}$. \begin{lemma} \label[Lemma]{lemma:right-inverse} Let $(\vec{\phi}, \vec{p})$ be a biorthogonal system for $\N_m \subset \F_m \subset L^{\infty, m-1}(\mathbb{R}^d)$. Then, there exists a \emph{unique operator} \[ \ROp_{m, \vec{\phi}}^{-1}: \psi \mapsto \ROp_{m, \vec{\phi}}^{-1}\psi = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} g_{m, \vec{\phi}}(\cdot, \vec{z}) \psi(\vec{z}) \dd(\sigma \times \lambda)(\vec{z}), \] where we recall that $\sigma$ is the surface measure on $\Sph^{d-1}$ and $\lambda$ is the univariate Lebesgue measure, such that, for all even $\psi \in \Sch_0(\cyl)$ if $m$ is even and all odd $\psi \in \Sch_0(\cyl)$ if $m$ is odd, the operator $\ROp_{m, \vec{\phi}}^{-1}$ satisfies \begin{equation} \begin{aligned} \ROp_m\ROp_{m, \vec{\phi}}^{-1}\psi &= \psi \qquad\text{(right-inverse property)} \\ \vec{\phi}(\ROp_{m, \vec{\phi}}^{-1}\psi) &= \vec{0} \qquad\text{(boundary conditions)} \end{aligned} \label{eq:stable-right-inverse-props} \end{equation} The kernel of this operator is \[ g_{m, \vec{\phi}}(\vec{x}, \vec{z}) = r_\vec{z}^{(m)}(\vec{x}) - \sum_{n=1}^{N_0} p_n(\vec{x}) q_n(\vec{z}), \] where $r_\vec{z}^{(m)} = r_{(\vec{w}, b)}^{(m)} = \rho_m(\vec{w}^\T(\dummy) - b)$ and $q_n(\vec{z}) \coloneqq \ang{\phi_n, r_\vec{z}^{(m)}}$. If $\ROp_{m, \vec{\phi}}^{-1}$ is bounded, then it admits a continuous extension from $\Me(\cyl)$ or $\Mo(\cyl)$ to $L^{\infty, m-1}(\mathbb{R}^d)$ when $m$ is even or odd, respectively, with \cref{eq:stable-right-inverse-props} holding for all $\psi \in \Me(\cyl)$ or $\psi \in \Mo(\cyl)$. \end{lemma} With these two results, we can now establish the Banach space structure of $\native_m$. \begin{theorem} \label{thm:banach-space} Let $(\vec{\phi}, \vec{p})$ be a biorthogonal system for the null space $\N_m$ of $\ROp_m$ as defined in \cref{eq:null-space} and let $\native_m$ be the native space of $\ROp_m$ as defined in \cref{eq:native-space}. Then, the following hold: \begin{enumerate}[label=\arabic*.]
\item The right-inverse operator $\ROp_{m, \vec{\phi}}^{-1}$ specified by \cref{lemma:right-inverse} isometrically maps $\Me(\cyl)$ (respectively $\Mo(\cyl)$) to $\native_m$ when $m$ is even (respectively odd). Moreover, this map is necessarily bounded. \item Every $f \in \F_m$ admits a \emph{unique} representation \begin{equation} f = \ROp_{m, \vec{\phi}}^{-1} u + q, \label{eq:unique-representation} \end{equation} where $u = \ROp_m f \in \mathcal{M}(\cyl)$\footnote{More specifically, we have that $u \in \Me(\cyl)$ when $m$ is even and $u \in \Mo(\cyl)$ when $m$ is odd.} and $q = \sum_{n=1}^{N_0} \ang{\phi_n, f}p_n \in \N_m$. In particular, this specifies the structural property $\F_m = \F_{m,\vec{\phi}} \oplus \N_m$, where \[ \F_{m,\vec{\phi}} \coloneqq \curly{f \in \F \st \vec{\phi}(f) = \vec{0}}. \] \label{item:unique-rep} \item $\F_m$ is a Banach space when equipped with the norm \[ \norm{f}_{\F_m} \coloneqq \norm{\ROp_m f}_{\M(\cyl)} + \norm{\vec{\phi}(f)}_{2}. \] \label{item:banach-norm} \end{enumerate} \end{theorem} \begin{remark} \Cref{item:unique-rep} in \cref{thm:banach-space} says that every $f \in \F_m$ admits an \emph{integral representation} via \cref{eq:unique-representation}, which can be viewed as an infinite-width (continuum-width) neural network. Since $u$ in \cref{eq:unique-representation} depends on $f$, it follows that \cref{eq:unique-representation} is a kind of \emph{Calder{\'o}n-type reproducing formula}~\citep{calderon-reproducing}. Integral representations have been studied in several recent works~\citep{convex-nn, function-space-relu,dimension-independent-approx-bounds}. We also remark that \cref{eq:unique-representation} shares many similarities with the \emph{dual Ridgelet transform}~\citep{murata-ridgelets,rubin-ridgelets,candes-phd,candes-ridgelets}. \end{remark} The proofs of \cref{lemma:finite-dim-null-space,lemma:right-inverse,thm:banach-space} appear in \cref{app:aux-proofs}. We will now prove \cref{thm:rep-thm}. \subsection{Proof of \cref{thm:rep-thm}} \begin{proof} We first recast the problem in \cref{eq:inverse-problem} as one with interpolation constraints. To do this, we use a technique from~\cite{unifying-representer}. We use the fact that $\datafit$ is a strictly convex function. In particular, for any two solutions $\bar{f}$, $\tilde{f}$ of \cref{eq:inverse-problem}, we must have $\sensing \bar{f} = \sensing \tilde{f}$ (since otherwise, it would contradict the strict convexity of $\datafit$). Hence, there exists $\vec{z} \in \R^N$ such that $\vec{z} = \sensing \bar{f} = \sensing \tilde{f}$. Although $\vec{z} \in \R^N$ is not usually known beforehand, this property provides us with the parametric characterization of the solution set to \cref{eq:inverse-problem} as \begin{equation} S_\vec{z} \coloneqq \argmin_{f \in \F_m} \: \norm{\ROp_m f}_{\M(\cyl)} \quad\subj\quad \sensing f = \vec{z}, \label{eq:interpolation-constraints} \end{equation} for some $\vec{z}\in\R^N$. Hence, it suffices to show that there exists a solution to \cref{eq:interpolation-constraints} of the form in \cref{eq:ridge-spline}. We will now show this for a fixed $\vec{z} \in \R^N$. Consider the $N \times N_0$ matrix \begin{equation} \mat{A} \coloneqq \begin{bmatrix} \sensing p_1 & \cdots & \sensing p_{N_0} \end{bmatrix}, \label{eq:thm-1-system} \end{equation} where $\curly{p_n}_{n=1}^{N_0}$ is a basis for $\N_m$.
Since every $q \in \N_m$ has a \emph{unique} expansion $q = \sum_{n=1}^{N_0} c_n p_n$, we have that the linear system $\mat{A} \vec{c} = \sensing q$, where $\vec{c} = (c_1, \ldots, c_{N_0})$, has a unique solution. From \cref{item:well-posed-null-space} in \cref{thm:rep-thm}, we see that the system in \cref{eq:thm-1-system} satisfies $N \geq N_0$. Such an overdetermined system has a unique solution if and only if $\mat{A}$ has full column rank, i.e., if and only if $\mat{A}^\T\mat{A}$ is invertible, in which case the solution is given by the least-squares solution \[ \vec{c} = (\mat{A}^\T \mat{A})^{-1} \mat{A}^\T (\sensing q). \] Thus, we know $\mat{A}^\T \mat{A}$ must be invertible. Invertibility of $\mat{A}^\T \mat{A}$ says, in particular, that $\spn\curly{\vec{a}_n}_{n=1}^N = \R^{N_0}$, where $\vec{a}_n^\T$ is the $n$th row of $\mat{A}$. Therefore, there exists a subset of $N_0$ rows of $\mat{A}$ that spans $\R^{N_0}$. Without loss of generality, suppose this subset is $\curly{\vec{a}_n}_{n=1}^{N_0}$. Then, the submatrix $\mat{A}_0$ of $\mat{A}$ defined by \[ \mat{A}_0 \coloneqq \begin{bmatrix} \vec{a}_1^\T \\ \vdots \\ \vec{a}_{N_0}^\T \end{bmatrix} \] is invertible. Consider the components $(\nu_1, \ldots, \nu_N)$ of $\sensing$ via $\sensing: f \mapsto (\ang{\nu_1, f}, \ldots, \ang{\nu_N, f})$. We can write $\vec{a}_n$ as the vector $(\ang{\nu_n, p_1}, \ldots, \ang{\nu_n, p_{N_0}})$, $n = 1, \ldots, N_0$. Hence the reduced set of measurements $(\nu_1, \ldots, \nu_{N_0})$ is linearly independent with respect to $\N_m$. Let $\sensing_0$ denote this reduced set of measurements and let $\sensing_1$ denote the remaining set of measurements, i.e., $\sensing = (\sensing_0, \sensing_1)$. Next, notice that $\sensing_0 p_n$ is the $n$th column of $\mat{A}_0$. Let $\vec{e}_n$ denote the $n$th canonical basis vector. Then, we have the equality $\sensing_0 p_n = \mat{A}_0 \vec{e}_n$. Using the invertibility of $\mat{A}_0$, we have \[ \mat{A}_0^{-1} (\sensing_0 p_n) = \vec{e}_n. \] If we put $\vec{\phi}_0 \coloneqq \mat{A}_0^{-1} \circ \sensing_0$, the above display is exactly the biorthogonality property and hence $(\vec{\phi}_0, \vec{p})$ form a biorthogonal system for $\N_m$. One can verify that \begin{equation} \norm{\ROp_m f}_{\M(\cyl)} = \begin{cases} \norm{\ROp_m f}_{\Me(\cyl)}, & \text{$m$ even} \\ \norm{\ROp_m f}_{\Mo(\cyl)}, & \text{$m$ odd}. \end{cases} \label{eq:odd-even-norm} \end{equation} Suppose that $m$ is even. By \cref{thm:banach-space}, every $f \in \F_m$ admits a unique representation $f = \ROp_{m, \vec{\phi}_0}^{-1} u + q_0$, where $u = \ROp_m f \in \Me(\cyl)$ and $q_0 \in \N_m$. We can use the above display to rewrite the problem in \cref{eq:interpolation-constraints} as \begin{equation} \begin{aligned} \min_{u \in \Me(\cyl)} \quad & \norm{u}_{\Me(\cyl)} \\ \subj \quad & \sensing f = \vec{z}, \\ & f = \ROp_{m, \vec{\phi}_0}^{-1} u + q_0. \end{aligned} \label{eq:problem-over-radon-measures} \end{equation} Write $\vec{z}_0 = (z_1, \ldots, z_{N_0})$ and $\vec{z}_1 = (z_{N_0 + 1}, \ldots, z_N)$. Then, the constraint $\sensing f = \vec{z}$ can be written as the two constraints $\sensing_0 f = \vec{z}_0$ and $\sensing_1 f = \vec{z}_1$. By the boundary conditions in \cref{eq:stable-right-inverse-props} we have that $\vec{\phi}_0(f) = \vec{\phi}_0(\ROp_{m, \vec{\phi}_0}^{-1} u + q_0) = \vec{\phi}_0(\ROp_{m, \vec{\phi}_0}^{-1} u) + \vec{\phi}_0(q_0) = \vec{\phi}_0(q_0)$. Thus, by definition of $\vec{\phi}_0$, we have that $\vec{z}_0 = \sensing_0 f = \sensing_0 q_0$.
Hence, \cref{eq:problem-over-radon-measures} can be rewritten as \[ \begin{aligned} \min_{u \in \Me(\cyl)} \quad & \norm{u}_{\Me(\cyl)} \\ \subj \quad & \sensing_1 f = \vec{z}_1, \\ & f = \ROp_{m, \vec{\phi}_0}^{-1} u + q_0 \end{aligned} \quad=\quad \begin{aligned} \min_{u \in \Me(\cyl)} \quad & \norm{u}_{\Me(\cyl)} \\ \subj \quad & \sensing_1 (\ROp_{m, \vec{\phi}_0}^{-1} u) = \vec{z}_1 - \sensing_1 q_0, \\ & f = \ROp_{m, \vec{\phi}_0}^{-1} u + q_0. \end{aligned} \] This says that there exists a solution to the above display of the form $\bar{f} = \ROp_{m, \vec{\phi}_0}^{-1} \bar{u} + q_0$, where \[ \bar{u} \in \argmin_{u \in \Me(\cyl)} \norm{u}_{\Me(\cyl)} \quad\subj\quad \sensing_1 (\ROp_{m, \vec{\phi}_0}^{-1} u) = \vec{z}_1 - \sensing_1 q_0. \] By \cref{prop:radon-measure-recovery-even}, there exists a sparse minimizer to the above display with $N - N_0$ terms. The result then follows by invoking \cref{lemma:right-inverse} and \cref{lemma:nn-atoms-sparsified}, which says $\bar{f} = \ROp_{m, \vec{\phi}_0}^{-1} \bar{u} + q_0$ takes the form in \cref{eq:ridge-spline}. An analogous argument holds when $m$ is odd by invoking \cref{prop:radon-measure-recovery-odd} instead of \cref{prop:radon-measure-recovery-even}. \end{proof} \section{Ridge Splines and Polynomial Splines} \label{sec:splines} In this section we establish connections between ridge splines and classical polynomial splines in both the univariate ($d = 1$) and multivariate ($d > 1$) cases. \subsection{Univariate ridge splines are univariate polynomial splines} \label{subsec:1D-splines} Univariate ridge splines and classical univariate splines are in fact the same object. To see this, it suffices to verify that \[ \norm{\ROp_m f}_{\M(\cyl)} = c_d \norm{\partial_t^m \ramp^{d-1} \RadonOp f}_{\M(\cyl)} = \norm{\D^mf}_{\M(\R)}, \] when $d = 1$ and then simply invoke the result of \citet{L-splines}, where $\D^m$ is the univariate $m$th derivative operator. Indeed, when $d = 1$, we have that $c_d = 1/2$ and from \cref{eq:formal-radon-transform} that the univariate Radon transform is simply \[ \Radon{f}(\gamma, t) = \int_\R f(x) \delta_\R(\gamma x - t) \dd x = \int_\R \frac{f(x)}{\abs{\gamma}} \delta_\R\paren{x - \frac{t}{\gamma}} \dd x = \int_\R f(x) \delta_\R\paren{x - \frac{t}{\gamma}} \dd x = f\paren{\frac{t}{\gamma}}, \] where the second equality holds since the Dirac impulse is homogeneous of degree $-1$ and the third equality holds since $\gamma \in \Sph^0 = \curly{-1, +1}$. Thus, \begin{align*} \eval{c_d\norm{\partial_t^m \ramp^{d-1} \RadonOp f}_{\M(\cyl)}}_{d=1} &= \frac{1}{2}\norm{\partial_t^m \RadonOp f}_{\M(\curly{-1, +1} \times \R)} \\ &= \frac{1}{2}\sum_{\gamma \in \curly{-1, +1}} \norm{\D^m f\paren{\frac{\dummy}{\gamma}}}_{\M(\R)} \\ &= \norm{\D^mf}_{\M(\R)}, \end{align*} where the last equality holds since $f(\dummy / \gamma)$ is either $f$ or its reflection, both of which will have the same $\norm{\D^m\curly{\dummy}}_{\M(\R)}$ value. Thus, by the main result from the framework of $\Ell$-splines~\citep{L-splines}, we see that univariate polynomial ridge splines of order $m$ are exactly the same as classical univariate polynomial splines of order $m$. This connection between regularized univariate single-hidden layer neural networks and classical notions of univariate splines being fit to data has been recently explored in~\cite{relu-linear-spline,min-norm-nn-splines}.
This says, by \cref{prop:equiv-opts}, that training a wide enough univariate neural network with either an appropriate path-norm regularizer or an appropriate weight decay regularizer on data results in an optimal polynomial spline fit of the data. Moreover, these splines are in fact the well-known \emph{locally adaptive regression splines} of~\cite{locally-adaptive-regression-splines}. \subsection{Ridge splines correspond to univariate splines in the Radon domain} \label{subsec:spline-radon-domain} Another way to view a ridge spline is as a continuum of univariate polynomial splines in the Radon domain, where the continuum is indexed by directions $\vec{\gamma} \in \Sph^{d-1}$. Suppose $\sensing$ corresponds to the ideal sampling setting where the sampling locations are $\curly{\vec{x}_n}_{n=1}^N \subset \R^d$. Then, using the same technique we did in the proof of \cref{thm:rep-thm}, we can recast the continuous-domain inverse problem in \cref{eq:inverse-problem} as one with interpolation constraints: \begin{equation} \min_{f \in \F_m} \: \norm{\ROp_m f}_{\M(\cyl)} \quad\subj\quad f(\vec{x}_n) = z_n, \: n = 1, \ldots, N, \label{eq:inverse-problem-interp-constraints} \end{equation} for some $\vec{z} \in \R^N$. By \cref{eq:radon-inversion}, the Radon inversion formula, we can always write $f = c_d\RadonOp^* \Lambda^{d - 1} \RadonOp f$ for any $f \in \F_m$, where the operators are understood in the distributional sense via \cref{cor:radon-bijections}. Thus, we see that the above optimization can be rewritten as \[ \min_{f \in \F_m} \: c_d\norm{\partial_t^m \ramp^{d-1} \RadonOp f}_{\M(\cyl)} \quad\subj\quad (c_d\RadonOp^* \Lambda^{d - 1} \RadonOp f)(\vec{x}_n) = z_n, \: n = 1, \ldots, N. \] If we put $\Phi \coloneqq c_d \Lambda^{d-1}\RadonOp f$, then the above optimization is \begin{equation} \min_{\Phi \in \mathfrak{F}_m} \: \norm{\partial_t^m \Phi}_{\M(\cyl)} \quad\subj\quad \DualRadon{\Phi}(\vec{x}_n) = \int_{\Sph^{d-1}} \Phi(\vec{\gamma}, \vec{\gamma}^\T\vec{x}_n) \dd \sigma(\vec{\gamma}) = z_n, \: n = 1, \ldots, N, \label{eq:radon-domain-opt} \end{equation} where $\mathfrak{F}_m$ is the image of $c_d \ramp^{d-1} \RadonOp$ applied to $\F_m$. This essentially says that for a fixed direction $\vec{\gamma} \in \Sph^{d-1}$, the function $\bar{\Phi}(\vec{\gamma}, \dummy): \R \to \R$ is an $m$th-order polynomial spline. This follows by considering an optimization for each $\vec{\gamma} \in \Sph^{d-1}$: \begin{equation} \min_{\Phi(\vec{\gamma}, \dummy)} \: \norm{\partial_t^m \Phi(\vec{\gamma}, \dummy)}_{\M(\R)} \quad\subj\quad \Phi(\vec{\gamma}, \vec{\gamma}^\T\vec{x}_n) = z_n(\vec{\gamma}), \: n = 1, \ldots, N, \label{eq:1D-spline-radon-domain} \end{equation} where \[ \int_{\Sph^{d-1}} z_n(\vec{\gamma}) \dd \sigma(\vec{\gamma}) = z_n, \: n = 1, \ldots, N \] and noting that by finding a solution for each fixed $\vec{\gamma} \in \Sph^{d-1}$, we can find a $\bar{\Phi}$ that attains a lower bound for \cref{eq:radon-domain-opt}\footnote{Since the integral of a $\min$ is less than or equal to the $\min$ of an integral.}, but this $\bar{\Phi}$ is clearly feasible for \cref{eq:radon-domain-opt} and is hence a solution to \cref{eq:radon-domain-opt}. It then follows that $\bar{\Phi}(\vec{\gamma}, \dummy)$ is a polynomial spline of order $m$. In particular, due to the structure of the interpolation constraints in \cref{eq:1D-spline-radon-domain}, we see that $\bar{\Phi}(\vec{\gamma}, \dummy)$ interpolates data with sampling locations at $\curly{\vec{\gamma}^\T\vec{x}_n}_{n=1}^N \subset \R$.
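The geometry of these projected sampling locations can be made concrete with a few lines of code. The following Python sketch (with synthetic, purely illustrative data) draws a random direction $\vec{\gamma} \in \Sph^{d-1}$, computes the projections $\vec{\gamma}^\T\vec{x}_n$ at which the univariate spline $\bar{\Phi}(\vec{\gamma}, \dummy)$ interpolates, and checks numerically that they are bounded by $\max_n \norm{\vec{x}_n}_2$, which is the Cauchy--Schwarz mechanism behind \cref{lemma:bias-bound} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, N = 3, 10
X = rng.normal(size=(N, d))        # synthetic sampling locations x_n

gamma = rng.normal(size=d)
gamma /= np.linalg.norm(gamma)     # a direction on the sphere S^{d-1}

# Projected sampling locations gamma^T x_n: the univariate spline
# Phi(gamma, .) interpolates at these points, so its knots (and hence
# the network biases) cannot fall outside their range.
t = np.sort(X @ gamma)
print("projected locations:", np.round(t, 2))
print("max |gamma^T x_n| =", np.abs(t).max())
print("max ||x_n||_2     =", np.linalg.norm(X, axis=1).max())
\end{verbatim}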
This viewpoint allows us to deduce additional structural information about the sparse (i.e., single-hidden layer neural network) solutions to \cref{eq:inverse-problem-interp-constraints}. In particular, it is classically known\footnote{Since we can always explicitly construct spline solutions with the spline knots bounded by the data.} that the univariate spline $\bar{\Phi}(\vec{\gamma}, \dummy)$ has some set of adaptive knot locations $\curly{t_\ell(\vec{\gamma})}_{\ell=1}^{K_\vec{\gamma}} \subset \R$ with $K_\vec{\gamma} \leq N - m$ and there are no knots outside the sampling locations\footnote{Notice that the number of knots and the knot locations depend on the direction $\vec{\gamma} \in \Sph^{d-1}$.}, i.e., \[ \abs{t_\ell(\vec{\gamma})} \leq \max_{n=1, \ldots, N} \abs{\vec{\gamma}^\T\vec{x}_n}, \quad \ell = 1, \ldots, K_\vec{\gamma}. \] It is then clear that for $\bar{\Phi}$ to be a sparse minimizer of \cref{eq:inverse-problem}, it must satisfy \cref{defn:ridge-spline}. This implies that the \emph{biases} in a ridge spline solution to \cref{eq:inverse-problem} exactly correspond to these knot locations. Thus, we can see that for a ridge spline solution to \cref{eq:inverse-problem} as in \cref{eq:ridge-spline} with $K$ neurons, we have an additional bound on the bias terms, which we summarize in the following lemma. \begin{lemma} \label[Lemma]{lemma:bias-bound} In the ideal sampling scenario, the biases in the sparse solution \cref{eq:ridge-spline} of the variational problem in \cref{eq:inverse-problem} satisfy \[ \abs{b_k} \leq \max_{n=1, \ldots, N} \norm{\vec{x}_n}_2, \] for all $k = 1, \ldots, K$. \end{lemma} \begin{proof} The proof follows from the discussion above. In particular, \[ \abs{b_k} \leq \sup_{\vec{\gamma} \in \Sph^{d-1}} \max_{\ell=1, \ldots, K_\vec{\gamma}} \abs{t_\ell(\vec{\gamma})} \leq \sup_{\vec{\gamma} \in \Sph^{d-1}} \max_{n=1, \ldots, N} \abs{\vec{\gamma}^\T\vec{x}_n} \leq \max_{n=1, \ldots, N} \norm{\vec{x}_n}_2, \] for all $k = 1, \ldots, K$. \end{proof} \section{Applications to Neural Networks} \label{sec:nn-training} In this section we will apply \cref{thm:rep-thm} to neural network training, regularization, and generalization. \subsection{Finite-dimensional neural network training problems} In this section we will prove \cref{prop:equiv-opts}. \begin{lemma} \label[Lemma]{lemma:nn-norm} Consider the single-hidden layer neural network \[ f_\vec{\theta}(\vec{x}) \coloneqq \sum_{k=1}^K v_k\, \rho_m(\vec{w}_k^\T \vec{x} - b_k) + c(\vec{x}), \] where $\vec{\theta} = (\vec{w}_1, \ldots, \vec{w}_K, v_1, \ldots, v_K, b_1, \ldots, b_K, c)$ contains the neural network parameters such that $v_k \in \R$, $\vec{w}_k \in \R^d$, and $b_k \in \R$ for $k = 1, \ldots, K$, and where $c$ is a polynomial of degree strictly less than $m$. Also assume without loss of generality that the weight-bias pairs $(\vec{w}_k, b_k)$ are unique\footnote{In the sense that $(\vec{w}_k, b_k) \neq (\vec{w}_n, b_n)$ for $k \neq n$.}. Then, \[ \norm{\ROp_m f_\vec{\theta}}_{\M(\cyl)} = \sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m-1}. \] \end{lemma} \begin{proof} This proof is a direct calculation.
Write \begin{align*} \ROp_m f_{\vec{\theta}} &= \sum_{k=1}^K v_k \ROp_m \rho_m(\vec{w}_k^\T(\dummy) - b_k) \\ &= \sum_{k=1}^K v_k \norm{\vec{w}_k}_2^{m-1} \ROp_m \rho_m(\tilde{\vec{w}}_k^\T(\dummy) - \tilde{b}_k) \\ &= \sum_{k=1}^K v_k \norm{\vec{w}_k}_2^{m-1} \sq{\frac{\delta_\cyl(\dummy - (\tilde{\vec{w}}_k, \tilde{b}_k)) + (-1)^m \delta_\cyl(\dummy + (\tilde{\vec{w}}_k, \tilde{b}_k))}{2}}, \end{align*} where the second line follows from the substitution $\tilde{\vec{w}}_k \coloneqq \vec{w}_k / \norm{\vec{w}_k}_2 \in \Sph^{d-1}$ and $\tilde{b}_k \coloneqq b_k / \norm{\vec{w}_k}_2 \in \R$ combined with the homogeneity of degree $m - 1$ of $\rho_m$ and the third line follows from \cref{lemma:nn-atoms-sparsified}. Taking the $\M$-norm proves the lemma. \end{proof} \subsubsection{Proof of \cref{prop:equiv-opts}} \begin{proof} Recasting the problem in \cref{eq:inverse-problem} as \cref{eq:nn-problem} follows from \cref{thm:rep-thm}. Equivalence of the problem in \cref{eq:nn-problem} and \cref{eq:nn-training-with-pathnorm} follows from \cref{lemma:nn-norm}. Thus, we just need to show that the solutions to the problem in \cref{eq:nn-training-with-weight-decay} are also solutions to the problem in \cref{eq:nn-training-with-pathnorm}. To see this, let $\vec{\theta}$ be a solution to \cref{eq:nn-training-with-weight-decay} with network weights $\{(v_k,\vec{w}_k)\}_{k=1}^K$. Consider the regularizer from \cref{eq:nn-training-with-weight-decay}: \[ \frac{1}{2}\sum_{k=1}^K \paren{\abs{v_k}^2 + \norm{\vec{w}_k}_2^{2m-2}}. \] Since $\rho_m$ is homogeneous of degree $m-1$, the weights may be rescaled so that $|v_k|=\norm{\vec{w}_k}_2^{m-1}$, $k=1,\dots,K$, without altering the function of the network and its fit to the data. Note that each term of the regularizer is a sum of squares $\abs{v_k}^2 + \big(\norm{\vec{w}_k}_2^{m-1}\big)^2$; since the rescaling holds the product $\abs{v_k}\norm{\vec{w}_k}_2^{m-1}$ fixed, the AM--GM inequality says each term is minimized when $|v_k|=\norm{\vec{w}_k}_2^{m-1}$. Thus, at the minimizer we have \[ \frac{1}{2}\sum_{k=1}^K \paren{\abs{v_k}^2 + \norm{\vec{w}_k}_2^{2m-2}} = \sum_{k=1}^K \abs{v_k} \norm{\vec{w}_k}_2^{m-1}, \] which is exactly the regularizer of \cref{eq:nn-training-with-pathnorm}. \end{proof} \subsection{Generalization bounds for binary classification} \label{sec:generalization} In this section we will prove \cref{thm:rad}. \subsubsection{Proof of \cref{thm:rad}} \begin{proof} Using the rescaling technique discussed in \cref{rem:rescale}, without loss of generality, we may assume that $\norm{\vec{w}_k}_2 = 1$ (since we can absorb the norm of $\vec{w}_k$ into the magnitude of $v_k$). In this case, \[ \norm{\ROp_m f_\vec{\theta}}_{\M(\cyl)} = \sum_{k=1}^K |v_k| \leq B. \] To begin, we bound the \emph{empirical Rademacher complexity} of $\F_\Theta$. The empirical Rademacher complexity, denoted by $\hat{\Rad}(\F_\Theta)$, is computed by taking the conditional expectation, conditioning on $\curly{\vec{x}_n}_{n=1}^N$ in place of the total expectation in \cref{eq:rad}. In other words, the only random variables are $\{\sigma_n\}_{n=1}^N$. The Rademacher complexity is then \[ \Rad(\F_\Theta)= \E\left[\hat{\Rad}(\F_\Theta)\right]. \] We will first consider the empirical Rademacher complexity of a single neuron, i.e., functions of the form $\vec{x}\mapsto \rho_m(\vec{w}^\T\vec{x}-b)$, with $\norm{\vec{w}}_2=1$ and $|b|\leq C/2$. Write $\E_\vec{\sigma}\sq{\,\dummy\,}$ for $\E\sq{\:\dummy \given \curly{\vec{x}_n}_{n=1}^N}$.
The empirical Rademacher complexity of a single neuron is defined to be \[ \hat{\Rad}\paren{\rho_m(\vec{w}^\T(\dummy)-b)} \coloneqq 2\, \E_\vec{\sigma}\left[\sup_{\substack{\vec{w}:\norm{\vec{w}}_2 = 1 \\ b:\abs{b} \leq C/2}} \frac{1}{N} \sum_{n=1}^N\sigma_n \rho_m(\vec{w}^\T\vec{x}_n-b) \right]. \] First notice that when $m$ is odd \begin{equation} \rho_m(\vec{w}^\T\vec{x} - b) = \frac{(\vec{w}^\T\vec{x} - b)^{m - 1} + \abs{\vec{w}^\T\vec{x} - b}^{m - 2}(\vec{w}^\T\vec{x} - b)}{2(m-1)!}, \label{eq:decomp-odd} \end{equation} and when $m$ is even \begin{equation} \rho_m(\vec{w}^\T\vec{x} - b) = \frac{(\vec{w}^\T\vec{x} - b)^{m - 1} + \abs{\vec{w}^\T\vec{x} - b}^{m - 1}}{2(m - 1)!}. \label{eq:decomp-even} \end{equation} It is well-known that for two function spaces $\F$ and $\mathcal{G}$, the empirical Rademacher complexity satisfies $\hat{\Rad}(\F \oplus \mathcal{G}) = \hat{\Rad}(\F) + \hat{\Rad}(\mathcal{G})$, where $\oplus$ is the direct-sum. With this property, we see from \cref{eq:decomp-odd,eq:decomp-even} that the empirical Rademacher complexity of a single neuron is \begin{align*} &\hat{\Rad}\paren{\rho_m(\vec{w}^\T(\dummy)-b)} \\ &\qquad = \frac{1}{2(m - 1)!} \paren*[\Bigg]{\underbrace{\hat{\Rad}\paren{(\vec{w}^\T(\dummy)-b)^{m - 1}}}_{(*)} + \underbrace{\left.\begin{cases} \hat{\Rad}\paren{\abs{\vec{w}^\T(\dummy)-b}^{m - 2}(\vec{w}^\T(\dummy)-b)}, & \text{$m$ is odd} \\ \hat{\Rad}\paren{\abs{\vec{w}^\T(\dummy)-b}^{m - 1}}, & \text{$m$ is even} \end{cases}\right\}}_{(\mathsection)}}. \end{align*} Since the functions in $(*)$ and $(\mathsection)$ are the same up to a sign, it follows that the Rademacher complexities are the same due to the symmetry of the Rademacher random variables. Thus, \[ \hat{\Rad}\paren{\rho_m(\vec{w}^\T(\dummy)-b)} = \frac{\hat{\Rad}\paren{(\vec{w}^\T(\dummy)-b)^{m-1}}}{(m - 1)!} = \frac{2}{N(m-1)!} \E_\vec{\sigma}\left[\sup_{\substack{\vec{w}:\norm{\vec{w}}_2 = 1 \\ b:\abs{b} \leq C/2}} \sum_{n=1}^N\sigma_n (\vec{w}^\T\vec{x}_n-b)^{m-1} \right]. \] Next, by the binomial theorem \begin{align*} \hat{\Rad}\paren{\rho_m(\vec{w}^\T(\dummy)-b)}&\leq \frac{2}{N(m-1)!}\sum_{k=0}^{m-1} \binom{m - 1}{k} \E_\vec{\sigma}\left[\sup_{\substack{\vec{w}:\norm{\vec{w}}_2 = 1 \\ b:\abs{b} \leq C/2}} \sum_{n=1}^N\sigma_n (\vec{w}^\T\vec{x}_n)^k (-b)^{m-1-k}\right] \\ &\leq \frac{2}{N(m-1)!}\sum_{k=0}^{m-1} \binom{m - 1}{k} \paren{\frac{C}{2}}^{m - 1 - k} \E_\vec{\sigma}\left[\sup_{\vec{w}:\norm{\vec{w}}_2 = 1} \sum_{n=1}^N\sigma_n (\vec{w}^\T\vec{x}_n)^k \right] \\ &= \frac{2}{N(m-1)!}\sum_{k=0}^{m-1} \binom{m - 1}{k} \paren{\frac{C}{2}}^{m - 1 - k} \E_\vec{\sigma}\left[\sup_{\vec{w}:\norm{\vec{w}}_2 = 1} \paren{\sum_{n=1}^N\sigma_n \vec{x}_n^{\otimes k}}^\T \vec{w}^{\otimes k} \right] \\ &\leq \frac{2}{N(m-1)!}\sum_{k=0}^{m-1} \binom{m - 1}{k} \paren{\frac{C}{2}}^{m - 1 - k} \E_\vec{\sigma}\left[ \norm{\sum_{n=1}^N\sigma_n \vec{x}_n^{\otimes k}}_2 \right], \end{align*} where $(\dummy)^{\otimes k}$ denotes the $k$th order Kronecker product. By Jensen's inequality we have \[ \E_\vec{\sigma} \left[\norm{\sum_{n=1}^N\sigma_n \vec{x}_n^{\otimes k}}_2 \right] \leq \E_\vec{\sigma} \left[\norm{\sum_{n=1}^N\sigma_n \vec{x}_n^{\otimes k}}_2^2 \right]^{1/2} = \paren{\sum_{n=1}^N \norm{\vec{x}_n^{\otimes k}}_2^2}^{1/2} \leq \sqrt{N} \paren{\frac{C}{2}}^{k}, \] and so \[ \hat{\Rad}\paren{\rho_m(\vec{w}^\T(\dummy)-b)} \leq \frac{2C^{m - 1}}{\sqrt{N} (m - 1)!}. 
\] Therefore, the empirical Rademacher complexity of $\F_\Theta$ is bounded as follows \[ \hat{\Rad}(\F_\Theta) = \sum_{k=1}^K \abs{v_k} \hat{\Rad}\paren{\rho_m(\vec{w}_k^\T(\dummy)-b_k)}+ \hat{\Rad}(c) \leq \frac{2BC^{m-1}}{\sqrt{N}(m - 1)!} + \hat{\Rad}(c). \] Taking the expectation of both sides proves the theorem. \end{proof} \section{Conclusion} \label{sec:conclusion} In this paper we have developed a variational framework in which we propose and study a family of continuous-domain linear inverse problems in order to understand what happens on the function space level when training a single-hidden layer neural network on data. We have exploited the connection between ridge functions and the Radon transform to show that training a single-hidden layer neural network on data with an appropriate regularizer results in a function that is optimal with respect to a total variation-like seminorm in the Radon domain. We also show that this seminorm directly controls the generalizability of these neural networks. Our framework encompasses ReLU networks and the appropriate regularizers correspond to the well-known weight decay and path-norm regularizers. Moreover, the variational problems we study are similar to those that are studied in variational spline theory and so we also develop the notion of a ridge spline and make connections between single-hidden layer neural networks and classical polynomial splines. There are a number of followup research questions that may be asked. \subsection{Computational issues} Empirical and theoretical work from the machine learning community has shown that simply running (stochastic) gradient descent on a neural network seems to find global minima~\citep{rethink-generalization, regularization-matters, gd-provably, gd-finds-global-min}, though full theoretical justifications of why these algorithms work currently do not exist. Thus, designing neural network training algorithms that provably find global minimizers remains an open question. Such algorithms could then be used to find the sparse solutions to the continuous-domain inverse problems studied in this paper. \subsection{Deep networks} Another important followup question revolves around deep, multilayer networks. Can a variational framework be used to understand what happens when a deep network is trained on data? Answering this question would require posing a continuous-domain inverse problem and deriving a representer theorem showing that deep networks are solutions. We believe answering this question will be challenging, due to the compositions of ridge functions that arise in deep networks. \acks{This work is partially supported by AFOSR/AFRL grant FA9550-18-1-0166, the NSF Research Traineeship Program under grant 1545481, and the NSF Graduate Research Fellowship Program under grant DGE-1747503. The authors thank Greg Ongie for helpful feedback and discussions related to the initial draft of this paper. The authors also thank the anonymous reviewers for their constructive feedback.}
\section{Introduction} Cosmic acceleration \cite{Riess:1998cb,Perlmutter:1998np} can be caused by new fluids, new theories of gravity, or some admixture of both \cite{Uzan:2006mf}. This uncertainty places a premium on descriptions of the so-called ``dark physics'' which remain useful across different models and in spite of varying assumptions. In the case of new fluids (dark energy), the literature chooses to speak in terms of the equation of state $w$ and its derivative \cite{Albrecht:2006um}. In the case of new gravitational physics, the model-independent {\it lingua franca} is the relationship between the Newtonian ($\psi$) and longitudinal ($\phi$) gravitational potentials. The potentials, implicitly defined through the perturbed Robertson-Walker metric \begin{equation} ds^2=a^2[-(1+2\psi)d\tau^2+(1-2\phi)d\vec{x}^2], \end{equation} are most familiar for their roles in Newton's equation, $\ddot{\vec x}=-\vec{\nabla}\psi$, and the Poisson equation, $\nabla^2\phi=4\pi G a^2\delta\rho$ under general relativity (GR). The gravitational potentials are equal in the presence of non-relativistic stress-energy under GR. Alternate theories of gravity make no such guarantee. Scalar-tensor \cite{Carroll:2004st,Schimd:2004nq} and $f(R)$ theories \cite{Capozziello:2003tk,Acquaviva:2004fv,Zhang:2005vt}, braneworld scenarios such as Dvali-Gabadadze-Porrati gravity \cite{Dvali:2000hr,Lue:2005ya,Song:2006jk}, and massive gravity \cite{Dubovsky:2004sg,Bebronne:2007qh} all predict a systematic difference or ``slip", so that $\phi\ne\psi$ in the presence of non-relativistic stress-energy. Efforts to develop a parametrized-post-Friedmannian (PPF) framework to phenomenologically describe this behavior are just as prolific: Refs.~ \cite{Bertschinger:2006aw,Caldwell:2007cw,Zhang:2007nk,Hu:2007pj,Amendola:2007rr,Jain:2007yk,Zhang:2008ba,Daniel:2008et,Bertschinger:2008zb,Hu:2008zd} all offer parametrizations quantifying the departure from $\phi=\psi$ due to new gravitational effects. We choose to work with the parametrization proposed in Ref.~\cite{Caldwell:2007cw}: \begin{eqnarray} \psi&=&[1+\varpi(z)]\phi\label{parametrization}\\ \varpi(z) &=& \varpi_0 (1+z)^{-3}.\label{varpieqn} \end{eqnarray} We assume the existence of a theory of gravitation that leads to an expansion history that is indistinguishable from that produced by a spatially-flat, $\Lambda$CDM scenario with density parameters $\Omega_m$ and $\Omega_\Lambda = 1-\Omega_m$. This assumption is not essential, but it allows our analysis to focus solely on PPF effects. Our naive expectation is that $\varpi \simeq \Omega_\Lambda/\Omega_m$ by today. [Note that we have changed our notation, having previously defined $\varpi(z) = \varpi_0 (\Omega_\Lambda/\Omega_m) (1+z)^{-3}$.] The departure from GR kicks in only when the cosmic expansion begins to accelerate. Daniel {\it et al.} \cite{Daniel:2008et} (hereafter DCCM) discuss the compatibility with other parametrizations (especially that of Ref.~\cite{Bertschinger:2008zb}) and compare the implications of $\varpi_0 \neq 0$ to data from the Wilkinson Microwave Anisotropy Probe (WMAP) \cite{wmap3data}, the Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) \cite{Fu:2007qq}, and various galaxy surveys \cite{Gaztanaga:2004sk,Giannantonio:2006du,Cabre:2006qm}. We expand upon their analysis in this work by performing a full likelihood analysis of the cosmological parameter space. 
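For concreteness, the time dependence of the slip in Eq.~(\ref{varpieqn}) is easy to tabulate. The following Python sketch (the density parameters are illustrative, not fitted values) evaluates $\varpi(z)$ for the naive choice $\varpi_0 = \Omega_\Lambda/\Omega_m$, showing that the departure from GR switches on only at late times:
\begin{verbatim}
# Evaluate the slip parametrization varpi(z) = varpi_0 (1+z)^(-3)
# for the naive expectation varpi_0 = Omega_Lambda / Omega_m.
# The density parameters below are illustrative, not fitted values.
Omega_m = 0.26
Omega_L = 1.0 - Omega_m
varpi_0 = Omega_L / Omega_m

for z in (0.0, 0.5, 1.0, 3.0):
    print(f"z = {z:3.1f}  ->  varpi = {varpi_0 / (1.0 + z) ** 3:.3f}")
\end{verbatim}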
The previous work by DCCM considered the effects of modified gravity on cosmological perturbations in a one-parameter context: i.e., ``how does the new (modified gravity) parameter affect cosmological data when all other parameters are held fixed (at the WMAP 3 year maximum likelihood values)?'' They used a modified version of the Boltzmann code CMBfast \cite{Seljak:1996is} to evaluate the effect of $\varpi_0$ on the cosmic microwave background (CMB) anisotropy, matter power spectrum, weak lensing convergence correlation function, and galaxy-CMB cross-correlation power spectrum. While this analysis was useful for testing for the existence of PPF effects, the results glossed over degeneracies that exist between $\varpi_0$ and traditional cosmological parameters. DCCM's Figure 9 already demonstrates a potential degeneracy between $\varpi_0$ and $\sigma_8$. Identifying further degeneracies and more rigorously motivating the possibility of non-zero $\varpi_0$ requires analysis across the full cosmological parameter space. In the following, we present the results of a likelihood analysis based on a Monte Carlo Markov chain sampling of the space of cosmological parameters. The parameters, $\{\Omega_b h^2, \Omega_c h^2, \theta, \tau_\text{ri}, n_s, A_s, A_{\text{SZ}}, \varpi_0\}$, are respectively the baryon density, cold dark matter density, the ratio of the sound horizon to the angular diameter distance, the optical depth to last scattering, the scalar spectral index, the amplitude of the primordial curvature perturbations, a normalization parameter for the SZ effect, and the gravitational slip amplitude of Eq.~(\ref{varpieqn}). Aside from $\varpi_0$, these are the standard parameters in the convention used by the publicly-available code CosmoMC \cite{CosmoMCreadme}. We generate our Markov Chains using CosmoMC \cite{Lewis:1999bs,Lewis:2002ah,CosmoMC_notes} with modules added to calculate likelihoods based on the weak lensing \cite{Massey:2007gh,Lesgourgues:2007te} and galaxy-CMB cross-correlation spectra \cite{Ho:2008bz}. The CMB data and likelihood code comes from the WMAP team's 5-year release \cite{Dunkley:2008ie}. Supernova data comes from the Union data set produced by the Supernova Cosmology Project \cite{Kowalski:2008ez}. The weak lensing data comes from the CFHTLS weak lensing survey \cite{Fu:2007qq,Kilbinger:2008gk}. To help understand our results, we present a closed system of ordinary differential equations describing the evolution of $\phi$ and the matter overdensity $\delta$ under $\varpi_0\ne0$. These results imply a correction to the Poisson equation that was neglected in DCCM. Section \ref{quadrupole} presents these equations and uses them to describe the dependence of the large-angle CMB anisotropy on $\varpi_0$. Section \ref{code} discusses the modifications made to the public CosmoMC codes to implement Eq.~(\ref{parametrization}). Section \ref{results} presents the likelihood contours found from our Markov chains. Section \ref{forecasts} makes an attempt at forecasting the results of future experiments. We conclude in Section \ref{conclusions}. \section{Evolution of Perturbations} \label{quadrupole} The procedure for evolving the matter and metric perturbations is as follows. We assume that the perturbed stress-energy tensors for all matter and radiation are conserved independently of the theory of gravitation: \begin{equation} \nabla_\mu T^{\mu\nu}=0.
\label{tmunueqn} \end{equation} We next impose the relationship given by Eq.~(\ref{parametrization}) between potentials $\phi$ and $\psi$, which upon translation into synchronous gauge implies an evolution equation for the metric variable $\alpha \equiv (\dot h + 6 \dot\eta)/2 k^2$: \begin{equation} \dot{\alpha}=-(2+\varpi)\mathcal{H}\alpha+(1+\varpi)\eta -12\pi G a^2(\bar{\rho}+\bar{p})\sigma/k^2. \label{alphadot} \end{equation} Here, a dot indicates the derivative with respect to conformal time, $h$ and $\eta$ are the synchronous-gauge metric perturbations, $\mathcal{H} = \dot a/a$ is the conformal-time Hubble parameter, and $\sigma$ is the shear in a fluid with mean density $\bar\rho$ and pressure $\bar p$. (We use the same notation as Ref.~\cite{Ma:1995ey}.) We further assume that there is no preferred reference frame introduced by the new gravitational effects; there is no ``dark fluid'' momentum flux or velocity relative to the dark matter and baryon cosmic rest frame. This condition is imposed by enforcing the same perturbed time-space equation as in GR, \begin{equation} k^2\dot\eta = 4\pi G a^2(\bar\rho+\bar p)\theta , \label{etadot} \end{equation} where $\theta$ is the divergence of the velocity field in a fluid with mean density $\bar\rho$ and pressure $\bar p$. Satisfying this equation automatically means that Bertschinger's consistency condition, that long-wavelength curvature perturbations should evolve like separate Robertson-Walker spacetimes, is satisfied \cite{Bertschinger:2006aw}. The model of $\varpi(z)$ plus the three Eqs.~(\ref{tmunueqn}-\ref{etadot}) closes the system of equations. (See Refs.~\cite{Caldwell:2007cw,Daniel:2008et} for further details.) In order to study the late-time behavior of the system of equations, we may neglect the shear and velocity perturbations and express the evolution equations in conformal-Newtonian/longitudinal gauge as \begin{eqnarray} \ddot{\phi}&=&-(3+\varpi)\mathcal{H}\dot{\phi}-\dot{\varpi}\mathcal{H}\phi -(1+\varpi)(\mathcal{H}^2+2\dot{\mathcal{H}})\phi,\quad\label{phidoteqn}\\ \dot{\delta}&=&3\dot{\phi}- \left(\frac{k}{\mathcal{H}}\right)^2 \frac{\dot{\phi}+(1+\varpi)\mathcal{H}\phi}{1-\dot{\mathcal{H}}/\mathcal{H}^2 }\label{deltadoteqn} \end{eqnarray} where $\delta$ is the matter density contrast. \begin{figure}[!t] \includegraphics[scale=0.35]{figure1.eps} \caption{The potentials $\phi$ and $\psi$ are shown versus the scale factor, for different values of $\varpi_0$. The blue, dark curves are $\phi$, whereas the green, light curves are $\psi$. Note that they behave oppositely; when $\phi$ becomes shallower, $\psi$ becomes deeper, and {\it vice versa}.} \label{phifig}% \end{figure} \begin{figure}[!t] \includegraphics[scale=0.35]{figure2.eps} \caption{The matter density contrast is shown versus the scale factor, for different values of $\varpi_0$. The time evolution is obtained by integrating Eqs.~(\ref{phidoteqn}) and (\ref{deltadoteqn}), with initial conditions $\phi=-10^{-5}$, $\dot{\phi}=0.0$ for $k=0.01 \text{Mpc}^{-1}$. Positive (negative) values of $\varpi_0$ enhance (slow) the growth of density perturbations.} \label{deltafig}% \end{figure} \begin{figure}[!t] \includegraphics[scale=0.35]{figure3.eps} \caption{The degree of deviation from the Poisson equation versus scale factor is shown for different values of $\varpi_0$. Because $\varpi$ is scale-independent, so too is the ratio $-k^2 \phi/(4 \pi G a^2 \delta\rho)$.
For positive (negative) $\varpi_0$, a given $\phi$ corresponds to a larger (smaller) density contrast than in GR.} \label{poissonfig}% \end{figure} Consider the behavior of an overdense region, $\delta >0$, as it evolves from early times when GR is valid to late times when new gravitational effects characterized by $\varpi$ become important. At early times, when the Poisson equation is valid, $\phi<0$ for the overdensity. While the expansion is matter-dominated, the potential remains static. However, at late times, with the onset of cosmic acceleration, the potential begins to evolve. In the case of GR, $\dot{\phi}>0$ so the potential is stretched shallower. The density contrast $\delta$ continues to grow via gravitational instability, although the rate of growth is slowed. The evolution of $\phi$ can be understood in terms of a competition between the expansion diluting the matter density and stretching $\phi$ shallower, and the accretion of matter sourcing and deepening $\phi$. In GR, the accelerated expansion upsets the balance in favor of dilution, so that $\phi$ becomes shallower and $\delta$ grows more slowly. When $\varpi_0 \neq 0$, the competition between effects changes. Numerically integrating Eqs.~(\ref{phidoteqn}) and (\ref{deltadoteqn}), we find that $\varpi_0>0$ causes $\phi$ to become even shallower, yet the density contrast grows faster, as illustrated in Figs.~\ref{phifig} and \ref{deltafig}. This seems counter-intuitive, since the shallower potential should provide weaker attraction for the accretion of surrounding matter. In the case $\varpi_0<0$, the potential $\phi$ becomes more negative or deeper, and the density contrast grows more slowly. Likewise, the deeper potential should naively provide greater attraction. But here the difference between $\phi$ and $\psi$ is important. As seen in Fig.~\ref{phifig}, the potential $\phi$ grows shallower (deeper) for $\varpi_0 >0$ ($<0$). However, the potential responsible for acceleration $\psi = (1+\varpi)\phi$ behaves oppositely, becoming deeper (shallower). Hence, the competition swings in favor of increased clustering over dilution by the expansion. The new behavior of $\phi$ and $\delta$ implies a correction to the Poisson equation. As seen in Fig.~\ref{poissonfig}, for $\varpi_0>0$ ($<0$), the density contrast grows more (less) rapidly and the potential $\phi$ becomes shallower (deeper), so that the ratio \begin{equation} \Gamma \equiv -k^2 \phi / (4 \pi G a^2 \delta\rho) \label{curlygeqn} \end{equation} grows smaller (larger). This suggests that we can restore the Poisson equation by introducing a time-dependent gravitational constant $G_\text{eff} = G\, \Gamma$, whence $-k^2 \phi = 4 \pi G_\text{eff} a^2 \delta\rho$. Note that $G_\text{eff}$ is not a free function, but is determined by Eqs.~(\ref{tmunueqn}-\ref{etadot}). Because we have chosen $\varpi$ to be scale-independent, $G_\text{eff}$ is too. A different strategy, whereby the time- and space-dependence of $G_\text{eff}$ is imposed separately \cite{Jain:2007yk}, will not necessarily satisfy Eqs.~(\ref{tmunueqn}-\ref{etadot}). \begin{figure}[!t] \includegraphics[scale=0.35]{figure4.eps} \caption{The CMB quadrupole moment is shown versus $\varpi_0$.
As explained in the text, the quadratic dependence can be understood in terms of the influence of $\varpi_0$ on the ISW effect. (Reproduced from DCCM with our new normalization Eq.~(\ref{varpieqn}).)} \label{quadrupolefig}% \end{figure} \begin{figure}[!t] \includegraphics[scale=0.35]{figure5_corrected.eps} \caption{The conformal time derivative of the gravitational potential $\phi$ is shown versus scale factor, for different values of $\varpi_0$. Initial conditions are the same as in Fig. ~\protect{\ref{deltafig}}. The potential well is decaying when $\frac{d\phi}{d\tau}/|\phi_i H_0|>0$, and is deepening when negative.} \label{phidotfig}% \end{figure} We can use this new understanding to explain the curious behavior of the large-angular scale CMB anisotropy spectrum. The effect of $\varpi_0\ne0$ on the low $l$ moments of the CMB anisotropy is not monotonic, as seen in Fig.~\ref{quadrupolefig}. The cause is the suppression of the integrated Sachs-Wolfe effect (ISW) at $\varpi_0\simeq 1$. If the gravitational potentials in the Universe are evolving with time, CMB photons will lose less (more) energy climbing out of potential wells than they gained falling in, resulting in a net blue (red) shift as the potentials shrink (grow). This is the ISW effect whereby time-evolving gravitational potentials contribute to the moments of the photon distribution function, $\Theta_l(k,\,\tau)$, via \begin{equation} \label{phidotpsidotbessel} \int_0^{\tau_0} d\tau(\dot{\phi}(k,\tau)+\dot{\psi}(k,\tau)) j_l(k(\tau_0-\tau))\exp[-\tau_\text{ri}(z)]. \end{equation} (See equation 8.55 of Ref.~\cite{Dodelson:2003ft}.) Here $j_l$ is a spherical Bessel function of the first kind, $\tau$ is the conformal time, $\tau_0$ is the conformal time at $z=0$, and $\tau_\text{ri}(z)$ is the optical depth to redshift $z$. The strength of the ISW effect is determined by the sum $\dot\phi + \dot\psi$, which, using Eqs.~(\ref{parametrization}-\ref{varpieqn}), is given by \begin{eqnarray} \dot{\phi}+\dot{\psi}&=&\dot{\phi}(2+\varpi)+\phi\dot{\varpi}\nonumber\\ &=&\dot{\phi}(2+\varpi)+3\phi\mathcal{H}\varpi. \label{quadisw} \end{eqnarray} Again consider the evolution of an overdensity $\delta>0$ with $\phi <0$. In GR, the sum is positive, $\dot\phi + \dot\psi>0$. When $\varpi_0<0$, the second term in Eq.~(\ref{quadisw}) is always positive. The first term is generally subdominant, since $|\dot\phi| < \mathcal{H} |\phi|$, as can be inferred from Fig.~\ref{phidotfig}. Therefore $\varpi_0<0$ enhances the ISW effect. When $\varpi_0 >0$, there is a competition between the first and second terms; the first term is positive, whereas the second term is negative. The first term always wins, but at some intermediate value of $\varpi_0$ the two terms nearly cancel, thereby suppressing the ISW effect relative to the case with $\varpi_0=0$. This explains the dip in the quadrupole moment versus $\varpi_0$, seen in Fig.~\ref{quadrupolefig}. \section{Implementation} \label{code} The modifications of the Monte Carlo Markov chain software CosmoMC to allow for $\varpi_0\ne 0$ proceed almost identically to the modifications made to CMBfast for DCCM, with a few differences. To compare the predictions of our model with weak lensing data we adapt the weak lensing module provided by Refs.~\cite{Massey:2007gh,Lesgourgues:2007te}. We modify it to assess the likelihood in terms of the variance of the aperture mass (Eq.~5 of \cite{Fu:2007qq}) with a full covariance matrix \cite{CFHTLSdata}.
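As a minimal sketch of this step (not the actual CFHTLS module), the likelihood evaluation amounts to a correlated Gaussian $\chi^2$; the data vector and covariance below are synthetic placeholders for the binned aperture-mass variance and its covariance:
\begin{verbatim}
import numpy as np

def chi2(model, data, cov):
    """-2 log-likelihood (up to a constant) for correlated Gaussian errors."""
    r = model - data
    return float(r @ np.linalg.solve(cov, r))

# Synthetic placeholders for the binned aperture-mass variance and its
# full (non-diagonal) covariance; the real inputs come from CFHTLS.
rng = np.random.default_rng(1)
nbins = 5
A = rng.normal(size=(nbins, nbins))
cov = A @ A.T + nbins * np.eye(nbins)   # symmetric positive definite
data = rng.normal(size=nbins)
model = data + 0.1 * rng.normal(size=nbins)
print("chi^2 =", chi2(model, data, cov))
\end{verbatim}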
Because we probe weak lensing at non-linear scales, we calculate the power spectrum of the lensing potential by extrapolating the linear matter power spectrum $P_\delta$ to non-linear scales and using the relationship (\ref{curlygeqn}) between the matter overdensity $\delta$ and the gravitational potential $\phi$ to find the non-linear $P_\phi$. Whereas CMBfast calculates the non-linear matter power spectrum from the phenomenological fit of Peacock and Dodds \cite{Peacock:1996ci}, CosmoMC (having been built around the code CAMB \cite{Lewis:1999bs}) uses Smith {\it et al.}'s fit \cite{Smith:2002dz} (see their Appendix C). Smith {\it et al.} express their fit as a non-trivial function of the linear power spectrum and $\Omega_m$. This function assumes the $\Lambda$CDM relationship between $\Omega_m$ and perturbation growth. Gravitational slip alters this relationship, as discussed above in section \ref{quadrupole}. Therefore, to adapt Smith {\it et al.}'s fit to the case $\varpi_0\neq 0$, we use the phenomenological relationship (DCCM equation 24) \begin{equation} \label{omegamfit} \Omega_{m}|_{\varpi_0=0}=\Omega_{m}|_{\varpi_0\ne0}+0.13\varpi_0\frac{\Omega_m}{\Omega_\Lambda} \end{equation} to find a $\varpi_0=0$, $\Lambda$CDM model with a similar growth history to our $\varpi_0\ne 0$ model and use that value of $\Omega_{m}|_{\varpi_0=0}$ in Smith {\it et al.}'s equations (C18). Eq.~(\ref{omegamfit}) breaks down for $\Omega_{m}|_{\varpi_0\ne0} \le 0.15$, but this region of parameter space is excluded to at least $2\sigma$ (see Figure \ref{omgfig}). A second CosmoMC run with a more accurate fitting function yielded identical results to those obtained using Eq.~(\ref{omegamfit}). This is not a precise method for determining the non-linear power spectrum in the presence of gravitational slip. Precision would require examination of N-body simulations which, unfortunately, would imply assumptions about what alternative theory of gravity we are constraining. Recently, much work has been done attempting to calculate the non-linear power spectrum directly, without the aid of an N-body simulation. Crocce and Scoccimarro propose to expand the non-linear power spectrum as a Taylor-like sum \begin{equation} \label{psum} P_\delta=\sum_iP_\delta^{(i)} \end{equation} where the different orders of $P_\delta$ are derived from a diagrammatic scheme similar to Feynman diagrams \cite{Crocce:2005xy}. They find that the resulting sum (\ref{psum}) is much better behaved than results derived from perturbation theory (see their Figure 1). Matarrese and Pietroni \cite{Matarrese:2007wc} use the formalism of renormalization group theory to derive a generating functional for the different orders of $P_\delta$. Taruya and Hiramatsu adapt methods from the statistical studies of fluid instabilities to separate out and solve for the cross-mode interactions in $\tilde\delta$ \cite{Taruya:2007xy}. All of these methods yield better agreement with the results of N-body simulations than standard perturbation theory in the case of $\varpi=0$ (see Figure 2 of Ref.~\cite{Crocce:2007dt}, Figure 8 of Ref.~\cite{Matarrese:2007wc}, and Figure 3 of Ref.~\cite{Hiramatsu:2009ki}). Work has already begun adapting them to alternative gravity theories. In Ref.~\cite{Koyama:2009me}, Koyama, Taruya, and Hiramatsu extend the method of Ref.~\cite{Taruya:2007xy} to include $f(R)$ and DGP gravity theories by assuming that they can be approximated with a Brans-Dicke scalar-tensor theory on sub-horizon scales.
Hiramatsu and Taruya \cite{Hiramatsu:2009ki} also try to encompass modified gravity theories by parametrizing them in terms of their implied effective Newton's constant $G_\text{eff}=\Gamma G$ (see equation \ref{poissoncorrection} of the present work). Following their lead, it should be possible to adapt the non-linear power spectrum calculations of Ref.~\cite{Taruya:2007xy} -- or even \cite{Crocce:2005xy} and \cite{Matarrese:2007wc} -- to account for model-independent gravitational slip. Such a calculation is beyond the scope of this work. Given the relatively well-behaved regions of parameter space allowed by experiments (see Section \ref{results} below), we do not expect this limitation to significantly influence our findings. To incorporate the galaxy-CMB cross-correlation, we use the module written by Ho {\it et al.} \cite{Ho:2008bz}. Modifications for $\varpi_0\ne 0$ enter through the $\phi+\psi$ power spectrum (see section II of \cite{Ho:2008bz}), \begin{equation} \label{poissoncorrection} P_{\phi+\psi} = \frac{9}{4}\Omega_{m,0}^2\left(\frac{H_0}{ck}\right)^4 \left(\frac{D_{\varpi}}{a}\right)^2 \left[(1 + \frac{1}{2}\varpi)\Gamma\right]^2 \times P_{\delta}. \end{equation} Note that equation (27) of DCCM neglected the factor $\Gamma$, defined in Eq.~(\ref{curlygeqn}), to correct the Poisson equation. The corrected weak lensing statistics show the same qualitative behavior as in Figure 10 of DCCM. However, large values $|\varpi_0| \gg 1$ have a weaker effect on the amplitude of the convergence spectrum. \section{Results} \label{results} \begin{figure}[!t] \includegraphics[scale=0.42]{figure6.eps} \caption{The 68\% and 95\% likelihood contours in the $\varpi_0-\Omega_m$ parameter space are shown. The blue contours are based on CMB data alone. The red contours add weak lensing, type Ia supernovae, and galaxy-CMB cross-correlation data.} \label{omgfig}% \end{figure} \begin{figure}[!t] \includegraphics[scale=0.42]{figure7.eps} \caption{The 68\% and 95\% likelihood contours in the $\varpi_0-\sigma_8$ parameter space are shown. Shading is the same as in Fig.~\protect{\ref{omgfig}}. Note that the addition of large-scale structure data breaks the degeneracy in $\varpi_0-\sigma_8$.} \label{sigfig}% \end{figure} The results of our multiparameter investigation are shown in Figs.~\ref{omgfig} and \ref{sigfig}. Fig.~\ref{omgfig} shows the $68\%$ and $95\%$ contours in $(\Omega_m,\varpi_0)$ space marginalized over all other parameters. Fig.~\ref{sigfig} shows the same contours in $(\sigma_8,\varpi_0)$ space. Red (smaller) likelihood contours were generated using all available data sets (WMAP 5 year \cite{Dunkley:2008ie}, Supernova Union \cite{Kowalski:2008ez}, CFHTLS \cite{Fu:2007qq}, and the galaxy surveys selected by \cite{Ho:2008bz}). Blue (larger) contours were generated using only the WMAP 5 year data. For each set of constraints, we generated four independent Markov chains. We achieved convergence by running the calculations until the statistic $|1-R|$ was much less than unity, where $R$ is Gelman and Rubin's potential scale reduction factor, defined as the ratio of the variance across all of the chains to the mean of the variance of each individual chain evaluated for the least converged parameter \cite{GelmanRubin,BrooksGelman,CosmoMCreadme}. Our conclusions are three-fold: \begin{itemize} \item Present cosmological data constrains gravity to agree with GR, assuming a background evolution consistent with $\Lambda$CDM. \item Very negative values of $\varpi_0$ are ruled out.
This should not be surprising, since a sign difference between the longitudinal and Newtonian gravitational potentials would mean that test particles are repelled by overdense regions. \item Large-scale structure data (in our case, weak lensing and the galaxy-CMB correlation) are critical to constraining $\varpi_0$. \end{itemize} The effects described in section \ref{quadrupole} mean that any CMB anisotropy spectrum can be reasonably well approximated (modulo a normalization) by two possible values of $\varpi_0$. DCCM Figure 1 showed that $\varpi_0\ne 0$ has no effect on the shape of higher $l$ multipoles within linear theory. This explains the double-peaked likelihood curve in DCCM Fig.~3 and the broad blue contours in Figs.~\ref{omgfig} and \ref{sigfig} in this work. Fortunately, the effect of $\varpi_0\ne 0$ on cosmic structure is monotonic in the range of interest (as discussed in DCCM), so that only one value of $\varpi_0$ is maximally likely for any given realization of weak lensing and galaxy-CMB cross-correlation data, hence the smaller red contours in Figs.~\ref{omgfig} and \ref{sigfig}. Marginalizing over all other parameters, the WMAP 5 year data alone gives $\varpi_0=1.7^{+4.0}_{-2.0} \, (2\sigma)$. Including supernovae, weak lensing, and the galaxy-CMB cross-correlation data improves the constraint to $\varpi_0 = 0.09{}^{+0.74}_{-0.59}\, (2\sigma)$. Table \ref{paramtable} presents the marginalized $1\sigma$ limits on the other cosmological parameters of note. \begin{table*}[!t] \begin{tabular}{l l l l} parameter&\qquad$\varpi_0\ne0$&\qquad$\varpi_0=0$&\qquad WMAP 5-year\\ \hline $\Omega_b h^2$&\qquad$0.02262^{+0.00059}_{-0.00058}$&\qquad$0.02264_{-0.00057}^{+0.00058}$&\qquad$0.02273\pm0.00062$\\ $\Omega_\text{cdm}h^2$&\qquad$0.1167\pm0.0026$&\qquad$0.1170\pm0.0016$&\qquad$0.1109\pm0.0062$\\ $\theta_s$&\qquad$1.0417_{-0.0028}^{+0.0029}$&\qquad$1.0419_{-0.0029}^{+0.0028}$&\qquad$1.0400\pm0.0029$\\ $\tau_\text{ri}$&\qquad$0.085\pm0.016$&\qquad$0.087_{-0.016}^{+0.017}$&\qquad$0.087\pm0.017$\\ $n_s$&\qquad$0.964\pm0.014$&\qquad$0.965\pm0.014$&\qquad$0.963_{-0.015}^{+0.014}$\\ $\Omega_\Lambda$&\qquad$0.712\pm0.014$&\qquad$0.710_{-0.011}^{+0.012}$&\qquad$0.742\pm0.030$\\ $\sigma_8$&\qquad$0.842\pm0.014$&\qquad$0.844\pm0.015$&\qquad$0.796\pm0.036$\\ $h$&\qquad$0.696\pm0.014$&\qquad$0.695\pm0.013$&\qquad$0.719^{+0.026}_{-0.027}$ \end{tabular} \caption{Marginalized ($1\sigma$) constraints for cosmological parameters resulting from Monte Carlo Markov chain analysis. The left and center columns are generated using all available data sets (CMB, weak lensing, supernovae, and galaxy-CMB cross-correlation). The left column allows $\varpi_0$ to vary; the center column fixes $\varpi_0=0$. Because our constraint on $\varpi_0$ is consistent with $\varpi_0=0$, we find little difference between the two columns. The right column shows the constraints reported by the WMAP team in Ref.~\cite{Dunkley:2008ie} based on just the WMAP 5-year data. The principal improvements from adding supernova, weak lensing, and galaxy-CMB cross-correlation data lie in constraining $\Omega_\Lambda$ (a result of adding the supernovae) and $\sigma_8$ (a result of adding weak lensing).} \label{paramtable} \end{table*} \section{Forecasts} \label{forecasts} \begin{figure}[t] \includegraphics[scale=0.42]{figure8.eps} \caption{The projected 68\% and 95\% likelihood contours in the $\varpi_0-\Omega_m$ parameter space are shown. The yellow contours are based on mock Planck data.
The green contours add mock weak lensing data. The underlying model is assumed to be $\varpi_0=0$ with $\Omega_m=0.26$. The current constraints are shown for reference.} \label{omgfigproj}% \end{figure} \begin{figure}[t] \includegraphics[scale=0.42]{figure9.eps} \caption{The projected 68\% and 95\% likelihood contours in the $\varpi_0-\sigma_8$ parameter space are shown. Shading is the same as in Fig.~\protect{\ref{omgfigproj}}.} \label{sigfigproj}% \end{figure} It is useful to ask how much tighter our constraints will become with future experiments. We generate two mock data sets -- one simulating the results of the upcoming Planck CMB experiment, the other simulating the results of a future weak lensing survey, modeled after the proposed ESA experiment Euclid -- and feed them into our modified CosmoMC. To simulate Planck data, we use a fiducial model given by the best-fit parameters of WMAP \cite{Dunkley:2008ie} with noise properties consistent with a combination of the Planck $100$-$143$-$217$ GHz channels of HFI \cite{bluebook}; in this case we also fit for B-modes produced by lensing of the CMB (see Ref.~\cite{Paolo}), and we use the full-sky likelihood function given in \cite{Lewis:2005tp}. To simulate weak lensing data, we generate a mock convergence power spectrum $P_\kappa (l)$ (equation (2) of Ref.~\cite{Fu:2007qq}) corrected for alternative gravity as in Eq.~(\ref{poissoncorrection}). We generate data in bins of size $\Delta_l=1$ for $2\le l<100$ and $\Delta_l=40$ for $100<l<2980$. We simulate the (1$\sigma$) errors as (Eq.~11 of Ref.~\cite{Cooray:1999rv}) \begin{equation} \nonumber \sigma_l=\sqrt{\frac{2}{(2l+1)\,\Delta_l\, f_{\text{sky}}}}\left(P_{\kappa}(l)+\frac{\sigma_\epsilon^2}{n_\text{gal}}\right), \end{equation} taking $\sigma_\epsilon=0.25$, $n_{\text{gal}}=35\,\text{arcmin}^{-2}$ and $f_\text{sky}=0.48$, consistent with values projected for ESA's Euclid experiment (Table 1 of Ref.~\cite{Kitching:2008dp}). These assumptions give a tighter constraint than if we had used SNAP/JDEM parameters, since SNAP/JDEM has an $f_\text{sky}$ smaller by a factor of $10$ \cite{SNAP}. We fit the redshift distribution of sources $n(z)$ from a mock data set based on Eq.~14 of Ref.~\cite{Fu:2007qq}, with parameter values taken from their Table 1. The 1$\sigma$ errors in our mock $n(z)$ are reduced from the actual values \cite{CFHTLSdata} by a factor of $1/\sqrt{2}$. The likelihood relative to the mock weak lensing data is calculated as a simple $\chi^2$ (i.e., we assume that the covariance matrix is diagonal); this is a safe assumption according to \cite{Cooray:2000ry}. Figs.~\ref{omgfigproj} and \ref{sigfigproj} show the resulting likelihood contours. Looking at the Planck-only (yellow) contours, we see the weakness of using CMB measurements alone to constrain $\varpi_0$, as a bimodal distribution is obtained once again. We also see more clearly in Fig.~\ref{sigfigproj} the degeneracy between $\varpi_0$ and $\sigma_8$ as normalization parameters (one can interpret the effect of $\varpi_0$ on $\delta$ in Fig.~\ref{deltafig} as a renormalization of the matter power spectrum). Since weak lensing statistics depend sensitively on the power spectrum normalization, they once again break the degeneracy. Marginalizing over all other parameters, the mock data sets give the constraint $\varpi_0=-0.07{}^{+0.13}_{-0.16}\,(2\sigma)$, a factor of $\sim 4$ improvement over the current constraint.
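For concreteness, the following minimal sketch (in Python with NumPy; the function and variable names are ours, and the conversion of $n_\text{gal}$ from $\text{arcmin}^{-2}$ to $\text{sr}^{-1}$ is our assumption about the intended units, not part of any released pipeline) shows how the mock band-power errors above can be tabulated; the $\Delta_l=40$ bin centers are illustrative only:
\begin{verbatim}
import numpy as np

def mock_sigma_l(ell, P_kappa, delta_l,
                 f_sky=0.48, sigma_eps=0.25, n_gal_arcmin2=35.0):
    # 1-sigma band-power error (Eq. 11 of Cooray 1999, as used above)
    n_gal = n_gal_arcmin2 * (180.0 * 60.0 / np.pi) ** 2  # arcmin^-2 -> sr^-1
    noise = sigma_eps ** 2 / n_gal                       # shot-noise term
    return (np.sqrt(2.0 / ((2.0 * ell + 1.0) * delta_l * f_sky))
            * (P_kappa + noise))

# binning used for the mock data: Delta_l = 1 for 2 <= l < 100, 40 above
ells = np.concatenate([np.arange(2, 100), np.arange(120, 2980, 40)])
dls = np.where(ells < 100, 1.0, 40.0)
\end{verbatim}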
\section{Conclusions} \label{conclusions} If we are justified in describing the background evolution by a $\Lambda$CDM universe, then the results illustrated in Figs.~\ref{omgfig} and \ref{sigfig} do not appear to indicate a significant departure from GR. In fact, these results conflict with our naive expectation that $\varpi_0 \simeq\Omega_\Lambda / \Omega_m$. However, these results allow the ratio of $\phi$ to $\psi$ to vary by order unity from the predictions of GR at the present epoch. (Still weaker constraints result if the redshift dependence of $\varpi (z)$ is allowed to vary; see Ref.~\cite{Paolo}.) These are not very tight constraints. As shown in Sec.~\ref{forecasts}, it seems likely that experiments already under consideration will give us much tighter constraints on parametrized-post-Friedmannian departures from GR in the near future. If, indeed, future constraints improve, we may need to reconsider the assumption of homogeneous $\varpi$. Throughout this paper we neglect any possible scale-dependence of $\varpi$. This simplifying assumption seems justified given the absence of any significant departure from GR. Were we to see evidence of a departure from GR, the onus would be on us to demonstrate the new theory's consistency with solar-system-scale tests, all of which prefer GR to one part in $10^5$ (e.g. Ref.~\cite{Bertotti:2003rm}). Beyond this experimental evidence, we expect that $\varpi$ should be scale dependent simply because of the differing evolution histories of sub- and super-horizon perturbation modes. Other work has already begun to address this expectation. Hu and Sawicki implement a scale-dependent gravitational slip, based on the behavior seen in $f(R)$ models of gravity \cite{Hu:2007pj}. Afshordi {\it et al.} offer a scale-dependent parametrization of $-(\psi-\phi)/(\phi+\psi)$ designed to be consistent with higher-dimensional generalizations of DGP gravity \cite{Afshordi:2008rd}. Though they find that their parametrization is capable of describing effects qualitatively consistent with tensions in current data sets, none of those tensions is strong enough to warrant a detection of alternative gravity. Koivisto and Mota explore a different set of new gravitational effects by supposing that dark energy is an imperfect (non-zero shear) fluid \cite{Koivisto:2005mm}. Shear $\sigma$, like gravitational slip, affects the space-space, off-diagonal perturbed Einstein equation, $k^2(\phi-\psi)=12\pi G a^2\bar\rho(1+w)\sigma$. The imperfect fluid introduces a dark flow, however, so that the gravitational effects are not fully equivalent to the results of gravitational slip. Like the present work, they find that the data cannot yet definitively rule in or out the interesting regions of their parameter space. Specifically, they find that the effect of non-zero shear on the CMB anisotropy spectrum is weaker than the effect of $\varpi$ demonstrated in DCCM. We have also shown that the modification of the Poisson equation follows uniquely from the assumptions of our model: the enforced relationship between $\phi$ and $\psi$, stress-energy conservation, and the absence of a preferred frame indicated by a ``dark flow''. This must be taken into account when conducting future tests of GR on cosmological scales. \acknowledgments This work was supported by NSF CAREER AST-0349213 (RC) and AST-0645427 (AC). AC and RC thank Caltech for hospitality while this work was completed. AM's research is supported by ASI contract I/016/07/0 ``COFIS''.
\section{Introduction} \IEEEPARstart{I}{n} the presence of missing data, the representativeness of data samples may be significantly reduced, and inferences drawn from the data can therefore be seriously distorted. Given this pressing circumstance, it is crucially important to devise computational methods that can restore unseen data from available observations. As data in practice is often organized in matrix form, it is of considerable significance to study the problem of \emph{matrix completion}~\cite{tao:2009:mc,CandesPIEEE,Mohan:2010:isit,rahul:jlmr:2010,akshay:2013:nips,william:2014:nips,raghunandan:jmlr:2010,raghunandan:tit:2010,troy:2013:nips}, which aims to fill in the missing entries of a partially observed matrix. \begin{prob}[Matrix Completion]\label{pb:mc} Denote by $[\cdot]_{ij}$ the $(i,j)$th entry of a matrix. Let $L_0\in\Re^{m\times{}n}$ be an unknown matrix of interest. The rank of $L_0$ is also unknown. Given a sampling of the entries in $L_0$ and a 2D sampling set $\Omega\subseteq{}\{1,\cdots,m\} \times\{1,$ $\cdots,n\}$ consisting of the locations of the observed entries, i.e., given \begin{eqnarray*} \Omega\quad\textrm{and}\quad\{[L_0]_{ij} |(i,j)\in\Omega\}, \end{eqnarray*} can we identify the target $L_0$? If so, under which conditions? \end{prob} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{missing.pdf}\vspace{-0.15in} \caption{The unseen future values of a time series are essentially a special type of missing data.}\label{fig:miss}\vspace{-0.25in} \end{center} \end{figure} In general, matrix completion is an ill-posed problem, as the missing entries can take arbitrary values. Thus, some assumptions are necessary for studying Problem~\ref{pb:mc}. Cand{\`e}s and Recht~\cite{Candes:2009:math} proved that the target $L_0$ is, with high probability, exactly restored by convex optimization, provided that $L_0$ is \emph{low rank} and \emph{incoherent} and the set $\Omega$ of locations of the observed entries is sampled \emph{uniformly at random} (i.e., uniform sampling). This pioneering work provides several useful tools for investigating matrix completion and many other related problems. Its assumptions, including low-rankness, incoherence and uniform sampling, are now standard and widely used in the literature, e.g.,~\cite{Candes:2009:JournalACM,xu:2012:tit,sun:2016:tit,tpami_2013_lrr,Jain:2014:nips,liu:tpami:2016,zhao:nips:2015,ge:nips:2016}. However, the assumption of uniform sampling is often invalid in practice: \begin{itemize} \item[$\bullet$] A ubiquitous type of missing data is the unseen future data, e.g., the next few values of a time series as shown in Figure~\ref{fig:miss}. Clearly, the (missing) future data is not randomly selected, let alone sampled uniformly at random. In this case, as will be shown in Section~\ref{sec:exp:rcn}, the theories built upon uniform sampling are no longer applicable. \item[$\bullet$] Even when the underlying regime of the missing data pattern is a probabilistic model, the reasons for different observations being missing could be correlated rather than independent. In fact, most real-world datasets cannot satisfy the uniform sampling assumption, as pointed out by~\cite{ruslan:2010:nips,Meka:2009:MCP}. \end{itemize} Research in the direction of deterministic or nonuniform sampling has been relatively sparse, e.g.,~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,Negahban:2012:JMLR,ruslan:2010:nips,Meka:2009:MCP,JMLR:v16:chen15b,daniel:2016:jstsp}.
For example, Negahban and Wainwright~\cite{Negahban:2012:JMLR} studied the case of weighted entrywise sampling, which is more general than the setup of uniform sampling but still a special form of random sampling. In particular, Kir\'{a}ly et al.~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr} treated matrix completion as an algebraic problem and proposed deterministic conditions to decide whether a particular entry of a \emph{generic} matrix can be restored. Pimentel{-}Alarc{\'{o}}n et al.~\cite{daniel:2016:jstsp} built deterministic sampling conditions for ensuring that, \emph{almost surely}, there are only finitely many matrices that agree with the observed entries. Strictly speaking, however, those conditions ensure only the recoverability of a special kind of matrix; they cannot guarantee the identifiability of an arbitrary $L_0$. This gap is indeed striking, as the data matrices arising from modern applications often have complicated structures and need not be generic. Moreover, the sampling conditions given in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp} are not easily interpretable and are thus hard to apply to other related problems such as \emph{matrix recovery} (which is matrix completion with $\Omega$ being unknown)~\cite{Candes:2009:JournalACM}. To break through the limits of random sampling, we propose in this work two deterministic conditions, the \emph{isomeric condition}~\cite{liu:nips:2017} and \emph{relative well-conditionedness}, for guaranteeing that an \emph{arbitrary} matrix is recoverable from a sampling of its entries. The isomeric condition is a mixed concept that combines the rank and coherence of $L_0$ with the locations and amount of the observed entries. In general, isomerism (the noun form of isomeric) ensures that the \emph{sampled submatrices} (see Section \ref{sec:notation}) are not \emph{rank deficient}\footnote{In this paper, rank deficiency means that a submatrix does not have the largest possible rank. Specifically, suppose that $M'$ is a submatrix of some matrix $M$; then $M'$ is rank deficient iff (i.e., if and only if) $\rank{M'}<\rank{M}$. Note that a submatrix being rank deficient does not necessarily mean that the submatrix does not have full rank; indeed, a submatrix of full rank can still be rank deficient in this sense.}. Remarkably, it is provable that isomerism is \emph{necessary} for the identifiability of $L_0$: whenever the isomeric condition is violated, there exist infinitely many matrices that fit the observed entries no worse than $L_0$ does. Hence, logically speaking, the conditions given in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp} should suffice to ensure isomerism. While necessary, isomerism unfortunately does not suffice to guarantee the identifiability of $L_0$ in a deterministic fashion. This is because isomerism does not exclude the unidentifiable cases where the sampled submatrices are severely ill-conditioned. To compensate for this weakness, we further propose the so-called \emph{relative well-conditionedness}, which encourages the smallest singular values of the sampled submatrices to be bounded away from 0. Equipped with these new tools, isomerism and relative well-conditionedness, we prove a set of theorems pertaining to \emph{missing data recovery}~\cite{Zhang06} and matrix completion. In particular, we prove that the exact solutions that identify the target matrix $L_0$ are strict local minima of the commonly used bilinear programs.
Although theoretically sound, the classic bilinear programs suffer from the weakness that the rank of $L_0$ has to be known. To fix this flaw, we further consider a method termed \emph{isomeric dictionary pursuit} (IsoDP), whose formula can be derived from Schatten quasi-norm minimization~\cite{rahul:jlmr:2010}, and we show that IsoDP is superior to the traditional bilinear programs. In summary, the main contribution of this work is to establish deterministic sampling conditions for ensuring success in completing arbitrary matrices from a subset of the matrix entries, producing theoretical results useful for understanding the completion regimes of arbitrary missing data patterns. \section{Summary of Main Notations}\label{sec:notation} Capital and lowercase letters are used to represent (real-valued) matrices and vectors, respectively, except that some lowercase letters, such as $i,j,k,m,n,l,p,q,r,s$ and $t$, are used to denote integers. For a matrix $M$, $[M]_{ij}$ is the $(i,j)$th entry of $M$, $[M]_{i,:}$ is its $i$th row, and $[M]_{:,j}$ is its $j$th column. Let $\omega_1=\{i_1,i_2,\cdots,i_k\}$ and $\omega_2=\{j_1,j_2,\cdots,j_s\}$ be two 1D sampling sets. Then $[M]_{\omega_1,:}$ denotes the submatrix of $M$ obtained by selecting the rows with indices $i_1,i_2,\cdots,i_k$, $[M]_{:,\omega_2}$ is the submatrix constructed by choosing the columns at $j_1,j_2,\cdots,j_s$, and similarly for $[M]_{\omega_1,\omega_2}$. For a 2D sampling set $\Omega\subseteq{}\{1,\cdots,m\} \times\{1,\cdots,n\}$, we imagine it as a sparse matrix and define its ``rows'', ``columns'' and ``transpose'' as follows: the $i$th row $\Omega_i = \{j_1 | (i_1,j_1)\in\Omega, i_1 = i\}$, the $j$th column $\Omega^j = \{i_1 | (i_1,j_1)\in\Omega, j_1 = j\}$, and the transpose $\Omega^T = \{(j_1,i_1) | (i_1,j_1)\in\Omega\}$. These notations are important for understanding the proposed conditions. For ease of presentation, we shall call $[M]_{\omega,:}$ a \emph{sampled submatrix} of $M$ (see Figure~\ref{fig:sub}), where $\omega$ is a 1D sampling set. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{submatrix.pdf}\vspace{-0.15in} \caption{Illustrations of the sampled submatrices.}\label{fig:sub}\vspace{-0.25in} \end{center} \end{figure} Three types of matrix norms are used in this paper: 1) the operator norm or 2-norm denoted by $\|M\|$, 2) the Frobenius norm denoted by $\|M\|_F$ and 3) the nuclear norm denoted by $\|M\|_*$. The only vector norm used is the $\ell_2$ norm, denoted by $\|\cdot\|_2$. The symbol $|\cdot|$ is reserved for the cardinality of a set. The special symbol $(\cdot)^+$ is reserved for the Moore-Penrose pseudo-inverse of a matrix. More precisely, for a matrix $M$ with SVD\footnote{In this paper, SVD always refers to skinny SVD. For a rank-$r$ matrix $M\in\mathbb{R}^{m\times{}n}$, its SVD is of the form $U_M\Sigma_MV_M^T$, where $U_M\in\Re^{m\times{}r},\Sigma_M\in\Re^{r\times{}r}$ and $V_M\in\Re^{n\times{}r}$.} $M=U_M\Sigma_MV_M^T$, its pseudo-inverse is given by $M^+=V_M\Sigma_M^{-1}U_M^T$. For convenience, we adopt the conventions of using $\mathrm{span}\{M\}$ to denote the linear space spanned by the columns of a matrix $M$, using $y\in\mathrm{span}\{M\}$ to denote that a vector $y$ belongs to the space $\mathrm{span}\{M\}$, and using $Y\in\mathrm{span}\{M\}$ to denote that all the column vectors of a matrix $Y$ belong to $\mathrm{span}\{M\}$.
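To make these indexing conventions concrete, the following minimal sketch (in Python with NumPy; the variable names and toy sizes are ours) realizes the sampled submatrices, the ``rows'', ``columns'' and ``transpose'' of a 2D sampling set, and the pseudo-inverse computed from the skinny SVD:
\begin{verbatim}
import numpy as np

M = np.arange(12.0).reshape(4, 3)        # a 4 x 3 matrix of rank 2
w = [0, 2]                               # a 1D sampling set (0-based)
M_rows = M[w, :]                         # sampled submatrix [M]_{w,:}
M_cols = M[:, w]                         # [M]_{:,w}

Omega = {(0, 0), (0, 2), (1, 1), (3, 2)}              # a 2D sampling set
col = lambda j: {i for (i, jj) in Omega if jj == j}   # "column" Omega^j
row = lambda i: {j for (ii, j) in Omega if ii == i}   # "row" Omega_i
OmegaT = {(j, i) for (i, j) in Omega}                 # "transpose" Omega^T

# Moore-Penrose pseudo-inverse via the skinny SVD: M^+ = V Sigma^{-1} U^T
U, s, Vt = np.linalg.svd(M, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))        # numerical rank
M_pinv = Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T
assert np.allclose(M_pinv, np.linalg.pinv(M))
\end{verbatim}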
\section{Identifiability Conditions}\label{sec:setting} In this section, we introduce the so-called \emph{isomeric condition}~\cite{liu:nips:2017} and \emph{relative well-conditionedness}. \subsection{Isomeric Condition}\label{sec:setting:iso} For ease of understanding, we shall begin with a concept called \emph{$k$-isomerism} (or \emph{$k$-isomeric} in adjective form), which can be regarded as an extension of low-rankness. \begin{defn}[$k$-isomeric]\label{def:iso:k} A matrix $M\in\Re^{m\times{}l}$ is called $k$-isomeric iff any $k$ rows of $M$ can linearly represent all rows in $M$. That is, \begin{align*} &\rank{[M]_{\omega,:}} = \rank{M}, \forall{}\omega\subseteq\{1,\cdots,m\}, |\omega| = k, \end{align*} where $|\cdot|$ is the cardinality of a sampling set and $[M]_{\omega,:}\in\mathbb{R}^{|\omega|\times{}l}$ is called a ``sampled submatrix'' of $M$. \end{defn} In short, saying that a matrix $M$ is $k$-isomeric means that no sampled submatrix $[M]_{\omega,:}$ (with $|\omega|=k$) is rank deficient\footnote{Here, the largest possible rank is $\rank{M}$. So $\rank{[M]_{\omega,:}} = \rank{M}$ gives that the submatrix $[M]_{\omega,:}$ is not rank deficient.}. According to the above definition, $k$-isomerism has a nice monotonicity property: if $M$ is $k_1$-isomeric, then $M$ is also $k_2$-isomeric for any $k_2\geq{}k_1$. So, to verify whether a matrix $M$ is $k$-isomeric with unknown $k$, one just needs to find the smallest $\bar{k}$ such that $M$ is $\bar{k}$-isomeric. Generally, $k$-isomerism is somewhat similar to \emph{Spark}~\cite{Donoho:spark:2003}, which is defined by the smallest linearly dependent subset of the rows of a matrix. For a matrix $M$ to be $k$-isomeric, the condition $\rank{M}\leq{}k$ is necessary but not sufficient. In fact, $k$-isomerism is also somehow related to the concept of \emph{coherence}~\cite{Candes:2009:math,liu:tsp:2016}. For a rank-$r$ matrix $M\in\mathbb{R}^{m\times{}n}$ with SVD $U_M\Sigma_MV_M^T$, its coherence is denoted as $\mu(M)$ and given by \begin{align*} \mu(M)= \max(\max_{1\leq{}i\leq{}m}\frac{m}{r}\|[U_M]_{i,:}\|_F^2, \max_{1\leq{}j\leq{}n}\frac{n}{r}\|[V_M]_{j,:}\|_F^2). \end{align*} When the coherence of a matrix $M\in\Re^{m\times{}l}$ is not too high, $M$ could be $k$-isomeric with a small $k$, e.g., $k=\rank{M}$. Whenever the coherence of $M$ is very high, one may need a large $k$ to satisfy the $k$-isomeric property. For example, consider the extreme case where $M$ is a rank-1 matrix with one row being 1 and all other entries being 0; then we need $k=m$ to ensure that $M$ is $k$-isomeric. However, the connection between isomerism and coherence is not ironclad. A counterexample is a $2^m\times2$ matrix whose rows are copies of the rows of a $2\times2$ Hadamard matrix. In this case, the matrix has an optimal coherence of 1, but the matrix is not $k$-isomeric for any $k\leq{}2^{m-1}$. While Definition~\ref{def:iso:k} involves all 1D sampling sets of cardinality $k$, we often need the isomeric property to be associated with a certain 2D sampling set $\Omega$. To this end, we define below a concept called \emph{$\Omega$-isomerism} (or \emph{$\Omega$-isomeric}). \begin{defn}[$\Omega$-isomeric]\label{def:iso:omg} Let $M\in\Re^{m\times{}l}$ and $\Omega\subseteq\{1,\cdots,$ $m\}\times\{1,\cdots,n\}$. Suppose that $\Omega^j\neq\emptyset$ (empty set), $\forall{}1\leq{}j\leq{}n$. Then the matrix $M$ is called $\Omega$-isomeric iff \begin{align*} &\rank{[M]_{\Omega^j,:}} = \rank{M}, \forall{}j = 1,\cdots,n.
\end{align*} Note here that $\Omega^j$ (i.e., the $j$th column of $\Omega$) is a 1D sampling set, and $l\neq{}n$ is allowed. \end{defn} Similar to $k$-isomerism, $\Omega$-isomerism also requires that the sampled submatrices, $\{[M]_{\Omega^j,:}\}_{j=1}^n$, are not rank deficient. The main difference is that $\Omega$-isomerism requires the rank of $M$ to be preserved by the submatrices sampled according to a \emph{specific} sampling set $\Omega$, whereas $k$-isomerism requires that \emph{every} submatrix consisting of $k$ rows of $M$ has the same rank as $M$. Hence, $\Omega$-isomerism is less strict than $k$-isomerism. More precisely, provided that $|\Omega^j|\geq{}k,\forall{}1\leq{}j\leq{}n$, $M$ being $k$-isomeric ensures that $M$ is $\Omega$-isomeric as well, but not vice versa. In the extreme case where $M$ is nonzero at only one row, interestingly, $M$ can be $\Omega$-isomeric as long as the locations of the nonzero entries are included in $\Omega$. For example, the following rank-1 matrix $M$ is not 1-isomeric but still $\Omega$-isomeric for some $\Omega$ with $|\Omega^j|=1,\forall{}1\leq{}j\leq{}n$: \begin{align*} \Omega = \{(1,1),(1,2), (1,3)\} \textrm{ and } M =\left[\begin{array}{cc} 1 &1\\ 0&0\\ 0&0 \end{array}\right], \end{align*} where it is configured that $m=n=3$ and $l=2$. With the notation $\Omega^T = \{(j_1,i_1) | (i_1,j_1)\in\Omega\}$, the isomeric property can also be defined on the column vectors of a matrix, as shown in the following definition. \begin{defn}[$\Omega/\Omega^T$-isomeric]\label{def:iso:omgt} Let $M\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Suppose $\Omega_i\neq\emptyset$ and $\Omega^j\neq\emptyset$, $\forall{}i,j$. Then the matrix $M$ is called $\Omega/\Omega^T$-isomeric iff $M$ is $\Omega$-isomeric and $M^T$ is $\Omega^T$-isomeric as well. \end{defn} To solve Problem~\ref{pb:mc} without the assumption of missing at random, as will be shown later, it is necessary to assume that $L_0$ is $\Omega/\Omega^T$-isomeric. This condition excludes the unidentifiable cases where some rows or columns of $L_0$ are wholly missing. Moreover, $\Omega/\Omega^T$-isomerism partially accounts for the cases where $L_0$ is of high coherence: in the extreme case where $L_0$ is 1 at only one entry and 0 everywhere else, $L_0$ cannot be $\Omega/\Omega^T$-isomeric unless the index of the nonzero element is included in $\Omega$. In general, there are numerous reasons for the target matrix $L_0$ to be isomeric. For example, the standard assumptions of low-rankness, incoherence and uniform sampling are sufficient (though not necessary) to ensure isomerism. \begin{theo}\label{thm:iso} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,$ $n\}$. Denote $n_1 = \max(m,n)$, $n_2=\min(m,n)$, $\mu_0=\mu(L_0)$ and $r_0=\rank{L_0}$. Suppose that $\Omega$ is a set sampled uniformly at random, namely $\mathrm{Pr}((i,j)\in\Omega)=\rho_0$ and $\mathrm{Pr}((i,j)\notin\Omega)=1-\rho_0$. If $\rho_0>c\mu_0r_0(\log{n_1})/n_2$ for some numerical constant $c$ then, with probability at least $1-n_1^{-10}$, $L_0$ is $\Omega/\Omega^T$-isomeric. \end{theo} Notice that the isomeric condition can also be established without the uniform sampling assumption, using only the concept of coherence (see Theorem~\ref{thm:iso:rcn}). Furthermore, the isomeric condition can even hold in the case of high coherence.
For example, \begin{align}\label{eq:example:1} \Omega = \{(1,1), (1,2), (1,3), (2,1), (3, 1)\} \textrm{ and } L_0 =\left[\begin{array}{ccc} 1 &0&0\\ 0&0&0\\ 0&0&0 \end{array}\right], \end{align} where $L_0$ is not incoherent and the sampling is not uniform either, yet it can be verified that $L_0$ is $\Omega/\Omega^T$-isomeric. In fact, the isomeric condition is \emph{necessary} for the identifiability of $L_0$, as shown in the following theorem. \begin{theo}\label{thm:iso:necessary} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,$ $n\}$. If either $L_0$ is not $\Omega$-isomeric or $L_0^T$ is not $\Omega^T$-isomeric then there exist infinitely many matrices (denoted as $L\in\Re^{m\times{}n}$) that fit the observed entries no worse than $L_0$ does: \begin{align*} L\neq{}L_0,\textrm{ } \rank{L}\leq\rank{L_0},\textrm{ }[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega. \end{align*} \end{theo} In other words, for any partial matrix $M'$ with sampling set $\Omega$, if there exists a completion $M$ that is not $\Omega/\Omega^T$-isomeric, then there are infinitely many completions that are different from $M$ and have a rank not greater than that of $M$. Hence, isomerism is also necessary for the so-called \emph{finitely completable property} explored in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp}. As a consequence, logically speaking, the deterministic sampling conditions established in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp} should suffice to ensure isomerism. The above theorem illustrates that the isomeric condition is indeed necessary for the identifiability of the completions of any partial matrix, no matter how the observed entries are chosen. \subsection{Relative Well-Conditionedness} While necessary, the isomeric condition is unfortunately unable to guarantee the identifiability of $L_0$. More concretely, consider the following example: \begin{align}\label{eq:example:2} \Omega = \{(1, 1), (2, 2)\} \textrm{ and } L_0 =\left[\begin{array}{cc} 1 &\frac{10}{9}\\ \frac{9}{10} &1 \end{array}\right]. \end{align} It can be verified that $L_0$ is $\Omega/\Omega^T$-isomeric. However, there still exist infinitely many rank-1 completions different from $L_0$, e.g., $L_*=[1 ,1; 1, 1]$, which is a matrix of all ones. For this particular example, $L_*$ is the optimal rank-1 completion in the sense of coherence. In general, isomerism is only a condition for the sampled submatrices to be not rank deficient, and there is no guarantee that the sampled submatrices are well-conditioned. To compensate for this weakness, we further propose an additional hypothesis called \emph{relative well-conditionedness}, which encourages the smallest singular value of the sampled submatrices to be far from 0. Again, we shall begin with a simple concept called the \emph{$\omega$-relative condition number}, with $\omega$ being a 1D sampling set. \begin{defn}[$\omega$-relative condition number]\label{def:rcn:1} Let $M\in\Re^{m\times{}l}$ and $\omega\subseteq\{1,\cdots,m\}$. Suppose that $[M]_{\omega,:}\neq0$.
Then the $\omega$-relative condition number of the matrix $M$ is denoted as $\gamma_{\omega}(M)$ and given by \begin{align*} \gamma_{\omega}(M) = 1/\|M([M]_{\omega,:})^+\|^2, \end{align*} where $(\cdot)^+$ and $\|\cdot\|$ are the pseudo-inverse and operator norm of a matrix, respectively. \end{defn} Regarding the bound of the $\omega$-relative condition number $\gamma_{\omega}(M)$, simple calculations yield \begin{align*} \sigma_{min}^2/\|M\|^2\leq\gamma_{\omega}(M)\leq1, \end{align*} where $\sigma_{min}$ is the smallest singular value of $[M]_{\omega,:}$. Hence, a large minimum singular value of the sampled submatrix $[M]_{\omega,:}$ is sufficient, though not necessary, for $\gamma_{\omega}(M)$ to be large. Roughly, the value of $\gamma_{\omega}(M)$ measures how much information about the matrix $M$ is contained in the sampled submatrix $[M]_{\omega,:}$. The more information $[M]_{\omega,:}$ contains, the larger $\gamma_{\omega}(M)$ is (this will become clearer later). For example, $\gamma_{\omega}(M)=1$ whenever $\omega=\{1,\cdots,m\}$. The concept of the $\omega$-relative condition number can be extended to the case of 2D sampling sets, as shown below. \begin{defn}[$\Omega$-relative condition number]\label{def:rcn:2} Let $M\in\Re^{m\times{}l}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Suppose that $[M]_{\Omega^j,:}\neq0$, $\forall{}1\leq{}j\leq{}n$. Then the $\Omega$-relative condition number of $M$ is denoted as $\gamma_{\Omega}(M)$ and given by \begin{align*} \gamma_{\Omega}(M) = \min_{1\leq{}j\leq{}n}\gamma_{\Omega^j}(M), \end{align*} where $\Omega^j$ is a 1D sampling set corresponding to the $j$th column of $\Omega$. Again, note here that $l\neq{}n$ is allowed. \end{defn} Using the notation of $\Omega^T$, we can define the concept of the $\Omega/\Omega^T$-relative condition number as in the following. \begin{defn}[$\Omega/\Omega^T$-relative condition number]\label{def:rcn:3} Let $M\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Suppose that $[M]_{\Omega^j,:}\neq0$ and $[M]_{:,\Omega_i}\neq0$, $\forall{}1\leq{}i\leq{}m,1\leq{}j\leq{}n$. Then the $\Omega/\Omega^T$-relative condition number of $M$ is denoted as $\gamma_{\Omega,\Omega^T}(M)$ and given by \begin{align*} \gamma_{\Omega,\Omega^T}(M) = \min(\gamma_{\Omega}(M), \gamma_{\Omega^T}(M^T)). \end{align*} \end{defn} To make sure that an arbitrary matrix $L_0$ is recoverable from a subset of the matrix entries, we need to assume that $\gamma_{\Omega,\Omega^T}(L_0)$ is reasonably large; this is the so-called \emph{relative well-conditionedness}. Under the standard settings of uniform sampling and incoherence, we have the following theorem to bound $\gamma_{\Omega,\Omega^T}(L_0)$. \begin{theo}\label{thm:rcn:bound} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,$ $n\}$. Denote $n_1 = \max(m,n)$, $n_2=\min(m,n)$, $\mu_0=\mu(L_0)$ and $r_0=\rank{L_0}$. Suppose that $\Omega$ is a set sampled uniformly at random, namely $\mathrm{Pr}((i,j)\in\Omega)=\rho_0$ and $\mathrm{Pr}((i,j)\notin\Omega)=1-\rho_0$. For any $\alpha>1$, if $\rho_0>\alpha{}c\mu_0r_0(\log{n_1})/n_2$ for some numerical constant $c$ then, with probability at least $1-n_1^{-10}$, $\gamma_{\Omega,\Omega^T}(L_0)>(1-1/\sqrt{\alpha})\rho_0$. \end{theo} The above theorem illustrates that, under the setting of uniform sampling \emph{plus} incoherence, the relative condition number approximately corresponds to the fraction of the observed entries.
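The quantities in Definitions~\ref{def:rcn:1}--\ref{def:rcn:3} are directly computable. The following minimal sketch (in Python with NumPy; the function names are ours) evaluates them, using the example in~\eqref{eq:example:1}:
\begin{verbatim}
import numpy as np

def gamma_w(M, w):
    # omega-relative condition number: 1 / ||M ([M]_{w,:})^+||^2
    return 1.0 / np.linalg.norm(M @ np.linalg.pinv(M[list(w), :]), 2) ** 2

def gamma_Omega(M, col_sets):
    # minimum over the "columns" Omega^j of the 2D sampling set
    return min(gamma_w(M, w) for w in col_sets)

def gamma_Omega_OmegaT(M, col_sets, row_sets):
    # also sample the columns of M, via the transpose Omega^T
    return min(gamma_Omega(M, col_sets), gamma_Omega(M.T, row_sets))

# Example (9): L0 is 1 at entry (1,1) and 0 elsewhere, and
# Omega^1 = {1,2,3}, Omega^2 = Omega^3 = {1} (here 0-based).
L0 = np.zeros((3, 3)); L0[0, 0] = 1.0
cols = [[0, 1, 2], [0], [0]]
rows = [[0, 1, 2], [0], [0]]
print(gamma_Omega_OmegaT(L0, cols, rows))   # -> 1.0, as claimed below
\end{verbatim}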
Actually, the relative condition number can be bounded from below without the assumption of uniform sampling. \begin{theo}\label{thm:iso:rcn} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote $\mu_0=\mu(L_0)$ and $r_0=\rank{L_0}$. Denote by $\rho$ the smallest fraction of the observed entries in each column and row of $L_0$; namely, \begin{align*} \rho = \min(\min_{1\leq{}i\leq{}m}\frac{|\Omega_{i}|}{n}, \min_{1\leq{}j\leq{}n}\frac{|\Omega^{j}|}{m}). \end{align*} For any $0\leq\alpha<1$, if $\rho>1-(1-\alpha)/(\mu_0r_0)$ then the matrix $L_0$ is $\Omega/\Omega^T$-isomeric and $\gamma_{\Omega,\Omega^T}(L_0)>\alpha$. \end{theo} It is worth noting that the relative condition number can be large even if the coherence of $L_0$ is extremely high. For the example shown in~\eqref{eq:example:1}, it can be calculated that $\gamma_{\Omega,\Omega^T}(L_0)=1$. \section{Theories and Methods}\label{sec:mainbody} In this section, we shall prove some theorems pertaining to matrix completion as well as missing data recovery. In addition, we suggest a method termed IsoDP for matrix completion, which possesses some remarkable features that are missing in the traditional bilinear programs. \subsection{Missing Data Recovery}\label{sec:clue} Before exploring the matrix completion problem, we would like to consider a missing data recovery problem studied by~\cite{Zhang06}, which is described as follows: Let $y_0\in\Re^m$ be a data vector drawn from some low-dimensional subspace, denoted as $y_0\in\mathcal{S}_0\subset\Re^m$. Suppose that $y_0$ contains some available observations in $y_b\in\Re^k$ and some missing entries in $y_u\in\Re^{m-k}$. Namely, after a permutation, \begin{align}\label{eq:y} y_0 = \left[\begin{array}{c} y_b\\ y_u\\ \end{array}\right], y_b\in\Re^k, y_u\in\Re^{m-k}. \end{align} Given the observations in $y_b$, we seek to restore the unseen entries in $y_u$. To do this, we consider the prevalent idea of representing a data vector as a linear combination of the bases in a given dictionary: \begin{align}\label{eq:ax} y_0 = Ax_0, \end{align} where $A\in\Re^{m\times{}p}$ is a dictionary constructed in advance and $x_0\in\Re^{p}$ is the representation of $y_0$. Utilizing the same permutation used in~\eqref{eq:y}, we can partition the rows of $A$ into two parts according to the locations of the observed and missing entries: \begin{align}\label{eq:A} A = \left[\begin{array}{c} A_b\\ A_u\\ \end{array}\right], A_b\in\Re^{k\times{}p}, A_u\in\Re^{(m-k)\times{}p}. \end{align} In this way, the equation in~\eqref{eq:ax} gives that \begin{align*} y_b = A_bx_0\quad\text{and}\quad{}y_u = A_ux_0. \end{align*} As can be seen, the unseen data $y_u$ is exactly restored as long as the representation $x_0$ is retrieved by accessing only the available observations in $y_b$. In general, there are infinitely many representations that satisfy $y_0 = Ax_0$, e.g., $x_0=A^+y_0$, where $(\cdot)^+$ is the pseudo-inverse of a matrix. Since $A^+y_0$ is the representation of minimal $\ell_2$ norm, we revisit the traditional $\ell_2$ program: \begin{align}\label{eq:l2} \min_{x} \frac{1}{2}\norm{x}_2^2,\quad\textrm{s.t.}\quad{}y_b = A_bx, \end{align} where $\|\cdot\|_2$ is the $\ell_2$ norm of a vector. The above problem has a closed-form solution given by $A_b^+y_b$.
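As a toy illustration of this pipeline (a minimal sketch in Python with NumPy; the sizes, index sets and variable names are ours), the missing entries of a vector in $\mathrm{span}\{A\}$ are restored from $k$ observed entries via the closed-form solution $A_b^+y_b$; the exactness of this recovery is the subject of the theorem below:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

m, p, r, k = 8, 5, 2, 3                  # ambient dim, dict size, rank, #obs
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, p))  # rank-r dict
y0 = A @ rng.standard_normal(p)          # y0 lies in S0 = span{A}

obs = [0, 3, 6]                          # arbitrary locations of k observations
mis = [i for i in range(m) if i not in obs]
x_star = np.linalg.pinv(A[obs, :]) @ y0[obs]  # min l2-norm sol. of y_b = A_b x
y_u = A[mis, :] @ x_star                      # restored missing entries
assert np.allclose(y_u, y0[mis])  # exact: A is k-isomeric w.p. 1 here
\end{verbatim}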
Under some verifiable conditions, the above $\ell_2$ program is indeed \emph{consistently successful} in the following sense: for any $y_0\in\mathcal{S}_0$ with an arbitrary partition $y_0=[y_b;y_u]$ (i.e., arbitrarily missing), the desired representation $x_0=A^+y_0$ is the unique minimizer of the problem in~\eqref{eq:l2}. That is, the unseen data $y_u$ is exactly recovered by first computing $x_*=A_b^+y_b$ and then calculating $y_u=A_ux_*$. \begin{theo}\label{thm:l2} Let $y_0=[y_b;y_u]\in\Re^m$ be an authentic sample drawn from some low-dimensional subspace $\mathcal{S}_0$. Denote by $k$ the number of available observations in $y_b$. Then the convex program~\eqref{eq:l2} is consistently successful, as long as $\mathcal{S}_0\subseteq\mathrm{span}\{A\}$ and the given dictionary $A$ is $k$-isomeric. \end{theo} The above theorem says that, in order to recover an $m$-dimensional vector sampled from a subspace determined by a given $k$-isomeric dictionary $A$, one only needs to observe $k$ entries of the vector. \subsection{Convex Matrix Completion} Low rank matrix completion concerns the problem of seeking a matrix that not only attains the lowest rank but also satisfies the constraints given by the observed entries: \begin{eqnarray*} \min_{L} \rank{L},\quad\textrm{s.t.}\quad{}[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega. \end{eqnarray*} Unfortunately, this idea is of little practical use, because the problem above is essentially NP-hard and cannot be solved in polynomial time~\cite{Chistov:1984}. To achieve practical matrix completion, Cand{\`e}s and Recht~\cite{Candes:2009:math,Recht2008} suggested an alternative that minimizes instead the nuclear norm; namely, \begin{eqnarray}\label{eq:numin} \min_{L} \|L\|_*,\quad\textrm{s.t.}\quad{}[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega, \end{eqnarray} where $\|\cdot\|_*$ denotes the nuclear norm, i.e., the sum of the singular values of a matrix. In the context of uniform sampling, it has been proved that the above convex program succeeds in recovering the target $L_0$. Although its theory is built upon the assumption of missing at random, as widely observed in the literature, the convex program~\eqref{eq:numin} actually works even when the locations of the missing entries are distributed in a correlated and nonuniform fashion. This phenomenon can be explained by the following theorem, which states that the solution to the problem in~\eqref{eq:numin} is \emph{unique} and \emph{exact}, provided that the isomeric condition is obeyed and the relative condition number of $L_0$ is large enough. \begin{theo}\label{thm:convex} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. If $L_0$ is $\Omega/\Omega^T$-isomeric and $\gamma_{\Omega,\Omega^T}(L_0)>0.75$ then $L_0$ is the unique minimizer of the problem in~\eqref{eq:numin}. \end{theo} Roughly speaking, the assumption $\gamma_{\Omega,\Omega^T}(L_0)>0.75$ requires that more than three quarters of the information in $L_0$ be observed. Such an assumption is seemingly restrictive but technically difficult to weaken in general. \subsection{Nonconvex Matrix Completion}\label{sec:mainres} The problem of missing data recovery is closely related to matrix completion, which in effect restores the missing entries of multiple data vectors simultaneously. Hence, we transfer the spirit of the $\ell_2$ program~\eqref{eq:l2} to the case of matrix completion.
Following~\eqref{eq:l2}, one may consider Frobenius norm minimization for matrix completion: \begin{align}\label{eq:fnorm} \min_{X} \frac{1}{2}\norm{X}_F^2,\textrm{ s.t. }[AX]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega, \end{align} where $A\in\Re^{m\times{}p}$ is a dictionary matrix assumed to be given. Similar to~\eqref{eq:l2}, the convex program~\eqref{eq:fnorm} can also exactly recover the desired representation matrix $A^+L_0$, as shown in the theorem below. \begin{theo}\label{thm:fnorm} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Provided that $L_0\in\mathrm{span}\{A\}$ and the given dictionary $A$ is $\Omega$-isomeric, the desired representation $X_0=A^+L_0$ is the unique minimizer of the problem in~\eqref{eq:fnorm}. \end{theo} Theorem~\ref{thm:fnorm} tells us that, in general, even when the locations of the missing entries are placed arbitrarily, the target $L_0$ is restored as long as we have a proper dictionary $A$. This motivates us to consider the commonly used bilinear program that seeks both $A$ and $X$ simultaneously: \begin{align}\label{eq:isodp:f} \min_{A,X}\frac{1}{2} (\norm{A}_F^2+\norm{X}_F^2),\textrm{ s.t. }[AX]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega, \end{align} where $A\in\Re^{m\times{}p}$ and $X\in\Re^{p\times{}n}$. The problem above is bilinear and therefore nonconvex, so it would be hard to obtain a performance guarantee as strong as those for the convex programs, e.g.,~\cite{Candes:2009:math,liu:tsp:2016}. What is more, the setup of deterministic sampling requires a deterministic recovery guarantee, the proof of which is much more difficult than that of a probabilistic guarantee. Interestingly, under the very mild condition of isomerism, the problem in~\eqref{eq:isodp:f} is proven to include, among its critical points, the exact solutions that identify the target matrix $L_0$. Furthermore, when the relative condition number of $L_0$ is sufficiently large, the local optimality of the exact solutions is guaranteed. \begin{theo}\label{thm:isodp:f} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote the rank and the SVD of $L_0$ as $r_0$ and $U_0\Sigma_0V_0^T$, respectively. Define \begin{align*} &A_0 = U_0\Sigma_0^{\frac{1}{2}}Q^T, X_0= Q\Sigma_0^{\frac{1}{2}}V_0^T, \forall{}Q\in\Re^{p\times{}r_0}, Q^TQ = \mathtt{I}. \end{align*} Then we have the following: \begin{itemize} \item[1.]If $L_0$ is $\Omega/\Omega^T$-isomeric then the exact solution, denoted as $(A_0, X_0)$, is a critical point of the problem in~\eqref{eq:isodp:f}. \item[2.]If $L_0$ is $\Omega/\Omega^T$-isomeric, $\gamma_{\Omega,\Omega^T}(L_0)>0.5$ and $p=r_0$ then $(A_0, X_0)$ is a local minimum of the problem in~\eqref{eq:isodp:f}, and the local optimality is strict if we ignore the differences among the exact solutions that equally recover $L_0$. \end{itemize} \end{theo} The condition $\gamma_{\Omega,\Omega^T}(L_0)>0.5$ roughly demands that more than half of the information in $L_0$ be observed. Unless some extra assumptions are imposed, this condition is not reducible, because counterexamples exist when $\gamma_{\Omega,\Omega^T}(L_0)<0.5$. Consider a concrete case with \begin{align}\label{eq:example:3} \Omega = \{(1, 1), (2, 2)\} \textrm{ and } L_0 =\left[\begin{array}{cc} 1 &\sqrt{\alpha^2-1}\\ \frac{1}{\sqrt{\alpha^2-1}} &1 \end{array}\right], \end{align} where $\alpha>\sqrt{2}$.
Then it can be verified that $L_0$ is $\Omega/\Omega^T$-isomeric. Via some calculations, we have (assuming $p=r_0$) \begin{align*} &\gamma_{\Omega,\Omega^T}(L_0) = \min(1-\frac{1}{\alpha^2},\frac{1}{\alpha^2})=\frac{1}{\alpha^2} < 0.5,\\ &A_0 = \left[\begin{array}{c} (\alpha^2-1)^{\frac{1}{4}}\\ \frac{1}{(\alpha^2-1)^{\frac{1}{4}}}\\ \end{array}\right]\textrm{ and } X_0 = \left[\frac{1}{(\alpha^2-1)^{\frac{1}{4}}}, (\alpha^2-1)^{\frac{1}{4}}\right]. \end{align*} Now, construct \begin{align*} &A_{\epsilon} = \left[\begin{array}{c} \frac{(\alpha^2-1)^{\frac{1}{4}}}{1+\epsilon}\\ 1/(\alpha^2-1)^{\frac{1}{4}}\\ \end{array}\right]\textrm{ and } X_{\epsilon} = \left[\frac{1+\epsilon}{(\alpha^2-1)^{\frac{1}{4}}}, (\alpha^2-1)^{\frac{1}{4}}\right], \end{align*} where $\epsilon>0$. It is easy to see that $(A_{\epsilon},X_{\epsilon})$ is a feasible solution to~\eqref{eq:isodp:f}. However, as long as $0<\epsilon<\sqrt{\alpha^2-1}-1$, it can be verified that \begin{align*} \|A_{\epsilon}\|_F^2 + \|X_{\epsilon}\|_F^2 < \|A_0\|_F^2 + \|X_0\|_F^2, \end{align*} which implies that $(A_0,X_0)$ is not a local minimum of~\eqref{eq:isodp:f}. In fact, for the particular example shown in~\eqref{eq:example:3}, it can be proven that a global minimum of~\eqref{eq:isodp:f} is given by $(A_*=[1 ;1], X_*=[1,1])$, which cannot correctly reconstruct $L_0$. \subsection{Isomeric Dictionary Pursuit} Theorem~\ref{thm:isodp:f} illustrates that program~\eqref{eq:isodp:f} relies on the assumption $p=\rank{L_0}$. This is consistent with the widely observed phenomenon that program~\eqref{eq:isodp:f} may not work well when the parameter $p$ is far from the true rank of $L_0$. To overcome this drawback, we again recall Theorem~\ref{thm:fnorm}. Notice that the $\Omega$-isomeric condition imposed on the dictionary matrix $A$ requires that \begin{align*} \rank{A}\leq|\Omega^j|,\forall{}j=1,\cdots,n. \end{align*} This, together with the condition $L_0\in\mathrm{span}\{A\}$, motivates us to combine the formulation~\eqref{eq:fnorm} with the popular idea of nuclear norm minimization, resulting in a bilinear program termed IsoDP, which estimates both $A$ and $X$ by minimizing a mixture of the nuclear and Frobenius norms: \begin{align}\label{eq:isodp} \min_{A,X}\norm{A}_*+\frac{1}{2}\norm{X}_F^2,\textrm{ s.t. }[AX]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega, \end{align} where $A\in\Re^{m\times{}p}$ and $X\in\Re^{p\times{}n}$. The above formula can also be derived from the framework of Schatten quasi-norm minimization~\cite{rahul:jlmr:2010,Shang:2016:SAT,xu:2017:aai}. It has been proven in~\cite{Shang:2016:SAT,xu:2017:aai} that, for any rank-$r$ matrix $L\in\Re^{m\times{}n}$ with singular values $\sigma_1,\cdots,\sigma_r$, the following holds: \begin{align}\label{eq:snorm} \frac{1}{q}\|L\|_{q}^q = \min_{A,X}\frac{1}{q_1} \|A\|_{q_1}^{q_1} + \frac{1}{q_2}\|X\|_{q_2}^{q_2}, \textrm{ s.t. } AX = L, \end{align} as long as $p\geq{}r$ and $1/q = 1/q_1+1/q_2$ ($q,q_1,q_2>0$), where $\|L\|_q = (\sum_{i=1}^r\sigma_i^q)^{1/q}$ is the Schatten-$q$ norm. In that sense, the IsoDP program~\eqref{eq:isodp} is related to the following Schatten-$q$ quasi-norm minimization problem with $q = 2/3$: \begin{align}\label{eq:stmin} \min_{L} \frac{3}{2}\|L\|_{2/3}^{2/3} ,\quad\textrm{s.t.}\quad{}[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega.
\end{align} Nevertheless, programs~\eqref{eq:stmin} and~\eqref{eq:isodp} are not equivalent to each other; this is obvious if $p<m$ (assume $m\leq{}n$). In fact, even when $p\geq{}m$, the conclusion~\eqref{eq:snorm} only implies that the global minima of~\eqref{eq:stmin} and~\eqref{eq:isodp} are equivalent; their local minima and critical points could be different. More precisely, any local minimum of~\eqref{eq:stmin} certainly corresponds to a local minimum of~\eqref{eq:isodp}, but not vice versa\footnote{Suppose that $L_1$ is a local minimum of the problem in~\eqref{eq:stmin}. Let $(A_1,X_1) = \arg\min_{A,X} \norm{A}_*+0.5\norm{X}_F^2$, s.t. $AX=L_1$. Then $(A_1,X_1)$ has to be a local minimum of~\eqref{eq:isodp}. This can be proven by contradiction. Assume that $(A_1,X_1)$ is not a local minimum of~\eqref{eq:isodp}. Then there exists some feasible solution, denoted as $(A_2, X_2)$, that is arbitrarily close to $(A_1, X_1)$ and satisfies $\norm{A_2}_*+0.5\norm{X_2}_F^2 < \norm{A_1}_*+0.5\norm{X_1}_F^2$. Taking $L_2=A_2X_2$, we have that $L_2$ is arbitrarily close to $L_1$ and $\frac{3}{2}\|L_2\|_{2/3}^{2/3}\leq\norm{A_2}_*+0.5\norm{X_2}_F^2 < \norm{A_1}_*+0.5\norm{X_1}_F^2=\frac{3}{2}\|L_1\|_{2/3}^{2/3}$, which contradicts the premise that $L_1$ is a local minimum of~\eqref{eq:stmin}. So, a local minimum of~\eqref{eq:stmin} also gives a local minimum of~\eqref{eq:isodp}. But the converse of this statement may not be true, and~\eqref{eq:isodp} might have more local minima than~\eqref{eq:stmin}.}. For the same reason, the bilinear program~\eqref{eq:isodp:f} is not equivalent to the convex program~\eqref{eq:numin}. Regarding the recovery performance of the IsoDP program~\eqref{eq:isodp}, we establish the following theorem, which reproduces Theorem~\ref{thm:isodp:f} without the assumption $p=r_0$. \begin{theo}\label{thm:isodp} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote the rank and the SVD of $L_0$ as $r_0$ and $U_0\Sigma_0V_0^T$, respectively. Define \begin{align*} &A_0 = U_0\Sigma_0^{\frac{2}{3}}Q^T, X_0= Q\Sigma_0^{\frac{1}{3}}V_0^T,\forall{}Q\in\Re^{p\times{}r_0}, Q^TQ = \mathtt{I}. \end{align*} Then we have the following: \begin{itemize} \item[1.]If $L_0$ is $\Omega/\Omega^T$-isomeric then the exact solution $(A_0, X_0)$ is a critical point of the problem in~\eqref{eq:isodp}. \item[2.]If $L_0$ is $\Omega/\Omega^T$-isomeric and $\gamma_{\Omega,\Omega^T}(L_0)>0.5$ then $(A_0, X_0)$ is a local minimum of the problem in~\eqref{eq:isodp}, and the local optimality is strict if we ignore the differences among the exact solutions that equally recover $L_0$. \end{itemize} \end{theo} Thanks to the advantages of the nuclear norm, the above theorem no longer requires the assumption $p=\rank{L_0}$. Empirically, unlike~\eqref{eq:isodp:f}, which exhibits superior performance only if $p$ is close to $\rank{L_0}$ and the initial solution is chosen carefully, IsoDP can work well by simply choosing $p=m$ and using $A=\mathtt{I}$ as the initial solution.
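The exact solutions in Theorem~\ref{thm:isodp} are straightforward to check numerically. The following minimal sketch (in Python with NumPy; the toy sizes and variable names are ours) verifies that $(A_0,X_0)$ is feasible and that its objective value equals $\frac{3}{2}\|L_0\|_{2/3}^{2/3}$, in accordance with~\eqref{eq:snorm}:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

m, n, r0, p = 6, 5, 2, 4
L0 = rng.standard_normal((m, r0)) @ rng.standard_normal((r0, n))
U0, s0, V0t = np.linalg.svd(L0, full_matrices=False)
U0, s0, V0t = U0[:, :r0], s0[:r0], V0t[:r0, :]   # skinny SVD of L0

Q = np.linalg.qr(rng.standard_normal((p, r0)))[0]  # any Q with Q^T Q = I
A0 = U0 @ np.diag(s0 ** (2.0 / 3.0)) @ Q.T         # exact solution (A0, X0)
X0 = Q @ np.diag(s0 ** (1.0 / 3.0)) @ V0t

assert np.allclose(A0 @ X0, L0)                    # feasibility: A0 X0 = L0
obj = np.linalg.norm(A0, 'nuc') + 0.5 * np.linalg.norm(X0, 'fro') ** 2
assert np.isclose(obj, 1.5 * np.sum(s0 ** (2.0 / 3.0)))
\end{verbatim}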
\subsection{Optimization Algorithm}\label{sec:opt} Considering the fact that the observations in reality are often contaminated by noise, we shall investigate instead the following bilinear program, which can also approximately solve the problem in~\eqref{eq:isodp}: \begin{align}\label{eq:isodp:noisy} &\min_{A,X} \lambda(\norm{A}_*+\frac{1}{2}\norm{X}_F^2)+\frac{1}{2}\sum_{(i,j)\in\Omega}([AX]_{ij}-[L_0]_{ij})^2, \end{align} where $A\in\Re^{m\times{}m}$ (i.e., $p=m$), $X\in\Re^{m\times{}n}$ and $\lambda>0$ is taken as a parameter. The optimization problem in~\eqref{eq:isodp:noisy} can be solved by any of the many first-order methods established in the literature. For the sake of simplicity, we choose to use the proximal methods of~\cite{proximal:2009:mp,Bolte2014}. Let $(A_t,X_t)$ be the solution estimated at the $t$th iteration. Define a function $g_t(\cdot)$ as \begin{align*} g_t(A) = \frac{1}{2}\sum_{(i,j)\in\Omega}([AX_{t+1}]_{ij}-[L_0]_{ij})^2. \end{align*} Then the solution to~\eqref{eq:isodp:noisy} is updated by iterating the following two procedures: \begin{align}\label{eq:proximal} &X_{t+1}= \arg\min_{X} \frac{\lambda}{2}\|X\|_F^2+\frac{1}{2}\sum_{(i,j)\in\Omega}([A_tX]_{ij}-[L_0]_{ij})^2,\\\nonumber &A_{t+1}= \arg\min_{A} \frac{\lambda}{\mu_t}\|A\|_*+\frac{1}{2}\|A - (A_t-\frac{\partial{}g_t(A_t)}{\mu_t})\|_F^2, \end{align} where $\mu_t>0$ is a penalty parameter and $\partial{}g_t(A_t)$ is the gradient of the function $g_t(A)$ at $A=A_t$. According to~\cite{proximal:2009:mp}, the penalty parameter $\mu_t$ can be set as $\mu_t = \|X_{t+1}\|^2$. The two optimization problems in~\eqref{eq:proximal} both have closed-form solutions. To be more precise, the $X$-subproblem is a least squares regression problem: \begin{align}\label{eq:x-sub} [X_{t+1}]_{:,j} = (A_j^TA_j+\lambda\mathtt{I})^{-1}A_j^Ty_j, \forall{1\leq{}j\leq{}n}, \end{align} where $A_j = [A_t]_{\Omega^j,:}$ and $y_j=[L_0]_{\Omega^j,j}$. The $A$-subproblem is solved by Singular Value Thresholding (SVT)~\cite{svt:cai:2008}: \begin{align}\label{eq:a-sub} A_{t+1}=U\mathcal{H}_{\lambda/\mu_t}(\Sigma)V^T, \end{align} where $U\Sigma{}V^T$ is the SVD of $A_t-\partial{}g_t(A_t)/\mu_t$ and $\mathcal{H}_{\lambda/\mu_t}(\cdot)$ denotes the shrinkage operator with parameter $\lambda/\mu_t$. The whole optimization procedure is summarized in Algorithm~\ref{alg1}. Without loss of generality, assume that $m\leq{}n$. Then the computational complexity of each iteration in Algorithm~\ref{alg1} is $O(m^2n)+O(m^3)$. \begin{algorithm}[htb] \caption{Solving problem~\eqref{eq:isodp:noisy} by alternating proximal minimization} \label{alg1} \begin{algorithmic}[1] \STATE \textbf{Input}: $\{[L_0]_{ij} |(i,j)\in\Omega\}$ and the parameter $\lambda$. \STATE \textbf{Output}: the dictionary $A$ and the representation $X$. \STATE \textbf{Initialization}: $A=\mathtt{I}$. \REPEAT \STATE Update the representation matrix $X$ by~\eqref{eq:x-sub}. \STATE Update the dictionary matrix $A$ by~\eqref{eq:a-sub}. \UNTIL{convergence} \end{algorithmic} \end{algorithm} \section{Mathematical Proofs}\label{sec:proof} This section presents the detailed proofs of the theorems proposed in this work. \subsection{Notations} Besides the notations presented in Section~\ref{sec:notation}, some other notations are used throughout the proofs.
Letters $U$, $V$, $\Omega$ and their variants (complements, subscripts, etc.) are reserved for left singular vectors, right singular vectors and the support set, respectively. For convenience, we shall abuse the notation $U$ (resp. $V$) to denote the linear space spanned by the columns of $U$ (resp. $V$), i.e., the column space (resp. row space). The orthogonal projection onto the column space $U$ is denoted by $\mathcal{P}_U$ and given by $\mathcal{P}_U(M)=UU^TM$, and similarly for the row space, $\mathcal{P}_V(M)=MVV^T$. Also, we denote by $\mathcal{P}_T$ the projection onto the sum of the column space $U$ and the row space $V$, i.e., $\mathcal{P}_T(\cdot) = UU^T(\cdot)+(\cdot)VV^T-UU^T(\cdot)VV^T$. The same notation is also used to represent a subspace of matrices (i.e., the image of an operator); e.g., we say that $M\in\mathcal{P}_{U}$ for any matrix $M$ which satisfies $\mathcal{P}_{U}(M)=M$. The symbol $\mathcal{P}_{\Omega}$ denotes the orthogonal projection onto $\Omega$: \begin{align*} [\mathcal{P}_\Omega(M)]_{ij}=\left\{\begin{array}{cc} [M]_{ij},&\text{if }(i,j)\in\Omega,\\ 0, &\text{otherwise.}\\ \end{array}\right. \end{align*} Similarly, the symbol $\mathcal{P}_{\Omega}^{\bot}$ denotes the orthogonal projection onto the complement space of $\Omega$; that is, $\mathcal{P}_{\Omega}+\mathcal{P}_{\Omega}^{\bot}=\mathcal{I}$, where $\mathcal{I}$ is the identity operator. \vspace{-0.1in}\subsection{Basic Lemmas} While its definition is associated with a certain matrix, the isomeric condition actually characterizes some properties of a space, as shown in the lemma below. \begin{lemm}\label{lem:basic:L02U} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote the SVD of $L_0$ as $U_0\Sigma_0V_0^T$. Then we have: \begin{itemize} \item[1.] $L_0$ is $\Omega$-isomeric iff $U_0$ is $\Omega$-isomeric. \item[2.] $L_0^T$ is $\Omega^T$-isomeric iff $V_0$ is $\Omega^T$-isomeric. \end{itemize} \end{lemm} \begin{proof} A direct manipulation gives \begin{align*} [L_0]_{\Omega^j,:} = ([U_0]_{\Omega^j,:})\Sigma_0V_0^T, \forall{}j=1,\cdots, n. \end{align*} Since $\Sigma_0V_0^T$ has full row rank, we have \begin{align*} \rank{[L_0]_{\Omega^j,:}} = \rank{[U_0]_{\Omega^j,:}},\forall{}j=1,\cdots,n. \end{align*} As a consequence, $L_0$ being $\Omega$-isomeric is equivalent to $U_0$ being $\Omega$-isomeric. Similarly, the second claim is proven. \end{proof} The isomeric property is indeed inherited by subspaces, as shown in the next lemma. \begin{lemm}\label{lem:basic:subsucc} Let $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$ and $U_0\in\Re^{m\times{}r}$ be the basis matrix of a subspace embedded in $\Re^m$. Suppose that $U$ is a subspace of $U_0$, i.e., $U = U_0U_0^TU$. If $U_0$ is $\Omega$-isomeric then $U$ is $\Omega$-isomeric as well. \end{lemm} \begin{proof}By $U = U_0U_0^TU$ and the fact that $U_0$ is $\Omega$-isomeric, \begin{align*} &\rank{[U]_{\Omega^j,:}} = \rank{([U_0]_{\Omega^j,:})U_0^TU}=\rank{U_0^TU}\\ &=\rank{U_0U_0^TU}=\rank{U}, \forall{}1\leq{}j\leq{}n. \end{align*} \end{proof} The following lemma reveals the fact that the isomeric property is related to the invertibility of matrices. \begin{lemm}\label{lem:basic:positive} Let $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$ and $U_0\in\Re^{m\times{}r}$ be the basis matrix of a subspace of $\Re^m$. Denote by $u_i^T$ the $i$th row of $U_0$, i.e., $U_0 = [u_1^T;\cdots;u_m^T]$.
Define $\delta_{ij}$ as \begin{align}\label{eq:delta} \delta_{ij}=\left\{\begin{array}{cc} 1,&\text{if }(i,j)\in\Omega,\\ 0, &\text{otherwise.}\\ \end{array}\right. \end{align} Then the matrices $\sum_{i=1}^{m}\delta_{ij}u_iu_i^T$, $\forall{}1\leq{}j\leq{}n$, are all invertible iff $U_0$ is $\Omega$-isomeric. \end{lemm} \begin{proof} Note that \begin{align*} &([U_0]_{\Omega^j,:})^T([U_0]_{\Omega^j,:})=\sum_{i=1}^{m}(\delta_{ij})^2u_iu_i^T=\sum_{i=1}^{m}\delta_{ij}u_iu_i^T. \end{align*} Now, it is easy to see that the matrix $\sum_{i=1}^{m}\delta_{ij}u_iu_i^T$ being invertible is equivalent to the matrix $([U_0]_{\Omega^j,:})^T([U_0]_{\Omega^j,:})$ being positive definite, which is further equivalent to $\rank{[U_0]_{\Omega^j,:}}=\rank{U_0}$, $\forall{}j=1,\cdots,n$. \end{proof} The following lemma gives some insight into the relative condition number. \begin{lemm}\label{lem:basic:rcn} Let $M\in\Re^{m\times{}l}$ and $\omega\subseteq\{1,\cdots,m\}$. Define $\{\delta_i\}_{i=1}^m$ with $\delta_i = 1$ if $i\in\omega$ and 0 otherwise. Define a diagonal matrix $D\in\mathbb{R}^{m\times{}m}$ as $D=\diag{\delta_1,\delta_2,\cdots,\delta_m}$. Denote the SVD of $M$ as $U\Sigma{}V^T$. If $\rank{[M]_{\omega,:}} = \rank{M}$ then \begin{align*} \gamma_{\omega}(M) = \sigma_{min}, \end{align*} where $\sigma_{min}$ is the smallest singular value (or eigenvalue) of the matrix $U^TDU$. \end{lemm} \begin{proof} First note that $[M]_{\omega,:}$ can be equivalently written as $DU\Sigma{}V^T$ (padding the unsampled rows with zeros). By the assumption $\rank{[M]_{\omega,:}} = \rank{M}$, $DU$ has full column rank. Thus, \begin{align*} &M([M]_{\omega,:})^+ = U\Sigma{}V^T(DU\Sigma{}V^T)^+ = U\Sigma{}V^T(\Sigma{}V^T)^+(DU)^+\\ &=U(DU)^+=U(U^TDU)^{-1}U^TD, \end{align*} which gives that \begin{align*} &M([M]_{\omega,:})^+(M([M]_{\omega,:})^+)^T = U(U^TDU)^{-1}U^T. \end{align*} As a result, we have $\|M([M]_{\omega,:})^+\|^2 = 1/\sigma_{min}$, and thereby \begin{align*} \gamma_{\omega}(M) = 1/\|M([M]_{\omega,:})^+\|^2 = \sigma_{min}. \end{align*} \end{proof} It has been proven in~\cite{siam_2010_minirank} that $\|L\|_*=\min_{A,X}\frac{1}{2}(\|A\|_F^2+\|X\|_F^2), \textrm{ s.t. }AX=L$. We have an analogous result, which has also been proven by~\cite{rahul:jlmr:2010,Shang:2016:SAT,xu:2017:aai}. \begin{lemm}\label{lem:basic:ax} Let $L\in\Re^{m\times{}n}$ be a rank-$r$ matrix with $r\leq{}p$. Denote the SVD of $L$ as $U\Sigma{}V^T$. Then we have the following: \begin{align*} \frac{3}{2}\trace{\Sigma^{\frac{2}{3}}} = \min_{A\in\Re^{m\times{}p},X\in\Re^{p\times{}n}}\|A\|_*+\frac{1}{2}\|X\|_F^2, \textrm{ s.t. }AX = L, \end{align*} where $\trace{\cdot}$ is the trace of a square matrix. \end{lemm} \begin{proof} Denote the singular values of $L$ as $\sigma_1\geq\cdots\geq\sigma_r>0$. We first consider the case $\rank{A}=\rank{L}=r$. Since $AX=L$, the SVD of $A$ must have the form $UQ\Sigma_AV_A^T$, where $Q$ is an orthogonal matrix of size $r\times{}r$ and $\Sigma_A = \diag{\alpha_1,\cdots,\alpha_r}$ with $\alpha_1\geq\cdots\geq\alpha_r>0$. Since $A^+L = \arg\min_{X} \|X\|_F^2, \textrm{ s.t. } AX=L$, we have \begin{align*} &\|A\|_* + \frac{1}{2}\|X\|_F^2 \geq \|A\|_* + \frac{1}{2}\|A^+L\|_F^2\\ &=\trace{\Sigma_A}+\frac{1}{2}\trace{\Sigma_A^{-1}Q^T\Sigma^2Q\Sigma_A^{-1}}. \end{align*} It can be proven that the eigenvalues of $\Sigma_A^{-1}Q^T\Sigma^2Q\Sigma_A^{-1}$ are given by $\{\sigma_i^2/\alpha_{\pi_i}^2\}_{i=1}^r$, where $\{\alpha_{\pi_i}\}_{i=1}^r$ is a permutation of $\{\alpha_i\}_{i=1}^r$.
By the rearrangement inequality, \begin{align*} \trace{\Sigma_A^{-1}Q^T\Sigma^2Q\Sigma_A^{-1}}=\sum_{i=1}^r\frac{\sigma_i^2}{\alpha_{\pi_i}^2}\geq\sum_{i=1}^r \frac{\sigma_i^2}{\alpha_i^2}. \end{align*} As a consequence, we have \begin{align*} &\|A\|_*\hspace{-0.02in} + \hspace{-0.02in}\frac{1}{2}\|X\|_F^2 \hspace{-0.02in}\geq\hspace{-0.02in} \sum_{i=1}^r \left(\alpha_i+\frac{\sigma_i^2}{2\alpha_i^2}\right)\hspace{-0.02in}=\hspace{-0.02in} \sum_{i=1}^r \left(\frac{1}{2}\alpha_i\hspace{-0.02in}+\hspace{-0.02in}\frac{1}{2}\alpha_i\hspace{-0.02in}+\hspace{-0.02in}\frac{\sigma_i^2}{2\alpha_i^2}\right)\\ &\geq{}\sum_{i=1}^r \frac{3}{2}\sigma_i^{\frac{2}{3}}=\frac{3}{2}\trace{\Sigma^{\frac{2}{3}}}. \end{align*} Regarding the general case of $\rank{A}\geq{}\rank{L}$, we can construct $A_1 = UU^TA$. By $AX=L$, $A_1X=L$. Since $\rank{A_1} = \rank{L}$, we have \begin{align*} &\|A\|_* + \frac{1}{2}\|X\|_F^2 \geq{}\|A_1\|_* + \frac{1}{2}\|X\|_F^2\\ &\geq{}\|A_1\|_* + \frac{1}{2}\|A_1^+L\|_F^2\geq\frac{3}{2}\trace{\Sigma^{\frac{2}{3}}}. \end{align*} Finally, the optimal value of $\frac{3}{2}\trace{\Sigma^{\frac{2}{3}}}$ is attained by $A_*=U\Sigma^{\frac{2}{3}}H^T$ and $X_*=H\Sigma^{\frac{1}{3}}V^T$, $\forall{}H^TH=\mathtt{I}$. \end{proof} The next lemma will be used multiple times in the proofs presented in this paper. \begin{lemm}\label{lem:basic:inverse} Let $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$ and $\mathcal{P}$ be an orthogonal projection onto some subspace of $\Re^{m\times{}n}$. Then the following are equivalent: \begin{itemize} \item[1.] $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$ is invertible. \item[2.] $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|<1$. \item[3.] $\mathcal{P}\cap{}\mathcal{P}_{\Omega}^\bot=\{0\}$. \end{itemize} \end{lemm} \begin{proof} \textbf{1$\rightarrow$2:} Let $\mathrm{vec}(\cdot)$ denote the vectorization of a matrix formed by stacking the columns of the matrix into a single column vector. Suppose that the basis matrix associated with $\mathcal{P}$ is given by $P\in\Re^{mn\times{}r}, P^TP=\mathtt{I}$; namely, \begin{align*} \mathrm{vec}(\mathcal{P}(M)) = PP^T\mathrm{vec}(M),\forall{}M\in\Re^{m\times{}n}. \end{align*} Denote $\delta_{ij}$ as in~\eqref{eq:delta} and define a diagonal matrix $D$ as \begin{align*} D = \mathrm{diag}(\delta_{11},\delta_{21},\cdots,\delta_{ij},\cdots,\delta_{mn})\in\Re^{mn\times{}mn}. \end{align*} Notice that \begin{align*} &\mathcal{P}(M) = \mathcal{P}(\sum_{i,j}\langle{}M,e_ie_j^T\rangle{}e_ie_j^T)=\sum_{i,j}\langle{}M,e_ie_j^T\rangle{}\mathcal{P}(e_ie_j^T), \end{align*} where $e_i$ is the $i$th standard basis vector and $\langle\cdot,\cdot\rangle$ denotes the inner product between two matrices. With this notation, it is easy to see that \begin{align*} &[\mathrm{vec}(\mathcal{P}(e_1e_1^T)),\mathrm{vec}(\mathcal{P}(e_2e_1^T)),\cdots,\mathrm{vec}(\mathcal{P}(e_me_n^T))]=PP^T. \end{align*} Similarly, we have \begin{align*} \mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M) = \sum_{i,j}\langle\mathcal{P}(M),e_ie_j^T\rangle(\delta_{ij}\mathcal{P}(e_ie_j^T)), \end{align*} and therefore \begin{align*} &\mathrm{vec}(\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M))= PP^TD\mathrm{vec}(\mathcal{P}(M))\\ &=PP^TDPP^T\mathrm{vec}(M). \end{align*} For $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$ to be invertible, the matrix $P^TDP$ must be positive definite.
Indeed, if $P^TDP$ were singular, there would exist a nonzero $z\in\Re^{r}$ satisfying $P^TDPz=0$, and thus a nonzero $M\in\mathcal{P}$ such that $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M)=0$; this would contradict the assumption that $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$ is invertible. Denote the minimal singular value of $P^TDP$ by $\sigma_{min}$, and note that $0<\sigma_{min}\leq1$. Since $P^TDP$ is positive definite, we have \begin{align*} &\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F = \|\mathrm{vec}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M))\|_2\\ &= \|(\mathtt{I}-P^TDP)P^T\mathrm{vec}(M)\|_2\leq(1-\sigma_{min})\|P^T\mathrm{vec}(M)\|_2 \\ &= (1-\sigma_{min})\|\mathcal{P}(M)\|_F, \end{align*} which gives that $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|\leq1-\sigma_{min}<1$. \textbf{2$\rightarrow$3:} Suppose that $M\in{}\mathcal{P}\cap{}\mathcal{P}_{\Omega}^\bot$, i.e., $M =\mathcal{P}(M)= \mathcal{P}_{\Omega}^\bot(M)$. Then we have $M=\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)$ and thus \begin{align*} &\|M\|_F = \|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F\leq\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|\|M\|_F\leq\|M\|_F. \end{align*} Since $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|<1$, the above can hold only when $M=0$. \textbf{3$\rightarrow$1:} Consider a nonzero matrix $M\in\mathcal{P}$. Then we have \begin{align*} &\|M\|_F^2 = \|\mathcal{P}(M)\|_F^2 = \|\mathcal{P}_{\Omega}\mathcal{P}(M)+\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2\\ &=\|\mathcal{P}_{\Omega}\mathcal{P}(M)\|_F^2+\|\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2, \end{align*} which gives that \begin{align*} &\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2\leq\|\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2=\|M\|_F^2 - \|\mathcal{P}_{\Omega}\mathcal{P}(M)\|_F^2. \end{align*} By $\mathcal{P}\cap{}\mathcal{P}_{\Omega}^\bot=\{0\}$, $\mathcal{P}_{\Omega}\mathcal{P}(M)\neq0$. Thus, \begin{align*} &\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|^2 \leq 1 - \inf_{\|M\|_F=1}\|\mathcal{P}_{\Omega}\mathcal{P}(M)\|_F^2<1. \end{align*} Provided that $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|<1$, $\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i$ is well defined. Notice that, for any $M\in\mathcal{P}$, the following holds: \begin{align*} &\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)(M)\\ &=\mathcal{P}(\mathcal{I}-\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)(M)\\ &=\mathcal{P}(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i-\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}-\sum_{i=2}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)(M)\\ &=\mathcal{P}(M) = M. \end{align*} Similarly, it can also be proven that $(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)$ $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M)=M$. Hence, $\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i$ is indeed the inverse operator of $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$. \end{proof} The lemma below is adapted from the arguments in~\cite{siam:stewart:1969}. \begin{lemm}\label{lem:basic:pinv} Let $A\in\mathbb{R}^{m\times{}p}$ be a matrix with column space $U$, and let $A_1 = A +\Delta$.
If $\Delta\in{}U$ and $\|\Delta\|<1/\|A^+\|$ then \begin{align*} \rank{A_1} = \rank{A} \textrm{ and } \|A_1^+\|\leq{}\frac{\|A^+\|}{1 - \|A^+\|\|\Delta\|}. \end{align*} \end{lemm} \begin{proof}By $\Delta\in{}U$, \begin{align*} A_1 = A + UU^T\Delta= A + AA^+\Delta = A(\mathtt{I} + A^+\Delta). \end{align*} By $\|\Delta\|<1/\|A^+\|$, $\mathtt{I} + A^+\Delta$ is invertible and thus $\rank{A_1} = \rank{A}$. To prove the second claim, we denote by $V_1$ the row space of $A_1$. Then we have \begin{align*} V_1V_1^T = A_1^+A_1 = A_1^+A(\mathtt{I} + A^+\Delta), \end{align*} which gives that $A_1^+A = V_1V_1^T(\mathtt{I} + A^+\Delta)^{-1}$. Since $A_1\in{}U$, we have \begin{align*} A_1^+ = A_1^+UU^T = A_1^+AA^+ = V_1V_1^T(\mathtt{I} + A^+\Delta)^{-1}A^+, \end{align*} from which the conclusion follows. \end{proof} \subsection{Critical Lemmas} The following lemma has a critical role in the proofs. \begin{lemm}\label{lem:critical:inverse} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$ and $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$. Then we have the following: \begin{itemize} \item[1.] $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is invertible iff $U_0$ is $\Omega$-isomeric. \item[2.] $\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}$ is invertible iff $V_0$ is $\Omega^T$-isomeric. \end{itemize} \end{lemm} \begin{proof} The above two claims are proven in the same way, and therefore we only present the proof of the first one. Since the operator $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is linear and $\mathcal{P}_{U_0}$ is a linear space of finite dimension, the sufficiency can be proven by showing that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is an injection. That is, we need to prove that the following linear system has no nonzero solution: \begin{align*} \mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}(M) = 0, \textrm{ s.t. }M\in\mathcal{P}_{U_0}. \end{align*} Assume that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}(M) = 0$. Then we have \begin{align*} U_0^T\mathcal{P}_{\Omega}(U_0U_0^TM) = 0. \end{align*} Denote the $i$th row and $j$th column of $U_0$ and $U_0^TM$ as $u_i^T$ and $b_j$, respectively; that is, $U_0 = [u_1^T;u_2^T;\cdots;u_m^T]$ and $U_0^TM = [b_1,b_2,\cdots,b_n]$. Define $\delta_{ij}$ as in~\eqref{eq:delta}. Then the $j$th column of $U_0^T\mathcal{P}_{\Omega}(U_0U_0^TM)$ is given by $(\sum_{i=1}^{m}\delta_{ij}u_iu_i^T)b_j$. By Lemma~\ref{lem:basic:positive}, the matrix $\sum_{i=1}^{m}\delta_{ij}u_iu_i^T$ is invertible. Hence, $U_0^T\mathcal{P}_{\Omega}(U_0U_0^TM) = 0$ implies that \begin{align*} b_j=0,\forall{}j=1,\cdots,n, \end{align*} i.e., $U_0^TM=0$. By the assumption of $M\in\mathcal{P}_{U_0}$, $M=0$. It remains to prove the necessity. Assume $U_0$ is not $\Omega$-isomeric. By Lemma~\ref{lem:basic:positive}, there exists $j_1$ such that the matrix $\sum_{i=1}^{m}\delta_{ij_1}u_iu_i^T$ is singular and therefore has a nonzero null space. So, there exists $M_1\neq{}0$ such that $U_0^T\mathcal{P}_{\Omega}(U_0M_1)=0$. Let $M=U_0M_1$. Then we have $M\neq0$, $M\in\mathcal{P}_{U_0}$ and \begin{align*} \mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}(M) = 0. \end{align*} This contradicts the assumption that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is invertible. As a consequence, $U_0$ must be $\Omega$-isomeric.
\end{proof} The next four lemmas establish some connections between the relative condition number and the operator norm. \begin{lemm}\label{lem:critical:rnc2optnorm} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$, and let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$ and $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$. If $L_0$ is $\Omega/\Omega^T$-isomeric then \begin{align*} &\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\| = 1 - \gamma_{\Omega}(L_0),\textrm{ }\|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\| = 1 - \gamma_{\Omega^T}(L_0^T). \end{align*} \end{lemm} \begin{proof} We only need to prove the first claim. Denote $\delta_{ij}$ as in~\eqref{eq:delta} and define a set of diagonal matrices $\{D_j\}_{j=1}^n$ as $D_j = \mathrm{diag}(\delta_{1j},\delta_{2j},\cdots,\delta_{mj})\in\Re^{m\times{}m}$. Denote the $j$th column of $\mathcal{P}_{U_0}(M)$ as $b_j$. Then we have \begin{align*} &\|[\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(M)]_{:,j}\|_2 = \|U_0U_0^Tb_j - U_0(U_0^TD_jU_0)U_0^Tb_j\|_2\\ &=\|(\mathtt{I}-U_0^TD_jU_0)U_0^Tb_j\|_2\leq\|(\mathtt{I}-U_0^TD_jU_0)\|\|U_0^Tb_j\|_2. \end{align*} By Lemma~\ref{lem:basic:positive}, $U_0^TD_jU_0$ is positive definite. As a consequence, $\sigma_j\mathtt{I}\preccurlyeq{}U_0^TD_jU_0\preccurlyeq\mathtt{I}$, where $\sigma_j>0$ is the minimal eigenvalue of $U_0^TD_jU_0$. By Lemma~\ref{lem:basic:rcn} and Definition~\ref{def:rcn:2}, $\sigma_j\geq{}\gamma_{\Omega}(L_0)$, $\forall{}1\leq{}j\leq{}n$. Thus, \begin{align*} &\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(M)\|_F^2\leq\sum_{j=1}^{n}(1-\sigma_j)^2\|b_j\|_2^2\\ &\leq{}(1-\gamma_{\Omega}(L_0))^2\|\mathcal{P}_{U_0}(M)\|_F^2, \end{align*} which gives that $\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|\leq1-\gamma_{\Omega}(L_0)$. It remains to prove that the value of $1-\gamma_{\Omega}(L_0)$ is attainable. Without loss of generality, assume that $j_1 = \arg\min_j\sigma_j$, i.e., $\sigma_{j_1} = \gamma_{\Omega}(L_0)$. Construct an $r_0\times{}n$ matrix $B$ with the $j_1$th column being the eigenvector corresponding to the smallest eigenvalue of $U_0^TD_{j_1}U_0$ and everywhere else being zero. Let $M_1 = U_0B$. Then it can be verified that $\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(M_1)\|_F = (1-\gamma_{\Omega}(L_0))\|M_1\|_F$. \end{proof} \begin{lemm}\label{lem:critical:optnorm:big} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$ and $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$. If $L_0$ is $\Omega/\Omega^T$-isomeric then: \begin{align*} &\|(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot\| =\sqrt{\frac{1}{\gamma_{\Omega}(L_0)}-1},\\ &\|(\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0})^{-1}\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}^\bot\|=\sqrt{\frac{1}{\gamma_{\Omega^T}(L_0^T)} - 1}. \end{align*} \end{lemm} \begin{proof} We shall prove the first claim. Let $M\in\Re^{m\times{}n}$. Denote the $j$th column of $M$ and $(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot(M)$ as $b_j$ and $y_j$, respectively.
Denote $\delta_{ij}$ as in~\eqref{eq:delta} and define a set of diagonal matrices $\{D_j\}_{j=1}^n$ as $D_j = \mathrm{diag}(\delta_{1j},\delta_{2j},\cdots,\delta_{mj})\in\Re^{m\times{}m}$. Then we have \begin{align*} &y_j = [(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot(M)]_{:,j}\\ &= U_0(U_0^TD_jU_0)^{-1}U_0^TD_j(\mathtt{I} - U_0U_0^T)b_j. \end{align*} It can be calculated that \begin{align*} &\|y_j\|_2^2 \leq \|(U_0^TD_jU_0)^{-1}U_0^TD_j(\mathtt{I} - U_0U_0^T)\|^2\|b_j\|_2^2=\\ &\|(U_0^TD_jU_0)^{-1}U_0^TD_j(\mathtt{I} - U_0U_0^T)D_jU_0(U_0^TD_jU_0)^{-1}\|\|b_j\|_2^2\\ &=\|(U_0^TD_jU_0)^{-1} - \mathtt{I}\|\|b_j\|_2^2\leq\left(\frac{1}{\gamma_{\Omega}(L_0)}-1\right)\|b_j\|_2^2, \end{align*} which gives that \begin{align*} \|(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot\|\leq\sqrt{\frac{1}{\gamma_{\Omega}(L_0)}-1}. \end{align*} Using an argument similar to that in the proof of Lemma~\ref{lem:critical:rnc2optnorm}, it can be proven that the value of $\sqrt{1/\gamma_{\Omega}(L_0)-1}$ is attainable. To be more precise, assume without loss of generality that $j_1 = \arg\min_{j}\sigma_{j}$, where $\sigma_j$ is the smallest singular value of $U_0^TD_{j}U_0$. Denote by $\sigma^*$ and $v^*$ the largest singular value and the corresponding right singular vector of $(U_0^TD_{j_1}U_0)^{-1}U_0^TD_{j_1}(\mathtt{I} - U_0U_0^T)$, respectively. Then the above derivations have already proven that $\sigma^*=\sqrt{1/\gamma_{\Omega}(L_0)-1}$. Construct an $m\times{}n$ matrix $M$ with the $j_1$th column being $v^*$ and everywhere else being zero. Then it can be verified that $\|(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot(M)\|_F = \sqrt{1/\gamma_{\Omega}(L_0)-1}\|M\|_F$. \end{proof} \begin{lemm}\label{lem:critical:optnorm:ptpo} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$, and let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{T_0}(\cdot)=U_0U_0^T(\cdot)+(\cdot)V_0V_0^T-U_0U_0^T(\cdot)V_0V_0^T$. If $L_0$ is $\Omega/\Omega^T$-isomeric then \begin{align*} \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| \leq{}2(1-\gamma_{\Omega,\Omega^T}(L_0)). \end{align*} \end{lemm} \begin{proof} Using the same arguments as in the proof of Lemma~\ref{lem:basic:inverse}, it can be proven that $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\| = \|\mathcal{P}\mathcal{P}_{\Omega}^\bot\|^2$, with $\mathcal{P}$ being any orthogonal projection onto a subspace of $\mathbb{R}^{m\times{}n}$.
Thus, we have the following: \begin{align*} & \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| = \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\|^2 = \sup_{\|M\|_F=1}\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 \\ &= \sup_{\|M\|_F=1}\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot(M) + \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2\\ &= \sup_{\|M\|_F=1}(\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 + \|\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2)\\ &\leq\sup_{\|M\|_F=1}\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 + \sup_{\|M\|_F=1}\|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 \\ &= \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\|^2 + \|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\|^2, \end{align*} which, together with Lemma~\ref{lem:critical:rnc2optnorm}, gives that \begin{align*} &\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| \leq \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\| + \|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\|\\ &= 1 - \gamma_{\Omega} (L_0) + 1 - \gamma_{\Omega^T} (L_0^T)\leq2(1 - \gamma_{\Omega,\Omega^T}(L_0)). \end{align*} \end{proof} \begin{lemm}\label{lem:critical:optnorm:invpt} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$, and let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{T_0}(\cdot)=U_0U_0^T(\cdot)+(\cdot)V_0V_0^T-U_0U_0^T(\cdot)V_0V_0^T$. If the operator $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible, then we have \begin{align*} \|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\| = \sqrt{\frac{1}{1-\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|}-1}. \end{align*} \end{lemm} \begin{proof} We shall use again the two notations, $\mathrm{vec}(\cdot)$ and $D$, defined in the proof of Lemma~\ref{lem:basic:inverse}. Let $P\in\mathbb{R}^{mn\times{}r}$ be a matrix with orthonormal columns such that $\mathrm{vec}(\mathcal{P}_{T_0}(M)) = PP^T\mathrm{vec}(M)$, $\forall{}M$. Since $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible, it follows that $P^TDP$ is positive definite. Denote by $\sigma_{min}(\cdot)$ the smallest singular value of a matrix. Then we have the following: \begin{align*} &\|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\|^2 \\ &= \|P(P^TDP)^{-1}P^TD(\mathtt{I}-PP^T)\|^2\\ &=\|P(P^TDP)^{-1}P^TD(\mathtt{I}-PP^T)DP(P^TDP)^{-1}P^T\| \\ &= \|(P^TDP)^{-1}-\mathtt{I}\| = \frac{1}{\sigma_{min}(P^TDP)} -1 \\ &= \frac{1}{1 - \|P^T(\mathtt{I} - D)P\|} - 1=\frac{1}{1 - \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|} - 1. \end{align*} \end{proof} The following lemma is more general than Theorem~\ref{thm:fnorm}. \begin{lemm}\label{lem:critical:uinorm} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Consider the following convex problem: \begin{align}\label{eq:uinorm} \min_{X} \norm{X}_{UI},\textrm{ s.t. }\mathcal{P}_{\Omega}(AX-L_0)=0, \end{align} where $\norm{\cdot}_{UI}$ generally denotes a convex unitarily invariant norm and $A\in\Re^{m\times{}p}$ is given. If $L_0\in\mathrm{span}\{A\}$ and $A$ is $\Omega$-isomeric then $X_0=A^+L_0$ is the unique minimizer to the convex optimization problem in~\eqref{eq:uinorm}. \end{lemm} \begin{proof} Denote the SVD of $A$ as $U_A\Sigma_AV_A^T$.
Then it follows from $\mathcal{P}_{\Omega}(AX-L_0)=0$ and $L_0\in\mathrm{span}\{A\}$ that \begin{align*} \mathcal{P}_{U_A}\mathcal{P}_{\Omega}\mathcal{P}_{U_A}(AX-L_0) = 0. \end{align*} By Lemma~\ref{lem:basic:L02U} and Lemma~\ref{lem:critical:inverse}, $\mathcal{P}_{U_A}\mathcal{P}_{\Omega}\mathcal{P}_{U_A}$ is invertible and thus $AX = L_0$. Hence, $\mathcal{P}_{\Omega}(AX-L_0)=0$ is equivalent to $AX=L_0$. Notice that Theorem 4.1 of~\cite{tpami_2013_lrr} actually holds for any convex unitarily invariant norm. That is, \begin{align*} A^+L_0 = \arg\min_{X} \|X\|_{UI}, \textrm{ s.t. } AX = L_0, \end{align*} which implies that $A^+L_0$ is the unique minimizer to the problem in~\eqref{eq:uinorm}. \end{proof} \subsection{Proofs of Theorems~\ref{thm:iso},~\ref{thm:iso:necessary} and~\ref{thm:rcn:bound}} We shall use the following notation. Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$, $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$ and $\mathcal{P}_{T_0}(\cdot) = \mathcal{P}_{U_0}(\cdot)+\mathcal{P}_{V_0}(\cdot)-\mathcal{P}_{U_0}\mathcal{P}_{V_0}(\cdot)$. \begin{proof}({\bf proof of Theorem~\ref{thm:iso}}) Define an operator $\mathcal{H}$ in the same way as in~\citep{Candes:2009:math}: \begin{align*} \mathcal{H} = \mathcal{P}_{T_0} - \frac{1}{\rho_0}\mathcal{P}_{T_0}\mathcal{P}_{\Omega_{\mathcal{A}}}\mathcal{P}_{T_0}. \end{align*} According to Theorem 4.1 of~\citep{Candes:2009:math}, there exists some numerical constant $c>0$ such that the inequality, \begin{align*} \|\mathcal{H}\|\leq\sqrt{\frac{c\mu_0r_0\log{n_1}}{\rho_0n_2}}, \end{align*} holds with probability at least $1-n_1^{-10}$ provided that the right hand side is smaller than 1. So, $\|\mathcal{H}\|<1$ provided that \begin{align*} \rho_0>\frac{c\mu_0r_0\log{n_1}}{n_2}. \end{align*} When $\|\mathcal{H}\|<1$, we have \begin{align*} &\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| = \|\rho_0\mathcal{H}+(1-\rho_0)\mathcal{P}_{T_0}\|\\ &\leq{}\rho_0\|\mathcal{H}\|+(1-\rho_0)\|\mathcal{P}_{T_0}\|<1. \end{align*} Since $\mathcal{P}_{U_0}(\cdot)=\mathcal{P}_{U_0}\mathcal{P}_{T_0}(\cdot)=\mathcal{P}_{T_0}\mathcal{P}_{U_0}(\cdot)$, we have \begin{align*} &\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\| = \|\mathcal{P}_{U_0}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\mathcal{P}_{U_0}\|\leq\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|<1. \end{align*} By Lemma~\ref{lem:basic:inverse}, Lemma~\ref{lem:critical:inverse} and Lemma~\ref{lem:basic:L02U}, it can be concluded that $L_0$ is $\Omega$-isomeric with probability at least $1-n_1^{-10}$. In a similar way, it can also be proven that $L_0^T$ is $\Omega^T$-isomeric with probability at least $1-n_1^{-10}$. \end{proof} \begin{proof}({\bf proof of Theorem~\ref{thm:iso:necessary}}) When $L_0$ is not $\Omega$-isomeric, Lemma~\ref{lem:basic:L02U} and Lemma~\ref{lem:critical:inverse} give that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is not invertible. By Lemma~\ref{lem:basic:inverse}, $\mathcal{P}_{U_0}\cap{}\mathcal{P}_\Omega^\bot\neq\{0\}$. Thus, there exists $\Delta\neq0$ that satisfies $\Delta\in\mathcal{P}_{U_0}$ and $\Delta\in\mathcal{P}_\Omega^\bot$. Now construct $L = L_0 + \Delta$. Then we have $L\neq{}L_0$, $\mathcal{P}_\Omega(L) = \mathcal{P}_\Omega(L_0)$ and $\rank{L} = \rank{\mathcal{P}_{U_0}(L_0+\Delta)}\leq\rank{L_0}$.
Since $\mathcal{P}_{U_0}\cap{}\mathcal{P}_\Omega^\bot$ is a nontrivial linear space, there are indeed infinitely many choices for $L$. \end{proof} \begin{proof}({\bf proof of Theorem~\ref{thm:rcn:bound}}) Using the same arguments as in the proof of Theorem~\ref{thm:iso}, we conclude that the following holds with probability at least $1-n_1^{-10}$: \begin{align*} \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|<1-\rho_0+\frac{\rho_0}{\sqrt{\alpha}}, \end{align*} which, together with Lemma~\ref{lem:critical:rnc2optnorm}, gives that $\gamma_{\Omega}(L_0)>(1-1/\sqrt{\alpha})\rho_0$. Similarly, it can also be proven that $\gamma_{\Omega^T}(L_0^T)>(1-1/\sqrt{\alpha})\rho_0$ with probability at least $1-n_1^{-10}$. \end{proof} \subsection{Proof of Theorem~\ref{thm:iso:rcn}} Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote the $i$th row of $U_0$ as $u_i^T$, i.e., $U_0 = [u_1^T;u_2^T;\cdots;u_{m}^T]$. Define $\delta_{ij}$ as in~\eqref{eq:delta}, and define a collection of diagonal matrices $\{D_j\}_{j=1}^{n}$ as $D_j = \mathrm{diag}(\delta_{1j},\delta_{2j},\cdots,\delta_{mj})\in\mathbb{R}^{m\times{}m}$. With these notations, we shall show that the operator norm of $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}$ can be bounded from above. Considering the $j$th column of $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(X), \forall{}X, j$, we have \begin{align*} &[\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(X)]_{:,j} = U_0U_0^T(\mathtt{I}-D_j)U_0U_0^T[X]_{:,j}, \end{align*} which gives that \begin{align*} \|[\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(X)]_{:,j}\|_2\leq\|U_0U_0^T(\mathtt{I} - D_j)U_0U_0^T\|\|[X]_{:,j}\|_2. \end{align*} Since the diagonal of $D_j$ has at most $(1-\rho)m$ zeros, \begin{align*} &\|U_0U_0^T(\mathtt{I} - D_j)U_0U_0^T\|= \|\sum_{i=1}^{m}(1-\delta_{ij})u_iu_i^T\|\\ &\leq\sum_{i=1}^{m}(1-\delta_{ij})\|u_iu_i^T\|\leq(1-\rho)\mu_0r_0, \end{align*} where the last inequality follows from the definition of coherence. Thus, we have \begin{align*} \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|\leq(1-\rho)\mu_0r_0. \end{align*} Similarly, based on the assumption that at least $\rho{}n$ entries in each row of $L_0$ are observed, we have \begin{align*} \|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\|\leq(1-\rho)\mu_0r_0. \end{align*} By the assumption $\rho>1 - (1-\alpha)/(\mu_0r_0)$, \begin{align*} \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|<1-\alpha\quad\textrm{and}\quad\|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\|<1-\alpha. \end{align*} By Lemma~\ref{lem:basic:inverse} and Lemma~\ref{lem:critical:inverse}, $L_0$ is $\Omega/\Omega^T$-isomeric. In addition, it follows from Lemma~\ref{lem:critical:rnc2optnorm} that $\gamma_{\Omega,\Omega^T}(L_0)>\alpha$. \subsection{Proofs of Theorems~\ref{thm:l2} and~\ref{thm:fnorm}} Theorem~\ref{thm:fnorm} is indeed an immediate corollary of Lemma~\ref{lem:critical:uinorm}. So we only prove Theorem~\ref{thm:l2}. \begin{proof}By $y_0\in\mathcal{S}_0\subseteq\mathrm{span}\{A\}$, $y_0=AA^+y_0$ and therefore $y_b = A_bA^+y_0$. That is, $x_0=A^+y_0$ is a feasible solution to the problem in~\eqref{eq:l2}. Provided that $y_b\in\Re^k$ and the dictionary matrix $A$ is $k$-isomeric, Definition~\ref{def:iso:k} gives that $\rank{A_b} = \rank{A}$, which implies that \begin{align*} \mathrm{span}\{A_b^T\}=\mathrm{span}\{A^T\}. \end{align*} On the other hand, it is easy to see that $A^+y_0\in\mathrm{span}\{A^T\}$.
Hence, there exists a dual vector $w\in\Re^k$ that obeys \begin{align*} A_b^Tw = A^+y_0, \textrm{ i.e., } A_b^Tw \in\partial\frac{1}{2}\|A^+y_0\|_2^2. \end{align*} By standard convexity arguments~\cite{book:convex}, $x_0=A^{+}y_0$ is an optimal solution to the problem in~\eqref{eq:l2}. Since the squared $\ell_2$ norm is a strongly convex function, it follows that the optimal solution to~\eqref{eq:l2} is unique. \end{proof} \subsection{Proof of Theorem~\ref{thm:convex}} \begin{proof} Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{T_0}(\cdot)=U_0U_0^T(\cdot)+(\cdot)V_0V_0^T-U_0U_0^T(\cdot)V_0V_0^T$. Since $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, it follows from Lemma~\ref{lem:critical:optnorm:ptpo} that $\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|$ is strictly smaller than 1. By Lemma~\ref{lem:basic:inverse}, $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible and $T_0\cap{}\Omega^\bot = \{0\}$. Given $\gamma_{\Omega,\Omega^T}(L_0)>0.75$, Lemma~\ref{lem:critical:optnorm:invpt} and Lemma~\ref{lem:critical:optnorm:ptpo} imply that \begin{align*} &\|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\| = \sqrt{\frac{1}{1-\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|}-1}\\ &\leq\sqrt{\frac{1}{2\gamma_{\Omega,\Omega^T}(L_0)-1}-1}<1. \end{align*} Next, we shall consider a feasible solution $L=L_0+\Delta$ and show that the objective strictly increases unless $\Delta=0$. By $\mathcal{P}_{\Omega}(\Delta) = 0$, $\mathcal{P}_{\Omega}\mathcal{P}_{T_0}(\Delta) = -\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot(\Delta)$. Since the operator $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible, we have \begin{align*} \mathcal{P}_{T_0}(\Delta) = -(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot(\Delta). \end{align*} By $\|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\|<1$, $\|\mathcal{P}_{T_0}(\Delta)\|_*<\|\mathcal{P}_{T_0}^\bot(\Delta)\|_*$ holds unless $\mathcal{P}_{T_0}^\bot(\Delta)=0$. By the convexity of the nuclear norm, \begin{align*} &\|L_0+\Delta\|_* - \|L_0\|_*\geq{}\langle{}\Delta,U_0V_0^T+W\rangle, \end{align*} where $W\in{}\mathcal{P}_{T_0}^\bot$ and $\|W\|\leq1$. Due to the duality between the nuclear norm and the operator norm, we can construct a $W$ such that $\langle{}\Delta,W\rangle=\|\mathcal{P}_{T_0}^\bot(\Delta)\|_*$. Thus, \begin{align*} &\|L_0+\Delta\|_* - \|L_0\|_*\geq{}\|\mathcal{P}_{T_0}^\bot(\Delta)\|_* - \|\mathcal{P}_{U_0}\mathcal{P}_{V_0}(\Delta)\|_*\\ &\geq\|\mathcal{P}_{T_0}^\bot(\Delta)\|_* - \|\mathcal{P}_{T_0}(\Delta)\|_*. \end{align*} Hence, $\|L_0+\Delta\|_*$ is strictly greater than $\|L_0\|_*$ unless $\Delta\in{}T_0$. Since $T_0\cap\Omega^\bot=\{0\}$, it follows that $L_0$ is the unique minimizer to the problem in~\eqref{eq:numin}. \end{proof} \subsection{Proof of Theorem~\ref{thm:isodp:f}} \begin{proof} Since $A_0 = U_0\Sigma_0^{\frac{1}{2}}Q^T$ and $X_0= Q\Sigma_0^{\frac{1}{2}}V_0^T$, we have the following: 1) $A_0X_0=L_0$; 2) $L_0\in\mathrm{span}\{A_0\}$ and $A_0$ is $\Omega$-isomeric; 3) $L_0^T\in\mathrm{span}\{X_0^T\}$ and $X_0^T$ is $\Omega^T$-isomeric. Hence, according to Lemma~\ref{lem:critical:uinorm}, we have \begin{align*} &X_0 = A_0^+L_0=\arg\min_{X} \|X\|_F^2,\textrm{ s.t. }\mathcal{P}_{\Omega}(A_0X - L_0)=0,\\ &A_0 = L_0X_0^+=\arg\min_{A} \|A\|_F^2,\textrm{ s.t. }\mathcal{P}_{\Omega}(AX_0 - L_0)=0. \end{align*}
Hence, $(A_0,X_0)$ is a critical point to the problem in~\eqref{eq:isodp:f}. It remains to prove the second claim. Suppose that $(A=A_0+\Delta_0, X = X_0+E_0)$ with $\|\Delta_0\|\leq\varepsilon$ and $\|E_0\|\leq\varepsilon$ is a feasible solution to~\eqref{eq:isodp:f}. We want to prove that \begin{align*} \frac{1}{2}(\|A\|_F^2+\|X\|_F^2) \geq \frac{1}{2}(\|A_0\|_F^2+\|X_0\|_F^2) \end{align*} holds whenever $\varepsilon$ is sufficiently small, and show that the equality can hold only if $AX=L_0$. Denote \begin{align}\label{eq:temp:notation} &\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot), \mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T,\\\nonumber &\mathcal{P}_1 = (\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot,\\\nonumber &\mathcal{P}_2 = (\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0})^{-1}\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}^\bot. \end{align} Define \begin{align}\label{eq:temp:notation:1} &\bar{A}_0 = A_0 + \mathcal{P}_{U_0}(\Delta_0) \textrm{ and } \bar{X}_0 = X_0 + \mathcal{P}_{V_0}(E_0). \end{align} Provided that $\varepsilon<\min(1/\|A_0^+\|, 1/\|X_0^+\|)$, it follows from Lemma~\ref{lem:basic:pinv} that \begin{align}\label{eq:temp:notation:pseinv} &\rank{\bar{A}_0} = \rank{\bar{X}_0} = r_0,\\\nonumber &\|\bar{A}_0^+\|\leq\frac{\|A_0^+\|}{1-\|A_0^+\|\varepsilon}\textrm{ and }\|\bar{X}_0^+\|\leq\frac{\|X_0^+\|}{1-\|X_0^+\|\varepsilon}. \end{align} By $\mathcal{P}_{\Omega}(AX-L_0)=0$, \begin{align*} \mathcal{P}_{\Omega}(A_0E_0+\Delta_0X_0+\Delta_0E_0)=0. \end{align*} Then a direct manipulation gives \begin{align*} &\mathcal{P}_{\Omega}(\bar{A}_0E_0) \\ &= -\mathcal{P}_{\Omega}(\Delta_0\bar{X}_0- \mathcal{P}_{U_0}\mathcal{P}_{V_0}(\Delta_0E_0) + \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0)). \end{align*} Since $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is invertible, we have \begin{align}\label{eq:temp:p1} &\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) = -\mathcal{P}_{V_0}^\bot(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}(\Delta_0\bar{X}_0\\\nonumber &-\mathcal{P}_{U_0}\mathcal{P}_{V_0}(\Delta_0E_0) + \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0)) \\\nonumber &= -\mathcal{P}_{V_0}^\bot\mathcal{P}_1\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0) - \mathcal{P}_{V_0}^\bot\mathcal{P}_1\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0). \end{align} Similarly, by the invertibility of $\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}$, \begin{align}\label{eq:temp:p2} &\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\\\nonumber &= -\mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) - \mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0). \end{align} The combination of~\eqref{eq:temp:p1} and~\eqref{eq:temp:p2} gives that \begin{align*} &\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) = \mathcal{P}_{V_0}^\bot\mathcal{P}_1\mathcal{P}_2\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) + \\\nonumber &\mathcal{P}_{V_0}^\bot(\mathcal{P}_1\mathcal{P}_2-\mathcal{P}_1)\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0). \end{align*} By $\rank{\bar{A}_0}=r_0=p$, \begin{align*} \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0) = \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0\bar{A}_0^+\bar{A}_0E_0). \end{align*}
By Lemma~\ref{lem:critical:optnorm:big} and the assumption of $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, $\|\mathcal{P}_1\|<1$ and $\|\mathcal{P}_2\|<1$. Thus, \begin{align*} &\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|\leq\|\mathcal{P}_1\mathcal{P}_2\|\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|\\ &+\varepsilon(\|\mathcal{P}_1\mathcal{P}_2\|+\|\mathcal{P}_1\|)\|\bar{A}_0^+\|\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|\\ &\leq\left(\frac{1}{\gamma_{\Omega,\Omega^T}(L_0)}-1+\frac{2\varepsilon\|A_0^+\|}{1-\|A_0^+\|\varepsilon}\right)\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|. \end{align*} Let \begin{align*} \varepsilon < \min\left(\frac{1}{2\|A_0^+\|}, \frac{2\gamma_{\Omega,\Omega^T}(L_0)-1}{4\|A_0^+\|\gamma_{\Omega,\Omega^T}(L_0)}\right). \end{align*} Then the coefficient on the right-hand side above is strictly smaller than 1, and hence the inequality can hold only if $\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)=0$. Since $\rank{\bar{A}_0}=r_0=p$, $\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)=0$ simply leads to $E_0\in\mathcal{P}_{V_0}$. Hence, \begin{align*} A_0E_0+\Delta_0X_0+\Delta_0E_0 \in\mathcal{P}_{V_0}\cap\mathcal{P}_{\Omega}^\bot = \{0\}, \end{align*} which implies that $AX = L_0$. Thus, we finally have \begin{align*} \frac{1}{2}(\|A\|_F^2+\|X\|_F^2)\geq\|L_0\|_*=\frac{1}{2}(\|A_0\|_F^2+\|X_0\|_F^2), \end{align*} where the inequality follows from $\|AX\|_*\leq\frac{1}{2}(\|A\|_F^2+\|X\|_F^2)$, a consequence of $\|L\|_*=\min_{AX=L}\frac{1}{2}(\|A\|_F^2+\|X\|_F^2)$~\cite{siam_2010_minirank}. \end{proof} \subsection{Proof of Theorem~\ref{thm:isodp}} \begin{proof} Since $A_0 = U_0\Sigma_0^{\frac{2}{3}}Q^T$ and $X_0= Q\Sigma_0^{\frac{1}{3}}V_0^T$, we have the following: 1) $A_0X_0=L_0$; 2) $L_0\in\mathrm{span}\{A_0\}$ and $A_0$ is $\Omega$-isomeric; 3) $L_0^T\in\mathrm{span}\{X_0^T\}$ and $X_0^T$ is $\Omega^T$-isomeric. Due to Lemma~\ref{lem:critical:uinorm}, we have \begin{align*} &X_0 = A_0^+L_0=\arg\min_{X} \|X\|_F^2,\textrm{ s.t. }\mathcal{P}_{\Omega}(A_0X - L_0)=0,\\ &A_0 = L_0X_0^+=\arg\min_{A} \|A\|_*,\textrm{ s.t. }\mathcal{P}_{\Omega}(AX_0 - L_0)=0. \end{align*} Hence, $(A_0,X_0)$ is a critical point to the problem in~\eqref{eq:isodp}. Regarding the second claim, we consider a feasible solution $(A=A_0+\Delta_0, X = X_0+E_0)$, with $\|\Delta_0\|\leq\varepsilon$ and $\|E_0\|\leq\varepsilon$. Define $\mathcal{P}_{U_0}$, $\mathcal{P}_{V_0}$, $\mathcal{P}_1$, $\mathcal{P}_2$, $\bar{A}_0$ and $\bar{X}_0$ in the same way as in~\eqref{eq:temp:notation} and~\eqref{eq:temp:notation:1}. Note that the statements in~\eqref{eq:temp:notation:pseinv} still hold in the general case of $p\geq{}r_0$. Denote the SVD of $\bar{X}_0$ as $\bar{Q}\bar{\Sigma}\bar{V}_0^T$. Then we have $V_0V_0^T = \bar{V}_0\bar{V}_0^T$. Denote \begin{align*} P_{\bar{Q}} = \bar{Q}\bar{Q}^T \textrm{ and }P_{\bar{Q}}^\bot = \mathtt{I} - \bar{Q}\bar{Q}^T. \end{align*} Denote the condition number of $X_0$ as $\tau_0$. With these notations, we shall finish the proof by exploring two cases. \subsubsection{Case 1: $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\geq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$} Denote the SVD of $L_0\bar{X}_0^+$ as $\tilde{U}_0\tilde{\Sigma}\tilde{Q}^T$. Then we have \begin{align*} \tilde{U}_0\tilde{U}_0^T = U_0U_0^T \textrm{ and }\tilde{Q}\tilde{Q}^T = \bar{Q}\bar{Q}^T.
\end{align*} By the convexity of the nuclear norm, \begin{align}\label{eq:temp:ded:1} &\|A\|_* - \|L_0\bar{X}_0^+\|_*=\|A_0+\Delta_0\|_* - \|L_0\bar{X}_0^+\|_*\\\nonumber &\geq{}\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T+W\rangle, \end{align} where $W\in\mathbb{R}^{m\times{}p}$, $\tilde{U}_0^TW = 0$, $W\tilde{Q} = 0$ and $\|W\|\leq1$. Due to the duality between the nuclear norm and operator norm, we can construct a $W$ such that \begin{align}\label{eq:temp:ded:2} \langle{}\Delta_0,W\rangle=\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*. \end{align} We also have \begin{align*} &\langle{}A_0\hspace{-0.02in}+\hspace{-0.02in}\Delta_0\hspace{-0.02in}-\hspace{-0.02in}L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle\hspace{-0.02in}=\hspace{-0.02in}\langle{}\Delta_0+A_0E_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle\\ &=\langle{}\Delta_0\bar{X}_0\bar{X}_0^++A_0E_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle, \end{align*} which gives that \begin{align}\label{eq:temp:ded:3} &\mathrm{abs}(\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle)\leq\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}\mathcal{P}_{V_0}\\\nonumber &(\Delta_0\bar{X}_0+A_0E_0)\|_*\leq\|\bar{X}_0^+\|\|\Delta_0\bar{X}_0+\mathcal{P}_{V_0}(A_0E_0)\|_*, \end{align} where we denote by $\mathrm{abs}(\cdot)$ the absolute value of a real number. By $\mathcal{P}_{\Omega}(A_0E_0+\Delta_0X_0+\Delta_0E_0)=0$, \begin{align*} &\Delta_0\bar{X}_0+\mathcal{P}_{V_0}(A_0E_0) = -\mathcal{P}_2\mathcal{P}_{V_0}^\bot(A_0E_0) - \mathcal{P}_2\mathcal{P}_{V_0}^\bot(\Delta_0E_0)\\ &=-\mathcal{P}_2(-\mathcal{P}_{V_0}^\bot\mathcal{P}_1(\Delta_0X_0+\Delta_0E_0) - \mathcal{P}_{V_0}^\bot\mathcal{P}_{U_0}(\Delta_0E_0))\\ &- \mathcal{P}_2\mathcal{P}_{V_0}^\bot(\Delta_0E_0)=\mathcal{P}_2\mathcal{P}_1(\Delta_0X_0+\Delta_0E_0) - \mathcal{P}_2\mathcal{P}_{U_0}^\bot(\Delta_0E_0)\\ &= \mathcal{P}_2\mathcal{P}_1(\Delta_0\bar{X}_0) + \mathcal{P}_2\mathcal{P}_1\mathcal{P}_{V_0}^\bot(\Delta_0E_0) - \mathcal{P}_2\mathcal{P}_{U_0}^\bot(\Delta_0E_0). \end{align*} By Lemma~\ref{lem:critical:optnorm:big} and the assumption of $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, $\|\mathcal{P}_1\|<1$ and $\|\mathcal{P}_2\|<1$. As a result, we have \begin{align}\label{eq:temp:ded:3b} &\|\Delta_0\bar{X}_0+\mathcal{P}_{V_0}(A_0E_0)\|_* \\\nonumber &\leq \|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*+2\|\mathcal{P}_{U_0}^\bot(\Delta_0E_0)\|_*. \end{align} Let \begin{align*} \varepsilon < \min\left(\frac{0.1\|X_0\|}{1+1.1\tau_0},\frac{0.175}{\|X_0^+\|}\right). 
\end{align*} Due to~\eqref{eq:temp:ded:3},~\eqref{eq:temp:ded:3b} and the assumption of $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\geq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$, it can be calculated that \begin{align}\label{eq:temp:ded:4} &\mathrm{abs}(\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle)\\\nonumber &\leq\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*+2\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0(P_{\bar{Q}}+P_{\bar{Q}}^\bot)E_0)\|_*\\\nonumber &\leq\|\bar{X}_0^+\|\|\bar{X}_0\|\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*+2\varepsilon\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*\\\nonumber &+2\varepsilon\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\leq1.1\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*\\\nonumber &+0.2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*+0.35\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\\\nonumber &\leq(0.65+0.35)\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*=\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*. \end{align} Now, combining~\eqref{eq:temp:ded:1},~\eqref{eq:temp:ded:2} and~\eqref{eq:temp:ded:4}, we have \begin{align*} &\|A\|_* - \|L_0\bar{X}_0^+\|_*\geq\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\\ &-\mathrm{abs}(\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle)\geq0, \end{align*} which, together with Lemma~\ref{lem:basic:ax}, simply leads to \begin{align*} &\|A\|_* + \frac{1}{2}\|X\|_F^2 = (\|A\|_*-\|L_0\bar{X}_0^+\|_*)\\ &+(\|L_0\bar{X}_0^+\|_*+\frac{1}{2}\|X\|_F^2)\geq\|L_0\bar{X}_0^+\|_*+\frac{1}{2}\|\bar{X}_0\|_F^2\\ &\geq{}\frac{3}{2}\trace{\Sigma_0^{\frac{2}{3}}}=\|A_0\|_* + \frac{1}{2}\|X_0\|_F^2. \end{align*} For the equality $\|A\|_* + 0.5\|X\|_F^2=\|A_0\|_* + 0.5\|X_0\|_F^2$ to hold, it is necessary that $\|X\|_F = \|\bar{X}_0\|_F$, which implies that $E_0\in\mathcal{P}_{V_0}$. Hence, we have $A_0E_0+\Delta_0X_0+\Delta_0E_0 \in\mathcal{P}_{V_0}\cap\mathcal{P}_{\Omega}^\bot = \{0\}$, which gives that $AX = L_0$. \subsubsection{Case 2: $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\leq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$} Using a manipulation similar to that in the proof of Theorem~\ref{thm:isodp:f}, we have \begin{align*} &\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0) = \mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_1\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)+ \\ &\mathcal{P}_{U_0}^\bot(\mathcal{P}_2\mathcal{P}_1-\mathcal{P}_2)\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0)=\mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_1\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\\ &+\mathcal{P}_{U_0}^\bot(\mathcal{P}_2\mathcal{P}_1-\mathcal{P}_2)\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0P_{\bar{Q}}E_0 + \Delta_0P_{\bar{Q}}^\bot{}E_0) . \end{align*} Due to Lemma~\ref{lem:critical:optnorm:big} and the assumption of $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, we have $\|\mathcal{P}_1\|<1$ and $\|\mathcal{P}_2\|<1$.
By the assumption of $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\leq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$, \begin{align*} &\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*\leq\|\mathcal{P}_2\mathcal{P}_1\|\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*\\ &+(4\tau_0+2)\varepsilon\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*= \|\mathcal{P}_2\mathcal{P}_1\|\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*\\ &+(4\tau_0+2)\varepsilon\|\mathcal{P}_{U_0}^\bot(\Delta_0)\bar{X}_0\bar{X}_0^+\|_* \\ &\leq\left(\frac{1}{\gamma_{\Omega,\Omega^T}(L_0)}-1+\frac{(4\tau_0+2)\varepsilon\|X_0^+\|}{1-\|X_0^+\|\varepsilon}\right)\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*. \end{align*} Let \begin{align*} \varepsilon < \min\left(\frac{1}{2\|X_0^+\|}, \frac{2\gamma_{\Omega,\Omega^T}(L_0)-1}{(8\tau_0+4)\|X_0^+\|\gamma_{\Omega,\Omega^T}(L_0)}\right). \end{align*} Then the coefficient on the right-hand side above is strictly smaller than 1, and hence the inequality can hold only if $\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)=0$. That is, \begin{align*} \mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}} = 0 \textrm{ and thus } \mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot = 0. \end{align*} Hence, we have $\mathcal{P}_{U_0}^\bot(\Delta_0)=0$, which simply leads to \begin{align*} A_0E_0+\Delta_0X_0+\Delta_0E_0 \in\mathcal{P}_{U_0}\cap\mathcal{P}_{\Omega}^\bot = \{0\}, \end{align*} which gives that $AX = L_0$. By Lemma~\ref{lem:basic:ax}, \begin{align*} &\|A\|_*+\frac{1}{2}\|X\|_F^2 \geq{}\frac{3}{2}\trace{\Sigma_0^{\frac{2}{3}}}=\|A_0\|_* + \frac{1}{2}\|X_0\|_F^2. \end{align*} \end{proof} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{rcn.pdf}\vspace{-0.15in} \caption{Left: The relative condition number $\gamma_{\Omega,\Omega^T}(L_0)$ vs the missing rate $1-\rho_0$ at $m=500$. Middle: The relative condition number vs the matrix size $m$. Right: The recovery performance of convex optimization as a function of the missing rate.}\label{fig:rcn}\vspace{-0.2in} \end{center} \end{figure} \section{Experiments}\label{sec:exp} \subsection{Investigating the Relative Condition Number}\label{sec:exp:rcn} To study the properties of the relative condition number, we generate a vector $x\in\mathbb{R}^{m}$ according to the model $[x]_t = \sin(2t\pi/m)$, $t=1,\cdots,m$. That is, $x$ is a univariate time series of dimension $m$. We consider the forecasting task of recovering $x$ from a collection of $l$ observations, $\{[x]_t\}_{t=1}^{l}$, where $l=\rho_0m$ varies from $0.1m$ to $0.9m$ with step size $0.1m$. Let $y\in\mathbb{R}^{m}$ be the mask vector of the sampling operator, i.e., $[y]_t$ is 1 if $[x]_t$ is observed and 0 otherwise. In order to recover $x$, it suffices to recover its \emph{convolution matrix}~\cite{liu:tip:2014}. Thus, the forecasting tasks here can be converted to matrix completion problems, with \begin{align*} L_0 = \mathcal{A}(x)\quad\textrm{and}\quad\Omega=\mathrm{supp}(\mathcal{A}(y)), \end{align*} where $\mathcal{A}(\cdot)$ is the convolution matrix of a tensor\footnote{Unlike~\cite{liu:tip:2014}, we adopt here the circulant boundary condition. Thus, the $j$th column of $\mathcal{A}(x)$ is simply the vector obtained by circularly shifting the elements in $x$ by $j-1$ positions.}, and $\mathrm{supp}(\cdot)$ is the support set of a matrix. In this example, $L_0\in\mathbb{R}^{m\times{}m}$ is a circulant matrix that is perfectly incoherent and low rank; namely, $\rank{L_0}\equiv2$ and $\mu(L_0)\equiv1$, $\forall{}m>2$.
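To make this setup concrete, the following minimal Python sketch (our own illustration, not the authors' code) builds $L_0=\mathcal{A}(x)$ and $\Omega$ and evaluates the relative condition number column-wise via Lemma~\ref{lem:basic:rcn}; it assumes that $\gamma_{\Omega}(L_0)$ is the minimum of $\gamma_{\omega}(\cdot)$ over the columns of $\Omega$ (Definition~\ref{def:rcn:2} is not restated here). \begin{verbatim}
import numpy as np

def circulant(v):
    # m x m matrix whose j-th column is v circularly shifted by j positions
    return np.column_stack([np.roll(v, j) for j in range(len(v))])

m, rho0 = 500, 0.7
t = np.arange(1, m + 1)
x = np.sin(2 * np.pi * t / m)        # univariate time series
y = (t <= rho0 * m).astype(float)    # first l = rho0*m entries observed

L0 = circulant(x)                    # rank-2 circulant matrix, mu(L0) = 1
Omega = circulant(y) > 0             # boolean support set of A(y)

U, s, _ = np.linalg.svd(L0)
U = U[:, s >= 1e-4 * s[0]]           # orthonormal basis of the column space

def gamma_col(mask_col):
    # Lemma "basic:rcn": gamma = smallest eigenvalue of U^T D U
    return np.linalg.eigvalsh(U.T @ (mask_col[:, None] * U)).min()

gamma_Omega = min(gamma_col(Omega[:, j]) for j in range(m))
print(np.linalg.matrix_rank(L0), gamma_Omega)   # rank 2 for all m > 2
\end{verbatim}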
Moreover, each column and each row of $\Omega$ has cardinality exactly $\rho_0m$. We use the convex program~\eqref{eq:numin} to restore $L_0$ from the given observations. The results are shown in Figure~\ref{fig:rcn}. It can be seen that the relative condition number is independent of the matrix size and monotonically decreases as the missing rate grows. As we can see from the right hand side of Figure~\ref{fig:rcn}, the recovery performance visibly declines when the missing rate exceeds $30\%$ (i.e., $\rho_0<0.7$), which approximately corresponds to $\gamma_{\Omega,\Omega^T}(L_0)<0.55$. When $\rho_0<0.3$ (which corresponds approximately to $\gamma_{\Omega,\Omega^T}(L_0)<0.15$), matrix completion totally breaks down. These results illustrate that relative well-conditionedness is important for guaranteeing the success of matrix completion in practice. Of course, the lower bound on $\gamma_{\Omega,\Omega^T}(L_0)$ would depend on the characteristics of data, and the condition $\gamma_{\Omega,\Omega^T}(L_0)>0.75$ proven in Theorem~\ref{thm:convex} is just a universal bound for guaranteeing exact recovery in the worst case. In addition, the estimate given in Theorem~\ref{thm:iso:rcn} is accurate only when the missing rate is low, as shown in the left part of Figure~\ref{fig:rcn}. Among other things, it is worth noting that the sampling complexity does not decrease as the matrix size $m$ grows. This phenomenon is in conflict with the uniform sampling based matrix completion theories, which prove that a small fraction of $O((\log{m})^2/m)$ entries should suffice to recover $L_0$~\cite{Chen:2015:tit}, implying that the sampling complexity should decrease to zero when the matrix size $m$ goes to infinity. Hence, as aforementioned, the theories built upon uniform sampling are no longer applicable when applied to deterministic missing data patterns. \subsection{Results on Randomly Generated Matrices} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{location.pdf}\vspace{-0.15in} \caption{Visualizing the configurations of $\Omega$ used in our simulations. The white points correspond to the locations of the observed entries. In these two examples, 90\% of the entries of the matrix are missing.}\label{fig:location}\vspace{-0.2in} \end{center} \end{figure} \begin{figure} \begin{center} \subfigure[nonuniform]{\includegraphics[width=0.48\textwidth]{nonuniform.pdf}} \subfigure[uniform]{\includegraphics[width=0.48\textwidth]{uniform.pdf}}\vspace{-0.15in} \caption{Comparing IsoDP with convex optimization and LRFD. The numbers plotted on the above figures are the success rates within 20 random trials. The white and black areas mean ``succeed" and ``fail", respectively. Here, the success is in the sense that $\mathrm{PSNR_{dB}}$ $\geq$ 40.}\label{fig:cmp}\vspace{-0.2in} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{iso.pdf}\vspace{-0.15in} \caption{Visualizing the regions in which the isomeric condition holds.}\label{fig:iso}\vspace{-0.2in} \end{center} \end{figure} To evaluate the performance of various matrix completion methods, we generate a collection of $m\times{}n$ ($m=n=100$) target matrices according to $L_0=BC$, where $B\in\Re^{m\times{}r_0}$ and $C\in\Re^{r_0\times{}n}$ are $\mathcal{N}(0,1)$ matrices. The rank of $L_0$, i.e., $r_0$, is configured as $r_0=1, 5, 10, \cdots, 90, 95$.
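A minimal sketch of this data-generation model (ours, for reference; the function name is our own): \begin{verbatim}
import numpy as np

def make_target(m=100, n=100, r0=5, seed=0):
    # L0 = B C with i.i.d. N(0,1) factors; rank(L0) = r0 almost surely
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((m, r0))
    C = rng.standard_normal((r0, n))
    return B @ C
\end{verbatim}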
Regarding the sampling set $\Omega$ consisting of the locations of the observed entries, we consider two settings: one is to create $\Omega$ by using a Bernoulli model to randomly sample a subset from $\{1,\cdots,m\}\times\{1,\cdots,n\}$ (referred to as ``uniform''); the other is to let the locations of the observed entries be centered around the main diagonal of a matrix (referred to as ``nonuniform''). Figure~\ref{fig:location} shows what the sampling set $\Omega$ looks like. The observation fraction is set as $|\Omega|/(mn)=0.01,0.05,\cdots,0.9, 0.95$. To show the advantages of IsoDP, we include for comparison two prevalent methods: convex optimization~\cite{Candes:2009:math} and Low-Rank Factor Decomposition (LRFD)~\cite{liu:tsp:2016}. Like IsoDP, these two methods do not assume that the rank of $L_0$ is known either. When $p = m$ and the identity matrix is used to initialize the dictionary $A$, the bilinear program~\eqref{eq:isodp:f} does not outperform convex optimization, and we therefore exclude it from the comparison. The accuracy of recovery, i.e., the similarity between $L_0$ and $\hat{L}_0$, is measured by Peak Signal-to-Noise Ratio ($\mathrm{PSNR_{dB}}$). Figure~\ref{fig:cmp} compares IsoDP to convex optimization and LRFD. It can be seen that IsoDP works distinctly better than the competing methods. Namely, while handling nonuniformly missing data, the number of matrices successfully restored by IsoDP is 102\% and 71\% more than convex optimization and LRFD, respectively. While dealing with the missing entries chosen uniformly at random, in terms of the number of successfully restored matrices, IsoDP outperforms both convex optimization and LRFD by 44\%. These results verify the effectiveness of IsoDP. Figure~\ref{fig:iso} plots the regions where the isomeric condition is valid. By comparing Figure~\ref{fig:cmp} to Figure~\ref{fig:iso}, it can be seen that the recovery performance of IsoDP has not reached the upper limit defined by isomerism. That is, there is still some room left for improvement. \subsection{Results on Motion Data} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{dinosaur.pdf} \vspace{-0.15in}\caption{An example image from the Oxford dinosaur sequence and the locations of the observed entries in the data matrix of trajectories. In this dataset, 74.29\% entries of the trajectory matrix are missing.}\label{fig:din}\vspace{-0.2in} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{dinosaur-res.pdf} \vspace{-0.15in}\caption{Some examples of the originally incomplete and fully restored trajectories. (a) The original incomplete trajectories. (b) The trajectories restored by convex optimization~\cite{Candes:2009:math}. (c) The trajectories restored by LRFD~\cite{liu:tsp:2016}. (d) The trajectories restored by IsoDP.}\label{fig:dinosaur:res}\vspace{-0.2in} \end{center} \end{figure} We now consider the Oxford dinosaur sequence\footnote{Available at http://www.robots.ox.ac.uk/$\sim$vgg/data1.html}, which contains in total 72 image frames corresponding to 4983 track points observed by at least 2 among 36 views. The values of the observations range from 8.86 to 629.82. We select 195 track points which are observed by at least 6 views for the experiments, resulting in a $72\times{}195$ trajectory matrix, 74.29\% of whose entries are missing (see Figure~\ref{fig:din}). The tracked dinosaur model is rotating around its center, and thus the true trajectories should form complete circles~\cite{zheng:cvpr:2012}.
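The evaluation rules used in the next table estimate the rank of a restored matrix by thresholding its singular values and measure the fit on the observed entries only; a minimal sketch of our rendering of these rules (the function names are ours): \begin{verbatim}
import numpy as np

def estimated_rank(L, tol=1e-4):
    # #{ i : sigma_i >= tol * sigma_1 }
    s = np.linalg.svd(L, compute_uv=False)
    return int(np.sum(s >= tol * s[0]))

def mse_on_observed(L_hat, L0, Omega):
    # Omega is a boolean mask of the observed (training) entries
    d = (L_hat - L0)[Omega]
    return float(np.mean(d ** 2))
\end{verbatim}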
\begin{table} \caption{Mean square error (MSE) on the Oxford dinosaur sequence. Here, the rank of a matrix is estimated by $\#\{i |\sigma_i\geq10^{-4}\sigma_1\}$, where $\sigma_1\geq\sigma_2\geq\cdots$ are the singular values of the matrix. The regularization parameter in each method is manually tuned such that the rank of the restored matrix meets a certain value. Here, the MSE values are evaluated on the training data (i.e., observed entries).}\label{tb:motion}\vspace{-0.2in} \begin{center} \begin{tabular}{|c|c|c|c|}\hline rank of the & & &\\ restored matrix &convex optimization &LRFD &IsoDP\\\hline 6 &426.1369 &28.4649 &\textbf{0.6140}\\ 7 &217.9963 &21.6968 &\textbf{0.4682}\\ 8 &136.7643 &17.2269 &\textbf{0.1480}\\ 9 &94.4673 &13.954 &\textbf{0.0585}\\ 10 &53.9864 &6.3768 &\textbf{0.0468}\\ 11 &43.2613 &5.9877 &\textbf{0.0374}\\ 12 &29.7542 &4.5136 &\textbf{0.0302}\\\hline \end{tabular}\vspace{-0.2in} \end{center} \end{table} The results in Theorem~\ref{thm:isodp} imply that our IsoDP may possess the ability to attain a solution of strictly low rank. To confirm this, we evaluate convex optimization, LRFD and IsoDP by examining the rank of the restored trajectory matrix as well as the fitting error on the observed entries. Table~\ref{tb:motion} shows the evaluation results. It can be seen that, while the restored matrices have the same rank, the fitting error produced by IsoDP is much smaller than that of the competing methods. The error of convex optimization is quite large, because the method cannot produce a solution of exactly low rank unless a biased regularization parameter is chosen. Figure~\ref{fig:dinosaur:res} shows some examples of the originally incomplete and fully restored trajectories. Our IsoDP method can approximately recover the circle-like trajectories. \subsection{Results on Movie Ratings} We also consider the MovieLens~\cite{Harper:2015} datasets that are widely used in research and industry. The dataset we use consists of 100,000 ratings (integers between 1 and 5) from 943 users on 1682 movies. The distribution of the observed entries is severely imbalanced: The number of movies rated by each user ranges from 20 to 737, and the number of users who have rated each movie ranges from 1 to 583. We remove the users that have fewer than 80 ratings, and likewise for the movies. Thus the final dataset used for experiments consists of 14,675 ratings from 231 users on 206 movies. For the sake of quantitative evaluation, we randomly select 1468 ratings as the testing data, i.e., those ratings are intentionally set unknown to the matrix completion methods. So, the percentage of the observed entries used as inputs for matrix completion is only 27.75\%. \begin{table} \caption{MSE on the MovieLens dataset. The regularization parameters of the competing methods have been manually tuned to their best values.
Here, the MSE values are evaluated on the testing data.}\label{tb:movie}\vspace{-0.2in} \begin{center} \begin{tabular}{cc}\hline methods & MSE \\\hline random & 3.7623\\ average & 1.6097 \\ convex optimization & 0.9350\\ LRFD & 0.9213 \\\hline IsoDP ($\lambda=0.0005$) & 0.8412 \\ IsoDP ($\lambda=0.0008$) & 0.8250 \\ IsoDP ($\lambda=0.001$) & \textbf{0.8228}\\ IsoDP ($\lambda=0.002$) & 0.8295\\ IsoDP ($\lambda=0.005$) & 0.8583\\\hline \end{tabular}\vspace{-0.2in} \end{center} \end{table} Besides convex optimization and LRFD, we also consider two ``trivial'' baselines: one is to estimate the unseen ratings by randomly choosing an integer from the range of 1 to 5; the other is to simply use the average rating of 3 to fill the unseen entries. The comparison results are shown in Table~\ref{tb:movie}. As we can see, all the considered matrix completion methods distinctly outperform the trivial baselines, illustrating that matrix completion is beneficial on this dataset. In particular, IsoDP with proper parameters performs much better than convex optimization and LRFD, confirming the effectiveness of IsoDP on realistic datasets. \section{Conclusion}\label{sec:con} This work studied the identifiability of real-valued matrices in the context of deterministic sampling. We established two deterministic conditions, isomerism and relative well-conditionedness, for ensuring that an arbitrary matrix is identifiable from a subset of the matrix entries. We first proved that the proposed conditions can hold even if the missing data pattern is irregular. Then we proved a series of theorems for missing data recovery and convex/nonconvex matrix completion. In general, our results could help to understand the completion regimes of arbitrary missing data patterns, providing a basis for investigating other related problems such as data forecasting. \section*{Acknowledgement} This work is supported in part by New Generation AI Major Project of Ministry of Science and Technology under Grant SQ2018AAA010277, in part by National Natural Science Foundation of China (NSFC) under Grant 61622305, Grant 61532009 and Grant 71490725, in part by Natural Science Foundation of Jiangsu Province of China (NSFJPC) under Grant BK20160040, and in part by SenseTime Research Fund.
\section{Introduction} Branching processes have been applied widely to model epidemic spread (see, e.g., the monographs by Andersson and Britton \citep {hakanbritton}, Daley and Gani \citep{daleygani} and Mode and Sleeman \citep {modesleemam}, and the review by Pakes \citep{pakes}). The process describing the number of infectious individuals in an epidemic model may be well approximated by a branching process if the population is homogeneously mixing and the number of infectious individuals is small in relation to the total size of the susceptible population, since under these circumstances the probability that an infectious contact is with a previously infected individual is negligible (see, e.g., Isham \citep{isham}). Such an approximation dates back to the pioneering works of Bartlett \citep{bartlett} and Kendall \citep{kendall}, and can be made mathematically precise by showing convergence of the epidemic process to a limiting branching process as the number of susceptibles tends to infinity (see Ball \citep{ball83}, Ball and Donnelly \citep{ball} and Metz \citep{metz}). The approximation may also be extended to epidemics in populations that are not homogeneously mixing, for example, those containing small mixing units such as households and workplaces (see Pellis \textit{et al.} \citep{pellis}). Before proceeding we give outline descriptions of some common branching process models (see, e.g., Jagers \citep{jage} for further details), which describe the evolution of a single-type population. In all of these models, individuals have independent and identically distributed reproduction processes. In a Bienaym{\'e}--Galton--Watson branching process, each individual lives for one unit of time and then has a random number of children, distributed according to a random variable, $\zeta$ say. In a Bellman--Harris branching process (BHBP), each individual lives until a random age, distributed according to a random variable $I$ say, and then has a random number of children, distributed according to $\zeta$, where $I$ and $\zeta$ are independent. The Sevast'yanov branching process (SBP) is defined similarly, except $I$ and $\zeta$ may be dependent, so the number of children an individual has is correlated with that individual's lifetime. Finally, in a general branching process, also called a Crump--Mode--Jagers (CMJ) branching process, each individual lives until a random age, distributed according to $I$, and reproduces at ages according to a point process $\xi$. More precisely, if an individual, $i$ say having reproduction variables $(I_i,\xi_i)$, is born at time $b_i$ and $0 \le \tau_{i1} \le\tau_{i2}\le \cdots\le I_i$ denote the points of $\xi_i$, then individual $i$ has one child at each of times $b_i+\tau_{i1}, b_i+\tau_{i2}, \ldots\,$. This paper is primarily concerned with models for epidemics of diseases, such as measles, mumps and avian influenza, which follow the so-called SIR (Susceptible $\to$ Infective $\to$ Removed) scheme in a closed, homogeneously mixing population or some of its extensions. A key epidemiological parameter for such an epidemic model is the basic reproduction number $R_0$ (see Heesterbeek and Dietz \citep{heesterbeek}), which in the present setting is given by the mean of the offspring distribution of the approximating branching process. In particular a major outbreak (i.e., one whose size is of the same order as the population size) occurs with nonzero probability if and only if $R_0>1$. 
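To make these descriptions concrete, here is a minimal simulation sketch of the simplest continuous-time model above, a subcritical BHBP; the exponential lifetime, the Poisson offspring law and all parameter values are illustrative assumptions only. The offspring mean $m$ plays the role that $R_0$ plays in the epidemic context, so taking $m<1$ guarantees almost sure extinction.
\begin{verbatim}
import heapq
import numpy as np

rng = np.random.default_rng(1)

def bhbp_extinction_time(m=0.8, mean_life=2.5, z0=1, t_max=200.0):
    """Bellman-Harris sketch: lifetimes ~ Exp(mean_life), offspring
    ~ Poisson(m), children born at the parent's death. Returns the
    extinction time (np.inf if individuals survive past t_max)."""
    deaths = list(rng.exponential(mean_life, size=z0))
    heapq.heapify(deaths)
    T = 0.0
    while deaths:
        t = heapq.heappop(deaths)        # earliest remaining death
        if t > t_max:
            return np.inf                # censor long realizations
        T = t                            # pops are increasing, so the
        for _ in range(rng.poisson(m)):  # last pop is the extinction time
            heapq.heappush(deaths, t + rng.exponential(mean_life))
    return T
\end{verbatim}
A Sevast'yanov process would be simulated similarly but with the offspring count drawn jointly with the lifetime, and a general CMJ process by scattering the birth times over $[0,I]$ according to the point process $\xi$ rather than concentrating them at the death time.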
Suppose that $R_0>1$ and a fraction $c$ of the population is vaccinated with a perfect vaccine in advance of an epidemic. Then $R_0$ is reduced to $(1-c)R_0$, since a proportion $c$ of infectious contacts is with vaccinated individuals. It follows that a major outbreak is almost surely prevented if and only if $c \ge1-R_0^{-1}$. This well-known result, which gives the critical vaccination coverage to prevent a major outbreak and goes back at least to 1964 (e.g., Smith \citep{Smith}), is widely used to inform public health authorities. As a consequence of the above result, many analyses of vaccination strategies in the epidemic modelling literature have focussed on reducing $R_0$ to its critical value of one. However, if the population is large, both the total size and the duration of an outbreak may still be appreciable. Indeed, in the limit as the population size tends to infinity, {when $R_0=1$,} both of these quantities have infinite expectation under any plausible modelling assumptions. In practice, there may be a cost associated with an individual contracting the disease being modelled, in which case it is of interest to determine vaccination strategies which reduce the expected value of the total cost of an outbreak to an acceptable level. Alternatively, it may be desired to control the duration of an outbreak, for example, if the presence of an outbreak means that restrictions are placed on the population within which it is spreading. {Clearly, for large populations, both of these aims necessitate that $R_0$ is reduced to somewhat less than one.} The above remarks pertain to the common situation of controlling an epidemic that is in its increasing phase. A different situation arises with diseases, such as measles and mumps, which are controlled by mass vaccination but small outbreaks still occur among unvaccinated individuals. Supplementary vaccination may be used to reduce the size or duration of such outbreaks (as in the illustrative example of mumps in Bulgaria in Section~\ref{illus} of this paper). A similar phenomenon occurs with pathogens, such as monkeypox virus, which primarily affect animals but spill over into human populations giving stuttering chains of human-to-human transmission (Lloyd-Smith \textit{et al.} \cite{lsmith}). In at least some of the above scenarios, it may be the case that a specific vaccination level cannot be achieved immediately but rather the fraction of the population that is vaccinated will be time-dependent. The aim of this paper is to develop a methodology based on branching processes for addressing the above issues in a unified fashion. Gonz\'alez \textit{et al.} \citep{gmb1,gmb2} studied properties of the time to extinction of an epidemic given that a fraction $c$ of individuals is vaccinated, when the number of infectious individuals in the population is modelled by a continuous-time BHBP and a (more general) continuous-time SBP, respectively. In an earlier work, De Serres \textit{et al.} \citep{dsgf} used a discrete-time Bienaym{\'e}--Galton--Watson branching process to study the spread of an infectious disease under various control measures, specifically to estimate the effective (i.e., post-control) value of $R_0$ from observations on size and durations of small outbreaks. The main objective in Gonz\'alez \textit{et al.} \citep{gmb1,gmb2} was to determine the optimal proportion of susceptible individuals which has to be vaccinated so that the mean (or given quantile of the) extinction time of the disease is less than some specified value. 
To that end, stochastic monotonicity and continuity properties of the distribution function and mean of the time that the infection survives, viewed as functions of the vaccination coverage rate, were first determined. In the present paper, we extend the results in Gonz\'alez \textit{et al.} \citep{gmb1,gmb2} in several directions that are both practically and theoretically important. First, we assume that the spread of infection is modelled as a CMJ branching process. The CMJ branching process is appropriate for modelling the early stages of a very wide variety of SIR epidemics, and includes both BHBP and SBP as special cases. Second, we consider more general vaccination processes. In Gonz\'alez \textit{et al.} \citep{gmb1,gmb2} it was assumed that the fraction of the population that is vaccinated remained constant with time. We now allow this fraction to be an arbitrary but specified function of time, thus capturing, for example, the setting in which people are vaccinated as the disease spreads. Third, we consider the control of more general functions of the epidemic process. Gonz\'alez \textit{et al.} \citep{gmb1,gmb2} focussed on controlling the duration of the epidemic. The methods developed in this paper are applicable to a wide class of functions of the epidemic process. In addition to the duration of an outbreak, this class includes, for example, the total number of people infected and the maximum number of infected people present during the epidemic. The methodology of the paper is very different from that of Gonz\'alez \textit{et al.} \citep{gmb1,gmb2}. The key stochastic monotonicity and continuity results in those papers were obtained by analysis of integral equations governing properties of the time to extinction of the branching process. In the present paper, a main tool is coupling and, in particular, a pruning method of constructing a realisation of a vaccinated process from that of the corresponding unvaccinated process. As indicated in Section~\ref{conc}, this methodology is very powerful and applicable to a broad range of processes. The remainder of the paper is organised as follows. In Section~\ref{modcoup}, we describe a very general model for an SIR epidemic in a closed, homogeneously mixing community and explain why its early spread may be approximated by a CMJ branching process. We introduce a very general vaccination process and give the basic coupling construction for obtaining a realisation of the vaccinated epidemic process from that of the unvaccinated process. The theoretical results of the paper are given in Section~\ref{sectmoncty}. In Section~\ref{monprune}, we introduce functions of a realisation of a CMJ branching process that are monotonically decreasing with pruning. Examples of such functions include the extinction time, the maximum population size over all time and the total number of births over all time. Then we prove, in general (that is, independently of the function), monotonicity and continuity properties of the mean (Section~\ref{monmean}), distribution function (Section~\ref{mondsn}) and quantiles (Section~\ref{monquantile}) of such functions. In Section~\ref{optimal}, we use the previous results to define optimal vaccination policies based on mean and quantiles. The theory is then specialised in Section~\ref{timeextinction} to the extinction time of an outbreak. The methodology is illustrated in Section~\ref{illus} with applications to mumps in Bulgaria, where vaccination is targeted at reducing the duration of an outbreak.
The paper ends with some concluding comments in Section~\ref{conc}. \section{Model and coupling construction} \label{modcoup} Consider first the following model for the spread of an epidemic in a closed, homogeneously mixing population. Initially there are $a$ infectives and $N$ susceptibles. Infectious individuals have independent and identically distributed life histories $\mathcal{H}=(I,\xi)$, where $I$ is the time elapsing between an individual's infection and his/her eventual removal or death and $\xi$ is a point process of times, relative to an individual's infection, at which infectious contacts are made. Each contact is with an individual chosen independently and uniformly from the population. If a contact is with an individual who is susceptible, then that individual becomes infected and itself makes contacts according to its life history. If a contact is with an individual who is not susceptible, then nothing happens. The epidemic ceases as soon as there is no infective present in the population. Note that, for simplicity, we assume that every infectious contact with a susceptible necessarily leads to that susceptible becoming infected. The model is easily extended to the situation when each contact with a susceptible is successful (i.e., leads to infection) independently with probability $p$ by letting $\mathcal{H}=(I,\xi')$, where $\xi'$ is a suitable thinning of $\xi$. The above model is essentially that introduced by Ball and Donnelly \citep{ball}, who noted that it included as special cases a range of specific models that had hitherto received considerable attention in the literature. For example, SIR and SEIR (Susceptible $\to$ Exposed (i.e., latent) $\to$ Infective $\to$ Removed) models come under the above framework. The only difference between the above model and that in Ball and Donnelly \citep{ball} is that, in the latter, each contact is with an individual chosen independently and uniformly from the $N$ initial susceptibles (rather than from the entire population of $N+a$ individuals). In Ball and Donnelly \citep{ball}, a coupling argument (which also holds for the present model) is used to prove strong convergence, as the number of initial susceptibles $N \to \infty$ (with the number of initial infectives $a$ held fixed), of the process of infectives in the epidemic model to a CMJ branching process (see Jagers \citep{jage}), in which a typical individual lives until age $I$ and reproduces at ages according to $\xi$. Thus for large $N$, the epidemic may be approximated by the CMJ branching process. The approximation assumes that every contact is with a susceptible individual. The proof in Ball and Donnelly \citep{ball} may be extended to epidemics other than SIR, for example, SIS (Susceptible $\to$ Infective $\to$ Susceptible) and SIRS (Susceptible $\to$ Infective $\to$ Removed $\to$ Susceptible), by suitably generalizing the life history $\mathcal{H}$ to allow for removed individuals to become susceptible again (see, e.g., Ball \citep{ball99} in the context of epidemics among a population partitioned into households). Indeed, for a very broad class of homogeneously mixing epidemic models, {that covers all of the common stochastic formulations of infectious disease spread,} the early stages of an epidemic in a large population with few initial infectives may be approximated by a CMJ branching process. This paper is concerned with the use of vaccination schemes to control an epidemic, for example, in terms of its duration or of the total number of individuals infected. 
We are thus interested in the short-term behaviour of the epidemic, so we model the epidemic as a CMJ branching process, $Z=\{Z(t)\dvtx t\geq0\}$, where $Z(t)$ denotes the number of infected individuals at time $t$. Thus $Z(0)$, which we assume to be fixed, represents the number of infected individuals at the beginning of the outbreak. We model the vaccination process by a function $\alpha\dvtx [0,\infty) \to[0,1]$, such that $\alpha(t)$ is the proportion of the population that is immune at time $t$ ($t\ge0$). Thus, the probability that a contact at time $t$ is with a susceptible (i.e., non-immune) individual is $1-\alpha(t)$. If the vaccine is perfect, that is, it confers immunity immediately with probability one, then $\alpha(t)$ is given by the proportion of the population that has been vaccinated by time $t$. If the vaccine is imperfect, then that is implicitly included in the function $\alpha$. For example, if the vaccine is all-or-nothing (i.e., it renders the vaccinee completely immune with probability $\varepsilon$, otherwise it has no effect), then $\alpha(t)=\varepsilon\tilde{\alpha}(t)$, where $\tilde{\alpha}(t)$ is the proportion of the population that has been vaccinated by time $t$. Note that if the immunity conferred by vaccination does not wane then $\alpha$ is nondecreasing in $t$. We denote by $Z_{\alpha}=\{Z_{\alpha}(t)\dvtx t\geq0\}$ the vaccination version of $Z$, in which each birth in $Z$ is aborted independently, with probability $\alpha(t)$ if the birth occurs at time $t$. Let $\mathcal{A}$ be the space of all functions $\alpha\dvtx [0,\infty) \to[0,1]$. We construct coupled realizations of $Z$ and $Z_{\alpha}$ $(\alpha\in\mathcal{A})$ on a common probability space $(\Omega, \mathcal{F},P)$ as follows. Let $(\Omega_1, \mathcal{F}_1,P_1)$ be a probability space on which are defined independent life histories $\mathcal{H}_1,\mathcal{H}_2,\dots\,$, each distributed as $\mathcal{H}$, which are pieced together in the obvious fashion to construct a realization of $Z$. More specifically, the life histories $\mathcal{H}_1,\mathcal{H}_2,\dots,\mathcal{H}_a$ are assigned to the $a$ initial infectives and, for $i=1,2,\dots\,$, the $i$th individual born in $Z$ is assigned the life history $\mathcal{H}_{a+i}$. Note that with this construction $Z$ may be viewed as a tree, which is augmented with birth and death times of branches. Let $(\Omega_2, \mathcal{F}_2,P_2)$ be a probability space on which is defined a sequence $U_1,U_2,\dots$ of independent random variables, each uniformly distributed on $(0,1)$. Let $(\Omega, \mathcal{F},P)=(\Omega_1 \times\Omega_2, \mathcal{F}_1 \times\mathcal{F}_2, P_1 \times P_2)$. Then, for $\alpha\in\mathcal{A}$, a realization of $Z_{\alpha}$ is constructed on $(\Omega, \mathcal{F},P)$ as follows. For $i=1,2,\dots\,$, let $b_i$ denote the time of the $i$th birth in $Z$, if such a birth occurs. Then this birth is deleted in $Z_{\alpha}$ if and only if $U_i\leq\alpha(b_i)$. If a birth is deleted in $Z_{\alpha}$, then none of the descendants of that individual in $Z$ occurs in $Z_{\alpha}$. Thus, if the $j$th birth in $Z$ is such a descendant, then $U_j$ is redundant in the construction of $Z_{\alpha}$. With the tree setting in mind, the process of deleting an individual and all of its descendants is called \emph{pruning}. For a previous use of pruning in a branching process framework see, for example, Aldous and Pitman \cite{aldous}.
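A minimal simulation sketch of this coupling, specialised for brevity to a Bellman--Harris reproduction scheme (exponential lifetimes, Poisson offspring numbers; all parameter values are illustrative assumptions): a single tree, carrying one uniform variable per birth, serves every $\alpha$ simultaneously.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def grow_tree(t_birth, mean_life=2.5, m=0.8, t_max=200.0):
    """One individual of the unvaccinated process Z. Returns
    (death_time, children); each child is (b_i, U_i, subtree), where
    b_i is its birth time and U_i ~ Uniform(0,1) is the coupling
    variable attached to that birth."""
    death = t_birth + rng.exponential(mean_life)
    children = []
    if death <= t_max:                   # truncate very long realizations
        for _ in range(rng.poisson(m)):  # births at the parent's death
            children.append((death, rng.uniform(),
                             grow_tree(death, mean_life, m, t_max)))
    return death, children

def extinction_time(tree, alpha):
    """Extinction time of Z_alpha built on the SAME realization of Z:
    the birth at time b survives iff U > alpha(b); a deleted birth
    removes its whole subtree (pruning)."""
    death, children = tree
    T = death
    for b, U, sub in children:
        if U > alpha(b):
            T = max(T, extinction_time(sub, alpha))
    return T
\end{verbatim}
Because the same tree and the same uniforms serve every vaccination function, whenever $\alpha\prec\alpha'$ the realization of $Z_{\alpha'}$ is pathwise a pruning of that of $Z_{\alpha}$; this is the property exploited repeatedly below.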
Finally, we give some notation concerned with functions in $\mathcal{A}$, which will be used throughout the paper. For $\alpha, \alpha' \in\mathcal{A}$, write $\alpha\prec\alpha'$ if $\alpha(t) \le\alpha'(t)$ for all $t \in[0,\infty)$. Also, for any $c \in[0,1]$ and any $t_0 \ge0$, define the function $\alpha_c^{t_0} \in\mathcal{A}$ by \[ \alpha_c^{t_0}(t)= \cases{ 0 & \quad \mbox{if } $t< t_0$, \cr c & \quad \mbox{if } $t\geq t_0$. } \] Thus, for example, $\alpha_c^0$ denotes the constant function equal to $c$ and $\alpha_0^0$ denotes the constant function equal to $0$. \section{Monotonicity and continuity properties depending on vaccination function \texorpdfstring{$\alpha$}{alpha}} \label{sectmoncty} \subsection{Functions \texorpdfstring{$f(Z_{\alpha})$}{f(Z alpha)} monotone to pruning} \label{monprune} Let $f(Z)$ be any nonnegative function of $Z$ taking values in the extended real line $\mathbb{R} \cup\{\infty\}$ and, for $\alpha\in\mathcal{A}$, let $\mu_{\alpha}^f=\mathrm{E}[f(Z_{\alpha})]$. Again with the tree setting in mind, we say that $f$ is monotonically decreasing with pruning, and write $f \in\mathcal{P}$, if $f(Z^P) \le f(Z)$ almost surely whenever $Z^P$ is obtained from $Z$ by pruning. For an event, $E$ say, let $1_E$ denote the indicator function of $E$. Examples of functions that are monotonically decreasing with pruning include: \begin{enumerate}[(iii)] \item[(i)] the extinction time $T=\inf\{t\ge0\dvtx Z(t)=0\}$ and $1_{\{T > t\}}$, where $t \in[0,\infty)$ is fixed; \item[(ii)] the maximum population size (number of infected individuals in the epidemic context) over all time, $M=\sup_{t \ge0} Z(t)$, and $1_{\{M>x\}}$, where $x \in[0,\infty)$ is fixed; \item[(iii)] $N(t)$, the total number of births (new infections in the epidemic context) in $(0,t]$, where $t \in[0,\infty)$ is fixed, and the total number of births over all time (outbreak total size in the epidemic context) $N(\infty)=\lim_{t \to\infty}N(t)$, together with the corresponding indicator functions $1_{\{N(t) > x\}}$ and $1_{\{N(\infty) > x\}}$, where $x \in[0,\infty)$ is fixed. \end{enumerate} Throughout the paper, we assume that $Z$ is non-explosive, that is, that $\mathrm{P}(N(t)<\infty)=1$ for any $t \in(0,\infty)$. Conditions which guarantee this property may be found in Jagers \citep{jage}, Section~6.2. \subsection{Monotonicity and continuity of mean of \texorpdfstring{$f(Z_{\alpha})$}{f(Z alpha)}} \label{monmean} In this subsection, we derive monotonicity and continuity properties of $\mathrm{E}[f(Z_{\alpha})]$, when viewed as a function of the vaccination process $\alpha$, for functions $f$ that are monotonically decreasing with pruning. \begin{theorem}\label{teoa1} If $\alpha, \alpha' \in\mathcal{A}$ satisfy $\alpha\prec\alpha'$ and $f \in\mathcal{P}$, then $\mu_{\alpha}^f \ge\mu_{\alpha'}^f$. \end{theorem} \begin{pf} The result follows immediately from the above construction of $Z$ and $Z_{\alpha}$, $\alpha\in\mathcal{A}$, on $(\Omega, \mathcal{F},P)$, since $f$ is monotonically decreasing with pruning and $Z_{\alpha'}$ may be obtained from $Z_{\alpha}$ by successive prunings. \end{pf} We now give conditions under which $\mu_{\alpha}^f$ is continuous in $\alpha$. For $\alpha,\alpha' \in\mathcal{A}$, let $\|\alpha-\alpha'\|=\sup_{t \in[0,\infty)}|\alpha(t)-\alpha'(t)|$ and, for $t>0$, let $\|\alpha-\alpha'\|_t=\sup_{s \in[0,t]}|\alpha(s)-\alpha'(s)|$. For $t>0$, write $f \in\mathcal{P}_t$ if $f \in\mathcal{P}$ and $f(Z)$ depends on $Z$ only through $\{Z(s)\dvtx 0 \le s \le t\}$. Let $m$ be the offspring mean for $Z$.
For $c \in[0,1]$, let $m_c$ denote the offspring mean of $Z_{\alpha_c^0}$, so $m_c=(1-c)m$. Further, let $c_{\mathrm{inf}}=\max(0,1-m^{-1})$ and note that $m_{c_{\mathrm{inf}}}\le1$. For $t_0 \ge0$ and $c \in[0,1]$, let \begin{eqnarray*} \mathcal{A}(c,t_0)=\bigl\{\alpha\in \mathcal{A}\dvtx \alpha(t) \ge c \mbox{ for all }t \ge t_0\bigr\}. \end{eqnarray*} \begin{theorem}\label{teoa2} \begin{enumerate}[(b)] \item[(a)] Fix $t>0$, let $f \in\mathcal{P}_t$ and suppose that there exists a non-negative real-valued function $\hat{f}$, with $\mathrm{E}[\hat{f}(Z)] < \infty$, such that, for $P$-almost all $\omega\in\Omega$, \begin{equation} \label{Xbound2} f\bigl(Z_{\alpha}(\omega)\bigr) \le\hat{f}\bigl(Z(\omega) \bigr)\qquad \mbox{for all } \alpha\in \mathcal{A}. \end{equation} Then, for each $\varepsilon>0$, there exists $\eta=\eta(\varepsilon)>0$ such that for all $\alpha, \alpha' \in \mathcal{A}$ satisfying $\|\alpha-\alpha'\|_t \le\eta$, \begin{equation} \label{mucontinuity} \bigl|\mu_{\alpha}^f-\mu_{\alpha'}^f\bigr| \le\varepsilon. \end{equation} \item[(b)] Suppose that $m < \infty$. Let $f \in\mathcal{P}$ and $t_0\geq 0$, and suppose that there exists a non-negative real-valued function $\hat{f}(Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}})$, with $\mathrm{E}[\hat{f}(Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}})] < \infty $, such that, for $P$-almost all $\omega\in\Omega$, \begin{equation} \label{Xbound1} f\bigl(Z_{\alpha}(\omega)\bigr) \le \hat{f} \bigl(Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}}(\omega)\bigr)\qquad \mbox{for all } \alpha\in \mathcal{A}(c_{\mathrm{inf}}, t_0). \end{equation} Then, for each $\varepsilon>0$, there exists $\eta=\eta(\varepsilon)>0$ such that (\ref{mucontinuity}) holds for all $\alpha, \alpha' \in \mathcal{A}(c_{\mathrm{inf}}, t_0)$ satisfying $\|\alpha-\alpha'\| \le\eta$. \end{enumerate} \end{theorem} \begin{pf} (a) For $n=1,2,\dots$ and $\alpha, \alpha' \in\mathcal{A}$, let \begin{eqnarray*} B_n\bigl(\alpha,\alpha'\bigr)= \bigcap _{i=1}^n \bigl\{\omega\in \Omega\dvtx U_i( \omega) \notin \bigl(\min\bigl(\alpha(b_i),\alpha'(b_i) \bigr),\max\bigl(\alpha(b_i),\alpha'(b_i) \bigr)\bigr]\bigr\}, \end{eqnarray*} and let $B_0(\alpha,\alpha')=\Omega$. Now $\mathrm{P}(N(t)<\infty)=1$, since $Z$ is non-explosive. Observe that if $\omega \in B_{N(t)}(\alpha,\alpha')$ then, by construction, $Z_{\alpha}(s,\omega)=Z_{\alpha'}(s,\omega)$ for all $s \in[0,t]$, whence $f(Z_{\alpha}(\omega))=f(Z_{\alpha'}(\omega))$ since $f \in \mathcal{P}_t$. Now, for any $\alpha \in\mathcal{A}$, \begin{eqnarray*} \mu_{\alpha}^f=\mathrm{E} \bigl[f(Z_{\alpha})1_{B_{N(t)}(\alpha,\alpha ')} \bigr]+\mathrm{E} \bigl[f(Z_{\alpha}) 1_{B_{N(t)}^c(\alpha,\alpha')} \bigr], \end{eqnarray*} where $B_{N(t)}^c(\alpha,\alpha')=\Omega\setminus B_{N(t)}(\alpha ,\alpha')$. Thus, for any $\alpha, \alpha' \in\mathcal{A}$, \begin{eqnarray*} \mu_{\alpha}^f-\mu_{\alpha'}^f=\mathrm{E} \bigl[f(Z_{\alpha}) 1_{B_{N(t)}^c(\alpha,\alpha')} \bigr]-\mathrm{E} \bigl[f(Z_{\alpha'}) 1_{B_{N(t)}^c(\alpha,\alpha')} \bigr], \end{eqnarray*} whence, since $f$ is nonnegative, \begin{eqnarray*} \bigl|\mu_{\alpha}^f-\mu_{\alpha'}^f\bigr| \le \mathrm{E} \bigl[\hat{f}(Z) 1_{B_{N(t)}^c(\alpha,\alpha')} \bigr]. \end{eqnarray*} Now \begin{eqnarray*} \mathrm{E} \bigl[\hat{f}(Z) 1_{B_{N(t)}^c(\alpha,\alpha')} \bigr]= \mathrm{E} \bigl[\hat{f}(Z) \mathrm{E} [1_{B_{N(t)}^c(\alpha,\alpha ')}|Z ] \bigr]. 
\end{eqnarray*} Further, (i) $Z$ determines $N(t)$ and (ii) $(U_1,U_2,\dots)$ is independent of $Z$, so, $P$-almost surely, \begin{eqnarray*} \mathrm{E} [1_{B_{N(t)}^c(\alpha,\alpha')}|Z ]&=& 1-\prod_{i=1}^{N(t)} \bigl(1-\bigl|\alpha(b_i)-\alpha'(b_i)\bigr|\bigr) \\ &\le&1-(1-\delta)^{N(t)}, \end{eqnarray*} where $\delta=\|\alpha-\alpha'\|_t$. Hence, $P$-almost surely, \begin{eqnarray*} \mathrm{E} [1_{B_{N(t)}^c(\alpha,\alpha')}|Z ] \le \mathrm{E} [1_{B_{N(t)}^c(\alpha_0^0,\alpha_{\delta}^0)}|Z ], \end{eqnarray*} whence, for $\alpha, \alpha' \in\mathcal{A}$, \begin{eqnarray} \label{hatmubound} \bigl|\mu_{\alpha}^f-\mu_{\alpha'}^f\bigr| &\le& \mathrm{E} \bigl[\hat {f}(Z)1_{B_{N(t)}^c(\alpha_0^0,\alpha_{\delta}^0)} \bigr] \nonumber \\[-8pt]\\[-8pt] &=&\hat{\mu}_t(\delta)\qquad \mbox{say}.\nonumber \end{eqnarray} Now $\mathrm{P}(N(t)<\infty)=1$, so $P$-almost surely, \begin{eqnarray*} \hat{f}(Z)1_{B_{N(t)}^c(\alpha_0^0,\alpha_{\delta}^0)} \to0 \qquad \mbox{as } \delta\downarrow0 \end{eqnarray*} (in fact $\hat{f}(Z)1_{B_{N(t)}^c(\alpha_0^0,\alpha_{\delta}^0)}=0$ for all $\delta\in[0,\delta^*)$, where $\delta^*= \min(U_1,U_2,\dots, U_{N(t)})$), so by the dominated convergence theorem $\hat{\mu}_t(\delta ) \to0$ as $\delta\downarrow0$. Thus, given $\varepsilon> 0$, there exists $\eta$ such that $\hat{\mu}_t(\delta) \le\varepsilon$ for all $\delta\in(0,\eta)$ and the theorem follows using (\ref{hatmubound}). (b) For $\alpha\in\mathcal{A}(c_{\mathrm{inf}}, t_0)$, the process $Z_\alpha$ can be viewed as a vaccinated version of the process $Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}}$ with vaccination function $\tilde {\alpha}$ given by \[ \tilde{\alpha}(t)= \cases{ \alpha(t) &\quad \mbox{if} $t< t_0$, \cr \displaystyle \frac{\alpha (t)}{1-c_{\mathrm{inf}}} & \quad \mbox{if} $t\geq t_0$. } \] Note that $Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}}$ has offspring mean $m$ until time $t_0$, and $m_{c_{\mathrm{inf}}}\le1$ after time $t_0$. Thus, since $Z$ is non-explosive (so $\mathrm {P}(Z(t_0)<\infty)=1$), the total number of births over all time in $Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}}$ (i.e., $N_{\alpha_{c_{\mathrm{inf}}}^{t_0}}(\infty)$) is finite almost surely. Also, $\|\tilde{\alpha}-\tilde{\alpha}'\| \le (1-c_{\mathrm{inf}})^{-1} \|\alpha-\alpha'\|$. The proof then proceeds as in part (a), but with $Z$ and $N(t)$ replaced by $Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}}$ and $N_{\alpha_{c_{\mathrm{inf}}}^{t_0}}(\infty)$, respectively, and $\alpha , \alpha'$ replaced by $\tilde{\alpha}, \tilde{\alpha}'$. \end{pf} \begin{remark}\label{rr1} \begin{enumerate}[(b)] \item[(a)] Suppose that $m \le1$. Then $c_{\mathrm{inf}}=0$ and it follows that $Z_{\alpha_{c_{\mathrm{inf}}}^{t_0}}=Z$ and $\mathcal {A}(c_{\mathrm{inf}}, t_0)=\mathcal{A}$. Thus, for any $f \in\mathcal {P}$, Theorem~\ref{teoa2}(b) implies that, for any $\varepsilon>0$, there exists $\eta=\eta(\varepsilon)>0$ such that (\ref{mucontinuity}) holds for all $\alpha, \alpha' \in \mathcal{A}$ satisfying $\|\alpha-\alpha'\| \le\eta$. \item[(b)] Suppose that $m > 1$ and $f \in\mathcal{P}$. Then the argument used to prove Theorem~\ref{teoa2}(b) breaks down since $\mathrm {P}(Z(\infty)<\infty)<1$. Thus, with our argument we can prove continuity in $\alpha$ of $\mu_\alpha^f$ for $f \in\mathcal{P}_t$, for any $t>0$, but not for $f \in\mathcal{P}$. However, this is no restriction from a practical viewpoint since $t$ in Theorem~\ref{teoa2}(a), or $t_0$ in Theorem~\ref{teoa2}(b), can be made arbitrarily large. 
For example, in any real-life setting there will be a maximum time frame over which it is of interest to evaluate the performance of a vaccination process, and $t$ or $t_0$ can be chosen accordingly. \end{enumerate} \end{remark} \subsection{Monotonicity and continuity of distribution function of~\texorpdfstring{$f(Z_{\alpha})$}{f(Z alpha)}} \label{mondsn} Using the previous results, we establish in this subsection monotonicity and continuity properties of the distribution function of $f(Z_{\alpha})$. For $f \in\mathcal{P}$ and $\alpha\in\mathcal{A}$, let \begin{eqnarray*} v_{\alpha}^f(x)=\mathrm{P}\bigl(f(Z_{\alpha}) \le x \bigr)=1-\mathrm{E}[1_{\{f(Z_{\alpha})>x\}}], \qquad x \ge0, \end{eqnarray*} be the distribution function of the random variable $f(Z_{\alpha})$. For $\alpha\in\mathcal{A}$ and $t \in[0,\infty]$, let $\phi_{N_\alpha(t)}(s)=\mathrm{E}[s^{N_\alpha(t)}]$ $(0 \le s \le1)$ denote the probability generating function of $N_\alpha(t)$. Suppose that $P(N_\alpha(t)<\infty)=1$. Then $\phi_{N_\alpha(t)}(1-)=1$ and $\phi_{N_\alpha(t)}^{-1}(u)$ is well defined for all $u \in[u_{\alpha,t},1]$, where $u_{\alpha,t}=\mathrm{P}(N_\alpha(t)=0)$. Extend the domain of $\phi_{N_\alpha(t)}^{-1}$ by defining $\phi_{N_\alpha(t)}^{-1}(u)=0$ for $u \in[0,u_{\alpha,t})$. Define the function $\delta_{\alpha,t}\dvtx [0,1] \to[0,1]$ by \begin{equation} \label{deltaepsilon} \delta_{\alpha,t}(\varepsilon)=1-\phi_{N_\alpha(t)}^{-1}(1-\varepsilon),\qquad 0 \le\varepsilon\le1. \end{equation} Note that $\delta_{\alpha,t}(\varepsilon)>0$ if $\varepsilon>0$ and $\lim_{\varepsilon\downarrow0} \delta_{\alpha,t}(\varepsilon)=0$. \begin{theorem}\label{teoa4} \begin{enumerate}[(b)] \item[(a)] Suppose that $f \in\mathcal{P}$ and $\alpha, \alpha' \in\mathcal{A}$ satisfy $\alpha\prec\alpha'$. Then \begin{equation} \label{valphainequ} v_{\alpha}^f(x) \le v_{\alpha'}^f(x) \qquad \mbox{for all } 0 \le x \le\infty. \end{equation} \item[(b)] Fix $t>0$ and suppose that $f \in\mathcal{P}_t$. Then, for any $\varepsilon>0$, \begin{equation} \label{nucontinuity} \sup_{0\leq x < \infty}\bigl|v_\alpha^f(x)-v_{\alpha'}^f(x)\bigr| \leq\varepsilon \end{equation} for all $\alpha, \alpha' \in\mathcal{A}$ satisfying $\|\alpha-\alpha'\|_t \le\delta_{\alpha_0^0,t}(\varepsilon)$. \item[(c)] Suppose that $f \in\mathcal{P}$. Then, for any $\varepsilon>0$, (\ref{nucontinuity}) holds for all $\alpha, \alpha' \in\mathcal{A}(c_{\mathrm{inf}},t_0)$ satisfying $\|\alpha-\alpha'\| \le\delta_{\alpha_{c_{\mathrm{inf}}}^{t_0},\infty}(\varepsilon)$. \end{enumerate} \end{theorem} \begin{pf} (a) Fix $x \in[0,\infty)$ and let $\tilde{f}_x$ be the function of $Z$ given by $\tilde{f}_x(Z)=1_{\{f(Z)>x\}}$. Then $\tilde{f}_x \in\mathcal{P}$ and (\ref{valphainequ}) follows from Theorem~\ref{teoa1}, since $v_{\alpha}^f(x)=1-\mathrm{E}[\tilde{f}_x(Z_{\alpha})]$. (b) For each $x \in[0,\infty)$, \begin{eqnarray*} \bigl|v_\alpha^f(x)-v_{\alpha'}^f(x)\bigr|=\bigl|\mathrm{E}\bigl[\tilde{f}_x(Z_{\alpha})\bigr]-\mathrm{E}\bigl[\tilde{f}_x(Z_{\alpha'})\bigr]\bigr| \end{eqnarray*} and $\tilde{f}_x(Z_{\alpha}(\omega)) \le1$ for all $\alpha\in\mathcal{A}$ and all $\omega\in\Omega$. Fix $t > 0$ and note that $\tilde{f}_x \in\mathcal{P}_t$, since $f \in\mathcal{P}_t$.
It then follows from (\ref{hatmubound}), taking $\hat{f}(Z)=1$, that, for $x \in[0,\infty)$ and $\alpha,\alpha' \in\mathcal{A}$, \begin{equation} \label{modvalpha} \bigl|v_\alpha^f(x)-v_{\alpha'}^f(x)\bigr| \le \hat{\mu}_t\bigl(\bigl\|\alpha-\alpha'\bigr\|_t \bigr), \end{equation} where, for $\delta\in[0,1]$, \begin{eqnarray*} \hat{\mu}_t(\delta)=\mathrm{P} \bigl(B_{N(t)}^c \bigl(\alpha_0^0,\alpha_{\delta }^0\bigr) \bigr)=1-\mathrm{E} \bigl[(1-\delta)^{N(t)} \bigr]=1-\phi _{N(t)}(1-\delta). \end{eqnarray*} Recall that $N(t)=N_{\alpha_0^0}(t)$ and note that $P(N_{\alpha_0^0}(t)<\infty)=1$ since $Z$ is non-explosive. Thus, $\phi_{N_{\alpha_0^0}(t)}^{-1}(u)$ is well defined for all $u \in [0,1]$ and, since $1-\phi_{N_{\alpha_0^0}(t)}(1-\delta_{\alpha_0^0,t}(\varepsilon))\le \varepsilon$, the theorem follows. (c) The proof is similar to part (b) but with $N_{\alpha_0^0}(t)$ replaced by $N_{\alpha_{c_{\mathrm {inf}}}^{t_0}}(\infty)$. \end{pf} \begin{remark} \label{rr2} \begin{enumerate}[(b)] \item[(a)] Observe that the function $\delta_{\alpha_0^0,t}$, defined using (\ref{deltaepsilon}), is independent of both $f$ and $x$, so the uniform continuity of $v_{\alpha}^f(x)$, with respect to $\alpha$, holds uniformly over all $f \in\mathcal{P}$ and all $x \in[0,\infty)$. \item[(b)] Similar to Remark~\ref{rr1}(a), Theorem~\ref{teoa4}(c) shows that if $m \le1$ (so $P(N(\infty)<\infty)=1$) and $f \in\mathcal{P}$ then, for any $\varepsilon>0$, (\ref{nucontinuity}) holds for all $\alpha, \alpha' \in\mathcal{A}$ satisfying $\|\alpha-\alpha'\| \le \delta_{\alpha_0^0,\infty}(\varepsilon)$. \end{enumerate} \end{remark} \subsection{Monotonicity and continuity of quantiles of \texorpdfstring{$f(Z_{\alpha})$}{f(Z alpha)}} \label{monquantile} In applications, we wish to control the quantiles of $f(Z_{\alpha})$, so we now derive related monotonicity and continuity properties. Fix $f \in\mathcal{P}$ and $\alpha\in\mathcal{A}$, and define, for $0<p<1$, \begin{eqnarray*} x_{\alpha,p}^f=\inf\bigl\{x\dvtx v_{\alpha}^f(x)\ge p\bigr\}, \end{eqnarray*} with the convention that $x_{\alpha,p}^f=\infty$ if $v_{\alpha}^f(x)< p$ for all $x \in[0,\infty)$. Thus, $x_{\alpha,p}^f$ is the quantile of order $p$ of the random variable $f(Z_{\alpha})$. For $\alpha\in\mathcal{A}$, let $\mathcal{A}^+(\alpha)=\{\alpha' \in\mathcal{A}\dvtx \alpha\prec\alpha'\}$. For a sequence $\{\alpha_n\}$ and $\alpha$ in $\mathcal{A}$, we define $\lim_{n \to\infty} \alpha _n=\alpha$ to mean $\lim_{n \to\infty}\|\alpha_n-\alpha\|=0$. \begin{theorem}\label{teoa5} Suppose that $f \in\mathcal{P}$ and $p \in(0,1)$. \begin{enumerate}[(b)] \item[(a)] If $\alpha,\alpha' \in\mathcal{A}$ satisfy $\alpha \prec\alpha'$, then $x_{\alpha',p}^f \le x_{\alpha,p}^f$. \item [(b)] Suppose further that $f \in\mathcal{P}_t$ for some $t>0$ and $\alpha\in\mathcal{A}$ is such that $x_{\alpha,p}^f < \infty$. Let $\{\alpha_n\}$ be any sequence in $\mathcal{A}$ satisfying $\lim_{n \to\infty} \alpha_n=\alpha$. Then $\lim_{n \to\infty} x_{\alpha_n,p}^f=x_{\alpha,p}^f$ in each of the following cases: \begin{enumerate}[(ii)] \item[(i)] $\alpha_n \in \mathcal{A}^+(\alpha)$ for all $n$; \item[(ii)] $v_{\alpha}^f$ is continuous and strictly increasing at $x_{\alpha,p}^f$. \end{enumerate} \end{enumerate} \end{theorem} \begin{pf} (a) By Theorem~\ref{teoa4}(a), $\{x\dvtx v_{\alpha}^f(x)\ge p\} \subseteq\{x\dvtx v_{\alpha'}^f(x)\ge p\}$, which implies $x_{\alpha',p}^f \le x_{\alpha,p}^f$. (b) Choose $t>0$ such that $f \in\mathcal{P}_t$. Suppose that (i) holds. 
Let $x_{\sup}=\limsup_{n \to\infty} x_{\alpha_n,p}^f$ and $x_{\mathrm{inf}}=\liminf_{n \to\infty} x_{\alpha_n,p}^f$. Then by part (a), $x_{\sup} \le x_{\alpha,p}^f$. Fix $\varepsilon>0$. Then, since $\lim_{n \to\infty} \alpha_n=\alpha$ and $\|\alpha_n-\alpha\|_t\le\|\alpha_n-\alpha\|$, there exists $n_0$ such that $\|\alpha_n-\alpha\|_t\le\delta_{\alpha_0^0,t}(\varepsilon)$ for all $n \ge n_0$, where $\delta_{\alpha_0^0,t}(\varepsilon)$ is defined at (\ref{deltaepsilon}) -- recall that $N(t)=N_{\alpha_0^0}(t)$. Now, $\alpha\prec\alpha_n$, hence, by Theorem~\ref{teoa4}(a) and (b), $v_{\alpha_n}^f(x)-v_{\alpha}^f(x)\le\varepsilon$, for all $x\ge 0$ and for all $n \ge n_0$. In particular, setting $x=x_{\alpha_n,p}^f$ and noting that $v_{\alpha_n}^f(x_{\alpha_n,p}^f) \ge p$ since $v_{\alpha_n}^f$ is right-continuous, yields that $v_{\alpha}^f(x_{\alpha_n,p}^f) \ge p-\varepsilon$ for all $n \ge n_0$. Hence, $v_{\alpha}^f(x_{\mathrm{inf}}) \ge p-\varepsilon$, since $v_{\alpha}^f$ is increasing and right-continuous. This holds for all $\varepsilon>0$, so $v_{\alpha}^f(x_{\mathrm{inf}}) \ge p$, whence $x_{\mathrm{inf}}\ge x_{\alpha,p}^f$. Thus, $x_{\mathrm{inf}}=x_{\sup}=x_{\alpha,p}^f$, so $\lim_{n \to\infty} x_{\alpha_n,p}^f=x_{\alpha,p}^f$, as required. Suppose that (ii) holds. First, we assume that $\alpha_n \prec \alpha$ for all $n$. Then, by part (a), $x_{\mathrm{inf}} \ge x_{\alpha,p}^f$. Note that $v_{\alpha}^f(x_{\alpha,p}^f)=p$, since $v_{\alpha}^f$ is continuous at $x_{\alpha,p}^f$, and $v_{\alpha}^f(x)>p$ for all $x>x_{\alpha,p}^f$, since $v_{\alpha}^f$ is strictly increasing at $x_{\alpha,p}^f$. Fix $x>x_{\alpha,p}^f$ and let $\varepsilon=v_{\alpha}^f(x)-p$, so $\varepsilon>0$. As before, there exists $n_0$ such that $\|\alpha_n-\alpha\|_t\le\delta_{\alpha_0^0,t}(\varepsilon)$ for all $n \ge n_0$. It then follows from Theorem~\ref{teoa4} that \begin{eqnarray*} v_{\alpha}^f(x)-v_{\alpha_n}^f(x) \le \varepsilon=v_{\alpha}^f(x)-p \qquad \mbox{for all }n \ge n_0. \end{eqnarray*} Thus $v_{\alpha_n}^f(x) \ge p$ for all $n \ge n_0$, whence $x_{\alpha_n,p}^f \le x$ for all $n \ge n_0$, which implies that $x_{\sup} \le x$. Since this holds for any $x>x_{\alpha,p}^f$, it follows that $x_{\sup} \le x_{\alpha,p}^f$, which combined with $x_{\mathrm{inf}} \ge x_{\alpha,p}^f$ yields the required result. Now, we consider an arbitrary sequence $\{\alpha_n\}$ that converges to $\alpha$. For $q=1,2,\dots\,$, define functions $\alpha^+_q$ and $\alpha^-_q$ by $\alpha^+_q(s)=\min\{\alpha(s)+\frac{1}{q},1\}$ and $\alpha^-_q(s)=\max\{\alpha(s)-\frac{1}{q},0\}$ $(s \ge0)$. Then $\lim_{q \to\infty} \alpha^+_q=\lim_{q \to\infty} \alpha ^-_q=\alpha$. Further, $\alpha^-_q \prec\alpha\prec\alpha^+_q$ for each $q=1,2,\dots\,$. Hence, by part (i) and the above, $\lim_{q \to\infty} x_{\alpha ^+_q,p}^f=\lim_{q \to \infty} x_{\alpha^-_q,p}^f=x_{\alpha,p}^f$. For any fixed $q \in \mathbb{N}$, $\alpha_n \prec\alpha^+_q$ for all sufficiently large $n$, so Theorem~\ref{teoa5}(a) implies that $\liminf_{n \to\infty} x_{\alpha_n,p}^f \ge x_{\alpha^+_q,p}^f$. Letting $q \to\infty$ then yields that $ x_{\mathrm{inf}} \ge x_{\alpha,p}^f$. A similar argument using the sequence $\{\alpha^-_q\}$ shows that $x_{\sup} \le x_{\alpha,p}^f$, whence $\lim_{n \to\infty} x_{\alpha_n,p}^f= x_{\alpha,p}^f$, as required. 
\end{pf} \begin{remark}\label{rr3} \begin{enumerate}[(b)] \item[(a)] It is straightforward to extend Theorem~\ref{teoa5}(b) to a family of vaccination processes with a continuous index set, for example, $\{\alpha_s:s \in\mathcal{I}\}$, where $\mathcal{I}$ is a connected subset of $\mathbb{R}^d$ for some $d \in\mathbb{N}$. Theorem~\ref{teoa5}(b) implies that, under appropriate conditions, $\lim_{s \to s^*}{x_{\alpha_s,p}^f}= x_{\alpha_{s^*},p}^f$. We use this extension when studying optimal vaccination policies in the next subsection. \item[(b)] Invoking Remark~\ref{rr2}(b) shows that if $m \le1$ then Theorem~\ref{teoa5}(b) holds with $\mathcal{P}_t$ replaced by $\mathcal{P}$. \end{enumerate} \end{remark} \subsection{Optimal vaccination policies based on mean and quantiles} \label{optimal} From the above monotonicity and continuity properties of mean and quantiles, we propose next how to choose optimal $\alpha$s, that is, optimal vaccination policies in a sense that is made clear below, from a subset $\mathcal{A}^*$ of $\mathcal{A}$. Fix $f\in\mathcal{P}$, $b> 0$ and $0<p<1$, and let $\mathcal{A}_b^f=\{\alpha\in\mathcal{A}^*\dvtx \mu_{\alpha}^f\leq b\}$ and $\mathcal{A}_{p,b}^f=\{\alpha\in\mathcal{A}^*\dvtx x_{\alpha ,p}^f\leq b\}$. {Notice that if, for example, $f$ is the time to extinction, then $\mathcal{A}_b^f$ and $\mathcal{A}_{p,b}^f$ comprise those vaccination policies in $\mathcal{A}^*$ for which the mean and the quantile of order $p$, respectively, of the time to extinction is less than or equal to some bound $b$. Then it is of interest to search for optimal vaccination policies which satisfy these properties.} Then, if they exist, optimal vaccination policies based on the mean are \[ \mathop{\operatorname{argmax}}_{\alpha\in\mathcal{A}_b^f} \mu _{\alpha}^f \] and optimal vaccination policies based on the quantiles are \[ \mathop{\operatorname{argmax}}_{\alpha\in\mathcal{A}_{p,b}^f} x_{\alpha,p}^f. \] We notice that the sets $\mathcal{A}_b^f$ and $\mathcal{A}_{p,b}^f$ can be empty. If they are not empty, optimal vaccination policies may not be unique when a total order is not defined on the sets $\mathcal{A}_b^f$ and $\mathcal{A}_{p,b}^f$. Otherwise, provided the conditions of Theorems \ref{teoa1}, \ref{teoa2} and \ref{teoa5} are {satisfied}, the monotonicity and continuity properties of mean and quantiles of $f(Z_{\alpha})$ proved in those theorems imply that there exist unique $\alpha_{\mathrm{opt},b}^f\in\mathcal{A}_b^f$ and $\alpha_{\mathrm{opt},p,b}^f\in\mathcal{A}_{p,b}^f$ such that \[ \mu_{\alpha_{\mathrm{opt},b}^f}^f=\max_{\alpha\in\mathcal{A}_b^f} \mu_{\alpha}^f \quad \mbox{and}\quad x_{\alpha_{\mathrm{opt},p,b}^f,p}^f=\max _{\alpha\in\mathcal{A}_{p,b}^f} x_{\alpha,p}^f. \] Intuitively, $\alpha_{\mathrm{opt},b}^f$ and $\alpha_{\mathrm{opt},p,b}^f$ are the smallest vaccination policies in $\mathcal{A}^*$ such that the mean\vspace*{2pt} and the $p$th quantile, respectively, of $f(Z_{\alpha_{\mathrm{opt},b}^f})$ and $f(Z_{\alpha_{\mathrm{opt},p,b}^f})$ are less than or equal to $b$. Before giving some simple examples of $\mathcal{A}^*$, we discuss briefly conditions that ensure the existence and uniqueness of optimal policies. For fixed $f \in\mathcal{P}$, define the binary relation $\prec_f$ on $\mathcal{A}$ by $\alpha\prec_f \alpha'$ if and only if $\mu_\alpha^f \le\mu_{\alpha'}^f$. Observe that, if $\alpha \prec\alpha'$ then, by Theorem~\ref{teoa1}, {$\alpha' \prec_f \alpha$} for any $f \in\mathcal{P}$. 
The relation $\prec_f$ is not an ordering, because {$\alpha\prec_f \alpha'$ and $\alpha' \prec_f \alpha$} imply only that $\mu_\alpha^f = \mu_{\alpha'}^f$ (and not that $\alpha=\alpha'$). However, we can consider the equivalence relation $\sim_f$ on $\mathcal{A}$ defined by $\alpha \sim_f \alpha'$ if and only if $\mu_\alpha^f = \mu_{\alpha'}^f$. Then $\prec_f$ is a total ordering on the quotient set $\mathcal{A}/\sim_f$, that is, the set of all possible equivalence classes, using the obvious definition of $\prec_f$ on $\mathcal{A}/\sim_f$. Given a subset $\mathcal{A}^*$ of $\mathcal{A}$, a simple condition that ensures the existence of ${\operatorname{argmax }}_{\alpha\in \mathcal{A}_b^f}\mu_{\alpha}^f$ for any fixed $b>0$ is that the set of real numbers $\{\mu_{\alpha}^f\dvtx \alpha\in\mathcal{A}^*\}$ is closed. More precisely, this ensures the existence of an equivalence class on which the maximum is attained. To obtain a unique maximum requires that $\prec_f$ is a total ordering on $\mathcal{A}^*$ (or at least on $\mathcal{A}_b^f$ for fixed $b$). Note that even if $\prec$ is a total ordering {on} $\mathcal{A}^*$, Theorem~\ref{teoa1} does not ensure that $\prec_f$ is a total ordering on $\mathcal{A}^*$. For the latter, we require that $\mu_\alpha^f>\mu_{\alpha'}^f$ for all $\alpha, \alpha' \in \mathcal{A}^*$ satisfying $\alpha\prec\alpha'$ and $\alpha\ne \alpha'$. The coupling argument in Section~\ref{modcoup} can be used to show that this holds for any practically useful $f$ and it is assumed implicitly in the sequel. Similar arguments to the above pertain for optimal vaccination policies based on quantiles. A simple example of $\mathcal{A}^*$ is the set of constant functions, that is, $\mathcal{A}^*=\{\alpha_c^0 \dvtx 0\leq c \leq1\}$. On this set, the total order is defined by the order of the real numbers. Another example is the set $\mathcal{A}^*=\{\alpha_{M,t_v,p_0}\dvtx M\geq0, 0\leq p_0\leq1, 0\leq t_v\leq{p_0^{-1}}\}$, where, for $s\geq0$, \begin{equation} \label{alphapara} \alpha_{M,t_v,p_0}(s)= \cases{ 0 & \quad \mbox{if }$s \leq M$, \cr p_0(s-M) & \quad \mbox{if} $M <s\leq M+t_v$, \cr t_vp_0 & \quad \mbox{if} $M+t_v<s$. } \end{equation} For fixed $M$, $t_v$ and $p_0$, the function $\alpha_{M,t_v,p_0}$ describes the proportion of immune individuals in the population when the vaccination process starts at time $M$, takes $t_v$ time units and the proportion of individuals vaccinated per unit time is $p_0$. We notice that a total order on $\mathcal{A}^*$ is not possible. However, in practice, $M$ and $p_0$ are usually known before vaccination begins, and therefore, the functions can be parameterized through $t_v$ alone. For fixed $M$ and $p_0$, denote $\alpha_{t_v}= \alpha_{M,t_v,p_0}$ and $\mathcal{A}^*=\{\alpha_{t_v}\dvtx {c_{\mathrm{inf}}p_0^{-1}} \le t_v \le p_0^{-1}\}$. Then $\prec_f$ is a total ordering on $\mathcal{A}^*$ and Theorem~\ref{teoa2}(b) ensures that $\{\mu_{\alpha}^f\dvtx \alpha \in\mathcal{A}^*\}$ is closed, so, provided $\mathcal{A}_b^f$ is non-empty, the optimal vaccination policy exists and is unique. Moreover, it and the corresponding optimal policies based on {the mean and} quantiles are given by $\alpha_{t_{\mathrm{opt}, \mu}^f}$ and $\alpha_{t_{{\mathrm{opt}}, p}^f}$, with \begin{eqnarray*} t_{\mathrm{opt}, \mu}^f=\inf\bigl\{t_v \dvtx \mu_{\alpha _{t_v}}^f\leq b \bigr\} \quad \mbox{and}\quad t_{\mathrm{opt}, p}^f= \inf\bigl\{t_v \dvtx x_{\alpha_{t_v},p}^f\leq b \bigr\}, \end{eqnarray*} respectively. 
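As an illustration, the following sketch (hypothetical helper names, building on the grow_tree and extinction_time sketch of Section~\ref{modcoup}) encodes the family (\ref{alphapara}) and searches a grid for $t_{\mathrm{opt}, p}^f$ with $f=T$, the extinction time, estimating the quantile by Monte Carlo from one fixed set of coupled realizations.
\begin{verbatim}
import numpy as np

def alpha_family(M, t_v, p0):
    """The vaccination function alpha_{M,t_v,p0}: vaccination starts
    at time M, lasts t_v time units, at per-unit-time rate p0."""
    return lambda s: 0.0 if s <= M else p0 * min(s - M, t_v)

# one fixed set of coupled realizations serves every candidate t_v
trees = [grow_tree(0.0) for _ in range(2000)]

def quantile_T(alpha, p=0.9):
    """Monte-Carlo estimate of the order-p quantile of T_alpha."""
    return float(np.quantile([extinction_time(tr, alpha)
                              for tr in trees], p))

def t_opt(b, M, p0, grid, p=0.9):
    """Smallest t_v on the grid whose policy drives the p-quantile of
    the extinction time below the bound b (None if unattainable)."""
    for t_v in sorted(grid):  # the quantile is nonincreasing in t_v
        if quantile_T(alpha_family(M, t_v, p0), p) <= b:
            return t_v
    return None
\end{verbatim}
Replacing np.quantile by np.mean gives the corresponding search based on $\mu_{\alpha}^f$; reusing one set of realizations for all candidates is precisely the Monte-Carlo use of the coupling construction described next.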
Finally, we notice that, usually, $\mu_{\alpha}^f$ and $x_{\alpha,p}^f$ cannot be derived in closed form. Therefore, in order to obtain optimal vaccination policies, we need to approximate them. The coupling construction can be used to give a Monte-Carlo based estimation, as in the sketch above. Suppose, for simplicity of argument, that $m\leq1$. Fix $n\geq1$; for $i=1,\ldots,n$, one can simulate a realization $Z^{(i)}$ of $Z$ and $U_j^{(i)}$ of $U_j$, for $j=1,2,\ldots,N^{(i)}(\infty)$, where $N^{(i)}(\infty)$ is the total number of births in $Z^{(i)}$. For each $\alpha\in\mathcal{A}^*$, we then obtain a realization $f(Z_{\alpha}^{(i)})$ of $f(Z_{\alpha})$, for $i=1,\ldots,n$. From these realizations, we estimate $\mu_{\alpha}^f$ and $x_{\alpha,p}^f$. \subsection{Time to extinction} \label{timeextinction} We specialise the preceding results to the case when evaluation of a vaccination strategy $\alpha$ is based on the associated distribution of the time to extinction of the virus in an outbreak. To this end, for $z\in\mathbb{N}$, we denote by $T_{\alpha,z}$ the time to extinction of the process $Z_{\alpha}$ when $Z(0)=z$, that is, \[ T_{\alpha,z}=\inf\bigl\{t\geq0\dvtx Z_{\alpha}(t)=0\bigr\}. \] Thus, $T_{\alpha,z}$ is the maximal time that the infection survives in the population in an outbreak when the time-dependent proportion of immune individuals is given by $\alpha$ and the number of infected individuals at the beginning of the outbreak is $z$. Now individuals infect independently of each other, so we have that \[ T_{\alpha,z} = \max\bigl\{ T_{\alpha,1}^{(1)},T_{\alpha,1}^{(2)}, \ldots, T_{\alpha,1}^{(z)}\bigr\}, \] where the $T_{\alpha,1}^{(i)}$ are independent random variables with the same distribution as $T_{\alpha,1}$. Hence \[ \mathrm{P}(T_{\alpha,z}\leq t)=\bigl(v_{\alpha}(t)\bigr)^z, \] where $v_{\alpha}(t)=\mathrm{P}(T_{\alpha,1}\leq t)$. Therefore, to analyze the behaviour of $T_{\alpha,z}$, for any $z$, it is sufficient to study $T_{\alpha,1}$ through $v_{\alpha}$. From now on, we denote $T_{\alpha,1}$ by $T_{\alpha}$. We first use the results of Section~\ref{sectmoncty} to derive some continuity and monotonicity properties of the distribution function $v_{\alpha}$. When every individual is immune, that is, $\alpha(t)=1$ for all $t > 0$, the infectious disease does not spread to any susceptible individual, and the extinction time is then given by the survival time of the initial infected individual. It stands to reason that if there are non-immune individuals in the population, then it is probable that the infectious disease takes more time to become extinct. In the following result, which is an immediate application of Theorem~\ref{teoa4}(a) with $f=T$, we show this fact by investigating the behaviour of $v_{\alpha}$ as a function of $\alpha$. \begin{corollary}\label{teob1} Suppose that $\alpha, \alpha' \in\mathcal{A}$ satisfy $\alpha\prec\alpha'$. Then $v_{\alpha}(t)\leq v_{\alpha'}(t)$, for all $t\geq0$. \end{corollary} Intuitively, it is clear that the greater the proportion of immune individuals, the more likely it is that the infectious disease disappears quickly. Consequently, for any $\alpha\in\mathcal{A}$, the distribution function $v_{\alpha}$ is bounded above by $v_{\alpha_1^0}$, the distribution function of the survival time of the initial infected individual, and bounded below by $v_{\alpha_0^0}$, which is not necessarily a proper distribution function.
Moreover, we obtain that minor changes in the proportion of the immune individuals generate only minor changes in the distribution of outbreak duration. The following result is an immediate application of Theorem~\ref{teoa4}(b), (c) with $f=T$. \begin{corollary}\label{teob2} \begin{enumerate}[(b)] \item[(a)] Fix $t>0$. Then, for each $\varepsilon>0$, \begin{eqnarray*} \sup_{0\leq u \leq t}\bigl|v_\alpha(u)-v_{\alpha'}(u)\bigr|\leq\varepsilon, \end{eqnarray*} for all $\alpha, \alpha' \in\mathcal{A}$ satisfying $\|\alpha-\alpha'\|_t \le\delta_{\alpha_0^0,t}(\varepsilon)$. \item[(b)] Fix $t_0\geq0$. Then, for each $\varepsilon>0$, \begin{eqnarray*} \sup_{0\leq t < \infty}\bigl|v_\alpha(t)-v_{\alpha'}(t)\bigr|\leq\varepsilon, \end{eqnarray*} for all $\alpha, \alpha' \in\mathcal{A}(c_{\mathrm{inf}},t_0)$ satisfying $\|\alpha-\alpha'\| \le\delta_{\alpha_{c_{\mathrm{inf}}}^{t_0},\infty}(\varepsilon)$. \end{enumerate} \end{corollary} Finally, we consider the quantiles of $T_{\alpha}$. For $\alpha\in\mathcal{A}$ and $0 < p < 1$, let $t_{\alpha,p}=\inf\{t\dvtx v_{\alpha}(t)\ge p\}$ be the quantile of order $p$ of $T_{\alpha}$. \begin{corollary}\label{teob3} \begin{enumerate}[(b)] \item[(a)] If $\alpha, \alpha' \in\mathcal{A}$ satisfy $\alpha\prec\alpha'$, then $t_{\alpha',p} \le t_{\alpha,p}$ for every $0<p<1$. \item[(b)] Suppose that $\alpha\in\mathcal{A}$ and $0<p<1$ are such that $t_{\alpha,p}<\infty$ and $v_\alpha$ is continuous and strictly increasing at $t_{\alpha,p}$. Then $\lim_{n \to\infty}t_{\alpha_n,p}=t_{\alpha,p}$, for any sequence $\{\alpha_n\}$ in $\mathcal{A}$ satisfying $\lim_{n \to\infty} \alpha_n=\alpha$. \end{enumerate} \end{corollary} \begin{pf} \begin{enumerate}[(b)] \item[(a)] The result follows directly from Theorem~\ref{teoa5}(a), on setting $f=T$. \item[(b)] Let $t=t_{\alpha,p}+1$ and $f=\min\{T,t\}$, so $f \in\mathcal{P}_t$. The conditions on $t_{\alpha,p}$ and $v_\alpha$ ensure that $t_{\alpha,p}=x_{\alpha,p}^f$ for all $\alpha\in\mathcal{A}$. The result then follows immediately from Theorem~\ref{teoa5}(b).\qed \end{enumerate} \noqed \end{pf} Corollary~\ref{teob3} can be extended to a family of vaccination processes with a continuous index set; cf. Remark~\ref{rr3}(b). In order to apply Corollary~\ref{teob3}, we need to determine conditions which guarantee that $v_{\alpha}$ is both continuous and strictly increasing. \begin{theorem}\label{teob4} Suppose that the lifetime random variable $I$ is continuous. Then, for any $\alpha\in\mathcal{A}$, $v_{\alpha}$ is a continuous distribution function. \end{theorem} \begin{pf} Let $B_0=0$ and, for $n=1,2,\dots\,$, let $B_n$ denote the time of the $n$th birth in $Z$, with the convention that $B_n=\infty$ if $N(\infty)<n$. For $n=0,1,\dots,N(\infty)$, let $I_n$ and $D_n=B_n+I_n$ denote, respectively, the lifetime and time of death of the $n$th individual born in $Z$. Let $\mathcal{D}=\{D_0,D_1,\dots,D_{N(\infty)}\}$ denote the random set of all death-times in $Z$. Observe that, for any $t>0$ and any $\alpha\in\mathcal{A}$, $T_{\alpha}=t$ only if $t \in\mathcal{D}$. Thus, it is sufficient to show that $\mathrm{P}(t \in\mathcal{D})=0$ for any $t>0$. Fix $t>0$ and define $D_n=\infty$ for $n>N(\infty)$. Then, since $\mathrm{P}(N(t)<\infty)=1$, \begin{equation} \label{PDt} \mathrm{P}(t \in\mathcal{D})=\mathrm{P} \Biggl(\bigcup_{n=0}^{\infty}\{D_n=t\} \Biggr)\le\sum_{n=0}^{\infty}\mathrm{P}(D_n=t).
\end{equation} Further, for $n=0,1,\dots\,$, \begin{eqnarray*} \label{PDnt} \mathrm{P}(D_n=t)&=&\mathrm{P}\bigl(N(t) \ge n\bigr) \mathrm{P}\bigl(D_n=t|N(t)\ge n\bigr) \\ &=&\mathrm{P}\bigl(N(t) \ge n\bigr)\mathrm{E}_{B_n|N(t)\ge n}\bigl[\mathrm {P} \bigl(D_n=t|B_n,N(t)\ge n\bigr)\bigr] \\ &=&\mathrm{P}\bigl(N(t) \ge n\bigr)\mathrm{E}_{B_n|N(t)\ge n}\bigl[\mathrm {P} \bigl(I_n=t-B_n|B_n,N(t)\ge n\bigr)\bigr] \\ &=&\mathrm{P}\bigl(N(t) \ge n\bigr)\mathrm{E}_{B_n|N(t)\ge n}\bigl[\mathrm {P}(I_n=t-B_n)\bigr] \\ &=&0, \end{eqnarray*} since $I_n$ is independent of both $B_n$ and $\{N(t)\ge n\}$, and $I$ is continuous. It then follows from (\ref{PDt}) that $\mathrm{P}(t \in\mathcal {D})=0$, which completes the proof. \end{pf} We notice that under weak conditions, the function $v_\alpha$ is strictly increasing. Indeed, let $R$ be the number of points of $\xi$ in $[0,I]$, so $R$ is a random variable giving the number of offspring of a typical individual in the CMJ branching process $Z$. Suppose that $\mathrm{P}(R=0)>0$ and that $I|R=0$ is an absolutely continuous random variable, having density $f_{I|R=0}$ satisfying $f_{I|R=0}(t)>0$ for all $t \in(0,\infty)$. Then it is easily seen that, for any $\alpha\in\mathcal{A}$, $v_{\alpha}$ is strictly increasing on $(0,\infty)$, since, for any open interval $(a,b)$ in $(0,\infty)$, the probability that the initial individual has no offspring and dies in $(a,b)$ is strictly positive. It is straightforward to give conditions under which $v_{\alpha}$ is strictly increasing on $(0,\infty)$ when $I$ has bounded support. For example, suppose that $\mathrm{P}(R=0)$ and $\mathrm{P}(R=1)$ are both strictly positive, and $I|R=0$ and $B|R=1$ are both absolutely continuous with densities that are strictly positive on $(0,t_I)$, for some $t_I > 0$. Here, $B$ is the age that a typical individual has his/her first child. Then, given any interval $(a,b) \subset(0,\infty)$, there exists $n_0 \in \mathbb{N}$ such that with strictly positive probability (i) each of the first $n_0$ individuals in $Z$ has precisely one child, (ii) the ($n_0+1$)th individual in $Z$ has no children and (iii) $T \in(a,b)$. It then follows that $\mathrm{P} (T_{\alpha} \in (a,b) )>0$, provided $\alpha(t)<1$ for all $t>0$. \begin{figure} \includegraphics{551f01.eps} \caption{Numbers of new infected individuals weekly reported.} \label{f1} \end{figure} \section{Illustrative example: Analyzing the control measures for mumps in Bulgaria} \label{illus} As an illustration of how to apply our theoretical results and to show their usefulness, we analyze a mumps data set from Bulgaria. In Bulgaria, an increasing number of new cases of individuals infected with mumps has been observed in recent years (see Figure~\ref{f1}). This may be a result of a poor immunization of birth cohorts 1982--1992 (see Kojouharova \textit{et al.} \citep{euro}). In such a situation, it is necessary to provide supplementary doses of mumps, measles and rubella (MMR) vaccine targeted at those cohorts in order to shorten the duration of the outbreaks. Thus our objective is to determine, using the observed data, optimal vaccination levels based on the time to extinction that guarantee, with a high probability, that the outbreak durations will be less than some suitable bound. {As an example, we determine} the percentage of the target cohort that must be vaccinated to guarantee that only primary and first-generation cases will be observed in at least 90\% of outbreaks. 
In order to apply our results, we model the spread of mumps by a CMJ branching process. This is reasonable since mumps is an infectious disease which follows the SEIR scheme, and in general, the early stages of outbreaks following this scheme can be approximated by a CMJ branching process. Although this is the general situation, a deeper discussion is needed in the case of mumps. This disease concerns predominantly young people in schools and universities, which means small separate populations and population-dependent propagation. Hence, the approximation of mumps outbreaks in these populations by CMJ processes is valid only when outbreaks are very short, which is the case for the outbreaks we study, as we show later. \begin{figure} \includegraphics{551f02.eps} \caption{Numbers of new infected individuals per week for the provinces of Bulgaria with the highest incidence of mumps.} \label{f1_bis} \end{figure} The data we analyze (reported by the Bulgarian Ministry of Health) are the total numbers of new mumps cases observed weekly in each province of Bulgaria from 2005 to 2008, among individuals belonging to the poorly immunized birth cohorts. Notice that we do not observe outbreak durations, so, first, we describe the procedure to derive the outbreak durations from these data. Then, taking into account the main features of mumps transmission, we select an appropriate general branching process to describe the evolution of infected individuals in an outbreak and estimate its main parameters from the data set. Finally, once the model is fitted, we propose optimal vaccination levels based on the quantiles of the outbreak duration. \subsection{Deriving the outbreak duration} Our first task is to determine the behaviour of mumps outbreak durations in Bulgaria from 2005 to 2008, since our optimal vaccination level is based on outbreak duration. However, outbreak durations have not been registered; only the total number of new cases of individuals infected with mumps in each province has been observed (see Figure~\ref{f1_bis}). Thus, instead, we derive the outbreak durations from this data set, taking into account the main features of mumps transmission. Mumps is a viral infectious disease of humans and spreads from person to person through the air. The period between a person contracting mumps and that person first showing symptoms is called the incubation period for mumps. This incubation period can be 12 to 25 days and the average is 16 to 18 days. The infectious period (i.e., when an individual is able to transmit the mumps virus to others) starts about 2 days before the onset of symptoms and, usually, an individual with mumps symptoms is immediately isolated from the population (see \url{http://kidshealth.org}). In view of the range of the incubation period, we consider that an outbreak is formed by the cases that appear in a province in a sequence of weeks with no more than three consecutive weeks without cases. That is, when we observe more than three consecutive weeks without cases, we consider that the outbreak has become extinct, with the next outbreak starting in the first subsequent week in which there is at least one new case (a code sketch of this rule is given below). Applying this procedure for each province, we have obtained 262 outbreaks. The left plot in Figure~\ref{zbar} could represent one such outbreak initiated by one infected individual.
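The segmentation rule just described is straightforward to implement; the following sketch (a hypothetical helper with an illustrative input; counting weeks from the first to the last observed case is one natural convention) splits one province's weekly counts into outbreaks:
\begin{verbatim}
def split_outbreaks(weekly_cases, max_gap=3):
    """More than `max_gap` consecutive weeks without cases ends the
    current outbreak; the next one starts at the next nonzero week."""
    outbreaks, current, zeros = [], [], 0
    for c in weekly_cases:
        if c > 0:
            if current:
                current.extend([0] * zeros)  # gap inside an outbreak
            current.append(c)
            zeros = 0
        elif current:
            zeros += 1
            if zeros > max_gap:              # outbreak declared extinct
                outbreaks.append(current)
                current, zeros = [], 0
    if current:
        outbreaks.append(current)
    return outbreaks

# e.g. two outbreaks, lasting five and two weeks:
print([len(ob) for ob in split_outbreaks([0,1,0,0,2,1,0,0,0,0,3,1])])
\end{verbatim}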
In this schematic representation, we have considered that the infectious period is \emph{negligible} due to the fact that infected individuals are immediately isolated when they show symptoms. The variable $Z_t$ denotes the underlying branching process, which is not observed. The segments over/under $Z_t$ indicate the lengths of time for which $Z_t$ takes the corresponding values. The tick marks on the axis represent weeks, and $\bar Z_n$ the number of new cases observed during the $n$th week. Indeed, $\bar Z_n$, $n \geq0$, are the variables that are observed. In this context, by outbreak duration we mean the time elapsing from the appearance of the first case until the isolation of the last one, that is, the time to extinction of the branching process minus the incubation period of the first individual. Thus, a more accurate way to approximate outbreak duration from the observed data is by the total number of weeks until extinction of the virus (giving an error, due to discretization, of at most one week), yielding seven weeks in the outbreak of Figure~\ref{zbar} (left). \begin{figure} \includegraphics{551f03.eps} \caption{Left: Schematic representation of an outbreak. $Z_t$ denotes the underlying branching process and $\bar Z_n$ the number of new cases in the $n$th week. Right: Durations for outbreaks started with one infected individual.} \label{zbar} \end{figure} For each of the 262 outbreaks, we calculated the total number of weeks until extinction of the virus (and, also, the outbreak size, i.e., the total number of infected individuals). We noticed that the behaviour of these outbreak durations depends on the initial number of infected individuals. Hence, we have considered only those outbreaks which started with one infected individual, a total of 144. We checked that both outbreak duration and outbreak size were homogeneous between provinces (Kruskal--Wallis test: $p$-values 0.4763 and 0.4782, resp.) and consequently assumed that disease propagation in the different provinces consists of independent replications of the same process. Thus, the right plot in Figure~\ref{zbar} shows the histogram of outbreak durations for all 144 outbreaks started with one infected individual. We observe two different groups, outbreaks whose duration is less than 10 weeks (comprising 134 outbreaks) and another group where the outbreak duration is greater than 10 weeks (comprising the remaining ten outbreaks). Possibly this happens because some cases observed in a given week did not arise from cases of previous weeks, so that new outbreaks may have appeared overlapping in time. Hence, we consider that the outbreaks corresponding to durations of this last group may have been initiated no more than 10 weeks before. Thus, outbreak durations greater than 10 weeks have been removed from our study, and only durations less than 10 weeks have been considered in order not to overestimate the duration of the outbreaks. Nevertheless, an outbreak with apparent duration less than 10 weeks could actually be the superposition of two or more separate outbreaks, but we cannot determine this. \begin{figure} \includegraphics{551f04.eps} \caption{Left: Durations for outbreaks started with one infected individual without overlapping. Right: Simulated durations from a BHBP for outbreaks started with one infected individual.} \label{f2} \end{figure} The left plot of Figure~\ref{f2} shows the durations of the 134 outbreaks considered.
We notice that 83\% of these outbreaks have only one infected individual, so their outbreak durations are 0. The remaining 17\% of outbreaks seem to have a cyclical behaviour with period given by the mean of the incubation period (approximately 2.5 weeks). \subsection{Modelling mumps transmission} \label{mumps} As noted above, mumps is a contagious disease of humans that is spread from person to person through the air. The most common method of transmission is through coughing or sneezing, which can spread droplets of saliva and mucus infected with the mumps virus. Hence, when an infected person coughs or sneezes, the droplets atomize and can enter the eyes, nose, or mouth of another person. Following mumps transmission, a person does not immediately become sick. Once the virus enters the body, it travels to the back of the throat, nose and lymph glands in the neck, where it begins to multiply. As indicated previously, this period between mumps transmission and the beginning of mumps symptoms is the incubation period for mumps. People who have mumps are most contagious from 2 days before symptoms begin to 6 days after they end, and transmission may occur at any time in that period. Since an individual with mumps symptoms is immediately isolated from the population, the infectious period is very short in comparison with the incubation period, so, as indicated previously, we assume that transmission occurs only at the end point of an individual's incubation period. This assumption simplifies the mathematical model and does not strongly influence outbreak duration. As the end of the incubation period means that an individual's viral load has reached a given threshold to produce clinical signs, we assume that the mean number of individuals infected by an infected individual is constant and does not depend on the length of his/her incubation period. An earlier analysis of these mumps data using Bienaym{\'e}--Galton--Watson branching processes is given in Angelov and Slavtchova-Bojkova \citep{angelov2012}. However, the above observations imply that the Bellman--Harris branching process (BHBP) (see Athreya and Ney \citep{athreya}) is a more appropriate model for mumps transmission and indeed it provides an improved fit to these data. Recall that a BHBP is a CMJ branching process in which an individual reproduces only at the end of his/her life-time, according to an offspring law which is the same for all individuals. In the epidemiological context, age is the incubation period and the reproduction law is the contagion distribution. Next, we describe the incubation period and contagion distributions used to model mumps transmission in each outbreak in Bulgaria by means of the same BHBP (recall that we did not find any difference in the behaviour of the outbreaks in different provinces). We assume that the incubation period $I$ follows a gamma distribution, with shape parameter $r>0$ and rate $\gamma>0$, so $I$ has mean $r\gamma^{-1}$ and probability density function \begin{eqnarray*} f_I(u)=\frac{\gamma^ru^{r-1}\exp(-\gamma u)}{\Gamma(r)},\qquad u>0, \end{eqnarray*} where $\Gamma$ is the gamma function, and that the contagion distribution follows a Poisson distribution with mean $m$. These distributions are appropriate for the incubation period and the number of infections, respectively (see, e.g., Daley and Gani \citep{daleygani}, Farrington and Grant \citep{fg}, Farrington \textit{et al.} \citep{fkg} or Mode and Sleeman \citep{modesleemam}).
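For concreteness, the time to extinction of such a BHBP can be simulated directly. The sketch below (illustrative Python under the stated modelling assumptions, not the authors' R implementation) uses gamma incubation periods and Poisson contagion, and already anticipates vaccination: a proportion $c$ of immune individuals is modelled by \emph{pruning} each potential infection independently with probability $c$:

\begin{verbatim}
import numpy as np

def bhbp_extinction_time(m, r, rate, c=0.0, z=1,
                         rng=np.random.default_rng(0)):
    """Extinction time of a Bellman-Harris process started by z
    individuals: incubation ~ Gamma(r, rate), offspring ~ Poisson(m),
    each potential infection pruned independently with probability c.
    Suitable for the subcritical case m(1-c) < 1 considered here."""
    deaths = list(rng.gamma(r, 1.0 / rate, size=z))  # isolation times
    t = 0.0
    while deaths:
        t = min(deaths)              # next individual to show symptoms
        deaths.remove(t)
        # infections occur at the end of the incubation period and
        # survive the vaccination pruning with probability 1 - c
        kids = rng.binomial(rng.poisson(m), 1.0 - c)
        deaths.extend(t + rng.gamma(r, 1.0 / rate, size=kids))
    return t
\end{verbatim}

With time measured in days, the parameters fitted below correspond to \texttt{r = 50}, \texttt{rate = 50/17} and \texttt{m = 0.3163}; dividing the output by 7 gives times in weeks.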
Intuitively, $m$, the mean number of individuals infected by an infected individual, represents the transmission potential of the virus. Taking into account that the incubation period is estimated between 12 and 25 days and the average is 16 to 18 days, we consider the gamma distribution with mean 17 and $r=50$, which implies that the incubation period in $98.7\%$ of individuals is between 12 and 25 days. To estimate $m$, we consider the maximum likelihood estimator (MLE) based on the total number of births in independent extinct realisations of a BHBP. The total number of births in a BHBP has the same distribution as that in a Bienaym\'{e}--Galton--Watson branching process with the same offspring distribution. In our application, the offspring distribution is Poisson and it follows that the total number of births $N(\infty)$ (excluding the initial $a$ individuals) follows a Borel--Tanner distribution with probability mass function \[ \mathrm{P}\bigl(N(\infty)=k\bigr)=\frac{a m^k (a+k)^{k-1} \mathrm{e}^{-(a+k)m}}{k!}, \qquad k=0,1,\dots. \] (Note that, for $l=1,2,\dots\,$, the mean number of births in the $l$th generation is $a m^l$, so the expectation of this Borel--Tanner distribution is $E[N(\infty)]=a(m+m^2+\dots)=am(1-m)^{-1}$, when~$m<1$.) It follows that the MLE of the offspring mean $m$, based on $L$ independent realisations, is given by $\hat m = (\sum_{i=1}^L n^{(i)})(\sum_{i=1}^L a^{(i)} + n^{(i)})^{-1}$, where, for $i=1,2,\ldots,L$, $a^{(i)}$ and $n^{(i)}$ are respectively the initial number of individuals and the total number of births in the $i$th realisation (for details see Farrington \textit{et al.} \cite{fkg}). In our case $L=134$, $\sum_{i=1}^L a^{(i)}=134$ and $\sum_{i=1}^L n^{(i)}=62$, whence $\hat m = 0.3163$. Note that inference based on duration of outbreaks is less sensitive to underreporting than that based on the total number of births. However, estimating the offspring law based on the time to extinction of each outbreak turns into a difficult problem in branching process theory, even for the simplest model (see, e.g., Farrington \textit{et al.} \citep{fkg}). Applying the general theory of branching processes, since the estimated value of $m$ is less than~1, we deduce that mumps transmission can still occur in Bulgaria, but such spread cannot lead to a large-scale epidemic. This fact is consistent with Figures~\ref{f1} and \ref{f1_bis}. Although the epidemic becomes extinct, it can have different levels of severity. One measure of severity is the mean size of an outbreak, excluding the initial case, viz. $m(1-m)^{-1}$, which in our case is estimated by 0.463. However, we are concerned with the problem of how to shorten outbreak durations by vaccination. To this end, we analyze the random variable $T_{\alpha_{c_{\mathrm{inf}}}^0}$, the time to extinction of a BHBP with incubation period and contagion distributions as described above. Note that $c_{\mathrm{inf}}=0$, as $m\le1$, so here $T_{\alpha_{c_{\mathrm{inf}}}^0}$ is the extinction time when there is no supplementary vaccination. The variable $T_{\alpha_{c_{\mathrm{inf}}}^0}$ includes the incubation period of the initial individual, which is not observed in practice. Thus, from now on, we use the random variable $\widetilde{T}_{\alpha_{c_{\mathrm{inf}}}^0}$, the difference between $T_{\alpha_{c_{\mathrm{inf}}}^0}$ and the incubation period of the initial individual (i.e., the definition of outbreak duration given in the previous subsection) to model mumps outbreak duration in Bulgaria.
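The estimate $\hat m$ and the Borel--Tanner law above are easy to check numerically; a minimal Python sketch (our own notation, mirroring the formulas just displayed):

\begin{verbatim}
from math import lgamma, log, exp

def borel_tanner_pmf(k, m, a):
    """P(N(infinity) = k) for the Borel-Tanner(a, m) law, 0 < m < 1."""
    log_p = (log(a) + k * log(m) + (k - 1) * log(a + k)
             - (a + k) * m - lgamma(k + 1))
    return exp(log_p)

def mle_offspring_mean(initials, totals):
    """MLE of m from independent extinct outbreaks: initials[i] and
    totals[i] are a^(i) and n^(i) of the i-th realisation."""
    n, a = sum(totals), sum(initials)
    return n / (a + n)

# Bulgarian data: 134 outbreaks, each started by one individual,
# with 62 further births in total -> 62 / (134 + 62) = 0.3163
\end{verbatim}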
The right plot in Figure~\ref{f2} shows a histogram of $10\,000$ simulated durations of outbreaks (rounded up to the nearest integer), each initiated by one infected individual and modelled by a BHBP with the above parameters. We notice that in 72.9\% of these simulated outbreaks the initial infected individual does not infect any new individual (recall 83\% for the real data). Moreover, the simulated outbreak durations show the same cyclical behaviour as seen in the real data. Comparing real and simulated durations, we deduce that mumps outbreak durations in Bulgaria can be modelled by the variable $\widetilde{T}_{\alpha_{c_{\mathrm{inf}}}^0}$ (Pearson's chi-squared test: $p$-value 0.2951, grouping the tail for values greater than 8). \subsection{Determining the optimal vaccination levels} Once we have fitted the model, in order to apply our theoretical results we have assumed that the proportion of immune individuals is constant with time, since, generally, vaccination is applied when an individual is a child and the disease spreads when he/she is a teenager. In the particular case of supplementary vaccination for Bulgarian mumps, for simplicity we assume that this vaccination process occurs simultaneously across the country (e.g., in secondary schools at the same specific time). To determine the optimal vaccination levels, we denote by $\widetilde{T}_{\alpha_{c}^0}$ the difference between $T_{\alpha_{c}^0}$ and the incubation period of the initial individual, when the proportion of immune individuals in the population is $c$, with $0\leq c\leq1$. In the same way as was proved for $T_{\alpha_{c}^0}$ (see Corollary~\ref{teob3}), we deduce that $\widetilde{T}_{\alpha_{c}^0}$ has the same quantile properties depending on $c$ as $T_{\alpha_{c}^0}$ (notice that $\widetilde{T}_{\alpha_{c}^0}$ is monotonically decreasing with pruning). Therefore, next we propose vaccination policies based on the quantiles of $\widetilde{T}_{\alpha_{c}^0}$, with $0\leq c\leq 1$. Specifically, for fixed $p$ and $t$, with $0 < p < 1$ and $t > 0$, we seek vaccination policies which guarantee that the mumps virus becomes extinct in each outbreak, with probability greater than or equal to $p$, not later than time $t$ after the outbreak has been detected with $z$ initial infected individuals, that is\looseness=-1 \begin{eqnarray*} c_{\mathrm{opt}}=c_{\mathrm{opt}}(z,p,t)= \inf\bigl\{c\dvtx 0\leq c\leq 1,{x}^{\widetilde{T}}_{\alpha_c^0,p^{1/z}}\leq t\bigr\}, \end{eqnarray*}\looseness=0 where ${x}^{\widetilde{T}}_{\alpha_c^0,p^{1/z}}$ denotes the quantile of order $p^{1/z}$ of the variable $\widetilde{T}_{\alpha_c^0}$.\vadjust{\goodbreak} \begin{figure} \includegraphics{551f05.eps} \caption{Left: Behaviour of the distribution function of $\widetilde{T}_{\alpha_{c}^0}$ for $c=0,0.4,0.8$. Right: Behaviour of ${x}^{\widetilde{T}}_{\alpha_c^0,0.9^{1/5}}$ depending on $c$, with $0\leq c\leq1$.} \label{f3} \end{figure} As an illustration, we take $z=5$, $p=0.9$ and $t=3$, with time measured in weeks. First we justify these values. Consider the value of $z$. Since the number of infected individuals at the beginning of an outbreak is unknown, we bound it by the greatest number of individuals infected by one infected individual. Taking into account that the contagion distribution is Poisson and the estimate of $m$, we obtain the upper bound to be $5$, and therefore we take $z=5$.
Moreover, we select $t=3$, which, taking into account the features of the incubation period, guarantees that only primary and first-generation cases will be observed. Since in our situation the estimated value of $m$ is less than~1, to approximate $c_{\mathrm{opt}}$, we need to obtain the empirical distribution of $\widetilde{T}_{\alpha_{c}^0}$, for $0\leq c\leq 1$, using the Monte-Carlo method described in Section~\ref{optimal}. To this end, for each $c=0.01k$, with $k=0,\ldots,100$, $100\,000$ processes have been simulated and their duration calculated. The left plot in Figure~\ref{f3} shows the behaviour of the empirical distribution function of $\widetilde{T}_{\alpha_{c}^0}$ for several values of $c$. Notice that as $c$ increases, the outbreak duration decreases in a continuous way, in accordance with Corollaries \ref{teob1} and \ref{teob2}. The right plot in Figure~\ref{f3} shows the behaviour of ${x}^{\widetilde{T}}_{\alpha_c^0,0.9^{1/5}}$ depending on $c$, which is in accordance with Corollary~\ref{teob3}. Since ${x}^{\widetilde{T}}_{\alpha_{c_{\mathrm{inf}}}^0,0.9^{1/5}}=6.97$, our model estimates that the duration of 90\% of outbreaks in Bulgaria is less than 6.97 weeks if vaccination is not applied (in our real data 97\% of outbreaks have durations less than 6 weeks). In order to shorten the outbreak duration, from our study we deduce that $c_{\mathrm{opt}}(5,0.9,3)=0.6$ (see the right plot in Figure~\ref{f3}). Therefore, vaccinating a proportion of 60\% of susceptible individuals in the target cohort guarantees that in at least 90\% of outbreaks of mumps in Bulgaria only primary and first-generation cases will be observed after the vaccination. Finally, we notice that $c_{\mathrm{opt}}(5,0.9,0)=0.94$, that is, to guarantee that at least 90\% of outbreaks do not spread after vaccination, the vaccination level should be 94\% of susceptible individuals in the target cohort. \begin{table} \tablewidth=\textwidth \tabcolsep=0pt \caption{Sensitivity analysis on the mean and shape parameter of the gamma incubation distribution} \label{sa} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}d{2.1}llllll@{}} \hline \multicolumn{1}{l}{Mean}& &\multicolumn{5}{c}{Shape parameter $r$}\\ [-5pt] & &\multicolumn{5}{c}{\hrulefill}\\ & &\multicolumn{1}{l}{30} & \multicolumn{1}{l}{40}& \multicolumn{1}{l}{50} & \multicolumn{1}{l}{60}& \multicolumn{1}{l}{70} \\ \hline 16 &\% Coverage & 92.2&95.3&97.1&98.8&98.8\\ & $c_{\mathrm{opt}}(5,0.9,3)$&0.60&0.57&0.56&0.54&0.54\\ 16.5 &\% Coverage & 93.0&96.6&98.1&98.9&99.4\\ & $c_{\mathrm{opt}}(5,0.9,3)$&0.63&0.60&0.58&0.56&0.55\\ 17 &\% Coverage & 94.9&95.5&98.7&99.3&99.6\\ & $c_{\mathrm{opt}}(5,0.9,3)$&0.66&0.64&0.60&0.58&0.57\\ 17.5 &\% Coverage & 95.4&97.9&99.0&99.5&99.8\\ & $c_{\mathrm{opt}}(5,0.9,3)$&0.70&0.67&0.65&0.62&0.61\\ 18 &\% Coverage & 95.3&97.8&99.0&99.5&99.8\\ & $c_{\mathrm{opt}}(5,0.9,3)$&0.73&0.71&0.68&0.65&0.64\\ \hline \end{tabular*} \end{table} The parameters of the gamma distribution used to model the incubation period have been derived from knowledge of mumps transmission rather than estimated from data. Thus we have performed a sensitivity analysis of their influence on the optimal vaccination level. We have considered gamma distributions with mean and shape parameter $r$ taking values in a grid (giving different probabilities for the incubation period belonging to the range 12--25 days, which we denote as percentages of coverage), yielding the results shown in Table~\ref{sa}; a sketch of these computations is given below.
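Both the quantile search for $c_{\mathrm{opt}}$ and the coverage percentages of Table~\ref{sa} admit a compact Monte-Carlo sketch. The Python code below is again our own illustration (the paper's computations were done in R); it reuses the pruning idea of the simulator sketched in Section~\ref{mumps}, subtracts the initial incubation period to obtain the observed duration $\widetilde{T}$, and exploits the monotonicity of the quantiles in $c$ (Corollary~\ref{teob3}) to stop at the first admissible $c$:

\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def duration(m, r, rate, c, rng):
    """One outbreak duration: extinction time minus the incubation
    period of the initial case (0 if nobody else is infected)."""
    i0 = rng.gamma(r, 1.0 / rate)
    kids = rng.binomial(rng.poisson(m), 1.0 - c)
    deaths = list(i0 + rng.gamma(r, 1.0 / rate, size=kids))
    t = i0
    while deaths:
        t = min(deaths)
        deaths.remove(t)
        k = rng.binomial(rng.poisson(m), 1.0 - c)
        deaths.extend(t + rng.gamma(r, 1.0 / rate, size=k))
    return t - i0

def c_opt(z=5, p=0.9, t=3.0, m=0.3163, r=50, rate=50/17.0, nsim=10_000):
    rng = np.random.default_rng(1)
    q = p ** (1.0 / z)
    for c in np.linspace(0.0, 1.0, 101):   # grid c = 0.01k
        d = np.array([duration(m, r, rate, c, rng)
                      for _ in range(nsim)]) / 7.0   # days -> weeks
        if np.quantile(d, q) <= t:
            return c
    return 1.0

# coverage column of the table: P(12 <= I <= 25) for Gamma(r, mean mu)
def coverage(mu, r):
    return gamma.cdf(25, r, scale=mu / r) - gamma.cdf(12, r, scale=mu / r)
\end{verbatim}

In practice one would raise \texttt{nsim} towards the $100\,000$ replications used in the paper; with the parameters above, one would expect the search to approximate the reported $c_{\mathrm{opt}}(5,0.9,3)=0.6$.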
One can observe that increasing the mean (holding $r$ fixed) clearly increases the duration of the epidemic, leading to higher values of $c_{\mathrm{opt}}$. Moreover, increasing the shape parameter $r$ (holding the mean fixed) decreases the variance of lifetimes and hence also the chance of a long outbreak duration, leading to lower values of $c_{\mathrm{opt}}$. The optimal vaccination level $c_{\mathrm{opt}}(5,0.9,3)$ is fairly stable in the vicinity of the chosen values of 17 and 50 for the mean and shape parameter $r$, respectively. \begin{remark} From a computational point of view it is interesting to note that to find optimal vaccination policies, the simulation method based on \emph{pruning}, described at the end of Section~\ref{optimal}, has proved to be at least 17\% faster than those in Gonz\'alez \textit{et al.} \citep{gmb1,gmb2}, which are also simulation-based methods but work directly with the distribution of the extinction time. For the BHBP there exist other methods to approximate the distribution function of the time to extinction based on numerically solving an associated integral equation (see Mart{\'\i}nez and Slavtchova-Bojkova \citep{rm}, which includes a comparison with simulation-based methods). Unlike the latter approach, the Monte-Carlo method proposed in Section~\ref{optimal} is easily extended to time-dependent vaccination processes. All the computations and simulations have been made with the statistical computing and graphics language and environment $\mathbf{R}$ (``GNU S'', see \cite{r}). \end{remark} \section{Concluding comments} \label{conc} The coupled pruning technique for proving monotonicity and continuity properties of functions defined on CMJ branching processes depending on the vaccination function $\alpha$ is both simple and powerful. It is clear that the proofs generalise easily to more general branching processes, such as multitype CMJ branching processes, time-inhomogeneous branching processes and branching processes in a random environment. The function $\alpha$ does not have to represent vaccination. It could represent any control of disease propagation that has the effect of reducing either the number of susceptibles or the probability that a contacted susceptible becomes infected. However, for the coupled pruning technique to work it is necessary that, in the branching process setting, the control affects only the probability that a birth is aborted and not the intrinsic reproduction law of the branching process. Thus, for example, the method cannot be applied to density-dependent processes, such as population-size-dependent branching processes, if the density dependence relates to the size of the unvaccinated population rather than the total population size. Given that the results in the Bulgarian mumps illustration are based on simulation alone, it may seem more appropriate to use an epidemic model rather than a branching process that approximates such a model. However, there are several advantages in using the simpler branching process formulation. First, branching process models can be fitted directly to the data more easily; in particular they do not require knowledge of the size of the population in which the outbreaks are occurring. Second, the coupled pruning technique enables the monotonicity and continuity properties pertaining to vaccination functions to be proved easily. Third, the coupled pruning technique yields an associated Monte-Carlo method for determining optimal vaccination processes.
The framework for optimal vaccination policies studied in Section~\ref{optimal} can be extended to include alternative formulations of optimal policies. For example, one may define a cost $c(\alpha)$ associated with each vaccination process $\alpha \in\mathcal{A}$ and then seek vaccination processes from a subset $\mathcal{A}^*$ of $\mathcal{A}$ which either (i) minimise $c(\alpha)$ subject to $\mu_\alpha^f \le b$ or (ii) minimise $\mu_\alpha^f$ subject to $c(\alpha) \le c_0$, where $c_0$ is specified. Provided the cost function $c(\alpha)$ is suitably monotonic and continuous in $\alpha$ and $\mathcal{A}^*$ is totally ordered, Theorems \ref{teoa1} and \ref{teoa2} imply the existence of unique such optimal vaccination processes, and it should be possible to extend the Monte-Carlo algorithm at the end of Section~\ref{optimal} to estimate the optimal vaccination processes. Optimal vaccination policies that permit vaccination costs to be taken into account are especially relevant in animal vaccination. \section*{Acknowledgements} We thank the referees for their careful reading of our paper and for their constructive comments which have improved its presentation. The research was partially supported by the Ministerio de Econom{\'\i}a y Competitividad and the FEDER through the Plan Nacional de Investigaci\'on Cient{\'\i}fica, Desarrollo e Innovaci\'on Tecnol\'ogica, grant MTM2012-31235, and by the appropriated state funds for research allocated to Sofia University, grant 125/2012, Bulgaria.
\section{Introduction} A regular closed embedded plane curve can be parameterized as $X: S^1\rightarrow \mathbb{R}^2, \varphi\mapsto (x(\varphi), y(\varphi))$. If the total length of $X$ is $L$, $X$ is usually parameterized as $X: [0, L] \rightarrow\mathbb{R}^2, s\mapsto (x(s), y(s))$, where $s$ is the arc length parameter. Denote by $T(s)$ the unit tangential vector at a point $X(s)$ and by $N(s)=N_{in}(s)$ the inward unit normal vector such that every ordered pair $(T(s), N(s))$ determines a positive orientation of the plane. The curve $X$ is called \textbf{star-shaped} if there exists a point $O$ inside the region bounded by $X$ such that, taking $O$ as the origin of the coordinates, $$\det (X(s), T(s)) = \begin{vmatrix} x(s) & y(s) \\ x^{\prime}(s) & y^{\prime}(s) \end{vmatrix} >0$$ for every $s$, and the point $O$ is called a star center of the curve. If $X$ is a $C^2$ curve then its (relative) curvature is defined as $$\kappa(s) := \left\langle \frac{\partial T}{\partial s}(s), ~N(s)\right\rangle.$$ In this paper, we will investigate the evolution behaviour of a star-shaped centrosymmetric plane curve under Gage's {\bf area-preserving flow (GAPF)} \cite{Gage-1986}, that is to say, we will consider the following Cauchy problem \begin{equation}\label{eq:1.1.201909} \left\{\begin{array}{l} \frac{\partial X}{\partial t}(\varphi, t) = \left(\kappa-\frac{2\pi}{L}\right)N \ \ \ \text{in} \ \ S^1\times (0, \omega),\\ X(\varphi, 0)= X_0(\varphi) \ \ \ \ \ \ \text{on} \ \ S^1, \end{array} \right. \end{equation} where $X: S^1\times [0, \omega)\rightarrow \mathbb{R}^2 ((\varphi, t)\mapsto (x, y))$ is a family of closed curves with $X_0$ being star-shaped and centrosymmetric about the origin $O$ of the plane, $\kappa=\kappa(\varphi, t)$ the curvature and $L=L(t)$ the length of $X(\cdot, t)$. Gage (\cite{Gage-1986}) proved that the evolving curve $X(\cdot, t)$ under this flow can be deformed into a circle if the initial curve $X_0$ is convex. As Gage pointed out, there is a kind of simple closed curve which may develop singularities in a finite time under the flow (\ref{eq:1.1.201909}); this is verified by Mayer's numerical experiment (see \cite{Mayer-2001}). So, unlike Grayson's theorem \cite{Grayson-1987} for the {\bf curve shortening flow (CSF)}, there is no convergence result of GAPF for generic simple closed initial curves. A natural question is whether there is a class of non-convex initial curves which may become convex and converge to a circle under the flow (\ref{eq:1.1.201909}). To guarantee such a convergence, referring to Gage's example (see \cite{Gage-1986, Mayer-2001}), we should assume that the initial curve $X_0$ cannot be too wildly concave. In this paper, we consider $X_0$ to be star-shaped and centrosymmetric and obtain the following main result: \begin{theorem}\label{thm:1.1.201909} Let $X_0$ be a smooth, embedded and star-shaped curve in the plane. If $X_0$ is centrosymmetric then Gage's area-preserving flow (\ref{eq:1.1.201909}) with this initial curve exists globally, makes the evolving curve convex in a finite time and deforms it into a circle as time tends to infinity. \end{theorem} A flow is called global if the evolving curve is smooth for all $t\in [0, +\infty)$. For an initial star-shaped curve in the plane, it is still unknown whether Gage's area-preserving flow exists globally or not (see Lemma \ref{lem:2.5.201909} and Remark \ref{rem:3.4.201909}).
If $X_0$ is only star-shaped, it seems that the flow (\ref{eq:1.1.201909}) may not preserve the star-shapedness of the evolving curve during the evolution process. This causes essential difficulties in understanding the asymptotic behavior of $X(\cdot, t)$. The extra symmetry assumption on $X_0$ is inspired by Prof. Michael Gage \cite{PC-2018} (see also the early work \cite{Gage-1993, Gage-Li-1994} by Gage and Li). The proof of Theorem \ref{thm:1.1.201909} is divided into two parts. In the first part, i.e. the global existence of the flow, the star-shapedness of the evolving curve plays an essential role; see Lemma \ref{lem:3.7.201909} and Corollary \ref{cor:3.8.201909}. To show that $X(\cdot, t)$ is star-shaped, $X_0$ needs to be centrosymmetric. If $X_0$ is centrosymmetric with respect to the point $O$, the key idea in this part is to show that the evolving curve never touches the point $O$, via a comparison with the evolution behavior of the famous CSF; see Lemmas \ref{lem:3.1.201909}--\ref{lem:3.6.201909} for the details. In the second part, i.e. the convergence of the evolving curve, some ideas in Grayson's papers \cite{Grayson-1987, Grayson-1989} are adopted; see Lemmas \ref{lem:4.2.201909}--\ref{lem:4.3.201909}. Once $X(\cdot, t)$ is proved to be star-shaped, the polar angle $\theta$ can be used as a parameter of the evolving curve. In order to make $\theta$ independent of time, one can add a tangent component to the original flow to get a new one: \begin{equation}\label{eq:1.2.201909} \left\{\begin{array}{l} \frac{\partial X}{\partial t}(\varphi, t) = \alpha T + \left(\kappa - \frac{2\pi}{L}\right)N \ \ \ \text{in} \ \ S^1\times (0, \omega),\\ X(\varphi, 0)= X_0(\varphi) \ \ \ \ \ \ \text{on} \ \ S^1. \end{array} \right. \end{equation} The tangent component $\alpha(\varphi, t)T(\varphi, t)$ (which does not influence the shape of $X$; see Proposition 1.1 on page 6 of \cite{Chou-Zhu}) will be determined in the next section. GAPF \cite{Gage-1986} has also been considered by Wang, Wo and Yang \cite{Wang-Wo-Yang-2018} in the case where the initial curve is closed and locally convex. They have studied some asymptotic behaviours of the evolving curve, including the convergence for global flows and some blow-up properties. For higher dimensional cases, one can refer to Huisken's volume-preserving flow of convex hypersurfaces \cite{Huisken-1987} and its generalization by Kim and Kwon \cite{Kim-Kwon-2018} to the case of star-shaped hypersurfaces with a so-called $\rho$-reflection property. This paper is organized as follows. In Section 2, some basic properties of the flow (\ref{eq:1.1.201909}) are obtained, including the short time existence and a property of $X(\cdot, t)$ which implies its star-shapedness. In Section 3, it is proved that the flow (\ref{eq:1.1.201909}) exists on the time interval $[0, +\infty)$. And in the final section, the proof of Theorem \ref{thm:1.1.201909} is completed. \section{Preparation} \setcounter{equation}{0} Given a curve $X$ in the plane, its ``support function" is defined by\footnote{$p$ is usually called the support function of $X$ in convex geometry, see, for example, \cite{Gru-2007, Hs-1981, Sch-2014}.} $$ p=-\langle X, N\rangle.$$ If the curve is expressed as $X(s)=(x(s), y(s))$, $s\in[0,L]$, where $L$ is the length of $X$, then its unit tangent and normal vector fields can be written as $$T(s)=(\dot{x}(s), \dot{y}(s)), \ \ \ \ \ \ \ N(s)=(-\dot{y}(s), \dot{x}(s)), $$ where ``$\cdot$" stands for the derivative with respect to the arc length parameter $s$.
Since $$p(s)=x(s)\dot{y}(s)-\dot{x}(s)y(s)=\det (X(s), T(s)),$$ $X$ is star-shaped with respect to the origin $O$ if and only if $p(s)>0$ for all $s$. Using the polar coordinate system $(r, \theta)$ for the plane, a smooth and closed curve can be expressed as $$X(s)=r(\theta)P(\theta),$$ where $\theta=\theta(s)$ and $P(\theta)=(\cos\theta, \sin\theta)$. Let $Q(\theta)=(-\sin\theta, \cos\theta)$; then the unit tangential vector of $X$ can be given by $$T = \frac{dr}{ds}P + r\frac{d\theta}{ds} Q,$$ and furthermore, since $\det (P(\theta), Q(\theta))=1$, \begin{equation}\label{eq:2.1.201909} \det (X(s), T(s))=r^2(s) \dot{\theta}(s). \end{equation} If $X$ is closed and star-shaped then one can choose the origin $O$ of our frame so that $r(s)>0$ and $\det (X, T) > 0$. The equation (\ref{eq:2.1.201909}) implies that one can use the polar angle $\theta$ to parameterize a star-shaped plane curve. Now let us deal with the flow (\ref{eq:1.1.201909}) with a star-shaped initial value $X_0$. We shall first derive some evolution equations and determine the tangential component $\alpha T$ to make the polar angle $\theta$ independent of time. Then (\ref{eq:1.1.201909}) can be reduced to a Cauchy problem of a single equation for the radial function $r=r(\theta, t)$. After that, some basic properties of the flow (\ref{eq:1.1.201909}) will be explored in this section. Let $g:=\sqrt{\langle \frac{\partial X}{\partial \varphi}, \frac{\partial X}{\partial \varphi}\rangle}$ be the metric of the evolving curve. Set $\beta=\kappa-\frac{2\pi}{L}$. Under the flow (\ref{eq:1.2.201909}), $g$ evolves according to \begin{eqnarray*} \frac{\partial g}{\partial t} = \frac{1}{g}\left\langle\frac{\partial}{\partial t}\frac{\partial X}{\partial \varphi}, \frac{\partial X}{\partial \varphi}\right\rangle =g\left\langle\frac{\partial}{\partial s}(\alpha T+\beta N), T\right\rangle =\left(\frac{\partial\alpha}{\partial s}-\beta\kappa\right)g. \end{eqnarray*} The interchange of the operators $\partial/\partial s$ and $\partial/\partial t$ is given by \begin{eqnarray*} \frac{\partial}{\partial t}\frac{\partial}{\partial s}=\frac{\partial}{\partial t}\left(\frac{1}{g}\frac{\partial}{\partial \varphi}\right) =\frac{\partial}{\partial s}\frac{\partial}{\partial t}-\left(\frac{\partial\alpha}{\partial s}-\beta\kappa\right)\frac{\partial}{\partial s}. \end{eqnarray*} $T$ and $N$ evolve according to \begin{eqnarray*} &&\frac{\partial T}{\partial t}=\frac{\partial }{\partial t}\frac{\partial X}{\partial s} =\frac{\partial }{\partial s}\frac{\partial X}{\partial t}-\left(\frac{\partial\alpha}{\partial s}-\beta\kappa\right)T =\left(\alpha\kappa+\frac{\partial\beta}{\partial s}\right)N, \\ &&\frac{\partial N}{\partial t}=\left\langle\frac{\partial N}{\partial t}, T\right\rangle T +\left\langle\frac{\partial N}{\partial t}, N\right\rangle N =-\left(\alpha\kappa+\frac{\partial\beta}{\partial s}\right)T. \end{eqnarray*} If there is a family of star-shaped curves evolving under the flow (\ref{eq:1.2.201909}) then we can express the evolving curve as \begin{eqnarray}\label{eq:2.2.201909} X(\theta, t)=r(\theta, t)P(\theta). \end{eqnarray} Noticing that $\frac{\partial X}{\partial \theta}=\frac{\partial r}{\partial \theta}P+rQ$, we obtain \begin{eqnarray}\label{eq:2.3.201909} g=\left\|\frac{\partial X}{\partial \theta}\right\|=\left(r^2+\left(\frac{\partial r}{\partial \theta}\right)^2\right)^{1/2}, ~~ T=\frac{\partial r}{\partial s}P+\frac{r}{g}Q, ~~ N=-\frac{r}{g}P+\frac{\partial r}{\partial s}Q.
\end{eqnarray} Differentiating the right hand side of (\ref{eq:2.2.201909}) and using (\ref{eq:1.2.201909}) and (\ref{eq:2.3.201909}), one gets \begin{eqnarray*} \frac{\partial r}{\partial t}P+r\frac{\partial \theta}{\partial t}Q=\alpha T+\beta N =\left(\alpha\frac{\partial r}{\partial s}-\frac{r\beta}{g}\right)P +\left(\frac{\alpha r}{g}+\beta\frac{\partial r}{\partial s}\right) Q. \end{eqnarray*} Comparing the coefficients on both sides yields the following evolution equations: \begin{eqnarray}\label{eq:2.4.201909} \frac{\partial r}{\partial t}=\alpha\frac{\partial r}{\partial s}-\frac{r\beta}{g}, \ \ \ \frac{\partial \theta}{\partial t}=\frac{\alpha}{g}+\frac{\beta}{r}\frac{\partial r}{\partial s}. \end{eqnarray} From now on, we choose $$\alpha=-\frac{\beta}{r}\frac{\partial r}{\partial s}g=-\frac{\beta}{r}\frac{\partial r}{\partial \theta}$$ so that $\frac{\partial \theta}{\partial t}\equiv 0$, i.e., the polar angle $\theta$ is independent of the time $t$. Since the curvature of the evolving curve is $$\kappa=\frac{1}{g^3}\left(-r\frac{\partial^2 r}{\partial \theta^2}+2\left(\frac{\partial r}{\partial \theta}\right)^2+r^2\right),$$ one can immediately obtain the evolution equation of $r$, \begin{eqnarray}\label{eq:2.5.201909} \frac{\partial r}{\partial t}=\frac{1}{g^2} \frac{\partial^2 r}{\partial \theta^2}-\frac{2}{rg^2}\left(\frac{\partial r}{\partial \theta}\right)^2 -\frac{r}{g^2}+\frac{2\pi g}{rL}. \end{eqnarray} Now, if $r=r(\theta, t)>0$ is defined on $[0, 2\pi]\times [0, \omega)$ and satisfies the equation (\ref{eq:2.5.201909}), then the family of curves $\{X=rP\,|\,t\in [0, \omega)\}$ satisfies the flow (\ref{eq:1.2.201909}). So we can reduce the flow (\ref{eq:1.2.201909}) to the equation (\ref{eq:2.5.201909}) with initial value $r_0(\theta)>0$: \begin{lemma}\label{lem:2.1.201909} Suppose $X_0$ is star-shaped. The flow (\ref{eq:1.2.201909}) is equivalent to the quasi-linear parabolic equation (\ref{eq:2.5.201909}) with a positive initial value $r_0$ in some interval $[0, \omega)$. \end{lemma} The length of the curve can be calculated according to $$L(t)=\int_0^{2\pi}g(\theta, t) d\theta=\int_0^{2\pi}\sqrt{r^2+\left(\frac{\partial r}{\partial \theta}\right)^2} d\theta.$$ Let us define an operator $F$ from the space $C^{2, \alpha}([0, 2\pi]\times [0, \omega))$ to $C^{\beta}([0, 2\pi]\times [0, \omega))$, for $0<\beta<\alpha\leq 1$, according to $$F(r)=\frac{\partial r}{\partial t}- \frac{1}{g^2} \frac{\partial^2 r}{\partial \theta^2} +\frac{2}{rg^2}\left(\frac{\partial r}{\partial \theta}\right)^2 +\frac{r}{g^2}-\frac{2\pi g}{rL}.$$ Since the Frechet derivative of $F$ at some point $r_0>0$ is $$DF(r_0)f=\frac{\partial f}{\partial t}-\frac{1}{r_0^2+(\frac{\partial r_0}{\partial\theta})^2}\frac{\partial^2 f}{\partial\theta^2} +\text{lower order linear terms of}\ f,$$ the equation (\ref{eq:2.5.201909}) is uniformly parabolic near its initial value $r_0$. It follows from the implicit function theorem in Banach spaces that the Cauchy problem (\ref{eq:2.5.201909}) has a unique solution in some small time interval (see Section 1.2 in \cite{Chou-Zhu}). Using Lemma \ref{lem:2.1.201909} then gives the short time existence. \begin{lemma}\label{lem:2.2.201909} The flow (\ref{eq:1.1.201909}) has a unique smooth solution in some time interval $[0, \omega)$, where $\omega>0$. \end{lemma} Next, we shall derive some basic properties of the flow (\ref{eq:1.1.201909}).
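Before doing so, we remark that the radial equation (\ref{eq:2.5.201909}) also lends itself to direct numerical experiment. The following sketch (illustrative Python with an explicit Euler discretization on a periodic grid; the step sizes, grid size and sample initial curve are our own choices, subject to the usual explicit-scheme stability restrictions, and this is not part of the original argument):

\begin{verbatim}
import numpy as np

def gapf_step(r, dt):
    """One explicit Euler step of the radial equation (2.5):
    r_t = r_tt/g^2 - 2 r_th^2/(r g^2) - r/g^2 + 2 pi g/(r L)."""
    n = r.size
    dth = 2 * np.pi / n
    r_th = (np.roll(r, -1) - np.roll(r, 1)) / (2 * dth)
    r_thth = (np.roll(r, -1) - 2 * r + np.roll(r, 1)) / dth**2
    g = np.sqrt(r**2 + r_th**2)          # metric of the curve
    L = np.sum(g) * dth                  # total length
    return r + dt * (r_thth / g**2 - 2 * r_th**2 / (r * g**2)
                     - r / g**2 + 2 * np.pi * g / (r * L))

# a star-shaped, centrosymmetric (r(theta+pi) = r(theta)) initial curve:
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.4 * np.cos(2 * theta)
for _ in range(20000):
    r = gapf_step(r, 1e-5)
# in accordance with Theorem 1.1, r should slowly flatten towards a
# constant, i.e., the evolving curve should approach a circle
\end{verbatim}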
\begin{lemma}\label{lem:2.3} Under the flow (\ref{eq:1.1.201909}), the area $A$ of the evolving curve is constant, that is, $A(t)\equiv A_0$; and the length $L$ satisfies $\sqrt{4\pi A_0}\leq L(t)\leq L_0$. \end{lemma} \begin{proof} This result is a direct consequence of the equations (1.18)-(1.19) in \cite{Chou-Zhu} and the classical isoperimetric inequality. \end{proof} Under the flow (\ref{eq:1.2.201909}), the ``support function" is $$p=-\langle X, N\rangle=-\left\langle rP, -\frac{r}{g}P+\frac{1}{g}\frac{\partial r}{\partial \theta} Q\right\rangle =\frac{r^2}{g},$$ and thus a closed curve is star-shaped if and only if $r>0$ and $\left|\frac{\partial r}{\partial \theta}\right|$ is bounded everywhere. Since $r$ and $\left|\frac{\partial r}{\partial \theta}\right|$ satisfy parabolic equations, one can apply the comparison principle to bound these two functions. For a continuous function $f=f(\theta, t)$, set $$f_{\min}(t)=\min\{f(\theta, t)|\theta\in [0, 2\pi]\}, ~~f_{\max}(t)=\max\{f(\theta, t)|\theta\in [0, 2\pi]\}.$$ \begin{lemma}\label{lem:2.4.201909} Given a star-shaped curve $X_0$, we choose a point $O$ in the plane such that $r_0(\theta)>0$. If $r(\theta, t)\geq c>0$ for $(\theta, t)\in [0, 2\pi]\times [0, \omega)$ under the flow (\ref{eq:1.2.201909}) then \begin{eqnarray} r(\theta, t) \leq \frac{L_0}{2}, \label{eq:2.6.201909}\\ \left|\frac{\partial r}{\partial \theta}(\theta, t) \right|\leq C_1, \label{eq:2.7.201909} \end{eqnarray} where $C_1=\max\left\{\max\limits_{\theta}\left|\frac{\partial r}{\partial \theta}(\theta, 0)\right|, \frac{3L_0}{\pi}\right\}$ is a constant depending on the initial curve $X_0$. \end{lemma} \begin{proof} Fix a $t_0\in [0, \omega)$ and let $t\in[0, t_0)$. If $r(\cdot, t)$ attains its maximum value $r_{\max}(t)$ at $(\theta_*, t)$, then $$\frac{\partial^2 r}{\partial \theta^2} (\theta_*, t)\leq 0, ~~~ \frac{\partial r}{\partial \theta}(\theta_*, t)=0,~~~ g(\theta_*, t)=r(\theta_*, t).$$ By (\ref{eq:2.5.201909}), \begin{eqnarray*} \frac{\partial r}{\partial t}(\theta_*, t) \leq \frac{2\pi}{L(t)}\leq \sqrt{\frac{\pi}{A_0}}. \end{eqnarray*} So the maximum principle implies that if $t\in [0, t_0)$ then \begin{eqnarray}\label{eq:2.8.201909} r(\theta, t) \leq r_{\max}(0)+ \sqrt{\frac{\pi}{A_0}} t_0. \end{eqnarray} Differentiating the equation (\ref{eq:2.5.201909}), one gets \begin{eqnarray*} \frac{\partial^2 r}{\partial t \partial \theta}&=&\frac{1}{g^2}\frac{\partial^3 r}{\partial \theta^3} -\frac{2}{g^4}\frac{\partial r}{\partial \theta} \left(\frac{\partial^2 r}{\partial \theta^2}\right)^2 -\frac{4}{rg^2}\frac{\partial r}{\partial \theta}\frac{\partial^2 r}{\partial \theta^2} +\frac{2}{r^2g^2}\left(\frac{\partial r}{\partial \theta}\right)^3 \\ && +\frac{4}{g^4}\left(\frac{\partial r}{\partial \theta}\right)^3 +\frac{4}{rg^4}\left(\frac{\partial r}{\partial \theta}\right)^3 \frac{\partial^2 r}{\partial \theta^2} -\frac{1}{g^2}\frac{\partial r}{\partial \theta} +\frac{2r^2}{g^4}\frac{\partial r}{\partial \theta} \\ && +\frac{2\pi}{Lg}\frac{\partial r}{\partial \theta} +\frac{2\pi}{rLg} \frac{\partial r}{\partial \theta} \frac{\partial^2 r}{\partial \theta^2} -\frac{2\pi g}{r^2L} \frac{\partial r}{\partial \theta}.
\end{eqnarray*} Let $w=(\frac{\partial r}{\partial \theta})^2$; it evolves according to \begin{eqnarray*} \frac{1}{2}\frac{\partial w}{\partial t} &=& \frac{1}{2}\frac{1}{g^2}\frac{\partial^2 w}{\partial \theta^2} -\frac{1}{g^2}\left(\frac{\partial^2 r}{\partial \theta^2}\right)^2 -\frac{2w}{g^4}\left(\frac{\partial^2 r}{\partial \theta^2}\right)^2 -\frac{2}{rg^2}\frac{\partial r}{\partial \theta}\frac{\partial w}{\partial \theta} \\ && +\frac{2w^2}{r^2g^2} +\frac{4w^2}{g^4} +\frac{2w}{rg^4}\frac{\partial r}{\partial \theta}\frac{\partial w}{\partial \theta} -\frac{w}{g^2}+\frac{2r^2w}{g^4} \\ && +\frac{2\pi w}{Lg}+\frac{\pi}{rLg}\frac{\partial r}{\partial \theta}\frac{\partial w}{\partial \theta} -\frac{2\pi gw}{r^2L}. \end{eqnarray*} If $w$ attains its maximum value $w_{\max}(t_0)$ at $(\theta_*, t_0)$, then $\frac{\partial w}{\partial \theta}(\theta_*, t_0)=0, \frac{\partial^2 w}{\partial \theta^2}(\theta_*, t_0) \leq 0,$ and thus at the point $(\theta_*, t_0)$, one obtains \begin{eqnarray*} \frac{1}{2}\frac{\partial w}{\partial t} &\leq& \left(\frac{2w^2}{r^2g^2}+\frac{4w^2}{g^4}\right)+ \left(-\frac{w}{g^2} +\frac{2r^2w}{g^4}\right) +\left(\frac{2\pi w}{Lg}-\frac{2\pi gw}{r^2L}\right) \\ &=&\frac{2w^2(3r^2+w)}{r^2g^4}+\frac{w}{g^4}(r^2-w)-\frac{2\pi w^2}{r^2Lg} \\ &=& \frac{2w^2(3Lr^2+wL)-2\pi w^2(r^2+w)g}{r^2Lg^4}+\frac{w}{g^4}(r^2-w) \\ &=& \frac{2 w^2r^2(3L-\pi g)+2w^3(L-\pi g)}{r^2Lg^4}+\frac{w}{g^4}(r^2-w). \end{eqnarray*} If $w\geq \max\{\left(r_{\max}(t)\right)^2, \left(\frac{3L_0}{\pi}\right)^2\}$ then $3L-\pi g \leq 0$ and $r^2-w\leq 0$. One gets $\frac{\partial w}{\partial t}(\theta_*, t_0)\leq 0$. The maximum principle tells us \begin{eqnarray}\label{eq:2.9.201909} \left|\frac{\partial r}{\partial \theta}\right| \leq \max\left\{\max_{\theta}\left|\frac{\partial r}{\partial \theta}(\theta, 0)\right|, ~r_{\max}(t), ~\frac{3L_0}{\pi}\right\}. \end{eqnarray} Combining (\ref{eq:2.8.201909}) and (\ref{eq:2.9.201909}), the support function $p=\frac{r^2}{g}$ is positive on the time interval $[0, t_0]$. The evolving curve $X(\cdot, t)$ is star-shaped with respect to $O$ for $t\in [0, t_0]$. Since $r$ is the distance from $O$ to $X(\theta, t)$, one obtains \begin{eqnarray*} r(\theta, t) \leq \frac{L(t)}{2} \leq \frac{L_0}{2}, \end{eqnarray*} which gives us (\ref{eq:2.6.201909}) and enables us to revise the estimate (\ref{eq:2.9.201909}) as \begin{eqnarray}\label{eq:2.10.201909} \left|\frac{\partial r}{\partial \theta}\right| \leq \max\left\{\max_{\theta}\left|\frac{\partial r}{\partial \theta}(\theta, 0)\right|, ~\frac{L_0}{2}, ~\frac{3L_0}{\pi}\right\} =\max\left\{\max_{\theta}\left|\frac{\partial r}{\partial \theta}(\theta, 0)\right|, ~\frac{3L_0}{\pi}\right\}. \end{eqnarray} This completes the proof. \end{proof} It follows from Lemma \ref{lem:2.4.201909} that the support function of the evolving curve is positive everywhere once a positive lower bound of $r$ is given. So the flow (\ref{eq:1.1.201909}) preserves the star-shapedness of the evolving curve under the condition of Lemma \ref{lem:2.4.201909}. Now one can conclude the following: \begin{lemma}\label{lem:2.5.201909} If the flow (\ref{eq:1.1.201909}) does not blow up in the time interval $[0, \omega)$ and $r_{\min}(t)>0$ for $t\in [0, \omega)$ then the point $O$ is a star center of every evolving curve $X(\cdot, t)$.
\end{lemma} \section{Global existence}\label{sec:3.201909} In this section, it is shown that Gage's area-preserving flow (\ref{eq:1.1.201909}) exists in the time interval $[0, +\infty)$ if the initial smooth curve $X_0$ is centrosymmetric and star-shaped with respect to the origin $O$. \subsection{Star-shaped curves under the CSF}\label{subsec:3.1.201909} The popular curve shortening flow with a smooth, closed and embedded initial curve $X_0$ is defined by \begin{equation}\label{eq:3.1.201909} \left\{\begin{array}{l} \frac{\partial Y}{\partial t}(\varphi, t)=\widetilde{\kappa}(\varphi, t)\widetilde{N}(\varphi, t) \ \ \ \text{in} \ \ S^1\times (0, \omega),\\ Y(\varphi, 0)= X_0(\varphi) \ \ \ \ \ \ \text{on} \ \ S^1, \end{array} \right. \end{equation} where $\widetilde{\kappa}(\varphi, t)$ is the relative curvature with respect to the Frenet frame $\{\widetilde{T}, \widetilde{N}\}$. Grayson's theorem \cite{Grayson-1987} asserts that the evolving curve $Y(\cdot, t)$ stays smooth and embedded and becomes convex on the time interval $\left[0, \frac{A_0}{2\pi}\right)$, where $A_0$ is the area bounded by $X_0$. The Gage-Hamilton theorem \cite{Gage-1983, Gage-1984, Gage-Hamilton-1986} says that a closed convex curve $X_0$ evolving under the CSF (\ref{eq:3.1.201909}) becomes asymptotically circular as $t\rightarrow \frac{A_0}{2\pi}$. If $X_0$ is star-shaped with respect to the origin $O$, it follows from the continuity of the evolving curve that there exists $t_0>0$ such that $Y(\cdot, t)$ is star-shaped with respect to $O$ for $t\in [0, t_0)$. So one can add a proper tangent component to the flow (\ref{eq:3.1.201909}) \begin{equation}\label{eq:3.2.201909} \left\{\begin{array}{l} \frac{\partial Y}{\partial t}(\varphi, t)=\widetilde{\alpha} \widetilde{T}+\widetilde{\kappa}\widetilde{N} \ \ \ \text{in} \ \ S^1\times (0, \omega),\\ Y(\varphi, 0)= X_0(\varphi) \ \ \ \ \ \ \text{on} \ \ S^1 \end{array} \right. \end{equation} to make the polar angle $\theta$ of $Y(\cdot, t)$ independent of time. Now let us parameterize $Y(\cdot, t)$ by $\theta$ and set $Y(\theta, t)=\rho(\theta, t)P(\theta)$, where $P(\theta)=(\cos\theta, \sin\theta)$. As is well known, the solution to the flow (\ref{eq:3.1.201909}) differs from that of the flow (\ref{eq:3.2.201909}) only by a reparametrization and a Euclidean translation. The evolution equation of the radial function $\rho(\theta, t)$ is \begin{eqnarray}\label{eq:3.3.201909} \frac{\partial \rho}{\partial t}=\frac{1}{(g_\rho)^2}\frac{\partial^2 \rho}{\partial \theta^2} -\frac{2}{\rho(g_\rho)^2}\left(\frac{\partial \rho}{\partial \theta}\right)^2 -\frac{\rho}{(g_\rho)^2}, \end{eqnarray} where $g_\rho = \sqrt{\rho^2 +\left(\frac{\partial \rho}{\partial \theta}\right)^2}$ is the metric of $Y(\theta, t)$. \begin{lemma}\label{lem:3.1.201909} Suppose the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$. Under the CSF (\ref{eq:3.2.201909}), if the evolving curve $Y(\cdot, t)$ is star-shaped with respect to $O$ then it is centrosymmetric with respect to $O$. \end{lemma} \begin{proof} For $t_*\in \left[0, \frac{A_0}{2\pi}\right)$, the Gage-Hamilton-Grayson theorem tells us that there exist constants $M_i (t_*)>0$ such that \begin{eqnarray}\label{eq:3.4.201909} \left|\frac{\partial^i \rho}{\partial \theta^i}(\theta, t)\right| \leq M_i(t_*), \ \ \ \ (\theta, t)\in [0, 2\pi]\times [0, t_*],\ \ i=1, 2, \cdots.
\end{eqnarray} Define $\widetilde{\rho}(\theta, t) = \rho (\theta+\pi, t)$ and $\varphi_1(\theta, t)= \widetilde{\rho}(\theta, t) - \rho (\theta, t)$. Since $X_0$ is centrosymmetric, $\varphi_1(\theta, 0)\equiv 0$. By the equation (\ref{eq:3.3.201909}), $\varphi_1(\theta, t)$ evolves according to \begin{eqnarray} \frac{\partial \varphi_1}{\partial t} &=& \frac{1}{(g_{\widetilde{\rho}})^2}\frac{\partial^2 \varphi_1}{\partial \theta^2} +\frac{1}{(g_{\widetilde{\rho}})^2}\frac{\partial^2 \rho}{\partial \theta^2} -\frac{1}{(g_{\rho})^2}\frac{\partial^2 \rho}{\partial \theta^2} +\frac{2}{\rho(g_\rho)^2} \left[\left(\frac{\partial \rho}{\partial \theta}\right)^2 - \left(\frac{\partial \widetilde{\rho}}{\partial \theta}\right)^2\right] \nonumber\\ &&+\frac{2}{\rho(g_\rho)^2} \left(\frac{\partial \widetilde{\rho}}{\partial \theta}\right)^2 -\frac{2}{\widetilde{\rho}(g_{\widetilde{\rho}})^2} \left(\frac{\partial \widetilde{\rho}}{\partial \theta}\right)^2 +\frac{\rho - \widetilde{\rho}}{(g_\rho)^2} + \frac{\widetilde{\rho}}{(g_\rho)^2} - \frac{\widetilde{\rho}}{(g_{\widetilde{\rho}})^2} \nonumber\\ &=& \frac{1}{(g_{\widetilde{\rho}})^2}\frac{\partial^2 \varphi_1}{\partial \theta^2} -\frac{1}{(g_\rho)^2 (g_{\widetilde{\rho}})^2}\frac{\partial^2 \rho}{\partial \theta^2} \left(\frac{\partial \rho}{\partial \theta} + \frac{\partial \widetilde{\rho}}{\partial \theta}\right) \frac{\partial \varphi_1}{\partial \theta} -\frac{2}{\rho(g_\rho)^2}\left(\frac{\partial \rho}{\partial \theta} +\frac{\partial \widetilde{\rho}}{\partial \theta}\right)\frac{\partial \varphi_1}{\partial \theta} \nonumber\\ && +\frac{2}{\rho(g_\rho)^2(g_{\widetilde{\rho}})^2}\left(\frac{\partial \widetilde{\rho}}{\partial \theta}\right)^2 \left(\frac{\partial \rho}{\partial \theta} +\frac{\partial \widetilde{\rho}}{\partial \theta}\right)\frac{\partial \varphi_1}{\partial \theta} +\frac{\widetilde{\rho}}{(g_\rho)^2 (g_{\widetilde{\rho}})^2} \left(\frac{\partial \rho}{\partial \theta} + \frac{\partial \widetilde{\rho}}{\partial \theta}\right)\frac{\partial \varphi_1}{\partial \theta} \nonumber\\ && -\frac{\widetilde{\rho} + \rho}{(g_\rho)^2(g_{\widetilde{\rho}})^2} \frac{\partial^2 \rho}{\partial \theta^2} \varphi_1 +\frac{2(\widetilde{\rho} + \rho)}{\rho (g_\rho)^2(g_{\widetilde{\rho}})^2} \left(\frac{\partial \widetilde{\rho}}{\partial \theta}\right)^2 \varphi_1 +\frac{2}{\rho \widetilde{\rho} (g_{\widetilde{\rho}})^2} \left(\frac{\partial \widetilde{\rho}}{\partial \theta}\right)^2 \varphi_1 \nonumber\\ && -\frac{\varphi_1}{(g_{\rho})^2} +\frac{\widetilde{\rho}(\widetilde{\rho} + \rho)}{(g_\rho)^2(g_{\widetilde{\rho}})^2}\varphi_1. \label{eq:3.5.201909} \end{eqnarray} The equation (\ref{eq:3.5.201909}) is linear with smooth coefficients (see (\ref{eq:3.4.201909})), and $\varphi_1$ has zero initial value. By the uniqueness of the solution to linear parabolic equations, one has \begin{eqnarray} \label{eq:3.6.201909} \varphi_1(\theta, t) \equiv 0. \end{eqnarray} So the evolving curve $Y(\cdot, t)$ is centrosymmetric with respect to $O$ for $t\in [0, t_*]$. The proof is completed by the arbitrary choice of $t_*$. \end{proof} \begin{lemma}\label{lem:3.2.201909} Let the initial smooth curve $X_0$ be star-shaped and centrosymmetric with respect to $O$. Then, under the CSF (\ref{eq:3.2.201909}), the evolving curve $Y(\cdot, t)$ is star-shaped.
\end{lemma} \begin{proof} Suppose that there is a time $t_*\in [0, \frac{A_0}{2\pi})$ such that $Y(\cdot, t)$ is star-shaped with respect to $O$ for $t\in [0, t_*)$ but $Y(\cdot, t_*)$ is not. One can claim that \begin{eqnarray}\label{eq:3.7.201909} \rho(\theta, t)>0, \ \ \ \ (\theta, t) \in [0, 2\pi] \times [0, t_*), \end{eqnarray} and \begin{eqnarray}\label{eq:3.8.201909} \rho_{\min}(t_*) :=\min\{\rho(\theta, t_*)| \theta \in [0, 2\pi]\}=0. \end{eqnarray} In fact, $Y(\cdot, t)$ is star-shaped with respect to $O$ for $t\in [0, t_*)$, so (\ref{eq:3.7.201909}) holds. If (\ref{eq:3.8.201909}) does not hold then, for some $\delta >0$, one obtains \begin{eqnarray}\label{eq:3.9.201909} \rho_{\min}(t_*) \geq \delta. \end{eqnarray} By the evolution equation (\ref{eq:3.3.201909}), the quantity $v=\frac{1}{2}\left(\frac{\partial \rho}{\partial \theta}\right)^2$ evolves according to \begin{eqnarray*} \frac{\partial v}{\partial t} &=& \frac{1}{(g_\rho)^2}\frac{\partial^2 v}{\partial \theta^2} -\frac{1}{(g_\rho)^2}\left(\frac{\partial^2 \rho}{\partial \theta^2}\right)^2 -\frac{4v}{(g_\rho)^4}\left(\frac{\partial^2 \rho}{\partial \theta^2}\right)^2 -\frac{4}{\rho(g_\rho)^2}\frac{\partial \rho}{\partial \theta}\frac{\partial v}{\partial \theta} \\ && +\frac{8 v^2}{\rho^2(g_\rho)^2} + \frac{16 v^2}{(g_\rho)^4} +\frac{8v}{\rho(g_\rho)^4} \frac{\partial \rho}{\partial \theta} \frac{\partial v}{\partial \theta} -\frac{2 v}{(g_\rho)^2} + \frac{4 \rho^2 v}{(g_\rho)^4}. \end{eqnarray*} At the point $(\theta_*, t)$ where $v(\theta, t)$ attains $v_{\max}(t)$, one gets \begin{eqnarray*} \frac{\partial^2 v}{\partial \theta^2} (\theta_*, t) \leq 0, ~~ ~~ ~~\frac{\partial v}{\partial \theta} (\theta_*, t) = 0, \end{eqnarray*} and thus \begin{eqnarray*} \frac{\partial v}{\partial t} (\theta_*, t) &\leq& \frac{8 v^2}{\rho^2(g_\rho)^2}(\theta_*, t) + \frac{16 v^2}{(g_\rho)^4}(\theta_*, t) -\frac{2 v}{(g_\rho)^2}(\theta_*, t) + \frac{4 \rho^2 v}{(g_\rho)^4}(\theta_*, t) \\ &\leq& \frac{4v}{\rho^2}(\theta_*, t)+4+0+\frac{1}{2} \leq \frac{4v}{\delta^2}+\frac{9}{2}, \end{eqnarray*} where $t\in [0, t_*)$. It follows from the maximum principle that \begin{eqnarray*} v_{\max}(t) \leq \left(v_{\max}(0) +\frac{9}{8}\delta^2\right)e^{\frac{4}{\delta^2}t_*}, \end{eqnarray*} i.e., \begin{eqnarray}\label{eq:3.10.201909} \left|\frac{\partial \rho}{\partial \theta}\right|_{\max}(t) \leq \sqrt{\left(2v_{\max}(0) +\frac{9}{4}\delta^2\right)e^{\frac{4}{\delta^2}t_*}}. \end{eqnarray} By (\ref{eq:3.9.201909}) and (\ref{eq:3.10.201909}), the support function of $Y(\cdot, t_*) \left(\mbox{i.e.}, p_Y (\theta, t_*) = \frac{\rho^2(\theta, t_*)}{g_\rho(\theta, t_*)}\right)$ has a positive lower bound. So $Y(\cdot, t)$ is star-shaped with respect to $O$ for $t\in [0, t_*+\varepsilon)$, for some small $\varepsilon>0$. This is a contradiction to the assumption that $Y(\cdot, t_*)$ is not star-shaped with respect to $O$. Therefore, the claim (\ref{eq:3.8.201909}) holds and there exists a $\theta_* \in [0, 2\pi]$ such that \begin{eqnarray}\label{eq:3.11.201909} \rho(\theta_*, t_*) = 0. \end{eqnarray} By the choice of $t_* \in (0, \frac{A_0}{2\pi})$, the evolving curve $Y(\cdot, t_*)$ does not blow up. By Lemma \ref{lem:3.1.201909}, $Y(\cdot, t)$ is centrosymmetric with respect to $O$. So \begin{eqnarray}\label{eq:3.12.201909} \rho(\theta_*+\pi, t_*) = 0.
\end{eqnarray} The equations (\ref{eq:3.11.201909}) and (\ref{eq:3.12.201909}) contradict Huisken's monotonicity formula \cite{Huisken-1998} or the fact that the evolving curve keeps embedded before it shrinks to a point (see Corollary 3.2.4 of \cite{Gage-Hamilton-1986}). Therefore, the equation (\ref{eq:3.8.201909}) cannot hold. For every $t\in \left[0, \frac{A_0}{2\pi}\right)$, the evolving curve $Y(\cdot, t)$ is star-shaped with respect to $O$. \end{proof} As a direct corollary of Lemma \ref{lem:3.1.201909} and Lemma \ref{lem:3.2.201909}, we have \begin{corollary}\label{cor:3.3.201909} Suppose the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$. Under the CSF (\ref{eq:3.2.201909}), the evolving curve $Y(\cdot, t)$ shrinks to the point $O$ as $t \rightarrow \frac{A_0}{2\pi}$. \end{corollary} In 2010, Mantegazza in his note \cite{Mantegazza-2010} showed that an initial curve which is star-shaped with respect to $O$ remains so under the CSF as long as the point $O$ is contained in the open region bounded by the evolving curve. \begin{remark}\label{rem:3.4.201909} There exist smooth, closed, star-shaped but non-embedded curves whose star-shapedness is not always preserved under the CSF. Figure \ref{fig:1} presents such a curve $X_0$, which has positive curvature everywhere and is star-shaped with respect to $O$; under the CSF, however, the two small loops shrink so that after a short time they no longer intersect each other. This makes the evolving curve no longer star-shaped. This example also indicates that the embeddedness of the initial curve in Theorem \ref{thm:1.1.201909} is so important that it cannot be omitted. \end{remark} \begin{figure}[tbh] \centering \includegraphics[scale=0.8]{figure1.pdf} \caption{An Immersed Star-shaped Curve $X_0$.}\label{fig:1} \end{figure} \subsection{Star-shapedness of the evolving curve under the flow (\ref{eq:1.1.201909})}\label{subsec:3.2.201909} It is proved in this subsection that Gage's area-preserving flow (\ref{eq:1.1.201909}) preserves the star-shapedness of the evolving curve if the initial smooth curve $X_0$ is both centrosymmetric and star-shaped with respect to the origin $O$. \begin{lemma}\label{lem:3.4.201909} Suppose the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$ and Gage's area-preserving flow (\ref{eq:1.1.201909}) exists on the time interval $[0, \omega)$. If the evolving curve $X(\cdot, t)$ under (\ref{eq:1.1.201909}) is star-shaped with respect to $O$ for all $t\in [0, \omega)$ then it is centrosymmetric with respect to $O$. \end{lemma} \begin{proof} For every $t_*\in [0, \omega)$, the radial function $r(\theta, t)$ evolves according to the equation (\ref{eq:2.5.201909}) under Gage's area-preserving flow (\ref{eq:1.1.201909}). Since, by assumption, $X(\cdot, t)$ is star-shaped with respect to $O$ for every $t\in [0, t_*]$, there exists a constant $c=c(t_*)>0$ such that for all $(\theta, t)\in [0, 2\pi] \times [0, t_*)$, \begin{eqnarray}\label{eq:3.13.201909} r(\theta, t) \geq c(t_*). \end{eqnarray} Lemma \ref{lem:2.4.201909} tells us that (\ref{eq:2.6.201909}) and (\ref{eq:2.7.201909}) hold for $t\in [0, t_*]$. By the classical theory of parabolic equations \cite{Lieberman-1996}, there exist constants $C_i(t_*)>0$ such that \begin{eqnarray}\label{eq:3.14.201909} \left|\frac{\partial^i r}{\partial \theta^i}(\theta, t)\right| \leq C_i(t_*), \ \ \ \ (\theta, t) \in [0, 2\pi] \times [0, t_*],\ \ \ i=1, 2, \cdots. \end{eqnarray} Set $\widetilde{r}(\theta, t):=r(\theta+\pi, t)$.
By (\ref{eq:2.5.201909}), the function $\varphi_2(\theta, t):= \widetilde{r}(\theta, t)- r(\theta, t)$ satisfies a linear, uniformly parabolic equation which has smooth coefficients (similar to the equation (\ref{eq:3.5.201909})) and involves a non-local quantity $L=L(t)$ of the curve, \begin{eqnarray*} \frac{\partial \varphi_2}{\partial t} &=& \frac{1}{(g_{\widetilde{r}})^2}\frac{\partial^2 \varphi_2}{\partial \theta^2} -\frac{1}{g^2 (g_{\widetilde{r}})^2}\frac{\partial^2 r}{\partial \theta^2} \left(\frac{\partial r}{\partial \theta} + \frac{\partial \widetilde{r}}{\partial \theta}\right) \frac{\partial \varphi_2}{\partial \theta} -\frac{2}{r g^2}\left(\frac{\partial r}{\partial \theta} +\frac{\partial \widetilde{r}}{\partial \theta}\right)\frac{\partial \varphi_2}{\partial \theta} \nonumber\\ && +\frac{2}{rg^2(g_{\widetilde{r}})^2}\left(\frac{\partial \widetilde{r}}{\partial \theta}\right)^2 \left(\frac{\partial r}{\partial \theta} +\frac{\partial \widetilde{r}}{\partial \theta}\right)\frac{\partial \varphi_2}{\partial \theta} +\frac{\widetilde{r}}{g^2 (g_{\widetilde{r}})^2} \left(\frac{\partial r}{\partial \theta} + \frac{\partial \widetilde{r}}{\partial \theta}\right)\frac{\partial \varphi_2}{\partial \theta} \nonumber\\ && -\frac{\widetilde{r} + r}{g^2(g_{\widetilde{r}})^2} \frac{\partial^2 r}{\partial \theta^2} \varphi_2 +\frac{2(\widetilde{r} + r)}{r g^2(g_{\widetilde{r}})^2} \left(\frac{\partial \widetilde{r}}{\partial \theta}\right)^2 \varphi_2 +\frac{2}{r \widetilde{r} (g_{\widetilde{r}})^2} \left(\frac{\partial \widetilde{r}}{\partial \theta}\right)^2 \varphi_2 \nonumber\\ && -\frac{\varphi_2}{g^2} +\frac{\widetilde{r}(\widetilde{r} + r)}{g^2(g_{\widetilde{r}})^2}\varphi_2 +\frac{2\pi(\widetilde{r}+r)}{\widetilde{r}L(g_{\widetilde{r}}+g)}\varphi_2 \nonumber\\ && +\frac{2\pi}{\widetilde{r} L(g+g_{\widetilde{r}})} \left(\frac{\partial \widetilde{r}}{\partial \theta} + \frac{\partial r}{\partial \theta}\right) \frac{\partial \varphi_2}{\partial \theta} -\frac{2\pi g}{\widetilde{r} rL}\varphi_2, \end{eqnarray*} where $g_{\widetilde{r}}:=\sqrt{\widetilde{r}^2+ \left(\frac{\partial \widetilde{r}}{\partial \theta}\right)^2}$. Since $\varphi_2(\theta, 0)\equiv 0$, one can obtain $\varphi_2(\theta, t) \equiv 0$; that is to say, the evolving curve $X(\cdot, t)$ is centrosymmetric with respect to $O$. \end{proof} \begin{corollary}\label{cor:3.5.201909} Suppose the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$. If the evolving curve $X(\cdot, t)$ under Gage's area-preserving flow (\ref{eq:1.1.201909}) is star-shaped then $O$ is one of its star centers. \end{corollary} \begin{proof} Suppose Gage's area-preserving flow (\ref{eq:1.1.201909}) with initial curve $X_0$ exists on a time interval $[0, \omega)$. By continuity, there exists a small $t_0\in [0, \omega)$ such that $X(\cdot, t)$ is star-shaped with respect to $O$ if $t<t_0$. Lemma \ref{lem:3.4.201909} implies that $X(\cdot, t)$ is centrosymmetric with respect to $O$ if $t<t_0$. Suppose there exists a $t_*\in (0, \omega)$ such that (i) $X(\cdot, t_*)$ is star-shaped with respect to some point but not the origin $O$ and (ii) $X(\cdot, t)$ is star-shaped with respect to $O$ for all $t\in [0, t_*)$. By Lemma \ref{lem:3.4.201909} and the continuity of $X(\cdot, t)$, $X(\cdot, t_*)$ is centrosymmetric with respect to $O$. So one can conclude that \begin{eqnarray*} r_{\min}(t_*) > 0. \end{eqnarray*} Otherwise, $X(\cdot, t_*)$ is not star-shaped with respect to any point because it is centrosymmetric with respect to $O$.
Let $P$ be a star center of $X(\cdot, t_*)$; then the symmetry of the curve implies that $-P$ is also a star center (see \cite{Smith-1968}). Since the set of all star centers of $X(\cdot, t_*)$ is convex, $O$ is one of its star centers, which leads to a contradiction. \end{proof}
Next we shall show that Gage's area-preserving flow (\ref{eq:1.1.201909}) preserves the star-shapedness of the evolving curve.
\begin{lemma}\label{lem:3.6.201909} Suppose the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$ and Gage's area-preserving flow (\ref{eq:1.1.201909}) exists on a time interval $[0, \omega)$. Then the evolving curve $X(\cdot, t)$ is star-shaped with respect to $O$ for all $t\in [0, \omega)$. \end{lemma}
\begin{proof} Suppose there is a $t_*\in (0, \omega)$ such that $X(\cdot, t)$ is star-shaped for $t\in [0, t_*)$ but $X(\cdot, t_*)$ is not. It follows from Corollary \ref{cor:3.5.201909} that $X(\cdot, t)$ is star-shaped with respect to $O$ for all $t\in [0, t_*)$. By Lemma \ref{lem:3.4.201909}, $X(\cdot, t)$ is centrosymmetric with respect to $O$ on the same time interval $[0, t_*)$. Let $\varepsilon_0 = \min\left\{\frac{t_*}{2}, \frac{A_0}{4\pi}\right\}$; then $X(\cdot, t_*-\varepsilon_0)$ is a centrosymmetric and star-shaped curve with respect to $O$. Let $X(\cdot, t_*-\varepsilon_0)$ evolve according to the CSF; then we obtain a family of smooth curves $Y(\cdot, t)$ for $t\in \left[t_*-\varepsilon_0, t_*-\varepsilon_0+\frac{A_0}{2\pi}\right).$ Write $Y(\theta, t)= \rho(\theta, t) P(\theta)$, where $\theta$ is the polar angle, independent of $t$. By Lemma \ref{lem:3.1.201909}, Lemma \ref{lem:3.2.201909} and Corollary \ref{cor:3.3.201909}, $Y(\cdot, t)$ is star-shaped and centrosymmetric with respect to $O$ and it shrinks to $O$ as $t$ tends to the time $\left(t_*-\varepsilon_0+\frac{A_0}{2\pi}\right)$. Therefore, there exists a $\delta = \delta (t_*) >0$ such that \begin{eqnarray}\label{eq:3.15.201909} \rho(\theta, t) \geq \delta(t_*) \end{eqnarray} for all $(\theta, t) \in [0, 2\pi] \times \left[t_*-\varepsilon_0, t_*+\frac{3A_0}{8\pi}\right]$. Under Gage's area-preserving flow, the radial function $r(\theta, t)$ of $X(\theta, t)$ satisfies the equation (\ref{eq:2.5.201909}) for $t\in [t_*-\varepsilon_0, t_*)$. Set $\varphi_3 (\theta, t)= r(\theta, t) - \rho(\theta, t)$, where $(\theta, t) \in [0, 2\pi] \times [t_*-\varepsilon_0, t_*)$.
By (\ref{eq:2.5.201909}) and (\ref{eq:3.3.201909}), $\varphi_3$ evolves according to
\begin{eqnarray} \frac{\partial \varphi_3}{\partial t} &=& \frac{1}{g^2}\frac{\partial^2 \varphi_3}{\partial \theta^2} -\frac{1}{(g_\rho)^2 g^2}\frac{\partial^2 \rho}{\partial \theta^2} \left(\frac{\partial \rho}{\partial \theta} + \frac{\partial r}{\partial \theta}\right) \frac{\partial \varphi_3}{\partial \theta} -\frac{2}{\rho (g_\rho)^2}\left(\frac{\partial \rho}{\partial \theta} +\frac{\partial r}{\partial \theta}\right)\frac{\partial \varphi_3}{\partial \theta} \nonumber\\ && +\frac{2}{\rho (g_\rho)^2 g^2}\left(\frac{\partial \rho}{\partial \theta}\right)^2 \left(\frac{\partial \rho}{\partial \theta} +\frac{\partial r}{\partial \theta}\right)\frac{\partial \varphi_3}{\partial \theta} +\frac{r}{(g_\rho)^2 g^2} \left(\frac{\partial \rho}{\partial \theta} + \frac{\partial r}{\partial \theta}\right)\frac{\partial \varphi_3}{\partial \theta} \nonumber\\ && -\frac{r + \rho}{(g_\rho)^2 g^2} \frac{\partial^2 \rho}{\partial \theta^2} \varphi_3 +\frac{2(r + \rho)}{\rho (g_\rho)^2 g^2} \left(\frac{\partial \rho}{\partial \theta}\right)^2 \varphi_3 +\frac{2}{\rho r g^2} \left(\frac{\partial \rho}{\partial \theta}\right)^2 \varphi_3 \nonumber\\ && -\frac{\varphi_3}{g^2} +\frac{r(r + \rho)}{(g_\rho)^2 g^2}\varphi_3 +\frac{2\pi g}{rL}. \label{eq:3.16.201909} \end{eqnarray}
Since $\frac{2\pi g}{rL}>0$ and (\ref{eq:3.16.201909}) is a linear parabolic equation with smooth and bounded coefficients, the maximum principle implies that there exists a constant $\widetilde{c} >0$ such that for $t\in [t_*-\varepsilon_0, t_*)$, \begin{eqnarray*} \varphi_3 (\theta, t) \geq \min\{\varphi_3 (\theta, t_*-\varepsilon_0)| \theta\in [0, 2\pi]\} e^{-\widetilde{c} t}. \end{eqnarray*} Since $\varphi_3 (\theta, t_*-\varepsilon_0) \equiv 0$, we have $\varphi_3 (\theta, t) \geq 0$. Thus \begin{eqnarray} \label{eq:3.17.201909} r (\theta, t_*) \geq \rho(\theta, t_*) \geq \delta(t_*) >0. \end{eqnarray} By Lemma \ref{lem:3.4.201909} and Corollary \ref{cor:3.5.201909}, $X(\cdot, t_*)$ is star-shaped, which contradicts the assumption. \end{proof}
\subsection{Extending Gage's area-preserving flow}\label{subsec:3.3.201909}
Now let us extend Gage's area-preserving flow (\ref{eq:1.1.201909}). Once the curvature of the evolving curve is bounded on the time interval $[0, T_*)$ for any $T_*>0$, one can prove the smoothness of $X(\cdot, t)$ and the flow (\ref{eq:1.1.201909}) can be extended globally. Suppose the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$ and Gage's area-preserving flow (\ref{eq:1.1.201909}) with initial curve $X_0$ exists on the time interval $[0, T_*)$, where $T_*>0$ is a finite number. By Lemma \ref{lem:3.4.201909}, Corollary \ref{cor:3.5.201909} and Lemma \ref{lem:3.6.201909}, the evolving curve $X(\cdot, t)$ is star-shaped and centrosymmetric with respect to $O$. Let $\varepsilon_0 = \min\{\frac{T_*}{2}, \frac{A_0}{4\pi}\}$. Since $X(\cdot, t)$ is star-shaped with respect to $O$ for $t\in [0, T_*)$, there exists a $\delta_1=\delta_1(T_*)>0$ such that $ r (\theta, t) \geq \delta_1 $ holds for $t\in [0, T_*-\varepsilon_0]$. By the proof of Lemma \ref{lem:3.6.201909}, there exists a $\delta_2=\delta_2 (T_*)>0$ such that $ r (\theta, t) \geq \delta_2 $ for $t\in [T_*-\varepsilon_0, T_*+\frac{A_0}{8\pi}]$.
Let $\delta = \delta (T_*)=\min\{\delta_1, \delta_2\}>0$; then for all $(\theta, t) \in [0, 2\pi] \times [0, T_*+\frac{A_0}{8\pi})$, one can obtain \begin{eqnarray} \label{eq:3.18.201909} r (\theta, t) \geq \delta >0, \end{eqnarray} which together with Lemma \ref{lem:2.4.201909} and Lemma \ref{lem:2.5.201909} tells us that if $t\in\left[0, T_*+\frac{A_0}{8\pi}\right)$, then the evolving curve $X(\cdot, t)$ is star-shaped with respect to $O$. So there exists an $h=h(T_*)>0$ such that the ``support function'' with respect to $O$ satisfies \begin{eqnarray} \label{eq:3.19.201909} p (\theta, t) \geq 2h(T_*) >0, \ \ \ \ (\theta, t) \in [0, 2\pi] \times \left[0, T_*+\frac{A_0}{8\pi}\right). \end{eqnarray}
Under the flow (\ref{eq:1.1.201909}), the curvature evolves according to \begin{eqnarray}\label{eq:3.20.201909} \frac{\partial \kappa}{\partial t}=\frac{\partial^2 \kappa}{\partial s^2}+\kappa^3-\frac{2\pi}{L}\kappa^2, \end{eqnarray} where $s$ stands for the arc length parameter. Define \begin{eqnarray*} \varphi_4(s, t)= \frac{\kappa (s, t)}{p(s, t)- h(T_*)}, \ \ \ \ \ \ t\in [0, T_*). \end{eqnarray*} Then we have \begin{eqnarray*} &&\frac{\partial \varphi_4}{\partial s} = \frac{1}{p-h}\frac{\partial \kappa}{\partial s} -\frac{\kappa}{(p-h)^2}\frac{\partial p}{\partial s},\\ && \frac{\partial^2 \varphi_4}{\partial s^2} = \frac{1}{p-h}\frac{\partial^2 \kappa}{\partial s^2} -\frac{2}{(p-h)^2}\frac{\partial \kappa}{\partial s} \frac{\partial p}{\partial s} - \frac{\kappa}{(p-h)^2}\frac{\partial^2 p}{\partial s^2} + \frac{2\kappa}{(p-h)^3} \left(\frac{\partial p}{\partial s}\right)^2. \end{eqnarray*}
Noticing that the support function evolves according to \begin{eqnarray}\label{eq:3.21.201909} \frac{\partial p}{\partial t} &=& -\frac{\partial}{\partial t}\langle X, N \rangle =-\left\langle \left(\kappa - \frac{2\pi}{L}\right)N, N \right\rangle +\left\langle X, \frac{\partial \kappa}{\partial s} T \right\rangle \nonumber \\ &=& \frac{2\pi}{L} - \kappa+ \langle X, T\rangle \frac{\partial \kappa}{\partial s}, \end{eqnarray} the evolution equation of $\varphi_4$ is given by \begin{eqnarray}\label{eq:3.22.201909} \frac{\partial \varphi_4}{\partial t} &=&\frac{\partial^2 \varphi_4}{\partial s^2} +\frac{2}{p-h}\frac{\partial p}{\partial s}\frac{\partial \varphi_4}{\partial s} -\frac{\kappa}{p-h}\langle X, T\rangle \frac{\partial \varphi_4}{\partial s} +\frac{\kappa}{(p-h)^2} \frac{\partial^2 p}{\partial s^2} \nonumber\\ && +(p-h)^2(\varphi_4)^3 - \frac{2\pi}{L}(p-h) (\varphi_4)^2+(\varphi_4)^2 - \frac{2\pi}{L}\frac{\varphi_4}{p-h} \nonumber\\ && - \frac{\kappa^2}{(p-h)^3} \langle X, T\rangle \frac{\partial p}{\partial s}. \end{eqnarray}
Using \begin{eqnarray*} \frac{\partial p}{\partial s} = \kappa \langle X, T\rangle, ~~~~ \frac{\partial^2 p}{\partial s^2} =\frac{\partial \kappa}{\partial s} \langle X, T\rangle +\kappa-\kappa^2 p, \end{eqnarray*} we can compute that \begin{eqnarray} \label{eq:3.23.201909} \frac{\kappa}{(p-h)^2} \frac{\partial^2 p}{\partial s^2} - \frac{\kappa^2}{(p-h)^3} \langle X, T\rangle \frac{\partial p}{\partial s} = \varphi_4 \langle X, T\rangle \frac{\partial \varphi_4}{\partial s} + (\varphi_4)^2 -p(p-h)(\varphi_4)^3.
\end{eqnarray}
Substituting (\ref{eq:3.23.201909}) into the evolution equation of $\varphi_4$ gives \begin{eqnarray}\label{eq:3.24.201909} \frac{\partial \varphi_4}{\partial t} &=&\frac{\partial^2 \varphi_4}{\partial s^2} +\frac{2}{p-h}\frac{\partial p}{\partial s}\frac{\partial \varphi_4}{\partial s} -h(p-h)(\varphi_4)^3 -\frac{2\pi}{L}(p-h) (\varphi_4)^2\nonumber \\ &&+2(\varphi_4)^2 - \frac{2\pi}{L}\frac{\varphi_4}{p-h}. \end{eqnarray}
Since, as is well known, the arc length parameter $s$ depends on both the space parameter and the time $t$, one cannot apply the maximum principle directly. In Section 2, the parameter $\varphi$ (independent of $t$) is used to parametrize $X(\cdot, t)$. Now, let $g=\frac{\partial s}{\partial \varphi}$ and suppose that for fixed $t$, $\varphi_4(\cdot, t)$ attains its maximum $(\varphi_4)_{\max} (t)$ at the point $(\varphi_*, t)$. Denote $s_* = s(\varphi_*)$; then $$\frac{\partial \varphi_4}{\partial s} (s_*, t) =\left(\frac{\partial \varphi_4}{\partial \varphi} \frac{1}{g}\right) (s(\varphi_*), t) =0$$ and $$\frac{\partial^2 \varphi_4}{\partial s^2}(s_*, t) = \left(\frac{\partial^2 \varphi_4}{\partial \varphi^2}\frac{1}{g^2}\right) (s(\varphi_*), t) -\left(\frac{\partial \varphi_4}{\partial \varphi} \frac{\partial g}{\partial \varphi} \frac{1}{g^3}\right) (s(\varphi_*), t) \leq 0.$$ By (\ref{eq:3.24.201909}), \begin{eqnarray*} \frac{\partial \varphi_4}{\partial t}(s_*, t) &\leq& -h(p-h)(\varphi_4(s_*, t))^3 - (p-h)\frac{2\pi}{L} (\varphi_4(s_*, t))^2 \\ && +2(\varphi_4(s_*, t))^2 - \frac{2\pi}{L}\frac{\varphi_4(s_*, t)}{p-h} \\ &<& -h(p-h)(\varphi_4(s_*, t))^3 +2(\varphi_4(s_*, t))^2 \\ &<& -h^2(\varphi_4(s_*, t))^3 +2(\varphi_4(s_*, t))^2. \end{eqnarray*} Once $\varphi_4(s_*, t)$ is greater than $\frac{2}{h^2}$, we have $\frac{\partial \varphi_4}{\partial t}(s_*, t)< 0$. By the maximum principle, we get \begin{eqnarray}\label{eq:3.25.201909} \varphi_4(s, t) \leq \max\left\{(\varphi_4)_{\max}(0), ~~\frac{2}{h^2(T_*)}\right\} \end{eqnarray} for $(s, t)\in [0, L] \times \left[0, T_*+\frac{A_0}{8\pi}\right]$. Since the ``support function'' satisfies $p (s, t) \leq \frac{L(t)}{2} \leq \frac{L(0)}{2}$, we have \begin{eqnarray}\label{eq:3.26.201909} \kappa(s, t) \leq \left(\frac{L(0)}{2}-h(T_*)\right) \cdot\max\left\{(\varphi_4)_{\max}(0), ~~\frac{2}{h^2(T_*)}\right\}. \end{eqnarray} Similarly, if $\varphi_4$ attains $(\varphi_4)_{\min}(t)$ at a point $(s_*, t)$, then $\frac{\partial \varphi_4}{\partial s} (s_*, t)= 0, ~~\frac{\partial^2 \varphi_4}{\partial s^2}(s_*, t) \geq 0$. Once $\varphi_4(s_*, t) \leq -\frac{2\pi}{\sqrt{4\pi A_0} h(T_*)}$, we have \begin{eqnarray*} \frac{\partial \varphi_4}{\partial t}(s_*, t) &\geq& -h(p-h)(\varphi_4(s_*, t))^3 - (p-h)\frac{2\pi}{L(t)} (\varphi_4(s_*, t))^2 \\ && +2(\varphi_4(s_*, t))^2 - \frac{2\pi}{L(t)}\frac{\varphi_4(s_*, t)}{p-h} \\ &\geq& -\left(h \varphi_4(s_*, t) +\frac{2\pi}{L(t)}\right) (p-h)(\varphi_4(s_*, t))^2 >0, \end{eqnarray*} and $$ \varphi_4(s, t) \geq \min\left\{(\varphi_4)_{\min}(0), ~-\frac{2\pi}{\sqrt{4\pi A_0} h(T_*)}\right\} $$ for $(s, t)\in [0, L] \times \left[0, T_*+\frac{A_0}{8\pi}\right]$. Therefore we obtain \begin{eqnarray}\label{eq:3.27.201909} \kappa(s, t) \geq \left(\frac{L(0)}{2}-h(T_*)\right) \cdot\min \left\{(\varphi_4)_{\min}(0), ~-\frac{2\pi}{\sqrt{4\pi A_0} h(T_*)}\right\}.
\end{eqnarray}
\begin{lemma}\label{lem:3.7.201909} Let the initial smooth curve $X_0$ be star-shaped and centrosymmetric with respect to $O$, and let Gage's area-preserving flow (\ref{eq:1.1.201909}) exist on a time interval $[0, T_*)$ for some $T_*>0$. Then the curvature of $X(\cdot, t)$ is uniformly bounded on the time interval $[0, T_*]$. \end{lemma}
As a consequence of Lemma \ref{lem:3.7.201909}, one can extend Gage's area-preserving flow (\ref{eq:1.1.201909}) globally.
\begin{corollary}\label{cor:3.8.201909} Gage's area-preserving flow (\ref{eq:1.1.201909}) exists on the time interval $[0, +\infty)$ if the initial smooth curve $X_0$ is star-shaped and centrosymmetric with respect to $O$. \end{corollary}
\begin{proof} Let $T_*$ be a positive number which can be chosen arbitrarily. By Lemma \ref{lem:2.4.201909}, both $r$ and $\frac{\partial r}{\partial \theta}$ are bounded uniformly on the time interval $[0, T_*]$. By Lemmas \ref{lem:3.1.201909}--\ref{lem:3.7.201909}, $\left|\frac{\partial^2 r}{\partial \theta^2}\right|$ is also bounded uniformly on the same interval $[0, T_*]$. Since all the higher derivatives $\frac{\partial^i r}{\partial \theta^i}$ $(i\geq 3)$ satisfy linear parabolic equations, they also have uniform bounds. Therefore, the evolving curve is smooth on any finite time interval, and the flow (\ref{eq:1.1.201909}) with initial curve $X_0$ exists globally. \end{proof}
\section{Convergence and a convexity theorem}
In this section, it is shown that Gage's area-preserving flow can deform every smooth, closed and embedded curve into a convex one if the flow exists on the time interval $[0, +\infty)$. Since we proved in Section 3 that the flow (\ref{eq:1.1.201909}) does not blow up at any finite time when the initial curve $X_0$ is smooth, centrosymmetric and star-shaped, the main result of this paper follows immediately from the theorem below.
\begin{theorem}\label{thm:4.1.201909} Given a smooth, embedded and closed initial curve $X_0$, if the flow (\ref{eq:1.1.201909}) does not blow up in the time interval $[0, +\infty)$, then the evolving curve converges to a finite circle as time goes to infinity. \end{theorem}
In order to prove this result, we need some preparation. From now on, we use subscripts to denote partial derivatives, such as $\kappa_t=\frac{\partial \kappa}{\partial t}, \kappa_s=\frac{\partial \kappa}{\partial s}, \kappa_{ss}=\frac{\partial^2 \kappa}{\partial s^2}, \cdots$. The length of the evolving curve is decreasing during the evolution process and bounded from below by $\sqrt{4\pi A_0}$. Following the parlance of \cite{Grayson-1987}, the time derivative of the length must approach zero on an $\varepsilon$-dense set of sufficiently large times. One can use this fact in the following special case. Denote by $I_n$ the time interval $[n-\frac{1}{10}, n+\frac{1}{10}]$, where $n$ is a natural number. For a positive number $\varepsilon$, the intervals $I_n$ on which $\inf_{I_n}\left(\int_0^L \kappa^2 ds-\frac{4\pi^2}{L}\right)\geq \varepsilon$ form a finite set. So there exists an $N>0$ such that whenever $n>N$ we have $t_n\in I_n$ satisfying \begin{eqnarray}\label{eq:4.1.201909} \int_0^L \kappa(\cdot, t_n)^2 ds-\frac{4\pi^2}{L(t_n)}\leq \varepsilon. \end{eqnarray}
\begin{lemma}\label{lem:4.2.201909} The $L^2$ norm of the difference $(\kappa-\frac{2\pi}{L})$ converges to zero as $t\rightarrow \infty$.
\end{lemma}
\begin{proof} Integrating the evolution equation of $L$ gives $$L(t)-L(0)=-\int_0^t\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 dsdt.$$ Letting $t$ tend to infinity yields \begin{eqnarray}\label{eq:4.2.201909} \int_0^\infty\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 dsdt < L(0). \end{eqnarray} We consider the time derivative of the integral $\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds$: \begin{eqnarray} \frac{d}{dt}\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds &=& -2\int_0^L (\kappa_s)^2 ds + \int_0^L \kappa^2\left(\kappa-\frac{2\pi}{L}\right)^2 ds \nonumber\\ && +\frac{2\pi}{L}\int_0^L \kappa\left(\kappa-\frac{2\pi}{L}\right)^2 ds. \label{eq:4.3.201909} \end{eqnarray} One only needs to deal with the case in which the evolving curve is not convex. In this case, $\inf_{s} \kappa^2=0$. Otherwise, the convex curve $X(\cdot, t)$ will converge to a finite circle (see \cite{Gage-1983}) and the proof is completed. Since $$\sup_{s} \kappa^2 \leq L \int_0^L (\kappa_s)^2 ds,$$ one obtains $$-\int_0^L (\kappa_s)^2 ds\leq -\frac{1}{L}\sup_{s} \kappa^2.$$ By the Cauchy-Schwarz inequality, \begin{eqnarray*} \left|\int_0^L \kappa\left(\kappa-\frac{2\pi}{L}\right)^2 ds\right| &\leq& \sqrt{\int_0^L \kappa^2\left(\kappa-\frac{2\pi}{L}\right)^2 ds} \cdot \sqrt{\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds} \\ &\leq& \sqrt{\int_0^L \sup_{s} \kappa^4 ds} \cdot \sqrt{\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds} \\ &=& \sup_{s} \kappa^2 \cdot \sqrt{L(t)} \cdot \sqrt{\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds}. \end{eqnarray*} Substituting this estimate into the equation (\ref{eq:4.3.201909}) yields \begin{eqnarray*} \frac{d}{dt}\int_0^L\left(\kappa-\frac{2\pi}{L}\right)^2 ds &\leq& -\frac{2}{L(t)}\sup_{s} \kappa^2 + \sup_{s} \kappa^2 \int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds \\ && + \sup_{s} \kappa^2 \cdot \frac{2\pi}{\sqrt{L(t)}} \cdot \sqrt{\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds} \\ &\leq& \sup_{s} \kappa^2 \left(-\frac{2}{L_0}+ \int_0^L\left(\kappa-\frac{2\pi}{L}\right)^2 ds +\frac{2\pi}{\sqrt{L_\infty}} \sqrt{\int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds}\right), \end{eqnarray*} where $L_\infty = \sqrt{4\pi A_0}$. Choose $\varepsilon>0$ in (\ref{eq:4.1.201909}) small enough such that $$-\frac{2}{L_0}+ \frac{3}{2}\varepsilon + \frac{2\pi}{\sqrt{L_\infty}}\sqrt{\frac{3\varepsilon}{2}} \leq 0.$$ We now claim that $\int_0^L(\kappa-\frac{2\pi}{L})^2 ds < \frac{3\varepsilon}{2}$ for all $t\geq t_n$. Firstly, since $\int_0^L \kappa \, ds = 2\pi$, (\ref{eq:4.1.201909}) says that at time $t=t_n$ we have $$\int_0^L(\kappa-\frac{2\pi}{L})^2 ds \leq \varepsilon.$$ Secondly, for $t>t_n$, suppose there exists a $t_*\in \left(t_n, t_n+\frac{11}{10}\right]$ such that \begin{eqnarray*} \int_0^L \left(\kappa(\cdot, t_*)-\frac{2\pi}{L(t_*)}\right)^2 ds= \frac{3\varepsilon}{2} \end{eqnarray*} and \begin{eqnarray*} \int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds < \frac{3\varepsilon}{2} \end{eqnarray*} for all $t\in [t_n, t_*)$. In the time interval $(t_n, t_*)$, we have \begin{eqnarray*} \frac{d}{dt}\int_0^L\left(\kappa-\frac{2\pi}{L}\right)^2 ds &\leq& \sup_{s} \kappa^2 \left(-\frac{2}{L_0}+ \frac{3}{2}\varepsilon + \frac{2\pi}{\sqrt{L_\infty}}\sqrt{\frac{3\varepsilon}{2}} \right) \leq 0. \end{eqnarray*} Thus \begin{eqnarray*} \int_0^L \left(\kappa(\cdot, t_*)-\frac{2\pi}{L(t_*)}\right)^2 ds &\leq& \int_0^L \left(\kappa(\cdot, t_n)-\frac{2\pi}{L(t_n)}\right)^2 ds < \frac{3\varepsilon}{2}, \end{eqnarray*} which contradicts the definition of $t_*$.
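As a purely numerical aside (not needed for the argument), the smallness condition on $\varepsilon$ above is an explicit quadratic constraint in $\sqrt{3\varepsilon/2}$ and is easy to evaluate. The following minimal sketch solves it for the largest admissible $\varepsilon$; the values of $L_0$ and $A_0$ are illustrative only and are not taken from any particular curve in this paper.
\begin{verbatim}
import math

# Illustrative (hypothetical) data for an initial curve.
L0 = 20.0                              # initial length L(0)
A0 = 4.0                               # enclosed area, preserved by the flow
L_inf = math.sqrt(4.0 * math.pi * A0)  # lower bound on L(t)

# Condition: -2/L0 + x^2 + b*x <= 0, where x = sqrt(3*eps/2)
# and b = 2*pi/sqrt(L_inf); take the positive root of the quadratic.
b = 2.0 * math.pi / math.sqrt(L_inf)
x = (-b + math.sqrt(b * b + 8.0 / L0)) / 2.0
eps_max = (2.0 / 3.0) * x * x
print(f"largest admissible eps = {eps_max:.3e}")   # ~1.2e-3 here
\end{verbatim}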
Therefore, for any $\varepsilon>0$, there exists an $N>0$ such that $t>N$ implies that $\int_0^L(\kappa-\frac{2\pi}{L})^2 ds < \frac{3}{2}\varepsilon$. That is to say, one has the limit $$\lim_{t\rightarrow \infty} \int_0^L \left(\kappa-\frac{2\pi}{L}\right)^2 ds =0. $$ \end{proof}
In particular, the integral $\int_0^L (\kappa-\frac{2\pi}{L})^2 ds$ appearing in the equation (\ref{eq:4.3.201909}) is bounded on the time interval $[0, \infty)$. Next, the $L^2$-norm of $(\kappa-\frac{2\pi}{L})_s$ will be estimated.
\begin{lemma}\label{lem:4.3.201909} Under the assumption of Theorem \ref{thm:4.1.201909}, we have the limit $$\lim_{t\rightarrow \infty} \int_0^L (\kappa_s)^2 ds =0. $$ \end{lemma}
\begin{proof} Denote $u=\kappa-\frac{2\pi}{L}$. It is easy to calculate that $$u_t=u_{ss}+\left(u+\frac{2\pi}{L}\right)^2u-\frac{2\pi}{L^2}\int_0^L u^2 ds$$ and $$u_{st}=u_{sss}+3\left(u+\frac{2\pi}{L}\right)uu_s+\left(u+\frac{2\pi}{L}\right)^2u_s.$$ Using these equations, one gets \begin{eqnarray}\label{eq:4.4.201909} \frac{d}{dt}\int_0^L(u_s)^2 ds &=& -2\int_0^L (u_{ss})^2 ds +7\int_0^L u^2(u_s)^2 ds \nonumber\\ && + \frac{18\pi}{L} \int_0^L u (u_s)^2 ds +\frac{8\pi^2}{L^2}\int_0^L(u_s)^2 ds. \end{eqnarray} If $\int_0^L (u_{s})^2 ds>C\int_0^L u^2 ds$ for some positive $C$, then from $$\int_0^L (u_{s})^2 ds=-\int_0^L uu_{ss} ds\leq \sqrt{\frac{1}{C}\int_0^L (u_s)^2 ds\int_0^L (u_{ss})^2 ds}$$ it follows that \begin{eqnarray}\label{eq:4.5.201909} \int_0^L (u_s)^2 ds\leq \frac{1}{C} \int_0^L (u_{ss})^2 ds. \end{eqnarray} We now estimate the terms on the right-hand side of the equation (\ref{eq:4.4.201909}). \begin{eqnarray} \int_0^L u^2(u_s)^2 ds &\leq& \int_0^L u^2 ds \cdot \sup_s(u_s)^2\leq \int_0^L u^2 ds \left(\int_0^L |u_{ss}| ds\right)^2 \nonumber\\ &\leq& \int_0^L u^2 ds \cdot L \int_0^L (u_{ss})^2 ds. \label{eq:4.6.201909} \end{eqnarray} Using (\ref{eq:4.5.201909})--(\ref{eq:4.6.201909}) yields \begin{eqnarray} \int_0^L u(u_s)^2 ds &\leq& \frac{1}{2}\int_0^L u^2(u_s)^2 ds+\frac{1}{2} \int_0^L (u_s)^2 ds \nonumber\\ &\leq& \frac{L}{2}\int_0^L u^2 ds \int_0^L (u_{ss})^2 ds + \frac{1}{2C} \int_0^L (u_{ss})^2 ds. \label{eq:4.7.201909} \end{eqnarray} Substituting the estimates (\ref{eq:4.5.201909})--(\ref{eq:4.7.201909}) into the equation (\ref{eq:4.4.201909}), one obtains \begin{eqnarray*} \frac{d}{dt}\int_0^L(u_s)^2 ds &\leq& \left(-2+(7L+9\pi)\int_0^L u^2 ds+ \frac{9\pi}{C L}+ \frac{8\pi^2}{CL^2}\right)\int_0^L (u_{ss})^2 ds \\ &\leq& \left(-2+(7L_0+9\pi)\int_0^L u^2 ds+ \frac{9\pi}{C L_\infty}+ \frac{8\pi^2}{CL_\infty^2}\right)\int_0^L (u_{ss})^2 ds. \end{eqnarray*} Lemma \ref{lem:4.2.201909} tells us that there exists a positive $t_0$ such that for $t>t_0$, $$(7L_0+9\pi)\int_0^L u^2 ds \leq \frac{1}{2}.$$ If $C\geq \max\{\frac{36\pi}{L_\infty}, \frac{32\pi^2}{L_\infty^2}\}$ then $\frac{9\pi}{C L_\infty}+ \frac{8\pi^2}{CL_\infty^2} \leq 1/2$ and furthermore \begin{eqnarray*} \frac{d}{dt}\int_0^L(u_s)^2 ds \leq -\int_0^L (u_{ss})^2 ds \end{eqnarray*} if $t>t_0$. For a $C^1$ and $2\pi$-periodic function $f$ defined on $\mathbb{R}$ such that $\int_0^{2\pi} f dx=0$, Wirtinger's inequality tells us $$\int_0^{2\pi} (f^\prime)^2 dx\geq \int_0^{2\pi} f^2 dx.$$ Let $s=\frac{L}{2\pi}\phi$; then $$u_s=u_\phi\frac{2\pi}{L} \ \ \ \mbox{and} \ \ \ \ u_{ss}=u_{\phi\phi}\left(\frac{2\pi}{L}\right)^2.$$ By Wirtinger's inequality, we get \begin{eqnarray*} \int_0^L (u_{ss})^2 ds = \left(\frac{2\pi}{L}\right)^3\int_0^{2\pi} (u_{\phi\phi})^2 d\phi \geq \left(\frac{2\pi}{L}\right)^3 \int_0^{2\pi} (u_\phi)^2 d\phi =\left(\frac{2\pi}{L}\right)^2 \int_0^L (u_s)^2 ds.
\end{eqnarray*}
If $C\geq \max\{\frac{36\pi}{L_\infty}, \frac{32\pi^2}{L_\infty^2}\}$, then $$\frac{d}{dt}\int_0^L(u_s)^2 ds \leq -\left(\frac{2\pi}{L_0}\right)^2 \int_0^L (u_s)^2 ds, \ \ \ t>t_0. $$ Thus, for $t>t_0$, \begin{eqnarray}\label{eq:4.8.201909} \int_0^L (u_{s})^2 ds \leq \int_0^L (u_{s}(\cdot, t_0))^2 ds\cdot \exp \left[-\left(\frac{2\pi}{L_0}\right)^2(t-t_0)\right]. \end{eqnarray} For large time, the quantity $\int_0^L (u_{s})^2 ds$ either decays exponentially or is bounded by $$\int_0^L (u_{s})^2 ds \leq \left(1+\max\left\{\frac{36\pi}{L_\infty}, \frac{32\pi^2}{L_\infty^2}\right\}\right)\int_0^L u^2 ds.$$ In either event, it tends to zero as $t\rightarrow\infty$. \end{proof}
Now we go back to the proof of Theorem \ref{thm:4.1.201909}. It follows from Sobolev's inequality that $|\kappa-\frac{2\pi}{L}|$ tends to 0 as $t\rightarrow \infty$. Thus there is a time $T_0>0$ such that $\kappa>0$ for all $t>T_0$; that is, the evolving curve becomes convex. By the result in Gage's original paper \cite{Gage-1986} (see also the note \cite{Chao-Ling-Wang}), we know that the curvature of the evolving curve converges to $\sqrt{\frac{\pi}{A_0}}$ in the $C^\infty$ metric. In Section 4 of the paper \cite{Gao-Zhang-2019} by the first author and Zhang, the evolving curve of GAPF is proved to converge to a fixed limiting circle, neither escaping to infinity nor oscillating indefinitely as $t \rightarrow +\infty$. So the proof of Theorem \ref{thm:1.1.201909} can be completed by combining Corollary \ref{cor:3.8.201909} and Theorem \ref{thm:4.1.201909}.
~\\ \textbf{Acknowledgments} Laiyuan Gao is supported by the National Natural Science Foundation of China (No.11801230). Shengliang Pan is supported by the National Natural Science Foundation of China (No.12071347). The authors are very grateful to Professor Michael Gage for his inspiration on the flow (\ref{eq:1.1.201909}). The first author thanks the Department of Mathematics of UC San Diego for providing a fantastic research atmosphere; part of this paper was finished during his visit to Professor Lei Ni.
\section{Introduction} In the traditional ``core accretion" model of planet formation, growth of planets proceeds in a bottom-up manner. Planets begin their growth as rocky cores, or protoplanets. If these protoplanets reach sufficient size within the lifetime of the gas disk, they will be able to trigger runaway gas accretion, resulting in a gas giant (\citealt{pollack_gas_giants}). This runaway occurs when $M_{\rm{atm}} \sim M_{\rm{core}}$, where $M_{\rm{atm}}$ is the mass of the planet's atmosphere and $M_{\rm{core}}$ is the mass of the planet in solids. The critical core mass, $M_{\rm{crit}}$, where this occurs is usually quoted as $M_{\rm{crit}} \sim 10 M_\oplus$, though the actual mass depends on the disk parameters, especially the opacity and the core's accretion rate (see, e.g. \citealt{raf06}, \citealt{pymc_2015}). A gas giant will not form if the planet cannot reach $M_{\rm{crit}}$ within the lifetime of the gas disk, $\tau_{\rm{disk}}$, which is $\sim 2.5 \, \text{Myr}$ for G stars (\citealt{lifetimes_mamajek}, \citealt{lifetimes_ribas}). Traditional models rely on gravitational focusing to increase the effective radius for collisions. These models, which we will refer to as ``canonical core accretion" or ``planetesimal accretion" models, give growth timescales that are generally fast enough to reach critical core mass for $a\lesssim10\,\text{AU}$, but become longer than the disk dispersal timescale past this distance. (See \citealt{gold}, hereafter GLS, for a review of gas-free regimes.) Observations of exoplanetary systems have challenged this canonical core accretion model in a number of ways. Here we focus on the existence of systems that feature gas giants at wide orbital separations (see, e.g. \citealt{bowler_DI_review} for a review). Of particular note is the planetary system surrounding the star HR 8799, which exhibits a nonhierarchical, multiplanet structure: HR 8799 consists of four gas giant planets ($M\sim10 \, M_J$) at extremely wide projected separations: 14, 24, 38, and 68 AU (\citealt{HR8799_orig}, \citealt{HR8799_fourth}). HR 8799 poses a serious challenge to canonical core accretion models because the last doubling timescale for growth at these distances is far too long for a core to reach the critical mass necessary to trigger runaway growth within $\tau_{\rm{disk}}$. Additional effects, such as gas drag from the planet's atmosphere (\citealt{ii_03}) or damping of the planetesimals' random motions by the nebular gas (\citealt{raf}), can increase the cross section for collisions further. Neither of these effects, however, are sufficient to allow the \textit{in situ} formation of gas giants at $70 \, \text{AU}$. A number of alternative formation scenarios have been proposed to explain the formation of HR 8799. One commonly suggested explanation is that HR 8799 is evidence of an alternative formation scenario known as ``gravitational instability," wherein the gaseous component of the protoplanetary disk becomes unstable to gravitational collapse and subsequently fragments into the observed gas giant planets (\citealt{boss_1997}; see also \citealt{kl_2016} for a more recent review). However, \cite{kratter_gas_giants} pointed out that it is difficult to form fragments of the sizes seen in HR 8799 without having these ``planets" grow to brown dwarf or even M-star masses. The lack of observed brown dwarfs at wide orbital separations provides some evidence against this hypothesis, but additional statistical work is needed (\citealt{bowler_DI_review}). 
Outward scattering after formation at smaller orbital separations is another possibility, but $N$-body simulations by \cite{dodson-robinson_gas_giants} find that scattering is unlikely to produce systems with the multiplanet architecture of HR 8799. In recent years, a third possibility has emerged: a modification to the theory of core accretion commonly referred to as ``pebble accretion," which we will also refer to as ``gas-assisted growth" (\citealt{OK10}, \citealt{pmc11}, \citealt{OK12}, \citealt{lj12}, \citealt{LJ14}, \citealt{lkd_2015}, \citealt{mljb15}, \citealt{vo_2016}, \citealt{igm16}, \citealt{xbmc_2017}, \citealt{rmp_2018}). In pebble accretion, the interaction between solid bodies and the gas disk is considered in detail when determining the growth rates of planets. In particular, gas drag can enhance growth rates by removing energy from small bodies. Particles that deplete their kinetic energy within the gravitational sphere of influence of a larger body can become bound to this parent body, which will eventually lead to accretion of the particle by the growing protoplanet. This process can occur at larger impact parameters than are required for the particle to collide with the core, which in turn increases the accretion cross section. This interaction often affects mm--cm-sized bodies most strongly. Note, however, that for low-density, ``fluffy" aggregates, the radius of bodies most strongly affected by gas drag can be substantially larger. For gas-assisted growth to operate, a reservoir of pebble-sized objects must exist in the protoplanetary disk. Because the sizes of these pebbles are comparable to the $\sim$ mm wavelengths used to measure dust surface densities in the outer regions of protoplanetary disks, observations can directly probe the surface densities in the small solids that fuel gas-assisted growth. These observations find large reservoirs of small, pebble-sized solids (\citealt{andrews_09}, \citealt{andrews}). An example is shown in Figure \ref{fig:andrews09}, which presents disk surface densities measured by \cite{andrews_09}. The figure shows the surface density in particles of radius $0.1 \, \text{mm} - 1 \, \text{mm}$, which is inferred by integrating the size distribution used in the paper ($d N / d r_s \propto r_s^{-3.5}$) from 0.1 to $1 \, \text{mm}$. Performing the integration gives the fraction of the measured solid surface density contained in this size range ($\sim 70\%$). \begin{figure} [h] \centering \includegraphics[width=\linewidth]{rev_sd_comp_3} \caption{Colored lines show the dust surface density in $0.1 \, \text{mm} - 1 \, \text{mm}$ sized particles taken from $870$ $\mu \text{m}$ continuum emission observations of protoplanetary disks done by \cite{andrews_09}. See text for details. Also shown for reference is the value of the solid surface density in the minimum-mass solar nebula (MMSN), appropriate for the outer disk, $30 \, (a/\text{AU})^{-3/2} \, \text{g} \, \text{cm}^{-2}$ (\citealt{weid_mmsn}, \citealt{hay_mmsn}), as well as the fiducial surface density used in this work to match the observations. In the gray shaded region the values of the curves are extrapolations to scales smaller than the observations can resolve.} \label{fig:andrews09} \end{figure}
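The $\sim 70\%$ figure quoted above is easy to reproduce: for $d N / d r_s \propto r_s^{-3.5}$, the mass per size interval scales as $r_s^{-1/2} \, d r_s$, so the mass fraction in $[r_1, r_2]$ is $(\sqrt{r_2}-\sqrt{r_1})/(\sqrt{r_{\max}}-\sqrt{r_{\min}})$. The short sketch below evaluates this, assuming (our assumption for illustration, not a statement about the modeling in \citealt{andrews_09}) that the distribution extends from $r_{\min} \ll 0.1 \, \text{mm}$ up to $r_{\max} = 1 \, \text{mm}$.
\begin{verbatim}
# Mass fraction of a dN/dr_s ~ r_s^(-3.5) size distribution in [r1, r2];
# the mass integrand scales as r_s^3 * r_s^(-3.5) = r_s^(-1/2).
def mass_fraction(r1, r2, r_min, r_max):
    return (r2**0.5 - r1**0.5) / (r_max**0.5 - r_min**0.5)

# Radii in cm; assumed truncations r_min = 0.1 micron, r_max = 1 mm.
print(mass_fraction(0.01, 0.1, 1e-5, 0.1))   # ~0.69, i.e. ~70%
\end{verbatim}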
Given this observed reservoir of small solids, pebble accretion dramatically increases the expected growth rate of large cores. Under fiducial conditions, the timescale for a core's last doubling to canonical values of $M_{\rm{crit}}$ is below the disk lifetime, even at many tens of AU separations. Though fast accretion of solids deposits enough energy to delay the onset of runaway accretion of a gas envelope, once a core has reached several Earth masses, finely tuned disk conditions are required to slow atmospheric growth enough to prevent runaway from ultimately occurring. Thus, growth via pebble accretion seems to predict that wide orbital separation gas giants should be common. However, direct imaging surveys show that planets $\gtrsim 2-5 \, M_J$ are rare at distances $> 30 \, \text{AU}$ (\citealt{brandt_di}, \citealt{chauvin_di}, \citealt{bowler_DI_review}, \citealt{gal_gas_giant_freq}). One possibility for solving this problem is the presence of turbulence in the nebular gas. In this work, by ``turbulence," we generally mean any anomalous root mean square (RMS) velocity of the nebular gas beyond the laminar velocity that arises from radial pressure support in the disk. The main effect of turbulence on pebble accretion is to increase the velocity dispersion of the pebbles due to their coupling with the gas; it is only in Section \ref{final_mass} that we connect our parameterization of the turbulent RMS velocity to the transport of angular momentum in the disk. Turbulence can both increase the kinetic energy of an incoming particle and decrease the core's gravitational sphere of influence. Turbulence also drives particles vertically, reducing the overall densities of small bodies and slowing accretion. Turbulence is usually included in models of protoplanetary growth by pebble accretion only through an increase in the particle scale height, and hence a reduction in the mass density of solids. Some models of the early stages of planetesimal growth discuss the effects of turbulence (e.g. \citealt{gio_2014}, \citealt{hgb_2016}), but these models are concerned with accretion at cross sections comparable to the core's geometric cross section; i.e. they neglect the effects of the core's gravity. In this paper, we use an order-of-magnitude model of pebble accretion (\citealt{rmp_2018}, hereafter R18) to propose a criterion for the formation of gas giants via gas-assisted growth. In particular, R18 investigated how turbulence affects the growth of gas giant cores as a function of core mass. High-mass cores ($\gtrsim 10^{-2}-10^{-1} M_\oplus$) can grow on timescales less than the lifetime of the gas disk, even in strong turbulence. However, for lower-mass cores and stronger turbulence, the range of pebble sizes available for growth is restricted. In this case, the pebble sizes for which growth is most efficient often cannot be accreted, and growth can ``stall" at low core masses. In effect, a core must first achieve a minimum mass before it can quickly grow to $M_{\rm{crit}}$ via gas-assisted growth. In this paper, for our fiducial calculation, we assume that growth to this minimum mass happens by canonical core accretion, which allows us to place semi-major axis limits on where gas giant growth is possible. We also calculate values for the core mass needed at a given semi-major axis for pebble accretion to be rapid, which apply regardless of how the early stages of growth proceed. The assumption that low-mass growth is fueled by planetesimal accretion requires that, in addition to the reservoir of small pebbles, a substantial population of larger planetesimals has formed.
We discuss the ramifications of varying the mass in planetesimals in Section \ref{upper_lim}. Close to the central star, planetesimal accretion can dominate the early growth of planets, with pebble accretion setting the growth timescale for high-mass cores. Far from the central star, however, planetesimal accretion is less efficient, limiting its ability to grow cores to high enough masses that pebble accretion kicks in. Thus, turbulence can set the maximum distance at which gas giant formation is possible via pebble accretion. We find that for quiescent disks, gas giants can form far out in the disk ($a \lesssim 70 \, \text{AU}$), but for stronger turbulence, this maximum distance is smaller (e.g. $a \lesssim 40 \, \text{AU}$ for $\alpha \gtrsim 10^{-2}$). Furthermore, while disks with weaker turbulence can have gas giants at wider orbital separation, the weaker viscosities in these disks mean that the masses of the gas giants formed are likely lower ($\lesssim 2 \, M_J$), which would preclude them from being detected by the current generation of direct-imaging surveys. Therefore, there may exist a population of wide orbital separation gas giants that have yet to be found due to their low luminosities. In Section \ref{overview}, we review our model, which is discussed in detail in R18. In Section \ref{wide_sep}, we discuss how gas-assisted growth operates at wide orbital separation, contrasting the rapid growth at high core mass with the slower growth for low-mass cores. In Section \ref{gas_giants}, we explore how turbulence can place limits on the semi-major axes where gas giants can form. In Section \ref{final_mass}, we investigate the implications for the final masses of gas giants if turbulence plays a role in gap opening in addition to early core growth. Finally, in Section \ref{summary}, we summarize our results and give our conclusions. \section{Model Overview} \label{overview} In this section, we will give a brief summary of the ideas behind pebble accretion and how they are implemented in our model. We will focus on pebble accretion at the mass scales relevant to limiting gas giant growth -- i.e. masses in the range $10^{-4} M_\oplus \lesssim M \lesssim 10^{-2} M_\oplus$ (see Figure \ref{fig:m_min}). A more general and in-depth discussion can be found in the Appendix, and in R18. \subsection{Basic Pebble Accretion Processes} \label{basic} In this section, we discuss the basic parameters that go into calculating the growth timescale and contrast gas-assisted growth with growth via planetesimal accretion. The setup for our model consists of a large body, or protoplanetary ``core," growing by accreting a population of small bodies. Our calculation is performed for a given size of small body, expressed either in terms of the small body's mass, $m$, or its radius $r_s$. Note that, practically speaking, the important parameter for our calculation is the particle's Stokes number, $St$ (see Section \ref{t_grow}). We can convert from Stokes number to radius or mass by assuming a density for the small bodies. In what follows, we will assume a density of $\rho_s = 2 \, \text{g} \, \text{cm}^{-3}$, which is appropriate for rocky or icy bodies. We note, however, that lower density, fluffy aggregates will have higher radii at a given Stokes number. 
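To make the conversion concrete, a useful order-of-magnitude relation at the disk midplane is the Epstein-regime result $St \approx \pi \rho_s r_s / (2 \Sigma)$; the sketch below evaluates it with the fiducial gas surface density $\Sigma = 500 \, (a/\text{AU})^{-1} \, \text{g} \, \text{cm}^{-2}$ adopted in this work. This is a standard simplified stand-in for the full drag-law treatment in the Appendix, and applies only while the Epstein regime holds.
\begin{verbatim}
import math

def sigma_gas(a_AU):
    """Fiducial gas surface density: 500 (a/AU)^(-1) g cm^-2."""
    return 500.0 / a_AU

def stokes_epstein(r_s_cm, a_AU, rho_s=2.0):
    """Midplane Epstein-regime Stokes number, St ~ pi rho_s r_s / (2 Sigma)."""
    return math.pi * rho_s * r_s_cm / (2.0 * sigma_gas(a_AU))

print(stokes_epstein(0.1, 30.0))   # a 1 mm pebble at 30 AU: St ~ 0.02
\end{verbatim}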
In general, the growth timescale for the large body of mass $M$ is given by \begin{align} t_{\rm{grow}} = \left(\frac{1}{M}\frac{dM}{dt}\right)^{-1}, \end{align} while the growth rate, $dM/dt$, can be expressed as \begin{align} \label{eq:m_dot} \frac{dM}{dt} = m (n \sigma_{\rm{acc}} v_\infty) = m \left(\frac{f_s \Sigma}{2 H_p m} \right) (2 R_{\rm{acc}}) (2 H_{\rm{acc}}) v_\infty \, . \end{align} Here $n$ is the volumetric number density of small bodies, $\sigma_{\rm{acc}}$ is the accretion cross section, and $v_\infty$ is the velocity at which small bodies approach the large body. In the second equality, we have set $n=f_s \Sigma / (2 H_p m)$, where $H_p$ is the scale height of the small bodies, $\Sigma$ is the surface density of the gas, and $f_s\equiv\Sigma_p/\Sigma$ is the solid-to-gas mass ratio in the disk. We have also decomposed $\sigma_{\rm{acc}}$ into the product of length scales parallel and perpendicular to the disk plane, $2 R_{\rm{acc}}$ and $2 H_{\rm{acc}}$, respectively. Combining these two expressions gives \begin{align} \label{eq:t_grow} t_{\rm{grow}} = \frac{M H_p}{2 f_s \Sigma v_\infty R_{\rm{acc}} H_{\rm{acc}}} \; . \end{align} Thus, once $H_p$, $v_\infty$, $R_{\rm{acc}}$, and $H_{\rm{acc}}$ are determined, $t_{\rm{grow}}$ can be calculated. For growth that proceeds by accretion of massive planetesimals, the effects of gas drag are generally negligible (though see \citealt{raf} for a discussion of the effects of gas drag on smaller planetesimals of size $\lesssim 1 \, \rm{km}$). In this case, the value of $R_{\rm{acc}}$ is determined by the maximum impact parameter at which a small body will be gravitationally focused into a collision with the core, \begin{align} R_{\rm{focus}} = R \left( 1 + \frac{v_{\rm{esc}}^2}{v_{\infty}^2} \right )^{1/2} \; , \end{align} where $R$ is the physical radius of the core, and $v_{\rm{esc}} = \sqrt{2 G M /R}$ is the escape velocity from the core. An important parameter for calculating $R_{\rm{focus}}$ is the core's ``Hill radius," which is the characteristic radius at which the large body's gravity strongly influences the trajectories of the small bodies. For a big body of mass $M$ orbiting a star of mass $M_*$ at a semi-major axis $a$, the Hill radius, $R_H$, is given by (\citealt{hill}) \begin{align} R_H = a \left( \frac{M}{3 M_*} \right)^{1/3} \; , \end{align} which can be obtained by determining the length at which the gravity of the large body is equal to the tidal gravity from the central star. Particles that pass within distances $\sim$ $R_H$ of the core move on complex trajectories that cannot be expressed as a simple function of impact parameter (\citealt{hill_enc}). Particles that emerge from the Hill radius without colliding with the large body will generally have their velocities relative to the core excited up to $v_\infty \sim R_H \Omega \equiv v_H$ in a random direction (GLS), where $\Omega = \sqrt{G M_* / a^3}$ is the Keplerian angular frequency and $a$ is the semi-major axis of the core's orbit. The quantity $v_H$ is known as the ``Hill velocity." If $v_\infty \sim v_H \ll v_{\rm{esc}}$, it is straightforward to show that \begin{align} \label{eq:r_focus} R_{\rm{focus}} \sim \sqrt{R R_H} \; .\end{align} Since interactions with the core excite planetesimals to a random velocity $v_\infty \sim v_H$, this is the largest capture radius possible for planetesimal accretion without invoking some damping mechanism to lower the planetesimal velocity below $v_H$. Note, however, that since $R \ll R_H$, $R_{\rm{focus}} < R_H$.
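To attach numbers to these scales, the sketch below evaluates $R$, $R_H$, $v_H$, and $R_{\rm{focus}} \sim \sqrt{R R_H}$ for an illustrative case (a $5 \, M_\oplus$ core of density $2 \, \text{g} \, \text{cm}^{-3}$ at $30 \, \text{AU}$ around a solar-mass star); the particular numbers are our own illustration.
\begin{verbatim}
import math

G     = 6.674e-8       # gravitational constant, cgs
M_sun = 1.989e33       # g
M_E   = 5.972e27       # g
AU    = 1.496e13       # cm

M, a, rho_p = 5.0 * M_E, 30.0 * AU, 2.0
R     = (3.0 * M / (4.0 * math.pi * rho_p))**(1.0 / 3.0)  # core radius
R_H   = a * (M / (3.0 * M_sun))**(1.0 / 3.0)              # Hill radius
Omega = math.sqrt(G * M_sun / a**3)                       # Keplerian frequency
v_H   = R_H * Omega                                       # Hill velocity
R_foc = math.sqrt(R * R_H)                                # R_focus ~ sqrt(R*R_H)

print(f"R = {R:.2e} cm, R_H = {R_H:.2e} cm")
print(f"v_H = {v_H/1e2:.0f} m/s, R_focus = {R_foc:.2e} cm")
print(f"R_H/R_focus = {R_H/R_foc:.0f}")   # ~70: Hill-sphere capture wins
\end{verbatim}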
In gas-assisted growth, on the other hand, the value of $R_{\rm{acc}}$ can be much larger than $R_{\rm{focus}}$. For ``pebble-sized" small bodies, the interaction between the small bodies and the gas is important when calculating the accretion rate. In particular, gas drag can remove kinetic energy from the small body as it interacts with the growing core. If the work done by gas drag is sufficiently large, small bodies that otherwise would have merely been deflected by the core's gravity can become gravitationally bound to the core, further reducing their energy and causing them to inspiral and eventually be accreted by the core. This can dramatically increase the impact parameters at which accretion will occur. As discussed by, e.g. R18, in certain regions of parameter space, the core can accrete over the entirety of its Hill sphere, i.e. accretion proceeds with $R_{\rm{acc}} = R_H \gg R_{\rm{focus}}$. \subsection{Pebble Accretion at Different Particle Radii} \label{part_size} The Hill radius represents the largest distance at which particles can be captured. However, not all sizes of particles can be captured at $R_H$. To fully characterize the scale at which pebbles are captured, we need to introduce two additional radii. The first radius is the wind shearing (WISH) radius, which is the radius interior to which the core's gravity dominates over the differential acceleration between the small body and the core due to gas drag, \begin{align} R_{WS}^\prime = \sqrt{\frac{G(M+m)}{\Delta a_{WS}}} \; , \end{align} where $\Delta a_{WS}$ is the differential acceleration between the two bodies due to gas drag (\citealt{pmc11}). Particles that approach the core at impact parameters $>R_{WS}^\prime$ will be pulled off the core by gas drag even if they are inside of $R_H$. Thus, the value of $R_{\rm{acc}}$ is given by \begin{align} R_{\rm{acc}} = \min(R_H,R_{WS}^\prime) \; . \end{align} However, the value of $R_{WS}^\prime$ depends on the size of the small body being accreted, unlike $R_H$. To see this, we note that if $M\gg m$, we can rewrite $R_{WS}^\prime$ as \begin{align} \label{eq:r_ws_gen} R_{WS}^\prime \approx \sqrt{\frac{G M t_s}{v_{\rm{rel}}}} \; . \end{align} Here $t_s$ is the stopping time of the small body, \begin{align} \label{eq:t_s} t_s \equiv \frac{m v_{\rm{rel}}}{F_D\left(m\right)} \; , \end{align} $v_{\rm{rel}}$ is the relative velocity between the small body and the gas, and $F_D\left(m\right)$ is the drag force on the small body (see the Appendix for a discussion of how the correct $v_{\rm{rel}}$ for calculating $R_{WS}^\prime$ is determined). The stopping time parameterizes the size of the particle in terms of its interaction with the gas. Qualitatively, for large core masses only the stopping time of the smaller body is relevant because the core is essentially unaffected by gas drag. The largest particles that can deplete their kinetic energy will have $R_{WS}^\prime > R_H$, and will be able to accrete over the entirety of the core's Hill sphere, while smaller particles will have $R_{WS}^\prime < R_H$ and will only be accreted at more modest values of impact parameter. See the first two panels of Figure \ref{fig:rs_ex}. \begin{figure} [h] \centering \includegraphics[width=1.0\linewidth]{rev_illus_ex2} \caption{Illustration of how particle capture proceeds at different small-body radii. The central black circle represents the planet, and the blue circles represent incoming particles. 
The gray shaded region denotes the extent of the planet's atmosphere (or the planet's radius for $R>R_b$), and the yellow shaded region shows the region where incoming particles can be accreted. \textit{Left Panel}: for large particle radii, $R_{WS}^\prime > R_H$, so particles that enter $R_H$ and deplete their kinetic energy will be accreted by the core. Particles with impact parameters $<R_{WS}^\prime$ but $>R_H$ will not be able to accrete. \textit{Middle Panel}: for intermediate sizes of particles, $R_{WS}^\prime < R_H$. These particles need to pass within $R_{WS}^\prime$ to be accreted, as if they pass within $R_H$ but at distances $>R_{WS}^\prime$ they will be sheared off the core by gas drag. \textit{Right Panel}: small particles will have $R_{WS}^\prime < R_b$. At this point, the core's gravitational sphere of influence is inside its atmosphere. Particles of these sizes will couple to the local gas flow, which will flow around the core's static atmosphere. Thus, particles of a size such that $R_{WS}^\prime < R_b$ will not be accreted via pebble accretion.} \label{fig:rs_ex} \end{figure} Pebble accretion will not continue down to arbitrary sizes of small bodies. As $R_{WS}^\prime$ decreases with decreasing particle size, we will eventually reach the scale of the core's atmosphere. Because the atmosphere is essentially static, and the flow velocity is subsonic, the local gas flow will not be able to penetrate into the core's atmosphere. Gas will instead flow around the static atmosphere held by the core. See, e.g. \cite{ormel_flows} for an example of this behavior in the context of a planet embedded in a protoplanetary disk. We take the scale of the core's atmosphere to be determined by the Bondi radius, $R_b$, which is the scale at which the escape velocity from the core is equal to the local isothermal sound speed $c_s = \sqrt{k T / \mu}$, where $k$ is Boltzmann's constant, $T$ is the temperature, and $\mu$ is the mean molecular weight of the gas molecules. Thus, $R_b$ is given by \begin{align} R_b = \frac{G M}{c_s^2} \; . \end{align} Once particles are small enough that $R_{WS}^\prime < R_b$, they need to penetrate into the core's atmosphere to become bound to the core. However, these small particles will couple strongly to the gas, which will flow around $R_b$, stopping the particles from accreting. Thus, we take $R_{WS}^\prime = R_b$ to set the smallest size of particles that can be accreted; see the right panel of Figure \ref{fig:rs_ex}. We note here that we are neglecting any effects from potential ``recycling" of the core's atmosphere by the protoplanetary disk, but see, e.g. \cite{osk_2015} and \cite{ll_2017} for discussions of this effect. If the gas flow is able to penetrate into the core's atmosphere (e.g. \citealt{faw2015}), or if the core's atmospheric mass is small due to the high accretion luminosity, the core may be able to accrete the small particle sizes that we exclude. However, the accretion timescales for these particles are extremely long (see Section \ref{high_core}), so even if these particles can indeed accrete, their inclusion makes a negligible contribution to the total growth rate. 
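For orientation, it is straightforward to compare $R_b$ with $R_H$. The sketch below does so for the same illustrative $5 \, M_\oplus$ core at $30 \, \text{AU}$, using the fiducial temperature profile $T = 200 \, (a/\text{AU})^{-3/7} \, \text{K}$ and the mean molecular weight $\mu \approx 3.93 \times 10^{-24} \, \text{g}$ adopted in this work; the numbers are our own illustration.
\begin{verbatim}
import math

G, M_sun, M_E, AU = 6.674e-8, 1.989e33, 5.972e27, 1.496e13   # cgs
k_B, mu = 1.381e-16, 3.93e-24      # Boltzmann constant; 70/30 H2/He mix

a, M = 30.0 * AU, 5.0 * M_E
T   = 200.0 * (a / AU)**(-3.0 / 7.0)       # fiducial temperature profile
c_s = math.sqrt(k_B * T / mu)              # isothermal sound speed
R_b = G * M / c_s**2                       # Bondi radius
R_H = a * (M / (3.0 * M_sun))**(1.0 / 3.0)

print(f"T = {T:.0f} K, c_s = {c_s/1e5:.2f} km/s")
print(f"R_b = {R_b:.2e} cm, R_H = {R_H:.2e} cm, R_b/R_H = {R_b/R_H:.2f}")
\end{verbatim}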
Cores that have $R_b < R$ will not be able to accrete a substantial amount of gas from the nebula, which occurs for planetary masses $M<M_a$, where \begin{align} M_a &\equiv \frac{c_s^3}{G}\left(\frac{3}{4 \pi G \rho_p}\right)^{1/2}\\ &\approx 2 \times 10^{-4} M_\oplus \left( \frac{a}{30 \, \rm{AU}} \right)^{-9/14} \left( \frac{\rho_p}{2 \, \rm{g} \, \rm{cm}^{-3}} \right)^{-1/2} \end{align} where $\rho_p$ is the density of the protoplanet (e.g. \citealt{raf06}). The lowest core masses considered in this work are below this threshold. In this case, the considerations discussed above will still apply with the protoplanet's radius $R$ in place of its Bondi radius (i.e. accretion will cease for $R_{WS}^\prime < R$). In summary, the largest sizes of particles that can deplete their kinetic energy can be captured at the core's Hill radius $R_H$. For smaller sizes of particles, the WISH radius will eventually become smaller than $R_H$, which limits the impact parameters where accretion can occur. Finally, the smallest sizes of particles will have $R_{WS}^\prime < R_b$. These particles will not be able to penetrate into distances $<R_{WS}^\prime$, and therefore will not be able to accrete via pebble accretion. \subsection{Summary of Timescale Calculation} \label{t_grow} In this section, we briefly discuss how $t_{\rm{grow}}$, as well as the parameters necessary for calculating the growth timescale ($R_{\rm{acc}}$, $H_{\rm{acc}}$, $H_p$, and $v_\infty$, see Equation \ref{eq:t_grow}), are determined. We also define symbols that will be used in the rest of the paper. For a summary of how these parameters are calculated, see the Appendix. For a more detailed discussion of how the calculation is performed, see R18. Besides the orbital separation, $a$, the mass of the planet, $M$, and the stellar mass, $M_*$, the other input parameters needed to calculate the growth timescale are the radius of the small bodies being accreted, $r_s$, and the strength of the turbulence, which is parameterized by the Shakura-Sunyaev $\alpha$ parameter (\citealt{ss_alpha}). Using the value of $r_s$, we can calculate the stopping time of the particle, $t_s$, and the particle's Stokes number, \begin{align} St \equiv t_s \Omega \; . \end{align} The Stokes number is a dimensionless measurement of the particle's size in terms of how well coupled the particle is to the gas and is the directly relevant parameter for calculating the effects of gas drag on the particle. We also note that using this form of the Stokes number for expressions involving turbulence (e.g. Equations \ref{turb_rel_gas} and \ref{eq:v_turb_kep}) implicitly assumes that the turnover time of the largest-scale turbulent eddies is equal to the local orbital period. The value of $\alpha$ parameterizes the strength of the local turbulence in terms of the turbulent viscosity: $\nu_t = \alpha c_s H_g$, where $H_g = c_s/\Omega$ is the scale height of the gas disk. In terms of $\alpha$, the local turbulent gas velocity is given by \begin{align} \label{eq:v_gas} v_{\rm{gas},t} = \sqrt{\alpha} c_s \; . \end{align} We use $\alpha$ mainly to parameterize the magnitude of the turbulent gas velocity, which is the quantity that affects the pebble accretion process. It is only in Section \ref{final_mass} that we explicitly use $\alpha$ to parameterize the viscosity. While the $\alpha$ model of accretion disks is generally invoked to transport angular momentum inward and explain measured accretion rates in protoplanetary disks (see, e.g.
\citealt{mlb_2005}), for our purposes, $\alpha$ is fundamentally a local parameter and is not necessarily connected with the accretion rate onto the central star. The most commonly cited mechanism for generating turbulence in protoplanetary disks is the magnetorotational instability (MRI; for a review, see \citealt{b_2009}). Simulations of MRI under ideal magnetohydrodynamical (MHD) conditions find effective $\alpha$ values of $10^{-2}-10^{-1}$ (e.g. \citealt{hgb_1995}), while MHD simulations that include nonideal MHD effects such as ambipolar diffusion find lower $\alpha$ values, in the range $10^{-4}-10^{-3}$ (e.g. \citealt{bs_2011}). In these simulations, the RMS turbulent gas velocity can be approximated to order of magnitude by taking $v_{\rm{gas},t} = \sqrt{\alpha} c_s$, as in Equation \eqref{eq:v_gas} (e.g. \citealt{xbmc_2017}). More recent works argue that magnetically driven winds can generate observed accretion rates, in which case protoplanetary disks could be quite inviscid (see, e.g. \citealt{b_2016}, \citealt{som_2016}). Even in this case, however, pure fluid instabilities, such as convective overstability (see, e.g. \citealt{l_2014}) or the zombie vortex instability (see, e.g. \citealt{mpj_2015}), spiral density waves raised by giant planets (see, e.g. \citealt{bnh_2016}), and hydrodynamical turbulence (see, e.g. \citealt{fnt_2017}), can all generate large RMS velocities for which the effective $\alpha$ value in Equation \eqref{eq:v_gas} is not equal to the $\alpha$ value characterizing angular momentum transport. Once $a$, $M$, $M_*$, $r_s$, and $\alpha$ are specified, we can calculate the quantities needed to determine $t_{\rm{grow}}$. To begin, in order to determine the rate at which particles encounter the core, as well as the kinetic energy of the small body relative to the protoplanet, we need to calculate the small body's velocity far from the core. Because we take the core to move at the local Keplerian velocity, we take $v_\infty$ to be set by the larger of the particle's shear velocity, $v_{\rm{shear}} = R_{\rm{acc}} \Omega$, and its velocity relative to the local Keplerian velocity, which is due to the particle's interaction with both the laminar and turbulent components of the gas velocity. We use $v_{pk}$ to denote the value of this velocity relative to the Keplerian orbital velocity. Thus, $v_\infty$ is given by \begin{align} v_\infty = \max(v_{pk},v_{\rm{shear}}) \; . \end{align} For every particle size, we calculate both the kinetic energy of the particle before the encounter, \begin{align} \label{eq:ke} KE = \frac{1}{2} m v_\infty^2, \end{align} and the work done by gas drag during the encounter, \begin{align} \label{eq:work} W =2 F_D(v_{\rm{enc}}) R_{\rm{acc}}, \end{align} where $v_{\rm{enc}}$ is the velocity of the small body relative to the gas during its encounter with the core. For a discussion of how $F_D$ and $v_{\rm{enc}}$ are calculated, see the Appendix. Particles that have $KE > W$ cannot accrete; i.e. we set $t_{\rm{grow}} = \infty$ for such particles, regardless of the values of the parameters in Equation \eqref{eq:t_grow}. \footnote{Particles with $R_{\rm{acc}}=R_H$ and $v_\infty = v_H$ merely have their growth timescale enhanced by a factor $KE/W$ for $KE>W$; see the appendix for more details.} In practice, this sets the upper limit on the particle sizes that can be accreted via gas-assisted growth.
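As a minimal illustration of this energy criterion, note that with the defining relation $F_D = m v_{\rm{enc}}/t_s$ and the rough closure $v_{\rm{enc}} \sim v_\infty$ (an assumption of this sketch, not of the full model in the Appendix), $KE < W$ reduces to $v_\infty t_s < 4 R_{\rm{acc}}$. The snippet below applies this reduced test to an illustrative pebble--core pair.
\begin{verbatim}
def can_accrete(v_inf, t_s, R_acc):
    """Reduced KE < W test (per unit pebble mass), with v_enc ~ v_inf."""
    KE = 0.5 * v_inf**2
    W  = 2.0 * (v_inf / t_s) * R_acc   # W/m = 2 (F_D/m) R_acc
    return KE < W                      # equivalent to v_inf * t_s < 4 R_acc

# Illustrative numbers: an St = 0.02 pebble approaching at the Hill
# velocity of a 5 Earth-mass core at 30 AU (values from the sketch above).
Omega, R_H = 1.21e-9, 7.7e12            # s^-1, cm
v_inf, t_s = R_H * Omega, 0.02 / Omega
print(can_accrete(v_inf, t_s, R_H))     # True: v_inf*t_s = 0.02 R_H << 4 R_H
\end{verbatim}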
In addition to setting the work done on the particle during its encounter, the impact parameter for accretion $R_{\rm{acc}}$ determines the width of the accretion cross section. As stated in Section \ref{part_size}, $R_{\rm{acc}}$ is given by \begin{align} R_{\rm{acc}} = \min(R_{H}, R_{WS}^\prime) \; . \end{align} For more details on how $R_{WS}^\prime$ is calculated, see the Appendix. The height of the accretion rectangle $H_{\rm{acc}}$ is the minimum of the particle scale height $H_p$ and the impact parameter for accretion $R_{\rm{acc}}$: \begin{align} H_{\rm{acc}} = \min(R_{\rm{acc}}, H_{p}) \; , \end{align} as particles with a vertical extent larger than $R_{\rm{acc}}$ will not be accreted. The particle scale height is also needed because it sets the density of the small bodies; it can be set by the Kelvin-Helmholtz shear instability or by turbulent diffusion, \begin{align} \label{eq:h_p} H_p =& \max(H_{KH},H_t) \nonumber \\ =& \max\left[\frac{2 \eta v_k}{\Omega} \min\left(1,St^{-1/2}\right),H_g\min\left(1,\sqrt{\frac{\alpha}{St}}\right) \right] \; , \end{align} where $v_k$ is the local Keplerian orbital velocity, $\eta \equiv c_s^2/\left(2 v_k^2 \right)$ is a measure of the pressure support in the protoplanetary disk, and $\eta v_k$ is the velocity of the nebular gas relative to $v_k$ due to radial pressure support (i.e. the non-turbulent velocity of the gas). \subsection{Values of Parameters} For the purposes of reporting numerical values in what follows, we use a fiducial set of parameters that specify the properties of the protoplanetary disk at a given semi-major axis. The effect of varying some of the parameters is discussed in R18. We take the central star to be a solar mass, $M_* = M_\odot$. The small bodies and the core are taken to be spherical, with density $\rho_s = 2 \, \text{g} \, \text{cm}^{-3}$. We assume the gas disk is 70\% H$_2$ and 30\% He by mass, leading to a mean molecular weight of $\mu \approx 2.35 \, m_H \approx 3.93\times 10^{-24} \, \text{g}$, with a neutral collision cross section $\sigma \approx 10^{-15} \, \text{cm}^2$. The temperature and gas surface density profiles are taken to be power laws in the semi-major axis. For the temperature profile we take $T = T_0 (a/\text{AU})^{-3/7} \, \text{K}$ (\citealt{cg_97}), where $T_0 = 200 \, \text{K}$, which is appropriate for a disk irradiated by a star of luminosity $L \sim 3 L_\odot$ (e.g. \citealt{igm16}). For the gas surface density we use $\Sigma = 500 (a/\text{AU})^{-1} \, \text{g} \, \text{cm}^{-2}$, and we assume a constant solid-to-gas mass ratio of $f_s = 1/100$. These choices are made to match solid surface densities found in observations of protoplanetary disks (see Figure \ref{fig:andrews09}). \section{Gas-Assisted Growth Timescales at Wide Orbital Separation} \label{wide_sep} In this section, we discuss the timescales for growth via pebble accretion at wide orbital separation ($\gtrsim 10$ AU), where canonical core accretion models are slow. We find that even in the presence of strong ($\alpha \gtrsim 10^{-2}$) turbulence, the growth timescales for high-mass cores are far shorter for pebble accretion than for planetesimal accretion. Indeed, the doubling timescale is so fast at these orbital separations that it raises the question of what inhibits this rapid growth. To investigate this question, we also show that, unlike for planetesimal accretion, gas-assisted growth is generally slower for low core masses, particularly when turbulence is strong.
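Before turning to specific timescales, we collect the fiducial choices of the previous section into a small helper (a convenience sketch of our own, not code from R18), which the numerical examples in this paper implicitly use:
\begin{verbatim}
import math

G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13   # cgs
k_B, mu = 1.381e-16, 3.93e-24

def disk(a_AU, M_star=M_sun, f_s=0.01):
    """Fiducial disk model from the 'Values of Parameters' subsection."""
    a     = a_AU * AU
    T     = 200.0 * a_AU**(-3.0 / 7.0)        # K
    c_s   = math.sqrt(k_B * T / mu)           # cm/s
    Omega = math.sqrt(G * M_star / a**3)      # s^-1
    Sigma = 500.0 / a_AU                      # g/cm^2
    return dict(T=T, c_s=c_s, Omega=Omega, H_g=c_s / Omega,
                Sigma=Sigma, Sigma_p=f_s * Sigma,
                eta=0.5 * (c_s / (Omega * a))**2)

print(disk(30.0))
\end{verbatim}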
\subsection{Growth at High Core Mass} \label{high_core} In the case of planetesimal accretion, the growth timescale for a core to reach $M_{\rm{crit}}$ is dominated by the last doubling timescale of the core; i.e. the slowest growth occurs for the highest core masses. Thus, when considering whether growth of wide orbital separation gas giants is possible, most authors examine the growth timescales at large core masses, which limit growth in canonical core accretion. The modification to the core accretion model presented by gas-assisted growth, on the other hand, substantially decreases the last doubling time at $M_{\rm{crit}}$ to well below the disk lifetime, even at wide orbital separations (e.g. \citealt{lj12}). While turbulence can reduce the rapid growth rates provided by pebble accretion, our modeling reveals that, at high core mass, growth remains efficient even in the presence of strong turbulence. This agrees with results from MHD simulations by \cite{xbmc_2017}, who numerically explore the growth rates of high-mass planetary cores in the presence of MRI turbulence. An example of our results is shown in Figure \ref{RvsT1}, which shows the growth timescale for a $5\,M_\oplus$ planet located at 30 AU. For particle sizes $r_s \gtrsim 50 \, \rm{cm}$, the growth timescale increases $\propto St$, as these particles require many orbital crossings to fully dissipate their kinetic energy (see the appendix and R18). As we decrease the small-body radius, we encounter small-body sizes $\left(1 \, \rm{cm} \lesssim r_s \lesssim 50 \, \rm{cm}\right)$ that are large enough that wind-shearing and scale height considerations are unimportant, allowing them to accrete over the entire Hill sphere at a rapid rate that is independent of $r_s$. As we continue to move to smaller pebble radii, eventually the particle size becomes small enough that the WISH radius and the particle scale height become important, decreasing the accretion rate. Finally, we reach the point where $R_{WS}^\prime < R_b$, which marks the pebble size at which particles couple so strongly to the gas that they flow around $R_{b}$ without accreting. This causes the cutoff seen on the left side of the figure. For all values of $\alpha$ shown in Figure \ref{RvsT1}, there exists a broad range of small-body sizes, $r_s$, for which gas-assisted growth is able to operate, and the growth timescale of the core is less than the disk lifetime. Though turbulence erodes accretion of the smallest pebbles that were available in the laminar case, there still exists a range of particle sizes where rapid growth is possible. \begin{figure} [h] \centering \includegraphics[width=1.05\linewidth]{rev_RvsT_30AU} \caption{The growth timescale for a protoplanet as a function of the radius of the small bodies the core is accreting. The timescale is plotted for several values of $\alpha$, which measures the strength of turbulence in the disk. The values shown are for a $5 \, M_\oplus$ core at $30 \, \text{AU}$. The lines are cut off for particles that are unable to accrete according to the energy criteria discussed in Section \ref{t_grow}.} \label{RvsT1} \end{figure} Figure \ref{RvsT1} shows the emergence of a regime where the core can accrete larger particles at a minimal timescale that is independent of $r_s$ and $\alpha$. This timescale is reached for cores accreting in 2D (i.e. $H_{\rm{acc}} = H_p$) over the entirety of their Hill radius. As discussed in, e.g. R18, the maximal possible approach velocity in the 2D regime occurs when particles shear into $R_H$, i.e.
when $v_\infty = v_H$; larger velocities will excite pebbles vertically, causing the core to accrete in 3D. Setting $H_{\rm{acc}} = H_p$, $R_{\rm{acc}} = R_H$ and $v_\infty = v_H$ in Equation \eqref{eq:t_grow}, we see that this timescale is given by \begin{align} \label{eq:t_min} t_{\rm{Hill}} = \frac{M}{2 f_s \Sigma R_H^2 \Omega} \; . \end{align} In terms of fiducial parameters, $t_{\rm{Hill}}$ can be expressed as \begin{align} \label{eq:t_min_fid} t_{\rm{Hill}} \approx 4 \times 10^4 \left( \frac{a}{30\,\text{AU}}\right)^{1/2} \left( \frac{M}{5\,M_\oplus} \right)^{1/3} \text{years}. \end{align} This timescale, which we will refer to as the ``Hill timescale,'' is faster than gravitational focusing by a factor $R_H^2/R_{\rm{focus}}^2 \approx R_H/R$. If we approximate the star and the planet as uniform density spheres and take $\rho_* \sim \rho_p$, we have $R_H/R \sim a/R_*$. Thus, not only is the enhancement in growth rate substantial, the enhancement of pebble accretion relative to gravitational focusing is an increasing function of semi-major axis. The qualitative features of growth discussed above apply over a wide range of core masses. This can be seen from examination of Figure \ref{fig:heatmap_a30}, which shows the growth timescale for protoplanets as a function of both core mass and small-body radius. The four panels show the growth timescale for four different values of $\alpha$, while each individual panel shows the growth rate plotted as a function of both $r_s$ and $M$. As can be seen in Figure \ref{fig:heatmap_a30}, growth at ``high'' core masses $\left( \gtrsim 10^{-3}-10^{-2} M_\oplus \right)$ proceeds in a similar manner to what is shown in Figure \ref{RvsT1}: the largest pebbles accrete on the rapid Hill timescale, independent of the small-body radius $r_s$, while smaller pebbles accrete less efficiently. Thus, as long as there exists a reservoir of particles that are able to accrete at $t_{\rm{Hill}}$, growth at higher core mass proceeds rapidly, even in the presence of strong turbulence. \begin{figure*} [h] \centering \includegraphics[width=\linewidth]{rev_heatmap_30AU} \caption{The growth timescale as a function of core mass, for $a = 30\,\text{AU}$. The red hatched region indicates where no accretion is possible. In the white regions particles can still accrete via other processes, e.g. gravitational focusing.} \label{fig:heatmap_a30} \end{figure*} However, it is also clear from examination of Figure \ref{fig:heatmap_a30} that below some ``minimum'' mass, growth operates in a qualitatively different manner. We discuss the reasons for this change, as well as the ramifications for planetary growth, in the next section. \subsection{Growth Timescales for Low-Mass Protoplanets} In the last section, we showed that growth at large core masses is quite fast in gas-assisted growth, even in the presence of strong turbulence. This efficiency brings another issue, however, as we now need to understand why wide orbital separation gas giants are not ubiquitous given these rapid growth rates. As we will show below, at wide orbital separations and low core masses, growth timescales can be substantially longer than $t_{\rm{Hill}}$. Figure \ref{fig:heatmap_a30} illustrates the difference in growth at low core masses. One feature of particular note in this figure is how the range of particle sizes available for accretion is restricted both at low core masses ($M \lesssim 10^{-3}-10^{-2} M_\oplus$ in the figure) and as the strength of turbulence increases.
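As an aside, the Hill timescale of Equations \eqref{eq:t_min} and \eqref{eq:t_min_fid} is straightforward to verify numerically. A minimal Python sketch (cgs units), assuming the fiducial surface density profile:
\begin{verbatim}
import numpy as np

G, Msun, Mearth = 6.674e-8, 1.989e33, 5.972e27
AU, yr = 1.496e13, 3.156e7

def t_hill_years(a_AU, M_earth, f_s=0.01):
    a, M = a_AU * AU, M_earth * Mearth
    Om = np.sqrt(G * Msun / a**3)
    R_H = a * (M / (3.0 * Msun))**(1.0 / 3.0)
    Sigma = 500.0 / a_AU               # fiducial gas surface density
    return M / (2.0 * f_s * Sigma * R_H**2 * Om) / yr   # Eq. (t_min)

print(t_hill_years(30.0, 5.0))   # ~4e4 yr, matching Eq. (t_min_fid)
\end{verbatim}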
The limited range of sizes where pebble accretion can operate is often neglected in other works on pebble accretion, e.g. \cite{lj12}. While some works, such as \cite{OK10}, discuss an upper limit on particle size that agrees with our work in the laminar regime, this upper limit shrinks rapidly as the strength of turbulence is increased, as can be seen in the figure. For a more in-depth comparison of these models, see R18. In all four panels, we see that, for low core masses, it is primarily small particles with long accretion timescales that are available for growth. It is only when the core reaches a sufficiently ``large'' mass\footnote{The mass scale where this change occurs is well approximated by $v_H/v_{\rm{gas}} = 48^{-1/3}$; see R18.} that the features discussed in the previous section emerge and cores are able to rapidly accrete pebbles. Thus, there is in some sense a ``minimum'' mass, above which pebble accretion becomes efficient and proceeds on timescales less than the lifetime of the disk. This trend is more pronounced as the strength of turbulence is increased, in the sense that the mass required for the growth timescale to drop below the disk lifetime increases rapidly as $\alpha$ increases. We note here that in the ``weak'' turbulence regime (top panels of Figure \ref{fig:heatmap_a30}), there is a feature where the cores can accrete a limited range of sizes (with $r_s \sim 1 \, \rm{m}$) on short timescales. This is caused by the fact that the heaviest particles can actually have low kinetic energies relative to the core, since they drift at speeds close to the Keplerian velocity. This effect is eroded by the presence of turbulence, since turbulence excites the random velocity of even the largest particles. In what follows, this effect is unimportant, since our choice of size distribution means that such large particles are not present (see Section \ref{int}), though it is an interesting area for future inquiry. The difficulty in accreting larger pebbles at low core mass is due to the weaker gravitational influence of the core. Gravitational perturbations from the core on the incoming pebbles can greatly increase the drag force on the small body during the encounter, since they increase the velocity of the particle relative to the local gas flow. The strength of this perturbation increases with increasing core mass. Thus, as core mass decreases, the work done by gas drag is reduced, limiting the range of small-body sizes that can be accreted. Furthermore, the size of $R_{\rm{acc}}$ also decreases with core mass, which means incoming particles have a smaller distance over which they can dissipate their kinetic energy relative to the core. Increasing the strength of the turbulence in the disk amplifies the difficulty in accreting particles, as incoming pebbles now have substantially higher kinetic energies. Not only can a smaller range of particles be accreted at low core mass, but the smaller particle sizes that are available for growth also have long growth timescales. One reason for this is that smaller particles can be more easily pulled off the core by gas drag, meaning that their maximum impact parameter for accretion is $R_{\rm{acc}} = R_{WS}^\prime$, which can be quite small in comparison to $R_H$. Furthermore, these particles are more easily excited vertically by the turbulent gas velocity (see Equation \ref{eq:h_p}). Thus, smaller particles have larger scale heights, reducing their number density and further slowing growth.
Thus, at lower core masses, we expect growth via pebble accretion in high turbulence to be quite slow, as the only particles that the smaller cores can accrete have large growth timescales. These timescales can be several orders of magnitude slower than the growth timescale at $M_{\rm{crit}}$, meaning that growth at low core masses can often be the time-limiting step in gas giant formation via gas-assisted growth. \subsection{Size Distribution of Small Bodies} \label{int} Because the growth timescale is generally much slower for smaller particle sizes than it is for larger ones, the timescale for growth will also be dependent on the size distribution of small bodies that are available for accretion. Thus, in order to facilitate a more quantitative discussion of where growth of gas giants is possible in our model, we will integrate quantities of interest over an assumed size distribution, a process we now discuss in more detail. If the size distribution of small bodies is specified, i.e. if we know $d N/dr_s$, we can integrate the accretion rate of the large body over small body radius and obtain a total accretion rate. This integrated timescale is sensitive to the actual form of size distribution employed; thus, while integrating over size distribution can be quite illustrative, the results are less general. For our purposes, we employ the power-law distribution from \cite{size_dist}, who calculated the steady-state size distribution from a collisional cascade. This gives a distribution of sizes such that $dN/dr_s \propto r_s^{-3.5}$. For a power-law size distribution $dN/dr_s \propto r_s^{-q}$, most of the mass is in the largest particle sizes for $q<4$. Thus, our results are insensitive to the lower cutoff radius but highly dependent on the upper radius. This is the most important feature of the size distribution we employ: for any size distribution with most of the mass in the largest particle radii, the qualitative picture discussed below is unchanged, though the quantitative results will change by order unity factors. Unless otherwise stated, we use an upper radius such that the largest Stokes number present is $St_{\rm{max}} = 10^{-1}$, and a lower radius such that the smallest bodies present correspond to $St_{\rm{min}} = 10^{-4}$. This constant Stokes number upper limit is most appropriate for the case when particle growth is limited by collisions, and the relative velocity is dominated by laminar drift. The value of $St_{\rm{max}} = 10^{-1}$ comes from \cite{blum_wurm_coll}, who gave $r_s = 10$ cm as the size past which collisions become destructive for a particular set of disk parameters. This radius corresponds to $St \sim 10^{-1}$ for the disk they considered. If bodies are held together mainly by chemical forces, then the relative velocity between particles is the main determinant of the outcome of a collision. This relative velocity in turn depends on the particle Stokes number and the amplitude of the gas velocity. Because the laminar drift velocity $\eta v_k$ is approximately constant throughout the disk, if laminar drift sets the collision velocity, the particle Stokes number is the only parameter relevant to determining when collisions become destructive.
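Because $q = 3.5 < 4$, the cumulative mass fraction below a given size follows immediately from the distribution; a short sketch verifying that it reduces to $\sqrt{r_s/r_{s,\rm{max}}}$ when the lower cutoff is negligible:
\begin{verbatim}
import numpy as np

def mass_fraction(r, r_min, r_max, q=3.5):
    # Fraction of solid mass in particles with radius < r for
    # dN/dr_s ~ r_s^-q; for q < 4 the mass integral (~ r^3 dN)
    # is dominated by the upper size limit.
    p = 4.0 - q
    return (r**p - r_min**p) / (r_max**p - r_min**p)

# For q = 3.5 and r_min << r_max this reduces to sqrt(r / r_max):
print(mass_fraction(1.0, 1e-4, 100.0))   # ~0.1
print(np.sqrt(1.0 / 100.0))              # 0.1
\end{verbatim}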
This simple description of the size distribution neglects the effects of increased turbulence, which would increase the particle-particle relative velocities during a collision, in turn lowering the critical Stokes number for destructive collisions. This also neglects the importance of radial drift in the outer regions of protoplanetary disks, which can proceed on shorter timescales than particle-particle collisions. In general, the size distribution of pebbles in disks is more complex than the simple prescription given here. We use this as our fiducial size distribution in order to reduce the number of input parameters our results depend on while still describing the general features of gas-assisted growth. \subsection{Integrated Growth Timescales} In this section, we discuss how integrated growth timescales change as the core grows. We also discuss how we can analytically calculate the integrated growth timescale, which is used later on to calculate analytic expressions for both the minimum mass for pebble accretion to be rapid and the semi-major axes where gas giant growth can occur. An example of the results from integrating over small-body size is shown in Figure \ref{fig:t_vs_m}, which plots the integrated growth timescale at $a = 20 \, \text{AU}$ as a function of core mass for several different levels of turbulence. An estimate for the $e$-folding time for the dissipation of the gaseous component of the disk, $\tau_{\rm{disk}} \approx 2.5 \, \text{Myr}$, is also shown. The disk dissipation timescale $\tau_{\rm{disk}}$ represents an approximate cutoff for gas giant formation; cores that are unable to reach the critical core mass within $\tau_{\rm{disk}}$ will not be able to trigger runaway accretion before the gas is substantially depleted. At low core masses, the growth timescale drops quickly as the core mass increases. This is due to the fact that these larger cores can accrete more massive pebbles. For the lower-mass cores, the largest pebbles that the core can accrete are smaller than the maximal size of the particles present. Therefore, as the core grows, it can accrete a larger fraction of the available solids, increasing the growth rate. If the growth timescale has only a simple power-law dependence on $r_s$ for the whole range of sizes, we can explicitly integrate the growth timescale over size and calculate an analytic expression for $t_{\rm{grow}}$. This requires that none of the parameters that go into calculating $t_{\rm{grow}}$ change regimes over the range of sizes considered: for example, if $R_{\rm{acc}} = R_H$ for the largest sizes present but $R_{\rm{acc}} = R_{WS}^\prime$ for the smaller sizes, our integrand is now a piecewise function of $r_s$, and a simple analytic solution is no longer possible. In practice, if we make the approximation that the regimes that apply for the maximal particle size present hold throughout the integral, the resultant errors are generally small. In what follows, we will be particularly interested in the mass at which the growth timescale becomes shorter than the lifetime of the gas disk, since subsequent growth will proceed on even shorter timescales. From Figure \ref{fig:t_vs_m}, we can see that, at the point where $t_{\rm{grow}}$ becomes shorter than $\tau_{\rm{disk}}$, cores are small enough that $R_{\rm{acc}} = R_{WS}^\prime$ (i.e. 
the core's WISH radius is smaller than its Hill radius for all of the small-body sizes it accretes)\footnote{As discussed in the appendix, $R_{WS}^\prime$ is really the smaller of two radii: $R_{WS}^\prime = \min\left(R_{WS},R_{\rm{shear}}\right)$. In making our analytic approximations, we assume that the cores we are concerned with are low enough mass that $R_{WS}<R_{\rm{shear}}$. This assumption can be shown to be generally valid by comparing the analytic approximations we derive to our numerical results.}. We also assume that $R_{WS}^\prime$ is small enough that the core accretes in 3D, i.e. $R_{WS}^\prime < H_p$. Additionally, for the small-body radii the core is accreting, $v_\infty = v_{pk} \approx v_{\rm{gas}}$ (i.e. the small body's random velocity dominates over shear, and these particles are well coupled to the gas). Due to the wide orbital separations and small particle sizes we are interested in, the particles are expected to be in the Epstein drag regime. Using these values throughout the integration over size allows us to compute $t_{\rm{grow}}$ analytically, and comparison of the resultant analytic expressions with the numerical calculations presented below shows that these approximations are robust. Finally, when calculating the work done on the particle, we set $v_{\rm{enc}} = v_{\rm{kick}} = G M / \left( R_{\rm{acc}} v_\infty \right)$ (see the appendix for more details). It can be shown that particles with the Stokes number given by Equation \eqref{eq:st_limit} are in this regime. Using the considerations above, we can now calculate closed-form expressions for $t_{\rm{grow}}$. To begin, we determine the largest size of particle these low-mass cores can accrete. Using Equations \eqref{eq:t_s}, \eqref{eq:ke}, and \eqref{eq:work} and the values of parameters discussed in the preceding paragraph, we see that the maximal size of particle the core can accrete is given by\footnote{We note that this is similar to the barrier between the ``Hyperbolic'' and ``Full Settling'' regimes identified by \cite{OK10}, except that our value for $v_{\rm{gas}}$ includes a contribution from the turbulent gas velocity, which dominates for $\alpha > \eta$.} \begin{align} \label{eq:st_limit} St_\ell = 12 \frac{v_H^3}{v_{\rm{gas}}^3} \; . \end{align} Because our size distribution is dominated by the largest particle sizes, we can use the Stokes number limit from Equation \eqref{eq:st_limit} to determine the growth rate. If we neglect the lower bounds on integrations over particle size, it is straightforward to demonstrate that the growth rate is, to order of magnitude, given by the product of the growth rate for the largest sizes of particles the core can accrete, $\dot{M}(St_\ell)$, and the fraction of the surface density contained in solids up to size $St_\ell$, $f(St_\ell)$: \begin{align} \dot{M} \sim \dot{M}(St_\ell) f(St_\ell) \; . \end{align} Plugging our assumed values for the parameters into Equation \eqref{eq:t_grow} for $t_{\rm{grow}}$, we see that in this regime, \begin{align} t_{\rm{grow}} = \frac{H_p}{2 \Sigma_p G t_s} \; . \end{align} Thus, there are two possible growth regimes, depending on whether $H_p = H_{KH}$ or $H_p = H_t$. For $St < 1$, we have $H_t > H_{KH}$ (i.e. $H_p = H_t$) for $\alpha > 2 \eta St$. This limit on $St$ divides our analytic expressions into two piecewise regimes.
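The limiting Stokes number of Equation \eqref{eq:st_limit} is set entirely by the ratio $v_H/v_{\rm{gas}}$. A minimal sketch, taking $v_{\rm{gas}} \approx \sqrt{\alpha} c_s$ (appropriate when the turbulent contribution dominates, $\alpha > \eta$):
\begin{verbatim}
import numpy as np

G, kB = 6.674e-8, 1.381e-16
Msun, Mearth, AU, mu = 1.989e33, 5.972e27, 1.496e13, 3.93e-24

def st_limit(a_AU, M_earth, alpha):
    a, M = a_AU * AU, M_earth * Mearth
    Om = np.sqrt(G * Msun / a**3)
    v_H = a * Om * (M / (3.0 * Msun))**(1.0 / 3.0)  # Hill velocity
    c_s = np.sqrt(kB * 200.0 * a_AU**(-3.0 / 7.0) / mu)
    v_gas = np.sqrt(alpha) * c_s    # turbulence-dominated (assumed)
    return 12.0 * (v_H / v_gas)**3  # Eq. (st_limit)
\end{verbatim}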
Explicitly performing the integration over size, the growth timescale is given by \begin{align} \label{eq:frac_an} t_{\rm{grow}} \approx \begin{dcases} 9 \times 10^{7} \, \text{years} \, \left( \frac{M}{10^{-5} M_\oplus} \right)^{-2} \times \\ \quad \left( \frac{a}{30 \, \text{AU}} \right)^{5/2} St_{\rm{max}}^{1/2} \left( \frac{\alpha}{10^{-3}} \right)^{7/2} ,& \alpha > 2 \eta St_\ell \\ \\ 3 \times 10^{7} \, \text{years} \, \left( \frac{M}{10^{-5} M_\oplus} \right)^{-3/2} \times \\ \quad \left( \frac{a}{30 \, \text{AU}} \right)^{51/14} St_{\rm{max}}^{1/2}, & \alpha < 2 \eta St_\ell \end{dcases} \end{align} Eventually, the core becomes massive enough that it can accrete all sizes of particles available, i.e. $St_{\ell} > St_{\rm{max}}$. This causes the growth timescale to become independent of $M$, since $\dot{M} \propto R_{WS}^{\prime 2} \propto M$. In this regime, the growth timescale is given by \begin{align} \label{eq:full_an} t_{\rm{grow}} \approx \begin{dcases} 7 \times 10^{3} \, \text{years} \, \left( \frac{a}{30 \, \text{AU}} \right)^{11/14} \times \\ \quad St_{\rm{max}}^{-3/2} \left( \frac{\alpha}{10^{-3}} \right)^{1/2}, & \alpha > 2 \eta St_{\rm{max}}\\ \\ 1 \times 10^{4} \, \text{years} \, \left( \frac{a}{30 \, \text{AU}} \right)^{15/14} St_{\rm{max}}^{-1}, &\alpha < 2 \eta St_{\rm{max}} \end{dcases} \end{align} Thus, the low-mass growth of the core, where the growth timescale decreases for increasing core mass, is the time-limiting step in gas-assisted growth. As can be seen in Figure \ref{fig:t_vs_m} and verified by the analytic expressions above, once the core reaches a mass such that $t_{\rm{grow}} < \tau_{\rm{disk}}$, all subsequent growth should proceed on timescales that are faster than the disk lifetime. Therefore, the early stages of core growth, where gravitational focusing of planetesimals may be faster than gas-assisted growth, will play a key role in whether a planet can grow to be a gas giant. \begin{figure} [h] \centering \includegraphics[trim={5cm 0 0 0},clip,width=1.15\linewidth]{rev_TvsM_edit} \caption{Integrated growth timescale for a core at $a = 20 \, \rm{AU}$ as a function of core mass. The growth timescale is integrated over sizes using a Dohnanyi distribution with a maximum size corresponding to $St=10^{-1}$, as discussed in Section \ref{int}. The approximate $e$-folding time of the gaseous component of the disk, $\tau_{\rm{disk}}$, is marked as a dashed horizontal line. As the core grows, it can accrete a larger fraction of the available small-body sizes, causing the growth timescale to drop rapidly. Eventually, the core's mass becomes large enough that it can accrete all available particle sizes, causing it to enter a regime where the growth timescale is independent of $M$. We also note that once the core becomes massive enough that the growth timescale drops below $\tau_{\rm{disk}}$, subsequent growth at higher core masses proceeds on timescales well below the disk lifetime.} \label{fig:t_vs_m} \end{figure} \section{Restrictions on the Growth of Gas Giants} \label{gas_giants} The results of the previous sections imply that, to understand under what conditions gas giant formation is possible via pebble accretion, we must examine lower-mass cores, for which the gas-assisted growth timescale can be quite long. If these cores were to grow by gas-assisted growth alone, then growth would always stall at core masses small enough that turbulence dominates over the core's gravity.
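Before turning to planetesimal accretion, we note that the piecewise expressions in Equations \eqref{eq:frac_an} and \eqref{eq:full_an} transcribe directly into code. The sketch below is a hedged transcription of these fiducial scalings, with $\eta$ and $St_\ell$ evaluated as above; it reproduces the scalings, not the full numerical integration:
\begin{verbatim}
import numpy as np

G, kB = 6.674e-8, 1.381e-16
Msun, Mearth, AU, mu = 1.989e33, 5.972e27, 1.496e13, 3.93e-24

def t_grow_years(a_AU, M_earth, alpha, St_max=0.1):
    a = a_AU * AU
    c_s2 = kB * 200.0 * a_AU**(-3.0 / 7.0) / mu
    eta = c_s2 / (2.0 * G * Msun / a)
    v_H = (a * np.sqrt(G * Msun / a**3)
           * (M_earth * Mearth / (3.0 * Msun))**(1.0 / 3.0))
    St_l = 12.0 * (v_H / np.sqrt(alpha * c_s2))**3   # Eq. (st_limit)
    if St_l < St_max:                                # Eq. (frac_an)
        if alpha > 2.0 * eta * St_l:
            return (9e7 * (M_earth / 1e-5)**(-2.0) * (a_AU / 30.0)**2.5
                    * St_max**0.5 * (alpha / 1e-3)**3.5)
        return (3e7 * (M_earth / 1e-5)**(-1.5)
                * (a_AU / 30.0)**(51.0 / 14.0) * St_max**0.5)
    if alpha > 2.0 * eta * St_max:                   # Eq. (full_an)
        return (7e3 * (a_AU / 30.0)**(11.0 / 14.0)
                * St_max**(-1.5) * (alpha / 1e-3)**0.5)
    return 1e4 * (a_AU / 30.0)**(15.0 / 14.0) / St_max
\end{verbatim}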
For low core masses, however, planetesimal accretion can be quite rapid. Therefore, the final fate of a protoplanet depends on whether canonical core accretion can provide sufficiently rapid growth at small core masses such that the core can reach a size where pebble accretion becomes efficient, which will in turn allow the core to grow rapidly to the critical core mass needed for runaway growth. \subsection{Planetesimal Accretion Timescale } In order to calculate the semi-major axis where gas giants form, we consider early growth by planetesimal accretion and subsequent growth by pebble accretion. This requires us to calculate the timescale for growth by planetesimal accretion for a given core mass. In general, the scale height of particles is given by $H_p = v_z/\Omega$, where $v_z$ is the vertical component of the small body's velocity. As stated previously, the fastest growth possible via planetesimal accretion (without some external damping mechanism) occurs when the planetesimal velocity dispersion is equal to the Hill velocity. We use this regime for our fiducial value of the growth timescale via planetesimal accretion. If we take $v_z \sim v_\infty =v_H$, and use Equations \eqref{eq:m_dot} and \eqref{eq:r_focus}, then the growth rate of the core is proportional to \begin{align} \dot{M} \propto R_H R \, \Sigma_{\rm{pla}} \Omega \; , \end{align} where $\Sigma_{\rm{pla}}$ is the surface density of planetesimals. The prefactor in the above equation is not well constrained by analytic considerations; in a more detailed treatment, it should be taken from $N$-body simulations of the interactions between the planetesimals. For our purposes, we take the prefactor from \cite{jl_17}; this gives \begin{align} \dot{M} = 6 \pi R_H R \, \Sigma_{\rm{pla}} \Omega \; . \end{align} For our fiducial value of the growth timescale, we set $\Sigma_{\rm{pla}} = \Sigma_p$, which gives a timescale of \begin{align} \label{eqn:t_GF_fid} t_{\rm{pla}} \approx 2 \times 10^{7} \, \text{years} \left( \frac{a}{30 \, \text{AU}} \right)^{3/2} \left( \frac{M}{5 \, M_\oplus} \right)^{1/3}. \end{align} Solving for the mass where $t_{\rm{pla}}=\tau_{\rm{disk}}$ gives an expression for the maximum mass a planet can reach via planetesimal accretion, \begin{align} M_{\rm{pla}} = 8 \times 10^{-3} M_\oplus \fid{a}{30 \, \rm{AU}}{-9/2} \fid{\Sigma_{\rm{pla},0}}{5 \, \rm{g} \, \rm{cm}^{-2}}{3} \; , \end{align} where $\Sigma_{\rm{pla},0}$ is the prefactor of the planetesimal surface density profile, i.e. $\Sigma_{\rm{pla}} = \Sigma_{\rm{pla},0} \left(a/\rm{AU}\right)^{-1}$. Our choice of $\Sigma_{\rm{pla}} = \Sigma_p$ gives reasonable values of the masses planets can reach within the gas disk lifetime at the semi-major axes of the solar system gas giants (see Figure \ref{fig:m_min}). Some of the effects of varying the surface density of planetesimals are discussed in Section \ref{upper_lim}. \subsection{Upper Limits on the Semi-Major axis of Gas Giant Growth} \label{upper_lim} In order to place constraints on the semi-major axis at which gas giant growth is possible, we begin by determining the minimal mass below which pebble accretion is too slow to grow a core within $\tau_{\rm{disk}}$. To do this, we make the approximation that once the core becomes massive enough that $t_{\rm{grow}}<\tau_{\rm{disk}}$, the growth timescale of the core will remain below $\tau_{\rm{disk}}$ as the core continues to grow to $M_{\rm{crit}}$.
Thus, once the core becomes massive enough, its subsequent growth time is small compared to the disk lifetime. As can be seen in Figure \ref{fig:t_vs_m}, and from Equations \eqref{eq:frac_an} and \eqref{eq:full_an}, this approximation is quite robust. An exploration of $t_{\rm{grow}}$ vs. $M$ over a large amount of parameter space shows that this is generally true throughout the disk. We note, however, that if the sizes of the available particles are not set by Stokes number but rather by absolute particle size, there can exist regions of the disk where this approximation breaks down, as the particle sizes where growth is efficient are essentially determined by Stokes number. We consider this possibility in more detail below. Because the growth timescale is dominated by growth at low core masses, we can determine an approximate minimum mass for gas giant growth through pebble accretion by solving for the mass at which $t_{\rm{grow}}=\tau_{\rm{disk}}$. This is the mass below which growth will stall, and the core will be unable to grow to $M_{\rm{crit}}$ within $\tau_{\rm{disk}}$. This idea is shown graphically in Figure \ref{fig:m_min_ex}. \begin{figure} [h] \centering \includegraphics[width=\linewidth]{rev_min_mass_3} \caption{Graphical illustration of how gas giant core growth is limited for different semi-major axes. The monotonically increasing line (black) shows the \textit{minimum} mass needed for gas-assisted growth to produce a gas giant core; for masses higher than the plotted mass, the growth timescale for the core is less than the disk lifetime. The monotonically decreasing line (blue) shows the \textit{maximum} mass it is possible to achieve via planetesimal accretion. Values lower than the indicated mass can be reached within the disk lifetime, but for larger masses the disk will dissipate before the mass is reached. The vertical line denotes the semi-major axis upper limit on where growth of gas giant cores can occur; interior to this region, planetesimal accretion can build a massive enough core rapidly enough that pebble accretion becomes efficient and dominates growth at higher masses. The green shaded region indicates where growth of gas giants is ruled out, as both planetesimal accretion and pebble accretion are too slow.} \label{fig:m_min_ex} \end{figure} The mass where growth stalls as a function of semi-major axis, calculated numerically using our full expressions, is shown in Figure \ref{fig:m_min}. Again, the effects of turbulence on growth rate are clearly visible in the figure: in the laminar case, even extremely wide orbital separation cores can grow on timescales shorter than $\tau_{\rm{disk}}$ down to very low core masses. At high turbulence ($\alpha \gtrsim 10^{-2}$), however, the core needs to reach masses $\gtrsim 10^{-3} M_\oplus$ before pebble accretion becomes fast enough for these cores to reach $M_{\rm{crit}}$ within the disk lifetime. Also shown in the plot is the \textit{maximum} mass that a core can grow to using gravitational focusing of planetesimals. We emphasize here that the interpretation of this line is the opposite of the gas-assisted growth values; for gravitational focusing, all values \textit{lower} than the given mass are approximately obtainable within the disk lifetime. \begin{figure} [h] \centering \includegraphics[width=\linewidth]{rev3_Mvsa} \caption{Minimum mass for which the growth timescale is shorter than $\tau_{\rm{disk}} = 2.5 \, \text{Myr}$, shown for various values of $\alpha$.
Masses smaller than the values depicted have growth timescales larger than $\tau_{\rm{disk}}$, so if the core can exceed this mass by other means, it should be able to reach $M_{\rm{crit}}$, but growth by pebble accretion will be unable to exceed this mass within $\tau_{\rm{disk}}$. The corresponding mass for growth by planetesimal accretion is also shown, but this mass has a different interpretation: it is the \textit{largest} mass a core can grow to via gravitational focusing within $\tau_{\rm{disk}}$.} \label{fig:m_min} \end{figure} We can also obtain an analytic approximation for $M_{\rm{peb}}$, the mass where the pebble accretion timescale drops below the disk lifetime. Setting Equation \eqref{eq:frac_an} equal to $\tau_{\rm{disk}}$ gives \begin{align} \label{eq:m_peb} M_{\rm{peb}} = \begin{dcases} 6 \times 10^{-5} \, M_\oplus \left( \frac{a}{30 \, \text{AU}} \right)^{5/4} \left( \frac{\alpha}{10^{-3}} \right)^{7/4} \times\\ \quad \left( \frac{\tau_{\rm{disk}}}{2.5 \, \text{Myr}} \right)^{-1/2} St_{\rm{max}}^{1/4}, \quad \alpha > 2 \eta St_{\rm{max}}\\ 5.6 \times 10^{-5} \, M_\oplus \left( \frac{a}{30 \, \text{AU}} \right)^{17/7} \times\\ \quad \left( \frac{\tau_{\rm{disk}}}{2.5 \, \text{Myr}} \right)^{-2/3} St_{\rm{max}}^{1/3}, \quad \alpha < 2 \eta St_{\rm{max}} \end{dcases} \end{align} which demonstrates analytically the strong dependence that the efficiency of pebble accretion has on both semi-major axis and strength of turbulence. Figure \ref{fig:m_min_ex} shows how we can use Figure \ref{fig:m_min} to determine where the interplay between canonical core accretion and gas-assisted growth will allow a gas giant to grow. The intersection between the pebble accretion and planetesimal accretion values represents the approximate semi-major axis upper limit on gas giant growth. For values higher than this semi-major axis, planetesimal accretion is too slow to bring the core to the minimum mass needed such that gas-assisted growth can subsequently grow the core to $M_{\rm{crit}}$ within the disk lifetime. For values smaller than this semi-major axis, however, planetesimal accretion can grow the core to a sufficiently massive size rapidly enough that gas-assisted growth can take over. This semi-major axis also represents an upper limit on where a core can form, as for smaller orbital separations, the growth timescale decreases (see Equations \ref{eq:frac_an} and \ref{eq:full_an}). This is not the case if the size distribution is determined by particle radius instead of Stokes number, as we discuss in Section \ref{fix_rs}. We also note that if a core larger than the pebble accretion mass were present past this semi-major axis limit (e.g. if it were scattered outward), then the core could grow sufficiently rapidly to trigger gas giant formation. Figure \ref{fig:a_max_alph} plots the maximum distance obtained by solving numerically for the semi-major axis at which $M_{\rm{pla}} = M_{\rm{peb}}$ using our full expressions. In order to illustrate the effect of changing the upper limit on the size distribution, two different size distributions are shown -- one in which the maximum Stokes number is $St_{\rm{max}} = 0.1$, and one in which $St_{\rm{max}} = 1$. From the plot, it is clear that as turbulence increases, the semi-major axis at which gas giant growth is possible drops substantially. Growth is also slightly more inhibited for the $St=1$ distribution; this is due to the fact that cores need to reach higher masses in order to accrete $St=1$ particles as opposed to $St=0.1$ particles.
However, because $St=1$ particles accrete more rapidly, this effect is attenuated, causing the overall dependence on $St_{\rm{max}}$ to be rather weak. Using Equation \eqref{eq:m_peb}, we can derive analytic approximations to the curve shown in Figure \ref{fig:a_max_alph}. Setting $M_{\rm{peb}} = M_{\rm{pla}}$, we obtain \begin{align} \label{eq:a_max} a_{\rm{upper}} \approx \begin{dcases} 70 \, \text{AU} \left( \frac{\alpha}{10^{-3}} \right)^{-7/23} \left( \frac{\Sigma_{\rm{pla}}}{5 \, \text{g} \, \text{cm}^{-2}} \right)^{12/23} \times \\ \left( \frac{\tau_{\rm{disk}}}{2.5 \, \text{Myr}} \right)^{14/23} St_{\rm{max}}^{-1/23}, \quad \quad \, \, \, \alpha > 2 \eta St_{\rm{max}}\\ 60 \, \text{AU} \left( \frac{\Sigma_{\rm{pla}}}{5 \, \text{g} \, \text{cm}^{-2}} \right)^{42/97} \times \\ \left( \frac{\tau_{\rm{disk}}}{2.5 \, \text{Myr}} \right)^{154/291} St_{\rm{max}}^{-14/291}, \quad \alpha < 2 \eta St_{\rm{max}} \end{dcases} \end{align} \begin{figure} [h] \centering \includegraphics[width=\linewidth]{rev_avsalpha_st_max} \caption{Maximum semi-major axis at which growth to critical core mass is possible as a function of $\alpha$. Curves are shown for a Dohnanyi distribution with a maximum-sized particle corresponding to $St=10^{-1}$ (solid line) and $St=1$ (dashed line). } \label{fig:a_max_alph} \end{figure} These analytic expressions are overplotted on the numerical results in Figure \ref{fig:red_pla}. Curves for two different planetesimal surface densities, one where we use our fiducial value of $\Sigma_{\rm{pla}} = \Sigma_p$ and one where we have reduced the surface density by a factor of 2, are shown. As expected, our analytic results agree well with the full numerical calculation in the limits of small and large $\alpha$. Figure \ref{fig:red_pla} also demonstrates that reducing the planetesimal surface density can have a marked effect on the semi-major axis where gas giant growth is possible. \begin{figure} [h] \centering \includegraphics[width=1.0\linewidth]{rev_avsalpha_red_pla_3} \caption{A comparison of our analytic expression for the maximal semi-major axis where gas giant growth is possible (Equation \ref{eq:a_max}), with the numerical solution. Results are presented for two different planetesimal surface densities, $\Sigma_{\rm{pla}}=\Sigma_p$ and $\Sigma_{\rm{pla}}=\Sigma_p/2$. } \label{fig:red_pla} \end{figure} The considerations discussed above can provide a plausible mechanism by which the growth of gas giants is suppressed: higher values of turbulence inhibit core growth at lower masses and mean that rapid growth via pebble accretion can only proceed once higher masses are reached. Because planetesimal accretion is slow at these wide orbital separations, cores will stall in their growth at low mass and be unable to reach the high masses needed for gas giant growth to proceed. In our order-of-magnitude model, the actual values quoted are of less import than the scalings and overall behavior predicted by the model. Thus, while we would not expect the quoted limits (e.g. $a \lesssim 30 \, \text{AU}$ for gas giant growth at $\alpha \approx 10^{-2}$) to be precise, we would argue that pebble accretion, and therefore gas giant growth, should be generally inhibited for stronger turbulence. \subsection{Effect of Fixing Upper Particle Radius} \label{fix_rs} Thus far, we have fixed the upper limit of our size distribution in terms of particle Stokes number.
In contrast, disk models that are used to fit to observations of protoplanetary disks tend to use size distributions with fixed maximum particle radius instead of Stokes number. Size distributions fixed by particle radius can also emerge naturally if drift limits particle size as opposed to collisions. For example, \cite{pms_2017} derived an expression for the gas surface density determined by particle drift, which can be rewritten as an expression for particle radius (see their Equation 8): \begin{align} r_s = \frac{\Sigma a }{t_{\rm{disk}} \eta v_k \rho_s} \; , \end{align} where $t_{\rm{disk}}$ is the age of the disk. If $\Sigma \propto a^{-1}$, then the only semi-major axis dependence in the above equation comes from $\eta v_k$, which has extremely shallow radial dependence (e.g. $\eta v_k \propto a^{1/14}$ for the temperature profile we employ). Therefore, we also present results that use a size distribution where the upper size limit is fixed by particle radius. We follow the disk models of \cite{andrews_09}, who used a Dohnanyi ($dN/dr_s \propto r_s^{-3.5}$) distribution, with $r_{s,\rm{min}} = 0.005 \, \mu \text{m}$ and $r_{s,\rm{max}} = 1 \, \text{mm}$. This 1 mm maximum size is consistent with fitting of disk spectral energy distributions (\citealt{dal_disk_models}). A plot of the numerical solution for the semi-major axes where gas giant growth is possible for a distribution with fixed size limits is shown in Figure \ref{fig:a_max_alph_1mm}. The blue region indicates where growth is possible. As can be seen from the figure, using $r_s = 1\,\text{mm}$ as an upper size limit throughout the disk has pronounced effects on the semi-major axes available for gas giant growth. For $\alpha \gtrsim 10^{-4}$, the region where gas giants can grow shrinks rapidly, causing a complete cutoff in gas giant growth for $\alpha \gtrsim 10^{-3}$. In this regime, we therefore expect turbulence to completely inhibit gas giant growth, instead of restricting growth to smaller values of semi-major axis. \begin{figure} [h] \centering \includegraphics[width=\linewidth]{rev_avsalpha_rs_max} \caption{The blue region shows where growth of gas giant cores is possible for a size distribution with maximal pebble size of $r_s = 1\, \text{mm}$, plotted as a function of the strength of turbulence. In contrast to the size distributions which used a fixed Stokes number as the upper limit, this distribution has a lower limit on where core growth can occur as well as an upper limit. The lower limit is the maximum of a fixed semi-major axis limit and an $\alpha$-dependent limit; an analytic expression for the latter (cf. Equation \ref{eq:a_low}) is also shown (black dashed line).} \label{fig:a_max_alph_1mm} \end{figure} Using an upper limit fixed by particle size leads to a lower limit on semi-major axis in addition to an upper limit. This lower limit stems from the fact that fixing a maximum particle radius means that even the largest sizes of particles present may have low Stokes numbers, causing them to be accreted inefficiently or not accreted at all. This introduces two additional processes that we need to consider when calculating where gas giants can form, one that gives a fixed semi-major axis limit independent of $\alpha$, and another that gives a lower limit on $a$ for a given $\alpha$.
Firstly, cores accreting the maximum size of particle may not be able to grow to $M_{\rm{crit}}$ and trigger runaway gas accretion, as the Bondi radius may grow larger than $R_{WS}^\prime$ for the maximal particle size (and therefore for all smaller values of $r_s$) before $M = M_{\rm{crit}}$. This means that all available particle sizes will be in the regime where they flow around the core's atmosphere without being accreted (see the right panel of Figure \ref{fig:rs_ex}), which will halt growth via pebble accretion. A core will have $R_{WS}^\prime = R_b$ when it reaches a mass of \begin{align} M_{R_{WS}^\prime=R_b} \approx 10 M_\oplus \left( \frac{a}{10 \, \text{AU}} \right)^{11/7} \left( \frac{r_s}{1 \, \text{mm}} \right) \; , \end{align} where we have assumed the particle is in the Epstein drag regime in converting from $t_s$ to $r_s$. Since the mass where this equality occurs is an increasing function of semi-major axis, these considerations imply that, close in to the star, the core may not be able to reach sufficient mass through pebble accretion to trigger runaway gas accretion. Using $M_{\rm{crit}} = 10 M_\oplus$ as a conservative upper limit for runaway accretion to occur requires $a \gtrsim 10 \, \text{AU}$ before cores can reach $M_{\rm{crit}}$. A second complication that can also serve to place a lower limit on semi-major axis is that, unlike for the fixed Stokes number size distribution, growth timescale can be a \textit{decreasing} function of semi-major axis when particle radii are instead fixed. In particular, the mass-independent growth timescales for accretion of the full range of sizes given by Equation \eqref{eq:full_an} can decrease as we move outwards in the disk. This stems from the fact that the Stokes number of $r_s =$ 1 mm particles will increase further out in the disk, and particles with higher values of $St$ (for $St < 1$) are generally accreted more rapidly. Thus, even if we find an upper limit on semi-major axis in the manner described above, we also have to check whether the growth timescale again becomes longer than the disk lifetime closer in to the central star. Because the low-$\alpha$ regime shown in Figure \ref{fig:a_max_alph_1mm} occurs for small values of semi-major axis, where the Stokes number of an $r_s=1\,\rm{mm}$ particle is quite low ($\sim 5\times10^{-4}$), the $\alpha > 2\eta St_{\rm{max}}$ regime of Equation \eqref{eq:full_an} applies everywhere when calculating our semi-major axis lower limit. We can therefore determine our lower limit on growth analytically by setting this timescale equal to $\tau_{\rm{disk}}$. Doing so yields \begin{align} \label{eq:a_low} a_{\rm{low}} \approx 58 \, \text{AU} \left( \frac{\alpha}{10^{-3}} \right)^{7/10} \left( \frac{r_{s,\text{max}}}{1 \, \text{mm} } \right)^{-21/10} \left( \frac{\tau_{\rm{disk}}}{2.5 \, \text{Myr}} \right)^{-7/5} \; , \end{align} which is plotted in Figure \ref{fig:a_max_alph_1mm} (black dashed line). These two processes are what yield the lower limit seen in Figure \ref{fig:a_max_alph_1mm}; regardless of the value of $\alpha$, core growth cannot proceed for $a \lesssim 10 \, \rm{AU}$, as the core will be isolated from accretion of all available pebble sizes before it can trigger runaway gas accretion. As $\alpha$ increases, the growth timescale for accretion of the full range of particle sizes may become longer than the disk lifetime close in to the central star, requiring the core to be at larger values of semi-major axes before gas giant growth is possible.
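For reference, the analytic limits in Equations \eqref{eq:a_max} and \eqref{eq:a_low} transcribe directly into code; the branch selection and fiducial scalings below simply follow the expressions above:
\begin{verbatim}
def a_upper_AU(alpha, Sigma_pla=5.0, tau_Myr=2.5, St_max=0.1,
               high_alpha=True):
    # Eq. (a_max); high_alpha selects the alpha > 2*eta*St_max branch
    if high_alpha:
        return (70.0 * (alpha / 1e-3)**(-7.0 / 23.0)
                * (Sigma_pla / 5.0)**(12.0 / 23.0)
                * (tau_Myr / 2.5)**(14.0 / 23.0)
                * St_max**(-1.0 / 23.0))
    return (60.0 * (Sigma_pla / 5.0)**(42.0 / 97.0)
            * (tau_Myr / 2.5)**(154.0 / 291.0)
            * St_max**(-14.0 / 291.0))

def a_low_AU(alpha, r_s_max_mm=1.0, tau_Myr=2.5):
    # Eq. (a_low); valid in the alpha > 2*eta*St_max regime
    return (58.0 * (alpha / 1e-3)**0.7 * r_s_max_mm**(-2.1)
            * (tau_Myr / 2.5)**(-1.4))
\end{verbatim}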
\section{Final Mass of Gas Giants} \label{final_mass} In this section, we consider the effect of $\alpha$ on the final mass that gas giants can reach and tie these considerations to our previous discussions on how turbulence affects the early stages of gas giant growth. Note that in this section, we take the $\alpha$ value to affect the viscosity of the disk, as opposed to merely using $\alpha$ to parameterize the RMS gas velocity, as was done in the previous sections. Once a core begins runaway gas accretion, the accretion rate for nebular gas is initially extremely rapid (e.g. \citealt{pollack_gas_giants}). If accretion proceeded unhindered at this rate, gas giants would easily be able to accrete all of the gas in their local feeding zones before the gas disk dissipated. However, the observed masses of gas giants are well below their local gas isolation mass; what, then, stops gas giants from growing? This is usually explained by appealing to gap opening by the growing planet: as the planet grows, it will gravitationally torque the local nebular gas, pushing it away. If gas is torqued away more rapidly than it is transported inward by viscosity, the planet can clear a gap in the disk, reducing the gas surface density near the growing planet. This reduction in surface density can starve the planet of material for growth and can eventually shut off growth entirely. If this process sets the final mass that gas giants can reach, then, in general, gas giants will be able to reach larger masses in disks that have higher viscosities. Thus, if we translate our $\alpha$ values into viscosities (as opposed to just parameterizations of the local turbulent gas velocity), then turbulence can play a role in \textit{both} whether a gas giant core can form and in the final mass of the planet. The physical processes that determine the final mass of gas giants remain an open question meriting further inquiry. In order to provide concrete numerical results, in what follows, we consider two possible criteria from the literature for determining the mass that gas giants reach, and we discuss the implications for the population of wide orbital separation gas giants when these criteria are coupled with growth via pebble accretion. Thus, while the expressions we use may not capture the final masses of gas giants, the results below will still hold qualitatively as long as disks with higher viscosity produce higher-mass gas giants. For our first criterion from the literature, we determine the width of the gap opened by the planet and shut off accretion when the gap has reached a certain size. The width of the gap opened can be obtained by equating the rate of angular momentum transport due to viscosity, $\dot{H}_\nu = 3 \pi \Sigma \nu a^2 \Omega$, with the rate that the planet delivers angular momentum to the disk (\citealt{lin_gap}): \begin{align} \dot{H}_T = f_g q^2 \Sigma a^4 \Omega^2 \left(\frac{a}{\Delta}\right)^3. \end{align} Here $q\equiv M/M_*$ is the planet-to-star mass ratio, $\Delta$ is the width of the gap opened, and $f_g$ is an order unity factor. Equating these two expressions gives the gap width in units of the Hill radius as \begin{align} \frac{\Delta}{R_H} = \left(\frac{f_g q}{\pi \alpha} \frac{a^2}{H_g^2}\right)^{1/3} \; . \end{align} From comparison with the results of numerical simulations of the growth of Jupiter by \cite{lhd_2009}, \cite{kratter_gas_giants} adopted $\Delta/R_H \sim 5$ as their criterion for starvation, which we adopt as well.
Scaled to fiducial parameters, \citeauthor{kratter_gas_giants} gave the starvation mass as \begin{align} \label{eq:m_starve} M_{\text{starve}} \approx 8 M_J \left( \frac{\alpha}{4 \times 10^{-4}} \right) \left( \frac{T}{40 \, \text{K}} \right) \left( \frac{a}{70 \, \text{AU}} \right) \left( \frac{\Delta}{5 R_H} \right)^3 \; . \end{align} Using this expression for the starvation mass, we can calculate the final mass of gas giant planets in our model as a function of the strength of turbulence in the disk. An example is shown in the upper panel of Figure \ref{fig:m_starve}. \begin{figure} [h] \centering \includegraphics[width=1.0\linewidth]{rev_m_star_lim_grey_2panel_liss} \caption{``Starvation'' mass, past which growth of a gas giant halts, plotted as a function of semi-major axis. \textit{Panel a):} In this panel, the mass is obtained by assuming that growth shuts off when the planet opens up a gap of width $\Delta = 5 R_H$ (see text for more details). Inside the gray region, the $\alpha$ value required to prevent gap opening before the core reaches the given mass is so large that a core will not be able to form within the lifetime of the gas disk (see Section \ref{upper_lim}). The shape of this region is determined using our upper limits on $\alpha$ taken from the $St_{\rm{max}} = 0.1$ curve in Figure \ref{fig:a_max_alph}. The labeled curves show maximum masses for constant values of $\alpha$. \textit{Panel b):} Here the starvation mass is determined numerically using fitting formulae to numerical results for the gas accretion rate from 3D hydrodynamical simulations by \cite{lhd_2009}. The starvation mass is determined by solving for the mass at which $M_{\rm{starve}}/\dot{M} = \tau_{\rm{disk}}$. The dashed lines again indicate the semi-major axes where turbulence prevents a gas giant core from forming via pebble accretion.} \label{fig:m_starve} \end{figure} The gray region shows limits on gas giant mass, which are obtained by using our values for the $St_{\rm{max}} = 0.1$ curve in Figure \ref{fig:a_max_alph}. For points inside the gray region, in order to grow a gas giant up to the given mass, the viscosity needs to be so large that early stages of growth are too slow for a core to reach $M_{\rm{crit}}$ within the lifetime of the gas disk. Said another way, for semi-major axes and masses inside the gray region, growth of gas giants is ruled out using the criteria described in Section \ref{upper_lim}. Also plotted in Figure \ref{fig:m_starve} are the upper mass limits for several constant $\alpha$ values. When these curves enter the gray region, growth of gas giants is ruled out in our model. For low levels of turbulence, growth of gas giants can proceed out to large semi-major axes, but the final masses of these planets are low. As turbulence increases, opening a gap in the disk becomes harder, allowing the gas giant planets to reach higher masses, but the semi-major axes at which growth can occur become more restricted. We stress that this is a general feature of gas giant growth via pebble accretion and, in particular, is independent of the criterion used to determine the final mass of gas giants. Because viscous torques oppose the torque from the growing planet, the final mass the gas giant reaches will increase with the viscosity in the disk. Thus, if growth of gas giants proceeds by pebble accretion, we expect disks with higher viscosities to host more massive gas giants at smaller orbital separations.
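Both the gap width and the starvation mass of Equation \eqref{eq:m_starve} are simple to evaluate. A minimal sketch, taking $f_g = 1$ for the order unity factor and the fiducial temperature profile (both assumptions of the sketch):
\begin{verbatim}
import numpy as np

G, kB = 6.674e-8, 1.381e-16
Msun, Mearth, AU, mu = 1.989e33, 5.972e27, 1.496e13, 3.93e-24

def gap_width_over_RH(M_earth, a_AU, alpha, f_g=1.0):
    # Gap width in Hill radii from torque balance
    a, q = a_AU * AU, M_earth * Mearth / Msun
    c_s2 = kB * 200.0 * a_AU**(-3.0 / 7.0) / mu
    H_g2 = c_s2 * a**3 / (G * Msun)     # (c_s / Omega)^2
    return (f_g * q * a**2 / (np.pi * alpha * H_g2))**(1.0 / 3.0)

def m_starve_MJ(a_AU, alpha, Delta_over_5RH=1.0):
    # Eq. (m_starve), in Jupiter masses
    T = 200.0 * a_AU**(-3.0 / 7.0)
    return (8.0 * (alpha / 4e-4) * (T / 40.0)
            * (a_AU / 70.0) * Delta_over_5RH**3)
\end{verbatim}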
While the torques from the growing planet can increase the width of the gap as the planet grows, material can still flow through the gap opened by the planet (e.g. \citealt{fsc_2014}). For example, \cite{msc_2014} showed that meridional circulation can still transport material from the top layer of the disk, which may imply that consideration of gap opening alone is insufficient to determine the final mass of gas giants. Thus, we present an alternate criterion for gap opening that takes into account gas accretion rates taken directly from the 3D hydrodynamical simulations performed by \cite{lhd_2009}. \citeauthor{lhd_2009} provided numerical results for the upper limit on the planet's gas accretion rate as a function of planetary mass for $\alpha = 4 \times 10^{-3}$ and $4 \times 10^{-4}$. Using these accretion rates, we can determine the mass past which $t_{\rm{grow}} = M/\dot{M} > \tau_{\rm{disk}}$. This scale represents another way of determining the starvation mass, since planets larger than this value will not grow substantially before the nebular gas dissipates. In practice, we can use the fitting formula given in Equation (2) of \cite{lhd_2009} for $\alpha = 4 \times 10^{-3}$ to determine this mass numerically as a function of semi-major axis. For $\alpha = 4 \times 10^{-4}$ the authors did not provide a closed-form expression; we instead interpolate between the values plotted in their Figure 2 to obtain $\dot{M}$ as a function of $M$. The result of this calculation is shown in the lower panel of Figure \ref{fig:m_starve}. As can be seen from the figure, this criterion leads to lower starvation masses in comparison with halting growth at a fixed value of gap width. As in the upper panel, the dashed sections of the lines indicate where the timescale to form the core exceeds the lifetime of the gas disk. Unlike the upper panel, however, it is not possible to indicate the overall region where this occurs, as \cite{lhd_2009} did not explicitly calculate $\dot{M}$ as a function of $\alpha$. These considerations could provide an explanation for the proposed correlation between stellar mass and gas giant frequency. While the dynamical changes due to increasing stellar mass have a relatively minor effect on core growth rates (R18), these higher-mass stars are expected to have substantially higher luminosities with moderately higher amounts of ionizing radiation (e.g., \citealt{pf_2005}, \citealt{wmw_2010}). Disks with higher ionization fractions should have higher levels of MHD turbulence (e.g. \citealt{a_2011}), leading to higher effective $\alpha$ values. Thus, from the above considerations, we should expect that higher-mass stars will yield higher-mass planets. Furthermore, if disk mass is correlated with stellar mass (\citealt{pth_2016}) and the final mass of gas giants is set by accretion rate (as opposed to gap width, where the disk surface density cancels out), then we would expect planets around more massive stars to accrete at higher rates and therefore reach higher masses before their growth timescale becomes lower than the disk lifetime. Thus, the gas giant planets found around more massive stars may represent the high-mass tail of a distribution of gas giants formed at large distances via gas-assisted growth, which have their final mass dictated by gap-opening criteria. If this is the case, then we would expect that there exists a population of gas giants around these stars as well that are simply lower mass than can be detected with the current generation of imaging instruments.
This is easily accomplished if these planets are $\lesssim 2 M_J$; see, e.g. \cite{bowler_DI_review}. \section{Summary/Conclusions} \label{summary} In this paper, we have used our previously discussed model of gas-assisted growth in a turbulent disk to study the problem of growth of gas giants at wide orbital separations. At these large distances, last doubling timescales for growth by planetesimal accretion are far longer than the disk dispersal timescale of the gas, making growth of gas giants by canonical core accretion extremely difficult. Gas-assisted growth allows cores to easily complete their last doubling time to critical core mass, even in strong turbulence. The maximal growth rate provided by pebble accretion, $t_{\rm{Hill}}$, is extremely rapid, even in the outer disk. For massive cores, even strong turbulence does not substantially inhibit growth. The same is not true for smaller core masses, however. Growth of gas giants at large distances can easily stall at smaller core masses. By integrating our growth rates over small-body size we obtained the minimum mass past which pebble accretion timescales drop below the lifetime of the gas disk, $M_{\rm{peb}}$. By assuming that the early stages of growth are set by gravitational focusing of planetesimals, we were able to translate these minimum masses into limits on the semi-major axes where gas giant growth is possible. We demonstrated that as the disk becomes more turbulent, the range of semi-major axes where gas giants can grow is sharply reduced. These effects may play a large role in the paucity of gas giants at wide orbital separations found by direct-imaging surveys; if disks are not quiescent enough, then pebble accretion may simply produce smaller planets that are unable to accrete sufficient mass in small bodies to go critical. In addition, our mass limits are relevant regardless of how early growth proceeds -- for example, if a body were scattered from the inner disk and it exceeded our minimum mass, it could grow to $M_{\rm{crit}}$ and trigger runaway gas accretion. We also presented approximate analytic expressions for both $M_{\rm{peb}}$ and the upper distance limit where gas giants can form, $a_{\rm{upper}}$. In addition to the strength of turbulence, we find that the available particle sizes and abundance of planetesimals are major factors in where gas giants can form. Gas-assisted growth is sensitive to the Stokes numbers of the pebbles, as opposed to their absolute size. Thus, if particles of the ``correct'' range of Stokes numbers are not available, then gas-assisted growth timescales can be quite slow. Furthermore, if planetesimals are not abundant enough, then the early stages of growth via planetesimal accretion can take too long for subsequent growth via pebble accretion to occur on rapid timescales. Finally, we examined the role that viscosity plays in determining the final mass that gas giants reach, in addition to setting where a critical mass core can form. We find that, regardless of the quantitative metric used to determine the final gas giant mass, higher-viscosity disks should feature higher-mass gas giants but at smaller orbital separations. More quantitatively, at the lower $\alpha$ values needed to produce gas giants out to $a \gtrsim 70$ AU, the gas giants formed will be too low-mass to have been observed by direct-imaging surveys. Thus, there may lurk a population of wide orbital separation gas giants that the current generation of imaging surveys has yet to detect.
Thus, if growth of gas giants at wide orbital separations proceeds by gas-assisted growth, and if gap opening sets the final masses of gas giants, we can make qualitative predictions about the observed population of gas giant planets. For a given stellar mass, we would expect that higher-mass gas giants should be observed closer in to the central star, as the disks with higher levels of turbulence will produce higher-mass gas giants at smaller orbital separations. We note that this conclusion may be altered if final planet masses depend on disk surface density, which is not the case for the gap-opening criterion we use. For different stellar masses, we expect higher-mass stars to exhibit higher levels of ionizing radiation. Thus, larger stars may have disks with higher $\alpha$ values and consequently host more massive gas giants. In addition, the larger disk masses exhibited by more massive stars could push the limits for growth past the distances given here for our fiducial surface density, allowing planets to form at larger distances in high-$\alpha$ disks. We suggest that this propensity to produce massive planets in high-turbulence disks may be the reason that most currently observed directly imaged gas giants have been found orbiting A stars. \vspace{1mm} \noindent The authors wish to thank Diana Powell, Renata Frelikh, and John McCann for useful discussions, and Eugene Chiang for his thoughtful suggestions on the manuscript. We also thank the anonymous referee for their helpful comments that improved the quality of the manuscript. MMR and RMC acknowledge support from NSF CAREER grant number AST-1555385.
\section{Introduction} \label{sec:introduction} Successful prediction of the heat loads on a spacecraft during atmospheric entry relies, among other things, on the completeness and accuracy of the model used to describe thermo-chemical nonequilibrium and transport phenomena in the flow~\cite{park90a}. Modeling of such effects in the continuum limit is usually done with hydrodynamic-scale Computational Fluid Dynamics~\cite{hirsch88a} (CFD) methods, which require chemical-kinetic databases for calculating the rate coefficients of internal energy excitation and molecular dissociation, as well as transport properties for modeling viscous and diffusion effects. On the other hand, kinetic-scale direct simulation Monte Carlo~\cite{bird94a} (DSMC) methods allow for accurate description of the flow encountered in regions with continuum breakdown and rely on cross section models to predict the outcome of elastic and inelastic collisions. With increasing computational power it is becoming commonplace to generate high-fidelity kinetic data free from empiricism through the methods of computational chemistry. This typically involves the generation of potential energy surfaces (PES) for the molecular systems in question and subsequent quasi-classical trajectory (QCT) calculations on these surfaces to obtain reaction cross sections and the related rate coefficients (e.g. for $\mathrm{N_2}$-$\mathrm{N}$~\cite{esposito99a, esposito06a, jaffe15a}, $\mathrm{N_2}$-$\mathrm{N_2}$~\cite{bender15a, macdonald18b}, $\mathrm{O_2}$-$\mathrm{O}$~\cite{esposito08a} and $\mathrm{N_2}$-$\mathrm{O_2}$~\cite{chaudhry18b}). Due to the vast number of internal energy transfer and elementary chemical processes that must be tracked for all mixture components in Earth atmospheric entry flows, detailed chemistry CFD simulations are still too computationally expensive for practical applications. Even for relatively simple mixtures consisting only of nitrogen molecules and atoms, rovibrational-specific state-to-state calculations have been limited to master equation studies involving space-homogeneous heat baths~\cite{panesi13a, kim13a, macdonald20b} and, at most, one-dimensional flows behind inviscid normal shocks~\cite{panesi14a}. Electronic-specific state-to-state CFD models have been developed to simulate electronic excitation and partial ionization in gases such as argon~\cite{kapper11a}, although in this case the number of discrete internal energy levels was much smaller than for the aforementioned molecular systems. Equivalent DSMC studies are even less common. Bruno et al.~\cite{bruno02a} were the first to incorporate QCT-derived vibrational-specific $\mathrm{N_2}$-$\mathrm{N}$ cross sections into a DSMC solver to study internal energy exchange and dissociation of nitrogen across a normal shock. To date, the only state-to-state DSMC simulations using a complete set of rovibrational-specific reaction cross sections for the $\mathrm{N_2}$-$\mathrm{N}$ system have been carried out by Kim and Boyd~\cite{kim14a}. One appealing way to reduce the computational cost of state-to-state nonequilibrium reacting flow calculations has been to develop coarse-grain models. 
The details of the reduction vary, but can broadly be classified into vibrational-specific,~\cite{esposito00a, munafo12a} energy bin,~\cite{magin12a, munafo14c, munafo14d} hybrids of both,~\cite{liu15a, macdonald18b} or more recently adaptive grouping of rovibrational states.~\cite{sahai17a, sahai19a, sharma20a} The basic concept is always to approximate the behavior of the full kinetic database with a much smaller set of cross sections/rate coefficients by grouping together many individual processes. In addition to air chemistry, the approach has also been applied to electronic-specific simulations of argon plasma~\cite{le13a}. On the DSMC side, coarse-grain models have been investigated as well.~\cite{zhu16a, torres18b} In every case, the lumping-together of internal energy levels leads to a reduction in the number of associated state-to-state reaction rate coefficients / cross sections and greatly reduces the cost of simulations. But with the size reduction of the kinetic databases also comes a loss of fidelity in the thermodynamic and chemical-kinetic description, especially if the binning strategy chosen is inadequate. The problem was recognized early on with the uniform rovibrational collisional (URVC) bin model~\cite{magin12a}. Its main underlying assumption of ``freezing'' the populations of rovibrational levels within a given bin to constant values (short recap in Sec.~\ref{sec:coarse_grain_model}) was shown to produce a set of mass balance equations at the coarse-grain level, which would be incompatible with micro-reversibility relations linking forward and backward rates between rovibrational states. This meant that the fluid equations could not satisfy the second law of thermodynamics, and the formulation of an associated entropy equation with unambiguous non-negative entropy production terms was not possible. Thus, one would not be guaranteed to retrieve the correct thermodynamic equilibrium with the URVC model. Furthermore, it was shown~\cite{munafo14a} that for the original formulation of the URVC bin model a large number of bins (up to 50) was required to approach the chemical dynamics of the full state-to-state system. For the inviscid fluid limit, this problem was first remedied by allowing the level populations within each bin to assume a Boltzmann population at the gas temperature $T$ (so-called Boltzmann binning~\cite{munafo14c, munafo14d}), while the bin populations themselves could still be relaxing toward equilibrium. With this change, the gas would be guaranteed to eventually assume the correct equilibrium populations regardless of the number of bins used. Building on this approach, so-called Maximum Entropy grouping~\cite{liu15a, sahai17a, sahai19a, sharma20a} adopted the use of bin-specific, or ``group-internal'' temperatures to allow the level populations within each bin even more degrees of freedom for local adjustment. This made it possible to almost exactly match the thermodynamic equilibrium and reaction rates of full state-to-state calculations with as few as two, or three overall bins. As a consequence, most research has so far concentrated on refining the coarse-grain models to best approximate the full chemical kinetics in the inviscid limit. In the few cases where viscous phenomena have been taken into account,~\cite{munafo14d, bellas20a} the transport properties were assumed to be independent of the molecules' internal energy states and were computed based on the current state-of-the-art collision integrals~\cite{stallcop01a}. 
It has, however, been theorized~\cite{mccourt90a, giovangigli99a, nagnibeda09a, brun09a} that transport properties should exhibit a dependence on the internal energy distributions. Indeed, in the state-to-state framework such a dependence appears naturally when deriving the Navier-Stokes equations as asymptotic solutions to the Boltzmann equation. Therefore, it must be taken into account for a well-posed fluid model. None of the coarse-grain models proposed so far has addressed this issue. The aforementioned Boltzmann bin and Maximum Entropy reductions also imply that the partition function of each energy bin must be temperature-dependent. This may work well within the context of a CFD fluid model, where detailed balance relations involve ratios of temperature-dependent forward and backward rate coefficients, but breaks down in DSMC~\cite{torres13a}, where these same relations have to be expressed in terms of collision energy-dependent cross sections. Such coarse-grain models effectively require individual molecules in the gas to ``be aware'' of the surrounding temperature, which is not compatible with the kinetic-scale description. In this work we formulate a coarse-grain model, which works within the context of DSMC and simultaneously allows for the derivation of viscous fluid equations with consistent transport terms. We propose a small, but important change~\cite{torres20a} to the early URVC bin model~\cite{magin12a}. Instead of attempting to enforce micro-reversibility relations for all rovibrational levels, we will postulate that such relations must hold only for the bin populations. We will therefore sacrifice some of the fine-grain detail of the original system in exchange for a simpler coarse-grain model. Our model still assumes constant populations for all rovibrational levels lumped into a given bin. Its main usefulness lies not so much in the ability to reproduce the full chemical kinetics with the smallest number of bins, but in the rather simple manner with which reversibility relations can be expressed in terms of coarse-grain cross sections. Furthermore, it allows us to postulate a Boltzmann equation at the coarse-grain level, which forms the starting point for a straightforward application of the Chapman-Enskog method to derive the fluid equations. As part of this, we obtain expressions for the transport properties directly based on the same coarse-grain cross sections appearing in the kinetic equation. As a consequence, the resulting transport properties are fully consistent with the corresponding coarse-grain DSMC collision model~\cite{torres18b} and naturally account for the transfer of internal energy without the need for \emph{ad hoc} fixes, such as the Eucken correction~\cite{stephani12a, liechty19a}. Our main objectives with this paper are: (1) Formulate the state-to-state kinetic equation for the coarse-grain model including fast (elastic) and slow (inelastic and reactive) collision terms, with reversibility expressed at the kinetic scale. (2) Derive the fluid equations for the coarse-grain model as an asymptotic solution to the kinetic equation by means of the Chapman-Enskog method. This includes expressions for the chemical source terms, the viscous fluxes and an entropy equation, with reversibility found at the macroscopic scale. (3) Verify the consistency of the hydrodynamic (Euler, or Navier-Stokes eqs.) and kinetic (Boltzmann eq.) coarse-grain models by simulation of normal shocks in nitrogen with CFD and DSMC methods. 
Assess the degree to which continuum breakdown across the shock causes the flow fields in the hydrodynamic and kinetic solutions to depart from one another. This paper is organized as follows. In Sec.~\ref{sec:coarse_grain_model}, we introduce the coarse-grain model for inelastic processes in molecular gas mixtures and recall its main features. In Sec.~\ref{sec:boltzmann_equation}, we discuss the governing kinetic equation and detail its constituting terms, including reversibility relations. In Sec.~\ref{sec:hydrodynamic_description}, we apply the Chapman-Enskog method to derive the corresponding fluid equations, along with expressions for all necessary transport and chemical source terms. In addition, we show that the entropy production terms due to transport and chemistry are always non-negative and thus the coarse-grain fluid equations satisfy the second law of thermodynamics. In Sec.~\ref{sec:normal_shock_bins}, we apply the coarse-grain model to reveal the structure of normal shock waves in a reacting gas mixture using three distinct simulation techniques. We first obtain the flow field in the inviscid limit by solving the system of master equations coupled to total momentum and energy balances behind the shock front. Then, we solve the full fluid equations across the shock with added viscous terms (Navier-Stokes) by means of the Finite Volume method. Finally, we directly solve the Boltzmann kinetic equation for the coarse-grain model by means of direct simulation Monte Carlo. These high-fidelity calculations provide a check on the fluid model and reveal additional features of the flow field. Finally, in Sec.~\ref{sec:conclusions}, we state the conclusions and discuss possible future work. \section{Coarse-grain model for the N3 system} \label{sec:coarse_grain_model} Throughout the remainder of this paper, we will consider as an example a mixture of molecular and atomic nitrogen, with both species in their ground electronic states, and use a set of cross sections derived from QCT calculations on an \emph{ab initio} PES for the $\mathrm{N_2}(v,J) + \mathrm{N}$ system, originally compiled at NASA Ames Research Center~\cite{jaffe08a, jaffe15a}. The 9390 rovibrational levels of the $\mathrm{N_2}$ molecule in its ground electronic state have been grouped together into a much smaller number of discrete internal energy bins according to the uniform rovibrational collisional (URVC) energy bin model~\cite{magin12a, torres20a, torres18b}. As a result, our mixture is composed of energy bins (labeled with indices $k = 1, 2, \ldots, \mathcal{N}_\mathrm{bins}$) that encompass all bound and pre-dissociated levels $i \in \mathcal{I}_\mathrm{N_2}$, plus atomic nitrogen in the ground electronic state. The number of molecules per unit volume populating bin $k$ is the sum over all level populations belonging to it, i.e.: $n_k = \sum_{i \in \mathcal{I}_k} \{ \mathsf{n}_i \}$. The core assumptions in the URVC reduction are that the level populations within a bin are fixed by the relation $\mathsf{n}_i = n_k \, \mathsf{a}_i / a_k$ and that each bin possesses an internal energy defined as the weighted average over the energies of its constituting rovibrational levels, i.e.: $E_k = \sum_{i \in \mathcal{I}_k} \{ \mathsf{a}_i \, \mathsf{E}_i \} / a_k$. Here, the overall degeneracy of each bin is the sum over degeneracies of all rovibrational levels belonging to it: $a_k = \sum_{i \in \mathcal{I}_k} \{ \mathsf{a}_i \}$. The set $\mathcal{K}_\mathrm{N_2}$ contains the indices pointing to every one of the bins. 
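To make the reduction concrete, the following minimal Python sketch illustrates how the bin quantities $a_k$ and $E_k$ could be assembled from a table of level degeneracies and energies. The array names and the uniform-width energy edges are illustrative assumptions only; the actual grouping used in this work follows the URVC prescription of Refs.~\cite{magin12a, torres20a}.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the URVC binning step. a_i and E_i are arrays of
# level degeneracies and energies [J]; the uniform-width energy edges are
# an assumption made for this example.
def build_urvc_bins(a_i, E_i, n_bins):
    edges = np.linspace(E_i.min(), E_i.max(), n_bins + 1)
    # assign each level to the bin containing its energy
    k_of_i = np.clip(np.digitize(E_i, edges) - 1, 0, n_bins - 1)
    a_k = np.zeros(n_bins)
    E_k = np.zeros(n_bins)
    for i, k in enumerate(k_of_i):
        a_k[k] += a_i[i]              # a_k = sum_i a_i
        E_k[k] += a_i[i] * E_i[i]
    # E_k = sum_i (a_i E_i) / a_k, guarding against empty bins
    E_k = np.divide(E_k, a_k, out=np.zeros_like(E_k), where=a_k > 0)
    return k_of_i, a_k, E_k

# Within a bin the level populations are frozen, n_i = n_k * a_i / a_k,
# so the bin population n_k = sum_i n_i is recovered exactly.
\end{verbatim}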
For simplicity, it is assumed that atomic nitrogen only occupies a single internal energy state. The full set of $\mathcal{N}_\mathrm{s} = 1 + \mathcal{N}_\mathrm{bins}$ (pseudo)-species in the mixture then becomes $S = \left\lbrace \mathrm{N}, \mathrm{N_2} \left( k \right) \, \forall \, (k \in \mathcal{K}_\mathrm{N_2}) \right\rbrace$. This reduction effectively replaces the highly resolved representation of molecular nitrogen's thermodynamic state provided by the full set of level populations $\mathsf{n}_i, (i \in \mathcal{I}_\mathrm{N_2})$ with a similar, but lower-resolution one that only relies on the bin populations $n_k, (k \in \mathcal{K}_\mathrm{N_2})$. By applying the URVC binning approach, the level-specific reaction rate/cross section data from the Ames database are condensed into bin-resolved rate coefficients/cross sections, first for inelastic collisions between molecular and atomic nitrogen: \begin{equation} \mathrm{N_2} \left( k \right) + \mathrm{N} \underset{k_{k \rightarrow l}^{E \mathrm{b}}}{\overset{k_{k \rightarrow l}^{E \mathrm{f}}}{\rightleftharpoons}} \mathrm{N_2} \left( l \right) + \mathrm{N} \quad \begin{array}{l} k, l \in \mathcal{K}_\mathrm{N_2}. \\ (k < l) \end{array} \label{eq:n3_excitation} \end{equation} Here we have labeled the \emph{forward} rate coefficient for the transition between molecules populating bins $\mathrm{N_2}(k)$ and $\mathrm{N_2}(l)$ as $k_{k \rightarrow l}^{E \mathrm{f}}$ (i.e. when Eq.~(\ref{eq:n3_excitation}) is read from left to right), whereas the \emph{backward} rate coefficient in the opposite sense is labeled as $k_{k \rightarrow l}^{E \mathrm{b}}$. Second, we have dissociation/recombination of an $\mathrm{N_2} \left( k \right)$-molecule by collision with an $\mathrm{N}$-atom: \begin{equation} \mathrm{N_2} \left( k \right) + \mathrm{N} \underset{k_{k}^{D \mathrm{b}}}{\overset{k_{k}^{D \mathrm{f}}}{\rightleftharpoons}} 3 \, \mathrm{N}, \qquad k \in \mathcal{K}_\mathrm{N_2} \label{eq:n3_dissociation} \end{equation} where we have labeled the rate coefficients for dissociation and recombination as $k_{k}^{D \mathrm{f}}$ and $k_{k}^{D \mathrm{b}}$, respectively. Third, $\mathrm{N_2} \left( k \right) + \mathrm{N}$-collisions in which no transition to another bin occurs are referred to as ``intra-bin scattering''. Such processes are the equivalent of elastic collisions in our framework, since no internal energy is exchanged. We make the rather strong assumption that, after lumping together the set of rovibrational levels into bins, detailed information about the rovibrational population distributions within each bin is irretrievably lost. This means that only the coarse-grain thermodynamic state represented by the bin populations can be tracked by the governing equations and one should not expect to retrieve any microscopically-resolved information (i.e. rovibrational populations) from the solutions to these equations. This simplification is valuable nonetheless, because it allows us to derive the governing equations at the hydrodynamic scale (i.e. Navier-Stokes) from the corresponding kinetic-scale equations (i.e. Boltzmann) in a fully consistent manner through application of the Chapman-Enskog method. Finally, note that no QCT data equivalent to the N3 database was available for $\mathrm{N_2}$-$\mathrm{N_2}$, or $\mathrm{N}$-$\mathrm{N}$ collisions, and the corresponding cross sections have been replaced with very simple ones only accounting for elastic scattering. 
However, this does not constitute a problem for the purposes of our comparisons, as long as the simplification is done in a consistent manner when evaluating the collision terms of the Boltzmann equation and when calculating the dissipative fluxes and chemical reaction rates in the Navier-Stokes equations. \section{Kinetic description: Boltzmann equation for coarse-grain model} \label{sec:boltzmann_equation} At the kinetic scale, the evolution of the gas mixture is governed by a system of Boltzmann equations: \begin{equation} \mathscr{D}_i \left( f_i \right) = \mathcal{J}_i (f) + \mathcal{C}_i (f), \qquad i \in S. \label{eq:generalized_boltzmann_equations} \end{equation} Here, $f_i = f_i \left( \boldsymbol{x}, \boldsymbol{c}_i, t \right)$ are the velocity distributions of the N-atoms and the $\mathrm{N_2}(k)$-molecules populating each one of the discrete internal states $k \in \mathcal{K}_\mathrm{N_2}$. The distributions depend on position $\boldsymbol{x}$ in physical space, particle velocity $\boldsymbol{c}_i$ and time $t$. The term $\mathscr{D}_i \left( f_i \right) = \partial f_i / \partial t + \boldsymbol{c}_i \cdot \nabla_{\boldsymbol{x}} \, f_i$ on the left hand side of Eq.~(\ref{eq:generalized_boltzmann_equations}) is the streaming operator. It accounts for local time evolution and advection of the $\mathrm{N_2}(k)$- and $\mathrm{N}$-velocity distributions in phase space. Any influence of external forces (e.g. gravitational potential) has been neglected in Eq.~(\ref{eq:generalized_boltzmann_equations}). The terms on the right hand side are the collision operators. Together they account for any changes in the velocity distributions due to collisions between the mixture species. The precise mathematical form of these operators depends on the collision types considered. In the present work we take into account the processes listed in Table~\ref{tab:collisional_processes}. There the collision types have been sub-divided into so-called \emph{fast} and \emph{slow} processes, based on their relative time scales. The \emph{fast} scattering processes are responsible for driving the mixture toward a Maxwell-Boltzmann distribution at a common kinetic temperature $T$ (i.e. thermalization) and for diffusive transport phenomena, whereas the \emph{slow} processes can be either excitation/deexcitation reactions (responsible for relaxation of internal energy) or molecular dissociation-recombination reactions. The \emph{slow} processes typically involve some energy threshold and any individual collision is far less likely to produce a significant change in the colliding particles' states than the \emph{fast} collision types. This is reflected in the relative sizes of the associated cross sections. The fast processes possess differential cross sections typically orders of magnitude greater than the slow ones, i.e. $\sigma^\mathrm{slow} \ll \sigma^\mathrm{fast}$. The sub-division into fast and slow processes is of little concern when Eq.~(\ref{eq:generalized_boltzmann_equations}) is solved directly, e.g. by means of the DSMC method. However, as discussed in Sec.~\ref{sec:macroscopic_balance_equation}, the associated difference in time scales is exploited to derive the corresponding governing equations at the hydrodynamic scale. 
\begin{table} \centering \caption{Collision types being modeled, separated into \emph{fast} and \emph{slow} processes} \label{tab:collisional_processes} \begin{tabular}{l l l} \multicolumn{3}{l}{\textbf{Fast collision processes}} \\ \hline \\[-0.5em] N-N elastic & $\mathrm{N} \left( \boldsymbol{c}_1 \right) + \mathrm{N} \left( \boldsymbol{c}_2 \right) \rightleftharpoons$ & \\[0.1em] scattering & $\qquad \qquad \qquad \quad \mathrm{N} \left( \boldsymbol{c}_1^\prime \right) + \mathrm{N} \left( \boldsymbol{c}_2^\prime \right)$ & \\[0.5em] $\mathrm{N_2} (k)$-N & $\mathrm{N_2} \left( \boldsymbol{c}_1, E_k \right) + \mathrm{N} \left( \boldsymbol{c}_2 \right) \rightleftharpoons$ & \multirow{2}{*}{$k \in \mathcal{K}_\mathrm{N_2}$} \\[0.1em] intra-bin & $\qquad \qquad \quad \mathrm{N_2} \left( \boldsymbol{c}_1^\prime, E_k \right) + \mathrm{N} \left( \boldsymbol{c}_2^\prime \right)$ & \\ scattering & & \\[0.5em] $\mathrm{N_2} (k)$-$\mathrm{N_2} (l)$ & $\mathrm{N_2} \left( \boldsymbol{c}_1, E_k \right) + \mathrm{N_2} \left( \boldsymbol{c}_2, E_l \right) \rightleftharpoons \qquad \quad$ & \multirow{2}{*}{$k,l \in \mathcal{K}_\mathrm{N_2}$} \\[0.1em] intra-bin & $\qquad \quad \mathrm{N_2} \left( \boldsymbol{c}_1^\prime, E_k \right) + \mathrm{N_2} \left( \boldsymbol{c}_2^\prime, E_l \right)$ & \\ scattering & & \\ & & \\ \multicolumn{3}{l}{\textbf{Slow collision processes}} \\ \hline & & \\[-0.5em] $\mathrm{N_2} (k)$-$\mathrm{N}$ & $\mathrm{N_2} \left( \boldsymbol{c}_1, E_k \right) + \mathrm{N} \left( \boldsymbol{c}_2 \right) \rightleftharpoons$ & $k,l \in \mathcal{K}_\mathrm{N_2}$ \\[0.1em] de/excitation & $\qquad \qquad \quad \mathrm{N_2} \left( \boldsymbol{c}_1^\prime, E_l \right) + \mathrm{N} \left( \boldsymbol{c}_2^\prime \right)$ & $\left( k < l \right)$ \\[0.5em] $\mathrm{N_2} (k)$-$\mathrm{N}$ & $\mathrm{N_2} \left( \boldsymbol{c}_1, E_k \right) + \mathrm{N} \left( \boldsymbol{c}_2 \right) \rightleftharpoons$ & \multirow{2}{*}{$k \in \mathcal{K}_\mathrm{N_2}$} \\[0.1em] dissociation- & $\qquad \quad \mathrm{N} \left( \boldsymbol{c}_3 \right) + \mathrm{N} \left( \boldsymbol{c}_4 \right) + \mathrm{N} \left( \boldsymbol{c}_5 \right)$ & \\ recombination & & \end{tabular} \end{table} \subsection{Fast collision operators} \label{sec:fast_collision_operator} The fast collision operator in Eq.~(\ref{eq:generalized_boltzmann_equations}) corresponds to the sum $\mathcal{J}_i (f) = \sum_{j \in S} \{ \mathcal{J}_{ij} (f_i, f_j) \}$. The partial collision operators: \begin{equation} \begin{split} \mathcal{J}_{ij} (f_i, f_j) = \int\limits_{\mathcal{R}^3} \int\limits_{\mathcal{S}^2} \Bigl( f_i^\prime \, f_j^\prime - f_i \, f_j \Bigr) g \, \sigma_{ij} \, \mathrm{d} \boldsymbol{\omega} \, \mathrm{d}\boldsymbol{c}_j, & \\ (i,j \in S), & \end{split} \label{eq:fast_collision_operator_cross_section} \end{equation} all possess the same structure for the fast processes listed in Table~\ref{tab:collisional_processes}. The integral in Eq.~(\ref{eq:fast_collision_operator_cross_section}) is short notation for a three-fold integral over velocity space, plus a surface integral over the unit sphere. We take the dependence of $f_i$ on $\boldsymbol{x}$, $\boldsymbol{c}_i$ and $t$ to be implicit. The variables $f_i^\prime, f_j^\prime$ represent the velocity distributions of pseudo-species $i$ and $j$ evaluated at the ``post-collision'' particle velocities $\boldsymbol{c}_i^\prime$ and $\boldsymbol{c}_j^\prime$ respectively (i.e. the right-hand side of the collision as written in Table~\ref{tab:collisional_processes}). 
Conversely, the unprimed $f_i$ represent the distribution evaluated at the ``pre-collision'' particle velocities (left-hand side) in the same table. The collision operator in Eq.~(\ref{eq:fast_collision_operator_cross_section}) is made up of two competing terms: one involving the product $f_i \, f_j$, which accounts for depletion (negative sign) of $f_i$ due to collisions in the forward sense, and another one involving $f_i^\prime \, f_j^\prime$, which accounts for simultaneous replenishment (positive sign) by inverse collisions. The term in parentheses is multiplied in Eq.~(\ref{eq:fast_collision_operator_cross_section}) by the magnitude of the pre-collision relative velocity $g = \left| \boldsymbol{c}_i - \boldsymbol{c}_j \right|$ and the differential scattering cross section $\sigma_{ij} = \sigma_{ij} \left( g, \boldsymbol{\omega} \right)$. The differential cross section may in general depend both on $g$ and on the orientation of the post-collision velocity $\boldsymbol{\omega} = \left( \boldsymbol{c}_i^\prime - \boldsymbol{c}_j^\prime \right) / \left| \boldsymbol{c}_i^\prime - \boldsymbol{c}_j^\prime \right|$. For the fast processes reversibility is enforced at the coarse-grain level and we postulate that the cross sections at both ``ends'' of the collision must verify the relation: \begin{equation} \sigma_{ij} \left( g, \boldsymbol{\omega} \right) = \sigma_{ij} \left( g^\prime, \boldsymbol{\omega}^\prime \right), \qquad (i,j \in S), \label{eq:inverse_collisions} \end{equation} where $g^\prime = | \boldsymbol{c}_i^\prime - \boldsymbol{c}_j^\prime |$ and $\boldsymbol{\omega}^\prime = \left( \boldsymbol{c}_i - \boldsymbol{c}_j \right) / \left| \boldsymbol{c}_i - \boldsymbol{c}_j \right|$. This is what allows us to combine the contributions of depleting and replenishing collisions in Eq.~(\ref{eq:fast_collision_operator_cross_section}) into a single integral. Strictly speaking, Eq.~(\ref{eq:inverse_collisions}) will only hold for elastic collisions, i.e. those where $g = g^\prime$ and no change in internal energy states occurs. Among the fast processes, this is exactly true only for the N-N collisions at the top of Table~\ref{tab:collisional_processes}. Since $\mathrm{N_2}(k)$-$\mathrm{N}$ and $\mathrm{N_2}(k)$-$\mathrm{N_2}(l)$ intra-bin scattering comprises all possible transitions between rovibrational levels within a given bin, such collisions are not truly elastic. However, any internal energy exchanged in this manner remains within the bin, so we effectively treat them as if they were elastic collisions, and in our coarse-grain model Eq.~(\ref{eq:inverse_collisions}) is assumed to hold true for all fast collision types. Although the differential cross section $\sigma_{ij}$ may in general depend on both $g$ and $\boldsymbol{\omega}$, for the calculations discussed in Sec.~\ref{sec:normal_shock_bins}, we will neglect their dependence on the latter. This allows us to replace the differential cross sections in Eq.~(\ref{eq:fast_collision_operator_cross_section}) with their integral counterparts\footnote{In this work we refer to them as ``integral'' instead of ``total'' cross sections, because in our naming convention~\cite{torres18b} we reserve the latter to mean the sum over elastic, inelastic and reactive cross sections of a given collision pair.} $\sigma_{ij}^\mathrm{I}(g) = \int_{\mathcal{S}^2} \sigma_{ij} (g, \boldsymbol{\omega}) \, \mathrm{d} \boldsymbol{\omega}$ and employ the variable hard sphere (VHS) model~\cite{bird80a} for isotropic scattering in our DSMC calculations. 
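For reference, a minimal Python sketch of the VHS integral cross section and of isotropic post-collision scattering is given below. The reference diameter, reference speed and viscosity exponent shown are placeholder values for illustration, not the parameters employed in our simulations.
\begin{verbatim}
import numpy as np

# Sketch of the VHS model with isotropic scattering. d_ref, g_ref and
# omega are placeholder reference values (assumptions for illustration).
def vhs_integral_cross_section(g, d_ref=4.0e-10, g_ref=1.0e3, omega=0.74):
    # sigma_I(g): hard-sphere cross section scaled by a power law in g
    return np.pi * d_ref**2 * (g_ref / g)**(2.0 * omega - 1.0)

def sample_isotropic_direction(rng):
    # post-collision unit vector, uniformly distributed on S^2
    cos_t = 2.0 * rng.random() - 1.0
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = 2.0 * np.pi * rng.random()
    return np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
\end{verbatim}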
As discussed in App.~\ref{app:collision_integrals}, the choice of scattering model has a direct effect on the transport properties of the corresponding Navier-Stokes calculations. We should note that employing transport coefficients based on the VHS model in CFD calculations of viscous flows is rather unusual, since much more accurate methods are available~\cite{capitelli00b, wright05a}. In fact, several researchers have gone the opposite route~\cite{kim08c, stephani12a, liechty19a} and ``calibrated'' the VHS, or similar cross sections in their DSMC codes with the state-of-the-art transport collision integrals. In the present work, we base our transport properties on collision integrals derived from the VHS model (see App.~\ref{app:collision_integrals}) to ensure consistency with our DSMC calculations, thus making the comparisons in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc} more straightforward. \subsection{Slow collision operators for $\boldsymbol{\mathrm{N}}$ and $\boldsymbol{\mathrm{N_2}(k)}$} \label{sec:slow_collision_operators} The slow collision operators account for all types of \emph{reactive} collisions in the broader sense of our coarse-grain state-to-state description. The general mathematical form of reactive collision terms has been derived in Sec.~4.2.5 of Giovangigli~\cite{giovangigli99a} and here we merely write down the particular cases applicable to the slow processes listed in Table~\ref{tab:collisional_processes}. Operator $\mathcal{C}_k (f)$ appears in all rows of Eq.~(\ref{eq:generalized_boltzmann_equations}) involving the pseudo-species $\mathrm{N_2}(k)$. It is itself composed of two separate terms, $\mathcal{C}_k (f) = \mathcal{C}_k^E (f) + \mathcal{C}_k^D (f)$. The first one accounts for the effect of excitation/deexcitation on $f_k$\footnote{Notice that we have included N-atom exchange reactions in this definition}: \begin{equation} \begin{split} \mathcal{C}_k^E (f) = \sum_{\substack{l \in \mathcal{K}_\mathrm{N_2} \\ (l \ne k)}} \int\limits_{\mathcal{R}^3} \int\limits_{\mathcal{S}^2} \Bigl( f_l^\prime f_{\mathrm{N}}^\prime \frac{a_k}{a_{l}} - f_k \, f_{\mathrm{N}} \Bigr) g \, \sigma_{k, \mathrm{N}}^{l, \mathrm{N}} \, \mathrm{d} \boldsymbol{\omega} \, \mathrm{d} \boldsymbol{c}_\mathrm{N}, & \\ k \in \mathcal{K}_\mathrm{N_2}. \quad & \end{split} \label{eq:excitation_n2k_operator} \end{equation} Here, $\sigma_{k, \mathrm{N}}^{l, \mathrm{N}} = \sigma_{k, \mathrm{N}}^{l, \mathrm{N}} (g,\boldsymbol{\omega})$ is the differential cross section for the transition of an $\mathrm{N_2}(k)$+$\mathrm{N}$ pair into an $\mathrm{N_2}(l)$+$\mathrm{N}$ collision pair. The ratio of degeneracies $a_k / a_l$ of the pre- and post-collision internal energy states $\mathrm{N_2}(k)$ and $\mathrm{N_2}(l)$ appears multiplying the post-collision distributions to account for detailed balance between the forward (i.e. excitation) and the backward (i.e. deexcitation) reactions. For the excitation-deexcitation reaction, the detailed balance relation expressed in terms of forward and backward differential cross sections takes on the form: \begin{equation} a_l \, g^2 \, \sigma_{k, \mathrm{N}}^{l, \mathrm{N}} ( g, \boldsymbol{\omega} ) \, \mathrm{d} \boldsymbol{\omega}^\prime = a_k \, g^{\prime \, 2} \, \sigma_{l, \mathrm{N}}^{k, \mathrm{N}} ( g^\prime, \boldsymbol{\omega}^\prime ) \, \mathrm{d} \boldsymbol{\omega}, \, \begin{array}{l}k \in \mathcal{K}_\mathrm{N_2} \\ k \ne l \end{array}. 
\label{eq:microreversibitily_n2k_differential_xsec} \end{equation} Again, we will assume isotropic scattering for all such collisions and replace the differential cross sections with their counterparts integrated over all deflection angles: \begin{equation} a_l \, g^2 \, \sigma_{k \rightarrow l}^{E \mathrm{f}} \left( g \right) = a_k \, g^{\prime \, 2} \, \sigma_{k \rightarrow l}^{E \mathrm{b}} \left( g^\prime \right), \quad k, l \in \mathcal{K}_\mathrm{N_2}, \; k \ne l. \label{eq:microreversibitily_n2k_integrated_xsec} \end{equation} Here, $\sigma_{k \rightarrow l}^{E\mathrm{f}}(g) = \int_{\mathcal{S}^2} \sigma_{k, \mathrm{N}}^{l, \mathrm{N}} (g, \boldsymbol{\omega}) \, \mathrm{d} \boldsymbol{\omega}$ is the integrated excitation cross section evaluated at the ``pre-collision'' relative speed $g = | \boldsymbol{c}_k - \boldsymbol{c}_\mathrm{N} |$ and $\sigma_{k \rightarrow l}^{E \mathrm{b}}(g^\prime)$ represents the integrated cross section for deexcitation from bin $\mathrm{N_2}(l)$ to bin $\mathrm{N_2}(k)$ evaluated at the ``post-collision'' relative speed $g^\prime = | \boldsymbol{c}_l^\prime - \boldsymbol{c}_\mathrm{N}^\prime |$. Energy conservation implies that the relation $g^\prime = \sqrt{g^2 + 2 (E_k - E_l) / \mu_{\mathrm{N_2},\mathrm{N}}}$ must hold between pre- and post-collision pairs. Here, $\mu_{\mathrm{N_2},\mathrm{N}} = m_\mathrm{N_2} \, m_\mathrm{N} / (m_\mathrm{N_2} + m_\mathrm{N})$ is the reduced mass for the $\mathrm{N_2}$-N collision pair. Notice also that the summation in Eq.~(\ref{eq:excitation_n2k_operator}) excludes the term ($k = l$), because this corresponds to $\mathrm{N_2}(k)$-$\mathrm{N}$ intra-bin scattering, which we consider to belong to the fast processes. The second term contributing to $\mathcal{C}_k (f)$ is due to dissociation-recombination reactions: \begin{equation} \begin{split} \mathcal{C}_k^D (f) = \int \Bigl( \bar{f}_\mathrm{N} \hat{f}_\mathrm{N} \check{f}_\mathrm{N} \frac{\beta_{\mathrm{N}}^2}{\beta_k} - f_k \, f_\mathrm{N} \Bigr) \times \ldots \qquad \qquad & \\ \times \mathcal{W}_{k, \mathrm{N}}^{3 \mathrm{N}} \, \mathrm{d} \bar{\boldsymbol{c}}_\mathrm{N} \, \mathrm{d} \hat{\boldsymbol{c}}_\mathrm{N} \, \mathrm{d} \check{\boldsymbol{c}}_\mathrm{N} \, \mathrm{d} \boldsymbol{c}_\mathrm{N}, \quad k \in \mathcal{K}_\mathrm{N_2}. & \end{split} \label{eq:dissociation_n2k_operator} \end{equation} This expression is more complex than Eqs.~(\ref{eq:fast_collision_operator_cross_section}) and (\ref{eq:excitation_n2k_operator}), because it involves a three-body interaction (the three N atoms after dissociation). This is reflected in the triple product of ``post-collision'' distribution functions $f_\mathrm{N}$ appearing as part of the replenishing term in Eq.~(\ref{eq:dissociation_n2k_operator}). Notice that instead of being ``primed'', these three $f_\mathrm{N}$ are each identified by a distinct accent to distinguish them from one another. Equation~(\ref{eq:dissociation_n2k_operator}) now involves a 12-fold integral in velocity space. The factor $\mathcal{W}_{k,\mathrm{N}}^{3 \mathrm{N}}$ is referred to~\cite{alexeev94a, giovangigli99a} as the ``reaction probability'' for the dissociation-recombination reaction (in the forward sense), even though it has dimensions of $\mathrm{time}^8 \times \mathrm{length}^{-6}$. Unlike in Eqs.~(\ref{eq:fast_collision_operator_cross_section}) and (\ref{eq:excitation_n2k_operator}), it is not straightforward to write Eq.~(\ref{eq:dissociation_n2k_operator}) in terms of a differential, or integrated cross section. 
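Returning briefly to Eq.~(\ref{eq:microreversibitily_n2k_integrated_xsec}), the short Python sketch below computes the deexcitation cross section implied by a given excitation cross section. The callable \texttt{sigma\_f} and all bin parameters are assumed to be supplied externally; this is an illustration of the relation, not our production implementation.
\begin{verbatim}
import numpy as np

# Sketch of the coarse-grain reversibility relation for integrated cross
# sections: a_l g^2 sigma_f(g) = a_k g'^2 sigma_b(g'). sigma_f is any
# callable returning the excitation cross section for bin k -> l
# (with E_l > E_k); a_k, a_l, E_k, E_l and the reduced mass mu are
# assumed known.
def deexcitation_cross_section(g, sigma_f, a_k, a_l, E_k, E_l, mu):
    # energy conservation: g'^2 = g^2 + 2 (E_k - E_l) / mu
    g_prime_sq = g**2 + 2.0 * (E_k - E_l) / mu
    if g_prime_sq <= 0.0:
        return 0.0, 0.0   # below the excitation threshold
    sigma_b = (a_l / a_k) * (g**2 / g_prime_sq) * sigma_f(g)
    return np.sqrt(g_prime_sq), sigma_b
\end{verbatim}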
The factors $\beta_k$ and $\beta_\mathrm{N}$, which appear in Eq.~(\ref{eq:dissociation_n2k_operator}) multiplying the replenishing term, are ``statistical weights'' of the colliding species: \begin{equation} \beta_k = \frac{\mathrm{h_P^3}}{a_k \, m_\mathrm{N_2}^3}, \quad (k \in \mathcal{K}_\mathrm{N_2}) \quad \text{and} \quad \beta_\mathrm{N} = \frac{\mathrm{h_P^3}}{a_\mathrm{N} \, m_\mathrm{N}^3}, \label{eq:statistical_weights} \end{equation} where $\mathrm{h_P}$ is Planck's constant, $m_\mathrm{N_2}$, $m_\mathrm{N}$ are the molecular masses ($m_\mathrm{N_2} = 4.65\times 10^{-26} \, \mathrm{kg}$ for all $\mathrm{N_2}(k)$ and $m_\mathrm{N} = \frac{1}{2} m_\mathrm{N_2}$ for atomic nitrogen) and $a_k$, $a_\mathrm{N}$ are again the degeneracies of pseudo-species $\mathrm{N}_2(k)$ and of N respectively. The ratio of statistical weights appears in Eqs.~(\ref{eq:dissociation_n2k_operator}) and (\ref{eq:d3_n_operator}) to account for detailed balance between the forward (i.e. dissociation) and backward (i.e. recombination) reactions. Analogous to the case for excitation-deexcitation just discussed, the terms in Eqs.~(\ref{eq:dissociation_n2k_operator}) and (\ref{eq:d3_n_operator}) accounting for dissociation-recombination have been written exclusively in terms of the \emph{forward} probability $\mathcal{W}_{k, \mathrm{N}}^{\, 3 \mathrm{N}}$, i.e. in the \emph{left-to-right} sense as written in Table~\ref{tab:collisional_processes}. This is possible, because we have postulated the existence of a reversibility relation for this three-body interaction: \begin{equation} \mathcal{W}_{k,\mathrm{N}}^{\, 3 \mathrm{N}} \, \beta_\mathrm{N}^3 = \mathcal{W}_{\, 3 \mathrm{N}}^{k,\mathrm{N}} \, \beta_k \, \beta_\mathrm{N}. \label{eq:n3_dissociation_microreversibility} \end{equation} The statistical weight of atomic nitrogen appears on both sides of Eq.~(\ref{eq:n3_dissociation_microreversibility}) with an exponent equal to its stoichiometric coefficient on the \emph{right} and \emph{left} of Eq.~(\ref{eq:n3_dissociation}), but simplifies once substituted into Eq.~(\ref{eq:dissociation_n2k_operator}). Notice also that Eq.~(\ref{eq:n3_dissociation_microreversibility}) implies that the dimensions of $\mathcal{W}_{\, 3 \mathrm{N}}^{k,\mathrm{N}}$ are now just $\mathrm{time}^{5}$. Finally, when considering the Boltzmann equation for atomic nitrogen, $\mathcal{C}_\mathrm{N} (f)$ accounts for the effect of $\mathrm{N}+\mathrm{N_2}(k)$ dissociation-recombination on $f_\mathrm{N}$ and assumes the form: \begin{equation} \begin{split} \mathcal{C}_\mathrm{N} (f) = \sum_{k \in \mathcal{K}_\mathrm{N_2}} \biggl\{ \int \Bigl( \bar{f}_\mathrm{N} \, \hat{f}_\mathrm{N} \, \check{f}_\mathrm{N} \frac{\beta_\mathrm{N}^2}{\beta_k} - f_\mathrm{N} \, f_k \Bigr) \times \ldots & \\ \times \mathcal{W}_{k,\mathrm{N}}^{3 \mathrm{N}} \, \mathrm{d} \bar{\boldsymbol{c}}_\mathrm{N} \mathrm{d} \hat{\boldsymbol{c}}_\mathrm{N} \mathrm{d} \check{\boldsymbol{c}}_\mathrm{N} \mathrm{d} \boldsymbol{c}_k \ldots & \\ - 3 \int \Bigl( f_\mathrm{N} \bar{f}_\mathrm{N} \hat{f}_\mathrm{N} \frac{\beta_\mathrm{N}^2}{\beta_k} - \check{f}_\mathrm{N} f_k \Bigr) \mathcal{W}_{k,\mathrm{N}}^{3 \mathrm{N}} \, \mathrm{d} \bar{\boldsymbol{c}}_\mathrm{N} \mathrm{d} \hat{\boldsymbol{c}}_\mathrm{N} \mathrm{d} \check{\boldsymbol{c}}_\mathrm{N} \mathrm{d} \boldsymbol{c}_k \biggr\}. & \label{eq:d3_n_operator} \end{split} \end{equation} Every element of the sum in Eq.~(\ref{eq:d3_n_operator}) is composed of two integrals. Both share the same structure as the one in Eq.~(\ref{eq:dissociation_n2k_operator}). 
The first one is focused on atomic nitrogen on the \emph{left} of Eq.~(\ref{eq:n3_dissociation}) and accounts for depletion of this species due to dissociation and its simultaneous replenishment due to recombination. The second integral does the same, but is focused on one of the three N-atoms on the \emph{right} of Eq.~(\ref{eq:n3_dissociation}). It accounts for depletion of any of the three N-atoms due to recombination and their simultaneous replenishment due to dissociation, hence the minus sign multiplying the integral. The factor 3 appears, because one must account cumulatively for the loss of the three nitrogen atoms on the right-hand side of Eq.~(\ref{eq:n3_dissociation}). Equation~(\ref{eq:generalized_boltzmann_equations}) and the associated collision terms provide a useful framework for deriving the macroscopic equations in Sec.~\ref{sec:hydrodynamic_description}. However, in this work we only solve the Boltzmann equation indirectly, by means of the particle-based DSMC method. In this approach the behavior of the collision terms has to be translated into a collision algorithm, which has been detailed previously in Ref.~\cite{torres18b}. \subsection{Macroscopic flow variables in terms of velocity distributions} \label{sec:macroscopic_moments} The set of kinetic equations represented by Eq.~(\ref{eq:generalized_boltzmann_equations}) can be solved (either indirectly using DSMC, or another suitable method) if well-posed initial and boundary conditions for the distribution functions of all mixture components are specified. From a mathematical viewpoint the solution is obtained once the distribution functions $f_i$ can be determined everywhere in phase space at any time of interest. However, from a practical viewpoint the solution only becomes useful after the distributions have been integrated over velocity space to yield their macroscopic moments. Here we recall the definitions of these flow field variables used in fluid dynamics in terms of moments of the distribution functions. The mass density of every species is given by: \begin{equation} \rho_i = m_i \int_{\mathcal{R}^3} f_i \, \mathrm{d} \boldsymbol{c}_i, \qquad i \in S, \label{eq:species_mass_density_moments} \end{equation} with individual species number densities following from $n_i = \rho_i / m_i $. Mixture number and mass densities are calculated as $n = \sum_{i \in S} \{ n_i \}$ and $\rho = \sum_{i \in S} \{ \rho_i \}$ respectively. The hydrodynamic velocity of the gas is given by: \begin{equation} \boldsymbol{u} = \frac{1}{\rho} \sum_{i \in S} \left\lbrace m_i \int_{\mathcal{R}^3} \boldsymbol{c}_i \, f_i \, \mathrm{d} \boldsymbol{c}_i \right\rbrace. \label{eq:hydrodynamic_velocity_moments} \end{equation} Diffusion velocities of each species are given by: \begin{equation} \boldsymbol{u}_i^\mathrm{d} = \frac{1}{n_i} \int_{\mathcal{R}^3} \boldsymbol{C}_i \, f_i \, \mathrm{d} \boldsymbol{C}_i, \qquad i \in S, \label{eq:diffusion_velocity_moments} \end{equation} where $\boldsymbol{C}_i = \boldsymbol{c}_i - \boldsymbol{u}$ represent the peculiar velocities of particles belonging to species $i \in S$. By definition, the diffusion velocities always verify the constraint $\sum_{i \in S} \{ \rho_i \, \boldsymbol{u}_i^\mathrm{d} \} = \boldsymbol{0}$. 
Of particular interest in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc} is the diffusion velocity of $\mathrm{N_2}$, which is obtained as the mass-weighted average $\boldsymbol{u}_\mathrm{N_2}^\mathrm{d} = 1 / \rho_\mathrm{N_2} \sum_{k \in \mathcal{K}_\mathrm{N_2}} \{ \rho_k \, \boldsymbol{u}_k^\mathrm{d} \}$. The kinetic stress tensor is obtained as: \begin{equation} \underline{\underline{\mathcal{P}}} = \sum_{i \in S} \left\lbrace m_i \int_{\mathcal{R}^3} \, \boldsymbol{C}_i \otimes \boldsymbol{C}_i \, f_i \, \mathrm{d} \boldsymbol{C}_i \right\rbrace. \label{eq:pressure_tensor_moments} \end{equation} The pressure tensor can be split into an isotropic and a remaining anisotropic contribution $\underline{\underline{\mathcal{P}}} = p \, \underline{\underline{I}} - \underline{\underline{\tau}}$, where $p$ is the hydrostatic pressure, $\underline{\underline{I}}$ stands for the unit tensor and $\underline{\underline{\tau}}$ is the viscous stress tensor. The hydrostatic pressure is calculated as $1/3$ of the trace of $\underline{\underline{\mathcal{P}}}$, e.g. in Cartesian coordinates $p = \frac{1}{3} \left( \mathcal{P}_{xx} + \mathcal{P}_{yy} + \mathcal{P}_{zz} \right)$. Next, we invoke the perfect gas law to introduce the translational temperature as $T = p / ( n \mathrm{k_B} )$. Since we are dealing with a dilute gas mixture, we may express the composition in terms of partial pressures $p_i = x_i \, p$, where $x_i = n_i / n$ are the species mole fractions. Alternatively, the mixture composition can be expressed in terms of mass fractions $y_i = \rho_i / \rho$. A separate temperature $T_\mathrm{int}$ can be defined for characterizing the internal energy content of $\mathrm{N_2}$, i.e. $n_\mathrm{N_2} E_\mathrm{N_2}^\mathrm{int} = \sum_{k \in \mathcal{K}_\mathrm{N_2}} \{ n_k E_k \}$. It is an implicit function of the number densities $n_k$, as explained in Appendix~C of Ref.~\cite{torres18b}. The total energy per unit volume in terms of the distribution is given by: \begin{equation} \rho E = \sum_{i \in S} \left\lbrace \int_{\mathcal{R}^3} \left( \frac{1}{2} m_i \, \boldsymbol{C}_i \cdot \boldsymbol{C}_i + E_i \right) f_i \, \mathrm{d} \boldsymbol{C}_i \right\rbrace, \label{eq:total_energy_moments} \end{equation} where the $E_i$ represent the internal energies of each species $i \in S$. In our coarse-grained state-to-state description, they correspond to the bin-averaged energies $E_k$ for each internal state $\mathrm{N_2} (k), \, \forall \, k \in \mathcal{K}_\mathrm{N_2}$ and $E_\mathrm{N}$ to the 0-K energy of formation of atomic nitrogen. For consistency with our prior definitions~\cite{torres20a, torres18b, bellas20a}, we set $E_\mathrm{N} = D_0 / 2$, where $D_0 = 9.75 \, \mathrm{eV}$ is the heat of dissociation per $\mathrm{N_2}$-molecule from the ground rovibrational level as given by the NASA Ames N3 diatomic potential~\cite{jaffe18a}. Notice that the kinetic temperature $T$ and Eq.~(\ref{eq:total_energy_moments}) are related to one another through $\rho E = \frac{1}{2} \rho \, | \boldsymbol{u} |^2 + \frac{3}{2} \, n \, \mathrm{k_B} T + n_\mathrm{N_2} E_\mathrm{N_2}^\mathrm{int} + n_\mathrm{N} E_\mathrm{N}$. 
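In a DSMC calculation these moments are estimated from cell-based particle averages. A minimal Python sketch of such an estimator is shown below, where \texttt{vel} and \texttt{mass} are per-particle arrays and \texttt{F\_num}, the number of real particles represented by each simulator particle, is an assumed code parameter; the exact expressions used in this work are collected in App.~\ref{app:macroscopic_moments}.
\begin{verbatim}
import numpy as np

KB = 1.380649e-23   # Boltzmann constant [J/K]

# Sketch of moment estimation in one DSMC cell of volume V [m^3].
# vel: (N,3) particle velocities, mass: (N,) particle masses.
def cell_moments(vel, mass, V, F_num):
    n   = F_num * len(mass) / V                  # number density
    rho = F_num * mass.sum() / V                 # mass density
    u   = (mass[:, None] * vel).sum(axis=0) / mass.sum()  # bulk velocity
    C   = vel - u                                # peculiar velocities
    # hydrostatic pressure: one third of the trace of the stress tensor
    p   = F_num * (mass[:, None] * C**2).sum() / (3.0 * V)
    T   = p / (n * KB)                           # perfect gas law
    return n, rho, u, p, T
\end{verbatim}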
Finally, the mixture heat flux is the flux of kinetic and internal energy transported with every particle along each Cartesian direction: \begin{equation} \boldsymbol{q} = \sum_{i \in S} \left\lbrace \int_{\mathcal{R}^3} \left( \frac{1}{2} \, m_i \, \boldsymbol{C}_i \cdot \boldsymbol{C}_i + E_i \right) \boldsymbol{C}_i \, f_i \, \mathrm{d} \boldsymbol{C}_i \right\rbrace. \label{eq:heat_flux_moments} \end{equation} For the exact expressions used to evaluate Eqs.~(\ref{eq:species_mass_density_moments})-(\ref{eq:heat_flux_moments}) in our DSMC calculations, refer to App.~\ref{app:macroscopic_moments}. \section{Hydrodynamic description for coarse-grain model} \label{sec:hydrodynamic_description} In this section we discuss the macroscopic balance equations used to model the flow at the hydrodynamic scale. They are derived from Eq.~(\ref{eq:generalized_boltzmann_equations}) by applying the Chapman-Enskog method~\cite{giovangigli99a, ferziger72a, chapman70a}. Here we give a quick overview of this procedure for our particular application. \subsection{Chapman-Enskog method for coarse-grain model} \label{sec:chapman_enskog} We introduce suitable reference quantities at the kinetic and macroscopic level to perform a dimensional order-of-magnitude analysis~\cite{graille09a} of Eq.~(\ref{eq:generalized_boltzmann_equations}). This allows us to re-write it in its non-dimensional form: \begin{equation} \tilde{\mathscr{D}} ( \tilde{f}_i ) = \frac{1}{\mathrm{Kn}} \left[ \tilde{\mathcal{J}}_i (\tilde{f}) + \frac{\sigma^\mathrm{slow}}{\sigma^\mathrm{fast}} \, \tilde{\mathcal{C}}_i (\tilde{f}) \right], \, i \in S, \label{eq:boltzmann_equation_nondimensional} \end{equation} where $\mathrm{Kn} = \lambda^0 / L^0$ is a pseudo-Knudsen number based on a reference mean free path $\lambda^0$ and a macroscopic length scale $L^0$. The scaling for arriving at the compressible Navier-Stokes equations is to select $\mathrm{Kn} \sim \varepsilon \ll 1$. The fast and slow processes in Eq.~(\ref{eq:boltzmann_equation_nondimensional}) are assumed to occur at time scales different enough to require separate reference cross sections and the Maxwellian reaction regime~\cite{giovangigli99a} is obtained assuming that $\sigma^\mathrm{slow} \sim \varepsilon^2 \sigma^\mathrm{fast}$. Applying this scaling is a choice, which ultimately determines the structure of the resulting hydrodynamic equations. 
Expressed in terms of the small parameter $\varepsilon$ and reverting back to dimensional variables for convenience, we will thus seek solutions to Eq.~(\ref{eq:generalized_boltzmann_equations}) in the continuum limit of the form: \begin{equation} \mathscr{D}_i \left( f_i \right) = \frac{1}{\varepsilon} \mathcal{J}_i (f) + \varepsilon \, \mathcal{C}_i (f), \qquad (i \in S) \label{eq:boltzmann_equation_small_parameter} \end{equation} Performing an Enskog expansion around the zero-order velocity distributions $f_i^0$ in terms of the small parameter $\varepsilon$: $f_i = f_i^0 \, ( 1 + \varepsilon \, \phi_i + \varepsilon^2 \, \phi_i^{(2)} + \dots )$, where $\phi_i$ and $\phi_i^{(2)}$ are perturbation functions, and substituting back into Eq.~(\ref{eq:boltzmann_equation_small_parameter}) yields: \begin{equation} \begin{split} & \mathscr{D}_i ( f_i^0 ) + \varepsilon \, \mathscr{D}_i ( f_i^0 \phi_i ) + \dots = \frac{1}{\varepsilon} \mathcal{J}_i (f^0) - f_i^0 \mathscr{F}_i ( \phi ) \\ & + \varepsilon \left( - f_i^0 \, \mathscr{F}_i ( \phi^{(2)} ) + \mathcal{J}_i (f^0 \phi) + \mathcal{C}_i (f^0) \right) + \dots \label{eq:boltzmann_equation_enskog} \end{split} \end{equation} where $\mathscr{F}_i ( \phi)= - \sum\limits_{j \in S} \{ \mathcal{J}_{ij} ( f_i^0 \phi_i , f_j^0 ) + \mathcal{J}_{ij} ( f_i^0, f_j^0 \phi_j ) \} / f_i^0$ is the linearized fast collision operator. Solving Eq.~(\ref{eq:boltzmann_equation_enskog}) at order $\varepsilon^{-1}$ (corresponding to the fastest time scale) yields the equilibrium, or Maxwell-Boltzmann distribution. Here we have defined the macroscopic moments: species mass density $\rho_i = \int_{\mathcal{R}^3} m_i \, f_i^0 \, \mathrm{d} \boldsymbol{c}_i, \, (i \in S)$, mixture momentum density $\rho \boldsymbol{u} = \sum_{i \in S} \{ \int_{\mathcal{R}^3} m_i \, \boldsymbol{c}_i f_i^0 \, \mathrm{d} \boldsymbol{c}_i \}$ and total energy density $\rho E = \sum_{i \in S} \{ \int_{\mathcal{R}^3} \left( \frac{1}{2} m_i \, \boldsymbol{c}_i \cdot \boldsymbol{c}_i + E_i \right) f_i^0 \, \mathrm{d} \boldsymbol{c}_i \}$ exclusively in terms of the zero-order velocity distribution. In terms of these macroscopic flow variables, the zero-order distribution takes on the form: \begin{equation} f_i^0 = \left( \frac{m_i}{2 \pi \mathrm{k_B} T} \right)^{3/2} n_i \exp \left( - \frac{m_i | \boldsymbol{c}_i - \boldsymbol{u} |^2 }{2 \mathrm{k_B} T} \right) \label{eq:maxwellian} \end{equation} The equality $f_i^{0 \prime} f_j^{0 \prime} = f_i^{0} f_j^{0}$ allows us now to write the linearized fast operator as: $\mathscr{F}_i ( \phi ) = \sum_{j \in S} \{ \int_{\mathcal{R}^3} f_j^0 ( \phi_i + \phi_j - \phi_i^\prime - \phi_j^\prime) g \, \sigma_{ij} \, \mathrm{d} \boldsymbol{\omega} \, \mathrm{d} \boldsymbol{c}_j \}$. 
Averaging Eq.~(\ref{eq:boltzmann_equation_enskog}) at order $\varepsilon^0$ over pseudo-species mass, momentum and energy leads to the Euler equations for the non-reacting gas mixture: \begin{align} \partial_t ( \rho_i ) & + \nabla_{\boldsymbol{x}} \cdot \bigl( \rho_i \boldsymbol{u} \bigr) = 0, \qquad i \in S \label{eq:euler_mass_balance} \\ \nonumber \\[-1em] \partial_t ( \rho \boldsymbol{u} ) & + \nabla_{\boldsymbol{x}} \cdot \bigl( \rho \boldsymbol{u} \otimes \boldsymbol{u} + p \, \underline{\underline{I}} \bigr) = \boldsymbol{0} \label{eq:euler_momentum_balance} \\ \nonumber \\[-1em] \partial_t ( \rho E ) & + \nabla_{\boldsymbol{x}} \cdot \bigl( \rho \boldsymbol{u} \bigl( E + p/\rho \bigr) \bigr) = 0 \label{eq:euler_total_energy_balance} \end{align} Notice that due to the choice of constraints, the definitions of macroscopic moments in the Chapman-Enskog solution slightly differ from those introduced in Sec.~\ref{sec:macroscopic_moments} at the kinetic scale. However, out of convenience here we will use the same symbols for both definitions. Definitions for $\rho_\mathrm{N_2}$, etc. and corresponding number densities follow the same pattern as in Sec.~\ref{sec:macroscopic_moments}. Note also that, given the scaling in Eq.~(\ref{eq:boltzmann_equation_small_parameter}), the slow collision operators do not contribute to the solution at order $\varepsilon^0$, and thus no chemical source terms appear on the right hand side of Eq.~(\ref{eq:euler_mass_balance}). \subsection{Macroscopic balance (Navier-Stokes) equations for coarse-grain system including viscous and chemical source terms} \label{sec:macroscopic_balance_equation} With $f_i^0$ known, we go back to solving Eq.~(\ref{eq:boltzmann_equation_enskog}) at order $\varepsilon^0$ for the first-order perturbations $\phi = (\phi_i)_{i \in S}$: \begin{equation} \mathscr{F}_i ( \phi ) = \Psi_i, \qquad i \in S \label{eq:epsilon_0} \end{equation} Uniqueness of the solution is ensured through the constraint that the perturbations do not contribute to the macroscopic moments, i.e.: $\int_{\mathcal{R}^3} m_i \, f_i^0 \phi_i \, \mathrm{d} \boldsymbol{c}_i = 0 \, (i \in S)$, $\sum_{i \in S} \{ \int_{\mathcal{R}^3} m_i \boldsymbol{c}_i \, f_i^0 \phi_i \, \mathrm{d} \boldsymbol{c}_i \} = \boldsymbol{0}$, and $\sum_{i \in S} \{ \int_{\mathcal{R}^3} ( \frac{1}{2} m_i \, \boldsymbol{c}_i \cdot \boldsymbol{c}_i \, + E_i ) f_i^0 \phi_i \, \mathrm{d} \boldsymbol{c}_i \} = 0$. Next, we evaluate the right hand side of Eq.~(\ref{eq:epsilon_0}), $\Psi_i = - \mathscr{D}_i ( \ln f_i^0 )$. With the help of Eq.~(\ref{eq:maxwellian}), we express all resulting time derivatives of macroscopic flow variables in terms of spatial gradients by re-arranging Eqs.~(\ref{eq:euler_mass_balance})-(\ref{eq:euler_total_energy_balance}). The result is a linear combination of the transport forces~\cite{giovangigli99a}, i.e. gradients in flow velocity, species partial pressure and temperature: \begin{equation} \begin{split} \Psi_i = - \boldsymbol{\Psi}_i^\eta : \nabla_{\boldsymbol{x}} \, \boldsymbol{u} & - \sum_{j \in S} \boldsymbol{\Psi}_i^{D_j} \cdot \boldsymbol{d}_j \\ & - \boldsymbol{\Psi}_i^{\widehat{\lambda}} \cdot \nabla_{\boldsymbol{x}} \left( \frac{1}{\mathrm{k_B} T} \right), \quad i \in S, \end{split} \end{equation} where we have defined the driving forces for diffusion of species $j$ as: $\boldsymbol{d}_j = ( \nabla_{\boldsymbol{x}} \, p_j ) / p$. 
The individual contributions to $\Psi_i$ are given by: \begin{align} \boldsymbol{\Psi}_i^\eta = & \frac{m_i}{\mathrm{k_B} T} \left( \boldsymbol{C}_i \otimes \boldsymbol{C}_i - \tfrac{1}{3} \boldsymbol{C}_i \cdot \boldsymbol{C}_i \, \underline{\underline{I}} \right), \quad i \in S, \\ \boldsymbol{\Psi}_i^{D_j} = & \frac{1}{p_i} \left( \delta_{ij} - y_i \right) \, \boldsymbol{C}_i, \qquad \qquad \quad i, j \in S, \\ \boldsymbol{\Psi}_i^{\widehat{\lambda}} = & \left( \tfrac{5}{2} \mathrm{k_B} T - \tfrac{1}{2} m_i \, \boldsymbol{C}_i \cdot \boldsymbol{C}_i \right) \boldsymbol{C}_i, \qquad i \in S \end{align} It can be shown that the unique solution to Eq.~(\ref{eq:epsilon_0}) is a linear combination of terms proportional to the driving forces: \begin{equation} \begin{split} \phi_i = - \boldsymbol{\phi}_i^\eta : \nabla_{\boldsymbol{x}} \, \boldsymbol{u} & - \sum_{j \in S} \boldsymbol{\phi}_i^{D_j} \cdot \boldsymbol{d}_j \\ & - \boldsymbol{\phi}_i^{\widehat{\lambda}} \cdot \nabla_{\boldsymbol{x}} \left( \frac{1}{\mathrm{k_B} T} \right), \quad i \in S, \end{split} \end{equation} The tensorial functions $\boldsymbol{\phi}^\eta = ( \boldsymbol{\phi}_i^\eta )_{i \in S}$ and vectorial functions $\boldsymbol{\phi}^{D_j} = ( \boldsymbol{\phi}_i^{D_j} )_{(i,j) \in S}$ and $\boldsymbol{\phi}^{\widehat{\lambda}} = (\boldsymbol{\phi}_i^{\widehat{\lambda}})_{i \in S}$ are solutions to linearized Boltzmann equations decoupled for each driving force contribution (see Eq.~(4.6.24) of Giovangigli~\cite{giovangigli99a}) \begin{equation} \mathscr{F}_i ( \boldsymbol{\phi}^\mu ) = \Psi_i^\mu, \qquad i \in S, \label{eq:epsilon_0-mu} \end{equation} with the superscript $\mu\in\{\eta,D_j,(j\in S), \widehat{\lambda}\}$. Constraints are imposed as $\int_{\mathcal{R}^3} m_i \, f_i^0 \boldsymbol{\phi}_i^\mu \, \mathrm{d} \boldsymbol{c}_i = 0 \, (i \in S)$, $\sum_{i \in S} \{ \int_{\mathcal{R}^3} m_i \boldsymbol{c}_i \, f_i^0 \boldsymbol{\phi}_i^\mu \, \mathrm{d} \boldsymbol{c}_i \} = \boldsymbol{0}$, and $\sum_{i \in S} \{ \int_{\mathcal{R}^3} ( \frac{1}{2} m_i \, \boldsymbol{c}_i \cdot \boldsymbol{c}_i \, + E_i ) f_i^0 \boldsymbol{\phi}_i^\mu \, \mathrm{d} \boldsymbol{c}_i \} = 0$. In the continuum, or hydrodynamic limit the complete governing equations are finally obtained by averaging Eq.~(\ref{eq:boltzmann_equation_enskog}) at order $\varepsilon^1$ over pseudo-species mass, total momentum and energy: \begin{align} \partial_t ( \rho_i ) & + \nabla_{\boldsymbol{x}} \cdot \bigl( \rho_i \boldsymbol{u} + \boldsymbol{j}_i \bigr) = \omega_i, \qquad i \in S, \label{eq:species_mass_balance} \\ \nonumber \\[-1em] \partial_t ( \rho \boldsymbol{u} ) & + \nabla_{\boldsymbol{x}} \cdot \bigl( \rho \boldsymbol{u} \otimes \boldsymbol{u} + p \, \underline{\underline{I}} - \underline{\underline{\tau}} \bigr) = \boldsymbol{0}, \label{eq:momentum_balance} \\ \nonumber \\[-1em] \partial_t ( \rho E ) & + \nabla_{\boldsymbol{x}} \cdot \bigl( \rho \boldsymbol{u} \bigl( E + p/\rho \bigr) - \underline{\underline{\tau}} \cdot \boldsymbol{u} + \boldsymbol{q} \bigr) = 0. \label{eq:total_energy_balance} \end{align} Here, Eq.~(\ref{eq:species_mass_balance}) represents the set of continuity equations for every pseudo-species $i \in S$. The structure of the chemical source terms on the right hand side is discussed in more detail in Sec.~\ref{sec:navier_stokes_chemistry_source_terms}. 
The transport fluxes for pseudo-species mass, momentum and energy appearing in Eqs.~(\ref{eq:species_mass_balance})-(\ref{eq:total_energy_balance}) are given in the Chapman-Enskog approximation by: \begin{align} \boldsymbol{j}_i & = \int_{\mathcal{R}^3} m_i \, \boldsymbol{C}_i \, f_i^0 \phi_i \, \mathrm{d} \boldsymbol{C}_i, \qquad i \in S, \label{eq:diffusion_flux_perturbation} \\ \underline{\underline{\tau}} & = - \sum_{i \in S} \left\lbrace \int_{\mathcal{R}^3} m_i \, \boldsymbol{C}_i \otimes \boldsymbol{C}_i \, f_i^0 \phi_i \, \mathrm{d} \boldsymbol{C}_i \right\rbrace, \label{eq:stress_tensor_perturbation} \\ \boldsymbol{q} & = \sum_{i \in S} \left\lbrace \int_{\mathcal{R}^3} \left( \frac{1}{2} m_i \, \boldsymbol{C}_i \cdot \boldsymbol{C}_i + E_i \right) f_i^0 \phi_i \, \mathrm{d} \boldsymbol{C}_i \right\rbrace, \label{eq:heat_flux_perturbation} \end{align} respectively. We discuss the manner in which these fluxes are evaluated in Sec.~\ref{sec:navier_stokes_transport}. \subsection{Transport fluxes} \label{sec:navier_stokes_transport} Solving the kinetic equations of Eq.~(\ref{eq:epsilon_0}) leads to expressions for Eqs.~(\ref{eq:diffusion_flux_perturbation})-(\ref{eq:heat_flux_perturbation}) in terms of spatial gradients of flow field variables and transport coefficients. The transport properties can be obtained through the solution of linear systems arising from Galerkin approximations (see Secs.~4.6.5 and 4.7 of Giovangigli~\cite{giovangigli99a} for details). This ultimately provides closure for the viscous terms in the Navier-Stokes equations. The diffusion fluxes appearing in Eq.~(\ref{eq:species_mass_balance}) are found as a solution to the system of Stefan-Maxwell equations of multi-component diffusion: \begin{equation} \sum_{j \in S} \left\lbrace \Delta_{ij} \frac{\boldsymbol{j}_j}{\rho_j} \right\rbrace = - ( \boldsymbol{d}_i + \chi_i \, \nabla_{\boldsymbol{x}} \ln T), \qquad i \in S \label{eq:stefan_maxwell_eqn} \end{equation} subject to the constraint $\sum_{j \in S} \{ \boldsymbol{j}_j \} = \boldsymbol{0}$ to ensure mass conservation. The Stefan-Maxwell matrix entries in Eq.~(\ref{eq:stefan_maxwell_eqn}) are: \begin{align} \Delta_{ij} & = - \frac{x_i x_j}{\mathcal{D}_{ij}}, \qquad i,j \in S, \quad i \ne j, \\ \Delta_{ii} & = \sum_{\substack{j \in S\\ i \ne j}} \frac{x_i x_j}{\mathcal{D}_{ij}}, \qquad i \in S. \end{align} In the absence of external force fields, all remaining driving forces for diffusion appear on the right hand side of Eq.~(\ref{eq:stefan_maxwell_eqn}). The previously introduced linearly dependent driving forces can be decomposed as $\boldsymbol{d}_i = ( \nabla_{\boldsymbol{x}} \, p_i ) / p = \nabla_{\boldsymbol{x}} \, x_i + \left( x_i - y_i \right) \nabla_{\boldsymbol{x}} \ln p$, which means they account for diffusion due to gradients of mole fraction and pressure (baro-diffusion). The remaining term in Eq.~(\ref{eq:stefan_maxwell_eqn}) represents thermo-diffusion (Soret effect), induced by temperature gradients. Formally, all three gradients can induce species mass transfer, but in the Navier-Stokes calculations of Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc} only mole fraction gradients were taken into account. In order to evaluate the entries of the Stefan-Maxwell matrix, one must supply the binary diffusion coefficients $\mathcal{D}_{ij} (p,T) \, \forall \, (i \ne j), (i,j \in S)$ and the thermal diffusion ratios $\chi_i$.
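To make the structure of Eq.~(\ref{eq:stefan_maxwell_eqn}) concrete, the following minimal Python sketch assembles the Stefan-Maxwell matrix for an illustrative three-component mixture and solves for the diffusion fluxes under the mass-conservation constraint, which is appended as an extra row before solving in the least-squares sense. All numerical values are placeholders, not data from this work, and the thermo-diffusion term is omitted for brevity:

\begin{verbatim}
import numpy as np

# Illustrative 3-component mixture; every value is a placeholder.
x   = np.array([0.2, 0.5, 0.3])            # mole fractions
rho = np.array([0.05, 0.60, 0.40])         # partial densities [kg/m^3]
Dij = np.array([[0.0,  2e-4, 3e-4],        # binary diffusion coeffs [m^2/s]
                [2e-4, 0.0,  4e-4],
                [3e-4, 4e-4, 0.0]])
d   = np.array([1.0, -0.4, -0.6])          # driving forces d_i (sum to 0)

n = len(x)
Delta = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            Delta[i, j] = -x[i] * x[j] / Dij[i, j]
# Delta_ii = sum_{j != i} x_i x_j / D_ij:
np.fill_diagonal(Delta, -Delta.sum(axis=1))

# Unknowns v_i = j_i / rho_i. Delta is singular (its rows sum to zero),
# so the constraint sum_i rho_i v_i = 0 is appended as an extra equation.
A = np.vstack([Delta, rho])
b = np.append(-d, 0.0)
v = np.linalg.lstsq(A, b, rcond=None)[0]
j_flux = rho * v                           # diffusion fluxes, sum to ~0
\end{verbatim}

Because the driving forces sum to zero, the augmented system is consistent and the least-squares solution is exact; this mirrors the role of the constraint in selecting the physical solution of the singular Stefan-Maxwell system.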
The expression for $\chi_i = \chi_i \left( p_j \, \forall (j \in S), T \right)$ is given in Chapter 5 of Ref.~\cite{giovangigli99a} and in App.~\ref{app:transport_systems}. Following the structure of the matrix for the thermal conductivity transport system, it can be shown that the sign of the $\chi_i$ is not fixed, but that $\sum_{i\in S} \{ \chi_i \} = 0$ must hold~\cite{ern94a}. Alternatively, the diffusion fluxes can be expressed in terms of multi-component diffusion coefficients $\boldsymbol{j}_i = - \rho_i \sum_{j\in S} \{ D_{ij}(\boldsymbol{d}_j + \chi_j \, \nabla_{\boldsymbol{x}} \ln T) \}, \, (i \in S)$. The diffusion matrix is positive semi-definite, with $D_{ij} \ge 0 \ (i \neq j)$ and $D_{ii}>0$ $(i,j \in S)$, and is the pseudo-inverse of the Stefan-Maxwell matrix appearing in Eq.~(\ref{eq:stefan_maxwell_eqn}). The viscous stress tensor $\underline{\underline{\tau}}$ appearing in Eqs.~(\ref{eq:momentum_balance}) and (\ref{eq:total_energy_balance}) takes on the form: \begin{equation} \underline{\underline{\tau}} = 2 \, \eta \, \underline{\underline{\mathsf{S}}}, \label{eq:viscous_stress_tensor_ns} \end{equation} where $\eta$ is the mixture shear viscosity and $\underline{\underline{\mathsf{S}}} = \frac{1}{2} \left[ \nabla_{\boldsymbol{x}} \boldsymbol{u} + ( \nabla_{\boldsymbol{x}} \boldsymbol{u} )^T - \frac{2}{3} \, ( \nabla_{\boldsymbol{x}} \cdot \boldsymbol{u} ) \, \underline{\underline{I}} \right]$ is the traceless symmetric velocity gradient tensor. Notice that when compared with Eq.~(4.6.43) of Giovangigli~\cite{giovangigli99a}, Eq.~(\ref{eq:viscous_stress_tensor_ns}) lacks a reaction pressure term, since our scaling of Eq.~(\ref{eq:boltzmann_equation_small_parameter}) places us in the Maxwellian reaction regime. Furthermore, a bulk viscosity term is also missing, because it is not needed within the state-to-state description. The expression for $\eta = \eta \left( p_i \, \forall (i \in S), T \right)$ is given in Chapter 5 of Ref.~\cite{giovangigli99a} and in App.~\ref{app:transport_systems}. Following the structure of the matrix for the viscosity transport system, it can be shown that $\eta > 0$~\cite{ern94a}. Finally, the heat flux vector in Eq.~(\ref{eq:total_energy_balance}) takes on the form: \begin{equation} \boldsymbol{q} = - \lambda \, \nabla_{\boldsymbol{x}} T + \sum_{i \in S} \left\lbrace h_i \, \boldsymbol{j}_i \right\rbrace + p \, \sum_{i \in S} \left\lbrace \chi_i \, \boldsymbol{j}_i / \rho_i \right\rbrace \label{eq:heat_flux_ns} \end{equation} The first term on the right hand side is the contribution due to heat conduction $\boldsymbol{q}^\mathrm{cond}$. It is the product of the mixture thermal conductivity $\lambda$ and the temperature gradient. The second term $\boldsymbol{q}^\mathrm{diff}$ accounts for heat transfer by diffusion of enthalpy of each mixture component, i.e. $h_i = ( \frac{5}{2} \mathrm{k_B} T + E_i ) / m_i$. The expression for the thermal conductivity $\lambda = \lambda \left( p_i \, \forall (i \in S), T \right)$ is given in Chapter 5 of Ref.~\cite{giovangigli99a} and in App.~\ref{app:transport_systems}. An alternative formulation for the heat flux is to use the partial thermal conductivity $\widehat{\lambda}$ and the thermal diffusion coefficients $\theta_i, \, (i\in S)$. Both formulations are equivalent, but the one chosen here is advantageous to study the entropy production in Sec.~\ref{sec:thermodynamic_entropy_equation_derivation}.
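For later reference in the flux comparisons of Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}, the three contributions in Eq.~(\ref{eq:heat_flux_ns}) can be evaluated directly once the transport coefficients and diffusion fluxes are known. The Python fragment below is a schematic evaluation only; every input value is a placeholder, not model data:

\begin{verbatim}
import numpy as np

lam  = 0.5                                   # thermal conductivity [W/m/K]
dTdx = -2.0e5                                # temperature gradient [K/m]
p    = 1.3e4                                 # pressure [Pa]
h    = np.array([3.1e7, 2.9e7, 1.5e7])       # enthalpies h_i [J/kg]
jx   = np.array([1.2e-3, -0.4e-3, -0.8e-3])  # diffusion fluxes [kg/(m^2 s)]
chi  = np.array([0.02, 0.01, -0.03])         # thermal diffusion ratios, sum = 0
rho  = np.array([0.05, 0.60, 0.40])          # partial densities [kg/m^3]

q_cond   = -lam * dTdx                       # conduction
q_diff   = np.sum(h * jx)                    # diffusion of enthalpy
q_dufour = p * np.sum(chi * jx / rho)        # Dufour term
q_x = q_cond + q_diff + q_dufour
\end{verbatim}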
Following the structure of the matrix for the thermal conductivity transport system, it can be shown that $\lambda > 0$, provided that some conditions on the collision integral data are met~\cite{ern94a}. Note that within the state-to-state formalism there is no need to consider Eucken's correction to the thermal conductivity~\cite{ferziger72a}, because transfer of internal energy is implicitly taken into account through the chemical production rates $\omega_i, \, i \in S$. The third term formally accounts for heat transfer induced by concentration gradients (Dufour effect). It is the complement to the Soret effect appearing in Eq.~(\ref{eq:stefan_maxwell_eqn}). However, note that it is also being neglected in the Navier-Stokes calculations presented in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}. The necessary routines for the solution of the transport systems have been implemented in the Mutation++~\cite{scoggins20a} thermodynamic and transport library, which is tightly coupled to the Navier-Stokes flow solver used to generate the results of Sec.~\ref{sec:normal_shock_bins_navier_stokes} and \ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}. \subsection{Chemistry source terms} \label{sec:navier_stokes_chemistry_source_terms} The terms on the right hand side of Eq.~(\ref{eq:species_mass_balance}) represent the mass production terms for atomic nitrogen and every $\mathrm{N_2}(k)$ respectively. For the latter, both excitation-deexcitation and dissociation-recombination reactions contribute to the source term: $\omega_k = \omega_k^E + \omega_k^D$. These two contributions are obtained by averaging Eqs.~(\ref{eq:excitation_n2k_operator}) and (\ref{eq:dissociation_n2k_operator}) (evaluated at the local Maxwellians $f^0$) under the constraint of pseudo-species mass conservation. This yields $\omega_k^E = m_\mathrm{N_2} \, \int \mathcal{C}_k^E (f^0) \, \mathrm{d} \boldsymbol{c}_k$ and $\omega_k^D = m_\mathrm{N_2} \, \int \mathcal{C}_k^D (f^0) \, \mathrm{d} \boldsymbol{c}_k$ respectively. Normalized with the respective molecular masses, these terms take on the following form: \begin{equation} \frac{\omega_k^E}{m_\mathrm{N_2}} = \sum_{\substack{l \in \mathcal{K}_\mathrm{N_2} \\ (k \ne l)}} \Bigl\{ \Bigl( - k_{k \rightarrow l}^{E \mathrm{f}} \, n_k + k_{k \rightarrow l}^{E \mathrm{b}} n_l \Bigr) n_\mathrm{N} \Bigr\}, \quad k \in \mathcal{K}_\mathrm{N_2} \label{eq:n2k_excitation_mass_production_rate} \end{equation} for excitation-deexcitation and: \begin{equation} \frac{\omega_k^D}{m_\mathrm{N_2}} = \Bigl( - k_{k}^{D \mathrm{f}} \, n_k + k_{k}^{D \mathrm{b}} \, n_\mathrm{N}^2 \Bigr) n_\mathrm{N}, \qquad k \in \mathcal{K}_\mathrm{N_2} \label{eq:n2k_dissociation_mass_production_rate} \end{equation} for dissociation-recombination. For atomic nitrogen only the dissociation-recombination reactions contribute to the source term. 
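A compact way to see how Eqs.~(\ref{eq:n2k_excitation_mass_production_rate}) and (\ref{eq:n2k_dissociation_mass_production_rate}) are evaluated in practice is the following Python sketch for a toy three-bin system. All rate coefficients, partition functions and number densities below are placeholders, not N3 data; the backward coefficients are filled in from the reversibility relations given below, Eqs.~(\ref{eq:deexcitation_rate_coeff}) and (\ref{eq:recombination_rate_coeff}):

\begin{verbatim}
import numpy as np

m_N  = 2.326e-26                           # [kg]
m_N2 = 2.0 * m_N
n_k  = np.array([1.0e22, 5.0e21, 1.0e21])  # N2(k) densities [1/m^3]
n_N  = 2.0e21                              # N density [1/m^3]
Z    = np.array([1.0e30, 4.0e29, 1.0e29])  # bin partition functions
Z_N  = 5.0e29

# Forward excitation coefficients k_{k->l}^{Ef} for l > k [m^3/s]:
K = np.array([[0.0, 2e-18, 5e-19],
              [0.0, 0.0,   1e-18],
              [0.0, 0.0,   0.0]])
# Fill the de-excitation entries (l -> k) from detailed balance:
K += (K * Z[:, None] / Z[None, :]).T

# Excitation-deexcitation source terms (one entry per bin):
omega_E = m_N2 * n_N * (K.T @ n_k - K.sum(axis=1) * n_k)

# Dissociation-recombination source terms:
kDf = np.array([1e-19, 3e-19, 1e-18])      # [m^3/s]
kDb = kDf * Z / Z_N**2                     # [m^6/s]
omega_D = m_N2 * n_N * (-kDf * n_k + kDb * n_N**2)

# Atomic nitrogen source; total mass is conserved exactly:
omega_N = -2.0 * (m_N / m_N2) * omega_D.sum()
assert abs(omega_E.sum()) <= 1e-12 * np.abs(omega_E).max()
assert abs(omega_D.sum() + omega_N) <= 1e-12 * np.abs(omega_D).max()
\end{verbatim}

The two assertions check the structural properties of the source terms: excitation-deexcitation conserves molecular mass, and dissociation-recombination exchanges mass with the atomic pool only.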
Taking the moments of Eq.~(\ref{eq:d3_n_operator}) in an analogous manner yields $\omega_\mathrm{N} = m_\mathrm{N} \, \int \mathcal{C}_\mathrm{N}(f^0) \, \mathrm{d} \boldsymbol{c}_\mathrm{N}$, and can be simplified to: \begin{equation} \frac{\omega_\mathrm{N}}{m_\mathrm{N}} = - 2 \sum_{k \in \mathcal{K}_\mathrm{N_2}} \Bigl\{ \frac{\omega_k^D}{m_\mathrm{N_2}} \Bigr\} \label{eq:atomic_n_mass_production_rate} \end{equation} In previous work~\cite{torres20a} the coarse-grain reaction cross sections $\sigma_{k \rightarrow l}^{E\mathrm{f}} (g)$ and $\sigma_k^{D\mathrm{f}} (g)$ were fitted to an analytical form consistent with Arrhenius-type expressions for the corresponding rate coefficients $k_{k \rightarrow l}^{E \mathrm{f}} (T)$ and $k_{k}^{D \mathrm{f}} (T)$ appearing in Eqs.~(\ref{eq:n2k_excitation_mass_production_rate})-(\ref{eq:atomic_n_mass_production_rate}). Special care was taken to ensure consistency between the kinetic and hydrodynamic description. This meant that the reversibility relations postulated to exist between \emph{forward} and \emph{backward} cross sections/probabilities as discussed in Sec.~\ref{sec:boltzmann_equation} have their counterparts at the hydrodynamic scale. For further context refer to Sec.~2.4.2 of Giovangigli~\cite{giovangigli99a} and in particular Remark 2.4.1 therein. The final result is that the backward rate coefficient for excitation/deexcitation processes in Eq.~(\ref{eq:n2k_excitation_mass_production_rate}) must be obtained from: \begin{equation} k_{k \rightarrow l}^{E \mathrm{b}} = k_{k \rightarrow l}^{E \mathrm{f}} \, Z_k / Z_l, \qquad (k \ne l \in \mathcal{K}_\mathrm{N_2}) \label{eq:deexcitation_rate_coeff} \end{equation} whereas the recombination rate coefficient appearing in Eqs.~(\ref{eq:n2k_dissociation_mass_production_rate}) and (\ref{eq:atomic_n_mass_production_rate}) is obtained as: \begin{equation} k_{k}^{D \mathrm{b}} = k_{k}^{D \mathrm{f}} \, Z_k / Z_\mathrm{N}^2, \qquad (k \in \mathcal{K}_\mathrm{N_2}). \label{eq:recombination_rate_coeff} \end{equation} Here, the partition function per unit volume of each pseudo-species $i$ has the form: $Z_i (T) = ( 2 \pi \, m_i \mathrm{k_B} T / \mathrm{h_P^2} )^{3/2} \, a_i \exp [ - E_i / ( \mathrm{k_B} T) ]$. \subsection{Entropy equation and sign of the chemical entropy production term} \label{sec:thermodynamic_entropy_equation_derivation} A macroscopic balance equation for the entropy per unit volume based on thermodynamic considerations is derived in Chapter~2.6 of Giovangigli~\cite{giovangigli99a}. In the form applicable to our case it reads: \begin{equation} \partial_t \left( \rho s \right) + \nabla_{\boldsymbol{x}} \cdot \Bigl( \rho \boldsymbol{u} \, s + \boldsymbol{j}^S \Bigr) = \Upsilon, \label{eq:thermodynamic_entropy_equation} \end{equation} where terms on the left hand side represent the (1) local time rate of change of entropy, (2) the advection and (3) diffusion of entropy in physical space. The term $\boldsymbol{j}^S = ( \boldsymbol{q} - \sum_{i \in S} \{ \boldsymbol{j}_i \, g_i\} ) / T$ represents the diffusive flux of entropy for the gas mixture. It contains the product of diffusion fluxes of every mixture component with their respective Gibbs free energy per unit mass: $g_i = ( \mathrm{k_B} T / m_i ) \ln \left( n_i / Z_i \right)$. On the right hand side of Eq.~(\ref{eq:thermodynamic_entropy_equation}) the volumetric entropy production rate can be split up into $\Upsilon = \Upsilon_\mathrm{tran} + \Upsilon_\mathrm{chem}$, i.e.
entropy production due to (a) transport phenomena and (b) chemical reactions~\footnote{Recall that in the coarse-grained approach inelastic transitions between internal energy states of a molecule are also treated as chemical reactions.}. General expressions for both terms have been derived by Giovangigli~\cite{giovangigli99a}, and here we recall only the terms relevant for our fluid model. The first production term can be written as: \begin{equation} \begin{split} & \Upsilon_\mathrm{tran} = \frac{\lambda}{T^2} \, \nabla_{\boldsymbol{x}} T \cdot \nabla_{\boldsymbol{x}} T + \frac{2 \, \eta}{T} \, \underline{\underline{\mathsf{S}}} : \underline{\underline{\mathsf{S}}} \\ & + \frac{p}{T} \sum_{i,j \in S} D_{ij} \, (\boldsymbol{d}_i + \chi_i \nabla_{\boldsymbol{x}} \ln T ) \cdot (\boldsymbol{d}_j + \chi_j \nabla_{\boldsymbol{x}} \ln T ). \label{eq:chemical_entropy_production_transport} \end{split} \end{equation} Given the structure of the first two terms on the right hand side of Eq.~(\ref{eq:chemical_entropy_production_transport}) and the fact that $\eta, \lambda > 0$, it is easily seen that they must always be non-negative. The third term contains as factors the components $D_{ij}$ of the multi-component diffusion matrix, whose positive semi-definiteness guarantees that the associated entropy production term always remains non-negative. Thus, $\Upsilon_\mathrm{tran} \ge 0$ must hold for any physically realizable flow. Now, for the particular set of reactions given by Eqs.~(\ref{eq:n3_excitation}) and (\ref{eq:n3_dissociation}), it is worthwhile to have a closer look at the entropy production due to chemical reactions: $\Upsilon_\mathrm{chem} = - ( \sum_{i \in S} \{ g_i \, \omega_i \} ) / T$. It is a function of the Gibbs free energies per unit mass and the chemical source terms appearing on the right hand side of Eq.~(\ref{eq:species_mass_balance}). Following the general procedure outlined by Giovangigli~\cite{giovangigli99a}, it is possible to show that $\Upsilon_\mathrm{chem} \ge 0$ for all cases, in accordance with the second law of thermodynamics. The key to demonstrating this lies in re-writing the chemical production terms in the \emph{symmetric} form (Sec.~4.6.6 of Giovangigli~\cite{giovangigli99a}), where the rate coefficients for the excitation-deexcitation and dissociation-recombination reaction become $k_{E(k \rightarrow l)}^{s} = [ k_{k \rightarrow l}^{E \mathrm{f}} \, k_{k \rightarrow l}^{E \mathrm{b}} \, Z_k \, Z_l \, Z_\mathrm{N}^2 ]^{1/2}$ and $k_{D(k)}^{s} = [ k_{k}^{D \mathrm{f}} \, k_{k}^{D \mathrm{b}} \, Z_k \, Z_\mathrm{N}^4 ]^{1/2}$ respectively. Consistency between these production rates in symmetric form and the original notation of Eqs.~(\ref{eq:n2k_excitation_mass_production_rate})-(\ref{eq:atomic_n_mass_production_rate}) is contingent upon the elementary reactions expressed by Eqs.~(\ref{eq:n3_excitation}) and (\ref{eq:n3_dissociation}) satisfying detailed balance. This, in turn, implies that the backward rate coefficients for excitation-deexcitation and dissociation-recombination must be computed according to Eqs.~(\ref{eq:deexcitation_rate_coeff}) and (\ref{eq:recombination_rate_coeff}) respectively.
After some algebraic manipulation, one arrives at the final form: \begin{equation} \begin{split} \frac{\Upsilon_\mathrm{chem}}{\mathrm{k_B}} & = \sum_{\substack{k,l \in \mathcal{K}_\mathrm{N_2} \\ (l > k)}} \biggl\{ k_{E(k \rightarrow l)}^s \ln \biggl( \frac{A}{B} \biggr) \left( A - B \right) \biggr\} \\ & + \sum_{k \in \mathcal{K}_\mathrm{N_2}} \biggl\{ k_{D(k)}^{s} \ln \biggl( \frac{A}{C} \biggr) \left( A - C \right) \biggr\} \label{eq:chemical_entropy_production_gibbs_final} \end{split} \end{equation} for the entropy production due to chemical reactions. Here, we have defined the relations $\ln \left( A \right) = \left( g_k \, m_\mathrm{N_2} + g_\mathrm{N} \, m_\mathrm{N} \right) / \mathrm{k_B} T$, $\ln \left( B \right) = \left( g_l \, m_\mathrm{N_2} + g_\mathrm{N} \, m_\mathrm{N} \right) / \mathrm{k_B} T$ and $\ln \left( C \right) = \left( 3 \, g_\mathrm{N} \, m_\mathrm{N} \right) / \mathrm{k_B} T$. Since $A$, $B$ and $C$ are positive by construction and the logarithm is a monotonically increasing function, $\ln ( A/B )$ and $( A - B )$ always carry the same sign (and likewise for $\ln ( A/C )$ and $( A - C )$), so all the elements of the sums in Eq.~(\ref{eq:chemical_entropy_production_gibbs_final}) must be non-negative. Since the rate coefficients themselves are always non-negative, this means that $\Upsilon_\mathrm{chem} \ge 0$ in all instances. Satisfying this condition for all terms contributing to $\Upsilon$ in Eq.~(\ref{eq:thermodynamic_entropy_equation}) is crucial for constructing a fluid model fully consistent with the second law of thermodynamics. \section{Internal energy excitation and dissociation across normal shock wave} \label{sec:normal_shock_bins} In this section we present simulation results for a steady, normal shock wave. We apply three distinct numerical approaches and compare them in terms of their degree of physical fidelity. In order to formulate a discretized version of the macroscopic balance equations amenable to numerical solution, we re-write Eqs.~(\ref{eq:species_mass_balance})-(\ref{eq:total_energy_balance}) for the unsteady, one-dimensional case in the form: \begin{equation} \frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}}{\partial x} - \frac{\partial \mathbf{F}^\mathrm{d}}{\partial x} = \mathbf{S}, \label{eq:navier_stokes_conservative} \end{equation} where $\mathbf{U} = \left( \rho_i \; (i \in S), \, \rho u_x, \, \rho E \right)^T$ is the vector of conserved variables, $\mathbf{F} = \left( \rho_i u_x \; (i \in S), \, \rho u_x^2 + p, \, \rho u_x \, ( E + p / \rho) \right)^T$ is the inviscid flux vector and $\mathbf{F}^\mathrm{d} = \left( j_{x, i} \; (i \in S), \, -\tau_{xx}, \, -\tau_{xx} u_x + q_x \right)^T$ is the vector of diffusive fluxes. On the right hand side of Eq.~(\ref{eq:navier_stokes_conservative}), $\mathbf{S} = \left( \omega_i \, (i \in S), \, 0, \, 0 \right)^T$ represents the source term vector. Further manipulation of Eq.~(\ref{eq:navier_stokes_conservative}) yields the appropriate discretized equations solved numerically in Sec.~\ref{sec:normal_shock_bins_inviscid}, \ref{sec:normal_shock_bins_navier_stokes} and \ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}. In Sec.~\ref{sec:normal_shock_bins_inviscid} we first obtain steady-state solutions to Eq.~(\ref{eq:navier_stokes_conservative}) in the inviscid limit using the master equation approach. When coupled with the Rankine-Hugoniot jump relations for a chemically frozen free-stream, this approach is fully equivalent to solving the steady-state Euler equations across the normal shock. The resulting flow fields are influenced primarily by the detailed chemistry terms on the right hand side.
Out of all our calculations, these are the cheapest from a computational standpoint. Thus, they can be easily repeated for a range of bin resolutions and allow us to study the convergence of the coarse-grain chemistry model toward the full rovibrational state-to-state solution. The inviscid post-shock profiles are obtained by marching forward in space from the initial discontinuity, so the calculations can be carried out without a priori knowledge of the extent of the post-shock relaxation region. However, this information becomes crucial to set up the computational domains for the Finite Volume (FV) calculations described in Sec.~\ref{sec:normal_shock_bins_navier_stokes}. We first obtain FV solutions to the Euler equations, including the chemical source terms. We verify that the FV Euler solution agrees with those of Sec.~\ref{sec:normal_shock_bins_inviscid} and make sure that we limit numerical dissipation to the minimum necessary to capture the shock within a few cells. This makes us confident that any additional diffusive phenomena observed in the full Navier-Stokes flow fields are entirely caused by the viscous flux terms in Eq.~(\ref{eq:navier_stokes_conservative}). Finally, in Sec.~\ref{sec:normal_shock_bins_dsmc} we describe the process of obtaining the normal shock flow fields using DSMC and then compare them to the equivalent Navier-Stokes FV flow fields in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}. \subsection{Hydrodynamic inviscid solution based on Finite Difference ODE method} \label{sec:normal_shock_bins_inviscid} To obtain a first estimate of the thermo-chemical non-equilibrium region, we simulate the normal shock following a steady-state, one-dimensional inviscid approach. When such conditions are assumed, the time derivatives $\partial \mathbf{U} / \partial t$ and the diffusive transport fluxes $\mathbf{F}^\mathrm{d}$ in Eq.~(\ref{eq:navier_stokes_conservative}) all vanish. This makes it possible to re-cast the original set of equations into an ordinary differential equation (ODE) system: \begin{equation} \frac{\mathrm{d} \mathbf{P}}{\mathrm{d} x} = \mathbf{Q} \left( \mathbf{P} \right), \label{eq:shocking_ode_system} \end{equation} where the solution vector is now given by $\mathbf{P} = \left( y_i \, (i \in S), u, T \right)^T$ and the $y_i$ are the mass fractions of atomic nitrogen plus each internal energy bin of $\mathrm{N_2}$. The right hand side of Eq.~(\ref{eq:shocking_ode_system}) is given by $\mathbf{Q} \left( \mathbf{P} \right) = ( \partial \mathbf{F} / \partial \mathbf{P} )^{-1} \mathbf{S}$. The system can be solved as an initial value problem~\cite{gear71a} marching along the $x$-axis under the condition that a suitable initial state $\mathbf{P} (x = 0)$ is provided. The solver used is equipped with the adaptive step-size techniques implemented in the LSODE package~\cite{radhakrishnan93a} and the code used in this study has been applied to similar problems in the past~\cite{munafo14c, munafo14d, panesi14a}. Two different supersonic free-stream conditions are considered. For the \emph{high-speed} case, we impose a free-stream velocity of $u_1 = 10 \, \mathrm{km \cdot s^{-1}}$, while for the \emph{low-speed} case we use $u_1 = 7 \, \mathrm{km \cdot s^{-1}}$. All other parameters, such as free-stream temperature, pressure and composition, are the same for both cases. The higher-speed conditions are listed in Table~\ref{tab:normal_shock_bins_10kmsec_bc}, where they are labeled as (1) pre-shock.
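Structurally, Eq.~(\ref{eq:shocking_ode_system}) is a stiff initial value problem in $x$. As a minimal illustration of the marching set-up, the fragment below uses SciPy's LSODA integrator as a stand-in for the LSODE package used in this work; the right hand side is a placeholder two-bin relaxation model, not the actual $\mathbf{Q}(\mathbf{P})$ of the N3 system:

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, P):
    # Placeholder two-bin exchange model standing in for Q(P).
    y0, y1 = P
    k01, k10 = 50.0, 10.0            # toy exchange rates [1/m]
    return [-k01 * y0 + k10 * y1,
             k01 * y0 - k10 * y1]

P0 = [1.0, 0.0]                      # frozen post-shock initial state
sol = solve_ivp(rhs, (0.0, 0.5), P0, method="LSODA", dense_output=True)
y_relaxed = sol.y[:, -1]             # bin populations after relaxation
\end{verbatim}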
In the ODE approach the shock is not captured by the numerical method. It is instead replaced by a sudden jump in flow conditions at $x = 0$, which only affects the translational mode. Therefore, the analytical Rankine-Hugoniot jump relations with specific heat ratio $\gamma = 5/3$ are used to predict the non-equilibrium post-shock state (state (1a) in Table~\ref{tab:normal_shock_bins_10kmsec_bc}). While the kinetic temperature reaches $T = 62550 \, \mathrm{K}$ behind the discontinuity, the internal temperature and composition remain \emph{frozen} at the free-stream values. Thus, the initial bin mass fractions $y_k, (k \in \mathcal{K}_\mathrm{N_2})$ in Eq.~(\ref{eq:shocking_ode_system}) are made to follow a Boltzmann distribution at $T_\mathrm{int} = 300 \, \mathrm{K}$. The ODE algorithm then marches along $x$ starting from state (1a). Notice that the free stream contains a non-zero amount of atomic nitrogen, even though the gas in equilibrium at $300 \, \mathrm{K}$ should only consist of $\mathrm{N_2}$-molecules. We add a small amount of N to the free-stream gas to trigger internal energy exchange and dissociation processes, since only reactions induced by N-$\mathrm{N_2}$ collisions are taken into account by the chemical source terms of Eqs.~(\ref{eq:n2k_excitation_mass_production_rate})-(\ref{eq:atomic_n_mass_production_rate}). The pre- and post-shock conditions for the low-speed case are listed in Table~\ref{tab:normal_shock_bins_7kmsec_bc}. Due to the lower post-shock temperature, the gas does not dissociate to the same degree as at the high-speed conditions and about 1/3 of the post-shock gas remains in the form of molecular nitrogen. We carry out four separate simulations at the high- and low-speed conditions respectively. The first simulations provide reference solutions with the original Ames database. These results are labeled ``full'' in Tables~\ref{tab:normal_shock_bins_10kmsec_bc} and \ref{tab:normal_shock_bins_7kmsec_bc} and in Figs.~\ref{fig:shocking_F90_temperatures_10kmsec_linear} and \ref{fig:shocking_F90_temperatures_7kmsec_linear} respectively. We then compare the reference curves with calculations in which the full database has been replaced with reduced-size equivalents based on the URVC bin model~\cite{magin12a, torres20a}. In Tables~\ref{tab:normal_shock_bins_10kmsec_bc} and \ref{tab:normal_shock_bins_7kmsec_bc}, under label (2) we list the post-shock equilibrium state reached by the simulations when using 837, 100 and 10 bins respectively and compare them to the ones obtained with the full database and its 9390 energy levels. As the number of bins is reduced from 837 down to 10, the post-shock equilibrium conditions begin to diverge from the ones predicted by the full model. However, even for the 10-bin system, the deviations in the post-shock equilibrium state amount to only a few percent. We obtain such close agreement with the full model because we are using energy bins of variable, instead of constant, width. In previous work~\cite{torres18b} we were able to show that switching to variably-sized bins allows us to closely match the thermodynamic properties of the full model with a much smaller number of URVC bins. In particular, using more bins of smaller width to group together the lowest-energy rovibrational levels is advantageous to accurately capture the internal energy content of the cold free stream.
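As an independent cross-check of the frozen post-shock state (1a) listed in Table~\ref{tab:normal_shock_bins_10kmsec_bc}, the Rankine-Hugoniot jump with $\gamma = 5/3$ can be evaluated in a few lines of Python. The sketch below reproduces the high-speed case; standard atomic masses are assumed, while all other inputs come from the table:

\begin{verbatim}
import numpy as np

kB, NA = 1.380649e-23, 6.02214076e23
gamma = 5.0 / 3.0
p1, T1, u1 = 13.3, 300.0, 10000.0          # pre-shock state (1)
x_N = 0.02813
# Mean particle mass of the frozen N/N2 mixture [kg]:
m_mix = (x_N * 14.007 + (1.0 - x_N) * 28.014) * 1e-3 / NA
rho1 = p1 * m_mix / (kB * T1)              # ~0.1473e-3 kg/m^3

a1 = np.sqrt(gamma * p1 / rho1)            # frozen speed of sound
Ma = u1 / a1                               # ~25.8
r = (gamma + 1.0) * Ma**2 / ((gamma - 1.0) * Ma**2 + 2.0)
rho2, u2 = rho1 * r, u1 / r                # ~0.586e-3 kg/m^3, ~2511 m/s
p2 = p1 * (2.0 * gamma * Ma**2 - (gamma - 1.0)) / (gamma + 1.0)
T2 = p2 * m_mix / (kB * rho2)              # ~62.5e3 K, cf. state (1a)
\end{verbatim}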
\begin{table} \centering \caption{Normal shock wave at $u_1 = 10 \, \mathrm{km \cdot s^{-1}}$: Upstream and downstream boundary conditions as a function of bin number} \label{tab:normal_shock_bins_10kmsec_bc} \begin{tabular}{l c c c c c c} & $p$ & $T$ & $T_\mathrm{int}$ & $\rho \times 10^3$ & $u$ & $x_\mathrm{N}$ \\ & [Pa] & [K] & [K] & [$\mathrm{kg / m^{3}}$] & [$\mathrm{m / s}$] & \\ \hline (1) pre-shock: & 13.3 & 300 & 300 & 0.1473 & 10000 & 0.02813 \\ \hline \multicolumn{7}{l}{(1a) post-shock frozen:} \\ & 11040 & 62550 & 300 & 0.5864 & 2511 & 0.02813 \\ \hline \multicolumn{7}{l}{(2) post-shock equilibrium:} \\ full & 13665 & 11422 & 11422 & 2.0161 & 730.5 & 0.9998 \\ 837 bins & 13665 & 11422 & 11422 & 2.0161 & 730.5 & 0.9998 \\ 100 bins & 13665 & 11422 & 11422 & 2.0161 & 730.5 & 0.9998 \\ 10 bins & 13658 & 11493 & 11493 & 2.0024 & 735.5 & 0.9998 \end{tabular} \end{table} \begin{table} \centering \caption{Normal shock wave at $u_1 = 7 \, \mathrm{km \cdot s^{-1}}$: Upstream and downstream boundary conditions as a function of bin number} \label{tab:normal_shock_bins_7kmsec_bc} \begin{tabular}{l c c c c c c} & $p$ & $T$ & $T_\mathrm{int}$ & $\rho \times 10^3$ & $u$ & $x_\mathrm{N}$ \\ & [Pa] & [K] & [K] & [$\mathrm{kg / m^{3}}$] & [$\mathrm{m / s}$] & \\ \hline (1) pre-shock: & 13.3 & 300 & 300 & 0.1473 & 7000 & 0.02813 \\ \hline \multicolumn{7}{l}{(1a) post-shock frozen:} \\ & 5409.1 & 30784 & 300 & 0.5837 & 1766 & 0.02813 \\ \hline \multicolumn{7}{l}{(2) post-shock equilibrium:} \\ full & 6802.3 & 6158.1 & 6158.1 & 2.4858 & 414.7 & 0.6642 \\ 837 bins & 6802.3 & 6158.1 & 6158.1 & 2.4858 & 414.7 & 0.6642 \\ 100 bins & 6802.3 & 6157.9 & 6157.9 & 2.4859 & 414.7 & 0.6642 \\ 10 bins & 6802.8 & 6141.2 & 6141.2 & 2.4886 & 414.3 & 0.6665 \end{tabular} \end{table} Mass density and temperature profiles for the high- and low-speed cases are plotted in Figs.~\ref{fig:shocking_F90_10kmsec_linear} and \ref{fig:shocking_F90_7kmsec_linear} respectively. The initial discontinuity, where the gas suddenly transitions from the free-stream conditions to the frozen post-shock conditions, is visible at $x=0$. Recall that the ODE system is only solved starting from the frozen post-shock conditions, i.e. state (1a) in Tables~\ref{tab:normal_shock_bins_10kmsec_bc} and \ref{tab:normal_shock_bins_7kmsec_bc}, and the method does not capture the shock front itself. Close-ups immediately downstream of the discontinuity are shown as insets in all four sub-figures. All plots follow the same labeling conventions. The reference solution is shown as dashed black lines, while results obtained with the URVC binning approach are plotted as continuous lines: 837 bins (black triangle on black line), 100 bins (blue circle on blue line) and 10 bins (red line). In Fig.~\ref{fig:shocking_F90_densities_10kmsec_linear} we plot profiles of the mixture density $\rho$ (continuous lines) and of the partial density of molecular nitrogen $\rho_\mathrm{N_2}$ (dotted lines) for the high-speed case. The behavior in all four cases is very similar and the main differences are confined to the region immediately behind the shock front. Each one of the four $\rho_\mathrm{N_2}$-profiles reaches its maximum several millimeters downstream of the discontinuity, before dissociation begins to consume the remaining molecular nitrogen. The reference solution for the full N3 system exhibits the quickest response to the shock, whereas the coarse-grained systems lag behind. The response becomes slower with decreasing number of bins.
In Fig.~\ref{fig:shocking_F90_temperatures_10kmsec_linear} we examine the corresponding temperature profiles. The kinetic temperature $T$ quickly decreases from its initial value of $62550 \, \mathrm{K}$ to about $30000 \, \mathrm{K}$ in the first $2-3 \, \mathrm{mm}$ behind the discontinuity. Simultaneously, the internal temperature rises from its free-stream value of $300 \, \mathrm{K}$ to a maximum of about $25000 \, \mathrm{K}$ in the same distance, before slowly decreasing again. Both temperatures then slowly approach each other as the gas continues to cool due to the effect of $\mathrm{N_2}$-dissociation. The relaxation of translational and internal energy proceeds quickest in the reference solution (dashed lines) and becomes progressively slower for the coarse-grain cases with decreasing number of bins. The internal temperatures reported in Figs.~\ref{fig:shocking_F90_temperatures_10kmsec_linear} and \ref{fig:shocking_F90_temperatures_7kmsec_linear} are the result of post-processing the internal state populations behind the shock. For the full reference solution, $T_\mathrm{int}$ is based on the rovibrational level populations (refer to Eqs.~(23) and (24) in Panesi et al.~\cite{panesi13a}). For the coarse-grained systems $T_\mathrm{int}$ is based on the bin populations and obtained in an analogous manner, following the procedure of App.~C of Ref.~\cite{torres18b}. Thanks to the variably-spaced bin formulation, the macroscopic post-shock equilibrium state (i.e. temperature, composition) reached by all simulations closely matches the reference solution. As was shown by Munaf\`o et al.~\cite{munafo14c, munafo14d} for the same flow conditions, the internal energy level populations exhibit strong departure from Boltzmann distributions, and internal energy relaxation and dissociation effectively proceed at a common time scale. \begin{figure} \begin{minipage}{0.8\columnwidth} \subfloat[Mixture density and partial density of $\mathrm{N_2}$ $\mathrm{[kg/m^3] \times 10^3}$]{\includegraphics[width=\columnwidth]{shocking_F90_densities_10kmsec_linear.eps}\label{fig:shocking_F90_densities_10kmsec_linear}} \end{minipage} \begin{minipage}{0.8\columnwidth} \subfloat[Mixture kinetic temperature $T$ and internal temperature of $\mathrm{N_2}$ $T_\mathrm{int} \mathrm{[K]}$ behind the shock]{\includegraphics[width=\columnwidth]{shocking_F90_temperatures_10kmsec_linear.eps}\label{fig:shocking_F90_temperatures_10kmsec_linear}} \end{minipage} \caption{Inviscid shock at $u_1 = 10 \, \mathrm{km \cdot s^{-1}}$.} \label{fig:shocking_F90_10kmsec_linear} \end{figure} In Fig.~\ref{fig:shocking_F90_densities_7kmsec_linear} we now show the density profiles for the low-speed case. Again, all four systems follow the same general behavior. Whereas in the high-speed case practically all molecular nitrogen eventually dissociated behind of the shock front, at these lower-speed conditions the $\mathrm{N_2}$-profiles remain fairly flat further downstream. However, the trend is now reversed, in the sense that the 10-bin system is the quickest to react to the shock, whereas the response becomes slower as the number of bins is increased all the way up to the full system. Figure~\ref{fig:shocking_F90_temperatures_7kmsec_linear} shows the corresponding temperatures for the low-speed case. With a length of approximately $5 \, \mathrm{m}$, the post-shock non-equilibrium region is now almost two orders of magnitude greater than in Fig.~\ref{fig:shocking_F90_temperatures_10kmsec_linear}.
A closer look suggests that at these lower-speed conditions internal energy relaxation and cooling due to $\mathrm{N_2}$-dissociation proceed at distinct time scales. For the full reference solution, $T$ and $T_\mathrm{int}$ reach a common value of $\approx 15000 \, \mathrm{K}$ about $1 \, \mathrm{cm}$ downstream of the discontinuity, while the N mole fraction at this point has barely surpassed 20\% (not shown). Beyond $x = 1.5 \, \mathrm{cm}$ the remainder of the dissociation then effectively proceeds at a common temperature. With regard to the coarse-grained model solutions, another difference relative to the high-speed conditions is apparent. Whereas in Fig.~\ref{fig:shocking_F90_temperatures_10kmsec_linear} the reference solution showed the quickest initial relaxation, in Fig.~\ref{fig:shocking_F90_temperatures_7kmsec_linear} the full system is now the slowest of all four cases. In fact, the ``convergence'' of the coarse-grained profiles with increasing bin number toward the reference solution occurs in the opposite sense relative to the high-speed case. \begin{figure} \begin{minipage}{0.8\columnwidth} \subfloat[Mixture density and partial density of $\mathrm{N_2}$ $\mathrm{[kg/m^3] \times 10^3}$]{\includegraphics[width=\columnwidth]{shocking_F90_densities_7kmsec_linear.eps}\label{fig:shocking_F90_densities_7kmsec_linear}} \end{minipage} \begin{minipage}{0.8\columnwidth} \subfloat[Mixture kinetic temperature $T$ and internal temperature of $\mathrm{N_2}$ $T_\mathrm{int} \mathrm{[K]}$ behind the shock]{\includegraphics[width=\columnwidth]{shocking_F90_temperatures_7kmsec_linear.eps}\label{fig:shocking_F90_temperatures_7kmsec_linear}} \end{minipage} \caption{Inviscid shock at $u_1 = 7 \, \mathrm{km \cdot s^{-1}}$.} \label{fig:shocking_F90_7kmsec_linear} \end{figure} By studying these two flow conditions with the inviscid ODE method we found that the relaxation region for the high-speed case extends for about $10 \, \mathrm{cm}$ and for the low-speed case roughly $5 \, \mathrm{m}$ from the discontinuity. This helps us size the domain and adjust the computational parameters for the Navier-Stokes and DSMC calculations discussed in Secs.~\ref{sec:normal_shock_bins_navier_stokes} and \ref{sec:normal_shock_bins_dsmc}. Furthermore, we see that the coarse-grained model has an influence on the evolution of the gas state in the post-shock region and these profiles diverge to some degree from the reference solution. As would be expected, the closest agreement with the full system is observed for the cases with the largest number of bins (837), while the biggest differences are observed for the 10-bin cases. However, these deviations become less severe further downstream of the initial discontinuity. \subsection{Normal shock solution Euler vs. Navier-Stokes using Finite Volume method} \label{sec:normal_shock_bins_navier_stokes} Based on the findings of Sec.~\ref{sec:normal_shock_bins_inviscid}, we simulate the normal shock by solving the Euler and Navier-Stokes equations on a one-dimensional domain with the finite volume (FV) method~\cite{hirsch88a}. Equation~(\ref{eq:navier_stokes_conservative}) is discretized in space and advanced in time using the implicit Backward-Euler method~\cite{gear71a}. The numerical inviscid fluxes at cell interfaces are computed using Roe's approximate Riemann solver~\cite{roe81a}. The particular form of Roe's dissipation matrix for the set of variables in Eq.~(\ref{eq:navier_stokes_conservative}) is discussed in detail elsewhere~\cite{munafo14d}.
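To fix ideas, the conservative FV update underlying these calculations can be illustrated with a deliberately simplified, self-contained Python sketch: a single-species $\gamma$-law gas, an explicit time step instead of Backward-Euler, and a Rusanov flux standing in for Roe's solver. It is meant only to show the structure of the scheme, not to reproduce the multi-species solver of this work:

\begin{verbatim}
import numpy as np

gamma = 5.0 / 3.0

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, u * (E + p)]), u, p

def rusanov(UL, UR):
    # Local Lax-Friedrichs flux: a simple stand-in for Roe's solver.
    FL, uL, pL = flux(UL)
    FR, uR, pR = flux(UR)
    smax = max(abs(uL) + np.sqrt(gamma * pL / UL[0]),
               abs(uR) + np.sqrt(gamma * pR / UR[0]))
    return 0.5 * (FL + FR) - 0.5 * smax * (UR - UL)

# Sod-type initial discontinuity at the domain center:
nx = 400
U = np.zeros((3, nx))
left = np.arange(nx) < nx // 2
rho0 = np.where(left, 1.0, 0.125)
p0 = np.where(left, 1.0, 0.1)
U[0], U[2] = rho0, p0 / (gamma - 1.0)       # zero initial velocity

dx, dt = 1.0 / nx, 5e-4                      # CFL ~ 0.5 here
for _ in range(300):
    F = np.array([rusanov(U[:, i], U[:, i + 1])
                  for i in range(nx - 1)]).T
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
\end{verbatim}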
The purpose of this study is two-fold. First we compare the FV Euler result to the inviscid ODE results of Sec.~\ref{sec:normal_shock_bins_inviscid} to confirm that, when solving them on a sufficiently refined FV grid, we obtain the same answer as in Fig.~\ref{fig:shocking_F90_temperatures_10kmsec_linear}. Then we show how the shock structure changes once the viscous and diffusive terms of the Navier-Stokes equations are taken into account. For the sake of conciseness, in this section we only compare results for the high-speed case using the 10-bin coarse-grained system. However, the findings also apply to the low-speed flow condition and other bin numbers studied. Additional FV Navier-Stokes results will then be shown in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}, where we compare to equivalent DSMC simulations. All viscous shock solutions are obtained in a two-step approach. First, an Euler FV calculation is performed until reaching the inviscid steady-state solution. The simulation is carried out in the shock's frame of reference, where its steady-state structure develops over time around an initial discontinuity in flow parameters. The portion of the flow field left of the discontinuity is initialized to the pre-shock equilibrium state, whereas to its right the post-shock equilibrium state is imposed (recall Tables~\ref{tab:normal_shock_bins_10kmsec_bc} and \ref{tab:normal_shock_bins_7kmsec_bc} for the equilibrium conditions imposed in the high- and low-speed cases respectively). The final steady-state Euler solution is then re-used as initial condition for the subsequent Navier-Stokes simulation on the same grid. For both flow conditions a one-dimensional FV mesh with variable spacing is used. The region near the initial discontinuity is highly resolved, with a grid spacing of $\Delta x = 2 \times 10^{-5} \, \mathrm{m}$. Such severe refinement was performed only to minimize the effect of numerical diffusion near the shock front and lies well below the mean free path of $\lambda \approx 10^{-3} \, \mathrm{m}$ estimated at the same location. From this central region the grid is gradually coarsened in both the upstream and downstream directions to reduce computational cost, while ensuring numerical stability in the FV scheme. Figures~\ref{fig:fv_comparison_density_10v} and \ref{fig:fv_comparison_temperatures_10v} show a comparison between the FV Euler (x-symbols on blue lines), Navier-Stokes (black lines) and inviscid ODE flow field of Sec.~\ref{sec:normal_shock_bins_inviscid} (red lines). All profiles shown are for the high-speed condition using the 10-bin coarse-grained system. Density profiles are shown first in Fig.~\ref{fig:fv_comparison_density_10v}. The origin of the $x$-axis lies at the location of the initial discontinuity for the Euler cases. Due to numerical diffusion in the FV approach this discontinuity is captured over an extent of 2-3 cells (see close-up in Fig.~\ref{fig:fv_comparison_density_10v}(b)). However, the grid has been carefully refined in the vicinity to ensure that this adverse numerical effect remains minimal. This is confirmed by the excellent agreement of the FV-Euler and inviscid ODE density profiles over the remainder of Fig.~\ref{fig:fv_comparison_density_10v}(a): past the discontinuity both the FV Euler and ODE solution curves lie on top of each other. 
Once the diffusive terms in the Navier-Stokes equations are taken into account, the discontinuity at $x=0$ disappears and is replaced by a smooth transition from pre-shock to post-shock density. Differences between the inviscid and viscous solutions are appreciable within about $\pm 0.01 \, \mathrm{m}$ of the initial discontinuity. The corresponding temperature profiles are shown in Fig.~\ref{fig:fv_comparison_temperatures_10v}. Excellent agreement between the FV-Euler and inviscid ODE solutions is observed to within 2 cells of the discontinuity (see close-up in Fig.~\ref{fig:fv_comparison_temperatures_10v}(b)). The jump in kinetic temperature is captured well by the FV method, as is its peak value for the inviscid case. Again, viscous effects act to smooth out these flow features and diffuse the shock front upstream. In the Navier-Stokes profile the gas temperature begins to depart from its pre-shock value about $0.003 \, \mathrm{m}$ ahead of the initial discontinuity and reaches a lower maximum ($T_\mathrm{max} \approx 51800 \, \mathrm{K}$ for Navier-Stokes vs. $62550 \, \mathrm{K}$ for Euler). The internal temperature profile is also affected by the inclusion of diffusive transport. The peak in the viscous profile ($T_\mathrm{int, max} \approx 21600 \, \mathrm{K}$) lies slightly upstream compared to the maximum of $24200 \, \mathrm{K}$ for the inviscid case. Consistent with the density profiles, differences in the viscous and inviscid temperature fields are only significant up to about $\pm 0.01 \, \mathrm{m}$ away from the initial discontinuity. This comparison only covered flow quantities that exhibit sharp discontinuities in their inviscid FV profiles. It showed that the Euler FV solutions are consistent with the inviscid ODE approach of Sec.~\ref{sec:normal_shock_bins_inviscid} and not polluted by numerical diffusion. This guarantees that any diffusive effects observed in the Navier-Stokes profiles reported in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc} are physical in nature, i.e. exclusively due to the actual molecular diffusion terms in the Navier-Stokes equations. \begin{figure} \raggedright \includegraphics[width=\columnwidth]{fv_comparison_density_10v.eps} \caption{Gas density $\rho \times 10^3$ $[\mathrm{kg/m^3}]$ for shock at $u_1 = 10 \, \mathrm{km \cdot s^{-1}}$ with 10 bins. FVM solutions for Euler vs. Navier-Stokes and inviscid ODE approach.} \label{fig:fv_comparison_density_10v} \end{figure} \begin{figure} \raggedright \includegraphics[width=\columnwidth]{fv_comparison_temperatures_10v.eps} \caption{Kinetic and internal temperatures [K] for shock at $u_1 = 10 \, \mathrm{km \cdot s^{-1}}$ with 10 bins. FVM solutions for Euler vs. Navier-Stokes and inviscid ODE approach.} \label{fig:fv_comparison_temperatures_10v} \end{figure} \subsection{Normal shock solution with DSMC} \label{sec:normal_shock_bins_dsmc} In this section we describe how the normal shock for both the high- and low-speed conditions was simulated using the DSMC method~\cite{bird94a}. The macroscopic flow profiles with DSMC are then compared with corresponding Navier-Stokes solutions in Sec.~\ref{sec:normal_shock_bins_navier_stokes_vs_dsmc}. Since DSMC can be used to indirectly solve the Boltzmann equation~\cite{wagner92a}, it allows us to resolve the shock structure with the highest level of detail. The VKI DSMC code used for this purpose is able to simulate one-dimensional steady and unsteady flows.
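For readers unfamiliar with the method, the heart of any DSMC cycle is the stochastic, majorant-based collision step. The following generic sketch (spatially homogeneous, single hard-sphere species, all parameters placeholders, and entirely unrelated to the VKI implementation) illustrates the no-time-counter pair selection of Bird~\cite{bird94a} and isotropic hard-sphere scattering:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, W = 2000, 1.0e12            # simulators, particle weight
V, dt = 1.0e-9, 1.0e-9         # cell volume [m^3], time step [s]
d = 4.17e-10                   # hard-sphere diameter [m]
sigma = np.pi * d**2

# Bimodal initial velocities that relax toward a Maxwellian:
v = rng.normal(0.0, 300.0, (N, 3))
v[: N // 2, 0] += 2000.0
gsig_max = sigma * 5000.0      # running majorant of (sigma * g)

for step in range(200):
    # No-time-counter: number of candidate pairs this step
    n_pairs = int(0.5 * N * (N - 1) * W * gsig_max * dt / V)
    i = rng.integers(0, N, n_pairs)
    j = rng.integers(0, N, n_pairs)
    g = np.linalg.norm(v[i] - v[j], axis=1)
    gsig_max = max(gsig_max, sigma * g.max())
    accept = rng.random(n_pairs) < sigma * g / gsig_max
    for a, b, ga in zip(i[accept], j[accept], g[accept]):
        # Isotropic post-collision relative velocity (hard sphere):
        ct = 2.0 * rng.random() - 1.0
        st = np.sqrt(1.0 - ct * ct)
        ph = 2.0 * np.pi * rng.random()
        gn = ga * np.array([ct, st * np.cos(ph), st * np.sin(ph)])
        vcm = 0.5 * (v[a] + v[b])
        v[a], v[b] = vcm + 0.5 * gn, vcm - 0.5 * gn
\end{verbatim}

Momentum and energy are conserved exactly in every accepted collision; the particle transport, boundary and chemistry machinery of a full 1-D solver are beyond this illustration.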
Coarse-grained URVC cross sections~\cite{torres20a} for the N-$\mathrm{N_2}$ system are used and implementation details concerning the inelastic and reactive collision routines are discussed elsewhere~\cite{torres18b}. As was the case in Secs.~\ref{sec:normal_shock_bins_inviscid} and \ref{sec:normal_shock_bins_navier_stokes}, here we simulate the steady, one-dimensional flow across a normal shock. However, the precise manner in which the DSMC solution is obtained differs for the high- and low-speed cases. For the former, we simulate the flow in the shock's frame of reference. Both ends of the domain are treated as open stream boundaries~\cite{bird94a}. In the VKI DSMC code~\cite{torres17a} we use the surface reservoir technique~\cite{tysanner05a} to generate the correct number and distribution of particles each time step at the upstream and downstream boundaries. The supersonic upstream gas enters from the left and, after traversing the standing shock wave, leaves the domain toward the right, where particles conforming to the post-shock equilibrium conditions are injected. The boundary conditions, expressed in terms of the equilibrium macroscopic flow parameters, are listed in Table~\ref{tab:normal_shock_bins_10kmsec_bc}. The velocity distributions at both boundaries conform to Maxwellians with the respective average velocities $\boldsymbol{u}_1 = \left( u_1, 0, 0 \right)^T$ and $\boldsymbol{u}_2 = \left( u_2, 0, 0 \right)^T$ and equilibrium temperatures $T_1$ and $T_2$. The particles representing molecular nitrogen entering at the left and right boundaries populate the rovibrational bins according to Boltzmann distributions at the pre- and post-shock equilibrium temperatures respectively. Given the degree of dissociation in the post-shock region, the number of $\mathrm{N_2}$-particles injected through the downstream boundary is negligible. As before, a trace amount of atomic nitrogen is added to the upstream gas to trigger inelastic $\mathrm{N}$-$\mathrm{N_2}(k)$ processes. To ensure that the shock front builds up at a well-defined location within the domain, we generate initial particles corresponding to the pre-shock equilibrium state (1) in the region left of the initial discontinuity and particles corresponding to post-shock equilibrium state (2) to the right of this location. This becomes the point where the supersonic free stream is ``tripped'' into transitioning to the post-shock equilibrium state and marks the initial location of the standing shock. As the simulation progresses, this discontinuity is smoothed out by particle transport. Once this phase is complete, the steady-state flow parameters are gathered from the DSMC particles and further refined through time-averaging. The location of the initial discontinuity is somewhat arbitrary, but if it is placed too close to either boundary, random walk may push the shock front out of the domain before steady-state macro-parameters can be extracted. Given that our primary goal is to observe as much of the relaxation region behind the shock as possible, we place it as close as is reasonable to the left boundary. By setting $L_\mathrm{u} = 3 \, \mathrm{cm}$ (see Table~\ref{tab:normal_shock_bins_simulation_parameters}), we make sure to leave ample space (i.e. $6\,000$ cells) between the inlet and the location of the initial discontinuity. Notice that for the high-speed condition only parameters for the 10-bin and 100-bin systems are listed in Table~\ref{tab:normal_shock_bins_simulation_parameters}.
Due to the greater computational cost of the DSMC method compared to the ODE approach of Sec.~\ref{sec:normal_shock_bins_inviscid} and the Navier-Stokes calculations of Sec.~\ref{sec:normal_shock_bins_navier_stokes}, no DSMC simulations for the higher-resolution 837-bin case and the full database were carried out. In both high-speed simulations, the DSMC particle weight is set to ensure that at least 20 particles are present in every upstream cell. Due to the rise in density across the shock, there are $\approx 540$ particles per cell in the downstream region. For the 100-bin case the domain length is reduced to $L_\mathrm{u} = 3 \, \mathrm{cm}$ and $L_\mathrm{d} = 10 \, \mathrm{cm}$ respectively. This reduction is justified, as we are still able to capture the full relaxation region, while significantly reducing the computational expense. Two complementary measures are taken to reduce the statistical noise inherent in DSMC flow fields. For the two high-speed cases in Table~\ref{tab:normal_shock_bins_simulation_parameters} we perform 64 simulations (using independent random number seeds) and ensemble-average the results. Thus, they become equivalent to a single simulation using 1280 particles per cell in the upstream- and about 34500 particles per cell in the downstream region. Past the transient phase (which lasts between $600\,000$ and $700\,000$ time steps) steady-state flow field samples are gathered over another $50\,000$ time steps. During this phase, instantaneous samples are taken every 10 time steps and added to a cumulative steady-state sample. \begin{table}[htb] \centering \caption{Normal shock wave with DSMC: domain and simulation parameters} \label{tab:normal_shock_bins_simulation_parameters} \begin{tabular}{r c c c c c} Case & & \multicolumn{2}{c}{high-speed} & & low-speed \\ System & & 10 bins & 100 bins & $\quad$ & 10 bins \\ \hline & & & & \\[-1em] DSMC cell size $\Delta x$ & [$\mathrm{\mu m}$] & 5 & 5 & & 1.5 \\ \\[-1em] \hline \\[-1em] Domain length & [cm] & 20 & 13 & & 9 \\ upstream $L_\mathrm{u}$ & [cm] & 3 & 3 & & - \\ downstream $L_\mathrm{d}$ & [cm] & 17 & 10 & & - \\ \hline & & & & & \\[-1em] DSMC cells & & $40\,000$ & $26\,000$ & & $60\,000$ \\ upstream & & $6\,000$ & $6\,000$ & & - \\ downstream & & $34\,000$ & $20\,000$ & & - \\ \\[-1em] \hline \\[-1em] Total simulator & & & & & \\ particles (million) & $\approx$ & 18.5 & 11 & & 16 \\ \hline \\[-1em] Particle weight & & \multicolumn{2}{c}{$8.02762 \times 10^{14}$} & & $2.4083 \times 10^{14}$ \\ \hline \\[-1em] DSMC $\Delta t$ & [ns] & 0.5 & 0.5 & & 0.2 \\ DSMC steps & & & & & \\ transient & & $600\,000$ & $700\,000$ & & $600\,000$ \\ time avg. & & $50\,000$ & $50\,000$ & & $300\,000$ \\ & & \multicolumn{2}{c}{(every 10 steps)} & & (every 1000) \end{tabular} \end{table} The flow field for the low-speed condition could not be obtained in the shock's frame of reference. Given the available computational resources, the domain size necessary to contain the entire steady-state shock profile would have become prohibitively large. Based on Fig.~\ref{fig:shocking_F90_temperatures_7kmsec_linear}, such a domain would have to extend at least $5 \, \mathrm{m}$ downstream of the shock front. While for the high-speed case we could comfortably contain the entire shock within $40\,000$ collision cells, this was not feasible for the low-speed case. Fortunately, for our purposes it is not necessary to simulate the entire post-shock relaxation region with DSMC. 
As was seen for the high-speed case, most of the diffusive effects are only appreciable within a narrow region surrounding the shock front. By concentrating on this portion we managed to significantly reduce the domain size. To accomplish this, we resort to the approach described by Strand and Goldstein~\cite{strand13a}, where the normal shock is treated as inherently unsteady. The supersonic free stream is fed into the domain on the left boundary, while a specular wall reflects all particles on the boundary to the right. This stagnates the incoming flow and generates a shock wave moving from right to left into the undisturbed gas upstream. Unlike in the previous set-up, the reference frame is now attached to the post-shock equilibrium gas, implying that $u_2^\prime = 0$. Therefore, in order to obtain the desired post-shock thermodynamic conditions of Table~\ref{tab:normal_shock_bins_7kmsec_bc} in our simulation, we adjust the inflow velocity to $u_1^\prime = u_1 - u_2$. Once the shock front has left the near-wall region, it begins to take on its steady-state structure and travels upstream at approximately $u_\mathrm{shock} = -u_2$. At this point macroscopic flow parameters can be sampled and individual samples time-averaged to reduce statistical noise. Since the shock is continuously moving upstream, these instantaneous samples have to be displaced to a common origin before time-averaging. Again, we resort to the procedure described in Ref.~\cite{strand13a} to define a common reference location for all profiles. At the low-speed condition the higher post-shock density and lower temperature (see Table~\ref{tab:normal_shock_bins_7kmsec_bc}) impose more stringent constraints on the collision cell and time step sizes. Thus, in the rightmost column of Table~\ref{tab:normal_shock_bins_simulation_parameters}, several simulation parameters were adjusted accordingly. Just as for the high-speed condition, ensemble-averaging over 64 independent simulations is used to reduce the statistical scatter in the instantaneous samples. \subsection{Comparison Navier-Stokes vs. DSMC} \label{sec:normal_shock_bins_navier_stokes_vs_dsmc} We now examine the flow fields obtained through the methods described in Secs.~\ref{sec:normal_shock_bins_navier_stokes} and \ref{sec:normal_shock_bins_dsmc}. First, in Figs.~\ref{fig:comparison_01_10kmsec} and \ref{fig:comparison_02_10kmsec} we compare DSMC profiles obtained using the 100-bin (blue dot on blue line) and 10-bin (red line) systems to Navier-Stokes profiles with the 10-bin system (black square on black line) at the high-speed conditions. We start with the gas density profiles in Fig.~\ref{fig:comparison_density_10v}. The DSMC and Navier-Stokes curves have been translated along the $x$-axis, such that the initial rise in density occurs at the same location for all three profiles. The location of the origin is arbitrary, but the same convention is used consistently in all flow parameter plots in Figs.~\ref{fig:comparison_01_10kmsec} and \ref{fig:comparison_02_10kmsec}. Focusing on the 10-bin system, both the DSMC and Navier-Stokes density profiles show close agreement, except for a weak increase of the density slope in the DSMC result at $x \approx 0$, which is absent from the Navier-Stokes curve. The Navier-Stokes density profile exhibits a quicker and more uniform initial rise, before intersecting the DSMC profile at $x \approx 0.003 \, \mathrm{m}$. Next, in Fig.~\ref{fig:comparison_temperatures_10v} we compare the corresponding kinetic and internal temperatures.
Here, the differences between both methods are more apparent. The maximum $T$-value obtained with DSMC (10 bins) is $T_\mathrm{max} \approx 58800 \, \mathrm{K}$, which lies roughly $7000 \, \mathrm{K}$ above the corresponding peak for Navier-Stokes. Incidentally, both maxima lie very close to one another, at $x \approx -0.002 \, \mathrm{m}$. By contrast, the maximum $T_\mathrm{int}$-values for all three curves are much closer to one another, with the Navier-Stokes curve slightly leading the DSMC profiles. The most noticeable difference is that both $T$-curves for DSMC begin to rise farther upstream and more gradually than the Navier-Stokes profile. Back in Fig.~\ref{fig:comparison_density_10v}, we also plot the partial density of $\mathrm{N_2}$ using dotted lines. As was observed in Fig.~\ref{fig:shocking_F90_densities_10kmsec_linear} for the inviscid case, there is an initial rise in $\rho_\mathrm{N_2}$ across the shock, before dissociation kicks in and gradually consumes the molecular nitrogen further downstream. At these high temperatures, the post-shock gas is almost entirely made up of atoms. Here, Navier-Stokes predicts dissociation occurring slightly ahead of the corresponding DSMC (10 bins) curve. This is consistent with the lower kinetic temperature observed for Navier-Stokes in Fig.~\ref{fig:comparison_temperatures_10v}. \begin{figure} \begin{minipage}{0.8\columnwidth} \subfloat[Density $\rho \times 10^{3} \, \mathrm{[kg/m^3]}$ (solid lines) and partial density of molecular nitrogen $\rho_\mathrm{N_2} \times 10^{3}$ (dotted lines)]{\includegraphics[width=\columnwidth]{comparison_partial_densities_10v.eps}\label{fig:comparison_density_10v}} \end{minipage} \begin{minipage}{0.8\columnwidth} \subfloat[Gas kinetic temperature $T \, \mathrm{[K]}$ and internal temperature of $\mathrm{N_2}$-molecules $T_\mathrm{int} \, \mathrm{[K]}$]{\includegraphics[width=\columnwidth]{comparison_temperatures_10v.eps}\label{fig:comparison_temperatures_10v}} \end{minipage} \caption{Gas density and temperature profiles for high-speed condition ($u_1 = 10 \, \mathrm{km \cdot s^{-1}}$). DSMC with 100 bins (dot on blue lines) vs. DSMC with 10 bins (red lines) vs. Navier-Stokes with 10 bins (unfilled squares on black lines).} \label{fig:comparison_01_10kmsec} \end{figure} We now move on to Fig.~\ref{fig:comparison_02_10kmsec} and the comparison of flow parameters associated with diffusive transport at the high-speed condition. In Fig.~\ref{fig:comparison_diffusion_fluxes_N2_10v} we first show the mass diffusion flux of $\mathrm{N_2}$ along the $x$-direction. For the two DSMC curves and the single Navier-Stokes result $j_{x, \mathrm{N_2}}$ is calculated as the sum over all internal energy bins, i.e.: $j_{x,\mathrm{N_2}} = \sum_{k \in \mathcal{K}_\mathrm{N_2}} \{ \rho_k \, u_k^\mathrm{d} \}$. The corresponding mass diffusion flux of atomic nitrogen: $j_{x,\mathrm{N}} = \rho_\mathrm{N} \, u_\mathrm{N}^\mathrm{d}$ (not shown) is equal in magnitude, but opposite in sign. The peak of $j_{x, \mathrm{N_2}}$ captured by the DSMC and Navier-Stokes methods with 10 bins agrees to within less than 5\%, although in the Navier-Stokes profile this maximum appears slightly ahead of the DSMC curve. For the 100-bin DSMC case, the peak diffusion flux lies about 10\% below the corresponding 10-bin DSMC value, but at almost exactly the same $x$-location. Next, in Fig.~\ref{fig:comparison_viscous_stresses_10v} we plot the three normal components of the viscous stress tensor.
For our one-dimensional flow configuration, only the velocity derivative $\partial u_x / \partial x$ becomes non-zero across the shock. As a consequence, the only components of $\underline{\underline{\tau}}$ in Eq.~(\ref{eq:momentum_balance}) that take on non-zero values turn out to be $\tau_{xx} = \frac{4}{3} \, \eta \, (\partial u_x / \partial x)$ and $\tau_{yy} = \tau_{zz} = - \frac{2}{3} \, \eta \, (\partial u_x / \partial x)$. Both the DSMC and Navier-Stokes profiles reach their maxima at essentially the same $x$-location. The DSMC stress profiles are slightly more spread out than their Navier-Stokes counterparts. The ratio $\tau_{xx, \mathrm{max}} / \tau_{yy, \mathrm{max}}$ yields exactly $-2$ for the Navier-Stokes profiles, in accordance with the analytical expressions for $\tau_{xx}$ and $\tau_{yy}$. The same ratio of $-2$ is maintained for the DSMC profiles, although the peak viscous stresses obtained with Navier-Stokes lie about 34\% above the corresponding DSMC values. As can be seen by comparing the two DSMC profiles, the number of bins has practically no effect on the shape of the viscous stress profiles. Finally, in Fig.~\ref{fig:comparison_heat_fluxes_10v} we compare $q_x$, i.e. the heat flux component along the flow direction. Both the DSMC and Navier-Stokes profiles exhibit their peak negative values (due to heat being transferred upstream across the shock front) at roughly the same $x$-location. However, the peak flux for DSMC reaches nearly $-22.1 \, \mathrm{MW/m^2}$, while for the Navier-Stokes result it only reaches $-16.7 \, \mathrm{MW/m^2}$. As was the case for the kinetic temperature in Fig.~\ref{fig:comparison_temperatures_10v}, the DSMC heat flux profiles are noticeably more diffuse and begin to deviate from zero much farther upstream than their Navier-Stokes counterpart. A second, smaller, but positive peak appears in all three $q_x$-profiles further downstream. Thus, some amount of heat is also being transferred from the shock front in the downstream direction. It is interesting to note that the location of this second, positive peak in $q_x$ nearly coincides with the maximum in $j_{x,\mathrm{N_2}}$ reported in Fig.~\ref{fig:comparison_diffusion_fluxes_N2_10v} for all three calculations. One might thus suspect that ``diffusion of enthalpy'' plays a significant role in shaping the heat flux profile in this region. To examine this, we have decomposed the Navier-Stokes result (solid black lines) into $q_x^\mathrm{cond}$, i.e. its contribution due to heat conduction (dash-dotted line), and $q_x^\mathrm{diff}$, i.e. its contribution due to diffusion of enthalpy (dotted line). It turns out that the second peak observed in the $q_x$-profile is the net result of a sizable conductive heat flux in the downstream direction and a nearly as large diffusive heat flux in the opposite sense. At about $10 \, \mathrm{MW / m^2}$, the downstream peak of $q_x^\mathrm{cond}$ amounts to roughly two thirds of the magnitude being transferred upstream. Simultaneously, this effect is almost completely compensated for by the $q_x^\mathrm{diff}$-contribution in the opposite sense, which reaches a peak value of nearly $-8 \, \mathrm{MW / m^2}$. No such decomposition is shown for the DSMC results in Fig.~\ref{fig:comparison_heat_fluxes_10v}. Indeed, it would be difficult to achieve a rigorous separation into the aforementioned $q^\mathrm{cond}$ and $q^\mathrm{diff}$ terms for the DSMC profiles.
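For the Navier-Stokes result, by contrast, this split is a straightforward post-processing step, since the transport coefficients are available at every grid point. A minimal sketch of such a decomposition (in Python; the array names are hypothetical placeholders for the profiles a flow solver might export, and the thermal conductivity here is assumed to lump all internal-energy contributions) could read:

\begin{verbatim}
import numpy as np

def decompose_heat_flux(x, T, lam, rho_k, h_k, ud_k):
    # x     : (N,)  grid coordinates [m]
    # T     : (N,)  gas temperature [K]
    # lam   : (N,)  thermal conductivity [W/(m K)]
    # rho_k : (K,N) partial densities per species/bin [kg/m^3]
    # h_k   : (K,N) specific enthalpies per species/bin [J/kg]
    # ud_k  : (K,N) diffusion velocities per species/bin [m/s]
    q_cond = -lam * np.gradient(T, x)             # Fourier conduction
    q_diff = np.sum(rho_k * h_k * ud_k, axis=0)   # diffusion of enthalpy
    return q_cond, q_diff, q_cond + q_diff
\end{verbatim}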
In DSMC, the macroscopic heat flux emerges as the net result of the advection of kinetic and internal energy attached to each individual molecule and atom (see App.~\ref{app:macroscopic_moments} for the definitions used in our calculations). The DSMC heat flux profiles naturally account for all contributions due to conduction, diffusion of enthalpy, and heat transfer induced by concentration gradients (Dufour effect). However, since transport coefficients, such as the thermal conductivity $\lambda$ and the species-dependent thermal diffusion ratio $\chi_i$, have no meaning at the gas-kinetic scale, a rigorous separation into individual contributions is not possible. The overall close agreement between the DSMC and Navier-Stokes profiles in Figs.~\ref{fig:comparison_01_10kmsec} and \ref{fig:comparison_02_10kmsec} is somewhat surprising. Given the strong deceleration, the molecular velocity distributions across the shock obtained with DSMC will deviate significantly from the Chapman-Enskog distribution, on which the Navier-Stokes solution is based. Thus, one might have expected a greater difference between both results. Another noteworthy aspect is that, apart from minor differences in the mixture and partial density profiles, the 10-bin and 100-bin DSMC flow fields exhibit almost the same behavior. This is in contrast with what was observed in Fig.~\ref{fig:shocking_F90_temperatures_10kmsec_linear} for the inviscid case, where the temperature profiles are very sensitive to the number of bins employed. Although an exhaustive study was not conducted, this suggests that diffusive phenomena significantly reduce the differences due to the number of bins that were originally observed in the inviscid profiles. \begin{figure} \begin{minipage}[t]{0.5\columnwidth} \subfloat[$x$-component of mass diffusion flux for $\mathrm{N_2} \, \mathrm{[kg/(m^2\,s)]}$]{\includegraphics[width=\columnwidth]{comparison_diffusion_fluxes_N2_10v.eps}\label{fig:comparison_diffusion_fluxes_N2_10v}} \end{minipage}~ \begin{minipage}[t]{0.5\columnwidth} \subfloat[Normal components of viscous stress tensor $\mathrm{[kPa]}$]{\includegraphics[width=\columnwidth]{comparison_viscous_stresses_10v.eps}\label{fig:comparison_viscous_stresses_10v}} \end{minipage} \begin{minipage}{0.5\columnwidth} \subfloat[$x$-component of heat flux $\mathrm{[MW/m^2]}$. Navier-Stokes profile split into contributions due to conduction (dash-dotted lines) and diffusion of enthalpy (dotted lines)]{\includegraphics[width=1.0\columnwidth]{comparison_heat_fluxes_10v.eps}\label{fig:comparison_heat_fluxes_10v}} \end{minipage} \caption{Diffusive transport fluxes for high-speed condition ($u_1 = 10 \, \mathrm{km \cdot s^{-1}}$). DSMC with 100 bins (filled circle on blue lines) vs. DSMC with 10 bins (red lines) vs. Navier-Stokes with 10 bins (unfilled squares on black lines).} \label{fig:comparison_02_10kmsec} \end{figure} In Figs.~\ref{fig:comparison_01_7kmsec} and \ref{fig:comparison_02_7kmsec}, we now compare DSMC (red lines) and Navier-Stokes results (unfilled squares on black lines) for the low-speed case. Here we focus exclusively on the 10-bin system. Recall from Sec.~\ref{sec:normal_shock_bins_inviscid} that at $7 \, \mathrm{km \cdot s^{-1}}$ the post-shock chemical nonequilibrium region extends much farther downstream than at $10 \, \mathrm{km \cdot s^{-1}}$. However, here we focus on the region immediately surrounding the shock front, where the strongest thermo-chemical nonequilibrium is observed.
Thus, density, temperature, and in particular the mixture composition do not fully reach their post-shock equilibrium values in the $x$-range plotted. However, the moments associated with viscous and diffusive phenomena adjust much more quickly and are fully contained within the region shown. In Fig.~\ref{fig:comparison_density_10v_7kmsec} we begin by plotting the density profiles. As was done for the high-speed case, the DSMC and Navier-Stokes profiles have been aligned such that the initial rise in density occurs at a common $x$-location. For both the DSMC and Navier-Stokes calculations the overall gas density $\rho$ is represented by solid lines, whereas $\rho_\mathrm{N_2}$ is shown using dotted lines. One can see two distinct ``bumps'' in both $\rho$-profiles, with the first one appearing at the same $x$-location for both methods. Near the second bump further downstream, the two $\rho$-curves begin to diverge, and beyond this point the DSMC profile remains slightly above the corresponding Navier-Stokes curve. Up until the second bump in the $\rho$-profiles, dissociation plays only a minor role, but past this point the amount of atomic nitrogen begins to increase rapidly, while $\rho_\mathrm{N_2}$ remains almost constant. In Fig.~\ref{fig:comparison_temperatures_10v_7kmsec} we plot the corresponding temperature profiles. As was seen for the high-speed case in Fig.~\ref{fig:comparison_temperatures_10v}, the peaks in kinetic temperature $T$ appear at almost the same $x$-location for both DSMC and Navier-Stokes. Of course, given the significantly lower total enthalpy of the flow, the peak $T$-values are much lower than for the high-speed case. At $T_\mathrm{max} \approx 31100 \, \mathrm{K}$, DSMC predicts a somewhat higher peak value than Navier-Stokes, where a maximum of $\approx 28200 \, \mathrm{K}$ is reached. Similar to the high-speed case, the kinetic temperature profile from DSMC in Fig.~\ref{fig:comparison_temperatures_10v_7kmsec} is more diffuse and exhibits a more gradual initial rise than the Navier-Stokes curve. The $T_\mathrm{int}$-maximum appears at almost exactly the same $x$-location for both methods, and the two peak values differ by less than 2\% (DSMC: $T_\mathrm{int} \approx 15300 \, \mathrm{K}$ vs. Navier-Stokes: $T_\mathrm{int} \approx 15100 \, \mathrm{K}$). Slightly different behavior is seen downstream of this point, with the common DSMC temperature decreasing somewhat faster than in the Navier-Stokes profile. It is worth noting that both methods predict the highest kinetic temperature about $0.005 \, \mathrm{m}$ upstream of the point where significant amounts of N-atoms begin to be produced. In fact, for both methods the location in Fig.~\ref{fig:comparison_density_10v_7kmsec} where the $\rho$ and $\rho_\mathrm{N_2}$ profiles begin to diverge coincides with the peak in $T_\mathrm{int}$ observed in Fig.~\ref{fig:comparison_temperatures_10v_7kmsec}, beyond which the translational and internal temperatures reach a common value. This suggests that at these lower-speed conditions a noticeable ``incubation length'' for dissociation exists and that dissociation primarily occurs under near-equilibrium conditions downstream of the shock front. Overall, DSMC predicts slightly quicker dissociation of $\mathrm{N_2}$ than the Navier-Stokes calculation. This can be seen by comparing the density profiles in Fig.~\ref{fig:comparison_density_10v_7kmsec}. The behavior of the temperature profiles in Fig.~\ref{fig:comparison_temperatures_10v_7kmsec} is consistent with this fact.
Since in the DSMC calculation a slightly larger number of endothermic dissociation reactions removes a greater amount of energy from the translational and internal modes, the DSMC temperature stays below the Navier-Stokes profile past the initial shock front. \begin{figure} \begin{minipage}{0.8\columnwidth} \subfloat[Density $\rho \times 10^{3} \, \mathrm{[kg/m^3]}$ (solid lines) and partial density of molecular nitrogen $\rho_\mathrm{N_2} \times 10^{3}$ (dotted lines)]{\includegraphics[width=\columnwidth]{comparison_partial_densities_10v_7kmsec.eps}\label{fig:comparison_density_10v_7kmsec}} \end{minipage} \begin{minipage}{0.8\columnwidth} \subfloat[Gas kinetic temperature and internal temperature of $\mathrm{N_2}$ molecules $\mathrm{[K]}$]{\includegraphics[width=\columnwidth]{comparison_temperatures_10v_7kmsec.eps}\label{fig:comparison_temperatures_10v_7kmsec}} \end{minipage} \caption{Gas density and temperature profiles for low-speed condition ($u_1 = 7 \, \mathrm{km \cdot s^{-1}}$). DSMC with 10 bins (red lines) vs. Navier-Stokes with 10 bins (squares on black lines).} \label{fig:comparison_01_7kmsec} \end{figure} Next, in Fig.~\ref{fig:comparison_02_7kmsec} we compare the flow parameters associated with diffusive transport for the low-speed case. First, in Fig.~\ref{fig:comparison_diffusion_fluxes_N2_10v_7kmsec} we examine the diffusion fluxes of $\mathrm{N_2}$ along the $x$-direction. Here, slightly different behavior between the DSMC and Navier-Stokes profiles is apparent. The diffusion flux for $\mathrm{N_2}$ obtained with DSMC exhibits two distinct peaks, one at $x \approx -0.001 \, \mathrm{m}$ and another closer to $x = 0.0075 \, \mathrm{m}$. This behavior is exactly mirrored for $\mathrm{N}$, although with opposite sign (not shown). By contrast, in the Navier-Stokes solution the first peak does not appear at all. Furthermore, the maxima in predicted $j_{x, \mathrm{N_2}}$ lie at about $0.0065 \, \mathrm{kg/(m^2\,s)}$ for DSMC vs. $0.005 \, \mathrm{kg/(m^2\,s)}$ for Navier-Stokes. In Fig.~\ref{fig:comparison_viscous_stresses_10v_7kmsec} we plot the three normal components of the viscous stress tensor for the low-speed case. The magnitudes of these stresses are approximately half of those for the high-speed case, but follow the same general behavior. Both for DSMC and Navier-Stokes we retrieve precisely $\tau_{xx, \mathrm{max}} / \tau_{yy, \mathrm{max}} = -2$, but the ratio between the peak values is now $[\tau_{xx, \mathrm{max}}]_\mathrm{NS} / [\tau_{xx, \mathrm{max}}]_\mathrm{DSMC} = 1.23$. In a slight departure from the high-speed case, the normal stresses do not immediately return to zero downstream of their peaks. Instead, a small plateau forms in both the DSMC and Navier-Stokes profiles. Finally, in Fig.~\ref{fig:comparison_heat_fluxes_10v_7kmsec} we compare the heat flux profiles for the low-speed shock. The peak heat flux for DSMC was observed to be $-8.06 \, \mathrm{MW/m^2}$, whereas it was $-5.70 \, \mathrm{MW/m^2}$ in the Navier-Stokes result. This amounts to a ratio $[q_\mathrm{max}]_\mathrm{NS} / [q_\mathrm{max}]_\mathrm{DSMC} = 0.708$, as opposed to $0.756$ for the high-speed case. As for the high-speed condition, the DSMC and Navier-Stokes profiles agree in general shape, but differ somewhat in the location and magnitude of their maxima. Likewise, the initial departure from zero begins further upstream and is more gradual in DSMC than in the Navier-Stokes profile.
Past the initial negative peak in $q_x$, both profiles exhibit a second, slightly positive overshoot downstream of the shock front. This peak, or plateau, is much less pronounced and more spread out than in the high-speed case. Again, in Fig.~\ref{fig:comparison_heat_fluxes_10v_7kmsec} we have split up the Navier-Stokes profile into contributions due to heat conduction (dash-dotted line) and diffusion of enthalpy (dotted line) to assess the relative importance of both transfer mechanisms. It can be seen that in the plateau region heat conduction in the downstream direction is almost exactly compensated for by diffusion of enthalpy in the opposite sense. The magnitudes of these fluxes are smaller than in the high-speed case, but the general effect is still present at this condition. \begin{figure} \begin{minipage}[t]{0.5\columnwidth} \subfloat[$x$-component of mass diffusion flux for $\mathrm{N_2} \, \mathrm{[kg/(m^2\,s)]}$]{\includegraphics[width=\columnwidth]{comparison_diffusion_fluxes_N2_10v_7kmsec.eps}\label{fig:comparison_diffusion_fluxes_N2_10v_7kmsec}} \end{minipage}~ \begin{minipage}[t]{0.5\columnwidth} \subfloat[Normal components of viscous stress tensor $\mathrm{[kPa]}$]{\includegraphics[width=\columnwidth]{comparison_viscous_stresses_10v_7kmsec.eps}\label{fig:comparison_viscous_stresses_10v_7kmsec}} \end{minipage} \begin{minipage}{0.5\columnwidth} \subfloat[$x$-component of heat flux $\mathrm{[MW/m^2]}$. Navier-Stokes profile split into contributions due to conduction (dash-dotted lines) and diffusion of enthalpy (dotted lines)]{\includegraphics[width=1.0\columnwidth]{comparison_heat_fluxes_10v_7kmsec.eps}\label{fig:comparison_heat_fluxes_10v_7kmsec}} \end{minipage} \caption{Diffusive transport fluxes for low-speed condition ($u_1 = 7 \, \mathrm{km \cdot s^{-1}}$). DSMC with 10 bins (red lines) vs. Navier-Stokes with 10 bins (squares on black lines).} \label{fig:comparison_02_7kmsec} \end{figure} \section{Conclusions} \label{sec:conclusions} We have presented a procedure to build a coarse-grain fluid model incorporating internal energy exchange and nonequilibrium chemistry that is fully consistent with the gas-kinetic description. The resulting hydrodynamic equations are equipped with dissipative transport and chemical source terms that are rigorously derived from the collision operators of the underlying kinetic equation. We have used a state-to-state approach, which allows for a detailed description of inelastic processes in a gas mixture. A set of coarse-grain cross sections and corresponding rate coefficients derived from the NASA Ames \emph{ab initio} database for the $\mathrm{N_2} (v,J)$-N system was employed to model internal energy exchange and dissociation-recombination reactions. The uniform rovibrational collisional (URVC) bin model was used to reduce this database to a manageable size for flow calculations. The simplicity of the URVC model makes it possible to impose reversibility relations between forward and backward elementary reactions at the coarse-grain level. These relations are expressed in terms of cross section pairs at the kinetic scale and equivalent rate coefficient pairs at the hydrodynamic scale. By means of the Chapman-Enskog method we have obtained expressions for the diffusive and viscous transport terms in the Navier-Stokes equations that are consistent with the elastic collision operators of the Boltzmann equation. All associated transport properties are calculated from the corresponding scattering cross sections.
These two features of the coarse-grain model allow for the unambiguous formulation of the entropy production rates due to viscous transport and chemistry, which in turn ensures that the second law of thermodynamics is respected by the fluid equations. Demonstrating strict non-negativity of the entropy production terms is a sanity check on our derivations and in fact a fundamental requirement for any well-posed coarse-grain fluid model. We have implemented both the fluid-scale and kinetic-scale coarse-grain models in dedicated flow solvers. In order to compare their behavior, we have performed simulations of normal shock waves in nitrogen exhibiting strong thermo-chemical nonequilibrium. Flow fields at two different shock speeds were obtained, each through three numerical approaches of increasing fidelity: (1) a steady, one-dimensional inviscid flow solution obtained by coupling the master equations for detailed chemistry to momentum and energy balances along the flow direction, (2) a one-dimensional viscous flow solution to the Navier-Stokes equations by means of the Finite Volume method, and (3) a gas-kinetic-scale solution using the direct simulation Monte Carlo (DSMC) method. Our calculations reveal rather close agreement between the Navier-Stokes and DSMC predictions. This is somewhat surprising given the free-stream Mach numbers we studied ($\mathrm{Ma}_\infty \approx 28$ and $20$, respectively). At such extreme conditions one could have expected that the inability of the Navier-Stokes solutions to fully reproduce the strong translational nonequilibrium effects across the shock (i.e. bi-modal velocity distributions) would cause them to deviate more noticeably from the DSMC results. However, even though the macroscopic flow properties predicted with DSMC are more diffuse than in the Navier-Stokes calculations, all major features appear in both solutions at nearly the same $x$-location and are comparable in magnitude. With regard to resolving the shock structure with Navier-Stokes, our study reinforces the notion that employing transport properties consistent with the corresponding scattering cross sections is fundamental to obtaining close agreement with kinetic-scale solutions. Furthermore, our DSMC calculations suggest that the sensitivity of the flow field to the number of bins used in our coarse-grain model is greatly attenuated when viscous and diffusive transport effects are included. It should be recalled that the URVC bin model we employ assumes a constant average energy for all rovibrational levels and freezes their relative populations within a bin. This is rather restrictive and clearly not the ideal reduction strategy. On the other hand, these constraints make deriving the associated fluid equations rather simple from a mathematical viewpoint. At this stage it is not clear whether equivalent asymptotic solutions can be derived in the same manner for other existing coarse-grain models. In particular, if each bin is assumed to have an associated temperature (e.g. Boltzmann bins), it is not straightforward to translate the model into the Chapman-Enskog framework, because of the need for compatibility of this ansatz with the associated scaling of the collision operators. A rigorous treatment of the internal energy for multi-temperature gases is still an open problem for the Chapman-Enskog method. Other types of closures for transport phenomena, such as the Maximum Entropy closure~\cite{muller93a, levermore96a}, may be more natural.
\begin{acknowledgments} The authors would like to thank Dr. Federico Bariselli for his contributions to the improvement of the URVC bin model and Dr. Alessandro Munaf\`o for the codes used to generate the hydrodynamic solutions. We would also like to thank Dr. R. L. Jaffe and Dr. D. W. Schwenke from NASA Ames Research Center for access to the kinetic database used in this work. \end{acknowledgments} \section*{Data availability statement} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{INTRODUCTION} The phase diagram of Ca$_{2-x}$Sr$_{x}$RuO$_4$\ exhibits a rich variety of physical phenomena, ranging from the unconventional superconductor Sr$_2$RuO$_4$\ to the antiferromagnetic Mott-insulator Ca$_2$RuO$_4$ \cite{1,2,3,4}. The essentially different character of these physical ground states is quite remarkable in view of the fact that only the ionic radius on the Ca/Sr-site varies throughout this series. The Ca$_{2-x}$Sr$_{x}$RuO$_4$\ phase diagram therefore offers the interesting possibility to tune through a Mott-transition by structural changes only. The smaller ionic radius of divalent Ca compared to that of divalent Sr induces a series of structural phase transitions characterized by rotations or tilts of the RuO$_6$-octahedra, as is typically observed in perovskites and related compounds. Such structural deformations have a strong impact on the electronic band structure since they modify the metal-oxygen hopping parameters and thereby the electronic band widths \cite{5,6}. In Ca$_{2-x}$Sr$_{x}$RuO$_4$, the decrease of the Sr content $x$ first stabilizes a rotation of the octahedra around the $c$ axis and then, for $x \leq 0.5$, a tilting of the octahedra around an in-plane axis \cite{7}. Further decrease of the Sr content finally leads to the Mott-transition associated with another structural transition, across which the RuO$_6$ octahedra become flattened and their tilting increases \cite{7,8,9}. Remarkable physical properties were reported for the Sr-concentration range $0.2 \leq x \leq 0.5$, i.e. in the metallic phase close to the Mott transition \cite{8,9}. Approaching the Sr content $x=0.5$ from higher values, Nakatsuji et al.\ report a continuous increase of the low-temperature magnetic susceptibility, reaching at $x=0.5$ a value 200 times larger than that of pure Sr$_2$RuO$_4$ \cite{8,9}. Furthermore, the electronic coefficient of the specific heat is exceptionally high, of the order of ${C_p/T}\sim 250\,\frac{\rm mJ}{\rm mole\,K^2}$ \cite{8,9,jin}, well in the range of typical heavy-fermion compounds. Inelastic neutron scattering has revealed strongly enhanced magnetic fluctuations \cite{friedt-prl} with a propagation vector of \vq $\sim$(0.2,0,0). The fluctuations in Ca$_{2-x}$Sr$_{x}$RuO$_4$\ with $x$ close to 0.5 are quite different from those in pure Sr$_2$RuO$_4$\ \cite{sidis,braden2002}: although the magnetic instability observed for $x=0.5$ is still incommensurate, its character is closer to ferromagnetism. The magnetic properties of the Ca$_{2-x}$Sr$_{x}$RuO$_4$-compounds with a Sr content close to 0.5 show some resemblance to localized electron systems; it has even been proposed that in these materials an orbital-selective Mott-transition occurs, leaving part of the 4d-electrons itinerant \cite{10}. This proposal has initiated a strong debate concerning its theoretical basis as well as its applicability to the phase diagram of Ca$_{2-x}$Sr$_{x}$RuO$_4$. It appears safe to assume, however, that the $\gamma$-band\ associated with the $d_{xy}$-orbital exhibits a much smaller band width than that of the $\alpha$- or $\beta$-band, since the rotation mainly influences the hybridization of the $d_{xy}$-electrons \cite{5,6}. Upon further decrease of the Sr content, the tilt transition occurs with an apparently strong impact on the magnetic properties. The low-temperature magnetic susceptibility rapidly decreases with increasing tilt, and the electronic specific-heat coefficient is reduced but remains at a rather high level.
Applying a magnetic field to Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ at low temperature induces a metamagnetic transition with a step-like increase of the magnetization of about 0.4\ $\mu_B$\ per Ru. The metamagnetic transition field $H_{mm}$ sensitively depends on the direction of the applied field; the transition occurs at 5.5\ T when the field is applied along the $c$-direction \cite{9}, whereas values of 2 and 7\ T are found for field directions within the $a,b$-plane \cite{balicas}. The strong anisotropy of the metamagnetic transition field, in particular the difference for the two orthorhombic in-plane directions, suggests the relevance of spin-orbit coupling. The metamagnetic transition in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ strongly resembles that observed in the double-layer ruthenate Sr$_3$Ru$_2$O$_7$, which has attracted special interest due to quantum-critical phenomena related to the end-point of the first-order metamagnetic transition \cite{11,12,13}. Apart from the different basic structure, these two- and single-layer ruthenates possess similar structural characteristics; in particular, they both exhibit the structural deformation characterized by octahedra rotation around the $c$-axis. In recent work, we have analyzed the structural aspects of the metamagnetism in Ca$_{2-x}$Sr$_{x}$RuO$_4$\ by a combination of diffraction, high-resolution thermal expansion, and magnetostriction experiments \cite{kriener,baier-physb}. These studies gave evidence of a change of the orbital arrangement driven by either temperature or magnetic field. The thermal-expansion anomalies observed at zero magnetic field illustrate an increase of the $d_{xy}$\ orbital occupation upon cooling, whereas the anomalies at high field point to a decrease of the $d_{xy}$-occupation upon cooling. Accordingly, the structural effects seen as a function of magnetic field at low temperature (diffraction and magnetostriction results) show that upon increasing magnetic field electrons are transferred from the $d_{xz}$- and $d_{yz}$-orbitals into the $d_{xy}$-orbital. In this work we have completed these studies for Ca$_{2-x}$Sr$_{x}$RuO$_4$\ with $x=0.2$ and $x=0.5$ by additional diffraction studies and by high-resolution dilatometer measurements along different directions in longitudinal and transverse magnetic fields. In addition, we have focused on the metamagnetic transition itself by collecting more data close to the critical field and by extending the measurements towards lower temperatures for Ca$_{1.8}$Sr$_{0.2}$RuO$_4$, where the metamagnetic transition is best defined. \section{EXPERIMENT} Single crystals of Ca$_{2-x}$Sr$_x$RuO$_4$ were grown by a floating-zone technique in image furnaces at Kyoto University $(x = 0.2)$ and at Universit\'e Paris Sud $(x = 0.5)$. Details of the preparation process are reported in Ref.~\cite{nakatsuji01a}. In addition, powder samples of Ca$_{1.8}$Sr$_{0.2}$RuO$_4$ and Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ were prepared following the standard solid-state reaction route. The samples were either from the same batches as those studied in Refs.~\cite{8,9} or were characterized by x-ray diffraction and susceptibility measurements and found to possess identical properties. Thermal expansion and magnetostriction were studied in magnetic fields up to 14 T in two different home-built capacitive dilatometers down to a lowest temperature of 300\,mK \cite{braendli73a,lorenz97a,pott83a,heyer}. The magnetization measurements were performed using a Quantum Design vibrating sample magnetometer (VSM).
With the GEM diffractometer at the ISIS facility, neutron powder diffraction patterns were recorded as a function of temperature and in fields up to 10 T for Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. X-ray powder diffraction patterns were recorded between 10 and 1000\ K using a D5000 Siemens diffractometer and Cu-K$_{\alpha}$-radiation. \begin{figure} \centerline{\psfig{file=Graph1.eps,width=11cm}} \caption{Results of x-ray and neutron diffraction studies on Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ and Ca$_{1.5}$Sr$_{0.5}$RuO$_4$. The left column gives the three orthorhombic lattice constants in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ as a function of temperature (results obtained on GEM); the middle columns show the lattice constants and the RuO bond lengths in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ as a function of magnetic field. The right column shows the $a$ and $b$ parameters (above) and the $c$ lattice constant (below) for a wider temperature range for $x=0.2$ and 0.5. Note that Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ exhibits a tetragonal to orthorhombic transition above room temperature whereas Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ stays tetragonal down to about 50\ K. }\label{diffraction} \end{figure} \section{RESULTS AND DISCUSSION} \subsection{Metamagnetic transition in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$ } Among the Ca$_{2-x}$Sr$_{x}$RuO$_4$\ series, the metamagnetic transition is best defined in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$; therefore, we have chosen this composition for the most detailed thermodynamic studies. Figure 1 presents the results of diffraction experiments on a powder sample to characterize the structural evolution as a function of temperature and magnetic field. These data were taken with the GEM diffractometer using a magnetic-field cryostat. In addition, the right panels of Figure 1 show the lattice parameters determined by x-ray diffraction in a wider temperature interval. Ca$_{1.8}$Sr$_{0.2}$RuO$_4$ exhibits two distinct structural distortions at low temperature: the rotation of the RuO$_6$-octahedra around the $c$-axis and their tilting. Above $\sim$350\ K, we find only the rotational distortion in space group $I4_1/acd$ with a lattice of $\sqrt{2}\cdot a, \sqrt{2} \cdot a, 2\cdot c$ with respect to the ideal tetragonal lattice. A characteristic feature of this phase observed in many Ca$_{2-x}$Sr$_{x}$RuO$_4$-compounds \cite{steffens-unp} concerns the negative thermal expansion along the $c$-axis, which persists over a broad temperature range; see the x-ray data in Fig.~1. The structural phase transition associated with the octahedra tilting further reduces the symmetry towards $Pbca$ with nearly the same lattice parameters as in $I4_1/acd$. Although the $Pbca$ space group has also been reported for the insulating and metallic phases observed for $x < 0.2$, see Ref.~\cite{7}, the symmetries are different, since in the case of the low-temperature phase for $x \geq 0.2$ the rotational distortion still leads to a doubling of the $c$-axis. High-resolution measurements of the thermal expansion along and perpendicular to the $c$-axis reveal strong low-temperature anomalies \cite{kriener}, which are also visible in the diffraction data. Both in-plane parameters expand upon cooling whereas the $c$-axis shrinks; see Fig.~1. The diffraction data further show that upon increasing the magnetic field both in-plane directions shorten while $c$ elongates.
With the full structure analysis one may attribute this effect essentially to a change in the octahedron shape, which becomes elongated at high field, indicating a shift of orbital occupation from $d_{xy}$\ to $d_{xz}$\ and $d_{yz}$\ states \cite{kriener}. A set of powder-diffraction patterns was recorded on GEM using a zero-field cryostat in order to better characterize the temperature dependence of the crystal structure in a wider range. Up to a temperature of 160\ K the essential structural change arises from a weak variation of the tilt distortion. At low temperature the tilt angle saturates at values of 5.9 and 4.4 degrees, determined at the basal (O1) and apical (O2) oxygen, respectively. The minor difference between these two tilt angles indicates that the RuO$_6$-octahedra are not perfect in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. The tilt distortion is coupled to the orthorhombic strain $a > b$, which at first sight is counterintuitive, as the tilt occurs around the $b$-axis, which is the shorter one. Similar to other K$_2$NiF$_4$-type materials with a tilt distortion, the interactions in the Ca/Sr-O rock-salt layer induce an elongation of the lattice and of the RuO$_6$-octahedron perpendicular to the tilt axis. Although this octahedral distortion might be relevant in splitting the $d_{xz}$- and $d_{yz}$-derived $t_{2g}$-levels and hence in causing the in-plane anisotropy of the metamagnetic transition field, it is not related to an orbital ordering effect. The orthorhombic splitting, as well as the difference in the O1-O1-edge lengths of the octahedra, is clearly coupled to the tilt angles; see Figs.~1 and 2. The low-temperature anomalies seen in the thermal expansion \cite{kriener} are clearly visible in the lattice parameters, but the effect in the internal crystal structure is within the error of this measurement; see Fig.~2. \begin{figure} \centerline{\psfig{file=figure1.eps,width=6.8cm}} \caption{Structural evolution of Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ as determined by powder neutron diffraction on GEM without magnetic field; the structural results correspond to Rietveld fits in space group $Pbca$, but constraints had to be used to limit the number of free parameters.} \label{diffraction-b} \end{figure} Figure 3 shows the magnetostriction data recorded along the $c$-axis for field directions parallel and perpendicular to $c$. Qualitatively, both orientations show the same effect -- the elongation of the lattice when passing the metamagnetic transition, at $H_{mm}=5.7$\ T for the field applied along $c$ and at 2.0\ T for the field perpendicular to $c$; see also Ref.~\cite{kriener}. At fields well above the metamagnetic transition, there is little quantitative difference between the two field directions: the elongation is only 20\% smaller when the field is applied parallel to $c$. The comparable structural effects along both directions imply that the magnetostriction of Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ does not simply arise from the alignment of anisotropic ionic coordinations, as is the case in rare-earth compounds. Instead, it has to be attributed to a transition, or at least to a cross-over, between distinct phases. Spin-orbit coupling seems to cause the sizeable anisotropy of the metamagnetic transition field, but it does not seem to play a major role in the metamagnetic transition itself. The magnetostriction data and their field derivative show that the transition becomes smeared out with increasing temperature.
The maxima of the derivatives shift towards higher fields with increasing temperature. Furthermore, the height of the peak in the magnetostriction rapidly decreases with increasing temperature for both field directions; roughly, $1/\lambda_{\rm max}$ scales with $(T^2 + \mathrm{const.})$; see Figs.~3(e) and (f). \begin{figure} \centerline{\psfig{file=baier_fig1.eps,width=12cm}} \caption{Magnetostriction $\Delta L(H)/L_0$ along the $c$ axis of Ca$_{1.8}$Sr$_{0.2}$RuO$_4$, measured in a field applied parallel (a) and perpendicular (b) to the $c$ axis. Panel (c) displays the derivative $\lambda(H)=\frac{1}{L_0}\frac{\partial L}{\partial H}$ for the $c$ axis in a magnetic field from selected measurements with d$H$/d$t > 0$. A comparison of $\lambda(H)$ for increasing (d$H$/d$t > 0$) and decreasing (d$H$/d$t < 0$) field is presented in panel (d). The inverse peak height $1/\lambda_{\rm max}$ of the $\lambda(H)$ anomaly at the metamagnetic transition is plotted versus $T$ and $T^2$ in panels (e) and (f), respectively.} \label{fig:baier_fig1} \end{figure} At the lowest temperature of 0.3\ K, the magnetostriction along both field directions was measured upon increasing and decreasing field. No hysteresis is discernible, which is surprising in view of the expected first-order character of the metamagnetic transition. Furthermore, even at the lowest temperature studied, the transition appears quite broad, in particular when compared to the metamagnetic transition in Sr$_3$Ru$_2$O$_7$, which consists of three contributions, each possessing a width of the order of a tenth of a tesla \cite{13,gegenwart}. We cannot exclude that similar features are hidden in the broader peak in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$, but one may note that the symmetry in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ is orthorhombic already at zero field. In Ca$_{2-x}$Sr$_{x}$RuO$_4$\ the mixed occupation of Ca and Sr with distinct ionic radii induces strong intrinsic disorder, with local variations of the tilt and rotation angles together with the concomitant local variation of the electronic structure. Evidence for local disorder in the Ca$_{2-x}$Sr$_{x}$RuO$_4$-series has recently been found in ARPES and STM studies \cite{unordnung}. The intrinsic disorder is most likely responsible for the broadening of the transition and may further suppress any hysteresis. Keeping the strong microscopic disorder in mind, it appears very difficult to determine the thermodynamic critical end-point in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ or even to discuss its existence at finite temperatures. \begin{figure} \centerline{\psfig{file=baier_fig2.eps,width=11cm}} \caption{Thermal expansion of Ca$_{1.8}$Sr$_{0.2}$RuO$_4$ parallel to the $c$ axis in a longitudinal magnetic field. The upper left panel shows the thermal expansion coefficient $\alpha_c(T)=\frac{1}{c}\frac{\partial c}{\partial T}$ and the upper right panel the length change $\Delta c/c$ in fields below and above the metamagnetic transition at $H_c \simeq 5.7$\,T. The inset shows a magnified view of the $\alpha_c(T)$ anomaly in the vicinity of the metamagnetic transition. In the lower panels, $\alpha_c/T$ is plotted on a logarithmic scale in fields far away from (left) and in the vicinity (right) of the metamagnetic transition.} \label{fig:baier_fig2} \end{figure} Figure 4 shows the results of the thermal-expansion measurements taken along the $c$-direction with the field parallel to $c$ (longitudinal configuration).
Both the thermal expansion coefficient $\alpha_c(T)=\frac{1}{c}\frac{\partial c}{\partial T}$ and the length change $\Delta c/c$ are shown in the upper panels. One immediately recognizes that the strong thermal expansion anomaly occurring around 20\ K changes its sign upon increase of the magnetic field, in accordance with the idea that all these effects are due to the orbital rearrangement \cite{kriener}. Here, we want to discuss whether the effects across the metamagnetic transition in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ can be related to the accumulation of entropy expected at the thermodynamic end-point. For Sr$_3$Ru$_2$O$_7$\ it is argued that quantum criticality plays a dominant role in spite of the first-order character of the metamagnetic transition, since the end-point of the metamagnetic transition would lie at sufficiently low temperature \cite{11,12,13}. It is therefore interesting to look for signatures of a quantum-critical end-point in the thermodynamic properties of Ca$_{1.8}$Sr$_{0.2}$RuO$_4$, motivating us to extend the previous data \cite{kriener} to lower temperatures. In the inset of the upper-left panel of Figure 4 we show the thermal expansion coefficient for magnetic fields close to the low-temperature metamagnetic transition. The anomalous effects seem to be essentially suppressed when approaching the transition field. This effect is also seen when plotting the ratio of the thermal expansion coefficient to temperature, $\alpha_c(T)/T$; see the lower panels of Figure 4. Upon increasing the magnetic field, $\alpha_c(T)/T$ first increases and exhibits maxima slightly below the metamagnetic transition; see Figure 5. At the transition it changes sign, and upon further field increase there is a minimum slightly above the transition. The absolute value of $\alpha_c(T)/T$ is roughly symmetric in $H-H_{mm}$, and the distance of the two extrema is in agreement with the width of the transition seen in the low-temperature magnetostriction. Dividing the $\alpha_c(T)/T$ values by the analogous $C_p(T)/T$ values, one may determine the Gr\"uneisen parameter. The field dependence of the $C_p(T)/T$ ratio was reported in our previous paper \cite{kriener}. Upon increasing the magnetic field at low temperature, $C_p(T)/T$ increases only by about 20\% up to a maximum at the metamagnetic transition and then drops rapidly above the transition. Consequently, the Gr\"uneisen parameter does not diverge when approaching $H_{mm}$ in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. \begin{figure} \centerline{\psfig{file=baier_alpha_T_H.eps,width=6cm}} \caption{ The $c$-axis thermal-expansion coefficient divided by the temperature in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ as a function of the magnetic field. } \label{diffraction-c} \end{figure} From inelastic neutron scattering, it is known that Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ exhibits at least two magnetic instabilities \cite{friedt-prl,steffens-unp} related to strongly enhanced magnetic fluctuations. An incommensurate antiferromagnetic contribution arising from Fermi-surface effects appears to compete with a quasi-ferromagnetic instability. The latter can be directly deduced from the temperature dependence of the macroscopic susceptibility \cite{friedt-prl,9}. For compositions close to Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ a ferromagnetic cluster glass has even been reported \cite{9}.
At intermediate temperatures the macroscopic susceptibility for $x=0.2$ exceeds that of Ca$_{1.5}$Sr$_{0.5}$RuO$_4$, but it exhibits a maximum at around 10\ K and is much smaller than that for $x=0.5$ at low temperature. Compared to a Curie-Weiss extrapolation, the susceptibility for $x=0.2$ is significantly reduced at the lowest temperatures. The incipient ferromagnetic instability occurring in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ as well as in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ seems to be efficiently blocked by the structural anomaly flattening the RuO$_6$-octahedron at low temperatures. This effect may reduce the amplitude of the associated fluctuations and enhance their characteristic energy. Through the transfer of electrons into the $\gamma$-band, the ferromagnetic instability is weakened, possibly due to a shift of the van-Hove singularity. At higher fields, the compound is forced into ferromagnetic ordering; cooling then further stabilizes this ordering and weakens the quasi-ferromagnetic fluctuations, which explains the reversed structural anomalies occurring upon cooling for fields above $H_{mm}$. The sign change of the thermal-expansion anomaly just at the transition field, its large amplitude, and its nearly symmetric behavior around the transition field imply that the metamagnetic transition is related to strong fluctuations \cite{garst}. The critical end-point of the metamagnetic transition in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$, although hidden by the intrinsic disorder of the system, must be close within the relevant energy scales. \begin{figure} \centerline{\psfig{file=baier_fig4.eps,width=11cm}} \caption{Thermal expansion and magnetostriction of Ca$_{1.5}$Sr$_{0.5}$RuO$_4$ parallel (left) and perpendicular (right) to the $c$ axis. The uppermost diagrams display $\alpha(T)$ for both directions, each in a longitudinal applied magnetic field. The corresponding length change $\Delta L/L$ is presented below. These diagrams additionally show the results obtained from measurements in a transverse magnetic field as broken lines. The lowermost panels show the magnetostriction, each recorded in a longitudinal applied magnetic field.} \label{fig:baier_fig4} \end{figure} Garst and Rosch \cite{garst} and Gegenwart et al. \cite{gegenwart} have made quantitative predictions for a metamagnetic transition related to a quantum-critical end-point, which were already tested for the Sr$_3$Ru$_2$O$_7$-compound. First, $\alpha /T$ should vary as $\vert H -H_{mm}\vert ^{-{4 \over 3}}$ on both sides of the transition. The Ca$_{1.8}$Sr$_{0.2}$RuO$_4$-data shown in Fig.~5 clearly deviate from such a behavior; in particular, there is no divergence in the experimental data. Close to the transition, the microscopic disorder may superpose positive and negative thermal expansion anomalies cancelling each other. The almost complete suppression of the anomaly close to the metamagnetic transition field is only possible if the intrinsic dependence of $\alpha/T$ on $H-H_{mm}$ is fully antisymmetric, giving further weight to our interpretation that the strongest fluctuations appear just at the metamagnetic transition and that the critical end-point must be quite close. These theories furthermore correctly predict that the thermal expansion anomalies increase in temperature with increasing $\vert H -H_{mm}\vert$. However, the scaling laws proposed for the thermal expansion do not agree perfectly with our data \cite{baier}. Again, the intrinsic disorder might change the temperature dependencies quite drastically.
In addition, the strong antiferromagnetic fluctuations, which are well established in Ca$_{2-x}$Sr$_{x}$RuO$_4$, will also interfere with the thermodynamic parameters. \begin{figure} \centerline{\psfig{file=baier_fig5.eps,width=10cm}} \caption{ Comparison of the anomalous behavior of the $c$-axis magnetostriction (upper panels) and the magnetization (lower panels) for Ca$_{1.5}$Sr$_{0.5}$RuO$_4$. On the right, the data obtained from measurements in a magnetic field applied along the $c$-direction are shown. The diagrams on the left display the corresponding derivative. The broken line in the upper panels represents an example of the magnetostriction in a magnetic field perpendicular to the $c$ axis.} \label{fig:baier_fig5} \end{figure} \subsection{ Magnetism in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$ } Concerning the crystal structure, Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ differs from the Ca$_{1.8}$Sr$_{0.2}$RuO$_4$-compound by the absence of the long-range tilt distortion. This is visible in the $c$-axis thermal-expansion coefficient, which is negative over a wide temperature interval; see Figure 1. Due to the larger, positive thermal-expansion coefficient in the $a,b$-plane, the volume thermal expansion is, however, positive also for $x=0.5$. Taking into account the different thermal expansion behavior at intermediate temperatures, the low-temperature $c$-axis anomalies are qualitatively similar in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ and in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$; compare Figures 6 and 4. This suggests that a metamagnetic transition also occurs in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$. However, all structural anomalies are significantly smaller for $x=0.5$. The field-dependent thermal expansion for $x=0.5$ was measured in the four configurations with field and length change either parallel or perpendicular to the $c$-axis. Again, the differences between the longitudinal and transverse configurations arise mainly from a shift of the transition field, which is much smaller for fields oriented perpendicular to the $c$-axis. In addition to the results discussed for $x=0.2$, these Ca$_{1.5}$Sr$_{0.5}$RuO$_4$-data show that the in-plane lattice parameters anomalously increase upon cooling in zero field and become shorter in high fields. The magnetostriction data shown in the lowest panels of Figure 6 confirm the opposite signs of the field-induced length changes parallel and perpendicular to the $c$-axis. The volume magnetostriction is about an order of magnitude smaller than the uniaxial components, confirming our interpretation that these effects arise from an orbital rearrangement between the $t_{2g}$-orbitals \cite{kriener}. The strong reduction of the magnetostriction and of the thermal expansion anomalies for $x=0.5$ must be related to the absence of the long-range tilt deformation. Either the lattice in the crystal structure with a simple rotational distortion is much harder, thereby reducing the structural response, or there is a direct interplay between the tilt and electronic parameters. One may expect the tilt deformation to act more strongly on the $d_{xz}$\ and $d_{yz}$\ orbitals, in contrast to the octahedron rotation, which interferes mostly with the $d_{xy}$\ orbitals \cite{5,6}. Figure 7 compares the magnetization and the field-induced length change along the $c$-direction and their derivatives versus magnetic field.
In the magnetization hysteresis cycles we find a remanent magnetization of a few thousandths of a $\mu_B$\ for fields along and perpendicular to the $c$-direction, in agreement with the cluster-glass behavior reported in Ref.~\cite{9}. The underlying short-range ferromagnetic ordering seems to be another consequence of the intrinsic disorder implied by the Ca/Sr mixing. Even though the remanent magnetization is extremely small, the underlying ferromagnetic ordering has a strong impact on the magnetization curves, in particular for the field within the planes. At low field the magnetization sharply increases, hiding any metamagnetic transition at higher field. The magnetization data shown in Figure 7 do not yield direct evidence for the metamagnetic transition. Furthermore, the steep low-field increase of the magnetization is not accompanied by a corresponding magnetostriction, in contrast to the close coupling between these quantities in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. The low-field increase of the magnetization seems to fully arise from the short-range ferromagnetic correlations; this feature is not related to the metamagnetic transition, and there is no comparable feature observed for $x=0.2$. However, the field derivative of the magnetization for Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ shown in the lower-right panel of Figure 7 clearly exhibits two features: in addition to the finite low-field value, there is a clear shoulder at higher fields, resembling the peak at the metamagnetic transition in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. This second feature corresponds to the maximum in the magnetostriction (upper right panel). Therefore, we may conclude that a metamagnetic transition still occurs in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$, although the associated jump in the magnetization is strongly reduced compared to that in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. \section{Conclusions} Detailed studies of the structural properties in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ and in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ by diffraction and by dilatometer methods allow us to clarify the microscopic and the thermodynamic aspects of the metamagnetic transition in these materials. A temperature- and magnetic-field-driven redistribution of the orbital occupation seems to be responsible for the anomalous structural effects. Upon cooling in zero field, $4d$-electrons seem to move into the $d_{xy}$\ orbitals, causing a suppression of a quasi-ferromagnetic instability. This effect is reversed either by cooling at high magnetic field or by applying a magnetic field at low temperature. The structural difference between Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ and Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ consists in the long-range tilt deformation, which is found to strongly enhance the structural as well as the magnetic anomalies. Even though the magnetization data in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$\ do not exhibit a well-defined metamagnetic transition, the field derivative of the magnetization as well as the magnetostriction clearly show that a qualitatively similar metamagnetic transition also occurs in Ca$_{1.5}$Sr$_{0.5}$RuO$_4$. However, this material already exhibits short-range ferromagnetic ordering at low temperature and zero magnetic field, which partially hides the metamagnetic transition. The same ferromagnetic instability is also present in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$\ at intermediate temperature, but here it is fully suppressed at low temperature due to the octahedron tilting.
By analyzing the thermal expansion anomalies close to the metamagnetic transition in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$, we present evidence that the related critical end-point must be close to the low-temperature transition on the relevant energy scales. In particular, we find that the $\alpha /T$-coefficient is nearly antisymmetric across the transition field. The more precise scaling predictions for the thermal expansion coefficient across a metamagnetic transition are, however, not fulfilled in Ca$_{1.8}$Sr$_{0.2}$RuO$_4$. In this material, intrinsic disorder as well as the competition between different magnetic instabilities appears to play an important role. \section*{ACKNOWLEDGMENTS} We wish to dedicate this manuscript to Professor Hilbert von L\"ohneysen on the occasion of his 60th birthday. He has made numerous contributions to the field of strongly correlated electron systems and quantum phase transitions from which we all have greatly benefited. This work was supported by the DFG through the Sonderforschungsbereich 608.
\section{Introduction} A task of major importance in the interplay between machine learning and network science is semi-supervised learning (SSL) over graphs. In a nutshell, SSL aims at predicting or extrapolating nodal attributes given: i) the values of those attributes at a subset of nodes and (possibly) ii) additional features at all nodes. A relevant example is protein-to-protein interaction networks, where the proteins (nodes) are associated with specific biological functions (the nodal attributes in this case are binary values indicating whether the protein participates in the function), thereby facilitating the understanding of pathogenic and physiological mechanisms. While significant progress has been achieved for this problem, most works consider that the relation among the nodal variables is represented by a single graph. This may be inadequate in many contemporary applications, where nodes may engage in multiple types of relations~\cite{kivela2014multilayer}, motivating the generalization of traditional SSL approaches for \emph{single-relational} graphs to \emph{multi-relational} graphs\footnote{{Many works in the literature refer to these graphs as multi-layer graphs~\cite{kivela2014multilayer}.}}. In the particular case of protein interaction networks, each layer of the graph could correspond to a different type of tissue (brain, muscle, ...). Alternatively, in a social network, each layer of the graph would capture a specific form of social interaction, such as friendship, family bonds, or coworker-ties \cite{wasserman1994social}. Such graphs can be represented in a tensor graph, where each slab of the tensor corresponds to a single relation. Despite their ubiquitous presence, the development of SSL methods that account for multi-relational networks is only in its infancy, see, e.g.,~\cite{kivela2014multilayer,ioannidis2018multilay}. This work develops a novel \emph{robust} deep learning framework for SSL over \emph{multi-relational} graphs. Graph-based SSL methods typically assume that the true labels are ``smooth'' with respect to the underlying network structure, which naturally motivates leveraging the topology of the network to propagate the labels and increase classification performance. Graph-induced smoothness may be captured by kernels on graphs~\cite{belkin2006manifold,ioannidis2018kernellearn}; Gaussian random fields \cite{zhu2003semi}; or low-rank parametric models based on the eigenvectors of the graph Laplacian or adjacency matrices~\cite{shuman2013emerging,marques2015aggregations}. Alternative approaches use the graph to embed the nodes in a vector space, and classify the points~\cite{weston2012deep,berberidis2018adaptive}. More recently, another line of work postulates that the mapping between the input data and the labels is given by a neural network (NN) architecture that incorporates the graph structure and generalizes the typical convolution operations; see, e.g., \cite{bronstein2017geometric, gama2018convolutional, kipf2016semi,schlichtkrull2018modeling,ioannidis2018graphrnn,simonovsky2017dynamic}. The parameters describing the graph convolutional NN (GCN) are then learned using labeled examples and feature vectors, and those parameters are finally used to predict the labels of the unobserved nodes. See, e.g., \cite{kipf2016semi,bronstein2017geometric,velivckovic2017graph,xu2018powerful}, for state-of-the-art results in SSL when nodes are accompanied by additional features.
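To fix ideas, the elementary operation underlying many of these architectures is a graph convolution that propagates node features through a normalized version of the adjacency matrix, followed by a pointwise nonlinearity. The following minimal single-layer sketch, written in the style of the GCN of \cite{kipf2016semi} (in Python with dense numpy arrays for clarity; the variable names are ours and not tied to any particular implementation), illustrates the principle:

\begin{verbatim}
import numpy as np

def gcn_layer(S, X, W):
    # S : (N, N) adjacency matrix of the graph
    # X : (N, F) input features, one row per node
    # W : (F, F_out) learnable weight matrix
    S_tilde = S + np.eye(S.shape[0])           # add self-loops
    d = S_tilde.sum(axis=1)                    # augmented node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S_hat = D_inv_sqrt @ S_tilde @ D_inv_sqrt  # symmetric normalization
    return np.maximum(S_hat @ X @ W, 0.0)      # ReLU(S_hat X W)
\end{verbatim}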
With the success of GCNs on graph learning tasks granted, recent results indicate that perturbations of the graph topology can severely deteriorate their classification performance \cite{zugner18adv,xu2019topology,dai2018adversarial}. Such uncertainty in the graph topology may be attributed to several reasons. First, oftentimes the graph is implicit and data-driven methods are employed for learning the topology~\cite{giannakis2017tutor}. However, each method relies on a different model and assumptions, and in the absence of a ground-truth graph selecting the appropriate graph-learning technique is challenging. An inappropriate model may introduce model-based perturbations to the learned graph. Moreover, consider the case of random graph models \cite{newman2018networks}, where the observed graph is a particular realization of the model and edges may be randomly perturbed. Similarly, this idea is also relevant in adversarial settings, where the links of the nominal graph are corrupted by some foe that aims to poison the learning framework. Adversarial perturbations target a subset of nodes and modify their links to promote misclassification of targeted nodes~\cite{wu19adv}. The designed graph perturbations are ``unnoticeable,'' which is feasible so long as the degree distribution of the perturbed graphs is similar to the initial distribution~\cite{zugner18adv}. GCNs learn nodal representations by extracting information within local neighborhoods. These learned features may be significantly perturbed if the neighborhood is altered. Hence, this vulnerability of GCNs challenges their deployment in critical applications dealing with security or healthcare, where robust learning is of major importance. Defending against adversarial, random, or model-based perturbations may unleash the potential of GCNs and broaden the scope of machine learning applications altogether. \vspace{.1cm} \noindent\textbf{Contributions.} This paper develops a deep learning framework for SSL over a collection of graphs with applications to both multi-relational data and robust learning. Specifically, the contribution of this work is five-fold. \begin{itemize} \item[\textbf{C1}.] A tensor-GCN architecture is developed that accounts for multi-relational graphs. Learnable coefficients are introduced, allowing the flexible model to adapt to the multiple graphs and identify the underlying structure of the data. \item[\textbf{C2}.] A multi-hop convolution together with a residual feed of the data for each of the graphs are proposed, broadening the class of (graph signal) transformations the GCN implements and, hence, facilitating the diffusion of the features across the graph. In the training phase, suitable (graph-based) regularizers are considered to avoid overfitting and further capitalize on the graph topology. \item[\textbf{C3}.] For datasets where nodes are involved in different relations and (multi-relational) data associated with the different graphs exist, the proposed TGCN architecture provides a powerful learning framework to carry out predictions that leverage the information codified in the multiple graphs. \item[\textbf{C4}.] Our TGCN also facilitates robust SSL for single- or multi-relational data when the underlying topology is perturbed. Model-based, random, and adversarial perturbations are considered, and our TGCN is adapted to robustify SSL over these perturbed graphs.
To defend against adversaries, a novel edge-dithering (ED) approach is developed that generates ED graphs by sampling edges of the original graph with probabilities selected to enhance robustness. \item[\textbf{C5}.] Numerical tests with multi-relational protein networks showcase the merits of the proposed tensor-graph framework. Further experiments under noisy features, noisy edge weights, and random as well as adversarial edge perturbations verify the robustness properties of our novel approach. \end{itemize} \noindent\textbf{Notation.} Scalars are denoted by lowercase, column vectors by bold lowercase, matrices by bold uppercase, and tensors by bold uppercase underscored letters. Superscripts $~^{\top}$ and $~^{-1}$ denote, respectively, the transpose and inverse operators; while $\boldsymbol 1_N$ stands for the $N\times1$ all-one vector. Finally, if $\mathbf A$ is a matrix and $\mathbf x$ a vector, then $||\mathbf x||^2_{\mathbf A}:= \mathbf x^{\top} \mathbf A^{-1} \mathbf x$ (provided that the inverse exists), $\|\mathbf A\|_1$ denotes the $\ell_1$-norm of the vectorized matrix, and $\|\mathbf A\|_F$ is the Frobenius norm of $\mathbf A$. \section{SSL over multi-relational graphs}\label{sec:probform} Consider a network of ${N}$ nodes, with nodal (vertex) set $\mathcal{V}:=\{v_1,\ldots,v_{N}\}$, connected through ${I}$ relations. The connectivity at the ${i}$th relation is captured by the ${N}\times{N}$ matrix $\mathbf{S}_{i}$, and the scalar $S_{nn'i}$ represents the influence of $v_{n}$ on $v_{n'}$ under the ${i}$th relation. The matrices $\{\mathbf{S}_{i}\}_{{i}=1}^{I}$ are collected in the ${N}\times{N}\times{I}$ tensor $\underline{\mathbf{S}}$. To complement the examples already provided in the introduction, and focusing on the case of social networks, each $i$ could for instance represent a relation via a particular online social app such as Facebook, LinkedIn, or Twitter; see Fig.~\ref{fig:multilayer}. Regardless of the particular application, the graph-induced neighborhood of $v_{n}$ for the ${i}$th relation is \begin{align} \label{eq:neighborhood} \mathcal{N}_{n}^{({i})}:=\{{n'}:S_{nn'i}\ne0,~~ v_{n'}\in\mathcal{V}\}. \end{align} \begin{figure} \centering \input{figs/multiRelGraphs.tex} \caption{A multi-relational network of voters.} \label{fig:multilayer} \end{figure} We associate an $ F\times 1$ feature vector $\mathbf{x}_{{n}}$ with the ${n}$th node, and collect those vectors in the ${N}\times F$ feature matrix $\mathbf{X}:=[\mathbf{x}_{1},\ldots,\mathbf{x}_{{N}}]^{\top}$. The entry $X_{{n}p}$ may denote, for example, the salary of the ${n}$th individual in the LinkedIn social network. We also consider that each node ${n}$ has a label of interest $y_{n}\in\{0,\ldots,K-1\}$, which, in the last example, could represent the education level of a person. In SSL we have access to the labels only at a subset of nodes $\{y_{{n}}\}_{{n}\in\mathcal{M}}$, with $\mathcal{M} \subset\mathcal{V}$. This partial availability may be attributed to privacy concerns (medical data); energy considerations (sensor networks); or unrated items (recommender systems). The ${N}\times K$ matrix $\mathbf{Y}$ is the ``one-hot'' representation of the true nodal labels, that is, if $y_{n}=k$ then $Y_{{n},k}=1$ and $Y_{{n},k'}=0, \forall k'\ne k$. The goal of this paper is to develop a \textit{robust tensor-based deep learning architecture} for SSL over \textit{multi-relational graphs}.
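To fix ideas before presenting the architecture, the following minimal sketch (in NumPy; the variable names and toy dimensions are ours, not part of the formulation) assembles a small tensor graph $\underline{\mathbf{S}}$, a feature matrix $\mathbf{X}$, the one-hot label matrix $\mathbf{Y}$, and the neighborhoods in \eqref{eq:neighborhood}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, I, F, K = 6, 2, 4, 3          # nodes, relations, features, classes

# Tensor graph: S[:, :, i] is the adjacency matrix of the i-th relation.
S = (rng.random((N, N, I)) < 0.3).astype(float)

X = rng.standard_normal((N, F))  # rows of X are the nodal feature vectors
y = rng.integers(0, K, size=N)   # true labels, observed only on a subset M

Y = np.zeros((N, K))             # "one-hot" label matrix
Y[np.arange(N), y] = 1.0

def neighborhood(S, n, i):
    """Graph-induced neighborhood of node n under relation i."""
    return np.flatnonzero(S[n, :, i])
\end{verbatim}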
Given $\mathbf{X}$, the proposed network maps each node $n$ to a corresponding label $y_{n}$ and, hence, estimates the unavailable labels. \section{Proposed TGCN architecture} Deep learning architectures typically process the input information using a succession of $L$ hidden layers. Each of the layers is composed of a conveniently parametrized linear transformation, a scalar nonlinear transformation, and, oftentimes, a dimensionality reduction (pooling) operator. The intuition is to combine local features nonlinearly so as to progressively extract useful information~\cite{goodfellow2016deep}. GCNs tailor these operations to the graph that supports the data \cite{bronstein2017geometric}, including the linear \cite{defferrard2016convolutional}, nonlinear \cite{defferrard2016convolutional} and pooling \cite{gama2018convolutional} operators. In this section, we describe the architecture of our novel multi-relational TGCN, which inputs the known features at the first layer and outputs the predicted labels at the last layer. We first present the per-layer operation of the TGCN, then the output layer, and finally discuss the training of our NN. \subsection{Single layer operation} Let us consider an intermediate layer (say the $l$th one) of our architecture. The output of that layer is the ${N}\times{I}\times P^{(l)}$ tensor $\check{\underline{\mathbf{Z}}}^{(l)}$ that holds the $P^{(l)}\times 1$ feature vectors $\check{\mathbf{z}}_{{n}{i}}^{(l)}, \forall {n},{i}$, with $P^{(l)}$ being the number of output features at $l$. Similarly, the ${N}\times{I}\times P^{(l-1)}$ tensor $\check{\underline{\mathbf{Z}}}^{(l-1)}$ represents the input to the layer. Since our focus is on predicting labels on all the nodes, we do not consider a dimensionality reduction (pooling) operator in the intermediate layers. The mapping from $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to $\check{\underline{\mathbf{Z}}}^{(l)}$ can then be split into two steps. First, a linear transformation maps the ${N}\times{I}\times P^{(l-1)}$ tensor $\check{\underline{\mathbf{Z}}}^{(l-1)}$ into the ${N}\times{I}\times P^{(l)}$ tensor $\underline{\mathbf{Z}}^{(l)}$. Then, a scalar nonlinear transformation $\sigma({\cdot})$ is applied to $\underline{\mathbf{Z}}^{(l)}$ as follows \begin{align}\label{eq:nonlinear} \check{\underline{{Z}}}_{{i}{n}p}^{(l)}:=\sigma( {\underline{{Z}}_{{i}{n}p}^{(l)}}). \end{align} Collecting all the elements in \eqref{eq:nonlinear}, we obtain the output of the $l$th layer $\check{\underline{\mathbf{Z}}}^{(l)}$. A common choice for $\sigma{(\cdot)}$ is the rectified linear unit (ReLU), i.e., $\sigma{(c)}=\text{max}(0,c)$ \cite{goodfellow2016deep}. Hence, the main task is to define a linear transformation that maps $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to $ \underline{\mathbf{Z}}^{(l)}$ and is tailored to our problem setup. Traditional convolutional NNs (CNNs) typically consider a small number of trainable weights and then generate the linear output as a convolution of the input with these weights~\cite{goodfellow2016deep}. The convolution combines values of close-by inputs (consecutive time instants, or neighboring pixels) and thus extracts information of local neighborhoods. Permeating the benefits of CNNs to the graph domain, GCNs replace the convolution with a graph filter whose parameters are also learned~\cite{bronstein2017geometric}. This preserves locality, reduces the degrees of freedom of the transformation, and leverages the structure of the graph.
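In code, the generic layer just described can be sketched as follows (a minimal rendering; the linear map is left abstract here and is specified by the modules introduced next):
\begin{verbatim}
import numpy as np

def relu(C):
    """Entrywise ReLU: the scalar nonlinearity sigma(.) of the layer."""
    return np.maximum(0.0, C)

def generic_layer(Z_in, linear_map):
    """One hidden layer: a trainable linear map followed by the ReLU.

    Z_in is the N x I x P_{l-1} input tensor; linear_map is any callable
    returning the N x I x P_l tensor Z^{(l)}.
    """
    return relu(linear_map(Z_in))

# Toy usage with a placeholder (identity) linear map.
rng = np.random.default_rng(0)
Z_out = generic_layer(rng.standard_normal((5, 2, 3)), lambda Z: Z)
\end{verbatim}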
In the following three subsections, we present the structure of the novel tensor-graph linear transformation and discuss in detail how the multi-relational graph is taken into account. \noindent\textbf{Neighborhood aggregation module (NAM)}. First, we consider a neighborhood aggregation module that, for each of the graphs, combines linearly the information available locally within each {graph} neighborhood. Since the neighborhood depends on the particular relation \eqref{eq:neighborhood}, we obtain for the $i$th relation and $n$th node \begin{align} \mathbf{h}_{{n}{i}}^{(l)} := \sum_{{n'}\in\mathcal{N}_{n}^{({i})}} S_{nn'i} \check{\mathbf{z}}_{{n'}{i}}^{(l-1)}. \label{eq:sem} \end{align} While the entries of $\mathbf{h}_{{n}{i}}^{(l)}$ depend only on the one-hop neighbors of $n$ (one-hop diffusion), the successive application of this operation across layers will increase the reach of the diffusion, spreading the information across the network. Specifically, consider the $r$th power $\mathbf{S}_i^r$ of the matrix $\mathbf{S}_i$. Indeed, the vector $\mathbf{S}_i^r\mathbf{x}$ holds the linear combination of the values of $\mathbf{x}$ in the $r$-hop neighborhood~\cite{marques2015aggregations}. After defining the matrices $\mathbf{S}_{i}^{(r)}:=\mathbf{S}_{i}^r$ for $r=1,\ldots,R$ and ${i}=1,\ldots,{I}$, consider the following parametrized mapping \begin{align} \mathbf{h}_{{n}{i}}^{(l)} :=\sum_{r=1}^R \sum_{n'=1}^{N} C^{(r,l)}_{i}S_{nn'i}^{(r)} \check{\mathbf{z}}_{{n'}{i}}^{(l-1)}, \label{eq:gf} \end{align} where the learnable coefficients $C^{(r,l)}_{i}$ weight the effect of the corresponding $r$th hop neighbors of node $n$ according to relation ${i}$. At the $l$th layer, the coefficients $\{C^{(r,l)}_{i}\}_{\forall (r,i)}$ are collected in the $R\times{I}$ matrix $\mathbf{C}^{(l)}$. The proposed transformation in \eqref{eq:gf} aggregates the diffused signal in the $R$-hop neighborhoods per $i$; see also Fig. \ref{fig:neighboragrmodule}. \begin{figure} \centering \input{figs/neighboragrmodule.tex} \caption{The NAM combines the features using the multi-relational graph. The picture focuses on node $n$ and illustrates the case where $R$-hop neighbors (with $R=2$) are considered. Note that, as shown in the picture, the local neighborhood is not the same across the different graphs.} \label{fig:neighboragrmodule} \end{figure} \noindent\textbf{Graph adaptive module (GAM)}. The extracted feature $\mathbf{h}_{{n}{i}}^{(l)}$ captures the diffused input per relation ${i}$. The importance of a particular feature or relation will depend on the inference task at hand. For example, in predicting voting preference, the friendship network may be more important than the coworker relation; cf. Fig.~\ref{fig:multilayer}. As a result, the learning algorithm should be able to adapt to the prevalent features. To that end, we adapt to the different relations and combine $\mathbf{h}_{{n}{i}}^{(l)}$ across $i$ as follows \begin{align}\label{eq:graphadaptive} \mathbf{g}_{{n}{i}}^{(l)}:=&\sum_{{i'}=1}^{I}{R}_{{i}{i'}{n}}^{(l)}{\mathbf{h}_{{n}{i'}}^{(l)}} \end{align} where ${R}_{{i}{i'}{n}}^{(l)}$ mixes the outputs of the different graphs. Another key contribution of this paper is to treat the graph-mixing weights $\{{R}_{{i}{i'}{n}}^{(l)}\}_{\forall ({i},{i'},{n})}$, which can be collected in the ${I}\times{I}\times N$ tensor $\underline{\mathbf{R}}^{(l)}$, as training parameters.
The graph-mixing weights endow our TGCN with the ability to learn how to combine and adapt to the different relations encoded in the multi-relational graph; see also Fig.~\ref{fig:graphagrmodule}. Clearly, if prior information on the dependence among relations exists, this can be used to constrain the structure of $\underline{\mathbf{R}}^{(l)}$ (e.g., by imposing it to be diagonal or sparse). The graph-adaptive combination in \eqref{eq:graphadaptive} allows for different $ R_{ii'n}$ per $n$. Considering the same $ R$ for each $n$, that is $ R_{ii'n}^{(l)}= R_{ii'}^{(l)}$, results in a design with fewer parameters at the expense of reduced flexibility. For example, certain voters may be affected more significantly by their friends, whereas others by their coworkers. Using the adaptive module, our network can achieve personalized predictions. \begin{figure} \centering \input{figs/graphaggregationmodule.tex} \caption{The GAM combines the features per $i$, based on the trainable coefficients $\{{R}_{{i}{i'}{n}}\}$. When $\underline{\mathbf{R}}$ is sparse, only features corresponding to the most significant relations will be active.} \label{fig:graphagrmodule} \end{figure} \noindent\textbf{Feature aggregation module (FAM)}. Next, the extracted graph-adaptive diffused features are mixed using learnable scalars $W_{nipp'} ^{(l)}$ as follows \begin{align}\label{eq:linconv} \underline{{Z}}_{{n}{i}p}^{(l)}:=& \sum_{p'=1}^{P^{(l-1)}}W_{nipp'}^{(l)}G_{nip'}^{(l)}, \end{align} for all $(n,i,p)$ and where $G_{nip'}^{(l)}$ represents the $p'$th entry of $\mathbf{g}_{{n}{i}}^{(l)}$. The ${N}\times{I}\times P^{(l)}\times P^{(l-1)}$ tensor $\underline{\weightmat}^{(l)}$ collects the feature-mixing weights $\{W_{nipp'}^{(l)}\}_{\forall (n,i,p,p')}$. The linear transformations that map the input tensor $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to $\underline{\mathbf{Z}}^{(l)}$ are summarized as follows \begin{align} \label{eq:linGCN} \underline{\mathbf{Z}}^{(l)}&:= f(\check{\underline{\mathbf{Z}}}^{(l-1)}; \bm{\theta}_z^{(l)}),\;\;\text{with}\\ \label{eq:param} \bm{\theta}_z^{(l)}&:=[\text{vec}(\underline{\weightmat}^{(l)});\text{vec}(\underline{\mathbf{R}}^{(l)}) ;\text{vec}(\mathbf{C}^{(l)})]^{\top}, \end{align} where $f$ denotes the successive application of the three linear modules just introduced (namely NAM, GAM and FAM) and $\bm{\theta}_z^{(l)}$ collects the learnable weights involved in those modules [cf. \eqref{eq:gf}-\eqref{eq:linconv}]. \subsection{Residual GCN layer} Successive application of $L$ TGCN layers diffuses the input ${\mathbf{X}}$ across the $LR$-hop graph neighborhood, cf.~\eqref{eq:sem}. However, the exact size of the relevant neighborhood is not always known a priori. To endow our architecture with increased flexibility, we propose a residual TGCN layer that inputs ${\mathbf{X}}$ at each $l$ and, thus, captures multiple types of diffusions\footnote{This is also known as a skip connection~\cite{he2016deep}.}. Hence, the linear operation in \eqref{eq:linGCN} is replaced by the residual (auto-regressive) linear tensor mapping \cite[Ch. 10]{goodfellow2016deep} \begin{align} \underline{\mathbf{Z}}^{(l)}:= f(\check{\underline{\mathbf{Z}}}^{(l-1)}; \bm{\theta}_z^{(l)})+ f({\mathbf{X}}; \bm{\theta}_x^{(l)}) \label{eq:residuallayer} \end{align} where $\bm{\theta}_x^{(l)}$ encodes trainable parameters, cf. \eqref{eq:param}.
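Putting the three modules and the residual feed together, one forward pass through a layer can be sketched as follows (a minimal, loop-based NumPy rendering of \eqref{eq:gf}, \eqref{eq:graphadaptive}, \eqref{eq:linconv} and \eqref{eq:residuallayer}; function and variable names are hypothetical, and the toy dimensions are ours):
\begin{verbatim}
import numpy as np

def tgcn_linear(Zc, S_pows, C, Rmix, W):
    """Linear TGCN map: NAM, then GAM, then FAM."""
    N, I, P_in = Zc.shape
    R = C.shape[0]
    H = np.zeros((N, I, P_in))
    for i in range(I):                      # NAM: multi-hop diffusion per relation
        for r in range(R):
            H[:, i, :] += C[r, i] * (S_pows[i][r] @ Zc[:, i, :])
    G = np.einsum('ijn,njp->nip', Rmix, H)  # GAM: mix relations per node
    return np.einsum('niqp,nip->niq', W, G) # FAM: mix features per node/relation

def tgcn_layer(Zc, X_rep, S_pows, theta_z, theta_x):
    """Residual layer: f(Z; theta_z) + f(X; theta_x), then entrywise ReLU."""
    lin = tgcn_linear(Zc, S_pows, *theta_z) + tgcn_linear(X_rep, S_pows, *theta_x)
    return np.maximum(0.0, lin)

# Toy forward pass with random parameters.
rng = np.random.default_rng(1)
N, I, F, P, R = 6, 2, 4, 5, 2
S = (rng.random((N, N, I)) < 0.3).astype(float)
S_pows = [[np.linalg.matrix_power(S[:, :, i], r + 1) for r in range(R)]
          for i in range(I)]                # powers S_i^r for r = 1, ..., R

def make_theta():
    return (rng.standard_normal((R, I)),        # diffusion coefficients C
            rng.standard_normal((I, I, N)),     # graph-mixing tensor R
            rng.standard_normal((N, I, P, F)))  # feature-mixing tensor W

X_rep = np.repeat(rng.standard_normal((N, F))[:, None, :], I, axis=1)
Z1 = tgcn_layer(X_rep, X_rep, S_pows, make_theta(), make_theta())  # N x I x P
\end{verbatim}
In an actual implementation, these tensors would be trainable variables of the computational graph (e.g., in TensorFlow), with the nonlinearity \eqref{eq:nonlinear} applied exactly as above.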
When viewed as a transformation from ${\mathbf{X}}$ to $\underline{\mathbf{Z}}^{(l)}$, the operator in \eqref{eq:residuallayer} implements a broader class of graph diffusions than the one in \eqref{eq:linGCN}. If, for example, $l=3$ and $R=1$, then the first summand in \eqref{eq:residuallayer} is a 1-hop diffusion of a signal that corresponds to a $2$-hop (nonlinear) diffused version of ${\mathbf{X}}$, while the second summand diffuses ${\mathbf{X}}$ in one hop. At a more intuitive level, the presence of the second summand also guarantees that the impact of ${\mathbf{X}}$ on the output does not vanish as the number of layers grows. The auto-regressive mapping in \eqref{eq:residuallayer} facilitates the application of our architecture in scenarios with time-varying inputs and labels. Specifically, with $t$ denoting the time index and given time-varying data $\{{\mathbf{X}}_t\}_{t=1}^T$, one would set $l=t$, replace ${\mathbf{X}}$ in \eqref{eq:residuallayer} with ${\mathbf{X}}^{(l)}$ and then set ${\mathbf{X}}^{(l)}={\mathbf{X}}_t$. This will be studied in detail in our future work towards predicting dynamic processes over multi-relational graphs. \subsection{Initial and final layers} Regarding layer $l=1$, the input $\check{\underline{\mathbf{Z}}}^{(0)}$ is defined as \begin{align}\label{eq:input_first_layer} \check{\mathbf{z}}_{{n}{i}} ^{(0)}=\mathbf{x}_{n} \;\;\text{for}\;\; \text{all}\;\; (n,i). \end{align} On the other hand, the output of our graph architecture is obtained by taking the output of the layer $l=L$ and applying \begin{align}\label{eq:output} \hat{\mathbf{Y}}:=g(\check{\underline{\mathbf{Z}}} ^{(L)};\bm{\theta}_g), \end{align} where $g(\cdot)$ is a nonlinear function, $\hat{\mathbf{Y}}$ is an ${N}\times K$ matrix, $\hat{Y}_{{n},k}$ represents the probability that $y_{n}=k$, and $\bm{\theta}_g$ are trainable parameters. The function $g(\cdot)$ depends on the specific application, with the normalized exponential function (softmax) being a popular choice for classification problems, that is, \begin{align} \hat{Y}_{{n},k}=\frac{\exp\big(\check{\underline{{Z}}}_{n,k}^{(L)}\big)}{ \sum_{k'=1}^K\exp\big(\check{\underline{{Z}}}_{n,k'}^{(L)}\big)}. \end{align} For notational convenience, the global mapping $\mathcal{F}$ from $\mathbf{X}$ to $\hat{\mathbf{Y}}$ dictated by our TGCN architecture is denoted as \begin{align} \hat{\mathbf{Y}}:=\mathcal{F}\big(\mathbf{X};\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L,\bm{\theta}_g\big), \end{align} and represented in the block diagram depicted in Fig.~\ref{fig:grnn}. \begin{figure*} \centering \input{figs/grnn.tex} \caption{TGCN with $L$ hidden (black) and one output (red) layers. The input $\mathbf{X}$ contains a collection of features per node and the output to be predicted is the probability of each node to belong to each of the $K$ classes (labels) considered. Each layer of the TGCN is composed of our three novel modules (NAM, GAM, FAM) described in equations \eqref{eq:gf}, \eqref{eq:graphadaptive}, and \eqref{eq:linconv}. Notice the skip connections that input $\mathbf{X}$ to each layer [cf. \eqref{eq:residuallayer}]. } \label{fig:grnn} \end{figure*} \subsection{Training and graph-smooth regularizers} The proposed architecture depends on the weights in \eqref{eq:residuallayer} and \eqref{eq:output}. We estimate these weights by minimizing the discrepancy between the estimated labels and the given ones.
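Before formalizing the training problem, a short sketch of the softmax output and the cross-entropy fit used below (names are hypothetical; the relation dimension of $\check{\underline{\mathbf{Z}}}^{(L)}$ is assumed to have been collapsed by $g$ into an $N\times K$ array):
\begin{verbatim}
import numpy as np

def softmax_output(Z_last):
    """Map the N x K last-layer features to per-node class probabilities."""
    E = np.exp(Z_last - Z_last.max(axis=1, keepdims=True))  # stable exponentials
    return E / E.sum(axis=1, keepdims=True)

def cross_entropy(Y_hat, Y, labeled):
    """Fitting cost: cross-entropy over the labeled set M only."""
    return -np.sum(Y[labeled] * np.log(Y_hat[labeled] + 1e-12))

# Toy usage: 5 nodes, 3 classes, labels observed at nodes 0 and 2.
rng = np.random.default_rng(4)
Y_hat = softmax_output(rng.standard_normal((5, 3)))
Y = np.eye(3)[rng.integers(0, 3, size=5)]   # one-hot true labels
loss = cross_entropy(Y_hat, Y, labeled=[0, 2])
\end{verbatim}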
Hence, we arrive at the following minimization objective \begin{align}\label{eq:trainobj} \min_{\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L,\bm{\theta}_g}& \mathcal{L}_{tr} (\hat{\mathbf{Y}},\mathbf{Y}) +\mu_1\sum_{{i}=1}^{I}\text{Tr}(\hat{\mathbf{Y}}^{\top}\mathbf{S}_{i}\hat{\mathbf{Y}}) \nonumber\\ +&\mu_2\rho\big(\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L\big) +\lambda \sum_{l=1}^L\|\underline{\mathbf{R}}^{(l)}\|_1\nonumber\\ \text{s.t.}&~~ \hat{\mathbf{Y}}=\mathcal{F}\big( \mathbf{X};\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L,\bm{\theta}_g\big). \end{align} In our classification setup, a sensible choice for the fitting cost is the cross-entropy loss function over the labeled examples, that is, $\mathcal{L}_{tr} (\hat{\mathbf{Y}},\mathbf{Y}):=-\sum_{{n}\in\mathcal{M}}\sum_{k=1}^K Y_{{n} k}\ln{\hat{Y}_{{n} k}}$. The first (graph-based) regularizer in \eqref{eq:trainobj} promotes smooth label estimates over the graphs \cite{smola2003kernels}, and $\rho(\cdot)$ is an $\ell_2$ norm over the TGCN parameters typically used to avoid overfitting~\cite{goodfellow2016deep}. Finally, the $\ell_1$ norm in the third regularizer encourages learning sparse mixing coefficients, and hence it promotes activating only a subset of relations per $ l$. The learning algorithm will assign larger combining weights to topologies that are most appropriate for the given data. A backpropagation algorithm~\cite{rumelhart1986learning} is employed to minimize \eqref{eq:trainobj}. The computational complexity of evaluating \eqref{eq:residuallayer} scales linearly with the number of nonzero entries in $\underline{\mathbf{S}}$ (edges) [cf. \eqref{eq:sem}]. To recap, while most of the works in the GCN literature use a single graph with one type of diffusion \cite{bronstein2017geometric,kipf2016semi}, in this section we have proposed a (residual) TGCN architecture that: i) accounts for a collection of graphs defined over the same set of nodes; ii) diffuses the signals across each of the different graphs; iii) combines the signals at the different graphs using adaptive (learnable) coefficients; iv) implements the simple but versatile residual tensor mapping \eqref{eq:residuallayer}; and v) includes several types of graph-based regularizers. \section{Robust GCNs via tensor-graphs} In the previous section, the nodes were involved in $I$ different types of relations, with each slab of our tensor graph $\underline{\mathbf{S}}$ representing one of those relations. In this section, the proposed tensor-graph architecture is applied to robustify classical \emph{single-graph} GCNs. Consider now that the nodes are involved in a \emph{single} relation represented by the graph $\bar{\mathcal{G}}$, which does not necessarily capture the true topology but rather an approximate (nominal) version of it. That can be the case, for example, in applications involving random graph models \cite{bollobas2001random,newman2018networks,harris2013introduction}, where $\bar{\mathcal{G}}$ is a particular realization but other realizations could be considered as well. Similarly, this idea is also relevant in adversarial settings, where the links of the nominal graph $\bar{\mathcal{G}}$ are corrupted by some foe (see Fig. \ref{fig:sampled} for an illustration of this setup along with additional details).
Our approach in this section is to use $\bar{\mathcal{G}}=(\mathcal{V},\mathbf{A})$ to generate a set of $I$ candidate graphs $\{\mathcal{G}_{i}\}_{i=1}^{I}$, with $\mathcal{G}_{i}=(\mathcal{V},\mathbf{A}_{i})$, and then collect the adjacency matrices of those graphs in the tensor $\underline{\mathbf{S}}$. Clearly, this approach can also be used for multi-relational graphs, generating multiple candidate graphs for each relation. The next subsections elaborate on three scenarios of particular interest. \subsection{Robustness to the graph learning method} While in applications dealing with communications, power or transportation systems the network connecting the different nodes may be explicitly known, in a number of scenarios the graph is implicit and must be learned from observed data. Several methods to infer the topology exist, each relying on a different model that relates the graph with the properties of the data~\cite{giannakis2017tutor}. Since in most applications a ground-truth graph does not exist, the issue of how to select the appropriate graph-learning method arises. More importantly for the setup considered in this paper, the particular selected method (and, hence, the particular graph) will have an impact on the performance of the GCN. Consider first the $\kappa$-nearest neighbors ($\kappa$-NN) method, which is employed to construct graphs in various applications, including collaborative filtering, similarity search, and many others in data mining and machine learning~\cite{dong2011efficient}. This method typically computes the link between ${n}$ and ${n'}$ based on a distance between their nodal features. For instance, for the (squared) Euclidean distance we simply have $d(n,n')=\|\mathbf{x}_{n}-\mathbf{x}_{n'}\|_2^2$. Then, for each node $n$ the distances with respect to all other nodes $n'\neq n$ are ranked, and $n$ is connected with the $\kappa$ nodes with the smallest distances $\{d(n,n')\}$. However, selecting the appropriate $\kappa$ and distance metric $d(\cdot,\cdot)$ is often arbitrary and may not generalize well to unseen data, especially if the learning system operates in an online fashion. Hence, our approach to robustify SSL in that scenario is to consider a tensor graph where each slab corresponds to a graph constructed using a different value of $\kappa$ and/or a different distance; see the sketch after this subsection. A similar challenge arises in the so-called correlation network methods \cite{giannakis2017tutor}. In this case, the graph is learned based on the correlation between the data observed at each pair of nodes. Among other things, this requires comparing the observed sample correlation to a threshold $\eta$ and, then, declaring that the edge exists if the measured correlation is above $\eta$. Selecting the proper value for $\eta$ is oftentimes arbitrary and can compromise the prediction performance of the GCN. Similarly, there are applications, including those related to Markov random fields, where correlation networks are not appropriate but partial correlation networks (which look at the inverse of the covariance matrix \cite{giannakis2017tutor}) are. In such cases, we can collect the multiple learned graphs, originating from possibly different methods, as slabs of $\underline{\mathbf{S}}$, and then train our TGCN architecture. Depending on the application at hand, it may be prudent to include in the training a block-sparsity penalty on the coefficients $\underline{\mathbf{R}}$, so that we exploit available prior information on the most appropriate graphs.
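As an illustration of the first scenario, the following minimal sketch (hypothetical names; Euclidean distance assumed) stacks $\kappa$-NN graphs obtained for several values of $\kappa$ as slabs of the tensor $\underline{\mathbf{S}}$:
\begin{verbatim}
import numpy as np

def knn_graph(X, k):
    """Adjacency matrix of the k-NN graph under the Euclidean distance."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)             # exclude self-loops
    A = np.zeros_like(D)
    idx = np.argsort(D, axis=1)[:, :k]      # k closest nodes per row
    np.put_along_axis(A, idx, 1.0, axis=1)
    return np.maximum(A, A.T)               # symmetrize

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 10))          # nodal features
S = np.stack([knn_graph(X, k) for k in (3, 5, 10)], axis=2)  # one slab per kappa
\end{verbatim}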
\begin{figure} \centering \input{figs/sampledGraph.tex} \caption{ED in operation on a perturbed social network among voters. Black solid edges are the true links and dashed red edges represent adversarially perturbed links.} \label{fig:sampled} \end{figure} \subsection{Robustness to edge attacks via edge dithering} The ever-expanding interconnection of social, email, and media service platforms presents an opportunity for adversaries manipulating networked data to launch malicious attacks~\cite{zugner18adv,goodfellow2014explaining,aggarwal2015outlier}. Perturbed edges modify the graph neighborhoods, which leads to significant degradation of the performance achieved by GCNs. In the voting network (see Fig.~\ref{fig:multilayer}), some of the edges may be adversarially manipulated so that the voters are influenced in a specific direction. This section explains how to use our TGCN to deal with learning applications for which the graph has been adversarially perturbed. In particular, we consider an edge-dithering (ED) module that, given the nominal graph, creates a new graph by randomly adding/removing links with the aim of restoring a node's initial graph neighborhood. Dithering, in visual and audio applications, refers to the intentional injection of noise so that the quantization error is converted to random noise, which can be handled more easily~\cite{ulichney1988dithering}. Therefore, the approach that we advocate is to use an instance of our TGCN architecture where each of the slabs of the tensor $\underline{\mathbf{S}}$ corresponds to a graph that has been obtained after dithering some of the links of the nominal (potentially compromised) graph $\bar{\mathcal{G}}$. Mathematically, given the (perturbed) graph $\bar{\mathcal{G}}=(\mathcal{V},\bar{\mathbf{A}})$, we generate $I$ ED graphs $\{\mathcal{G}_{i}\}_{i=1}^{I}$, with $\mathcal{G}_{i}=(\mathcal{V},\mathbf{A}_{i})$ and where the edges of the auxiliary graph $\mathbf{A}_{i}$ are selected in a probabilistic fashion as follows \begin{align} \label{eq:samplegraph} A_{n,n',i}=\left\{ \begin{array}{ll} 1& \text{wp.}~~~q_1^{ \delta(\bar{A}_{n,n'}=1)}{(1-q_2)}^{ \delta(\bar{A}_{n,n'}=0)}\\ 0&\text{wp.}~~~q_2^{ \delta(\bar{A}_{n,n'}=0)}{(1-q_1)}^{ \delta(\bar{A}_{n,n'}=1)}. \end{array} \right. \end{align} In the previous expression, $\delta(\cdot)$ is the indicator function and the dithering probabilities are set as $q_1={\rm Pr}(A_{n,n',i}=1|\bar{A}_{n,n'}=1)$ and $q_2={\rm Pr}(A_{n,n',i}=0|\bar{A}_{n,n'}=0)$. If $n$ and $n'$ are connected in $\bar{\mathcal{G}}$, the edge connecting $n$ with $n'$ is deleted with probability $1-q_1$. Otherwise, if $n$ and $n'$ are not connected in $\bar{\mathcal{G}}$, i.e., $\bar{A}_{n,n'}=0$, an edge between $n$ and $n'$ is inserted with probability $1-q_2$. The ED graphs give rise to different neighborhoods $\mathcal{N}_n^{(i)}$, and the role of the ED module is to ensure that the unperturbed neighborhood of each node will be present with high probability in at least one of the $I$ graphs. For clarity, we formalize this intuition in the following remark. \begin{myremark} With high probability, there exists $\mathcal{G}_i$ such that a perturbed edge will be restored to its initial value; that is, there exists an ED graph $i$ such that $A_{n,n',i}=A_{n,n'}$.
Since each $\mathcal{G}_i$ is independently drawn, it holds that \begin{align} {\rm Pr}\Big(\bigcap_{i=1}^I\{A_{n,n',i}=1\}\Big|\bar{A}_{n,n'}=1,A_{n,n'}=0\Big)=q_1^I\nonumber\\ \nonumber{\rm Pr}\Big(\bigcap_{i=1}^I\{A_{n,n',i}=0\}\Big|\bar{A}_{n,n'}=0,A_{n,n'}=1\Big)=q_2^I. \end{align} \end{myremark} That is, as $I$ increases, the probability that none of the graphs recovers the true value of the perturbed link decreases exponentially. By following a similar argument, one can show that, as $I$ increases, the probability that none of the graphs recovers the original neighborhood structure decreases, so that there would exist an ED graph $i$ such that $\mathcal{N}_n^{(i)}=\mathcal{N}_n$. More importantly, since our architecture linearly combines (outputs of) different graphs, this effectively broadens the range of graphs that we are able to represent, rendering the overall processing scheme less sensitive to adversarial edge perturbations. Indeed, numerical experiments with adversarial attacks will demonstrate that, even with a small $I$, the use of ED significantly boosts classification performance. The operation of the ED module is illustrated in Fig.~\ref{fig:sampled}. \subsection{Learning over random graphs} Uncertainty is ubiquitous in nature and graphs are no exception. Testament to this fact are the efforts to develop meaningful and tractable models for random graphs stemming not only from the graph-theory community (from the early Erd\H{o}s-R\'enyi models to more-recent low-rank graphon generalizations \cite{bollobas2001random}), but also from the network-science (e.g., preferential attachment models \cite[Ch. 12-16]{newman2018networks}) and statistics (e.g., exponential random graph models \cite{harris2013introduction}) communities. Those random graph models provide an excellent tool for studying structural features of networks, such as giant and small components, degree distributions, path lengths, and so forth. Equally important, they provide parsimonious parametric models that can be leveraged for inference and inverse problems. That is the case in scenarios where we have access to limited graph-related observations such as the induced graph at a subset of nodes, or the mean and variance of some graph motifs (see, e.g., \cite{schaub2019blind}). In those cases, inferring the full graph can be infeasible, but one can postulate a particular random graph model and use the available observations to infer the parameters that best fit the data. A natural issue is, then, how to use such random graph models for the purpose of learning from an (incomplete) set of graph signals in the context of GCNs. A number of alternatives exist, including, for example, implementing a multi-layer graph convolutional architecture where, at each layer, a different realization of the graph is used \cite{isufi2017filtering}. A different approach, which is the one advocated here, is to leverage the TGCN architecture put forth in this paper. In this case, the idea is to draw $I$ realizations of the random graph model, collect those in the $N \times N \times I$ tensor $\underline{\mathbf{S}}$, and train a TGCN. This way, we guarantee that each layer considers not one, but multiple realizations of the graph.
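Both the ED graphs drawn according to \eqref{eq:samplegraph} and the random-graph realizations just described can be generated by a routine along the following lines (a minimal sketch for unweighted, undirected graphs; the function name \texttt{edge\_dither} is hypothetical):
\begin{verbatim}
import numpy as np

def edge_dither(A_bar, q1, q2, I, rng):
    """Draw I ED graphs from the nominal adjacency A_bar.

    An existing edge survives with probability q1; a non-edge remains
    absent with probability q2. Only the upper triangle is sampled so
    that each ED graph stays undirected and free of self-loops.
    """
    N = A_bar.shape[0]
    iu = np.triu_indices(N, k=1)
    graphs = np.empty((N, N, I))
    for i in range(I):
        U = rng.random(iu[0].size)
        keep = np.where(A_bar[iu] == 1, U < q1, U < 1 - q2)
        A = np.zeros((N, N))
        A[iu] = keep.astype(float)
        graphs[:, :, i] = A + A.T
    return graphs
\end{verbatim}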
Clearly, if we consider an online setup where the layers of the GCN can be associated with time, the proposed model can be related to importance sampling and particle filtering approaches, with each slab of the tensor $\underline{\mathbf{S}}$ representing a different particle of the graph probability space \cite{candy2016bayesian}. This hints at the possibility of developing TGCN schemes for the purpose of nonlinear Bayesian estimation over graphs. While certainly of interest, we leave that as future work. \section{Numerical tests}\label{sec:ntest} This section tests the performance of TGCN in learning from multiple potentially perturbed graphs and provides tangible answers to the following research questions. \begin{itemize} \item[\textbf{Q1}.] How does TGCN compare to state-of-the-art methods for SSL over multi-relational graphs? \item[\textbf{Q2}.] How can TGCN leverage topologies learned from multiple graph-learning methods? \item[\textbf{Q3}.] How robust is TGCN compared to GCN under noisy features, noisy edge weights, and random as well as adversarial edge perturbations? \item[\textbf{Q4}.] How sensitive is TGCN to the parameters of the ED module (i.e., $q_1,q_2$ and $I$)? \end{itemize} To that end, and unless otherwise stated, we test the proposed TGCN with $R=2$, $L=3$, $P^{(1)}=64$, $P^{(2)}=8$, and $P^{(3)}=K$. The regularization parameters $\{\mu_1,\mu_2,\lambda\}$ are chosen based on the performance of the TGCN in the validation set for each experiment. For the training stage, an ADAM optimizer with learning rate 0.005 was employed \cite{kingma2015adam}, for 300 epochs\footnote{An epoch is a cycle through all the training examples.} with early stopping at 60 epochs\footnote{Training stops if the validation loss does not decrease for 60 epochs.}. The simulations were run using TensorFlow~\cite{abadi2016tensorflow} and the code is available online\footnote{https://sites.google.com/site/vasioannidispw/github}. \begin{figure*}[t] \begin{subfigure}[b]{0.5\columnwidth} \centering{\input{figs/synthaccvsfeatsnr.tex}} \vspace{-.1cm} \caption{ } \end{subfigure}% ~ \begin{subfigure}[b]{0.5\columnwidth} \centering{\input{figs/synthaccvsadjsnr.tex}} \vspace{-.1cm} \caption{ } \end{subfigure}\\% \vspace{.1cm} \begin{subfigure}[b]{0.5\columnwidth} \centering{\input{figs/ionosphereaccvsfeatsnr.tex}} \vspace{-.1cm} \caption{ } \end{subfigure}% ~ \begin{subfigure}[b]{0.5\columnwidth} \centering{\input{figs/ionosphereaccvsadjsnr.tex}} \vspace{-.1cm} \caption{ } \end{subfigure}% \caption{Classification accuracy on the synthetic (a)-(b) and ionosphere (c)-(d) graphs described in Sec. \ref{Sec:NumericalSNR} as the noise level in the features [cf. \eqref{eq:featpertub}] or in the links [cf. \eqref{eq:toppertub}] varies. Panels (a) and (c) show the classification accuracy for noisy features, while panels (b) and (d) show the same metric as the power of the noise added to the graph links varies. } \label{fig:robust} \end{figure*} \subsection{SSL using multiple learned graphs}\label{Sec:NumericalSNR} This section reports the performance of the proposed architecture when multiple learned graphs are employed and data are corrupted by noise. Oftentimes, the available topology and feature vectors might be noisy. In those cases, the observed $\underline{\mathbf{S}}$ and $\mathbf{X}$ can be modeled as \begin{align} \label{eq:toppertub} \underline{\mathbf{S}}=&\underline{\mathbf{S}}_{tr}+\underline{\mathbf{O}}_{\shifttensor}\\ \mathbf{X}=&\mathbf{X}_{tr}+\mathbf{O}_{\datamatrix}\label{eq:featpertub}
\end{align} where $\underline{\mathbf{S}}_{tr}$ and $\mathbf{X}_{tr}$ represent the \textit{true} topology and features, and $\underline{\mathbf{O}}_{\shifttensor}$ and $\mathbf{O}_{\datamatrix}$ denote the corresponding additive perturbations. We draw $\underline{\mathbf{O}}_{\shifttensor}$ and $\mathbf{O}_{\datamatrix}$ from a zero-mean uncorrelated multivariate Gaussian distribution with a specified signal-to-noise ratio (SNR). The robustness of our method is tested on two datasets: i) a synthetic dataset of ${N}=1000$ points that belong to $K=2$ classes, generated as $\mathbf{x}_{{n}}\in\mathbb{R}^{ F\times1}\sim\mathcal{N}(\mathbf{m}_x,0.4\mathbf{I})$ for ${n}=1,\ldots,1000$, with $ F=10$ and the mean vector $\mathbf{m}_x\in \mathbb{R}^{ F\times1}$ being all zeros for the first class and all ones for the second one; and ii) the ionosphere dataset, which contains ${N}=351$ data points with $ F=34$ features that belong to $K=2$ classes \cite{Dua:2017}. We generate $\kappa$-NN graphs by varying $\kappa$, and observe $|\mathcal{M}|=200$ and $|\mathcal{M}|=50$ nodes uniformly at random for the synthetic and the ionosphere dataset, respectively. With this simulation setup, we test the different TGCNs in SSL for increasing SNR values (Figs. \ref{fig:robust}a, \ref{fig:robust}b, \ref{fig:robust}c, \ref{fig:robust}d). We deduce from the classification performance of our method in Fig. \ref{fig:robust} that multiple graphs lead to learning more robust representations of the data, demonstrating the merits of the proposed tensor-graph architecture. \subsection{Robustness of TGCNs to random graph perturbations}\label{Sec:NumericalCitationED} For this experiment, our approach couples the novel ED module with the TGCN architecture to account for perturbations on the graph edges. In this case, the experiments are run using three of the citation network datasets in~\cite{sen2008collective}. The adjacency matrix of the citation graph is denoted as $\mathbf{S}$, its nodes correspond to different documents from the same scientific category, and $S_{nn'}=1$ implies that paper ${n}$ cites paper ${n'}$. Each document ${n}$ is associated with a label $y_{n}$ that indicates the document's subcategory. ``Cora'' contains papers related to machine learning, ``Citeseer'' includes papers related to computer and information science, while ``Pubmed'' contains biomedical papers; see also Table \ref{tab:citation}. To facilitate comparison, we reproduce the same experimental setup as in \cite{kipf2016semi}, i.e., the same split of the data into train, validation, and test sets. For this experiment, the perturbed graph $\bar{\mathbf{A}}$ is generated by inserting new edges in the original graphs between random pairs of nodes $n,n'$ that are not connected in $\mathbf{A}$, i.e., $A_{n,n'}=0$. This can represent, for example, documents that should have been cited but the authors missed. The added edges can be regarded as drawn from a Bernoulli distribution. The TGCN utilizes the multiple graphs generated via the ED module with $I=10$ samples, $q_1=0.9$, and $q_2=1$, since no edge is deleted in $\bar{\mathbf{A}}$. Fig.~\ref{fig:adrandpert} demonstrates the classification accuracy of the GCN~\cite{kipf2016semi} compared to that of the proposed TGCN as the number of perturbed edges increases. Clearly, our ED-TGCN is more robust than a classical GCN. Moreover, even when no edges are perturbed, the TGCN outperforms the GCN. This observation may be attributed to noisy links in the original graphs, which hinder classification performance.
Furthermore, the SSL performance of the GCN significantly degrades as the number of perturbed edges increases, which suggests that the GCN is challenged even by ``random attacks''. \begin{figure*} \begin{subfigure}[b]{0.5\columnwidth} \centering\input{figs/adlinkscora.tex} \caption{Cora} \end{subfigure}~\begin{subfigure}[b]{0.5\columnwidth} \centering\input{figs/adlinkspubmed.tex} \caption{Pubmed} \end{subfigure} \begin{subfigure}[b]{0.5\columnwidth} \centering\input{figs/adlinkciteseer.tex} \caption{Citeseer} \end{subfigure}~\begin{subfigure}[b]{0.5\columnwidth} \centering\input{figs/adlinkspolblog.tex} \caption{Polblogs} \end{subfigure} \caption{Classification accuracy for the setup described in Sec. \ref{Sec:NumericalCitationED} as the number of perturbed edges increases.} \label{fig:adrandpert} \end{figure*} \begin{table}[] \hspace{0cm} \centering \caption{List of citation graph datasets considered in Secs. \ref{Sec:NumericalCitationED} and \ref{Sec:NumericalCitationED2} along with their most relevant dimensions.} \vspace{0.2cm} \begin{tabular}{c c c } \hline \textbf {Dataset} & \textbf {Nodes} ${N}$ & \textbf {Classes} $K$ \\ \hline \hline Cora & 2,708 & 7 \\ Citeseer & 3,327 & 6 \\ Pubmed & 19,717 & 3 \\ Polblogs & 1,224 & 2\\ \hline \end{tabular} \label{tab:citation} \end{table} {\setlength\extrarowheight{2pt} \begin{table*} \caption{Classification accuracy for the setup described in Sec. \ref{Sec:NumericalCitationED2} as the number of attacked nodes $|\mathcal{T}|$ increases. } \label{tab:results} \centering \begin{tabular}{@{}p{2cm}p{2cm}ccccc@{}} \hline \multirow{3}{*}{\vspace*{8pt}\textbf{Dataset}}& \multirow{3}{*}{\vspace*{8pt}\textbf{Method}}&\multicolumn{5}{c}{\textbf{Number of attacked nodes} $|\mathcal{T}|$}\\\cmidrule{3-7} & & {\textsc{20}} & {\textsc{30}} & {\textsc{40}} & {\textsc{50}} & {\textsc{60}} \\ \hline\hline \multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Citeseer}} } & \textsc{GCN} & 60.49 & 56.00 & 61.49 & 56.39 & \textbf{58.99} \\ & \textsc{TGCN} & \textbf{70.99}& \textbf{56.00} & \textbf{61.49} & \textbf{61.20} & 58.66 \\ \cmidrule{1-7} \multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Cora}}} & \textsc{GCN} & 76.00 & 74.66 & 76.00 & 62.39 & 73.66 \\ & \textsc{TGCN} & \textbf{78.00} & \textbf{82.00} & \textbf{84.00} & \textbf{73.59} & \textbf{74.99}\\ \cmidrule{1-7} \multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Pubmed}}} & \textsc{GCN} & \textbf{74.00} & 71.33 & 68.99 & 66.40 & 69.66 \\ & \textsc{TGCN} & 72.00 & \textbf{75.36} & \textbf{71.44} & \textbf{68.50} & \textbf{74.43} \\ \cmidrule{1-7} \multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Polblogs}}} & \textsc{GCN} & \textbf{85.03} & 86.00 & 84.99 & 78.79 & 86.91 \\ & \textsc{TGCN} & 84.00 & \textbf{88.00} & \textbf{91.99} & \textbf{78.79} & \textbf{92.00} \\ \hline \end{tabular} \end{table*} }\setlength\extrarowheight{0pt} \subsection{Robustness to adversarial attacks on edges}\label{Sec:NumericalCitationED2} The original graphs in Cora, Citeseer, Pubmed, and Polblogs were perturbed using the adversarial setup in~\cite{zugner18adv}, where structural attacks are effected on attributed graphs. These attacks perturb connections adjacent to a set of targeted nodes $\mathcal{T}$ by adding or deleting edges~\cite{zugner18adv}. Our ED module uses $I=10$ sampled graphs with $q_1=0.9$ and $q_2=0.999$. For this experiment, 30\% of the nodes are used for training, 30\% for validation, and 40\% for testing. The nodes in $\mathcal{T}$ are in the testing set.
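In terms of the hypothetical \texttt{edge\_dither} routine sketched in the previous section, the ED tensor for this experiment would be generated along the following lines (the attacked adjacency below is a random stand-in):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
A_bar = np.triu((rng.random((100, 100)) < 0.05).astype(float), 1)
A_bar = A_bar + A_bar.T                   # stand-in for the attacked graph
S = edge_dither(A_bar, q1=0.9, q2=0.999, I=10, rng=rng)  # N x N x 10 ED tensor
\end{verbatim}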
Table \ref{tab:results} reports the classification accuracy of the GCN and the proposed TGCN for different numbers of attacked nodes ($|\mathcal{T}|$). Different from Fig.~\ref{fig:adrandpert}, where the classification accuracy over the test set is reported, Table \ref{tab:results} reports the classification accuracy over the set of attacked nodes $\mathcal{T}$. It is observed that the proposed TGCN is more robust than the GCN under adversarial attacks~\cite{zugner18adv}. This finding justifies the use of the novel ED in conjunction with the TGCN, which judiciously selects extracted features originating from non-corrupted neighborhoods. Fig. \ref{fig:robustsens} showcases the sensitivity of the TGCN to varying parameters of the ED module for the experiment in Table \ref{tab:results} with the Cora dataset and $|\mathcal{T}|=30$. It is observed that the TGCN's performance is relatively smooth for certain ranges of the parameters. In accordance with our earlier remark, notice that even for small $I$ the TGCN's performance increases significantly. \begin{figure*}[t] {\input{figs/agcnq2per.tex}}{\input{figs/agcnp2per.tex}} {\input{figs/agcnI2per.tex}}\vspace{-0.0cm} \caption{SSL classification accuracy of the TGCN under varying edge-retention probability $q_1$, non-edge retention probability $q_2$, and number of ED samples $I$. } \label{fig:robustsens} \end{figure*} \subsection{Predicting protein functions}\label{Sec:NumericalProteins} This section reports the performance of the proposed TGCN in \emph{predicting ``protein functions''}. Protein-to-protein interaction networks relate two proteins via multiple cell-dependent relations that can be modeled using \emph{multi-relational} graphs; cf. Fig.~\ref{fig:multilayer}. Protein classification seeks the unknown function of some proteins (nodes) based on the known functionality of a small subset of proteins and the protein-to-protein networks~\cite{zitnik2017predicting}. Given a target function $y_n$ that is known on a subset of proteins ${n}\in\mathcal{M}$, known functions on all proteins summarized in $\mathbf{X}$, and the multi-relational protein networks $\underline{\mathbf{S}}$, the goal is to predict whether the proteins $n\in\mathcal{V}\setminus\mathcal{M}$ are associated with the target function or not. Hence, the number of target classes is $K=2$. In this setting, $\mathbf{S}_{i}$ represents the protein connectivity in the ${i}$th cell type, which might be a cerebellum, midbrain, or frontal-lobe cell. Table \ref{tab:biodata} summarizes the three datasets used in the following experiments. \begin{table}[t] \hspace{0cm} \centering \caption{List of protein-to-protein interaction datasets considered in Sec. \ref{Sec:NumericalProteins} and their associated dimensions.} \vspace{0.2cm} \begin{tabular}{c c c c} \hline \textbf{Dataset} & \textbf{Nodes} ${N}$ & \textbf{Features} $ F$ & \textbf{Relations} ${I}$\\ \hline\hline Generic cells & 4,487 & 502 & 144\\ Brain cells & 2,702 & 81 & 9 \\ Circulation cells & 3,385 & 62 & 4\\ \hline \end{tabular} \vspace{0.0cm} \label{tab:biodata} \end{table} We compare the TGCN with the GCN~\cite{kipf2016semi}, which is the single-relational alternative, and Mune~\cite{ye2018robust}, which is a state-of-the-art diffusion-based approach for SSL over multi-relational graphs. Since the GCN only accounts for a single graph, we select for the GCN the relation $i$ that achieves the best results in the validation set. Furthermore, Mune does not account for feature vectors in the nodes of the graph.
Hence, to lay out a fair comparison, we employ the TGCN without using the feature vectors, i.e., $\mathbf{X}=\mathbf{I}_{N}$. Finally, since the classes are heavily unbalanced, we evaluate the performance of the various approaches using the macro F1 score for predicting the protein functions.\footnote{Accurate classifiers achieve macro F1 values close to 1.} \begin{figure*} \begin{floatrow} \ffigbox{% \centering \input{figs/macrof1brain.tex} \label{fig:bc} }{% \caption{Brain cells}% } \ffigbox{% \centering \input{figs/macrof1circulation.tex} \label{fig:cc} }{% \caption{Circulation cells}% } \end{floatrow} \end{figure*} \begin{figure} \centering \input{figs/macrof1general.tex} \caption{Generic cells} \label{fig:gc} \end{figure} Figs.~\ref{fig:bc}-\ref{fig:gc} report the macro F1 values for the aforementioned approaches for varying numbers of labeled samples $|\mathcal{M}|$. It is observed for all datasets that: i) the macro F1 score improves for increasing $|\mathcal{M}|$ across all algorithms; ii) the TGCN that judiciously combines the multiple relations outperforms the GCN by a large margin; and iii) for the case where nodal features are not used, the TGCN outperforms the state-of-the-art Mune. \section{Conclusions} This paper put forth a novel deep learning framework for SSL that utilized a tensor-graph architecture to sequentially process the input data. The proposed architecture is able to handle scenarios where nodes engage in multiple relations, can be used to reveal the structure of the data, and is computationally affordable, since the number of operations scales linearly with respect to the number of graph edges. Instead of committing a priori to a specific type of diffusion, the TGCN learns the diffusion pattern that best fits the data. Our TGCN was also adapted to robustify SSL over a single graph with model-based, adversarial, or random edge perturbations. To account for adversarial perturbations, an ED module was developed that first applied random dithering to the (nominal) graph edges and then used the dithered graphs as input to the TGCN. Our approach achieved state-of-the-art classification results over multi-relational graphs when nodes are accompanied by feature vectors. Further experiments demonstrated the performance gains of the TGCN in the presence of noisy features, noisy edge weights, and random as well as adversarial edge perturbations. Future research includes predicting time-varying labels and using the TGCN for nonlinear Bayesian estimation over graphs. \bibliographystyle{IEEEtran}
This may be inadequate in many contemporary applications, where nodes may engage in multiple types of relations~\cite{kivela2014multilayer}, motivating the generalization of traditional SSL approaches to \emph{single-relational} graphs to \emph{multi-relational} (a.k.a. multi-layer) graphs. In the particular case of protein interaction networks, each layer of the graph could correspond to a different type of tissue, e.g., brain or muscle. In a social network, each layer could amount to a form of social interaction, such as friendship, family bonds, or coworker-ties \cite{wasserman1994social}. Such graphs can be represented by a tensor graph, where each tensor slab corresponds to a single relation. With their ubiquitous presence granted, {the} development of SSL methods that account for multi-relational networks is only in its infancy, see, e.g.,~\cite{kivela2014multilayer,ioannidis2018multilay}. This work develops a novel \emph{robust} deep learning framework for SSL over \emph{multi-relational} graphs. Graph-based SSL methods typically assume that the true labels are ``smooth'' over the graph, which naturally motivates leveraging the network topology to propagate the labels and increase learning performance. Graph-induced smoothness can be captured by graph kernels~\cite{belkin2006manifold,smola2003kernels,ioannidis2018kernellearn}; Gaussian random fields \cite{zhu2003semi}; or low-rank {parametric} models~\cite{shuman2013emerging}. Alternative approaches use the graph to embed nodes in a vector space, and then apply learning approaches to the resultant vectors~\cite{weston2012deep,yang2016revisiting,berberidis2018adaptive}. More recently, the map from input data to their labels is given by a neural network (NN) that incorporates the graph structure and generalizes the typical convolution operations; see e.g., \cite{bronstein2017geometric,gama2018convolutionaljournal,ioannidis2018graphrnn,schlichtkrull2018modeling}. The parameters describing the graph convolutional NN (GCN) are then learned using labeled examples and feature vectors, and those parameters are employed to predict labels of the unobserved nodes; see, e.g., \cite{kipf2016semi,bronstein2017geometric,velivckovic2017graph}, for state-of-the-art results in SSL when nodes are attributed with features. With the success of GCNs on graph learning tasks granted, recent reports point out that perturbations of the graph topology can severely deteriorate learning performance \cite{zugner18adv,xu2019topology,dai2018adversarial}. Such uncertainty in the topology may be attributed to several reasons. First, the graph is implicit and its topology is identified using data-driven methods~\cite{giannakis2017tutor}. However, each method relies on a different model and assumptions, and without ground truth selecting the appropriate graph-learning technique is challenging. A less accurate model can induce perturbations to the learned graph. Further, in random graph models one deals with a realization of {a} graph whose edges may be randomly perturbed~\cite{newman2018networks}. Similarly, this is also relevant in adversarial settings, where the links of the nominal graph are corrupted by some foe that aims to poison the learning process. Adversarial perturbations target a subset of nodes and modify their links to promote {the} miss-classification of targeted nodes~\cite{wu19adv}. 
Crafted graph perturbations are ``unnoticeable,'' which is feasible so long as the degree distribution of the perturbed graphs is similar to the initial distribution~\cite{zugner18adv}. GCNs learn nodal representations by extracting information within local neighborhoods. These learned features may be significantly perturbed if the neighborhood is altered. Hence, this vulnerability of GCNs challenges their deployment in critical applications dealing with security or healthcare, where robust learning is of paramount importance. Defending against adversarial, random, or model-based perturbations may unleash the potential of GCNs, and broaden the scope of machine learning applications altogether. \vspace{.1cm} \noindent\textbf{Contributions.} This paper develops a deep SSL approach over multiple graphs with applications to both multi-relational data and robust learning. Specifically, the contribution is five-fold. \begin{itemize} \item[\textbf{C1}.] A tensor-based GCN is developed to account for multi-relational graphs. Learnable coefficients are introduced to effect model adaptivity to multiple graphs, and identification of the underlying data structure. \item[\textbf{C2}.] A multi-hop convolution is introduced along with a residual data feed per graph, thus broadening the class of (graph signal) transformations the GCN implements; and hence, facilitating the diffusion of nodal features across the graph. In the training phase suitable (graph-based) regularizers are incorporated to guard against overfitting, and further capitalize on the graph topology. \item[\textbf{C3}.] For nodes involved in different relations, and for (multi-relational) datasets adhering to several graphs, the proposed TGCN provides a powerful SSL approach by leveraging the information codified across multiple graphs. \item[\textbf{C4}.] The novel TGCN enables robust SSL for single- or multi-relational data when the underlying topology is perturbed. Perturbations include model induced, random, and adversarial ones. To defend against adversaries, a novel edge-dithering (ED) approach is developed that generates ED graphs by sampling edges of the original graph with probabilities selected to enhance robustness. \item[\textbf{C5}.] Numerical tests with multi-relational protein networks showcase the merits of the proposed tensor-graph framework. Further experiments with noisy features, noisy edge weights, and random as well as adversarial edge perturbations verify the robustness of our novel approach. \end{itemize} \noindent\textbf{Notation.} Scalars are denoted by lowercase, column vectors by bold lowercase, matrices by bold uppercase, and tensors using bold uppercase underscored letters. Superscripts $~^{\top}$ and $~^{-1}$ denote, respectively, the transpose and inverse operators; while $\boldsymbol 1_N$ stands for the $N\times1$ all-one vector. Finally, if $\mathbf A$ is a nonsingular matrix and $\mathbf x$ a vector, then $||\mathbf x||^2_{\mathbf A}:= \mathbf x^{\top} \mathbf A^{-1} \mathbf x$, $\|\mathbf A\|_1$ denotes the $\ell_1$-norm of the vectorized matrix, and $\|\mathbf A\|_F$ is the Frobenius norm of $\mathbf A$. \section{SSL over multi-relational graphs}\label{sec:probform} Consider a network of ${N}$ nodes, with nodal (vertex) set $\mathcal{V}:=\{v_1,\ldots,v_{N}\}$, connected through ${I}$ relations. 
The ${i}$th relation is captured by the ${N}\times{N}$ adjacency matrix $\mathbf{A}_{i}$, whose entry $A_{nn'i}$ represents the weight of the edge connecting nodes $v_{n}$ and $v_{n'}$ as effected by the ${i}$th relation. The matrices $\{\mathbf{A}_{i}\}_{{i}=1}^{I}$ are collected in the ${N}\times{N}\times{I}$ tensor $\underline{\mathbf{A}}$. In the social network examples already provided in the previous section, each $i$ could, for instance, represent a relation via a particular app, such as Facebook, LinkedIn, or Twitter; see Fig.~\ref{fig:multilayer}. Regardless of the application, the neighborhood of $v_{n}$ induced by relation ${i}$ is specified by the set
\begin{align}
\label{eq:neighborhood}
\mathcal{N}_{n}^{({i})}:=\{{n'}:A_{nn'i}\ne0,~~ v_{n'}\in\mathcal{V}\}.
\end{align}
\begin{figure}
\centering
\input{figs/multiRelGraphs.tex}
\caption{A multi-relational network of voters.}
\label{fig:multilayer}
\end{figure}
We further associate an $ F \times 1$ feature vector $\mathbf{x}_{{n}}$ with the ${n}$th node, and collect those vectors in the $N \times F$ feature matrix $\mathbf{X} :=[\mathbf{x}_{1},\ldots,\mathbf{x}_{{N}}]^\top$, where entry $X_{{n}p}$ may denote, e.g., the salary of individual $n$ in the LinkedIn social network. Each node ${n}$ has a label $y_{n}\in\{0,\ldots,K-1\}$, which in the last example could represent the education level of a person. In SSL, we know labels only for a subset of nodes $\{y_{{n}}\}_{{n}\in\mathcal{M}}$, with $\mathcal{M} \subset\mathcal{V}$. This partial availability may be attributed to privacy concerns (medical data); energy considerations (sensor networks); or unrated items (recommender systems). The ${N}\times K$ matrix $\mathbf{Y}$ is the ``one-hot'' representation of the true nodal labels; that is, if $y_{n}=k$ then $Y_{{n},k}=1$ and $Y_{{n},k'}=0, \forall k'\ne k$. Given $\mathbf{X}$ and $\underline{\mathbf{A}}$, the goal is to develop a \textit{robust tensor-based deep} SSL approach over \textit{multi-relational} graphs; that is, develop a TGCN mapping each node $n$ to its label $y_{n}$, and hence learn the unavailable labels.
\section{Proposed TGCN architecture}
Deep learning architectures typically process the input information using a succession of $L$ hidden layers. Each of the layers comprises a conveniently parametrized linear transformation, a scalar nonlinear transformation, and possibly a dimensionality reduction (pooling) operator. By successively combining local features (non)linearly, the aim at a high level is to progressively extract information useful for learning~\cite{goodfellow2016deep}. GCNs tailor these operations to the graph that supports the data~\cite{bronstein2017geometric}, including the linear and nonlinear operators~\cite{defferrard2016convolutional}. In this section, we describe the blocks of our novel multi-relational TGCN, which inputs the known features at the first layer, and outputs the predicted labels at the last layer. We first present the TGCN layer operation, then the output layer, and finally discuss how training is performed.
\subsection{Single layer operation}
Consider the output ${N}\times{I}\times P^{(l)}$ tensor $\check{\underline{\mathbf{Z}}}^{(l)}$ of an intermediate layer, say the $l$th one, that holds the $P^{(l)}\times 1$ feature vectors $\check{\mathbf{z}}_{{n}{i}}^{(l)}, \forall {n},{i}$, with $P^{(l)}$ being the number of output features at $l$.
Similarly, the ${N}\times{I}\times P^{(l-1)}$ tensor $\check{\underline{\mathbf{Z}}}^{(l-1)}$ represents the input of layer $l$. The mapping from $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to $\check{\underline{\mathbf{Z}}}^{(l)}$ consists of two sub-maps: a linear one that maps the ${N}\times{I}\times P^{(l-1)}$ tensor $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to the ${N}\times{I}\times P^{(l)}$ tensor $\underline{\mathbf{Z}}^{(l)}$, followed by a memoryless scalar nonlinearity $\sigma({\cdot})$ applied to $\underline{\mathbf{Z}}^{(l)}$ as
\begin{align}\label{eq:nonlinear}
\check{\underline{{Z}}}_{{n}{i}p}^{(l)}:=\sigma( {\underline{{Z}}_{{n}{i}p}^{(l)}}).
\end{align}
The output $\check{\underline{\mathbf{Z}}}^{(l)}$ of layer $l$ is formed by the entries in \eqref{eq:nonlinear}. A common choice for $\sigma{(\cdot)}$ is the rectified linear unit (ReLU), that is, $\sigma{(c)}=\max(0,c)$ \cite{goodfellow2016deep}. The linear map from $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to $ \underline{\mathbf{Z}}^{(l)}$ will be designed during training. Convolutional NNs (CNNs) typically consider a small number of trainable weights and then generate the linear output as a convolution of the input with these weights~\cite{goodfellow2016deep}. The convolution combines values of close-by inputs (consecutive time instants, or neighboring pixels) and thus extracts information of local neighborhoods. Permeating CNN benefits to the graph domain, GCNs replace the convolution with a `graph filter' whose taps are learnable~\cite{bronstein2017geometric}. Graph filters can have low order (few degrees of freedom), and explicitly account for the graph structure. In the following, we introduce the three modules that comprise the novel tensor-graph linear transformation, and elaborate on how the multi-relational graph is taken into account.
\noindent\textbf{Neighborhood aggregation module (NAM)}. Consider a neighborhood aggregation module per relation and per node, that combines linearly the information available locally in each neighborhood. Since the neighborhood depends on the particular relation $i$ and node $n$, we have (cf. \eqref{eq:neighborhood})
\begin{align}
\mathbf{h}_{{n}{i}}^{(l)} := \sum_{{n'}\in\mathcal{N}_{n}^{({i})}} A_{nn'i} \check{\mathbf{z}}_{{n'}{i}}^{(l-1)}. \label{eq:sem}
\end{align}
While the entries of $\mathbf{h}_{{n}{i}}^{(l)}$ depend only on the one-hop neighbors of $n$ (one-hop diffusion), successive application of this operation across layers will expand the diffusion reach, eventually spreading the information across the network. Letting $\mathbf{A}_{i}^{(r)}:=\mathbf{A}_{i}^r$ denote the $r$th power of the adjacency matrix for $r=1,\ldots,R$ and ${i}=1,\ldots,{I}$, the vector $\mathbf{A}_i^r\mathbf{x}$ holds linear combinations of the values of $\mathbf{x}$ in the $r$-hop neighborhood~\cite{shuman2013emerging}; thus, \eqref{eq:sem} generalizes to
\begin{align}
\mathbf{h}_{{n}{i}}^{(l)} =\sum_{r=1}^R \sum_{n'=1}^{N} C^{(r,l)}_{i}A_{nn'i}^{(r)} \check{\mathbf{z}}_{{n'}{i}}^{(l-1)} \label{eq:gf}
\end{align}
where the learnable coefficients $C^{(r,l)}_{i}$ weigh the corresponding $r$th hop neighbors of node $n$ according to relation ${i}$. Per layer $l$, $\{C^{(r,l)}_{i}\}_{\forall (r,i)}$ are collected in the $R\times{I}$ matrix $\mathbf{C}^{(l)}$. The proposed transformation in \eqref{eq:gf} aggregates the diffused signal in the $R$-hop neighborhoods per $i$; see also Fig. \ref{fig:neighboragrmodule}.
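To make the NAM concrete, the following minimal NumPy sketch implements the multi-hop aggregation in \eqref{eq:gf}. It is an illustrative sketch rather than our actual implementation: all variable names are ours, and dense adjacency matrices are assumed for simplicity.
\begin{verbatim}
import numpy as np

def nam(A, Z_in, C):
    # Neighborhood aggregation module, cf. (eq:gf).
    # A    : (I, N, N) adjacency matrices, one slab per relation
    # Z_in : (N, I, P) features from the previous layer
    # C    : (R, I) learnable per-hop, per-relation coefficients
    I, N, _ = A.shape
    R = C.shape[0]
    H = np.zeros_like(Z_in)
    for i in range(I):
        Ar = np.eye(N)              # holds A_i^r, built incrementally
        for r in range(R):
            Ar = Ar @ A[i]
            H[:, i, :] += C[r, i] * (Ar @ Z_in[:, i, :])
    return H                        # H[n, i, :] is h_{ni} in (eq:gf)
\end{verbatim}
In practice, sparse matrix products would be used instead, so that the cost of each hop scales with the number of edges of $\mathbf{A}_i$.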
\begin{figure}
\centering
\input{figs/neighboragrmodule.tex}
\caption{NAM combines features using the multi-relational graph. With reference to node $n$, and aggregation of $R$-hop neighbors (here $R=2$), note that the local neighborhood is not the same across the different graphs.}
\label{fig:neighboragrmodule}
\end{figure}
\noindent\textbf{Graph adaptive module (GAM)}. Feature vector $\mathbf{h}_{{n}{i}}^{(l)}$ captures the diffused input per relation ${i}$, and its role will depend on the inference task at hand. In predicting voting preference, for instance, the friendship network may be more important than the coworker relation; cf. Fig.~\ref{fig:multilayer}. As a result, the learning algorithm should be able to adapt to the prevalent features. This motivates the weighted combination
\begin{align}\label{eq:graphadaptive}
\mathbf{g}_{{n}{i}}^{(l)}:=&\sum_{{i'}=1}^{I}{R}_{{i}{i'}{n}}^{(l)}{\mathbf{h}_{{n}{i'}}^{(l)}}
\end{align}
where $\{R_{{i}{i'}{n}}^{(l)}\}$ mix the features of graphs $i$ and $i'$. Collecting the weights $\{{R}_{{i}{i'}{n}}^{(l)}\}_{\forall ({i},{i'},{n})}$ yields the trainable ${I}\times{I}\times N$ tensor $\underline{\mathbf{R}}^{(l)}$. The graph-mixing weights enable our TGCN to learn how to combine and adapt across the different relations encoded by the multi-relational graph; see also Fig.~\ref{fig:graphagrmodule}. Clearly, if prior information on the dependence among relations is available, it can be used to constrain the structure of $\underline{\mathbf{R}}^{(l)}$ to be, e.g., diagonal or sparse. The graph-adaptive combination in \eqref{eq:graphadaptive} allows for a different $R_{ii'n}$ per node $n$. Considering the same $R$ for each $n$, that is, $R_{ii'n}^{(l)}=R_{ii'}^{(l)}$, results in a design with fewer parameters at the expense of reduced flexibility. For example, certain voters may be affected more by their friends, whereas others more by their coworkers; using the GAM, our network can achieve such personalized predictions.
\begin{figure}
\centering
\input{figs/graphaggregationmodule.tex}
\caption{GAM combines the features per $i$, based on the trainable coefficients $\{{R}_{{i}{i'}{n}}\}$. When $\underline{\mathbf{R}}$ is sparse, only features corresponding to the most significant relations are active.}
\label{fig:graphagrmodule}
\end{figure}
\noindent\textbf{Feature aggregation module (FAM)}. Next, the extracted GAM features are mixed using learnable scalars $W_{nipp'} ^{(l)}$ as
\begin{align}\label{eq:linconv}
\underline{{Z}}_{{n}{i}p}^{(l)}:=& \sum_{p'=1}^{P^{(l-1)}}W_{nipp'}^{(l)}G_{nip'}^{(l)}
\end{align}
for all $(n,i,p)$, where $G_{nip'}^{(l)}$ represents the $p'$th entry of $\mathbf{g}_{{n}{i}}^{(l)}$. The ${N}\times{I}\times P^{(l)}\times P^{(l-1)}$ tensor $\underline{\weightmat}^{(l)}$ collects the feature-mixing weights $\{W_{nipp'}^{(l)}\}_{\forall (n,i,p,p')}$. The linear modules that map the input tensor $\check{\underline{\mathbf{Z}}}^{(l-1)}$ to $\underline{\mathbf{Z}}^{(l)}$ can now be summarized as
\begin{align}
\label{eq:linGCN}
\underline{\mathbf{Z}}^{(l)}&:= f(\check{\underline{\mathbf{Z}}}^{(l-1)}; \bm{\theta}_z^{(l)}),\;\;\;\text{with}\\
\label{eq:param}
\bm{\theta}_z^{(l)}&:=[\text{vec}(\underline{\weightmat}^{(l)});\text{vec}(\underline{\mathbf{R}}^{(l)}) ;\text{vec}(\mathbf{C}^{(l)})]^{\top}
\end{align}
where $f$ denotes the composition of the three linear modules introduced above (namely NAM, GAM and FAM), while $\bm{\theta}_z^{(l)}$ collects the learnable weights involved in those modules [cf. \eqref{eq:gf}-\eqref{eq:linconv}].
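Continuing the sketch given after \eqref{eq:gf}, the GAM in \eqref{eq:graphadaptive} and the FAM in \eqref{eq:linconv} reduce to two Einstein summations; again, this is only an illustrative sketch with names of our choosing.
\begin{verbatim}
def gam_fam(H, Rw, W):
    # GAM (eq:graphadaptive): per node n, mix features across relations,
    #   G[n, i, p] = sum_{i'} Rw[i, i', n] * H[n, i', p]
    G = np.einsum('kin,nip->nkp', Rw, H)     # Rw : (I, I, N)
    # FAM (eq:linconv): per (n, i), mix P input features into Q outputs,
    #   Z[n, i, q] = sum_{p} W[n, i, q, p] * G[n, i, p]
    Z = np.einsum('niqp,nip->niq', W, G)     # W : (N, I, Q, P)
    return Z

def tgcn_layer(A, Z_in, C, Rw, W):
    # One TGCN layer: NAM -> GAM -> FAM -> ReLU, cf. (eq:nonlinear)
    return np.maximum(gam_fam(nam(A, Z_in, C), Rw, W), 0.0)
\end{verbatim}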
\subsection{Residual GCN layer}
Successive application of $L$ TGCN layers diffuses the input ${\mathbf{X}}$ across the $LR$-hop graph neighborhood, cf.~\eqref{eq:gf}. However, the exact size of the relevant neighborhood is not always known a priori. To endow our architecture with increased flexibility, we propose a residual TGCN layer that inputs ${\mathbf{X}}$ at each $l$, and thus can include ``sufficient data statistics'' that may have been lost after successive diffusions. This `raw data reuse' is also known as a skip connection~\cite{he2016deep,ruiz2019gated}. Skip connections also emerge when an optimization solver is `unrolled' as a deep neural network with each layer having the form of an iteration; see also~\cite{zhang2019real}. Specifically, the linear operation in \eqref{eq:linGCN} is replaced by the residual linear tensor mapping \cite[Ch. 10]{goodfellow2016deep}
\begin{align}
\underline{\mathbf{Z}}^{(l)}:= f(\check{\underline{\mathbf{Z}}}^{(l-1)}; \bm{\theta}_z^{(l)})+ f({\mathbf{X}}; \bm{\theta}_x^{(l)})
\label{eq:residuallayer}
\end{align}
where $\bm{\theta}_x^{(l)}$ collects trainable parameters analogous to those in \eqref{eq:param}. When viewed as a transformation from ${\mathbf{X}}$ to $\underline{\mathbf{Z}}^{(l)}$, the operator in \eqref{eq:residuallayer} implements a broader class of graph diffusions than the one in \eqref{eq:linGCN}. If, for example, $l=3$ and $R=1$, then the first summand in \eqref{eq:residuallayer} is a 1-hop diffusion of a signal that corresponds to a $2$-hop (nonlinear) diffused version of ${\mathbf{X}}$, while the second summand diffuses ${\mathbf{X}}$ in one hop. At a more intuitive level, the presence of the second summand also guarantees that the impact of ${\mathbf{X}}$ on the output does not vanish as the number of layers grows. The autoregressive mapping in \eqref{eq:residuallayer} facilitates the application of our architecture to time-varying inputs and labels. Specifically, with $t$ indexing time and given time-varying data $\{{\mathbf{X}}_t\}_{t=1}^T$, one would set $l=t$, replace ${\mathbf{X}}$ in \eqref{eq:residuallayer} with ${\mathbf{X}}^{(l)}$, and set ${\mathbf{X}}^{(l)}={\mathbf{X}}_t$. This will be studied in detail in our future work towards predicting dynamic processes over multi-relational graphs.
\subsection{Initial and final layers}
Regarding layer $l=1$, its input $\check{\underline{\mathbf{Z}}}^{(0)}$ is
\begin{align}\label{eq:input_first_layer}
\check{\mathbf{z}}_{{n}{i}} ^{(0)}=\mathbf{x}_{n} \;\;\text{for}\;\; \text{all}\;\; (n,i).
\end{align}
At the other end, the output of our graph architecture is obtained by taking the output of layer $l=L$, and applying
\begin{align}\label{eq:output}
\hat{\mathbf{Y}}:=g(\check{\underline{\mathbf{Z}}} ^{(L)};\bm{\theta}_g)
\end{align}
where $g(\cdot)$ is a nonlinear function, $\hat{\mathbf{Y}}$ is an ${N}\times K$ matrix, $\hat{Y}_{{n},k}$ represents the probability that $y_{n}=k$, and $\bm{\theta}_g$ collects trainable parameters. Function $g(\cdot)$ depends on the specific application, with the normalized exponential function (softmax) being a popular choice for classification problems; that is,
\begin{align}
\hat{Y}_{{n},k}=\frac{\exp\big(\check{\underline{{Z}}}_{n,k}^{(L)}\big)}{ \sum_{k'=1}^K\exp\big(\check{\underline{{Z}}}_{n,k'}^{(L)}\big)}.
\end{align}
For notational convenience, the global mapping $\mathcal{F}$ from $\mathbf{X}$ to $\hat{\mathbf{Y}}$ dictated by our TGCN architecture is
\begin{align}
\hat{\mathbf{Y}}:=\mathcal{F}\big(\mathbf{X};\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L,\bm{\theta}_g\big)
\end{align}
and it is summarized by the block diagram of Fig.~\ref{fig:grnn}.
\begin{figure*}
\centering
\input{figs/grnn.tex}
\caption{TGCN with $L$ hidden (black) and one output (red) layers. The input $\mathbf{X}$ contains a collection of features per node, and the output to be predicted is the probability of each node belonging to each of the $K$ classes (labels) considered. Each layer of the TGCN is composed of our three novel modules (NAM, GAM, FAM) described in equations \eqref{eq:gf}, \eqref{eq:graphadaptive}, and \eqref{eq:linconv}. Notice the skip connections that input $\mathbf{X}$ to each layer [cf. \eqref{eq:residuallayer}]. }
\label{fig:grnn}
\end{figure*}
\subsection{Training and graph-smooth regularizers}
The proposed architecture is parameterized by the weights in \eqref{eq:residuallayer} and \eqref{eq:output}. We learn these weights during the training phase by minimizing the discrepancy between the estimated and the given labels; that is, we solve
\begin{align}\label{eq:trainobj}
\min_{\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L,\bm{\theta}_g}& \mathcal{L}_{tr} (\hat{\mathbf{Y}},\mathbf{Y}) +\mu_1\sum_{{i}=1}^{I}\text{Tr}(\hat{\mathbf{Y}}^{\top}\mathbf{A}_{i}\hat{\mathbf{Y}}) \nonumber\\
+&\mu_2\rho\big(\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L\big) +\lambda \sum_{l=1}^L\|\underline{\mathbf{R}}^{(l)}\|_1\nonumber\\
\text{s.t.}&~~ \hat{\mathbf{Y}}=\mathcal{F}\big( \mathbf{X};\{\bm{\theta}_z^{(l)}\}_{l=1}^L,\{\bm{\theta}_x^{(l)}\}_{l=1}^L,\bm{\theta}_g\big).
\end{align}
For SSL, a reasonable choice for the fitting cost is the cross-entropy loss over the labeled examples, i.e., $\mathcal{L}_{tr} (\hat{\mathbf{Y}},\mathbf{Y}):=-\sum_{{n}\in\mathcal{M}}\sum_{k=1}^K Y_{{n} k}\ln{\hat{Y}_{{n} k}}$. The first regularization term in \eqref{eq:trainobj} promotes smooth label estimates over the graphs \cite{smola2003kernels}, while the second, $\rho(\cdot)$, is an $\ell_2$-norm penalty on the TGCN parameters, typically used to avoid overfitting~\cite{goodfellow2016deep}. Finally, the $\ell_1$ norm in the third regularizer encourages learning sparse mixing coefficients, and hence promotes activating only a subset of relations per layer $l$. The learning algorithm will assign larger combining weights to topologies that are most appropriate for the given data. A backpropagation algorithm~\cite{rumelhart1986learning} is employed to minimize \eqref{eq:trainobj}. The computational complexity of evaluating \eqref{eq:residuallayer} scales linearly with the number of nonzero entries in $\underline{\mathbf{A}}$ (edges) [cf. \eqref{eq:sem}]. To recap, while most prior GCN works entail a single graph with one type of diffusion \cite{bronstein2017geometric,kipf2016semi}, this section has introduced a (residual) TGCN that: i) accounts for multiple graphs over the same set of nodes; ii) diffuses signals across each of the different graphs; iii) combines the signals of the different graphs using adaptive (learnable) coefficients; iv) implements a simple but versatile residual tensor map \eqref{eq:residuallayer}; and v) includes several types of graph-based regularizers.
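Before moving on, a minimal sketch of evaluating the objective in \eqref{eq:trainobj} is given below. In practice, the minimization is carried out by backpropagation through the TGCN; all names here are illustrative.
\begin{verbatim}
def tgcn_loss(Y_hat, Y, labeled, A, params, R_list, mu1, mu2, lam):
    # Y_hat   : (N, K) predicted probabilities;  Y : (N, K) one-hot labels
    # labeled : indices of the labeled set M;  A : (I, N, N) graph tensor
    # params  : weight arrays entering the l2 penalty rho(.)
    # R_list  : per-layer mixing tensors R^{(l)} for the l1 penalty
    eps = 1e-12
    fit = -np.sum(Y[labeled] * np.log(Y_hat[labeled] + eps))  # L_tr
    smooth = sum(np.trace(Y_hat.T @ A[i] @ Y_hat)
                 for i in range(A.shape[0]))                  # graph term
    l2 = sum(np.sum(p**2) for p in params)                    # rho(.)
    l1 = sum(np.sum(np.abs(R)) for R in R_list)               # sparsity
    return fit + mu1*smooth + mu2*l2 + lam*l1
\end{verbatim}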
\section{Tensor graphs for robust GCNs}
In the previous section, the nodes were involved in $I$ different relations, with each slab of our tensor graph $\underline{\mathbf{A}}$ representing one of those relations. In this section, the TGCN architecture is leveraged to robustify popular \emph{single-graph} GCNs. Consider that the nodes are involved in a \emph{single} relation represented by the graph $\bar{\mathcal{G}}$, which does not necessarily represent the true graph, but an approximate (nominal) version of it. This shows up, for example, in applications involving random graph models \cite{newman2018networks,bollobas2001random,harris2013introduction}, where $\bar{\mathcal{G}}$ is one of multiple possible realizations. Similarly, this model also fits adversarial settings, where the links of the nominal graph $\bar{\mathcal{G}}$ are corrupted by a foe (see Fig. \ref{fig:sampled} for a more detailed illustration of this setup). Our approach here is to use $\bar{\mathcal{G}}=(\mathcal{V},\bar{\mathbf{A}})$ to generate a set of $I$ candidate graphs $\{\mathcal{G}_{i}=(\mathcal{V},\mathbf{A}_{i})\}_{i=1}^I$, whose adjacency matrices form the tensor $\underline{\mathbf{A}}$. Clearly, this approach can also be used for multi-relational graphs, generating multiple candidate graphs per relation. The ensuing subsections elaborate on three scenarios of interest.
\subsection{Robustness to the graph topology identification method}
In applications dealing with communications, power, and transportation systems, the network connectivity may be explicitly known. In several other settings, however, the graph is implicit and must be learned from observed data. Several methods to infer the topology exist, each relying on a different model that relates the graph with data interdependencies~\cite{giannakis2017tutor}. Since in most applications a ground-truth graph is not available, one faces the challenge of selecting the appropriate graph-aware learning approach. Especially for the setup considered here, the approach selected (and hence the resultant graph) will have an impact on GCN performance. Consider first the $\kappa$-nearest neighbors ($\kappa$-NN) method, the `workhorse' approach employed to construct graphs in data mining and machine learning tasks, including regression, classification, collaborative filtering, and similarity search, to list just a few~\cite{dong2011efficient}. Whether nodes ${n}$ and ${n'}$ are linked in $\kappa$-NN depends on a distance metric between their nodal features. For the Euclidean distance, we simply have $d(n,n')=\|\mathbf{x}_{n}-\mathbf{x}_{n'}\|_2^2$. Then, for each node $n$ the distances with respect to all other nodes $n'\neq n$ are ranked, and $n$ is connected with the $\kappa$ nodes with the smallest distances $\{d(n,n')\}$. However, selecting the appropriate $\kappa$ and distance metric $d(\cdot,\cdot)$ is often arbitrary, and may not generalize well to unseen data, especially if the learning system operates in an online fashion. Hence, our approach to robustify SSL in that scenario is to consider a tensor graph where each slab corresponds to a graph constructed using a different value of $\kappa$ and (or) distance metric; a minimal construction sketch is given next.
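The sketch below assembles such a tensor graph from the feature matrix $\mathbf{X}$, using the squared Euclidean distance above; the particular $\kappa$ values are illustrative only.
\begin{verbatim}
def knn_graph(X, k):
    # Symmetrized k-NN adjacency from the (N, F) feature matrix X.
    d = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    np.fill_diagonal(d, np.inf)                # no self-loops
    A = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]         # k nearest per node
    A[np.arange(X.shape[0])[:, None], idx] = 1.0
    return np.maximum(A, A.T)

# one tensor slab per choice of kappa
A_tensor = np.stack([knn_graph(X, k) for k in (3, 5, 10)])
\end{verbatim}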
A similar challenge arises in the so-called correlation network methods \cite{giannakis2017tutor}. Here the topology is identified using pairs of feature vectors at nodes $n$ and $n'$, by comparing the sample correlation coefficient $\rho_{nn'}$ to a threshold $\eta$, and setting the edge weight to $\rho_{nn'}$ if $|\rho_{nn'}|>\eta$ and to zero otherwise. Selecting $\eta$ depends on the prescribed false-alarm rate, and can compromise the GCN's learning performance. If cause-effect links are of interest, partial correlation coefficients, which can be related to the inverse covariance matrix of the nodal features, have well-documented merits, especially when regularized with edge sparsity as in the graphical Lasso method; see e.g., \cite{giannakis2017tutor} and references therein. In such cases, our idea is to collect the multiple learned graphs, originating from possibly different methods, as slabs of $\underline{\mathbf{A}}$, and then train our TGCN architecture. Depending on the application at hand, it may be prudent to include in the training a block-sparsity penalty on the coefficients $\underline{\mathbf{R}}$, so that we exploit possibly available prior information on the most appropriate graphs.
\begin{figure}
\centering
\input{figs/sampledGraph.tex}
\caption{ED in operation on a perturbed social network among voters. Black solid edges are the true links and dashed red edges represent adversarially perturbed links.}
\label{fig:sampled}
\end{figure}
\subsection{Robustness to edge attacks via edge dithering}
The ever-expanding interconnection of social, email, and media service platforms presents an opportunity for adversaries manipulating networked data to launch malicious attacks~\cite{zugner18adv,goodfellow2014explaining,aggarwal2015outlier}. Perturbed edges modify the graph neighborhoods, which can markedly degrade the learning performance of GCNs. With reference to Fig.~\ref{fig:multilayer}, several edges of the voting network can be adversarially manipulated so that the voters are steered toward a specific direction. This section explains how the TGCN can deal with learning applications where graph edges have been adversarially perturbed. To this end, we introduce our so-termed edge dithering (ED) module, which, for the given nominal graph, creates new graphs by randomly adding/removing links with the aim of restoring a node's initial graph neighborhood. Dithering, in visual and audio applications, refers to the intentional injection of noise so that the quantization error is converted to random noise, which is perceptually more desirable~\cite{ulichney1988dithering}. We envision using our TGCN with each slab of the tensor $\underline{\mathbf{A}}$ corresponding to a graph that has been obtained after dithering some of the edges in the nominal (and potentially compromised) graph $\bar{\mathcal{G}}$. Mathematically, given the (perturbed) graph $\bar{\mathcal{G}}=(\mathcal{V},\bar{\mathbf{A}})$, we generate $I$ ED graphs $\{\mathcal{G}_{i}\}_{i=1}^{I}$, with $\mathcal{G}_{i}=(\mathcal{V},\mathbf{A}_{i})$, and with the edges of the auxiliary graph $\mathbf{A}_{i}$ selected randomly as
\begin{align}
\label{eq:samplegraph}
A_{n,n',i}=\left\{
\begin{array}{ll}
1& \text{wp.}~~~q_1^{ \delta(\bar{A}_{n,n'}=1)}{(1-q_2)}^{ \delta(\bar{A}_{n,n'}=0)}\\
0&\text{wp.}~~~q_2^{ \delta(\bar{A}_{n,n'}=0)}{(1-q_1)}^{ \delta(\bar{A}_{n,n'}=1)}
\end{array}
\right.
\end{align}
where $\delta(\cdot)$ is the indicator function, and the dithering probabilities are $q_1={\rm Pr}(A_{n,n',i}=1|\bar{A}_{n,n'}=1)$ and $q_2={\rm Pr}(A_{n,n',i}=0|\bar{A}_{n,n'}=0)$.
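A minimal sketch of sampling ED graphs according to \eqref{eq:samplegraph} follows; undirected graphs are assumed, and the function name is ours.
\begin{verbatim}
def edge_dither(A_bar, I, q1, q2, seed=0):
    # Draw I ED graphs from the nominal adjacency A_bar (N, N):
    # an existing edge survives w.p. q1; a non-edge stays absent w.p. q2.
    rng = np.random.default_rng(seed)
    N = A_bar.shape[0]
    out = []
    for _ in range(I):
        U = rng.random((N, N))
        U = np.triu(U, 1) + np.triu(U, 1).T   # symmetric coin flips
        keep = (A_bar == 1) & (U < q1)        # edge kept w.p. q1
        new  = (A_bar == 0) & (U >= q2)       # edge inserted w.p. 1 - q2
        A_i = (keep | new).astype(float)
        np.fill_diagonal(A_i, 0.0)
        out.append(A_i)
    return np.stack(out)                      # the (I, N, N) tensor slabs
\end{verbatim}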
If $n$ and $n'$ are connected in $\bar{\mathcal{G}}$, the edge connecting $n$ with $n'$ is deleted with probability $1-q_1$. Otherwise, if $n$ and $n'$ are not connected in $\bar{\mathcal{G}}$, i.e., $\bar{A}_{n,n'}=0$, an edge between $n$ and $n'$ is inserted with probability $1-q_2$. The ED graphs give rise to different neighborhoods $\mathcal{N}_n^{(i)}$, and the role of the ED module is to ensure that the unperturbed neighborhood of each node will be present with high probability in at least one of the $I$ graphs. For clarity, we formalize this intuition in the ensuing remark.
\begin{myremark}
With high probability, there exists a $\mathcal{G}_i$ in which a perturbed edge is restored to its initial value; that is, there exists an ED graph $i$ such that $A_{n,n',i}=A_{n,n'}$. Since each $\mathcal{G}_i$ is independently drawn, it holds that
\begin{align}
{\rm Pr}\big(\Pi_{i=1}^I\delta(A_{n,n',i}=1)\big|\bar{A}_{n,n'}=1,A_{n,n'}=0\big)&=q_1^I\nonumber\\
{\rm Pr}\big(\Pi_{i=1}^I\delta(A_{n,n',i}=0)\big|\bar{A}_{n,n'}=0,A_{n,n'}=1\big)&=q_2^I.\nonumber
\end{align}
\end{myremark}
That is, as $I$ increases, the probability that the true edge weight appears in none of the dithered graphs decreases exponentially. For instance, with $q_1=0.9$ and $I=10$ (the values used in Sec.~\ref{sec:ntest}), $q_1^I\approx 0.35$, so an adversarially inserted edge is removed in at least one ED graph with probability roughly $0.65$. By a similar argument, as $I$ increases, the probability that none of the graphs recovers the original neighborhood structure decreases, so that there exists an ED graph $i$ such that $\mathcal{N}_n^{(i)}=\mathcal{N}_n$. At least as important, since the TGCN linearly combines (the outputs of) the different graphs, it effectively spans the range of graphs that we are able to represent, rendering the overall processing scheme less sensitive to adversarial edge perturbations. Indeed, numerical experiments with adversarial attacks will demonstrate that, even with a small $I$, the use of ED significantly boosts classification performance. The operation of the ED module is illustrated in Fig.~\ref{fig:sampled}.
\subsection{Learning over random graphs}
Uncertainty is ubiquitous in nature, and graphs are no exception. Hence, an important research direction is to develop meaningful and tractable models for random graphs. Such models have originated from the graph-theory community (from the early Erd\H{o}s-R\'enyi models to more recent low-rank graphon generalizations \cite{bollobas2001random}), but also from the network-science community (e.g., preferential attachment models \cite[Ch. 12-16]{newman2018networks}), and the statistics community (e.g., exponential random graph models \cite{harris2013introduction}). These random graph models provide valuable tools for studying structural features of networks, such as giant and small components, degree distributions, path lengths, and so forth. They further provide parsimonious parametric models that can be leveraged to solve challenging inference and inverse problems. This is the case when one has access only to limited graph-related observations, such as the induced graph at a subset of nodes, or the mean and variance of certain graph motifs; see e.g., \cite{alon2007network}. In those cases, inferring the full graph can be infeasible, but one can postulate a particular random graph model, and utilize the available observations to infer the parameters that best fit the data. It is then natural to ponder whether such random graph models can be employed to learn from an (incomplete) set of graph signals using GCNs.
Several alternatives are available, including, for example, implementing a multi-layer GCN with a different realization of the graph per layer~\cite{isufi2017filtering}. Differently, here we advocate leveraging once again our TGCN architecture. Our idea is to draw $I$ realizations of the random graph model, form the $N \times N \times I$ tensor $\underline{\mathbf{A}}$, and train a TGCN. This way, each layer considers not only one, but multiple realizations of the graph. Clearly, if we consider an online setup where GCN layers are associated with time, the proposed model can be related to importance sampling and particle filtering approaches, with each slab of the tensor $\underline{\mathbf{A}}$ representing a different particle of the graph probability space \cite{candy2016bayesian}. This hints at the possibility of developing TGCN schemes for the purpose of nonlinear Bayesian estimation over graphs. While certainly of interest, this will be part of our future research agenda.
\section{Numerical tests}\label{sec:ntest}
This section tests the performance of the TGCN in learning from multiple, potentially perturbed graphs, and provides tangible answers to the following questions.
\begin{itemize}
\item[\textbf{Q1}.] How does TGCN compare to state-of-the-art methods for SSL over multi-relational graphs?
\item[\textbf{Q2}.] How can TGCN leverage topologies learned from multiple graph-learning methods?
\item[\textbf{Q3}.] How robust is TGCN compared to GCN in the presence of noisy features, noisy edge weights, and random as well as adversarial edge perturbations?
\item[\textbf{Q4}.] How sensitive is TGCN to the ED parameters, namely $q_1$, $q_2$, and $I$?
\end{itemize}
Unless stated otherwise, we test the proposed TGCN with $R=2$, $L=3$, $P^{(1)}=64$, $P^{(2)}=8$, and $P^{(3)}=K$. The regularization parameters $\{\mu_1,\mu_2,\lambda\}$ are chosen based on the performance of the TGCN on the validation set of each experiment. For training, an ADAM optimizer with learning rate 0.005 was employed \cite{kingma2015adam}, for 300 epochs\footnote{An epoch is a cycle through all the training examples.} with early stopping at 60 epochs\footnote{Training stops if the validation loss does not decrease for 60 epochs.}. The simulations were run using TensorFlow~\cite{abadi2016tensorflow}, and the code is available online\footnote{https://sites.google.com/site/vasioannidispw/github}.
\begin{figure*}[t]
\begin{subfigure}[b]{0.5\columnwidth}
\centering{\input{figs/synthaccvsfeatsnr.tex}}
\vspace{-.1cm}
\caption{ }
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\columnwidth}
\centering{\input{figs/synthaccvsadjsnr.tex}}
\vspace{-.1cm}
\caption{ }
\end{subfigure}\\%
\vspace{.1cm}
\begin{subfigure}[b]{0.5\columnwidth}
\centering{\input{figs/ionosphereaccvsfeatsnr.tex}}
\vspace{-.1cm}
\caption{ }
\end{subfigure}%
~
\begin{subfigure}[b]{0.5\columnwidth}
\centering{\input{figs/ionosphereaccvsadjsnr.tex}}
\vspace{-.1cm}
\caption{ }
\end{subfigure}%
\caption{Classification accuracy on the synthetic (a)-(b) and ionosphere (c)-(d) graphs described in Sec. \ref{Sec:NumericalSNR} as the noise level in the features [cf. \eqref{eq:featpertub}] or in the links [\eqref{eq:toppertub}] varies. Panels (a) and (c) show the classification accuracy for noisy features, while panels (b) and (d) show the same metric as the power of the noise added to the graph links varies.
} \label{fig:robust} \end{figure*}
\subsection{SSL using multiple learned graphs}\label{Sec:NumericalSNR}
This section reports the performance of the proposed architecture when multiple learned graphs are employed and the data are corrupted by noise. When the topologies and feature vectors are noisy, the observed $\underline{\mathbf{A}}$ and $\mathbf{X}$ are modeled as
\begin{align}
\label{eq:toppertub}
\underline{\mathbf{A}}=&\underline{\mathbf{A}}_{tr}+\underline{\mathbf{O}}_\mathbf{A}\\
\mathbf{X}=&\mathbf{X}_{tr}+\mathbf{O}_\mathbf{X}\label{eq:featpertub}
\end{align}
where $\underline{\mathbf{A}}_{tr}$ and $\mathbf{X}_{tr}$ represent the \textit{true} (nominal) topology and features, while $\underline{\mathbf{O}}_\mathbf{A}$ and $\mathbf{O}_\mathbf{X}$ denote the corresponding additive perturbations (outliers). We draw the entries of $\underline{\mathbf{O}}_\mathbf{A}$ and $\mathbf{O}_\mathbf{X}$ from a zero-mean uncorrelated Gaussian distribution whose variance is set by the specified signal-to-noise ratio (SNR). The robustness of our method is tested on two datasets: i) a synthetic dataset of ${N}=1,000$ samples that belong to $K=2$ classes, generated as $\mathbf{x}_{{n}}\in\mathbb{R}^{ F\times1}\sim\mathcal{N}(\mathbf{m}_x,0.4\mathbf{I})$ for ${n}=1,\ldots,1,000$, with $ F=10$ and the mean vector $\mathbf{m}_x\in \mathbb{R}^{ F\times1}$ being all zeros for the first class and all ones for the second class; and ii) the ionosphere dataset, which contains ${N}=351$ samples with $F=34$ features that belong to $K=2$ classes \cite{Dua:2017}. We generate $\kappa$-NN graphs by varying $\kappa$, and observe $|\mathcal{M}|=200$ and $|\mathcal{M}|=50$ nodes uniformly at random for the synthetic and ionosphere datasets, respectively. With this simulation setup, we test the different TGCNs in SSL for increasing SNR values (Figs. \ref{fig:robust}a, \ref{fig:robust}b, \ref{fig:robust}c, \ref{fig:robust}d). We deduce from the classification performance of our method in Fig. \ref{fig:robust} that multiple graphs lead to learning more robust representations of the data, demonstrating the merits of the proposed TGCN architecture.
\subsection{Robustness of TGCNs to random graph perturbations}\label{Sec:NumericalCitationED}
For this experiment, the novel ED module and TGCN architecture are used to account for perturbations of the graph edges. The experiments are run using three citation network datasets from~\cite{sen2008collective}. The adjacency matrix of the citation graph is $\mathbf{A}$, its nodes correspond to different documents from the same scientific category, and $A_{nn'}=1$ implies that paper $n$ cites paper $n'$. Each document ${n}$ is associated with a label $y_{n}$ that indicates the document's subcategory. ``Cora'' contains papers related to machine learning, ``Citeseer'' includes papers related to computer and information science, while ``Pubmed'' contains biomedical-related papers; see also Table \ref{tab:citation}. To facilitate comparison, we reproduce the same experimental setup as in \cite{kipf2016semi}, i.e., the same split of the data into training, validation, and testing subsets. For this experiment, the perturbed graph $\bar{\mathbf{A}}$ is generated by inserting new edges in the original graphs between random pairs of nodes $n,n'$ that are not connected in $\mathbf{A}$, meaning $A_{n,n'}=0$. This can represent, for example, citations that should have been present but were missed by the authors. The added edges can be regarded as drawn from a Bernoulli distribution.
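The random attack used in this experiment admits a simple sketch (again, names are illustrative):
\begin{verbatim}
def insert_random_edges(A, n_new, seed=0):
    # Perturb A by adding n_new edges between unconnected node pairs.
    rng = np.random.default_rng(seed)
    A_bar, N, added = A.copy(), A.shape[0], 0
    while added < n_new:
        n, m = rng.integers(N, size=2)
        if n != m and A_bar[n, m] == 0:
            A_bar[n, m] = A_bar[m, n] = 1.0
            added += 1
    return A_bar
\end{verbatim}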
TGCN utilizes the multiple graphs generated via the ED module with $I=10$ samples, $q_1=0.9$, and $q_2=1$, since no edge is deleted in $\bar{\mathbf{A}}$. Fig.~\ref{fig:adrandpert} depicts the classification accuracy of the GCN~\cite{kipf2016semi} compared to that of the proposed TGCN as the number of perturbed edges increases. Clearly, our ED-TGCN is more robust than the standard GCN. Moreover, even when no edges are perturbed, the TGCN outperforms the GCN. This observation may be attributed to noisy links in the original graphs, which hinder classification performance. Furthermore, the SSL performance of the GCN significantly degrades as the number of perturbed edges increases, which suggests that the GCN is challenged even by ``random attacks.''
\begin{figure*}
\begin{subfigure}[b]{0.5\columnwidth}
\centering\input{figs/adlinkscora.tex}
\caption{Cora}
\end{subfigure}~\begin{subfigure}[b]{0.5\columnwidth}
\centering\input{figs/adlinkspubmed.tex}
\caption{Pubmed}
\end{subfigure}
\begin{subfigure}[b]{0.5\columnwidth}
\centering\input{figs/adlinkciteseer.tex}
\caption{Citeseer}
\end{subfigure}~\begin{subfigure}[b]{0.5\columnwidth}
\centering\input{figs/adlinkspolblog.tex}
\caption{Polblogs}
\end{subfigure}
\caption{Classification accuracy for the setup described in Sec. \ref{Sec:NumericalCitationED} as the number of perturbed edges increases.}
\label{fig:adrandpert}
\end{figure*}
\begin{table}[]
\hspace{0cm}
\centering
\caption{List of citation graph datasets considered in Secs. \ref{Sec:NumericalCitationED} and \ref{Sec:NumericalCitationED2}, along with their most relevant dimensions.}
\vspace{0.2cm}
\begin{tabular}{c c c }
\hline
\textbf {Dataset} & \textbf {Nodes} ${N}$ & \textbf {Classes} $K$ \\ \hline \hline
Cora & 2,708 & 7 \\
Citeseer & 3,327 & 6 \\
Pubmed & 19,717 & 3 \\
Polblogs & 1,224 & 2\\
\hline
\end{tabular}
\label{tab:citation}
\end{table}
{\setlength\extrarowheight{2pt}
\begin{table*}
\caption{Classification accuracy for the setup described in Sec. \ref{Sec:NumericalCitationED2} as the number of attacked nodes $|\mathcal{T}|$ increases.
} \label{tab:results} \centering
\begin{tabular}{@{}p{2cm}p{2cm}ccccc@{}}
\hline
\multirow{3}{*}{\vspace*{8pt}\textbf{Dataset}}& \multirow{3}{*}{\vspace*{8pt}\textbf{Method}}&\multicolumn{5}{c}{\textbf{Number of attacked nodes} $|\mathcal{T}|$}\\\cmidrule{3-7}
& & {\textsc{20}} & {\textsc{30}} & {\textsc{40}} & {\textsc{50}} & {\textsc{60}} \\ \hline\hline
\multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Citeseer}} } & \textsc{GCN} & 60.49 & 56.00 & 61.49 & 56.39 & \textbf{58.99} \\
& \textsc{TGCN} & \textbf{70.99}& \textbf{56.00} & \textbf{61.49} & \textbf{61.20} & 58.66 \\ \cmidrule{1-7}
\multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Cora}}} & \textsc{GCN} & 76.00 & 74.66 & 76.00 & 62.39 & 73.66 \\
& \textsc{TGCN} & \textbf{78.00} & \textbf{82.00} & \textbf{84.00} & \textbf{73.59} & \textbf{74.99}\\ \cmidrule{1-7}
\multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Pubmed}}} & \textsc{GCN} & \textbf{74.00} & 71.33 & 68.99 & 66.40 & 69.66 \\
& \textsc{TGCN} & 72.00 & \textbf{75.36} & \textbf{71.44} & \textbf{68.50} & \textbf{74.43} \\ \cmidrule{1-7}
\multirow{2}{*}{\rotatebox{0}{\hspace*{-0pt}{Polblogs}}} & \textsc{GCN} & \textbf{85.03} & 86.00 & 84.99 & 78.79 & 86.91 \\
& \textsc{TGCN} & 84.00 & \textbf{88.00} & \textbf{91.99} & \textbf{78.79} & \textbf{92.00} \\
\hline
\end{tabular}
\end{table*}
}\setlength\extrarowheight{0pt}
\subsection{Robustness to adversarial attacks on edges}\label{Sec:NumericalCitationED2}
The original graphs corresponding to Cora, Citeseer, Pubmed, and Polblogs were perturbed using the adversarial setup in~\cite{zugner18adv}, where structural attacks are effected on attributed graphs. These attacks perturb connections adjacent to a set $\mathcal{T}$ of targeted nodes by adding or deleting edges~\cite{zugner18adv}. Our ED module uses $I=10$ sampled graphs with $q_1=0.9$ and $q_2=0.999$. For this experiment, 30\% of the nodes are used for training, 30\% for validation, and 40\% for testing. The nodes in $\mathcal{T}$ are in the testing set. Table \ref{tab:results} reports the classification accuracy of the GCN and the proposed TGCN for different numbers of attacked nodes $|\mathcal{T}|$. Different from Fig.~\ref{fig:adrandpert}, where the classification accuracy over the test set is reported, Table \ref{tab:results} reports the classification accuracy over the set of attacked nodes $\mathcal{T}$. It is observed that the proposed TGCN is more robust than the GCN under adversarial attacks~\cite{zugner18adv}. This finding justifies the use of the novel ED module in conjunction with the TGCN, which judiciously selects extracted features originating from non-corrupted neighborhoods. Fig. \ref{fig:robustsens} showcases the sensitivity of the TGCN to the parameters of the ED module for the experiment in Table \ref{tab:results} with the Cora dataset and $|\mathcal{T}|=30$. Evidently, the TGCN's performance is relatively smooth over certain ranges of the parameters. In accordance with the earlier remark, notice that even for small $I$ the TGCN performance improves considerably.
\begin{figure*}[t]
{\input{figs/agcnq2per.tex}}{\input{figs/agcnp2per.tex}} {\input{figs/agcnI2per.tex}}\vspace{-0.0cm}
\caption{SSL classification accuracy of the TGCN under varying edge retention probabilities $q_1$ (for nominal edges) and $q_2$ (for non-edges), and number of ED samples $I$.
} \label{fig:robustsens} \end{figure*}
\subsection{Predicting protein functions}\label{Sec:NumericalProteins}
This section tests the performance of the TGCN in \emph{predicting ``protein functions.''} Protein-to-protein interaction networks relate two proteins via multiple cell-dependent relations that can be modeled using \emph{multi-relational} graphs; cf. Fig.~\ref{fig:multilayer}. Protein classification seeks the unknown function of some proteins (nodes) based on the known functionality of a small subset of proteins, and the protein-to-protein networks~\cite{zitnik2017predicting,ioannidis2019camsap}. Given a target function $y_n$ that is available on a subset of proteins ${n}\in\mathcal{M}$, known functions on all proteins summarized in $\mathbf{X}$, and the multi-relational protein networks $\underline{\mathbf{A}}$, the goal is to predict whether the proteins ${n}\in{\mathcal{V}\setminus\mathcal{M}}$ are associated with the target function or not. Hence, the number of target classes is $K=2$. In this setting, $\mathbf{A}_{i}$ represents the protein connectivity in the ${i}$th cell type, which could be a cerebellum, midbrain, or frontal lobe cell. Table \ref{tab:biodata} summarizes the three datasets used in the following experiments.
\begin{table}[t]
\hspace{0cm}
\centering
\caption{List of protein-to-protein interaction datasets considered in Sec. \ref{Sec:NumericalProteins} and their associated dimensions.}
\vspace{0.2cm}
\begin{tabular}{c c c c}
\hline
\textbf{Dataset} & \textbf{Nodes} ${N}$ & \textbf{Features} $ F$ & \textbf{Relations} ${I}$\\ \hline\hline
Generic cells & 4,487 & 502 & 144\\
Brain cells & 2,702 & 81 & 9 \\
Circulation cells & 3,385 & 62 & 4\\
\hline
\end{tabular}
\vspace{0.0cm}
\label{tab:biodata}
\end{table}
We compare the TGCN with the GCN in~\cite{kipf2016semi}, which is the single-relational alternative, and with Mune~\cite{ye2018robust}, which represents a state-of-the-art diffusion-based approach for SSL over multi-relational graphs. Since the GCN only accounts for a single graph, we select for the GCN the relation $i$ that achieves the best results on the validation set. Furthermore, Mune does not account for feature vectors at the nodes of the graph. For a fair comparison, we also employ the TGCN without using the feature vectors, that is, $\mathbf{X}=\mathbf{I}_{N}$. Finally, since the classes are heavily unbalanced, we evaluate the performance of the various approaches using the macro F1 score for predicting the protein functions.\footnote{Accurate classifiers achieve macro F1 values close to 1.}
\begin{figure*}
\begin{floatrow}
\ffigbox{%
\centering
\input{figs/macrof1brain.tex}
}{%
\caption{Brain cells}\label{fig:bc}%
}
\ffigbox{%
\centering
\input{figs/macrof1circulation.tex}
}{%
\caption{Circulation cells}\label{fig:cc}%
}
\end{floatrow}
\end{figure*}
\begin{figure}
\centering
\input{figs/macrof1general.tex}
\caption{Generic cells}
\label{fig:gc}
\end{figure}
Figs.~\ref{fig:bc}-\ref{fig:gc} report the macro F1 values of the aforementioned approaches for varying numbers of labeled samples $|\mathcal{M}|$. It is observed for all datasets that: i) the macro F1 score improves with increasing $|\mathcal{M}|$ across all algorithms; ii) the TGCN, which judiciously combines the multiple relations, outperforms the GCN by a large margin; and iii) when nodal features are not used (that is, $\mathbf{X}=\mathbf{I}_N$), the TGCN outperforms the state-of-the-art Mune.
\section{Conclusions}
This paper put forth a novel deep SSL approach based on a tensor graph convolutional network (TGCN).
The proposed architecture is able to account for nodes engaging in multiple relations, can be used to reveal the underlying data structure, and is computationally affordable, since the number of operations scales linearly with the number of graph edges. Instead of committing a priori to a specific type of diffusion, the TGCN learns the diffusion pattern that best fits the data. Our TGCN was also adapted to robustify SSL over a single graph subject to model-based, adversarial, or random edge perturbations. To cope with adversarial perturbations, random edge dithering (ED) was performed on the (nominal) graph edges, and the dithered graphs were used as input to the TGCN. Our approach achieved state-of-the-art classification results over multi-relational graphs when nodes are accompanied by feature vectors. Further experiments demonstrated the performance gains of the TGCN in the presence of noisy features, noisy edge weights, and random as well as adversarial edge perturbations. Future research includes predicting time-varying labels, and using the TGCN for nonlinear Bayesian estimation over graphs.
\bibliographystyle{IEEEtran}
\section{Introduction}
Among the candidates for a theory of quantum gravity, non-perturbative quantum gravity has developed rapidly. In recent years, as a non-perturbative quantum gravity scheme, Loop Quantum Gravity (LQG) has shown increasing strength in the quantization of gravity \cite{thiemann06}. LQG is rigorously constructed on the kinematical Hilbert space. Many spatial geometrical operators, such as the area, the volume, and the length operators, have also been constructed on this kinematical Hilbert space. Successful examples of LQG include the quantized area and volume operators \cite{rovelli95,ashtekar97,ashtekar98a,thiemann98}, a calculation of the entropy of black holes \cite{rovelli96}, Loop Quantum Cosmology (LQC) \cite{bojowald05}, etc. As a successful application of LQG to cosmology, LQC has an outstanding and interesting result---replacing the big bang spacetime singularity of cosmology with a big bounce \cite{bojowald01}. In addition, LQC also gives a quantum suppression of classical chaotic behavior near singularities in Bianchi-IX models \cite{bojowald04a,bojowald04b}. Furthermore, it has been shown that the non-perturbative modification of the matter Hamiltonian leads to a generic phase of inflation \cite{bojowald02,date05,xiong1}. Recently the authors of \cite{Mielczarek08} proposed a new quantization scheme (we will call it the $\mu_{MS}$ scheme in the following) for LQC \footnote{Eq.~(3) of \cite{Mielczarek08} is somewhat confusing. In fact, the new quantization scheme has nothing to do with that equation; the argument for the new scheme amounts to expanding the standard LQC term (see Eqs.~(8)-(10)) and keeping the terms of the expansion up to fourth order.}. In this new quantization scheme, the classical big bang singularity is also replaced by a quantum bounce. Most interestingly, this new scheme introduces a novel quantum singularity. At this quantum singularity the Hubble parameter diverges, but the universe can evolve through it regularly, in contrast to the case of the classical big bang singularity. In order to investigate the novel quantum singularity in this $\mu_{MS}$ scheme, we apply it to a universe filled with a tachyon field and compare the results with the classical dynamics and with the $\bar{\mu}$ scheme presented in our previous paper \cite{xiong1}. The organization of this paper is as follows. In Sec. \ref{Sec.2} we present the effective framework of LQC coupled with the tachyon field, together with a brief review of the new quantization scheme suggested in \cite{Mielczarek08}. In addition, we show some general properties of this specific effective LQC system. In Sec. \ref{Sec.3}, we use numerical methods to investigate the detailed dynamics of the universe filled with the tachyon field in the $\mu_{MS}$ scheme. At the same time, we compare the differences among the $\mu_{MS}$ scheme, the $\bar{\mu}$ scheme, and the classical behavior. In Sec. \ref{Sec.4} we present some comments on the traversable singularity. In Sec.~\ref{Sec.5} we conclude and discuss our results on the $\mu_{MS}$ quantization scheme. Throughout the paper we adopt units with $c=G=\hbar=1$.
\section{the effective framework of LQC coupled with tachyon field}
\label{Sec.2}The tachyon scalar field arises in string theory \cite{Sen02,Sen02b}; it may provide an explanation for inflation \cite{tachyon_inflation} at early epochs and could contribute to some new form of the cosmological dark energy \cite{tachyon_dark} at late times.
Moreover, the tachyon scalar field can also be used to interpret dark matter \cite{causse04}. In this paper we investigate the tachyon field in the effective framework of LQC. According to Sen \cite{Sen02}, in a spatially flat FRW cosmology the Hamiltonian for the tachyon field can be written as
\[ H_\phi (\phi ,\Pi _\phi )=a^3\sqrt{V^2(\phi )+a^{-6}\Pi _\phi ^2} \]
where $\Pi _\phi =\frac{a^3V\dot{\phi}}{\sqrt{1-\dot{\phi}^2}}$ is the conjugate momentum for the tachyon field $\phi $, $V(\phi )$ is the potential term for the tachyon field, and $a$ is the FRW scale factor. It follows that
\[ -1\leq \dot{\phi}\leq 1. \]
Following our previous work \cite{xiong1}, we take a specific potential for the tachyon field \cite{Sen02b} in this paper,
\[ V(\phi )=V_0e^{-\alpha \phi }, \]
where $V_0$ is a positive constant and $\alpha $ is the tachyon mass. As in \cite{xiong1}, we set $V_0=0.82$ and $\alpha =0.5$. For the flat model of the universe, the phase space of LQC is spanned by the coordinates $c=\gamma \dot{a}$ and $p=a^2$, which are the only remaining degrees of freedom after symmetry reduction and gauge fixing. In terms of the connection and triad, the classical Hamiltonian constraint is given by \cite{ashtekar03}
\begin{equation}
H_{cl}=-\frac 3{8\pi \gamma ^2}\sqrt{p}c^2+H_\phi .
\end{equation}
In the $\bar{\mu}$ quantization scheme, the effective Hamiltonian in LQC is given by \cite{ashtekar06b}
\begin{equation}
H_{eff,\bar{\mu}}=-\frac 3{8\pi \gamma ^2\bar{\mu}^2}\sqrt{p}\sin ^2\left( \bar{\mu}c\right) +H_\phi .
\end{equation}
The variable $\bar{\mu}$ corresponds to the dimensionless length of the edge of the elementary loop and is given by
\begin{equation}
\bar{\mu}=\xi p^{-1/2},\label{mubar}
\end{equation}
where $\xi $ is a positive constant that depends on the particular holonomy-correction scheme; here it is given by
\begin{equation}
\xi ^2=2\sqrt{3}\pi \gamma l_p^2,
\end{equation}
where $l_p$ is the Planck length. In the $\mu_{MS}$ quantization scheme, the effective Hamiltonian in LQC is given by \cite{Mielczarek08}
\begin{eqnarray}
H_{eff,\mu_{MS}}=-\frac 3{8\pi \gamma ^2}\frac{1-\sqrt{1-\frac 43\sin ^2(\bar{\mu}c)}}{\frac 23\bar{\mu}^2}\sqrt{p}+H_\phi .
\end{eqnarray}
In analogy to the $\mu_0$ scheme and the $\bar{\mu}$ scheme, we can understand this $\mu_{MS}$ scheme as follows. We write the effective Hamiltonian as
\begin{equation}
H_{eff,\mu_{MS}}=-\frac 3{8\pi \gamma ^2\mu_{MS}^2}\sqrt{p}\sin ^2\left( \mu_{MS}c\right) +H_\phi ,
\end{equation}
where $\mu_{MS}$ is determined by
\begin{eqnarray}
\frac {\sin ^2\left( \mu_{MS}c\right)}{\mu_{MS}^2}=\frac{1-\sqrt{1-\frac 43\sin ^2(\bar{\mu}c)}}{\frac 23\bar{\mu}^2},
\end{eqnarray}
with $\bar{\mu}$ determined by Eq.~(\ref{mubar}). For the argument leading to the above equation we refer to \cite{Mielczarek08}. In terms of the effective energy density $\rho _{eff}$ and the effective pressure $P_{eff}$, the modified Friedmann equation, the modified Raychaudhuri equation, and the conservation equation take the same form as their classical counterparts,
\begin{eqnarray}
&&H^2=\frac{8\pi }3\rho_{eff}, \\
&&\frac{\ddot{a}}a=\dot{H}+H^2=-\frac{4\pi }3\left( \rho _{eff}+3P_{eff}\right) , \label{equation_H} \\
&&\dot{\rho}_{eff}+3H\left( \rho_{eff}+P_{eff}\right) =0.
\end{eqnarray}
In the above equations, $H\equiv \frac{\dot{a}}a$ stands for the Hubble parameter, while $\rho_{eff}$ and $P_{eff}$ are the effective energy density and the effective pressure, respectively.
For the following three situations, namely the classical cosmology, the $\bar{\mu}$ scheme, and the $\mu_{MS}$ scheme in LQC, we have
\begin{widetext}
\begin{eqnarray}
\rho _{eff,cl} &=&\rho _\phi =\frac V{\sqrt{1-\dot{\phi}^2}}, \\
P_{eff,cl} &=&P_\phi =-V\sqrt{1-\dot{\phi}^2}, \\
\rho _{eff,\bar{\mu}} &=&\rho _\phi \left( 1-\frac{\rho _\phi }{% \rho _c}\right) , \\
P_{eff,\bar{\mu}} &=&P_\phi \left( 1-\frac{2\rho _\phi }{\rho _c}% \right) -\frac{\rho _\phi ^2}{\rho _c}, \\
\rho _{eff,\mu_{MS}} &=&\rho _\phi \left( 1-\frac{\rho _\phi }{% 3\rho _c}\right) \left[ \frac 34+\frac 14\left( 1-\frac 23\frac{\rho _\phi }{% \rho _c}\right) ^{-2}\right] , \\
P_{eff,\mu_{MS}} &=&\left[ P_\phi \left( 1-4\frac{\rho _\phi }{% \rho _c}\right) -\frac{\rho _\phi ^2}{3\rho _c}\right] \left[ \frac 34+\frac % 14\left( 1-\frac 23\frac{\rho _\phi }{\rho _c}\right) ^{-2}\right] \nonumber \\
&&+\frac 13\left( 1-\frac{\rho _\phi }{3\rho _c}\right) \left( 1-\frac 23% \frac{\rho _\phi }{\rho _c}\right) ^{-3}\left( \frac{\rho _\phi ^2}{\rho _c}+% \frac{\rho _\phi P_\phi }{\rho _c}\right) ,
\end{eqnarray}
\end{widetext}
where $\rho _c=\frac{\sqrt{3}}{16\pi ^2\gamma ^3l_p^2}$. In the first two lines we have already used the Hamiltonian for the tachyon field and the definitions of the energy density and the pressure \cite{hossain05}
\begin{equation}
\rho _\phi =\frac{H_\phi}{a^3} ,\ P_\phi =-\frac{1}{3a^2}\frac{\partial H_\phi }{% \partial a}.
\end{equation}
Correspondingly, we have the following evolution equations
\begin{widetext}
\begin{eqnarray}
classical &:&\ddot{\phi}=-\left( 1-\dot{\phi}^2\right) \frac{V^{\prime }}% V\mp 3\dot{\phi}\left( 1-\dot{\phi ^2}\right) \left[ \frac{8\pi }3\rho _\phi \right] ^{1/2}, \label{ddotphi_cl} \\
\bar{\mu} &:&\ddot{\phi}=-\left( 1-\dot{\phi}^2\right) \frac{% V^{\prime }}V\mp 3\dot{\phi}\left( 1-\dot{\phi ^2}\right) \left[ \frac{8\pi }3\rho _\phi \left( 1-\frac{\rho _\phi }{\rho _c}\right) \right] ^{1/2}, \label{ddotphi_2} \\
\mu_{MS} &:&\ddot{\phi}=-\left( 1-\dot{\phi}^2\right) \frac{% V^{\prime }}V \nonumber \\
&&\mp 3\dot{\phi}\left( 1-\dot{\phi ^2}\right) \left\{ \frac{8\pi }3\rho _\phi \left( 1-\frac{\rho _\phi }{3\rho _c}\right) \left[ \frac 34+\frac 14\left( 1-\frac 23\frac{\rho _\phi }{\rho _c}\right) ^{-2}\right] \right\} ^{1/2}. \label{ddotphi_4}
\end{eqnarray}
\end{widetext}
In the above equations, ``$-$'' corresponds to an expanding universe while ``$+$'' corresponds to a contracting universe. For the $\bar{\mu}$ scheme, the bounce happens at $\rho _\phi =\rho _c$, so we have \cite{ashtekar06a,ashtekar06b,xiong1}
\[ \rho _\phi =\frac{V_0e^{-\alpha \phi }}{\sqrt{1-\dot{\phi}^2}}\leq \rho _c. \]
For the $\mu_{MS}$ scheme, the bounce happens at $\rho _\phi =3\rho _c$, so we have \cite{Mielczarek08}
\[ \rho _\phi =\frac{V_0e^{-\alpha \phi }}{\sqrt{1-\dot{\phi}^2}}\leq 3\rho _c. \]
The comparison of these features is shown in Figs. \ref{fig1} and \ref{fig2}. In the left panel of Fig. \ref{fig1} we compare the behavior of the Hubble parameter versus the energy density of the matter. The upper half corresponds to the expansion stage of the universe, and the lower half corresponds to the contraction stage. For the $\bar{\mu}$ quantization scheme, we can clearly see the bounce behavior at $\rho _\phi =\rho _c$. When $\rho_\phi>\rho_c/2$ the universe undergoes a superinflation phase ($\dot{H}>0$). When $\rho_\phi$ becomes small, the universe behaves as in the standard picture. For the $\mu_{MS}$ scheme, in contrast, the bounce happens at $\rho _\phi =3\rho _c$.
When $\rho_\phi>3\rho_c/2$, the universe undergoes a superinflation phase. In the region where $\rho_\phi$ is small, the Hubble parameter of the $\mu_{MS}$ scheme is smaller than that of the standard cosmology; in this sense the universe of the $\mu_{MS}$ scheme expands more slowly than in the standard cosmology. In Fig. \ref{fig2} we compare the behavior of $\ddot{\phi}$ in the expanding universe (taking ``$-$'' in Eqs. (\ref{ddotphi_cl})-(\ref{ddotphi_4})) in the phase space, i.e., the $\dot{\phi}$-$\phi $ space. Firstly, from the upper three subfigures, we can see that the quantum correction in the $\bar{\mu}$ scheme of LQC changes the amplitude of the classical $\ddot{\phi}$ only slightly, while in contrast, the quantum correction in the $\mu_{MS}$ scheme changes this amplitude significantly. Besides, the quantum correction in the $\bar{\mu}$ scheme noticeably changes the shapes of the iso-$\ddot{\phi}$ contours, while the quantum correction in the $\mu_{MS}$ scheme changes these shapes negligibly. This difference results from the different locations of the bounce region: the quantum effect is strongest in the bounce region for each quantum correction, and the region shown in these three upper subfigures is near the bounce region of the $\bar{\mu}$ scheme but rather far away from the bounce region of the $\mu_{MS}$ scheme. Secondly, in the lower three subfigures, we compare the quantum correction of the $\mu_{MS}$ scheme of LQC with the classical behavior. We see that the behavior of $\ddot{\phi}$ near the singularity region is changed completely: $\ddot{\phi}$ diverges when the state approaches the singularity line (the contour line of $\rho _\phi =\frac 32\rho _c$) except for the single point $\left( \phi ,\dot{\phi}\right) =\left( -\frac 1\alpha \ln {% \frac{3\rho _c}{2V_0}},0\right) $, where $\ddot{\phi}=\alpha $. Yet near the bounce region, the behavior of $\ddot{\phi}$ is similar to that of the $\bar{\mu}$ scheme in its corresponding bounce region.
\section{quantitative analysis of the cosmological dynamics coupled with tachyon field}
\label{Sec.3}In the original paper \cite{Mielczarek08}, the authors provided a qualitative analysis of the dynamics of LQC in the $\mu_{MS}$ scheme. Taking advantage of the specific model of the universe coupled with the tachyon field, we can investigate this dynamics quantitatively, and compare the differences among the classical, the $\bar{\mu}$-scheme, and the $\mu_{MS}$-scheme dynamics. We solve the equations (\ref{ddotphi_cl}), (\ref{ddotphi_2}) and (\ref{ddotphi_4}) with a Runge-Kutta subroutine (a minimal sketch of such an integration is given below). The results are presented in Fig. \ref{fig3}. For the $\mu_{MS}$ scheme, the difference between the classical and the quantum behaviors is most explicit in the region between the bouncing boundary and the singularity line, while the difference in the region on the right-hand side of the singularity is negligible. Note that the admissible states for the $\bar{\mu}$ scheme all lie in this region. So we can expect that the difference between the $\mu_{MS}$ scheme and the $\bar{\mu}$ scheme is nothing but the difference between the classical one and the $\bar{\mu}$ one, which is presented in Figs. 1 and 2 of our previous paper \cite{xiong1}. Near the bounce region, the quantum behavior is similar for both the $\bar{\mu}$ scheme and the $\mu_{MS}$ scheme, although this behavior emerges at different places in the $\dot{\phi}$-$\phi $ space for the two schemes.
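As an illustration, a minimal SciPy sketch of such an integration for the expanding branch of (\ref{ddotphi_4}) is given below. The potential parameters mirror those above ($V_0=0.82$, $\alpha=0.5$, $l_p=1$); the value of the Barbero-Immirzi parameter $\gamma$ and the initial conditions are our own illustrative choices, with the initial state taken on the right-hand side of the singularity line, where the flow is regular.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

V0, alpha, gamma = 0.82, 0.5, 0.2375
rho_c = np.sqrt(3.0) / (16.0 * np.pi**2 * gamma**3)   # l_p = 1

def rhs(t, y):
    # y = (phi, dphi); expanding ("-") branch of Eq. (ddotphi_4),
    # with V'/V = -alpha for the exponential potential
    phi, dphi = y
    rho = V0 * np.exp(-alpha * phi) / np.sqrt(1.0 - dphi**2)
    brk = 0.75 + 0.25 / (1.0 - 2.0*rho/(3.0*rho_c))**2
    H2 = (8.0*np.pi/3.0) * rho * (1.0 - rho/(3.0*rho_c)) * brk
    ddphi = (1.0 - dphi**2) * (alpha - 3.0*dphi*np.sqrt(H2))
    return [dphi, ddphi]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.2],
                rtol=1e-8, atol=1e-10, dense_output=True)
\end{verbatim}
Starting instead between the bounce and singularity lines, the stiffness near $\rho_\phi=3\rho_c/2$ calls for very small step sizes, consistent with the steep trajectories observed in Fig.~\ref{fig3}.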
Near the singularity region, there are no special features of the quantum evolution in the $\dot{\phi}$-$\phi$ space, except that the quantum trajectories are much steeper than the classical ones. This steeper behavior results from the singular behavior of the quantum dynamics, which makes $\ddot{\phi}$ much larger than its classical value. From the right panel of Fig.~\ref{fig3}, we can see that both the hyper-inflationary and deflationary phases of the universe emerge clearly. The universe expands faster and faster before the singularity, until the acceleration becomes infinite. This stage corresponds to the hyper-inflationary phase of the universe. After this singularity, the universe expands more and more slowly, and its behavior quickly comes back to the classical one \cite{sami02,guo03}. This stage is the deflationary phase of the universe. \section{Comments on the traversable singularity} \label{Sec.4} As a dynamical system, (\ref{ddotphi_4}) is singular when $\rho_\phi=3\rho_c/2$, which corresponds to the right ``C"-shaped curve of the left panel of Fig.~\ref{fig3}. That means that the dynamical system is only defined in two separated regions. One is the region between the left ``C"-shaped line and the right ``C"-shaped line, and the other is the region on the right-hand side of the right ``C"-shaped line. Then a question arises naturally--does the numerical behavior traversing the separating ``C"-shaped line make sense, or is it only a numerical artifact\footnote{We thank our referee for pointing out this problem, which improved our understanding of the traversable singularity.}? If we consider these two regions separately, the dynamical system is well defined in the sense of the Cauchy uniqueness theorem. The orbits in the phase space $\phi$-$\dot{\phi}$ are smooth. Taking the phase space as $R^2$, these orbits are smooth curves. So these smooth curves have proper limit points on the separating ``C"-shaped line. From the numerical solutions shown in Fig.~\ref{fig3}, we can see that different orbits have different limit points on the ``C"-shaped line. This is because the vector flow generating the trajectories (on the $\phi$-$\dot{\phi}$ plane) has well-defined directions (vertical) at the singularity, which means the trajectories cannot intersect there. Then it is natural to join the two orbits in the two separated regions with the same limit points. In this way we get a well-defined dynamical system in the total region which lies on the right-hand side of the left ``C"-shaped line. We can expect that the numerical solution obtained with the Runge-Kutta method will converge to this solution (we also performed numerical integrations starting from both sides towards the singular point and obtained the same result). So the numerical results presented in the above sections do make sense. In the following, we come back to the spacetime to check the properties of this traversable singularity. Corresponding to the two separated regions of the phase space $\phi$-$\dot{\phi}$, the scale factor $a$ can be determined by $\phi$ and $\dot{\phi}$ through \begin{eqnarray} a(\phi,\dot{\phi})=(H_\phi/\rho_\phi)^{1/3},\label{determinea} \end{eqnarray} which is a smooth function of $\phi$ and $\dot{\phi}$ (cf. the definition $\rho_\phi=H_\phi/a^3$). Therefore, the two spacetime regions of the universe corresponding to these two phase-space regions are smooth. When the universe evolves to $\rho_\phi=3\rho_c/2$, $a$ is also well defined through (\ref{determinea}).
This implies that the whole spacetime of the universe is well defined by joining the two above-mentioned regions together through the well-defined $a$ at $\rho_\phi=3\rho_c/2$. But since $\ddot{\phi}$ is singular there, $\dot{a}$ is also singular there, which makes the spacetime of the universe non-smooth. In this sense the spatial slice corresponding to this special cosmic time is a traversable singularity of the spacetime. According to \cite{ellis77,tipler77,krolak88,seifert77}, singularities can be classified into strong and weak types. A singularity is strong if the tidal forces cause complete destruction of all objects irrespective of their physical characteristics, whereas a singularity is considered weak if the tidal forces are not strong enough to forbid the passage of objects. In this classification the singularity discussed here is only a weak singularity. As to the cosmological singularities, they can be classified in more detail by the triplet of variables ($a,\rho,P$) \cite{nojiri05,cattoen05,fernandez06,singh09}: Big Bang and Big Crunch ($a=0$, $\rho$ and curvature invariants diverge at a finite proper time); Big Rip or type I singularity ($a$, $\rho$, $P$ and curvature invariants diverge at a finite proper time); Sudden or type II singularity ($a$ and $\rho$ are finite while $P$ diverges); type III singularity ($a$ is finite while $\rho$ and $P$ diverge); type IV singularity ($a$, $\rho$, $P$ and the curvature invariants are finite while the curvature derivatives diverge). For the singularity discussed in this paper, $a$, $\rho$ and $P$ are all finite because of the regularity of $\phi$ and $\dot{\phi}$ at the singularity. But the Ricci curvature invariant, \begin{eqnarray} R=6\left(H^2+\frac{\ddot{a}}{a}\right), \end{eqnarray} diverges at the singularity. So the singularity here does not fall into any type of the above classification, but it is more similar to the type IV singularity than to the other types. \section{discussion and conclusion} \label{Sec.5}In the classical cosmology our universe has a big bang singularity, where all physical laws break down. LQC replaces this singularity with a quantum bounce, and the universe can evolve through the bounce point regularly. Considering the quantization ambiguity in LQC, the authors of \cite{Mielczarek08} proposed a new scheme. In addition to the quantum bounce, a novel quantum singularity emerges in this new scheme. The quantum singularity is different from the big bang singularity and is traversable, although the Hubble parameter diverges at this singularity. In this paper, we follow our previous work \cite{xiong1} to investigate this novel dynamics with the tachyon scalar field in the framework of the effective LQC. We analyze the evolution of the tachyon field with an exponential potential in the context of LQC; any other choice of potential would lead to similar results. In the high energy region (approaching the critical density $\rho_c$), LQC in the new quantization scheme greatly modifies the classical FRW cosmology and predicts a nonsingular bounce at the density $3\rho_c$, which lies in a different density region than in the $\bar{\mu}$ quantization scheme. Besides this quantum bounce, the new quantization scheme also introduces a quantum singularity, which emerges at the density $1.5\rho_c$. At this quantum singularity the Hubble parameter diverges. But this singularity is different from the classical one: the universe can evolve through it regularly.
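As a concrete numerical check of this curvature divergence (a sketch in illustrative units $G=\rho_c=V=1$; it assumes that the effective density and pressure listed above obey the standard flat FRW relations $H^2=\frac{8\pi}{3}\rho_{eff}$ and $\dot{H}=-4\pi(\rho_{eff}+P_{eff})$, so that $R=6(\dot{H}+2H^2)$):
\begin{verbatim}
import numpy as np

rho_c, V = 1.0, 1.0   # illustrative units, G = 1

def ricci_muMS(dphi):
    rho = V / np.sqrt(1.0 - dphi ** 2)      # tachyon energy density
    P = -V * np.sqrt(1.0 - dphi ** 2)       # tachyon pressure
    x = rho / rho_c
    B = 0.75 + 0.25 * (1.0 - 2.0 * x / 3.0) ** (-2)
    rho_eff = rho * (1.0 - x / 3.0) * B
    P_eff = ((P * (1.0 - 4.0 * x) - rho ** 2 / (3.0 * rho_c)) * B
             + (1.0 - x / 3.0) * (1.0 - 2.0 * x / 3.0) ** (-3)
             * (rho ** 2 / rho_c + rho * P / rho_c) / 3.0)
    H2 = 8.0 * np.pi / 3.0 * rho_eff
    Hdot = -4.0 * np.pi * (rho_eff + P_eff)
    return 6.0 * (Hdot + 2.0 * H2)

# rho_phi -> 3*rho_c/2 as dphi -> sqrt(5)/3 (for V = rho_c = 1)
dphi_star = np.sqrt(5.0) / 3.0
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, ricci_muMS(dphi_star - eps))  # |R| grows without bound
\end{verbatim}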
In contrast to the $\bar{\mu}$ scheme, the dynamics of the new quantization scheme deviates from the classical one even in the small energy density region (see Fig. \ref{fig1}). \acknowledgments It is a pleasure to thank our anonymous referee for many valuable comments. The work was supported by the National Natural Science Foundation of China (No. 10875012).
\section{Introduction \label{s.1}} General Relativity provides an excellent account of all known gravitational phenomena, such as planetary orbits, gravitational lensing and the dynamics of binary neutron stars. It also predicts the existence of black holes and gravitational waves, which has motivated intense efforts of physicists and astronomers to observe their presence and properties. To observe gravitational phenomena in the laboratory is much more difficult, as a result of the weakness of gravity as compared to other fields of force, in particular electromagnetism, but also strong nuclear interactions and even the weak interactions of quarks and leptons mediated by the massive vector bosons $(Z, W^{\pm})$. Gravitational experiments in the laboratory are mostly confined to measurements of Newton's constant and tests of the equivalence principle, although the gravitational redshift has also been established in terrestrial experiments and gravitational time-dilation is nowadays of practical importance for the accuracy of satellite-based global positioning measurements. The gravitational fields involved in these tests of General Relativity are almost always provided by large masses, such as that of the earth, the sun and other stars or massive compact objects. However, as Einstein's theory tells us that all forms of energy are a source of gravitational fields, it is of some interest to study the gravitational field associated with wave phenomena, such as electromagnetic waves, as well as certain related purely gravitational waves. Solutions of General Relativity and of Einstein-Maxwell theory describing these physical situations are known to exist \cite{brinkmann}-\cite{jwvh}. In this paper I study their properties and analyze the dynamics of massive and massless test particles in the presence of these fields, paying particular attention to the gravitational effects. \section{Waves in Einstein-Maxwell theory \label{s.2}} In the absence of massive particles and electric charges or currents, the combined theory of gravitational and electromagnetic fields is specified by the Einstein-Maxwell equations \begin{equation} \begin{array}{l} R_{\mu\nu} - \frac{1}{2}\, g_{\mu\nu} R = - 8\pi G\, T_{\mu\nu}, \\ \\ D^{\mu} F_{\mu\nu} = 0, \end{array} \label{2.1} \end{equation} where the energy-momentum tensor of the electromagnetic field is \begin{equation} T_{\mu\nu} = F_{\mu\lambda} F_{\nu}^{\;\;\lambda} - \frac{1}{4}\, g_{\mu\nu} F^2, \label{2.2} \end{equation} $F^2$ being the trace of the first term, and where we have chosen units such that $c = \varepsilon_0 = 1$. In addition to the equations (\ref{2.1}) there are Bianchi identities, implying for the Maxwell tensor $F_{\mu\nu}$ that it can be written in terms of a vector potential $A_{\mu}$: \begin{equation} F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}, \label{2.3} \end{equation} and for the Riemann tensor that it can be expressed in terms of the metric via a torsion-free symmetric connection. The Maxwell equations can be solved in empty Minkowski space-time in terms of plane waves. Such plane waves are characterized by a constant wave vector and electric and magnetic field strengths which are orthogonal to the wave vector and to each other, as in fig.\ 1.
\begin{center} \scalebox{0.5}{\hs{12}\includegraphics{emwave.pdf}} \vs{1} {\footnotesize{Fig.\ 1: Electric and magnetic field strength of a harmonic plane wave.}} \end{center} \noindent The properties of plane-wave solutions must be preserved in the full Einstein-Maxwell theory, in particular there should exist covariantly constant light-like wave vector fields \begin{equation} k^2 = g_{\mu\nu} k^{\mu} k^{\nu} = 0, \hs{2} D_{\mu} k_{\nu} = 0, \label{2.7} \end{equation} and electromagnetic vector potentials $A_{\mu}$ which describe waves traveling at the velocity of light, and which are locally orthogonal to the wave vector field $k^{\mu}$. Such solutions exist \cite{papapetrou} and can be cast in the following form. In $4$-dimensional space-time we introduce light-cone co-ordinates \begin{equation} u = t - z, \hs{1} v = t + z. \label{2.4} \end{equation} Then the vector potential, written as a 1-form, for waves traveling in the positive $z$-direction is \begin{equation} A = A_i(u) dx^i, \label{2.8} \end{equation} where $x^i$ are the co-ordinates in the 2-dimensional transverse plane, and the components of the vector potential can be expanded in plane waves: \begin{equation} A_i(u) = \int \frac{dk}{2\pi}\, \left( a_i(k) \sin ku + b_i(k) \cos ku \right). \label{2.9} \end{equation} The corresponding field strength $F = 2 dA$ has components \begin{equation} F = 2 F_{ui}\, du \wedge dx^i = 2 A^{\prime}_i(u)\, du \wedge dx^i , \label{2.10} \end{equation} from which it follows that the electric and magnetic fields are transverse, taking the values \begin{equation} E_i = - \varepsilon_{ij} B_j = F_{ui}(u) = A^{\prime}_i(u). \label{2.11} \end{equation} In flat space-time the energy-momentum tensor of this electromagnetic field reads \begin{equation} T_{\mu\nu}\, dx^{\mu} dx^{\nu} = T_{uu}\, du^2, \hs{2} T_{uu} = F_{ui} F_{u}^{\;\;i} = \frac{1}{2} \left( {\bf E}^2 + {\bf B}^2 \right). \label{2.12} \end{equation} In view of eq.\ (\ref{2.1}) the Ricci tensor is required to have the same form; this holds for space-times with a metric of the {\em pp}-wave type: \begin{equation} g_{\mu\nu} dx^{\mu} dx^{\nu} = - du dv - \Phi(u,x^i) du^2 + dx^{i\,2}. \label{2.13} \end{equation} The explicit components of the connection and the Riemann tensor are presented in the appendix. Here it suffices to notice, that the Ricci tensor is of the required type indeed: \begin{equation} R_{\mu\nu} dx^{\mu} dx^{\nu} = R_{uu} du^2, \hs{2} R_{uu} = -\frac{1}{2}\, \partial_i^2\, \Phi. \label{2.15} \end{equation} The metric (\ref{2.13}) admits a constant light-like Killing vector \begin{equation} K = k^{\mu} \partial_{\mu} = 2 k \partial_v, \label{2.14} \end{equation} signifying the translation invariance of all fields in the $v$-direction. Substitution of the Ricci tensor (\ref{2.15}) and the electromagnetic energy-momentum tensor (\ref{2.12}) in the Einstein equations leads to a single non-trivial field equation \begin{equation} \partial_i^2\Phi = 8 \pi G \left( {\bf E}^2 + {\bf B}^2 \right). \label{2.16} \end{equation} Moreover, the explicit form of the electromagnetic field strength tensor (\ref{2.10}) and the connection coefficients (\ref{a.1}) guarantees that the Maxwell equations \begin{equation} D^{\mu} F_{\mu\nu}= 0, \label{2.17} \end{equation} reduce to the same equations in Minkowski space, and therefore hold for the vector potentials (\ref{2.8}), (\ref{2.9}).
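Readers wishing to verify this structure can recompute the Ricci tensor symbolically. The short sketch below (an illustration added for convenience; the standard textbook curvature convention used here differs from that of the appendix by an overall sign, as the minus sign on the right-hand side of (\ref{2.1}) already suggests) confirms that $R_{uu}\propto\partial_i^2\Phi$ is the only nonvanishing component for the metric (\ref{2.13}):
\begin{verbatim}
import sympy as sp

u, v, x, y = sp.symbols('u v x y')
X = [u, v, x, y]
Phi = sp.Function('Phi')(u, x, y)

# pp-wave metric (2.13): ds^2 = -du dv - Phi du^2 + dx^2 + dy^2
g = sp.Matrix([[-Phi, -sp.Rational(1, 2), 0, 0],
               [-sp.Rational(1, 2), 0, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])
gi = g.inv()

# Christoffel symbols Gamma^m_{n r}
Gam = [[[sp.simplify(sum(gi[m, s] * (sp.diff(g[s, n], X[r])
                                     + sp.diff(g[s, r], X[n])
                                     - sp.diff(g[n, r], X[s]))
                         for s in range(4)) / 2)
         for r in range(4)] for n in range(4)] for m in range(4)]

def ricci(n, r):
    """R_{nr}, standard convention (opposite in sign to the appendix)."""
    return sp.simplify(sum(sp.diff(Gam[m][n][r], X[m])
                           - sp.diff(Gam[m][m][r], X[n])
                           + sum(Gam[m][m][s] * Gam[s][n][r]
                                 - Gam[m][n][s] * Gam[s][m][r]
                                 for s in range(4))
                           for m in range(4)))

print(ricci(0, 0))  # (Phi_xx + Phi_yy)/2: the transverse Laplacian of Phi
assert all(ricci(n, r) == 0
           for n in range(4) for r in range(4) if (n, r) != (0, 0))
\end{verbatim}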
Similar {\em pp}-wave solutions in the presence of non-gravitational fields can be constructed for scalar and Dirac fields \cite{skmh,jwvh}, Yang-Mills fields \cite{fuster-vh} and higher-rank antisymmetric tensors as in 10-D supergravity \cite{blau-et-al,maldacena-maoz,fuster}. Dimensional reduction of {\em pp}-waves has been used to construct explicit solutions of lower-dimensional non-relativistic field theories \cite{duval-horv-palla}. \section{{\em PP}-wave solutions \label{s.3}} As the electric and magnetic fields ${\bf E}(u)$, ${\bf B}(u)$ in eq.\ (\ref{2.16}) depend only on the light-cone variable $u$, the equation can be integrated to give the result \begin{equation} \Phi = 2 \pi G\, (x^2 + y^2) \left( {\bf E}^2 + {\bf B}^2 \right) + \Phi_0, \label{3.1} \end{equation} where $\Phi_0$ is a solution of the homogeneous equation \begin{equation} \partial_i^2 \Phi_0 = 0. \label{3.2} \end{equation} Trivial solutions of the homogeneous equation are represented by the linear expressions \begin{equation} \Phi_0(u,x^i) = \Phi_{flat}(u,x^i) = \alpha(u) + \alpha_i(u) x^i. \label{3.3} \end{equation} Since the Riemann tensor is composed only of second derivatives $\Phi_{,ij}$, these linear solutions by themselves describe a flat space-time with $R_{\mu\nu\kappa\lambda} = 0$. The metric only looks non-trivial because it describes Minkowski space as seen from an accelerated co-ordinate system. In general the number of non-trivial quadratic and higher solutions depends on the dimensionality of the space-time. In 4-$D$ space-time there are two linearly independent quadratic solutions: \begin{equation} \Phi_0(u,x^i) = \Phi_{gw}(u,x^i) = \kappa_+(u) \left( x^2 - y^2 \right) + 2 \kappa_{\times}(u) x y, \label{3.4} \end{equation} where $\kappa_{+,\times}(u)$ are the amplitudes of the two polarization modes. The Riemann tensor now has non-vanishing components \begin{equation} R_{uxux} = - R_{uyuy} = - \kappa_+(u), \hs{2} R_{uxuy} = R_{uyux} = - \kappa_{\times}(u). \label{3.5} \end{equation} If these amplitudes are well-behaved, the solutions represent non-singular gravitational-wave space-times. We can also infer that these modes have spin-2 behaviour; indeed, under a rotation around the $z$-axis represented by the co-ordinate transformation \begin{equation} \left( \begin{array}{c} x^{\prime} \\ y^{\prime} \end{array} \right) = \left( \begin{array}{cc} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right), \label{3.6} \end{equation} $\Phi_{gw}$ is invariant if we simultaneously perform a rotation between the amplitudes \begin{equation} \left( \begin{array}{c} \kappa_+^{\prime} \\ \kappa_{\times}^{\prime} \end{array} \right) = \left( \begin{array}{cc} \cos 2 \varphi & - \sin 2 \varphi \\ \sin 2 \varphi & \cos 2 \varphi \end{array} \right) \left( \begin{array}{c} \kappa_+ \\ \kappa_{\times} \end{array} \right). \label{3.7} \end{equation} Note that the form of the special solution (\ref{3.1}) implies, that with this rule the full metric is invariant under rotations in the transverse plane. For all integers $n > 2$ there also exist non-trivial solutions $\Phi_0$ constructed from linear combinations of $n$-th order monomials in $x^i$. They form a spin-$n$ representation of the transverse rotation group $SO(2)$. However, all such solutions give rise to singular curvature components at spatial infinity. 
Thus the spin-2 solutions (\ref{3.4}) seem to be the only globally {\em bona fide} free gravitational wave solutions of this kind, and we restrict the space of the {\em pp}-wave solutions of the Einstein-Maxwell equations to the line elements defined by the solution (\ref{3.1}) with $\Phi_0 = \Phi_{gw}$ as in (\ref{3.4}). \section{Geodesics \label{s.4}} Returning to the em-wave space-times (\ref{3.1}), the geodesics can be determined from the connection coefficients (\ref{a.1}), given by the components of the gradient of $\Phi$. As a result, the relevant equations reduce to those of a particle moving in a potential, as we now show \cite{duval-horv-palla,jwvh}. Consider time-like geodesics $X^{\mu}(\tau)$, parametrized by the proper time $\tau$: \begin{equation} d\tau^2 = dU dV + \Phi(U,X^i) dU^2 - dX^{i\,2}. \label{4.1} \end{equation} This choice of parameter immediately establishes the square of the 4-velocity as a constant of motion: \begin{equation} g_{\mu\nu}(X)\, \dot{X}^{\mu} \dot{X}^{\nu} = - \dot{U} \dot{V} - \Phi(U,X^i)\, \dot{U}^2 + \dot{X}^{i\,2} = - 1, \label{4.2} \end{equation} where the overdot denotes a proper-time derivative. This equation gives a first integral of motion for the light-cone co-ordinate $V(\tau)$ in terms of solutions for the other three co-ordinates. These have to be obtained from the geodesic equation \begin{equation} \ddot{X}^{\mu} + \Gamma_{\lambda\nu}^{\;\;\;\mu} \dot{X}^{\lambda} \dot{X}^{\nu} = 0. \label{4.2.1} \end{equation} The existence of the Killing vector (\ref{2.14}) implies a simple equation for the other light-cone co-ordinate $U(\tau)$: \begin{equation} \ddot{U} = 0 \hs{1} \Rightarrow \hs{1} \dot{U}(\tau) = \gamma = \mbox{constant}, \label{4.3} \end{equation} as there are no connection coefficients with contravariant index $\mu = u$. It follows, that $U$ can be used to parametrize geodesics, instead of $\tau$. Now eq.\ (\ref{4.2}) implies for the laboratory time co-ordinate $T = X^0$: \begin{equation} \frac{dT}{d\tau} = \sqrt{\frac{1 - \gamma^2 \Phi}{1 - {\bf v}^2}}, \label{4.4} \end{equation} where ${\bf v} = d {\bf X}/dT$ is the velocity in laboratory co-ordinates.\footnote{Note, that our notation here is not covariant: \[ {\bf v}^2 = \sum_{a=1}^3 v_a^2 \neq \sum_{a,b = 1}^3 g_{ab} v^a v^b. \]} Now, as \begin{equation} \frac{dU}{dT} = 1 - v_z, \label{4.5} \end{equation} it follows from eqs.\ (\ref{4.2}) and (\ref{4.4}) that $h$ defined by \begin{equation} h \equiv \frac{1 - {\bf v}^2}{(1 - v_z)^2} + \Phi = \frac{1}{\gamma^2}, \label{4.6} \end{equation} is a constant of motion, the gravitational equivalent of the total particle energy. In particular, for a particle initially at rest in a locally flat space-time one finds $h = \gamma = 1$. For light-like geodesics one can follow a similar procedure by introducing an affine parameter $\lambda$ such that the geodesic equation (\ref{4.2.1}) holds upon interpreting the overdot as a derivative w.r.t.\ $\lambda$, whilst the line element and the left-hand side of eq.\ (\ref{4.2}) are taken to vanish. It then follows, that \begin{equation} h = \frac{1 - {\bf v}^2}{(1 - v_z)^2} + \Phi = 0. \label{4.7} \end{equation} Observe in particular that, as ${\bf v}^2$ is not a covariant expression, in general ${\bf v}^2 \neq 1$, even for light. Next we turn to the transverse part of the motion.
Considering either time-like or light-like geodesics, the geodesic equation (\ref{4.2.1}) for the transverse co-ordinates $X^i$ takes the form \begin{equation} \ddot{X}^i = - \frac{\gamma^2}{2}\, \Phi_{,i} \hs{1} \Leftrightarrow \hs{1} \frac{\partial^2 X^i}{\partial U^2} + \frac{1}{2}\, \dd{\Phi}{X^i} = 0. \label{4.8} \end{equation} For a quadratic potential it follows that \begin{equation} \Phi = \kappa_{ij}(U)\, X^i X^j \hs{1} \Rightarrow \hs{1} \dd{^2 X^i}{U^2} + \kappa_{ij}(U)\, X^j = 0. \label{4.9} \end{equation} This is the equation for a parametric oscillator with real or imaginary frequencies, depending on the signs of the components $\kappa_{ij}(U)$. A special case is that of negative constant curvature; after a diagonalization of the coefficients $\kappa_{ij}$ this situation is characterized by (temporarily suspending the summation convention): \begin{equation} R_{uiuj} = - \kappa_{ij} \equiv - \mu_i^2\, \delta_{ij} \hs{1} \Rightarrow \hs{1} X^i(U) = X_0^i\, \cos \mu_i (U - U_0). \label{4.10} \end{equation} The remarkable aspect of this result is that the magnitude of the curvature (the gravitational field strength) determines the {\em frequency} of the geodesic motion, rather than its amplitude. This represents a gravitational analogue of the Josephson effect, where a constant voltage generates an oscillating current. \section{The gravitational field of a light wave \label{s.5}} In section \ref{s.2} we discussed general wave solutions of the Einstein-Maxwell theory. We now consider the special case of monochromatic waves. As our first example we take the light wave to be circularly polarized; the corresponding vector potential can be written as \begin{equation} {\bf A} = a \left( \cos ku, \sin ku, 0 \right) \hs{1} \Rightarrow \hs{1} \left\{ \begin{array}{l} {\bf E} = ka \left( - \sin ku, \cos ku, 0 \right), \\ \\ {\bf B} = ka \left( \cos ku, \sin ku, 0 \right). \end{array} \right. \label{5.1} \end{equation} As the electric and magnetic fields are $90^{\circ}$ out of phase, the energy density is constant, and \begin{equation} \Phi_{circ} = 2 \pi G \left( {\bf E}^2 + {\bf B}^2 \right) \left( x^2 + y^2 \right) = 4\pi G\, k^2 a^2 \left( x^2 + y^2 \right). \label{5.2} \end{equation} Thus the potential is of the quadratic type (\ref{4.9}), (\ref{4.10}), with \begin{equation} \mu_x^2 = \mu_y^2 = \mu^2 \equiv 4 \pi G\, k^2 a^2. \label{5.3} \end{equation} Numerically, we find in SI units \begin{equation} \mu = 1.3 \times 10^{-9}\, \frac{E}{E_c}\; \mbox{m$^{-1}$}, \hs{2} E_c = \frac{m_e^2}{e} = 1.3 \times 10^{18}\; \mbox{Vm$^{-1}$}. \label{5.4} \end{equation} Here $E_c$ is the critical field for electron-positron pair production. For the limiting value $E = E_c$ we find an angular frequency of $\omega = \mu c = 0.4$ rad/s. As a second example we take linearly polarized light, for which \begin{equation} {\bf A} = a \left( \cos ku, 0, 0 \right) \hs{1} \Rightarrow \hs{1} \left\{ \begin{array}{l} {\bf E} = ka \left( - \sin ku, 0, 0 \right), \\ \\ {\bf B} = ka \left( 0, \sin ku, 0 \right). \end{array} \right. \label{5.5} \end{equation} Then the potential takes the form \begin{equation} \Phi_{lin} = 4\pi G\, k^2 a^2\, \sin^2 ku \left( x^2 + y^2 \right). \label{5.6} \end{equation} Introducing the time variable $s = kU$, the transverse equations of motion become \begin{equation} \frac{d^2 X^i}{ds^2} + \nu^2 \left( 1 - \cos 2s \right) X^i = 0, \label{5.7} \end{equation} with \begin{equation} \nu^2 = 2\pi G\, a^2.
\label{5.8} \end{equation} This is a Mathieu equation, with Bloch-type periodic solutions \begin{equation} \begin{array}{l} X^i(s) = u(s) \cos qs + v(s) \sin qs, \\ \\ u(s+\pi) = u(s), \hs{1} v(s+\pi) = v(s). \end{array} \label{5.9} \end{equation} For very large $\nu^2$, with electromagnetic field intensities of the order of the Planck scale, the wave numbers $q$ become complex and the solutions exhibit parametric resonance. \section{Scattering by the gravitational wave field \label{s.6}} As a light-wave (\ref{2.8}), (\ref{2.9}) is accompanied by a gravitational field of the {\em pp}-type, and as gravity is a universal force, even electrically and magnetically neutral particles are scattered by a light-wave. Although this gravitational force acts on charged particles as well, their dynamics is generally dominated by the Lorentz force, depending on the ratio of charge to mass. In this section I discuss the scattering of neutral classical point particles by wave-like gravitational fields. The general solution to this scattering problem is provided by the geodesics discussed in sect.\ \ref{s.4}. However, a more generic scattering situation is specified by taking both the initial and final states of a particle to be states of inertial motion in flat Minkowski space-time, the state of motion being changed at intermediate times by the passage of a wave-like gravitational field of finite extent. For simplicity, let us consider a circularly polarized electromagnetic block wave, accompanied by a gravitational block wave as sketched in Fig.\ 2: \begin{center} \scalebox{0.75}{\includegraphics{blockwave.pdf}} \\ \vs{1} {\footnotesize Fig.\ 2: Curvature block wave.} \end{center} \begin{equation} \Phi = \kappa(u) \left( x^2 + y^2 \right), \hs{1} \kappa(u) = \left\{ \begin{array}{ll} 0, & u < 0; \\ \mu^2, & 0 \leq u \leq L; \\ 0, & u > L. \end{array} \right. \label{6.1} \end{equation} Now in the asymptotic regions the time-like geodesics are straight lines: \begin{equation} \begin{array}{ll} X^i(U) = X^i_0 + p^i U, & U < 0; \\ & \\ X^i(U) = \bar{X}^i_0 + \bar{p}^i U, & U > L. \end{array} \label{6.2} \end{equation} The interpolating solution of oscillating type (\ref{4.10}) must match these asymptotic solutions at $U = 0$ and $U = L$, such that both $X^i(U)$ and $X^{i\,\prime}(U)$ are continuous. By matching at $U = 0$ one gets \begin{equation} X^i(U) = X^i_0\, \cos \mu U + \frac{p^i}{\mu}\, \sin \mu U. \label{6.3} \end{equation} Then matching with the Minkowski solution for $U > L$, the following linear relations between the transverse position and velocity set-offs are found: \begin{equation} \left( \begin{array}{c} \bar{X}^i_0 \\ \\ \bar{p}^i \end{array} \right) = \left( \begin{array}{cc} \cos \mu L + \mu L \sin \mu L & L \left( \frac{\sin \mu L}{\mu L} - \cos \mu L \right) \\ & \\ - \mu \sin \mu L & \cos \mu L \end{array} \right) \left( \begin{array}{c} X^i_0 \\ \\ p^i \end{array} \right). \label{6.4} \end{equation} In particular, if a particle is initially at rest: ${\bf v} = 0$, hence $p_x = p_y = 0$ and $h = \gamma = 1$, then the final velocity $\bar{{\bf v}}$ is: \begin{equation} \begin{array}{l} \displaystyle{ \bar{v}_x = \frac{d\bar{X}}{d\bar{T}} = - \frac{\mu X_0 \sin \mu L}{1 + \alpha}, \hs{1} \bar{v}_y = \frac{d\bar{Y}}{d\bar{T}} = - \frac{\mu Y_0 \sin \mu L}{1 + \alpha}, }\\ \\ \displaystyle{ \bar{v}_z = \frac{d\bar{Z}}{d\bar{T}} = \frac{\alpha}{1 + \alpha}, } \end{array} \label{6.5} \end{equation} with \begin{equation} \alpha = \frac{\mu^2}{2} \left( X_0^2 + Y_0^2 \right) \sin^2 \mu L.
\label{6.6} \end{equation} Observe, that the particle will be at rest again in the final state if $\mu L = n \pi$, with $n$ integer, whereas the transverse velocity after scattering is maximal for $\mu L = (n + 1/2) \pi$. By the results (\ref{6.5}) the scattering angle is given by \begin{equation} \tan \psi = \frac{\sqrt{\bar{v}_x^2 + \bar{v}_y^2}}{\bar{v}_z} = \sqrt{\frac{2}{\alpha}}, \label{6.7} \end{equation} and therefore $\tan \psi$ is large for small $\alpha$, i.e.\ $\mu L \ll 1$. In contrast, for large $\alpha$ the transverse velocity vanishes: $\bar{v}_i \rightarrow 0$, whilst the velocity in the $z$-direction approaches the speed of light: $\bar{v}_z \rightarrow 1$, hence $\psi \rightarrow 0$. A similar analysis can be done for light-like geodesics, for which $h = 0$. If a massless particle, like a photon, initially travels in the $-z$ direction with velocity ${\bf v} = (0, 0, -1)$, and therefore $p_x = p_y = 0$, one finds for the final velocity vector $\bar{{\bf v}}$: \begin{equation} \begin{array}{l} \displaystyle{ \bar{v}_x = \frac{d\bar{X}}{d\bar{T}} = - \frac{2\mu X_0 \sin \mu L}{2\alpha + 1}, \hs{1} \bar{v}_y = \frac{d\bar{Y}}{d\bar{T}} = - \frac{2\mu Y_0 \sin \mu L}{2\alpha + 1}, }\\ \\ \displaystyle{ \bar{v}_z = \frac{d\bar{Z}}{d\bar{T}} = \frac{2\alpha - 1}{2\alpha + 1}. } \end{array} \label{6.8} \end{equation} The corresponding scattering angle for massless particles is \begin{equation} \tan \psi = \left| \frac{\sqrt{\bar{v}_x^2 + \bar{v}_y^2}}{\bar{v}_z} \right| = \frac{2 \sqrt{2 \alpha}}{1 - 2 \alpha}. \label{6.9} \end{equation} Observe, that for $\alpha = 1/2$ the velocity of massless particles is purely transverse, whilst for $\alpha > 1/2$ the sign of $\bar{v}_z$ reverses, and its value approaches $+ 1$ at large transverse distances. \section{Quantum fields in a {\em pp}-wave background \label{s.7}} Like classical particles, quantum fields too are affected by the presence of the gravitational field of a light wave, even in the absence of direct electromagnetic interactions. One can observe this in the behaviour of a scalar field in a {\em pp}-wave background, described by the metric (\ref{2.13}). The d'Alembert operator then takes the form \begin{equation} \Box_{pp} = \frac{1}{\sqrt{-g}} \partial_{\mu} \sqrt{-g} g^{\mu\nu} \partial_{\nu} = - 4 \partial_u \partial_v + 4 \Phi(u;x^i)\, \partial_v^2 + \Del_{trans}, \label{7.1} \end{equation} where $\Del_{trans} = \sum_i \partial_i^2$ is the Laplace operator in the transverse plane. The Klein-Gordon equation \begin{equation} \left( - \Box_{pp} + m^2 \right) \Psi = 0, \label{7.2} \end{equation} can be transformed by a Fourier transformation \begin{equation} \Psi(u,v; x^i) = \frac{1}{2\pi}\, \int ds dq\, \psi(s,q; x^i)\, e^{-i(su + qv)}. \label{7.3} \end{equation} The equation for $\psi(s,q;x^i)$ then becomes \begin{equation} \left( - \Del_{trans} + 4 q^2 \Phi(-i \partial_s; x^i) - 4 qs + m^2 \right) \psi = 0. \label{7.4} \end{equation} In the special case of quadratic $\Phi$ with constant curvature, as in eqs.\ (\ref{4.9}) and (\ref{4.10}): \begin{equation} \Phi = \mu^2_x x^2 + \mu_y^2 y^2, \label{7.5} \end{equation} the equation can be solved in closed form, in terms of Hermite polynomials $H_n(x)$.
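That the transverse problem reduces to harmonic oscillators rests on the standard Hermite-function identity $(-\partial_\xi^2+\xi^2)\,H_n(\xi)e^{-\xi^2/2}=(2n+1)\,H_n(\xi)e^{-\xi^2/2}$; a quick symbolic check (an illustrative aside, with unit frequency):
\begin{verbatim}
import sympy as sp

xi = sp.symbols('xi')
for n in range(5):
    psi = sp.hermite(n, xi) * sp.exp(-xi**2 / 2)    # H_n(xi) * exp(-xi^2/2)
    lhs = -sp.diff(psi, xi, 2) + xi**2 * psi
    assert sp.simplify(lhs - (2*n + 1) * psi) == 0  # eigenvalue 2n + 1
print("oscillator eigenfunctions verified for n = 0..4")
\end{verbatim}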
More generally, introducing the ladder operators \begin{equation} {\bf a}_i = \frac{1}{2\sqrt{|q|\mu_i}} \left( \partial_i + 2|q|\mu_i\, x_i \right), \hs{1} {\bf a}_i^{\dagger} = \frac{1}{2\sqrt{|q|\mu_i}} \left( - \partial_i + 2|q|\mu_i\, x_i\right), \label{7.6} \end{equation} with commutation relations \begin{equation} \left[ {\bf a}_i, {\bf a}_j^{\dagger} \right] = \delta_{ij}, \label{7.7} \end{equation} the Klein-Gordon equation (\ref{7.4}) becomes \begin{equation} \left[ \sum_{i = (x,y)} 4 |q| \mu_i \left( {\bf a}_i^{\dagger} {\bf a}_i + \frac{1}{2} \right) - 4qs + m^2 \right] \psi = 0. \label{7.8} \end{equation} Now write \begin{equation} E = s + q, \hs{1} p = s - q \hs{1} \Rightarrow \hs{1} s u + q v = Et - pz; \label{7.9} \end{equation} then the integer eigenvalues of the occupation number operator ${\bf n}_i = {\bf a}_i^{\dagger} {\bf a}_i$ turn the Klein-Gordon equation into an equation for the spectrum of energy eigenvalues for the scalar field: \begin{equation} \left( E \mp \sigma \right)^2 = \left( p \mp \sigma \right)^2 +m^2, \hs{1} \sigma(n_i) = \sum_i \mu_i \left( n_i + \frac{1}{2} \right) \geq \frac{1}{2}\, \sum_i \mu_i, \label{7.10} \end{equation} where the sign depends on whether $E > p$ (upper sign), or $E < p$ (lower sign). The levels $\sigma$ are quantized, the $n_i$ being non-negative integers. Taking into account (\ref{7.8}), the general solution of the Klein-Gordon equation can be written in explicit form as \begin{equation} \begin{array}{l} \displaystyle{ \Psi(u,v;x^i) = \frac{1}{2\pi}\, \sum_{n_i = 0}^{\infty} \int_{-\infty}^{\infty} ds \int_{-\infty}^{\infty} dq\, \delta(4sq - 4 \sigma |q| - m^2)\, \chi_{n_i}(q)\, e^{- i su - i vq} }\\ \\ \hs{7} \displaystyle{ \times\, \prod_{j = x,y} \left[ \left( \frac{2\mu_j|q|}{\pi} \right)^{1/4} \frac{H_{n_j}(\xi_j)}{\sqrt{2^{n_j} n_j!}}\, e^{-\xi_j^2 /2} \right], } \end{array} \label{7.11} \end{equation} where \begin{equation} \xi_i = \sqrt{2 \mu_i |q|}\, x_i. \label{7.12.0} \end{equation} Performing the integral over $s$ and taking $\Psi$ to be real, this takes the form \begin{equation} \begin{array}{l} \displaystyle{ \Psi(u,v;x^i) = \frac{1}{2\pi}\, \sum_{n_i = 0}^{\infty} \int_0^{\infty} \frac{dq}{q}\, \left( a_{n_i}(q)\, e^{-iqv - i \left( \frac{m^2}{4q} + \sigma \right) u} + a^*_{n_i}(q)\, e^{iqv + i \left( \frac{m^2}{4q} + \sigma \right) u} \right) }\\ \\ \hs{7} \displaystyle{ \times\, \prod_{j = x,y} \left[ \left( \frac{2\mu_j|q|}{\pi} \right)^{1/4} \frac{H_{n_j}(\xi_j)}{\sqrt{2^{n_j} n_j!}}\, e^{-\xi_j^2 /2} \right], } \end{array} \label{7.12} \end{equation} with the reality condition resulting in \begin{equation} a_{n_i}(q) = \frac{1}{4}\, \chi_{n_i}(q), \hs{1} a_{n_i}^*(q) = - \frac{1}{4}\, \chi_{n_i}(-q), \hs{2} q > 0. \label{7.13} \end{equation} The Fock space of the quantum scalar field is then generated by taking the Fourier coefficients to be operators with commutation relation \begin{equation} \left[ a_{n_i}(q), a^*_{m_i}(k) \right] = \pi |q|\, \delta_{n_x,m_x} \delta_{n_y,m_y}\, \delta (q - k). \label{7.14} \end{equation} Equivalently, the space-time fields themselves then obey the equal light-cone time commutation relation \begin{equation} \left[ \Psi(u,v,x^i), \Psi(u,v^{\prime},x^{\prime\,i}) \right] = \frac{i}{4}\, \epsilon(v - v^{\prime})\, \delta^2\left( x^i - x^{\prime\, i} \right), \label{7.15} \end{equation} with $\epsilon(x)$ the sign function \begin{equation} \epsilon(x) = \left\{ \begin{array}{cl} +1, & x > 0; \\ -1, & x < 0. \end{array} \right.
\label{7.16} \end{equation} From expression (\ref{7.12}) we read off that the lowest single-particle energy is $E = m + \sigma(0)$, for $p = \sigma(0)$. \section{Discussion \label{s.8}} In this paper I have presented the properties of the gravitational field associated with a light wave. The effects of this gravitational field are extremely small, but qualitatively and conceptually very interesting. So far I have described the light-wave as a classical Maxwell field; however, ultimately one would like to consider a quantum description of light and its associated gravitational effects. To see what kind of issues are at stake, let me momentarily restore SI units, and summarize the equations for the electromagnetic energy density and flux, and the corresponding space-time curvature: \begin{equation} \Phi = c {\cal E} = \frac{\varepsilon_0 c}{2} \left( E^2 + B^2 \right) = - \frac{c^5}{8\pi G}\, R_{uu}. \label{8.1} \end{equation} Now according to the quantum theory as first developed by Planck and Einstein, we can also think of the wave in terms of photons of energy $\hbar \omega$. Then the flux is expressed in terms of photons per unit of time and area as \begin{equation} \Phi = \hbar \omega\, \frac{dN}{dt dA}. \label{8.2} \end{equation} Equating the two expressions above, we get the relationship \begin{equation} \frac{1}{\omega}\, \frac{dN}{dt} = -\frac{R_{uu}}{k^2}\, \frac{dA}{l_{Pl}^2}, \hs{2} l_{Pl}^2 = \frac{8 \pi G \hbar}{c^3}. \label{8.3} \end{equation} Both sides of this equation represent dimensionless numbers, with the right-hand side actually a product of two dimensionless quantities: the ratio of Ricci curvature $R_{uu}$ and wavenumber squared $k^2 = 4\pi^2/\lambda^2$, and the area $dA$ in Planck units. At the microscopic level at least some, and possibly all, of these quantities have to exhibit quantized behaviour.
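As a final numerical aside, the estimates quoted in eq.\ (\ref{5.4}) and the Planck area in (\ref{8.3}) can be checked with a few lines of arithmetic (a sketch with rounded constants; it assumes the SI restoration $\mu^2 = (4\pi G\varepsilon_0/c^4)E^2$ of eq.\ (\ref{5.3}) and the Schwinger value $E_c = m_e^2c^3/(e\hbar)$, which indeed reproduce the quoted numbers):
\begin{verbatim}
import math

G, c, hbar, eps0 = 6.674e-11, 2.998e8, 1.055e-34, 8.854e-12
m_e, e = 9.109e-31, 1.602e-19

E_c = m_e**2 * c**3 / (e * hbar)       # critical (Schwinger) field
mu = math.sqrt(4 * math.pi * G * eps0 / c**4) * E_c
l_Pl2 = 8 * math.pi * G * hbar / c**3  # Planck area of eq. (8.3)

print(E_c)         # ~1.3e18 V/m, as in (5.4)
print(mu, mu * c)  # ~1.3e-9 m^-1 and omega = mu*c ~ 0.4 rad/s
print(l_Pl2)       # ~6.6e-69 m^2
\end{verbatim}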
\title{\bf SU(2) WZNW model at higher genera from gauge field functional integral} \author{\ \\Krzysztof Gaw\c{e}dzki \\ C.N.R.S., I.H.E.S., Bures-sur-Yvette, 91440, France} \date{ } \maketitle \vskip 1cm \begin{abstract} We compute the gauge field functional integral giving the scalar product of the $SU(2)$ Chern-Simons theory states on a Riemann surface of genus $>1$. The result allows us to express the higher genera partition functions of the $SU(2)$ WZNW conformal field theory by explicit finite-dimensional integrals. Our calculation may also shed new light on the functional integral of the Liouville theory. \end{abstract} \vskip 0.9cm \hspace{2.8cm}{\it Dedicated to Ludwig Dmitrievich Faddeev on his 60$^{th}$ birthday} \vskip 1.7cm \section{Introduction} Since the 1967 seminal paper of Faddeev and Popov \cite{FP}, the functional integral has become the main tool in the treatment of quantum gauge theories. The main breakthrough achieved by that paper was the realization that gauge invariance is not an obstruction but an aid in the treatment of quantum gauge fields. Subsequently, this idea has revealed its full power and gauge invariance has become the cornerstone of modern theoretical physics. \vskip 0.5cm In the present note, we shall discuss how the same idea allows an explicit solution of the WZNW model of conformal field theory on an arbitrary two-dimensional surface. To make things simpler, we shall limit the discussion to the partition functions of the $SU(2)$ WZNW model on closed compact Riemann surfaces $\Sigma$ of genus $>1$. The correlation functions at genus zero were discussed along similar lines in \cite{Quadr} for the $SU(2)$ case and in \cite{FalGaw0} for general compact groups. Higher genera, however, are more difficult and took some time to understand. The aim of this note is to present the main points of the argument, leaving the technical details to the forthcoming publication \cite{Ja}. The complete work is a small {\it tour de force}, a fact that was hard to hide even in a softened exposition which may only envy the work \cite{FP} its striking simplicity. \vskip 0.5cm Our approach is based on the relation between the WZNW partition functions and the Schr\"{o}dinger picture states of the Chern-Simons (CS) theory \cite{WittenJones}\cite{Quadr}. The partition function of the level $k$ ($k=1,2,\dots$) $SU(2)$ WZNW model in an external two-dimensional $su(2)$ gauge field $A\equiv A_zdz+A_{\bar z}d\bar z$ is formally given by the functional integral \begin{eqnarray} Z(A)\,=\,\int{\rm e}^{-k\,S(g,A)}\ Dg \end{eqnarray} over $g:\Sigma\rightarrow SU(2)$. On the other hand, the $SU(2)$ CS states are holomorphic functionals $\Psi$ of the gauge field $A_{\bar z}$ such that,
for $h:\Sigma\rightarrow SL(2,\mathbb{C})$, \begin{eqnarray} {\rm e}^{-k\,S(h,A_{\bar z}d\bar z)}\ \Psi({}^{h^{-1}}\hspace{-0.17cm}A_{\bar z})\,=\,\Psi(A_{\bar z})\ , \label{CoV} \end{eqnarray} where ${}^{h^{-1}}\hspace{-0.17cm}A_{\bar z}\equiv h^{-1}A_{\bar z}h+h^{-1}\partial_{\bar z}h$. Above, $S(\,\cdot\,,\,\cdot\,)$ denotes the WZNW action in the external gauge field \cite{WittenBos}. The CS states form a finite-dimensional space with the dimension expressed by the Verlinde formula \cite{Verl}. Their scalar product is formally given by the functional integral \begin{eqnarray} \|\Psi\|^2\,=\,\int|\Psi(A_{\bar z})|^2\ {\rm e}^{\,{k\over\pi}\int_{\Sigma}{\rm tr}\,A_zA_{\bar z}\,d^2z}\ DA \label{FI} \end{eqnarray} (with the convention that $A_z=-A_{\bar z}^\dagger$ for an $su(2)$ gauge field $A$). The partition function of the WZNW model is \cite{Quadr} \begin{eqnarray} Z(A)\,=\,\sum\limits_{a,b}H^{ab}\,\Psi_a(A_{\bar z})\,\overline{\Psi_b(A_{\bar z})}\ \,{\rm e}^{\,{k\over\pi}\int_{\Sigma}{\rm tr}\,A_zA_{\bar z}\,d^2z}\ , \label{BF} \end{eqnarray} where $(H^{ab})$ is the matrix inverse to that of the scalar products $(\Psi_a\,,\Psi_b)$, for an arbitrary basis of the CS states. Hence, in order to find the WZNW partition functions, we should be able to construct CS states and to compute their scalar product. The calculation of the latter is another exercise in functional integration over gauge fields and may be dealt with similarly to the original geometric argument of Faddeev and Popov. The main point is to use the action $A_{\bar z}\mapsto {}^{h^{-1}}\hspace{-0.17cm}A_{\bar z}$ of the group ${\cal G}^{\mathbb{C}}$ of complex gauge transformations in order to reduce the functional integral over the space ${\cal A}$ of gauge fields to the orbit space ${\cal A}/{\cal G}^{\mathbb{C}}$, which is, in fact, finite dimensional. The main aspect distinguishing our situation from that considered by Faddeev and Popov is that neither the integrated function nor the (formal) integration measure $DA$ is invariant under the complex gauge transformations. Nevertheless, both transform in a controllable way. This difference has to be properly accounted for. \vskip 0.5cm The first step in the reduction of the functional integral to the orbit space is to choose a slice ${\cal A}/{\cal G}^{\mathbb{C}}\ni n\,\smash{\mathop{\mapsto}\limits^s}\,A(n)\in{\cal A}$ which cuts every orbit only once (generically) and to change variables by parametrizing \begin{eqnarray} A_{\bar z}\,=\,{}^{h^{-1}}\hspace{-0.16cm}A_{\bar z}(n)\ .
\label{ChV} \end{eqnarray} The Jacobian of the change of variables ${\partial(A)\over\partial(h,n)}$ then plays a role similar to that of the ghost determinant in the approach of \cite{FP}, where the slice was fixed by constraining functions. The next step is the calculation of the $h$-integral. This step is not trivial, in contrast to the gauge invariant situation where the gauge group integral factors out as an overall constant. In fact, in the case at hand, the integration over $h$, more involved at genera $>1$ than for $g=0$ and $g=1$, leads to a result which may look surprising at first sight: it gives not a function but a singular distribution on the orbit space ${\cal A}/{\cal G}^{\mathbb{C}}$. Its treatment may shed some light on the more complicated case of the Liouville theory functional integral. After the integration over $h$ is done, one is left with an explicit finite-dimensional distributional integral, essentially over the orbit space ${\cal A}/{\cal G}^{\mathbb{C}}$, which may be further reduced to the standard integral over the support of the distribution. \vskip 0.5 cm The paper is organized as follows. Section 2 is devoted to the description of the slice $s:{\cal A}/{\cal G}^{\mathbb{C}}\rightarrow {\cal A}$. The use of $s$ also allows a more explicit characterization of the CS states. In Section 3, we perform the change of variables (\ref{ChV}) and study its Jacobian. In Section 4, we describe the calculation of the $h$-integral over the ${\cal G}^{\mathbb{C}}$-orbits in ${\cal A}$, which turns out to be iteratively Gaussian. Finally, in Section 5 we discuss the resulting finite-dimensional integral representation for the CS scalar product. \vskip 1cm \section{The slice} We would like to describe a surface $\{A(n)\}$ inside ${\cal A}$ which cuts each orbit of ${\cal G}^{\mathbb{C}}$ once (or a fixed number of times). The orbit space ${\cal A}/{\cal G}^{\mathbb{C}}$ (after removal of a small subset of bad orbits) is an object well studied in the mathematical literature under the name of the moduli space of (stable) holomorphic $SL(2,\mathbb{C})$-bundles \cite{Sesh}\cite{NaraRama}. It has complex dimension $3(g-1)$, where $g$ denotes the genus of the underlying Riemann surface. This should then also be the dimension of our slice of ${\cal A}$. Let $L_0$ be a spin bundle over $\Sigma$; $L_0$ is a holomorphic line bundle with local sections $(dz)^{1/2}$. $L_0^{-1}$ will denote its dual bundle.
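As an aside (a standard argument recalled here for the reader's convenience, not part of the original construction), the dimension $3(g-1)$ just quoted follows from a Riemann-Roch count: the tangent space to the moduli space at a stable bundle $E$ is $H^1(\Sigma,{\rm ad}\,E)$, where the traceless endomorphism bundle ${\rm ad}\,E$ has rank $3$ and degree $0$, and stability forces $H^0(\Sigma,{\rm ad}\,E)=0$, so that \[ \dim H^1(\Sigma,{\rm ad}\,E)\,=\,-\chi({\rm ad}\,E)\,=\,-\big(\deg({\rm ad}\,E)+{\rm rk}({\rm ad}\,E)(1-g)\big)\,=\,3(g-1)\ . \] The same count applied to the line bundle $L_x^{-2}$ introduced below, of degree $-2(g-1)$ and hence without nonzero holomorphic sections, gives $\dim H^1(L_x^{-2})=3(g-1)$.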
$L_0^{-1}\oplus L_0$ is a rank two vector bundle over $\Sigma$ which, as a smooth bundle, is isomorphic to the trivial bundle $\Sigma\times\mathbb{C}^2$. We shall fix a smooth isomorphism \begin{eqnarray} U:\,L_0^{-1}\oplus L_0\rightarrow\Sigma\times\mathbb{C}^2\ . \end{eqnarray} We may assume that $U$ preserves the length of the vectors calculated in $L_0^{-1}\oplus L_0$ with the help of a fixed Riemannian metric $\gamma=\gamma_{z\bar z}\,dz\,d\bar z$ on $\Sigma$. $U$ may be used to transport the gauge fields $A_{\bar z}$ to the bundle $L_0^{-1}\oplus L_0$. More exactly, the relation \begin{eqnarray} B_{\bar z}\,=\,UA_{\bar z}U^{-1}+U\partial_{\bar z}U^{-1} \label{rel} \end{eqnarray} establishes a one-to-one correspondence between $A_{\bar z}d\bar z$ and \begin{eqnarray} B_{\bar z}d\bar z\,=\left(\matrix{-a&b\cr c&a}\right)\ , \end{eqnarray} where $a$ is a scalar 01-form on $\Sigma$ ($a\in\wedge^{01}$), $b$ is an $L_0^{-2}$-valued one ($b\in\wedge^{01}(L_0^{-2})$) and $c\in\wedge^{01}(L_0^2)$. \vskip 0.5cm Let us present the surface $\Sigma$ as a polygon with sides $a_1,\,b_1,\,a_1^{-1},\,b_1^{-1},\,\dots,\,a_g,\,b_g,\,a_g^{-1},\,b_g^{-1}$ given by the basic cycles. We shall fix $x_0\in\Sigma$ in the corner where $b_g^{-1}$ and $a_1$ meet. Let $\omega^i$, $i=1,\dots,g$, be the standard basis of holomorphic forms with $\smallint_{a_i}\omega^j=\delta^{ij}$, $\smallint_{b_i}\omega^j=\tau^{ij}$. We shall take a slice of ${\cal A}$ formed by the gauge fields $A^{x,b}$ corresponding to \begin{eqnarray} B_{\bar z}d\bar z\,=\left(\matrix{-a^x&b\cr 0&a^x}\right)\equiv\,B^{x,b}_{\bar z}\ , \label{B} \end{eqnarray} where \begin{eqnarray} a^x\,=\,\pi\,(\smallint_{x_0}^{x}\omega^i)\left({1\over{\rm Im}\,\tau}\right)_{ij}\bar\omega^j\,\equiv\,\pi\,(\smallint_{x_0}^{x}\omega)\,({\rm Im}\,\tau)^{-1}\bar\omega\ .
\end{eqnarray} Let $L_x$ denote the holomorphic line bundle obtained from $L_0$ by replacing its $\bar\partial$ operator by $\bar\partial+a^x\wedge\equiv\bar\partial_{L_x}$. We shall restrict the forms $b$ further by taking one representative in each class of ${\wedge^{01}(L_0^{-2})\over(\bar\partial-2a^x\wedge)\,({\cal C}^{\infty}(L^{-2}_0))}\cong H^1(L_x^{-2})$. This may be done by imposing the condition \begin{eqnarray} (\nabla+2\,\overline{a^x}\wedge)\,b\,=\,0 \label{b} \end{eqnarray} with $\nabla$ standing for the holomorphic covariant derivative of the sections of $L_0^{-2}=T^{10}\Sigma$. Finally, only one $b$ in each complex ray of solutions of (\ref{b}) should be taken since \begin{eqnarray} \left(\matrix{\lambda&0\cr 0&\lambda^{-1}}\right)B^{x,b}_{\bar z}\left(\matrix{\lambda^{-1}&0\cr 0&\lambda}\right)\,=\,B^{x,\,\lambda^2 b} \end{eqnarray} and, consequently, $A^{x,b}$ and $A^{x,\,\lambda b}$ are gauge related. It may be shown that the union of the ${\cal G}^{\mathbb{C}}$-orbits passing through a slice of ${\cal A}$ constructed this way is dense in ${\cal A}$ and that the generic ${\cal G}^{\mathbb{C}}$-orbit cuts the slice a fixed number of times \cite{Ja}. Here, we shall content ourselves with the count of dimensions. By the Riemann-Roch theorem, the dimension of $H^1(L_x^{-2})$ is $3(g-1)$. The projectivization subtracts one dimension which is restored by varying $x\in\Sigma$. \vskip 0.5cm The CS states $\Psi$ admit an explicit representation when restricted to the slice described above. Let us put \begin{eqnarray} \psi(x,b)\,=\,{\rm e}^{-\pi\,(\int_{x_0}^{x}\omega)\,({\rm Im}\,\tau)^{-1}(\int_{x_0}^{x}\omega)}\ {\rm e}^{\,{k\over\pi}\int_{\Sigma}{\rm tr}\,A_{z}^0A_{\bar z}^{x,b}\,d^2z}\ \Psi(A_{\bar z}^{x,b})\ , \end{eqnarray} where $A^0_z\equiv U\nabla_z U^{-1}$.
$\psi$ is an analytic function of $x$ and $b$ depending only on the class of $b$ modulo $(\bar\partial-2a^x\wedge)({\cal C}^{\infty}(L_0^{-2}))$: \begin{eqnarray} \psi(x,\,b+(\bar\partial-2a^x\wedge)\,w)\,=\,\psi(x,b)\ . \label{1} \end{eqnarray} Under constant complex rescalings of $b$, \begin{eqnarray} \psi(x,\lambda b)\,=\,\lambda^{k(g-1)}\,\psi(x,b)\ . \label{2} \end{eqnarray} Under the action of the fundamental group $\pi_1(\Sigma)$, \begin{eqnarray} \psi(a_jx,\,c_{a_j}^2 b)\,=\,\nu(a_j)^k\,\psi(x,b)\ , \label{3}\\ \psi(b_jx,\,c_{b_j}^2 b)\,=\,\nu(b_j)^k\,{\rm e}^{-2\pi ik\tau^{jj}-4\pi ik\int_{x_0}^{x}\omega^j}\,\psi(x,b)\ , \label{4} \end{eqnarray} where, for each $p\in\pi_1(\Sigma)$, $c_p$ is a non-vanishing (single-valued) function on $\Sigma$, \begin{eqnarray} c_p(y)\,=\,{\rm e}^{\,2\pi i\,{\rm Im}\,(\,(\int_{p}\omega)\,({\rm Im}\,\tau)^{-1}(\int_{x_0}^{y}\bar\omega)\,)} \end{eqnarray} and $\nu$ is a character of $\pi_1(\Sigma)$, \begin{eqnarray} \nu(p)\,=\,{\rm e}^{-{i\over 2\pi}\int_{\Sigma}R\,\ln c_p}\ \prod\limits_{j=1}^g\,W_{a_j}^{-{1\over\pi}\int_{b_j}c_p^{-1}dc_p}\,W_{b_j}^{\,{1\over\pi}\int_{a_j}c_p^{-1}dc_p}\ . \end{eqnarray} In the last formula, the integral $\int_{\Sigma}R\,\ln c_p$ is over the polygon representing the surface, $R$ is the metric curvature form of $\Sigma$ normalized so that $\smallint_{\Sigma}R=4\pi i(g-1)$, $\ln c_p(y)=\smallint_{x_0}^{y}c_p^{-1}dc_p$ and $W_p$ stands for the holonomy of $L_0$ around the closed curve $p$. One may identify functions $\psi$ satisfying relations (\ref{1}), (\ref{2}), (\ref{3}) and (\ref{4}) with sections of the $k^{\rm th}$ power of Quillen's determinant bundle \cite{Quill} of the holomorphic family $(\bar\partial+B^{x,b})$ of operators in $L_0^{-1}\oplus L_0$.
Holomorphic $\psi$'s form a finite dimensional space and the assignment $\Psi\mapsto\psi$ realizes the space of CS states as its, in general proper, subspace.
\vskip 1cm
\nsection{\hspace{-.6cm}.\ \ The change of variables}
\vskip 0.5cm
We apply the change of variables (\ref{ChV}) with $A_{\bar z}(n)\equiv A_{\bar z}^{x,b}$ in the functional integral (\ref{FI}) giving the scalar product of CS states. The Jacobian is
\begin{eqnarray}
{\partial\,(A)\over\partial\,(h,n)}\ =\ {\rm det}(D_{\bar z}(h,n)^\dagger D_{\bar z}(h,n))\ \,{\rm det}\big(({\partial A_{\bar z}\over\partial n_\alpha})^{\perp},\,({\partial A_{\bar z}\over\partial n_\beta})^{\perp}\big)\ ,
\end{eqnarray}
where $D_{\bar z}(h,n)=h^{-1}(\partial_{\bar z}+[A_{\bar z}(n)\,,\ \cdot\ ])\, h$ and $({\partial A_{\bar z}\over\partial n_\alpha})^{\perp}$ denotes the component of $h^{-1}{\partial A_{\bar z}(n)\over\partial n_\alpha}\, h$ perpendicular to the image of $D_{\bar z}(h,n)$. The first determinant should be regularized (which specific regularization is used does not matter as long as it is insensitive to unitary conjugations of the operator, like the zeta-function regularization). The $h$-dependence of the regularized determinants is given by the global version of the non-abelian chiral anomaly formula \cite{PolyWieg}
\begin{eqnarray}
{\rm det}(D_{\bar z}(h,n)^\dagger D_{\bar z}(h,n))\ \,{\rm det}\big(({\partial A_{\bar z}\over\partial n_\alpha})^{\perp},\,({\partial A_{\bar z}\over\partial n_\beta})^{\perp}\big)\hspace{4cm}\cr
=\ {\rm e}^{4\, S(hh^\dagger,\, A(n))}\ \,{\rm det}(D_{\bar z}(n)^\dagger D_{\bar z}(n))\ \,{\rm det}\big(({\partial A_{\bar z}(n)\over\partial n_\alpha})^{\perp},\,({\partial A_{\bar z}(n)\over\partial n_\beta})^{\perp}\big)\ ,
\label{ChA}
\end{eqnarray}
where $D_{\bar z}(n)\equiv D_{\bar z}(1,n)$.
The covariance property (\ref{CoV}) implies that
\begin{eqnarray}
|\Psi(A_{\bar z})|^2\ {\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A_z A_{\bar z}\, d^2z}\ =\ |\Psi(A_{\bar z}(n))|^2\ {\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A_z(n)\, A_{\bar z}(n)\, d^2z}\ {\rm e}^{k\, S(hh^\dagger,\, A(n))}\ .
\end{eqnarray}
Hence, after the change of variables, the functional integral (\ref{FI}) becomes
\begin{eqnarray}
\|\Psi\|^2\ =\ \int|\Psi(A_{\bar z}(n))|^2\ {\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A_z(n)\, A_{\bar z}(n)\, d^2z}\ {\rm e}^{(k+4)\, S(hh^\dagger,\, A(n))}\ \,{\rm det}(D_{\bar z}(n)^\dagger D_{\bar z}(n))\ \ \ \cr
\cdot\ \,{\rm det}\big(({\partial A_{\bar z}(n)\over\partial n_\alpha})^{\perp},\,({\partial A_{\bar z}(n)\over\partial n_\beta})^{\perp}\big)\ \, D(hh^\dagger)\ \prod\limits_{\alpha}d^2n_\alpha\ ,\ \
\label{FI1}
\end{eqnarray}
where we have used the $SU(2)$ gauge invariance of the integral to factor out the integration over the $SU(2)$-valued gauge transformations, as in the Faddeev-Popov procedure. There remains, however, the integral over the field $hh^\dagger$ effectively taking values in the hyperbolic space ${\cal H}\equiv SL(2,\NC)/SU(2)$. $D(hh^\dagger)$ should be viewed as a formal product of the $SL(2,\NC)$-invariant measures on ${\cal H}$.
\vskip 0.5cm
Working out the explicit form of the various terms under the integral in eq.\ (\ref{FI1}) for $A(n)\equiv A^{x,b}$ is a rather straightforward matter.
One obtains
\begin{eqnarray}
|\Psi(A_{\bar z}^{x,b})|^2\ {\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A_z^{x,b} A_{\bar z}^{x,b}\, d^2z}\ =\ {\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A^0_z(A^0_z)^\dagger\, d^2z}\ {\rm e}^{-4\pi k\,(\int_{x_0}^{x}{\rm Im}\,\omega)\,({\rm Im}\,\tau)^{-1}(\int_{x_0}^{x}{\rm Im}\,\omega)}\ \cr
\cdot\ {\rm e}^{{k\over 2\pi i}\int_{\Sigma}\langle b,\,\wedge b\rangle}\ \,|\psi(x,b)|^2\ .\
\label{psi2}
\end{eqnarray}
(Here and below, $\langle\,\cdot\ ,\,\cdot\,\rangle$ denotes the hermitian structure induced by the Riemannian metric of $\Sigma$ on the powers of its canonical bundle.) The determinant of the operator $D_{\bar z}(n)^\dagger D_{\bar z}(n)$, unitarily equivalent to the operator $(\partial_{\bar z}+[B^{x,b}_{\bar z}\,,\ \cdot\ ])^\dagger(\partial_{\bar z}+[B^{x,b}_{\bar z}\,,\ \cdot\ ])$ acting on smooth endomorphisms of $L_0^{-1}\oplus L_0$, may be found by performing the Gaussian integration
\begin{eqnarray}
{\rm det}(\bar D_{\bar z}(n)^\dagger\,\bar D_{\bar z}(n))\ =\ \int{\rm e}^{-i\int_{\Sigma}(\, 2\,(\overline{\bar\partial X+Zb})\wedge(\bar\partial X+Zb)\ +\ \langle\,(\bar\partial-2a^x)Y-2Xb\,,\ \wedge\,((\bar\partial-2a^x)Y-2Xb)\,\rangle\,)}\cr
\cdot\ {\rm e}^{\langle\,(\bar\partial+2a^x)Z\,,\ \wedge\,(\bar\partial-2a^x)Z\,\rangle}\ DY\ DX\ DZ\ \
\end{eqnarray}
over the anticommuting ghost fields: the scalar $X$, the $L_0^{-2}$-valued $Y$ and the $L_0^2$-valued $Z$. Formally, the computation may be done iteratively, first over $Y$, then over $X$ and, at the end, over $Z$. One obtains this way the product of determinants
\begin{eqnarray}
{\det}(\bar\partial_{L_x^{-2}}^\dagger\,\bar\partial_{L_x^{-2}})\ \,{\rm det}'(-\Delta)\ \,{\det}'(\bar\partial_{L_x^2}^\dagger\,\bar\partial_{L_x^2})\ ,
\end{eqnarray}
where $\Delta$ is the scalar Laplacian on $\Sigma$, decorated with zero mode terms. Some care should be taken since the regularization (e.g.\ by the zeta-function prescription) of the big determinant requires, besides a similar regularization of the individual determinants, also an additional term which may be found by demanding that the result transforms as in (\ref{ChA}) under the complex gauge transformations.
\vskip 0.5cm
The final expression for the measure
\begin{eqnarray}
d\mu(n)\ \equiv\ {\rm det}(D_{\bar z}(n)^\dagger D_{\bar z}(n))\ \,{\rm det}\big(({\partial A_{\bar z}(n)\over\partial n_\alpha})^{\perp},\,({\partial A_{\bar z}(n)\over\partial n_\beta})^{\perp}\big)\ \prod\limits_{\alpha}d^2n_\alpha
\label{dmu}
\end{eqnarray}
on the slice becomes rather simple. Fix $x\in\Sigma$. Let $(b^\alpha)_{\alpha=1}^{3(g-1)}$ be an orthonormal basis (in the natural $L^2$ scalar product) of the $L_0^{-2}$-valued 01-forms $b$ solving eq.\ (\ref{b}). Similarly, let $(\kappa_r)_{r=1}^{g-1}$ be an orthonormal basis of the sections of $L^{-2}_0$ annihilated by $\nabla-2\overline{a^x}\wedge$. Set $M_{ir}^\alpha\equiv\int_{\Sigma}\omega^i\langle\kappa_r,\, b^\alpha\rangle$. Denote by $(z^\alpha)$ the complex coordinates w.r.t.\ the basis $(b^\alpha)$ on the space of $b$ satisfying eq.\ (\ref{b}). One may use $x\in\Sigma$ and the homogeneous coordinates $z^\alpha$ to parametrize the slice of ${\cal A}$ and
\begin{eqnarray}
d\mu(x,b)\ =\ {\rm const.}\ \,{{\rm area}(\Sigma)\over{\rm det}({\rm Im}\,\tau)}\ \,{\det}(\bar\partial_{L_x^{-2}}^\dagger\,\bar\partial_{L_x^{-2}})\ \,{\rm det}'(-\Delta)\ \,{\det}'(\bar\partial_{L_x^2}^\dagger\,\bar\partial_{L_x^2})\ \,{\rm e}^{{2\over\pi i}\int_{\Sigma}\langle b,\,\wedge b\rangle}\ \ \cr
\cdot\ |\,\epsilon_{\alpha_1,\dots,\alpha_{3(g-1)}}\, z^{\alpha_1}\, dz^{\alpha_2}\wedge\dots\wedge dz^{\alpha_{3(g-1)}}\,|^2\ \,|\sum\limits_{j=1}^g{\rm det}(\,\sum_\alpha M_{ir}^\alpha z^\alpha\,)_{i\not=j}\ \omega^j(x)\,|^2\ .\
\label{dmu1}
\end{eqnarray}
Notice that $d\mu(n)$ contains as a factor the natural measure on the projectivization of the space of solutions of eq.\ (\ref{b}).
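Let us also record a simple consistency check (a count, not a proof): the slice is parametrized by $x\in\Sigma$ together with the ray $[z^\alpha]\in P\NC^{3g-4}$, so that
\begin{eqnarray}
\dim_\NC({\rm slice})\ =\ 1\ +\ \big(3(g-1)-1\big)\ =\ 3g-3\ ,
\end{eqnarray}
which coincides with the dimension of the moduli space of (stable) holomorphic $SL(2,\NC)$-bundles over $\Sigma$, as it should if the generic ${\cal G}^\NC$-orbit meets the slice a finite number of times.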
\vskip 1cm
\nsection{\hspace{-.6cm}.\ \ Integral over the gauge orbits}
\vskip 0.5cm
It remains to compute the functional integral
\begin{eqnarray}
Z_{{\cal H}}(A^{x,b})\ \equiv\ \int{\rm e}^{(k+4)\, S(hh^\dagger,\, A^{x,b})}\ D(hh^\dagger)
\label{HFI}
\end{eqnarray}
giving the genus $g$ partition function of a (nonunitary) WZNW model with fields taking values in the hyperbolic space ${\cal H}$, first considered in \cite{GK}. The calculation of the functional integral (\ref{HFI}) is the crucial step of the argument. The fields $hh^\dagger$ may be uniquely parametrized by real functions $\varphi$ and sections $w$ of $L_0^{-2}=T^{10}\Sigma$ by writing
\begin{eqnarray}
hh^\dagger\ =\ U\left(\matrix{1&{{\rm e}^\varphi w}\cr0&1}\right)\left(\matrix{{{\rm e}^\varphi}&0\cr0&{{\rm e}^{-\varphi}}}\right)\left(\matrix{1&0\cr{{\rm e}^\varphi w^\dagger}&1}\right)U^{-1}
\label{hh}
\end{eqnarray}
where $w^\dagger$ is the section of $L_0^2$ obtained by contracting the vector field $\bar w$ with the Riemannian metric. Rewritten in terms of $\varphi$ and $w$, the functional integral (\ref{HFI}) becomes
\begin{eqnarray}
Z_{{\cal H}}(A^{x,b})&=&{\rm e}^{-{k+4\over 2\pi i}\int_{\Sigma}\langle b\,,\,\wedge\, b\rangle}\ \ \hspace{6.5cm}\cr
&\cdot&\int{\rm e}^{{k+4\over 2\pi i}\int_{\Sigma}[\,-\varphi\,(\partial\bar\partial\varphi+R\,)\ +\ \langle\,{\rm e}^{-\varphi}b+(\bar\partial+(\bar\partial\varphi))\, w\,,\ \wedge\,({\rm e}^{-\varphi}b+(\bar\partial+(\bar\partial\varphi))\, w)\,\rangle\,]}\ Dw\ D\varphi\ ,\ \ \ \
\label{HFIE}
\end{eqnarray}
where $\bar\partial\equiv\bar\partial_{L_x^2}$ when acting on $w$.
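The $w$-integration below is an instance of the standard (formal) Gaussian identity: for a fixed form $v$ and an elliptic operator $D$ (zero modes, if any, excluded from the determinant),
\begin{eqnarray}
\int{\rm e}^{-c\,\|v+Dw\|^2}\ Dw\ =\ {\rm const}.\ \,{\rm det}(D^\dagger D)^{-1}\ {\rm e}^{-c\,\|Pv\|^2}\ ,
\end{eqnarray}
where $P$ is the orthogonal projection onto ${\rm ker}\, D^\dagger=({\rm im}\, D)^\perp$: a shift of $w$ removes the component of $v$ lying in the image of $D$, leaving exactly the classical term $\|Pv\|^2$.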
The $w$-integral is Gaussian and may be easily performed:
\begin{eqnarray}
{\cal I}_w&\equiv&\int{\rm e}^{{k+4\over 2\pi i}\int_{\Sigma}\langle\,{\rm e}^{-\varphi}b+(\bar\partial+(\bar\partial\varphi))\, w\,,\ \wedge\,({\rm e}^{-\varphi}b+(\bar\partial+(\bar\partial\varphi))\, w)\,\rangle}\ Dw\hspace{2cm}\cr
&=&{\rm e}^{{k+4\over 2\pi i}\int_{\Sigma}\langle\, P_\varphi({\rm e}^{-\varphi}b)\,,\ \wedge\, P_\varphi({\rm e}^{-\varphi}b)\,\rangle}\ \,{\rm det}\left((\bar\partial_{L_x^2}+(\bar\partial\varphi))^\dagger\,(\bar\partial_{L_x^2}+(\bar\partial\varphi))\right)^{-1}\ ,
\label{HH0}
\end{eqnarray}
where $P_{\varphi}$ denotes the orthogonal projection on the kernel of $(\bar\partial_{L_x^2}+(\bar\partial\varphi))^\dagger$. Since the latter is spanned by the vectors ${\rm e}^\varphi b^\alpha$,
\begin{eqnarray}
i\int_{\Sigma}\langle\, P_\varphi({\rm e}^{-\varphi}b)\,,\ \wedge\, P_\varphi({\rm e}^{-\varphi}b)\,\rangle\ \equiv\ \|P_\varphi({\rm e}^{-\varphi}b)\|^2\ \cr
=\ \overline{(b^\alpha\,,\, b)}\,(H_\varphi^{-1})_{\alpha\beta}\,(b^\beta\,,\, b)\ =\ \bar z^\alpha\,(H_\varphi^{-1})_{\alpha\beta}\, z^\beta\ ,
\end{eqnarray}
where $H_\varphi$ is the matrix of scalar products $({\rm e}^{\varphi}b^\alpha\,,\,{\rm e}^\varphi b^\beta)$. The factor ${\rm e}^{-{k+4\over 2\pi}\|P_\varphi({\rm e}^{-\varphi}b)\|^2}$ is the classical value of the $w$-integral. One may rewrite it as a finite-dimensional integral
\begin{eqnarray}
{\rm e}^{-{k+4\over 2\pi}\|P_\varphi({\rm e}^{-\varphi}b)\|^2}\ =\ {\rm const}.\ \,{\rm det}(H_\varphi)\,\int{\rm e}^{-{2\pi\over k+4}\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta\ +\ i\,\bar c_\alpha z^\alpha\ +\ i\, c_\alpha\bar z^\alpha}\ \prod\limits_\alpha d^2c_\alpha\ .
\label{HH1}
\end{eqnarray}
By the global version of the abelian chiral anomaly formula,
\begin{eqnarray}
{\rm det}(H_\varphi)\ \,{\rm det}\left((\bar\partial_{L^{-2}_x}+(\bar\partial\varphi))^\dagger\,(\bar\partial_{L^{-2}_x}+(\bar\partial\varphi))\right)^{-1}\hspace{4.7cm}\cr
=\ {\rm e}^{-{1\over\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ +\ {3\over 2\pi i}\int_{\Sigma}\varphi\, R}\ \,{\rm det}(\,\bar\partial_{L^{-2}_x}^\dagger\,\bar\partial_{L^{-2}_x}\,)^{-1}
\label{HH2}
\end{eqnarray}
(recall that $H_0$ is the unit matrix). Gathering eqs.\ (\ref{HH0}), (\ref{HH1}) and (\ref{HH2}), one obtains
\begin{eqnarray}
{\cal I}_w\ =\ {\rm const}.\ \,{\rm det}(\,\bar\partial_{L^{-2}_x}^\dagger\,\bar\partial_{L^{-2}_x}\,)^{-1}\ \,{\rm e}^{-{1\over\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ +\ {3\over 2\pi i}\int_{\Sigma}\varphi\, R}\ \ \cr
\cdot\ \int{\rm e}^{-{2\pi\over k+4}\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta\ +\ i\,\bar c_\alpha z^\alpha\ +\ i\, c_\alpha\bar z^\alpha}\ \prod\limits_\alpha d^2c_\alpha\ .
\end{eqnarray}
Note that the right hand side, which, together with the $\varphi$-terms left over in (\ref{HFIE}), has to be integrated over $\varphi$, contains a Liouville-type term
\begin{eqnarray}
{\rm e}^{-{2\pi\over k+4}\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta}\ =\ {\rm e}^{-{2\pi i\over k+4}\int_{\Sigma}{\rm e}^{2\varphi}\,\langle c_\alpha b^\alpha,\, c_\beta b^\beta\rangle}
\end{eqnarray}
notorious for causing problems in the functional integration. Indeed, with the $w$-integral done, the $\varphi$-integral takes the form
\begin{eqnarray}
\int{\rm e}^{{k+2\over 2\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ -\ {k+1\over 2\pi i}\int_{\Sigma}\varphi\, R\ -\ {2\pi\over k+4}\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta}\ D\varphi\ \equiv\ {\cal I}_\varphi
\end{eqnarray}
which, unlike at low genera, is of the Liouville theory type, not a Gaussian one. A possible approach to such an integral is to try to get rid of the Liouville term by integrating out the zero mode $\varphi_0\equiv({\rm area}(\Sigma))^{-{1\over2}}\int_{\Sigma}\varphi\, da$ of $\varphi$ ($da$ denotes the metric volume measure on $\Sigma$).
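The mechanism behind the zero mode integration is the elementary identity (formal for $M>0$, with $\Gamma(-M)$ understood by analytic continuation): substituting $s={\rm e}^{2u}$,
\begin{eqnarray}
\int_{-\infty}^{\infty}{\rm e}^{-2uM}\,{\rm e}^{-\lambda\,{\rm e}^{2u}}\, du\ =\ {1\over2}\int_0^\infty s^{-M-1}\,{\rm e}^{-\lambda s}\, ds\ =\ {1\over2}\,\Gamma(-M)\,\lambda^M\ ,
\end{eqnarray}
applied below with $\lambda={2\pi\over k+4}\,\bar c_\alpha(H_\varphi)^{\alpha\beta}c_\beta$; this is the source of both the divergent factor $\Gamma(-M)$ and the power $(\bar c_\alpha(H_\varphi)^{\alpha\beta}c_\beta)^M$.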
Integrating out the zero mode was tried in the Liouville theory in \cite{GTW} and, supplemented with rather poorly understood formal tricks, has led in \cite{GL} to the functional integral calculation of three-point functions for the minimal models coupled to gravity. Multiplying ${\cal I}_\varphi$ by $1=({\rm area})^{1\over2}\int\delta(\varphi_0-u\,({\rm area})^{1\over2})\, du$, changing the order of integration, shifting $\varphi_0$ to $\varphi_0+u$ and setting $M\equiv(k+1)(g-1)$, one obtains
\begin{eqnarray}
{\cal I}_{\varphi}\ =\ ({\rm area})^{1/2}\int{\rm e}^{-2uM}\ {\rm e}^{{k+2\over 2\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ -\ {k+1\over 2\pi i}\int_{\Sigma}\varphi\, R\ -\ {2\pi\over k+4}\,{\rm e}^{2u}\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta}\, du\ \delta(\varphi_0)\ D\varphi\hspace{0.8cm}\cr
=\ {1\over2}\,({\rm area})^{1/2}\,\Gamma(-M)\,({2\pi\over k+4})^{M}\int{\rm e}^{{k+2\over 2\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ -\ {k+1\over 2\pi i}\int_{\Sigma}\varphi\, R}\ (\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta\,)^{M}\ \delta(\varphi_0)\ D\varphi\ .\hspace{0.5cm}\
\label{GL}
\end{eqnarray}
Hence the integration over the zero mode of $\varphi$ diverges but may be easily (multiplicatively) renormalized by removing the overall divergent factor $\Gamma(-M)$. Now, the $c$-integral is easy to perform:
\begin{eqnarray}
\int(\,\bar c_\alpha\,(H_\varphi)^{\alpha\beta}\, c_\beta\,)^{M}\ {\rm e}^{i\,\bar c_\alpha z^\alpha\,+\,i\, c_\alpha\bar z^\alpha}\,\prod\limits_\alpha d^2c_\alpha\hspace{5cm}\cr
=\ (2\pi)^{6(g-1)}\,(-(H_\varphi)^{\alpha\beta}\,\partial_{z^\alpha}\partial_{\bar z^\beta})^{M}\,\prod\limits_\alpha\delta(z^\alpha)\ .
\end{eqnarray}
Gathering the above results, we obtain the following ``Coulomb gas representation'' for the higher genus partition function of the ${\cal H}$-valued WZNW model:
\vskip 3cm
\begin{eqnarray}
\int{\rm e}^{(k+4)\, S(hh^\dagger,\, A^{x,b})}\ D(hh^\dagger)\ =\ {\rm const}.\ \,({\rm area})^{1/2}\ {\rm det}(\,\bar\partial_{L^{-2}_x}^\dagger\,\bar\partial_{L^{-2}_x})^{-1}\ \,{\rm e}^{-{k+4\over 2\pi i}\int_{\Sigma}\langle b\,,\,\wedge\, b\rangle}\ \ \cr
\cdot\,\left(\int{\rm e}^{{k+2\over 2\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ -\ {k+1\over 2\pi i}\int_{\Sigma}\varphi\, R}\ (-(H_\varphi)^{\alpha\beta}\,\partial_{z^\alpha}\partial_{\bar z^\beta})^{M}\ \delta(\varphi_0)\ D\varphi\right)\prod\limits_\alpha\delta(z^\alpha)\ \ \cr\cr
=\ {\rm const}.\ \,({\rm area})^{1/2}\ {\rm det}(\,\bar\partial_{L^{-2}_x}^\dagger\,\bar\partial_{L^{-2}_x})^{-1}\ \,{\rm e}^{-{k+4\over 2\pi i}\int_{\Sigma}\langle b\,,\,\wedge\, b\rangle}\,\bigg(\prod\limits_m{1\over i}\,\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}}\prod\limits_\alpha\delta(z^\alpha)\bigg)\ \ \cr
\cdot\ \int\left(\int{\rm e}^{{k+2\over 2\pi i}\int_{\Sigma}\partial\varphi\wedge\bar\partial\varphi\ -\ {k+1\over 2\pi i}\int_{\Sigma}\varphi\, R\ +\ 2\sum_{m}\varphi(x_m)}\ \delta(\varphi_0)\ D\varphi\right)\prod\limits_m\langle b^{\alpha_m},\, b^{\beta_m}\rangle(x_m)\ ,\
\label{PFN}
\end{eqnarray}
where $m$ runs from $1$ to $M$. The $\varphi$ integral is now purely Gaussian and easily doable, provided that we Wick order the ``screening charge'' insertions ${\rm e}^{2\varphi(x_m)}$ (this is, again, a multiplicative renormalization).
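The Gaussian $\varphi$-integration is then done with the standard source formula. If $\langle\varphi(x)\varphi(y)\rangle=-{2\pi\over k+2}\, G(x,y)$ denotes the propagator of the quadratic form in (\ref{PFN}) (a normalization we infer here from the result quoted below, so it should be read as an assumption),
\begin{eqnarray}
\Big\langle\prod\limits_m{\rm e}^{2\varphi(x_m)}\Big\rangle\ =\ {\rm e}^{\,2\sum_{m_1,m_2}\langle\varphi(x_{m_1})\varphi(x_{m_2})\rangle}\ =\ \prod\limits_{m_1\not=m_2}{\rm e}^{-{4\pi\over k+2}\, G(x_{m_1},\, x_{m_2})}\ \prod\limits_m{\rm e}^{-{4\pi\over k+2}\, G(x_m,\, x_m)}\ ,
\end{eqnarray}
with the divergent coinciding-point terms $G(x_m,x_m)$ replaced, upon Wick ordering, by $:G(x_m,x_m):$.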
In the end, we obtain
\begin{eqnarray}
Z_{{\cal H}}(A^{x,b})\ =\ {\rm const}.\hspace{9.3cm}\cr\cr
\cdot\ \,{\rm det}(\,\bar\partial_{L^{-2}_x}^\dagger\,\bar\partial_{L^{-2}_x})^{-1}\left({{\rm det}'(-\Delta)\over{\rm area}(\Sigma)}\right)^{-1/2}{\rm e}^{-{k+4\over 2\pi i}\int_{\Sigma}\langle b\,,\,\wedge\, b\rangle}\ \bigg(\prod\limits_m{1\over i}\,\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}}\prod\limits_\alpha\delta(z^\alpha)\bigg)\ \ \cr
\cdot\ \int\bigg(\prod\limits_{m_1\not=m_2}{\rm e}^{-{4\pi\over k+2}\, G(x_{m_1},\, x_{m_2})}\bigg)\,\bigg(\prod\limits_m{\rm e}^{-{4\pi\over k+2}\,:\, G(x_m,\, x_m)\,:}\ \langle b^{\alpha_m},\, b^{\beta_m}\rangle(x_m)\bigg)\ ,
\label{HYPF}
\end{eqnarray}
where $G(\,\cdot\ ,\,\cdot\,)$ is the Green function of the scalar Laplacian $\Delta$ on $\Sigma$ chosen so that $\int_{\Sigma}G(\,\cdot\ ,y)\, R(y)=0$, and $:G(y,y):\ \equiv\lim\limits_{\epsilon\rightarrow 0}\,(\, G(y,y')-{1\over 2\pi}\,\ln\epsilon\,)$ where $\epsilon=d(y,y')$ is the distance between $y$ and $y'$.
\vskip 0.5cm
Formula (\ref{HYPF}) reduces the functional integral over $hh^\dagger$ to a finite dimensional integral over the positions $x_m\in\Sigma$ of the $M$ screening charges. The integrand is a smooth function except for ${\cal O}(d(x_{m_1},\, x_{m_2})^{-{4\over k+2}})$ singularities at coinciding points. Power counting shows that the integral converges for $g=2$ but that for higher genera it diverges unless special combinations of the forms $\langle b^{\alpha_m},\, b^{\beta_m}\rangle(x_m)$ are integrated. Another feature of the right hand side of eq.\ (\ref{HYPF}) is even more surprising in a candidate for a partition function: its dependence on the external field $A^{x,b}$ is not functional but distributional!
The entire dependence on $b$ resides in the term $\prod\limits_m\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}}\prod\limits_\alpha\delta(z^\alpha)$ (recall that $z^\alpha\equiv(b^\alpha\,,\, b)$). This fact is not so astonishing since the partition function of the ${\cal H}$-valued WZNW model may be expected, by formal arguments similar to the ones used in \cite{Wittenfact}, to be the hermitian square of a holomorphic section of a negative power of the determinant bundle. But there are no such global sections, only distributional solutions of the corresponding Ward identities. The right hand side of (\ref{HYPF}) is one of them.
\vskip 1cm
\nsection{\hspace{-.6cm}.\ \ The scalar product formula}
\vskip 0.5cm
In view of the results (\ref{dmu1}) giving the integration measure on the slice and (\ref{HYPF}) computing the integral along the ${\cal G}^\NC$-orbits, the functional integral expression (\ref{FI1}) for the scalar product of the CS states reduces to
\begin{eqnarray}
\|\Psi\|^2\ =\ {\rm const}.\ \,{\rm det}({\rm Im}\,\tau)^{-1}\left({{\rm det}'(-\Delta)\over{\rm area}(\Sigma)}\right)^{1/2}{\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A^0_z(A^0_z)^\dagger\, d^2z}\hspace{3cm}\cr\cr
\cdot\ \int{\rm e}^{-4\pi k\,(\int_{x_0}^{x}{\rm Im}\,\omega)\,({\rm Im}\,\tau)^{-1}(\int_{x_0}^{x}{\rm Im}\,\omega)}\ \,|\psi(x,\, b)|^2\ |\sum\limits_{j=1}^g{\rm det}(\sum_\alpha M_{ir}^\alpha z^\alpha\,)_{i\not=j}\ \omega^j(x)\,|^2\ \hspace{0.5cm}\ \cr
\cdot\ \,{\rm det}'(\,\bar\partial_{L^{2}_x}^\dagger\,\bar\partial_{L^{2}_x})\,\bigg(\prod\limits_m{1\over i}\,\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}}\prod\limits_\alpha\delta(z^\alpha)\bigg)\ \,|\,\epsilon_{\alpha_1,\dots,\,\alpha_{3(g-1)}}\, z^{\alpha_1}dz^{\alpha_2}\wedge\dots\wedge dz^{\alpha_{3(g-1)}}\,|^2\ \ \cr
\cdot\,\bigg(\prod\limits_{m_1\not=m_2}{\rm e}^{-{4\pi\over k+2}\, G(x_{m_1},\, x_{m_2})}\bigg)\,\bigg(\prod\limits_m{\rm e}^{-{4\pi\over k+2}\,:\, G(x_m,\, x_m)\,:}\,\langle b^{\alpha_m},\, b^{\beta_m}\rangle(x_m)\bigg)\ .\ \
\end{eqnarray}
The integral is, for fixed $x\in\Sigma$, over the $(3g-4)$-dimensional projective space with homogeneous coordinates $(z^\alpha)$ and over the Cartesian product of $M\equiv(k+1)(g-1)$ copies of $\Sigma$ (the positions $x_m$ of the screening charges). Finally, one should integrate over $x\in\Sigma$.
\vskip 0.5cm
The $(z^\alpha)$-integral should be interpreted as
\begin{eqnarray}
\int\limits_{\NC^{3(g-1)}}|P(z)|^2\ \bigg(\prod\limits_m{1\over i}\,\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}}\prod\limits_\alpha\delta(z^\alpha)\bigg)\ \, d^{6(g-1)}z\ =\ \prod\limits_m({1\over i}\,\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}})\Big|_{z=0}\,|P(z)|^2
\label{delt}
\end{eqnarray}
where $P(z)=\psi(x,\, b)\,\sum\limits_{j=1}^g{\rm det}(\sum_{\alpha}M_{ir}^\alpha z^\alpha\,)_{i\not=j}\,\omega^j(x)$ is a homogeneous polynomial in $(z^\alpha)$ of degree $M$. Formally, the latter integral differs from the one involving the volume form on $P\NC^{3g-4}$ by an infinite constant, which may be interpreted as the factor $\Gamma(-M)$ which appeared in the integral over the zero mode of the field $\varphi$. With the $z$-integration given by eq.\ (\ref{delt}), one obtains the following formula for the scalar product of CS states:
\begin{eqnarray}
\|\Psi\|^2\ =\ {\rm const}.\ \,{\rm det}({\rm Im}\,\tau)^{-1}\left({{\rm det}'(-\Delta)\over{\rm area}(\Sigma)}\right)^{1/2}{\rm e}^{{k\over\pi}\int_{\Sigma}{\rm tr}\, A^0_z(A^0_z)^\dagger\, d^2z}\hspace{1.18cm}\cr
\cdot\ \int\prod\limits_m({1\over i}\,\partial_{z^{\alpha_m}}\partial_{\bar z^{\beta_m}})\Big|_{z=0}\bigg(|\psi(x,\, b)|^2\ |\sum\limits_{j=1}^g{\rm det}(\sum_{\alpha}M_{ir}^\alpha z^\alpha\,)_{i\not=j}\ \omega^j(x)\,|^2\bigg)\ \ \cr
\cdot\,\bigg(\prod\limits_{m_1\not=m_2}{\rm e}^{-{4\pi\over k+2}\, G(x_{m_1},\, x_{m_2})}\bigg)\,\bigg(\prod\limits_m{\rm e}^{-{4\pi\over k+2}\,:\, G(x_m,\, x_m)\,:}\,\langle b^{\alpha_m},\, b^{\beta_m}\rangle(x_m)\bigg)\ \ \cr
\cdot\ {\rm e}^{-4\pi k\,(\int_{x_0}^{x}{\rm Im}\,\omega)\,({\rm Im}\,\tau)^{-1}(\int_{x_0}^{x}{\rm Im}\,\omega)}\ \,{\rm det}'(\,\bar\partial_{L^{2}_x}^\dagger\,\bar\partial_{L^{2}_x})\ ,
\label{final}
\end{eqnarray}
where the numerical constant depends on the genus $g$ and on the level $k$. The integration in (\ref{final}) is over $x_m$, $m=1,\dots,(k+1)(g-1)$, and over $x$, all in $\Sigma$. It is not difficult to check that under the Weyl rescalings $\gamma\mapsto{\rm e}^{\sigma}\gamma$ of the Riemannian metric on $\Sigma$, the right hand side of (\ref{final}) (with the zeta-function regularized determinants) changes by the factor ${\rm e}^{{k\over 8\pi i(k+2)}\int_{\Sigma}({1\over 2}\,\partial\sigma\wedge\bar\partial\sigma+\sigma R)}$. This produces the correct value of the Virasoro central charge of the WZNW partition functions given by eq.\ (\ref{BF}).
\vskip 0.5cm
The arguments which led to eq.\ (\ref{final}) were clearly formal, and the treatment of the Liouville integral might have looked particularly suspicious. Fortunately, one may do the calculation in a different, more satisfying manner. If the $z$-integral over $P\NC^{3g-4}$ is done just after the $w$-integration and before the one over $\varphi$, then the final result is exactly as above but no infinite constants (apart from the Wick ordering ones) appear in the intermediate steps. This way, it is the convergent integration over (part of) the modular degrees of freedom, rather than the divergent $\varphi_0$ integral, which removes the cumbersome Liouville-type terms from the effective action for $\varphi$. This is an important lesson to learn from the above calculation. It is plausible that similar arguments may be used to substantiate the Goulian-Li trick \cite{GL} in the gravity case.
\vskip 0.5cm
As in the genus zero case discussed in \cite{Quadr}, the natural conjecture is that the integral on the right hand side converges if and only if the function $\psi$ defines a global non-singular CS state $\Psi$. It is clear from the form (\ref{final}) of the scalar product that finiteness of the screening charge integral in (\ref{final}) imposes, in general, conditions on the Taylor coefficients of $\psi(x,\, b)$ at $b=0$ if $g>2$. We shall postpone the study of these ``fusion rule conditions'' to a future work. The case $g=2$ is especially accessible since there exists a simple global picture of the moduli space of $SL(2,\NC)$-bundles (it is the projectivization of the 4-dimensional space of degree 2 theta-functions)\footnote{I thank O. Garcia-Prada for explaining this work to me} \cite{NaraRama} and of the space of CS states (homogeneous polynomials of order $k$ on the same space).
\vskip 0.3cm
If our conjecture is true, then formula (\ref{final}) defines a hermitian structure on the holomorphic vector bundle with the fibers given by the spaces of the CS states and the base by the moduli space of complex curves. Such a hermitian structure induces a holomorphic hermitian connection. This connection should coincide with the generalization to higher genera of the Knizhnik-Zamolodchikov connection studied in \cite{Denis}\cite{Kar}\cite{AxWitt}. The latter has been constructed in \cite{Hitch} in geometric terms, and it is challenging to find an interpretation of eq.\ (\ref{final}) in terms of the moduli space geometry.
\vskip 2cm
\section{Conclusion} \vspace{-8pt} In this paper, we take a step towards the problem of finding the optimal KD scheme for a given pair of student network and learned teacher network on a vision task. Specifically, we set up a search space by building pathways between the two networks and assigning a stochastic distillation process along each pathway. We propose a meta-learning framework, \texttt{DistPro}, to learn these processes efficiently and find effective ones to perform KD with intermediate features. We demonstrate its benefits on image classification and on dense prediction tasks such as image segmentation and depth estimation. We hope our method can inspire the field of KD to further expand its scope and its cooperation with other techniques such as NAS and hyper-parameter tuning. \clearpage \bibliographystyle{splncs04} \section{Introduction} \label{sec:intro} \vspace{-8pt} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/overview_comp_v2.pdf} \vspace{-18pt} \caption{Comparison of distillation methods for the learning process between teacher and student models. (a) Knowledge Review~\cite{chen2021distilling} proposes fixed sampled pathways obtained by enumerating different configurations. (b) L2T-ww~\cite{jang2019learning} adopts a meta-learning framework to learn a floating weight for each selected pathway. (c) Our framework learns a distillation process for each pathway. } \vspace{-15pt} \label{fig:ovewview} \end{figure} Knowledge distillation (KD) was proposed to effectively transfer knowledge from a well-performing larger/teacher deep neural network (DNN) to a given smaller/student network, where the learned student network often learns faster or performs better than it would with a vanilla training strategy using ground truths alone. Since its first appearance in DNN learning~\cite{hinton2015distilling}, KD has achieved remarkable success in training efficient models for image classification~\cite{zagoruyko2016paying}, image segmentation~\cite{liu2019structured}, object detection~\cite{chen2017learning}, \textit{etc.}, contributing to its wide application in model deployment on mobile phones and other low-power computing devices~\cite{lyu2020differentially}. Nowadays, KD has become a popular technique in industry to develop distilled DNNs that deal with billions of data samples per day. To improve distillation efficiency and accuracy, numerous handcrafted KD schemes have been proposed, \textit{e.g.}, designing different distillation losses at the outputs~\cite{liu2019structured,wang2020intra}, or manually assigning intermediate feature maps for additional KD guidance~\cite{zagoruyko2016paying,yim2017gift}. However, recent studies~\cite{gou2021knowledge,liu2020search,yao2021joint} indicate that the effectiveness of these KD techniques depends on networks and tasks. Some recent works propose to search over configurations to arrive at a better KD scheme. For instance, ReviewKD~\cite{chen2021distilling} (Fig.~\ref{fig:ovewview}(a)) evaluates a subset of a total of 16 pathways by enabling and disabling some of them for the image classification task. It comes to the conclusion that some pathways are always redundant. However, our experimental results show that this does not hold for a semantic segmentation task, and that after conducting a search over the pathways, we may obtain better results.
L2T-ww~\cite{jang2019learning} (Fig.~\ref{fig:ovewview}(b)) takes a further step: it not only sets up multiple distillation pathways between feature maps, but also learns a floating weight for each pathway, which shows better performance than a fixed weight. This inspires us to explore deeper to find better KD schemes. Motivated by the learning rate scheduler, as illustrated in Fig.~\ref{fig:ovewview}(c), we bring in the concept of a distillation process for each pathway $i$, \textit{i.e.}, $\mathcal{A}^i = \{\alpha_t^i\}_{t=1}^{T}$, where $T$ is the number of distillation steps. Thus, the importance of each pathway is dynamic and changes along the distillation procedure, which we find to be beneficial in this paper. However, searching for a process is more difficult than finding a floating weight, since it involves $T$ times more parameters. It is clearly impractical to solve this by brute force, \textit{e.g.}, by randomly drawing a sample process and performing a full training and validation of the network to evaluate its performance. Thanks to bi-level meta-learning~\cite{franceschi2018bilevel,liu2018darts}, we find that our problem can be formulated and tackled in a similar manner for each proposed pathway during distillation. Such a framework not only avoids the difficulty of random exploration by providing a valid meta gradient, but also naturally provides soft weightings that can be adopted to generate the process. Additionally, to effectively apply the framework and avoid possibly noisy gradients from the meta-training, we propose a proper normalization for each $\alpha_t = [\alpha^0_t, \cdots, \alpha^N_t]$, where $N$ is the number of pathways. We call our framework \texttt{DistPro} (Distillation Process). In our experiments, we show that \texttt{DistPro} produces better results on various tasks, such as classification with CIFAR100~\cite{krizhevsky2009cifar} and ImageNet1K~\cite{imagenet_cvpr09}, segmentation with CityScapes~\cite{Cordts2016Cityscapes} and depth estimation with NYUv2~\cite{Silberman:ECCV12}. Finally, we find that our learned process remains similar, with minor variations, across different network architectures and tasks as long as the same proposed pathways and transforms are used, which indicates that the process can generalize to new tasks. In practice, we transfer the process learned on CIFAR100 to ImageNet1K, and show that it improves over the baselines and accelerates the distillation (2x faster than ReviewKD~\cite{chen2021distilling}, as shown in Tab.~\ref{tab:transfer}). In summary, our contributions are three-fold: 1) We propose a meta-learning framework for KD, \textit{i.e.}, DistPro, to efficiently learn an optimal process to perform KD. 2) We verify \texttt{DistPro} over various configurations, architectures and task settings, yielding significant improvements over other SoTA methods. 3) Through the experiments, we find that the learned processes generalize across tasks and networks, and can potentially benefit KD in new tasks without additional searching. Our code and implementation will be released upon the publication of this paper. \section{Approach} \label{sec:algo} In this section, we elaborate \texttt{DistPro} by first setting up the KD pathways with intermediate features, which construct our search space, and establishing the notation and definition of our KD scheme. Then, we derive the gradient, which is used to generate our process for the KD scheme. At last, the overall algorithm is presented.
\vspace{-5pt} \subsection{KD with intermediate features}\label{sec:prelim} \vspace{-5pt} Numerous prior works~\cite{ji2021feature,chen2021distilling} have demonstrated that intermediate feature maps from the teacher network can benefit distillation to the student network. Motivated by this, we design our approach using feature maps. Let the student neural network be $\mathcal{S}$ and the teacher neural network be $\mathcal{T}$. Given an input example $\mathbf{X}$, the output of the student network is as follows
\begin{equation} \mathcal{S}(\mathbf{X}) := \mathcal{S}_{L_s} \circ \cdots \circ \mathcal{S}_{2} \circ \mathcal{S}_{1}(\mathbf{X}), \end{equation}
where $\mathcal{S}_i$ is the $i$-th layer of the neural network and $L_s$ is the number of layers. The $k$-th intermediate feature map $\mathcal{F}^s_k$ of the student is defined as follows,
\[ \mathcal{F}^s_k(\mathbf{X}) := \mathcal{S}_{k} \circ \cdots \circ \mathcal{S}_{2} \circ \mathcal{S}_{1}(\mathbf{X}),~1 \leq k \leq L_s. \]
Similarly, the feature map of the teacher is denoted by $\mathcal{F}^t_k$, $1 \leq k \leq L_{te}$. Knowledge can be distilled through a pathway between the $i$-th feature map of the teacher and the $j$-th feature map of the student by penalizing the difference between these two feature maps, \textit{i.e.}, $\mathcal{F}^t_i$ and $\mathcal{F}^s_j$. Since the feature maps may come from any stage of the network, they may not be of the same shape and are thus not directly comparable. Therefore, additional computation is required to align the two feature maps to the same shape. To this end, a transform block is added after $\mathcal{F}^s_j$, which could be any form of differentiable computation. In our experiments, the transform block consists of multiple convolution layers and an interpolation layer that align the spatial and channel resolution of the feature map. Denoting the transform block by $\mathcal{M}$, the loss term measuring the difference between these two feature maps is as follows,
\[ \ell(\mathcal{F}^s_j, \mathcal{F}^t_i) := \delta(\mathcal{M}(\mathcal{F}^s_j), \mathcal{F}^t_i), \]
where $\delta$ is a distance function, which could be the L1 or L2 distance as used in~\cite{ji2021feature}, etc. \subsection{The Distillation Process} \vspace{-5pt} We are now able to build pathways connecting any feature layer of the teacher to any layer of the student with appropriate transforms. However, as discussed in Sec.~\ref{sec:intro}, not all of these pathways are beneficial. This motivates us to design an approach that finds out the importance of each possible pathway by assigning an importance factor to it. Different from the existing work~\cite{jang2019learning,chen2021distilling}, the importance factor here is a process. Formally, a stochastic weight process $\mathcal{A}^i = \{\alpha^i_t\}_{t=1}^{T}$ is associated with the pathway $i$, where $T$ is the total number of learning steps. Here, $\alpha^i_t$ describes the importance factor at learning step $t$.
\begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figures/algo.pdf} \vspace{-15pt} \caption{Two phases of the proposed algorithm. In the search phase, the student parameters $\theta$ and the factors $\alpha_t$ are computed. In the retrain phase, the learned process $\mathcal{A}=\{\alpha_t, 0\le t \le T_{search}\}$ is interpolated to be used for KD, and only $\theta$ is updated.} \vspace{-10pt} \label{fig:algo} \end{figure}
Let $D_{train} := \{(\mathbf{X}_i, y_i)\}_{i=1}^{|D_{train}|}$ and $D_{val} := \{(\mathbf{X}_i, y_i)\}_{i=1}^{|D_{val}|}$ be the training set and the validation set respectively, where $y_i$ is the label of sample $\mathbf{X}_i$. We assume that for each pair of feature maps, $\mathcal{F}^t_i$ and $\mathcal{F}^s_j$, we have $N$ candidate pre-defined transforms, $\mathbf{M}_{i,j,1}, \mathbf{M}_{i,j,2}, \cdots, \mathbf{M}_{i,j,N}$. The connections together with the transforms construct our search space as shown in Fig.~\ref{fig:arch}. We now define the search objective, which consists of the optimizations of the student network and $\mathcal{A}$. The student network is trained on the training set with a loss encoding the supervision from both the ground truth label and the teacher network. Specifically, denoting the parameters of the student and the transforms by $\theta$, the loss on the training set at learning step $t$ is defined as follows,
\begin{equation} \begin{aligned} L_{train}(\theta_t, \alpha_t) &= \frac{1}{|D_{train}|} \sum_{(\mathbf{X}, y) \in D_{train}} \Big(\delta_{label}(\mathcal{S}(\mathbf{X}), y)& \\ &~~~~~~~~~~~~~~~~+ \sum_{i=1}^{L_{te}} \sum_{j=1}^{L_s} \sum_{k=1}^{N} \alpha_t^{i,j,k} \delta\left(\mathbf{M}_{i, j, k}\left(\mathcal{F}^s_j(\mathbf{X})\right), \mathcal{F}^t_i(\mathbf{X})\right)\Big),\\ \alpha_t^{i,j,k} &= \frac{\exp(\tilde{\alpha}_t^{i,j,k})}{\sum_{i, j, k}\exp(\tilde{\alpha}_t^{i,j,k}) + \exp(g)} \label{eq:alphatrain} \end{aligned} \end{equation}
where $\alpha_t \in \mathbb{R}_{\geq 0}^{L_{te} \times L_s \times N}$ collects the importance factors at training step $t$ and $\delta_{label}$ is a distance function measuring the difference between predictions and labels. Here, $\alpha_t = [\alpha_t^{0, 0, 0}, \cdots]$ controls the importance of all knowledge distillation pathways at training step $t$. For numerical stability and to avoid noisy gradients, we apply a biased softmax normalization with parameter $g=1$ to compute $\alpha_t$, which we validated against various normalization strategies in our experiments. This is a commonly adopted trick in meta-learning for various tasks such as NAS~\cite{xie2018snas,liu2018darts} or few-shot learning~\cite{ren2018learning,jang2019learning}. More discussion of normalization methods can be found in the experimental section (Sec.~\ref{sec:ablation}). Next, our goal is to find an optimal sampled process $\mathcal{A}^*$ yielding the best KD results, where the validation set is used to evaluate the performance of the student on unseen inputs. To this end, following~\cite{liu2018darts}, the validation loss is adopted to evaluate the quality of $\mathcal{A}$, which is defined as follows,
\[ \setlength\abovedisplayskip{-1pt} L_{val}(\theta) := \frac{1}{|D_{val}|} \sum_{(\mathbf{X}, y) \in D_{val}} \delta_{label}(\mathcal{S}(\mathbf{X}), y). \vspace{-5pt} \]
Finally, we formulate the bi-level optimization problem over $\mathcal{A}$ and the network parameters $\theta$ as:
\begin{equation} \setlength\abovedisplayskip{-15pt} \begin{aligned} \min_{\mathcal{A}} \quad & L_{val}\left(\theta^*(\mathcal{A})\right) \\ \textrm{s.t.} \quad & \theta^*(\mathcal{A}) = \arg\min_{\theta} \quad L_{train}(\Theta, \mathcal{A}).
\label{eq:bilevel} \end{aligned} \vspace{-5pt} \end{equation}
where $\theta^*(\mathcal{A})$ denotes the parameters of the student network trained with the loss process defined by $\mathcal{A}$, \textit{i.e.}, $L_{train}(\Theta, \mathcal{A}) = \{L_{train}(\theta_t, \alpha_t)\}_{t=1}^{T}$. The ultimate goal of the optimization is to find an $\mathcal{A}$ such that the loss on the validation set is minimized. Note that a similar problem has been posed in NAS~\cite{xie2018snas,liu2018darts}, asking for a fixed architecture; our problem is different and harder, and reduces to theirs if we enforce all the values in $\mathcal{A}$ to be the same. However, similarly, we are able to apply a gradient-based method following the chain rule to solve this problem.
\begin{algorithm}[tb] \caption{DistPro} \label{alg:algorithm} \textbf{Input}: Full train data set $D$; Pre-trained teacher; Initialization of $\alpha_0$, $\theta_0$; Number of iterations $T_{search}$ and $T$.\\ \textbf{Output}: Trained student. \begin{algorithmic}[1] \STATE Split $D$ into $D_{train}$ and $D_{val}$. \STATE Let $t=0$. \WHILE{$t < T_{search}$} \STATE Compute $\alpha_t$ by descending the gradient approximation in Eq.~\ref{eq:final} and normalizing. \STATE Update $\theta_t$ by descending $\nabla_\theta L_{train}(\theta_t; \alpha_t)$ in Eq.~\ref{eq:alphatrain}. \STATE Push the current $\alpha_t$ to $\mathcal{A}$. \ENDWHILE \STATE Interpolate $\mathcal{A}$ to length $T$. \STATE Set $D_{train} = D$ or use a new $D_{train}$ in another task. \STATE Reset $t = 0$. \WHILE{$t < T$} \STATE Load the $\alpha_t$ corresponding to the current $t$. \label{algo:line:load} \STATE Update $\theta$ by descending $\nabla_\theta L_{train}(\theta, \alpha_t)$. \ENDWHILE \end{algorithmic} \end{algorithm}
\subsection{Learning the Process $\mathcal{A}$} \label{sec:meta-learning} \vspace{-5pt} The formulation in Eq.~\ref{eq:bilevel} is intractable to solve directly. Therefore, we propose two assumptions to simplify the problem. First, a smoothness assumption: the next step of the process should be close to the previous one. This is commonly adopted in DNN training with stochastic gradient descent (SGD) and a learning rate scheduler~\cite{kingma2014adam}. Second, a similar-learning-procedure assumption: across different distillation runs for a given teacher/student pair, the student will have similar parameters $\theta_t$ at the same step $t$. This assumption allows us to search for $\mathcal{A}$ within a single training procedure, and it also holds in our experiments when the batch size is relatively large (\textit{e.g.}, 512 for ImageNet). Based on these assumptions, we let $\alpha_{t+1} = \alpha_{t} + \gamma\Delta(\alpha_t | \theta_t)$, where the small $\gamma$ controls its rate of change. Then, we are able to adopt a greedy strategy to break the original problem of Eq.~\ref{eq:bilevel} down to a sequence of single-step optimizations, which can be defined as,
\begin{equation} \setlength\abovedisplayskip{-2pt} \begin{aligned} \alpha_{t+1} = \alpha_t - \gamma\nabla_{\alpha} L_{val}\left(\theta_{t+1}(\alpha_t)\right)\\ \nonumber \textrm{s.t.} ~~ \theta_{t+1}(\alpha_t) = \theta_t - \xi \nabla_\theta L_{train}(\theta_t, \alpha_t) \label{eq:singlestepapprox} \end{aligned} \end{equation}
where $\xi$ is the learning rate of the inner optimization for training the student network. In practice, the inner optimization can be solved with a more sophisticated gradient-based method, \textit{e.g.}, gradient descent with momentum; a minimal sketch of one greedy step is given below.
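To make the update concrete, the following is a minimal PyTorch-style sketch of a single greedy step of Eq.~\ref{eq:singlestepapprox}. The helpers \texttt{train\_loss\_fn} (evaluating the weighted loss of Eq.~\ref{eq:alphatrain} on a training batch with the given parameters) and \texttt{val\_loss\_fn} (the validation label loss) are assumptions for illustration rather than part of our implementation, and the hypergradient is obtained here by differentiating through the one-step unrolled update; the cheaper finite-difference approximation we actually use is derived next.
\begin{verbatim}
import torch

def greedy_step(params, alpha, train_loss_fn, val_loss_fn,
                xi=0.05, gamma=1e-3):
    # Virtual inner SGD step: theta_{t+1} = theta_t
    # - xi * grad_theta L_train(theta_t, alpha_t);
    # create_graph=True keeps theta_{t+1} differentiable w.r.t. alpha.
    g = torch.autograd.grad(train_loss_fn(params, alpha), params,
                            create_graph=True)
    theta_next = [p - xi * g_i for p, g_i in zip(params, g)]
    # Outer step: alpha_{t+1} = alpha_t
    # - gamma * grad_alpha L_val(theta_{t+1}(alpha_t)).
    hypergrad = torch.autograd.grad(val_loss_fn(theta_next), alpha)[0]
    with torch.no_grad():
        alpha -= gamma * hypergrad        # alpha_{t+1}
        for p, t_next in zip(params, theta_next):
            p.copy_(t_next)               # commit theta_{t+1}
    return alpha.detach().clone()         # pushed into the process A
\end{verbatim}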
When such more sophisticated inner optimizers are used, Eq.~\ref{eq:singlestepapprox} has to be modified accordingly, but all of the following analysis still applies. Next, we apply the chain rule to Eq.~\ref{eq:singlestepapprox} and get \begin{equation} \begin{aligned} \nabla_{\alpha} L_{val}\left(\theta_t - \xi \nabla_\theta L_{train}(\theta_t, \alpha_t)\right) = - \xi \nabla_{\alpha, \theta }^2 L_{train}(\theta_t, \alpha_t) \nabla_\theta L_{val}(\theta_{t+1}). \end{aligned} \end{equation} However, the above expression contains second-order derivatives, which is computationally expensive. We therefore approximate this second-order derivative with finite differences, as introduced in~\cite{liu2018darts}. Let $\epsilon$ be a small positive scalar and define the notation $\theta^\pm = \theta_t \pm \epsilon \nabla_\theta L_{val}(\theta_{t+1})$. Then, we have \begin{equation} \begin{aligned} \nabla_{\alpha, \theta}^2 L_{train}(\theta_t, \alpha_t) \nabla_\theta L_{val}(\theta_{t+1}) = \frac{\nabla_{\alpha}L_{train}(\theta^+, \alpha_t) - \nabla_{\alpha}L_{train}(\theta^-, \alpha_t)}{2\epsilon}. \label{eq:final} \end{aligned} \end{equation} Finally, we set the initial value $\alpha_0 = 1$ and launch the greedy learning procedure. Once learned, all computed $\alpha_t$ are pushed into a sequence, which serves as a sample of the learned stochastic distillation process $\mathcal{A}$ and can be used for retraining the student network with KD on the same or similar tasks. \vspace{-5pt} \subsection{Acceleration and Adopting $\mathcal{A}$ for KD} \vspace{-5pt} Now let us take a closer look at the approximation. To evaluate the expression in Eq.~\ref{eq:final}, the following items have to be computed. First, computing $\theta_{t+1}$ requires a forward and backward pass of the student, and a forward pass of the teacher. Then, computing $\theta^\pm$ requires another forward and backward pass of the student. Finally, computing $\nabla_{\alpha}L_{train}(\theta^{\pm}, \alpha_t)$ requires two forward passes of the student. In conclusion, evaluating the approximated gradient in Eq.~\ref{eq:final} entails one forward pass of the teacher, and four forward passes and two backward passes of the student in total. This is a time-consuming process, especially when the required number of KD training epochs is large, \textit{e.g.} $\geq 100$. In the meta-learning NAS literature, to avoid the second-order approximation, researchers commonly adopt first-order training~\cite{xie2018snas,liu2018darts} based solely on the training loss. However, this is not practical in our case since the hyperparameters are defined on the training loss; a first-order gradient with respect to $\alpha_t$ would simply drive all values to $0$. Therefore, in our case, we choose to reduce the learning epochs of $\mathcal{A}$ to $T_{search}$, which is much smaller than $T$, and then expand it to a sequence of length $T$ using linear interpolation. The choice of $T_{search}$ can be dynamically adjusted based on the dataset size, which will be elaborated in Sec.~\ref{sec:exp}. \begin{figure}[t] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figures/transform.pdf} \caption{The architecture of the transform block. The self-attention block is illustrated in Fig.~\ref{fig:selfatt}.} \label{fig:trans} \end{subfigure} \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{figures/selfatt.pdf} \caption{The architecture of the self-attention block. A convolution layer is applied to the input feature map to generate a $1$-channel attention map.
Then the input feature map is multiplied with the attention map.} \label{fig:selfatt} \end{subfigure} \vspace{-15pt} \caption{Transform blocks} \vspace{-10pt} \end{figure} Additionally, since we have more pathways than other KD strategies~\cite{chen2021distilling}, the loss computation cost cannot be ignored. To reduce the cost, at step $t$ we apply a clip function with threshold $\tau = 0.5$ to all $\tilde{\alpha}_t$ in Eq.~\ref{eq:alphatrain}, and drop the corresponding computation when $\tilde{\alpha}_t \leq \tau$. This saves $60\%$ of the loss computation cost on average, resulting in KD time comparable to our baselines. In summary, the overall \texttt{DistPro} process is presented in Alg.~\ref{alg:algorithm}, where the two phases of the algorithm are explained in order. The first phase of the algorithm searches for the scheme $\mathcal{A}$. To this end, $\alpha_t$ and $\theta_t$ are computed alternately. The $\alpha_t$ obtained in each step is stored for future use. The second phase of the algorithm retrains the neural network with all the available training data and the searched $\mathcal{A}$. \section{Experiments} \label{sec:exp} \vspace{-8pt} In this section, we evaluate the proposed approach on several benchmark tasks including image classification, semantic segmentation, and depth estimation. For image classification, we consider the popular datasets CIFAR100 and ImageNet1K. For semantic segmentation and depth estimation, we consider CityScapes~\cite{Cordts2016Cityscapes} and NYUv2~\cite{Silberman:ECCV12} respectively. To make a fair comparison, we use exactly the same training setting and hyper-parameters for all the methods, including data pre-processing, learning rate scheduler, number of training epochs, batch size, etc. We first demonstrate the effectiveness of \texttt{DistPro} for classification on CIFAR100. Then, we provide more analysis on a larger-scale dataset, ImageNet1K. Finally, we show results on dense prediction tasks and an ablation study. All experiments are performed with Tesla-V100 GPUs. \begin{table*}[!t] \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|cccccc} Teacher & WRN-40-2 & WRN-40-2 & ResNet32x4 & ResNet32x4 & ResNet56 & ResNet110\\ Student & WRN-16-2 & ShuffleNet-v1 & ShuffleNet-v1 & ShuffleNet-v2 & ResNet20 & ResNet32\\ \hline \hline Teacher Acc. & 76.51 & 76.51 & 79.45 & 79.45 & 73.28 & 74.13\\ Student Acc. & 73.26 (0.050) & 70.50 (0.360) & 70.50 (0.360) & 71.82 (0.062) & 69.06 (0.052) & 71.14 (0.061)\\ \hline L2T-ww\textsuperscript{+}~\cite{jang2019learning}& - & -& 76.35 & 77.39 & - & - \\ ReviewKD\textsuperscript{+}~\cite{chen2021distilling} & 76.20 (0.030) & 77.14 (0.015) & 76.41 (0.063) & 77.37 (0.069) & 71.89 (0.056) & 73.16 (0.029)\\ Equally weighted & 75.50 (0.010) & 74.28 (0.085) & 73.54 (0.120) & 74.39 (0.119) & 70.89 (0.065) & 73.19 (0.052)\\ Use $\alpha_T$ & 76.25 (0.034) & 77.19 (0.074) & 77.15 (0.043) & 76.64 (1.335) & 71.24 (0.014) & 73.58 (0.012)\\ DistPro & \textbf{76.36 (0.005)} & \textbf{77.24 (0.063)} & \textbf{77.18 (0.047)} & \textbf{77.54 (0.059)} & \textbf{72.03 (0.022)} & \textbf{73.74 (0.011)}\\ \end{tabular}% } \caption{Results on CIFAR100. Results are averaged over $5$ runs. Variances are in the parentheses. ``+'' represents our reproduced results. In ``Equally weighted", we do not use the searched $\alpha$ in the retrain phase. Instead, each element of $\alpha$ is uniformly set to $1 / L$, where $L$ is the length of $\alpha$.
In ``Use $\alpha_T$", the finally converged $\alpha$ is used at each iteration of the retrain phase. } \vspace{-15pt} \label{tab:cifar} \end{table*} \subsection{Classification on CIFAR100} \vspace{-5pt} \noindent \textbf{Implementation details} We follow the same data pre-processing approach as in~\cite{chen2021distilling}. Similarly, we select a group of representative network architectures including ResNet~\cite{he2016deep}, WideResNet~\cite{zagoruyko2016wide}, MobileNet~\cite{howard2017mobilenets,sandler2018mobilenetv2}, and ShuffleNet~\cite{zhang2018shufflenet,ma2018shufflenet}. We follow the same training setting as in~\cite{chen2021distilling} for distillation. We train the models for $240$ epochs and decay the learning rate by $0.1$ every $30$ epochs after the first $150$ epochs. The batch size is $128$ for all the models. We train the models with the same setting five times and report the mean and variance of the accuracy on the testing set, to demonstrate that the improvements are significant. As mentioned, before distillation training we need to obtain the distillation process by searching, as described below. To search the process $\mathcal{A}$, we randomly split the original training set into two subdatasets, $80\%$ of the images for training and $20\%$ for validating. We run the search for $40$ epochs and decay the learning rate for $\theta$ (parameters of the student) by $0.1$ at epochs $10$, $20$, and $30$. The learning rate for $\alpha$ is set to $0.05$. Following the settings in~\cite{chen2021distilling}, we did not use all feature maps for knowledge distillation but only the ones after each downsampling stage. Transform blocks are used to transform the student feature map. Fig.~\ref{fig:trans} shows the block architecture. The number of pathways is 27, as we use 3 transform blocks and 9 connections. To make a fair comparison, we use the same HCL distillation loss as in~\cite{chen2021distilling}. Once the process is obtained, linear interpolation is used to expand $\mathcal{A}$ from 40 epochs (search stage) to 240 epochs (retrain stage) during KD. \noindent\textbf{Results} The quantitative results are summarized in Table~\ref{tab:cifar}. The first two rows show the architectures of the teacher and student networks and their accuracies without distillation. In the line ``ReviewKD\textsuperscript{+}'', we list the results reproduced using the code released by the authors~\footnote{\url{https://github.com/dvlab-research/ReviewKD}}; note they can differ slightly from the reported numbers due to randomness in data loading. To demonstrate that learning $\alpha$ is essential, we first assign equal weights to all pathways. As shown in the line ``Equally weighted'', the results are worse than ReviewKD, indicating the selected pathways from ReviewKD are useful. For ablation, we adopt the learned $\alpha_{T}$ from the end of the process. As shown in ``Use $\alpha_T$'', it outperforms ReviewKD in multiple settings. The last row shows the results adopting the learned process $\mathcal{A}$, which outperforms ReviewKD significantly relative to the measured variances. \begin{figure}[t] \scalebox{0.9}{ \begin{minipage}{.45\linewidth} \centering \includegraphics[width=\textwidth]{figures/arch_epochs.png} \vspace{-20pt} \caption{Comparison study on fast distillation on ImageNet1K.
} \label{fig:speed} \end{minipage} } \begin{minipage}{.55\linewidth} \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|c|cc} & Network & ReviewKD\cite{chen2021distilling} & DistPro(Ours)\\ \hline \hline Top-1 & \multirow{2}{*}{MobileNet} & 72.56 & 72.54 \\ \#Epochs & & 100 & \textbf{50} \\ \hline Top-1 & \multirow{2}{*}{ResNet18} & 71.61 & 71.59 \\ \#Epochs & & 100 & \textbf{65} \\ \hline Top-1 & \multirow{2}{*}{DeiT} & 73.44 & 73.41 \\ \#Epochs & & 150 & \textbf{100} \end{tabular}% } \captionof{table}{Comparison study on the number of distillation epochs needed to achieve the same top-1 accuracy on ImageNet1K (lower is better). } \label{tab:speed} \end{minipage} \vspace{-10pt} \end{figure} \vspace{-8pt} \subsection{Classification on ImageNet1K} \label{sec:exp_imagenet} \vspace{-5pt} \noindent \textbf{Implementation Details} We follow the search strategy used for CIFAR100 and the training configurations in \cite{chen2021distilling} with a batch size of 512 (4 GPUs are used). The selected student/teacher networks are MobileNet/ResNet50, ResNet18/ResNet34 and DeiT-tiny~\cite{touvron2021deit}/ViT-B~\cite{dosovitskiy2020vit}. We adopt a different architecture of the transform block for DeiT. At the search stage, we search $\mathcal{A}$ on Tiny-ImageNet~\cite{le2015tiny} for 20 epochs. We adopt a cosine learning rate scheduler for training the student network. For the search space, 1 transform block is used; we consider 5 feature maps in the networks and build 15 pathways, removing some pathways following~\cite{chen2021distilling} due to limited GPU memory. At the KD stage, we train the network for various numbers of epochs with an initial learning rate of 0.1 and a cosine scheduler. Searching takes 4 GPU hours, and distillation with 100 epochs takes about 80 GPU hours. More details (ViT transform block, selected pathways, etc.) can be found in the supplementary material due to limited space. \noindent\textbf{Fast Distillation.} As shown in prior work~\cite{shen2021fast}, KD can be used to accelerate network learning, especially on large datasets, since a network's accuracy can also be increased using only ground-truth labels if it is trained for a large number of epochs~\cite{hoffer2017train}. Previous work~\cite{chen2021distilling} only reports results at 100 epochs. Here, we argue that it is also important to evaluate results at lower training cost, which is another important criterion for evaluating KD methods. This is practically useful for applications requiring fast network training. We show that \texttt{DistPro} reaches a much better trade-off between training cost and accuracy. In Fig.~\ref{fig:speed}, \texttt{DistPro} with different network architectures (red curves) outperforms ReviewKD with the same setting (dashed blue curves) at all training budgets. The performance gain is larger when less training cost is allowed, \textit{e.g.} 3.22\% on MobileNet with 10 epochs (65.25\% vs 62.03\%), decreasing to 0.14\% at 100 epochs, which is still a decent improvement. Similar results are observed with ResNet18. Note that for all results with various epochs we adopt the same learned process $\mathcal{A}$, whose search cost is negligible compared to that of KD. In Tab.~\ref{tab:speed}, we show the number of epochs saved by \texttt{DistPro} when achieving the same accuracy as ReviewKD~\cite{chen2021distilling}. For instance, in training MobileNet, \texttt{DistPro} only uses 50 epochs to achieve 72.54\%, which is comparable to ReviewKD trained with 100 epochs (72.56\%), yielding a 2x acceleration.
Similar acceleration is also observed with ResNet18 and DeiT. \begin{table}[t] \begin{tabular}{c|cc|cc|c} Setting & Search dataset & Retrain dataset & Teacher & Student & Top-1 (\%) \\ \hline \multirow{2}{*}{(a)} & Tiny-ImageNet & ImageNet1K & ResNet50 & MobileNet & 73.26 \\ & CIFAR100 & ImageNet1K & ResNet50 & MobileNet & 73.20 \\ \midrule Setting & Search teacher & Search student & Retrain teacher & Retrain student & Top-1 (\%) \\ \hline \multirow{2}{*}{(b)} & ResNet34 & ResNet18 & ResNet34 & ResNet18 & 71.89 \\ & ResNet50 & MobileNet & ResNet34 & ResNet18 & 71.87 \end{tabular} \caption{Results of transferring the searched $\mathcal{A}$ across search-retrain datasets and search-retrain networks, performed on ImageNet1K with 100 epochs. Top-1 accuracy on the validation set is reported. } \vspace{-20pt} \label{tab:transfer} \end{table} \noindent \textbf{Transfer $\mathcal{A}$ } Here, we study whether the learned $\mathcal{A}$ can be transferred across datasets and similar architectures. Tab.~\ref{tab:transfer} shows the results. In setting (a), we resize the images in CIFAR100 to $224\times 224$ and perform the process search on CIFAR100, where the search cost is only 2 GPU hours. As shown in the last column, accuracy only drops by 0.06\%, comparable to the full search on ImageNet, demonstrating that the process can be transferred across datasets. In setting (b), we transfer the process searched with the student/teacher networks MobileNet/ResNet50 to ResNet18/ResNet34, since they have the same feature pathways at similar corresponding layers. As shown, the results are close, demonstrating that the process can be transferred when similar pathways exist. \begin{table}[b] \resizebox{1.0\linewidth}{!}{% \begin{tabular}{c|c|cc|cccccc} Setting & Acc (\%) & Teacher & Student & OFD\cite{heo2019comprehensive} & LONDON\cite{shang2021london} & SCKD\cite{zhu2021sckd} & ReviewKD\cite{chen2021distilling} & ReviewKD* & DistPro\\ \hline \hline \multirow{2}{*}{(a)} & Top1 & 76.16 & 68.87 & 71.25 & 72.36 & 72.4 & 72.56 & 73.12 & \textbf{73.26} \\ & Top5 & 93.86 & 88.76 & 90.34 & 91.03 & - & 91.00 & 91.22 & \textbf{91.27} \\ \hline \multirow{2}{*}{(b)} & Top1 & 73.31 & 69.75 & 70.81 & - & 71.3 & 71.61 & 71.76 & \textbf{71.89} \\ & Top5 & 91.42 & 89.07 & 89.98 & - & - & 90.51 & 90.67 & \textbf{90.76} \\ \hline (c) & Top-1 & 85.1 & 73.41 & - & - & - & - & 73.44 & \textbf{73.51} \end{tabular}% } \caption{Comparison study on ImageNet1K. Settings are (a) teacher: ResNet-50, student: MobileNet; (b) teacher: ResNet-34, student: ResNet-18; (c) teacher: ViT-B, student: DeiT-tiny. ReviewKD* denotes our reproduced experimental results with cosine learning rate scheduler. } \vspace{-20pt} \label{tab:imagenet} \end{table} \noindent\textbf{Best Results.} Finally, Tab.~\ref{tab:imagenet} shows the quantitative comparison with various SoTA baselines. As mentioned, we adopt a cosine scheduler while ReviewKD~\cite{chen2021distilling} adopts a step scheduler. To make it fair, we retrain ReviewKD with the cosine scheduler and list the results in the column ReviewKD*. As shown in the table, for all three distillation settings \texttt{DistPro} outperforms the existing methods, achieving top-1 accuracies of 73.26\% for MobileNet, 71.89\% for ResNet18 and 73.51\% for DeiT respectively, yielding new SoTA results for all these networks.
\vspace{-5pt} \subsection{Dense prediction tasks} \label{subsec:dense} \vspace{-5pt} \noindent\textbf{CityScapes} is a popular semantic segmentation dataset with pixel class labels~\cite{Cordts2016Cityscapes}. We compare to a SoTA segmentation KD method called IFVD~\cite{wang2020intra} and adopt their released code~\footnote{https://github.com/YukangWang/IFVD}. IFVD is a response-based KD method and can therefore be combined with feature-based distillation methods including ReviewKD and \texttt{DistPro}, shown as ``+ReviewKD'' and ``+\texttt{DistPro}'' respectively in Table~\ref{tab:cityscapes}. We adopt MobileNet-v2 as the student to compare against other SoTA results from SCKD~\cite{zhu2021sckd}. Note that for simplicity and numerical stability we disable the adversarial loss of the original IFVD. From the table, ``+\texttt{DistPro}'' outperforms all competing methods. \noindent\textbf{NYUv2} is a dataset widely used for depth estimation~\cite{Silberman:ECCV12}. The experiments are based on the code~\footnote{https://github.com/fangchangma/sparse-to-dense} released in S2D~\cite{ma2018sparse}. We compare \texttt{DistPro} with plain KD, where we treat the teacher output as ground truth, and with ReviewKD using intermediate feature maps. Root mean squared errors (RMSE) are summarized in Table~\ref{tab:nyu}, showing that \texttt{DistPro} is also beneficial here. In the future, we will explore more tasks to verify its generalization ability. \begin{table}[t] \scalebox{0.95}{ \begin{minipage}{.33\linewidth} \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|l} Teacher & ResNet101\\ Student & MobileNet-v2\\ \hline \hline Baseline & 66.92 (0.00721)\\ IFVD~\cite{wang2020intra} & 68.31 (0.00264)\\ SCKD~\cite{zhu2021sckd} & 68.25 (0.00307) \\ +ReviewKD~\cite{chen2021distilling} & 69.03 (0.00373)\\ +Equally Weighted $\alpha$ & 68.49 (0.00793)\\ \hline +\texttt{DistPro} & \textbf{69.12 (0.00462)}\\ \end{tabular} } \caption{mIoU (\%) on CityScapes (higher is better). Results are averaged over $5$ runs. Standard deviations are in the parentheses. } \label{tab:cityscapes} \end{minipage}}% \quad\quad \scalebox{0.95}{ \begin{minipage}{0.33\linewidth} \resizebox{1.0\linewidth}{!}{% \begin{tabular}{l|l} Teacher & ResNet50\\ Student & ResNet18\\ \hline \hline Baseline & 0.2032\\ KD & 0.2045\\ ReviewKD~\cite{chen2021distilling} & 0.1983\\ Equally Weighted $\alpha$ & 0.2030\\ \hline \texttt{DistPro} & \textbf{0.1972}\\ \end{tabular} } \caption{Estimation error on NYUv2 (lower is better). \\\hfill\\\hfill\hfill\hfill\hfill} \label{tab:nyu} \end{minipage}} \scalebox{0.95}{ \begin{minipage}{0.3\linewidth} \resizebox{1.0\linewidth}{!}{% \begin{tabular}{c|c} Normalization & Acc. \\ \hline \hline $\texttt{softmax}(\texttt{concat}(\tilde{\alpha}, 1))$ & \textbf{79.78}\\ $\texttt{softmax}(\tilde{\alpha})$ & 79.41\\ $\frac{\tilde{\alpha}}{\|\tilde{\alpha}\|_1 + 1}$ & 79.47\\ $\frac{\tilde{\alpha}}{\|\tilde{\alpha}\|_1}$ & 79.20\\ $\texttt{sigmoid}(\tilde{\alpha})$ & 78.80\\ \end{tabular} } \captionof{table}{Performance of different normalization methods on CIFAR100. $\texttt{concat}(\tilde{\alpha}, 1)$ concatenates $\tilde{\alpha}$ with a scalar $1$.
Results are averaged over $5$ runs.} \label{tab:normalization} \end{minipage}} \vspace{-25pt} \end{table} \vspace{-5pt} \subsection{Ablation study} \label{sec:ablation} \vspace{-5pt} \noindent\textbf{Is it always beneficial to transfer knowledge only from lower-level feature maps to higher-level feature maps?} In Table~6 of~\cite{chen2021distilling}, the authors conducted a group of experiments on CIFAR100 showing that only pathways from lower-level feature maps in the teacher to higher-level feature maps in the student are beneficial. However, when we conducted similar experiments on CityScapes, the results did not support this claim. Specifically, using pathways from the last feature map to the first three feature maps results in an mIoU of $73.9\%$ on ResNet18, while using pathways from all lower-level feature maps to higher-level ones as in~\cite{chen2021distilling} results in $73.5\%$. This suggests that the optimal teaching scheme needs to be searched over a larger space. The results in Table~\ref{tab:cityscapes} also show that our searched process is better than the hand-crafted one. \begin{figure}[t] \centering \includegraphics[width=0.32\textwidth]{figures/alpha_1.pdf} \includegraphics[width=0.32\textwidth]{figures/alpha_2.pdf} \includegraphics[width=0.32\textwidth]{figures/alpha_3.pdf} \vspace{-10pt} \caption{Searched distillation process on CIFAR100 with WRN-40-2 as teacher and WRN-16-2 as student. The $x$-axis is the number of iterations. The left image contains $\alpha[1,:,:]$, i.e., all the elements corresponding to the lowest-level feature map of the teacher. The middle one corresponds to $\alpha[2,:,:]$, and the right one corresponds to $\alpha[3,:,:]$.} \vspace{-15pt} \label{fig:alphas} \end{figure} \noindent \textbf{Intuition of process $\mathcal{A}$} In Figure~\ref{fig:alphas}, we show how the learned sample of $\mathcal{A}$ changes over time in the search stage when distilling WRN-40-2 to WRN-16-2. Similar observations are found in other settings. The figure indicates that at the early stage of the training, the optimized teaching scheme focuses on transferring knowledge from the low-level feature maps of the teacher to the student. As training proceeds, the optimized teaching scheme gradually moves on to the higher-level feature maps of the teacher. Intuitively, high-level feature maps encode highly abstracted information of the input image and are thus harder to learn from compared to the low-level feature maps. \texttt{DistPro} is able to automatically find a teaching scheme that exploits this intuition. \noindent \textbf{Normalization of $\alpha_t$} As mentioned before, we apply normalization to $\alpha_t$ (Eq.~\ref{eq:alphatrain}) for numerical stability after the second-order gradient approximation. In this study, we evaluate several normalization strategies. Let the unnormalized parameters be $\tilde{\alpha}_t$ and the normalized ones be $\alpha_t$. In the following, we view $\alpha_t$ as a vector of length $L$. The experiments are conducted on CIFAR100 with WRN-28-4 as teacher and WRN-16-4 as student. The results are shown in Table~\ref{tab:normalization}, and the proposed biased softmax normalization outperforms the others. Intuitively, due to the appended scalar $1$, the values of $\tilde{\alpha}_t$ can be compared against a constant, which yields a label smoothing~\cite{muller2019does} effect for the distribution.
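As a small illustration of the variants compared in Table~\ref{tab:normalization}, the following sketch applies them to a toy logit vector; the numbers are made up, and the $\ell_1$ variants assume non-negative logits.
\begin{verbatim}
import torch
a = torch.tensor([0.3, 1.2, 2.0])   # toy \tilde{alpha}_t, one entry per pathway
biased = torch.softmax(torch.cat([a, torch.ones(1)]), 0)[:-1]  # ours, bias 1
plain  = torch.softmax(a, 0)          # softmax(alpha~)
l1b    = a / (a.abs().sum() + 1)      # alpha~ / (||alpha~||_1 + 1)
l1     = a / a.abs().sum()            # alpha~ / ||alpha~||_1
sig    = torch.sigmoid(a)             # sigmoid(alpha~)
print(biased, plain, l1b, l1, sig)
\end{verbatim}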
\section{Related Works} \vspace{-10pt} \textbf{Knowledge distillation.} Starting under the name knowledge transfer~\cite{bucilua2006model,urner2011access}, knowledge distillation (KD) was later popularized by Hinton et al.~\cite{hinton2015distilling} for training efficient neural networks. It has since been an active field, in terms of designing KD losses~\cite{wang2018kdgan,wang2020intra}, combining KD with multiple tasks~\cite{oord2018parallel,devlin2018bert}, and dealing with specific issues, e.g. few-shot learning~\cite{kimura2018few} and long-tail recognition~\cite{xiang2020learning}. Here, we mainly highlight the works most closely related to ours in order to position our contributions. According to a recent survey~\cite{gou2021knowledge}, the current KD literature includes multiple knowledge types, e.g. response-based~\cite{muller2019does}, feature-based~\cite{Romero2015FitNetsHF,heo2019comprehensive} and relation-based~\cite{tung2019similarity} knowledge. In terms of distillation method, there are offline~\cite{hinton2015distilling}, online~\cite{chen2020online} and self-distillation~\cite{zhang2019your} approaches. For distillation algorithms, various criteria have been proposed, such as adversarial-based~\cite{wang2018kdgan}, attention-based~\cite{huang2017like}, graph-based~\cite{lee2019graph} and lifelong~\cite{chen2018lifelong} distillation. Finally, KD can also be extended with task-aware metrics for specific tasks, e.g. speech~\cite{oord2018parallel} and NLP~\cite{devlin2018bert}. In principle, we hope that all the surveyed KD schemes, i.e. knowledge types and methods under given task settings, can be pooled in a universal way into one search space in order to find the best distillation. In this paper, we take a first step towards this goal by exploring a sub-field of this space, which is already a challenging problem to solve. Specifically, we adopt the setting of offline distillation with feature-based and response-based knowledge, where both network responses and intermediate feature maps are used for KD. For the KD method, we use attention-based methods to compare feature responses, and apply the KD model to vision tasks including classification, segmentation and depth estimation. Inside this field, knowledge review~\cite{chen2021distilling} and L2T-ww~\cite{jang2019learning} are the most related to our work. The former investigates the importance of a few pathways and proposes a knowledge review mechanism with a novel connection pattern, i.e. a residual learning framework. It provides SoTA results on several common comparison benchmarks. The latter learns fixed weights for a few intermediate pathways for few-shot knowledge transfer. As shown in Fig.~\ref{fig:ovewview}, \texttt{DistPro} finds a distillation process; we therefore extend the search space. In addition, for dense prediction tasks, one related work is IFVD~\cite{wang2020intra}, which proposes an intra-class feature variation comparison. \texttt{DistPro} extends freely to dense prediction tasks and obtains extra benefits when combined with IFVD. \noindent\textbf{Meta-learning for KD/hyperparameters.} To automate the learning of a KD scheme, we investigated a wide range of efficient meta-learning methods in other fields that we might adopt. For example, L2L~\cite{andrychowicz2016learning} proposes to learn a hyper-parameter scheduling scheme through an RNN-based meta-network.
Franceschi et al.~\cite{franceschi2018bilevel} propose a gradient-based approach without external meta-networks. Later, these meta-learning ideas were also utilized for few-shot learning (e.g. learning to reweight~\cite{ren2018learning}), cross-task knowledge transfer (e.g. learning to transfer~\cite{jang2019learning}), and neural architecture search (NAS) (e.g. DARTS~\cite{liu2018darts}). Though these methods share a similar framework, it is critical to embed domain knowledge and task-aware adjustments to make them work. In our case, inspired by these methods, we mainly utilize the gradient-based strategy due to its efficiency for KD scheme learning, and we are the first to propose using the learned process in addition to the learned importance factors. Finally, knowledge distillation for NAS has also drawn significant attention recently. For example, Liu et al.~\cite{liu2020search} try to find student models that are best for distilling a given teacher, while Yao et al.~\cite{yao2021joint} propose to search the architectures of both the student and the teacher models based on a certain distillation loss. Though different from our scenario of fixed student-teacher architectures, this raises another important question of how to find the Pareto optimum in the joint space of architectures and KD schemes under given resource constraints, which we hope can inspire future research.
1,108,101,562,397
arxiv
\begin{document} \title{Yang-Mills correlation functions at finite temperature} \vspace{1.5 true cm} \author{Leonard Fister} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany} \affiliation{ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung mbH, Planckstr. 1, D-64291 Darmstadt, Germany} \author{Jan M. Pawlowski} \affiliation{Institut f\"ur Theoretische Physik, Universit\"at Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany} \affiliation{ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung mbH, Planckstr. 1, D-64291 Darmstadt, Germany} \begin{abstract} {We put forward a continuum approach for computing finite temperature correlation functions in Yang-Mills theory. This is done in a functional renormalisation group setting which allows for the construction of purely thermal RG-flows. This approach is applied to the ghost and gluon propagators as well as the ghost-gluon vertex in the Landau gauge. We present results in a temperature regime going from vanishing temperature to temperatures far above the confinement-deconfinement temperature $T_c$. Our findings compare well with the lattice results available. } \end{abstract} \pacs{05.10.Cc, 11.15.Tk, 12.38.Aw, 11.10.Wx} \maketitle \pagestyle{plain} \setcounter{page}{1} \section{Introduction}\label{sec:introduction} In the past two decades much progress has been made in our understanding of the QCD phase diagram. This has been achieved by a combination of first principle continuum computations in QCD, see e.g.\ \cite{Pawlowski:2010ht,Binosi:2009qm,Fischer:2006ub,% Alkofer:2000wg,Roberts:2000aa}, lattice simulations, see e.g.\ \cite{Philipsen:2011zx,Borsanyi:2010cj,Bazavov:2010sb}, as well as computations in low energy effective models for QCD, see e.g.\ \cite{Fukushima:2010bq,Schaefer:2007pw,Megias:2004hj,% Ratti:2005jh,Fukushima:2003fw,Dumitru:2003hp}. All these methods have their specific strengths, and the respective investigations are complementary. In some applications, the methods have even been applied in combination. In recent years the first principle continuum approach as well as the model computations have become far more quantitative. This is all the more interesting as at present they are among the methods of choice for finite density and non-equilibrium investigations that are necessary for an understanding of heavy ion collisions. A full quantitative understanding of the phase diagram of QCD from these methods requires a good quantitative grip on the thermodynamics and finite temperature dependence of Yang-Mills theory; for a review on thermal gluons see \cite{Maas:2011se}, and for applications of Dyson-Schwinger equations (DSEs) to Yang-Mills propagators see e.g. \cite{Gruter:2004bb,Cucchieri:2007ta}.\\ However, quantitatively reliable continuum results for correlation functions in Yang-Mills theory at general temperatures are presently not available.
This concerns in particular the transition region of the confinement-deconfinement phase transition, occurring at the critical temperature $T_c$. The same applies to thermodynamic quantities such as the pressure. Computations in Yang-Mills theory done with hard-thermal loop or 2PI resummations do not match the lattice results for temperatures $T \lesssim 3\,T_c$, see e.g.\ \cite{Andersen:2011sf} and references therein. In turn, they are reliable for $T \gg T_c$. In the present work we put forward an approach with the functional renormalisation group (FRG), for QCD-related reviews see \cite{Litim:1998nf,Berges:2000ew,Pawlowski:2005xe,Gies:2006wv,% Schaefer:2006sr,Pawlowski:2010ht,Braun:2011pp,Rosten:2010vm}. The FRG incorporates non-perturbative effects by a successive integration of fluctuations related to a given momentum scale, also at non-vanishing temperature, for applications in Yang-Mills theory see e.g. \cite{Litim:1998nf,D'Attanasio:1996fy,Comelli:1997ru,Braun:2005uj,Litim:2006ag}. It is applicable to all temperatures, in particular also for temperatures $T \lesssim 3\, T_c$. The present implementation of the FRG incorporates only thermal fluctuations, see e.g.\ \cite{Litim:2006ag}, and utilises the vacuum physics at vanishing temperature as an input. This also allows us to study the effect of global gauge fixing issues in the Landau gauge due to the Gribov problem, for detailed discussions see \cite{Fischer:2008uz,vonSmekal:2008ws}. A summary of the present work and results is also presented in \cite{proceedings}. In Section~\ref{sec:funflow} we give a brief introduction to the FRG-approach to Yang-Mills theory. In particular, we illustrate with the example of the two-point correlation functions how flow equations for correlation functions are derived from the flow of the effective action. Section~\ref{sec:local} is devoted to the important issue of locality of an FRG-flow. We show how to minimise the momentum transfer present in the diagrams, and hence to minimise the sensitivity to the approximation at hand. We also discuss the factorisation of diagrams at large external momenta that is relevant for subleading momentum corrections. In Section~\ref{sec:thermalflow} we introduce the flow for thermal fluctuations as the difference of the flows at finite temperature and at vanishing temperature, see \cite{Litim:2006ag}. The locality introduced in Section~\ref{sec:local} is used for improving the stability of this difference, as well as for having access to a trivial initial condition for the thermal flow at large cut-off scales. Variants of this approach have already been used very successfully for the physics of scalar theories as well as fermion-boson mixtures; for reviews see \cite{Diehl:2009ma,Scherer:2010sv,Braun:2011pp}. The approximation used in the present work is detailed in Section~\ref{sec:approximation}, and some computational details are discussed in Section~\ref{sec:comp}. Results for propagators and vertices are presented in Section~\ref{sec:props+vertices}. They are compared to lattice results, \cite{Maas:2011ez,Aouane:2011fv, Bornyakov:2010nc, Bornyakov:2011jm, Cucchieri:2000cy,% Cucchieri:2001tw,Cucchieri:2007ta, Cucchieri:2011di, Fischer:2010fx, Heller:1995qc, Heller:1997nqa, Maas:2011se}. Our observations are briefly summarised in Section~\ref{sec:summary}.
\section{Yang-Mills correlation functions from RG-flows}\label{sec:funflow} In this section we introduce the functional RG setting for Yang-Mills theories which we use for the computation of propagators and vertices. Central to this approach is the scale-dependent effective action $\Gamma_k$, where quantum fluctuations with momenta $p^2\gtrsim k^2$ are already integrated out, and $k$ denotes an infrared cut-off scale. By varying the cut-off scale $k$ one thereby interpolates between the classical action in the ultraviolet and the full quantum effective action in the infrared, where the cut-off is removed. An infinitesimal change of $k$ is described by a flow equation for $\Gamma_k$, and the interpolation can be achieved by integrating this flow. \vspace{.06em} Within the standard Faddeev-Popov gauge fixing procedure in the Landau gauge, \begin{equation} \partial_\mu A_\mu^a=0\,,\qquad a=1,...,N_c^2-1,\label{eq:landau} \end{equation} the classical gauge fixed action for $SU(N_c)$ Yang-Mills theory in Euclidean space-time is given by \begin{eqnarray} S=\int_x\left(\014 F_{\mu\nu}^a F_{\mu\nu}^a+\bar c^a\partial_\mu D_\mu^{ab} c^b \right)\,,\label{eq:fixedaction} \end{eqnarray} with $\int_x=\int d^4 x$. In \eq{eq:fixedaction} we have already introduced the ghost action with the ghost fields $\bar c^a$ and $c^a$. The field strength $F$ and the covariant derivative $D$ are given by \begin{eqnarray} F^a_{\mu\nu}&=&\partial_\mu A^a_\nu-\partial_\nu A_\mu^a+gf^{abc} A_\mu^b A_\nu^c\,,\nonumber\\ D_\mu^{ab}&=&\delta^{ab}\partial_\mu+g f^{acb} A_\mu^c\,.\label{eq:defFD}\end{eqnarray} The gauge fields are transversal due to the Landau gauge \eq{eq:landau}. Hence operators and the inverse of operators are defined on the transversal subspace. The suppression of the infrared fluctuations is achieved via a modification of the propagator with $S \to S+\Delta S_k$ and \begin{eqnarray}\label{eq:dSk} \Delta S_k= \012 \int_p \, A^{a}_\mu\, R_{\mu\nu}^{ab} \, A^{b}_\nu +\int_p \, \bar c^{a}\, R^{ab}\, c^{b}\,, \end{eqnarray} with $\int_p=\int d^4 p/(2\pi)^4$. \begin{figure}[t] \includegraphics[width=.8\columnwidth]{figures/Rrdot} \caption{Regulator $R(p^2)$ and its (logarithmic) cut-off scale derivative $\partial_t R(p^2)=\dot R(p^2)$. } \label{fig:R} \end{figure} The momentum-dependent regulator functions $ R_{\mu\nu}^{ab}$ and $R^{ab}$ implement the infrared cut-off at the momentum scale $p^2\approx k^2$ for the gluon and the ghost, respectively. They carry a Lorentz and gauge group tensor structure and are proportional to a dimensionless shape function $r$, schematically we have $R(p^2)\propto p^2 \,r(p^2/k^2)$. An example for $R(p^2)$ and its logarithmic scale derivative is given in Fig.~\ref{fig:R}; a simple numerical example of such a regulator is also sketched below. As we work in Landau gauge, \eq{eq:landau}, we can restrict ourselves to a transversal gluon regulator, $p_{\mu} R_{\mu\nu}^{ab}(p)=0$.
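As an illustration, the following sketch implements the shape function $r(x)=1/(e^x-1)$, i.e.\ $R(p^2)=p^2\, r(p^2/k^2)$, together with its logarithmic scale derivative. The specific shape is our illustrative choice; the construction only requires $R\to k^2$ for $p^2\ll k^2$ and a fast decay for $p^2\gg k^2$.
\begin{verbatim}
import numpy as np

def R(p2, k2):
    x = np.minimum(p2 / k2, 600.0)   # clipped to avoid overflow, tail ~ 0
    return p2 / np.expm1(x)          # -> k^2 for p2 -> 0: mass-like IR cutoff

def Rdot(p2, k2):                    # dR/dt at t = ln k, always >= 0
    x = np.minimum(p2 / k2, 600.0)
    return x * p2 / (2.0 * np.sinh(0.5 * x) ** 2)

p2 = np.linspace(1e-3, 5.0, 6)
print(R(p2, 1.0))
print(Rdot(p2, 1.0))
\end{verbatim}
Both $R$ and $\dot R$ are of order $k^2$ for momenta $p^2\lesssim k^2$ and vanish rapidly above, which is the locality property exploited below.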
With the notation $\varphi=(A,c,\bar c)$ we write the flow equation for the Yang-Mills effective action $\Gamma_k[A,c,\bar c]$ at finite temperature $T$ as \begin{eqnarray} \nonumber \partial_t \Gamma_{k}[\varphi] &= & \frac{1}{2} \int_p\ G^{ab}_{\mu\nu}[\varphi](p,p) \, {\partial_t} R_{\nu\mu}^{ba}(p)\\ &&- \int_p\ G^{ab}[\varphi](p,p) \, {\partial_t} R^{ba}(p)\,, \label{eq:funflow}\end{eqnarray} where $t=\ln k$, and the momentum integration measure at finite temperature is given by \begin{eqnarray}\label{eq:matsubaras} \int_p=T \sum_{n\in \mathbb{Z}}\int \0{d^3 p}{(2 \pi)^3}\,, \quad {\rm with}\quad p_0=2 \pi T n\,, \end{eqnarray} where the integration over $p_0$ turns into a sum over Matsubara frequencies. Both gluons and ghosts have periodic boundary conditions, $\varphi(x_0+1/T,\vec x)=\varphi(x_0,\vec x)$, which is reflected in the Matsubara modes $2 \pi T n$ with a thermal zero mode for $n=0$. At vanishing temperature we have $\sum\!\!\!\!\!\!\int\to \int_p$. The full field dependent propagator for a propagation from $\varphi_1$ to $\varphi_2$ is given by \begin{eqnarray}\label{eq:G} G_{\varphi_1\varphi_2}[\varphi](p,q)=\left(\0{1}{\Gamma_{T,k}^{(2)}[\varphi] +R_k}\right)_{\varphi_1\varphi_2}(p,q)\,. \end{eqnarray} In \eq{eq:G} we also used the regulator function in field space, $R_{k,\varphi_1\varphi_2}$ with \begin{eqnarray}\label{eq:Reg} R_{k,A_\mu^a A_\nu^b}= R_{\mu\nu}^{ab}\,, \qquad R_{k,\bar c^a c^b}=-R_{k, c^b \bar c^a}=R^{ab}\,. \end{eqnarray} The flow is finite in both the infrared and the ultraviolet, by construction. Effectively, the momentum integration in \eq{eq:funflow} only receives contributions for momenta in the vicinity of $p^2\lesssim k^2$. Consequently, it has a remarkable numerical stability. The flow solely depends on dressed vertices and propagators, leading to consistent RG-scaling on either side of \eq{eq:funflow}. Diagrammatically, the flow in \eq{eq:funflow} is depicted in Fig.~\ref{fig:funflow}. \begin{figure}[t] \includegraphics[width=.8\columnwidth]{figures/YM_flow_equation} \caption{Functional flow for the effective action. Lines with filled circles denote fully dressed field dependent propagators \eq{eq:G}. Crossed circles denote the regulator insertion $\partial_t R_k$. } \label{fig:funflow} \end{figure} The structure of the flow \eq{eq:funflow}, or Fig.~\ref{fig:funflow}, is more apparent in a formulation with the fields $\varphi$. Written in these fields the cut-off term reads $\Delta S_k[\varphi]= \s012 \varphi\cdot R_k\cdot \varphi$, where the dot indicates the contraction over species of fields including space-time or momenta, Lorentz and gauge group indices. Note that the cut-off term now includes the ghost term twice, which cancels the global factor $1/2$ and leads to \eq{eq:dSk}. In this short hand notation the flow equation takes the simple and concise form \begin{eqnarray}\label{eq:funflowstruc} \partial_t \Gamma_{k}[\varphi] &= & \frac{1}{2} {\rm Tr}\, G[\varphi](p,p) \cdot {\partial_t} R_k(p)\,, \end{eqnarray} where the trace includes a relative minus sign for the ghosts. \Eq{eq:funflowstruc} is well-suited for structural considerations. Flow equations for propagators and vertices are obtained from \eq{eq:funflow} or \eq{eq:funflowstruc} by taking derivatives with respect to $\varphi$. The computation of $\varphi$-derivatives on the rhs of \eq{eq:funflowstruc} only requires the equation for the field derivative of the propagator and the notion of the full one-particle irreducible (1PI) $n$-point functions.
The latter are defined by \begin{eqnarray}\label{eq:Gamman} \Gamma_{k}^{(n)}(p_1,...,p_n)=\0{\delta\Gamma_{k}}{\delta \varphi(p_1)\cdots\delta\varphi(p_n)}\,. \end{eqnarray} The field derivative of the propagator is given by \begin{eqnarray}\label{eq:derG} \0{\delta}{\delta \varphi} G[\varphi]= - G[\varphi]\cdot \Gamma_{k}^{(3)}[\varphi]\cdot G[\varphi]\,, \end{eqnarray} and diagrammatically depicted in Fig.~\ref{fig:derG}. \begin{figure}[t] \includegraphics[width=6.5cm]{figures/prop_deriv_eq} \caption{Field derivative of the propagator. The solid line stands for either a gluon or a ghost propagator and the vertex is depicted by a filled circle. } \label{fig:derG} \end{figure} With the above equations one readily derives the flow of 1PI $n$-point functions. Again, this can be nicely illustrated diagrammatically with the example of the propagators, which are the key objects in the approach. We take two field derivatives of the flow equation. Applying this to its diagrammatical form displayed in Fig.~\ref{fig:funflow} we are led to the flow equations given in Fig.~\ref{fig:YM_props}. The flow equations only contain one-loop diagrams in the full propagators, which is a consequence of the exact one-loop form of the flow for the effective action, see \eq{eq:funflow} and Fig.~\ref{fig:funflow}. Moreover, they only depend on the full vertices \eq{eq:Gamman}. Analogously to the flow of the two-point functions, the flows of $n$-point functions only contain one-loop diagrams that depend on full propagators and vertices. \begin{figure}[t] \includegraphics[width=\columnwidth]{figures/YM_system_large_eq} \caption{Flow equations for the ghost and gluon propagators. Vertices with filled circles denote fully dressed vertices. All propagators are fully dressed, the filled circles for the internal ones have been omitted for the sake of clarity. Crossed circles denote the regulator insertion $\partial_t R_k$.} \label{fig:YM_props} \end{figure} The propagators and vertices are purely transversal due to the transversality of the gauge field. The purely transversal correlation functions vanish upon contracting any one of the Lorentz indices with the corresponding momentum, $(p_i)_{\mu_i} \Gamma^{(n)\,T}_{\mu_1\cdots \mu_i\cdots\mu_m}=0$ for $i=1,...,m$. Note that in general $n\neq m$ due to the ghosts. Since the propagators are purely transversal in the Landau gauge, the purely transversal correlation functions in the Landau gauge form a closed system of flow equations: the flow of a purely transversal correlation function only depends on purely transversal correlation functions, which carry the whole dynamics. Any observable can be built up from the purely transversal correlation functions. In turn, the flow of correlation functions with at least one longitudinal direction, $\Gamma^{(n)\,L}$, depends on both $\Gamma^{(n)\,T}$ and $\Gamma^{(n)\,L}$. Moreover, they also obey modified Slavnov-Taylor identities, see \cite{Ellwanger:1994iz,Freire:2000mn,Pawlowski:2005xe,Gies:2006wv} and references therein. This situation is summarised in \begin{eqnarray}\nonumber \partial_t \Gamma^{(n)\,T}&=& {\rm Flow}_n^T[\{\Gamma^{(m)\,T}\}]\,, \\[1ex]\nonumber \partial_t \Gamma^{(n)\,L}&=& {\rm Flow}_n^L[\{\Gamma^{(m)\,T}\,, \Gamma^{(m)\,L}\}]\,,\\[1ex] (p)_{\mu} \Gamma^{(n)\,L}_{\mu\mu_2\cdots\mu_m}&=& {\rm mSTI}_n[\{\Gamma^{(m)\,T}, \Gamma^{(m)\,L}\}]\,, \label{eq:purelyTflow+Lflow}\end{eqnarray} where $m\leq n+2$. The modified Slavnov-Taylor identities converge to the standard ones for vanishing cut-off scale $k=0$.
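The one-loop structure of such propagator flows can be made explicit in a simple toy model: for a scalar $\varphi^4$-theory in $d=4$ with a constant quartic coupling $\lambda$, the analogue of the propagator flows of Fig.~\ref{fig:YM_props} reduces to the tadpole equation $\partial_t m^2_k = -\s012\,\lambda \int_q \partial_t R(q)/(q^2+R(q)+m_k^2)^2$, which the following sketch integrates numerically from the ultraviolet to the infrared. This is only an illustration of a one-loop flow in the dressed propagator, not of the Yang-Mills system itself; the shape function and all parameter values are toy choices.
\begin{verbatim}
import numpy as np

def R(q2, k2):                      # exponential shape function
    x = np.minimum(q2 / k2, 600.0)
    return q2 / np.expm1(x)

def Rdot(q2, k2):                   # dR/d ln k, numerically stable sinh form
    x = np.minimum(q2 / k2, 600.0)
    return x * q2 / (2.0 * np.sinh(0.5 * x) ** 2)

def mass_flow(m2, k, lam):
    q = np.linspace(1e-3, 6.0 * k, 2000)   # integrand ~ 0 beyond q ~ k
    q2 = q * q
    f = q2 * q * Rdot(q2, k * k) / (q2 + R(q2, k * k) + m2) ** 2
    return -(lam / 2.0) / (8.0 * np.pi ** 2) * np.sum(f) * (q[1] - q[0])

lam, m2, k, dt = 0.5, 0.1, 10.0, -1e-3     # integrate t = ln k downwards
while k > 1e-2:
    m2 += dt * mass_flow(m2, k, lam)
    k *= np.exp(dt)
print("m_k^2 at k -> 0:", m2)
\end{verbatim}
At each step the loop involves the dressed, i.e.\ $m_k^2$-dependent, propagator, in direct analogy to the full propagators appearing in Fig.~\ref{fig:YM_props}.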
The system \eq{eq:purelyTflow+Lflow} comprises the full information about the correlation functions in the Landau gauge. Interestingly, the hierarchy of flow equations for the purely transversal correlation functions can be solved independently and carries the full dynamics of Yang-Mills theory. In this context we remark that vertex constructions aided by Slavnov-Taylor identities implicitly utilise an assumed uniformity of correlation functions, that is \begin{eqnarray}\label{eq:differentiable} \partial_{p_\mu} \Gamma^{ (n) } < \infty \quad {\rm for\ all\ } (p_1,...,p_n)\,. \end{eqnarray} This works well in perturbation theory but has to be taken with a grain of salt in the non-perturbative regime. It is also worth emphasising that \eq{eq:purelyTflow+Lflow} does not depend on how the Landau gauge is introduced. The Landau gauge can also be represented as the limit of covariant gauges with the gauge action $1/(2 \xi) \int_x (\partial_\mu A^a_\mu)^2$ and gauge fixing parameter $\xi$. Here $\xi=0$ signals the Landau gauge. In this case in general one also introduces a regularisation for the gauge mode, schematically given by \begin{equation} R^{\rm L} = \lim_{\xi\to 0}\0{1}{\xi} \Pi^L\, p^2 r(p^2/k^2)\,, \end{equation} with the projection operators \begin{eqnarray} \Pi^T_{\mu \nu}(p)&=&\delta_{\mu \nu}-p_{\mu}p_{\nu}/p^2\,,\nonumber\\ \Pi^L_{\mu \nu}(p)&=&p_{\mu}p_{\nu}/p^2 \,. \label{eq:projections0} \end{eqnarray} onto transversal and longitudinal degrees of freedom, respectively. Still, the longitudinal mode does not play any r$\hat{\rm o}$le for the flow of correlation functions as $G \cdot \partial_t R^{\rm L}\cdot G\to 0$ for $\xi\to 0$ and $\lim_{\xi\to 0} G$ is purely transversal. Note however, that $\s012\,{\rm Tr}\, G\cdot\partial_t R^{\rm L}$ does not vanish. Upon $t$-integration it gives the thermal pressure for the gauge mode, namely the Stefan-Boltzmann pressure of $N_c^2-1$ fields, see \cite{Litim:2006ag}. \section{Locality of RG-flows}\label{sec:local} The loops on the rhs of Fig.~\ref{fig:YM_props} only receive contributions from momentum fluctuations with $p^2 \lesssim k^2$ due to the regulator insertion $\partial_t R$, see Fig.~\ref{fig:R}. However, the external momenta are not limited by such a constraint. Indeed, for large external momenta, $p^2/k^2 \gg 1$, the flow factorises at leading order: the tadpole diagram with the four-gluon vertex tends to a constant. The other tadpole diagrams vanish with powers $\lesssim k^2/p^2$. The related four-point functions are not present on the classical level and hence decay at least with $k^2/p^2$. This intuitive statement can be proven easily with the help of the respective flows and the factorisation present there. Here, we explain the factorisation with the relevant example of the three-point function diagrams, see also Fig.~\ref{fig:factorisation} for the example of the flow of the ghost propagator. These diagrams factorise for large external momenta, one factor being the uncut internal line evaluated at $p^2$. For $p^2/k^2 \to\infty$ we have, \begin{eqnarray}\nonumber && \hspace{-.7cm}{\rm Tr}\,\left[ \left(G \dot R G\right)(q)\cdot \Gamma^{(3)}(q,p+q) \cdot G(p+q) \cdot \Gamma^{(3)}(q+p,q)\right] \\\nonumber & \to& {\rm Tr}\,\left[\left(G \dot R G\right)(q)\cdot \Gamma^{(3)}(0,p) \cdot G(p) \cdot \Gamma^{(3)}(p,0)\right]\\ && +\,{\rm higher\ order\ terms} \,, \label{eq:factorisation}\end{eqnarray} where the trace also integrates over loop momenta $q$.
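The factorisation can also be checked numerically in a simple toy setting: with a cut-off-peaked insertion $s(q)\sim (G\dot R G)(q)$, vertices set to unity and a one-dimensional momentum integral, the ratio of $\int dq\, s(q)\, g(p+q)$ to the factorised form $g(p)\int dq\, s(q)$ approaches one for $p^2/k^2\to\infty$. All functions below are our toy choices for illustration.
\begin{verbatim}
import numpy as np
k, m = 1.0, 0.2
q = np.linspace(-30.0, 30.0, 6001)
dq = q[1] - q[0]
g = lambda x: 1.0 / (x * x + m * m)        # toy dressed propagator
s = g(q) ** 2 * np.exp(-q * q / k ** 2)    # ~ G Rdot G, peaked at |q| < k
for p in (2.0, 5.0, 20.0):
    I    = np.sum(s * g(p + q)) * dq       # unfactorised one-loop expression
    fact = g(p) * np.sum(s) * dq           # factorised form
    print(p, I / fact)                     # ratio -> 1 as p grows
\end{verbatim}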
\begin{figure}[t] \includegraphics[width=\columnwidth]{figures/factorisation_large_eq} \caption{Factorisation at leading order for large momenta for the first diagram (cut gluon line) in the flow of the (inverse) ghost propagator in Fig.~\ref{fig:YM_props}. The triangle stands for the product of the two vertices at $q=0$ and reads $-p_\mu p_{\mu'} f^{a cd} f^{b c' d'}$. } \label{fig:factorisation} \end{figure} In \eq{eq:factorisation} we have assumed that all momentum components are suppressed by the regulator: $R$ is a function of $p^2$ and not e.g. of solely the spatial momentum squared, $\vec p^{\,2}$. Note also that for kinematic and/or symmetry reasons the leading order in line two of \eq{eq:factorisation} may vanish. This even supports the factorisation. The interchange of integration and limit, and hence the factorisation, works as the diagram is still finite with the uncut line being removed. The general factorisation in \eq{eq:factorisation} can be nicely illustrated diagrammatically with the example of the flow of the ghost propagator, Fig.~\ref{fig:YM_props}. In Fig.~\ref{fig:factorisation} we display the factorisation of the first diagram in Fig.~\ref{fig:YM_props}. The terms in the second line of \eq{eq:factorisation} or on the rhs of Fig.~\ref{fig:factorisation} are already subleading as the flow usually is peaked at momentum scales $p^2\lesssim k^2$. However, they are already quantitatively relevant at vanishing temperature and turn out to be crucial for the correct thermodynamics, in particular for the slow approach to the Stefan-Boltzmann limit at large temperatures. Potentially, they also play an important r$\hat{\rm o}$le for the thermodynamics in non-relativistic systems, where they supposedly relate to the Tan-relations \cite{Tan:2008-1,Tan:2008-2,Tan:2008-3} in the context of many-body physics; for FRG-reviews see e.g.\ \cite{Diehl:2009ma,Scherer:2010sv}. Still, we have to reconcile the above polynomial decay in the external momenta $p$ with the well-known exponential decay of thermal fluctuations with the standard suppression factor $\exp(-m/T)$ in the presence of a mass scale $m$. In the present case, this mass scale can be either the cut-off scale $k$ or the physical mass scale of Yang-Mills theory, $\Lambda_{\rm QCD}$, which is directly linked to the critical temperature $T_c$ of the confinement-deconfinement phase transition. The exponential thermal damping factor originates from the full Matsubara sum, and is strictly not present if only the lowest Matsubara frequencies are taken into account. A four-dimensional regulator depending on the four-dimensional loop momentum $q^2=(2 \pi T n)^2+\vec q^2$ cuts the Matsubara sum, and hence only leads to a polynomial decay of the flow. The exponential suppression is then built up successively with the flow. The above properties can already be seen and understood very clearly in the example of perturbative one-loop flows. The full Matsubara sum is reintroduced for regulators only depending on $\vec p^{\,2}$. They are frequently used in finite temperature applications of the FRG as they allow for an analytic summation of the Matsubara sums if only the trivial frequency dependence is taken into account, see e.g.\ \cite{Litim:1998nf,Litim:2006ag,Blaizot:2006rj,Braun:2009si}. However, in the present work we consider full propagators and vertices, that is with non-trivial frequency and momentum dependence. A small numerical illustration of the two suppression patterns is given below.
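The following sketch contrasts the two behaviours for a single mode of energy $E$: the thermal part of the full Matsubara sum $T\sum_n 1/(\omega_n^2+E^2)$ equals $n_B(E)/E$ with the Bose-Einstein distribution $n_B$, i.e. it is exponentially suppressed in $E/T$, whereas the thermal zero mode alone only gives the polynomial $T/E^2$. The truncation of the sum is a numerical choice.
\begin{verbatim}
import numpy as np
T = 1.0
n = np.arange(-10 ** 6, 10 ** 6 + 1)       # Matsubara labels, w_n = 2 pi T n
def thermal_part(E):
    full = T * np.sum(1.0 / ((2.0 * np.pi * T * n) ** 2 + E * E))
    return full - 1.0 / (2.0 * E)          # subtract the T = 0 integral
for E in (1.0, 5.0, 10.0):
    nB_over_E = 1.0 / (E * np.expm1(E / T))   # exact thermal part
    print(E, thermal_part(E), nB_over_E, T / E ** 2)
\end{verbatim}
The first two columns agree and fall off exponentially in $E/T$, while the zero-mode estimate in the last column only decays polynomially.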
Moreover, since lower-dimensional regulators introduce additional momentum or frequency transfer in the flow, we refrain from using them here. Evidently, the large momentum contributions also weaken the locality of the flow present in the loop momenta as they induce a momentum transfer: the flows at a given cut-off scale $k$ carry physics information about larger momentum scales. In turn this entails that any local approximation does not fully cover this momentum transfer. Now we utilise the freedom of reparameterising the effective action as well as the choice of the regulator in order to restore the locality of the flow for the two-point functions $\Gamma_{k}^{(2)}$. This minimises the systematic error of a given approximation, see \cite{Pawlowski:2005xe}. To that end, we rewrite the regulator term in \eq{eq:dSk} as follows, \begin{eqnarray}\label{eq:dSkhat} \Delta S_k= \012 \int_p \, A_{k,a}^\mu\, \hat R_{\mu\nu}^{ab} \, A_{k,b}^\nu +\int_p \, \bar c_{k,a}\, \hat R^{ab}\, c_{k,b}\,. \end{eqnarray} The fields $\phi=(A_k, c_k,\bar c_k)$ in \eq{eq:dSkhat} relate to the cut-off independent fields $\varphi=(A,c,\bar c)$ in the classical action via a cut-off and momentum dependent rescaling, \begin{eqnarray}\label{eq:rescaling} \phi(p)= \hat Z^{1/2}_{\phi,k}(p)\varphi(p)\,\quad {\rm with}\quad \partial_t \phi(p)=\hat\gamma_\phi(p)\, \phi(p)\,, \end{eqnarray} where the derivative is taken at fixed $\varphi$. We will use the natural definition $\hat\gamma_{\bar c}=\hat\gamma_c$. \Eq{eq:rescaling} implies \begin{eqnarray}\label{eq:gamma+eta} \hat\gamma_\phi(p)=\partial_t \log \hat Z_\phi(p)\,, \quad {\rm and}\quad R=\hat Z \cdot \hat R\,. \end{eqnarray} The field reparametrisation in \eq{eq:rescaling} does not change the effective action; in particular the regulator term $\Delta S_k$ does not change. It simply amounts to rewriting the effective action in terms of the new fields, \begin{eqnarray}\label{eq:hatGamma} \hat\Gamma_{k}[\phi]=\Gamma_k[\varphi]\,. \end{eqnarray} Then, $\phi$-derivatives $\hat\Gamma^{(n)}_k$ of the effective action $\Gamma_k=\hat\Gamma_k$ are given by \eq{eq:Gamman} with $\varphi\to \phi$, \begin{eqnarray}\label{eq:hatGamman} \hat\Gamma_{k}^{(n)}(p_1,...,p_n)=\0{\delta\hat\Gamma_{k}}{\delta \phi(p_1)\cdots\delta\phi(p_n)}\,. \end{eqnarray} As the fields $\phi$ and $\varphi$ only differ by a momentum dependent rescaling with $\hat Z^{1/2}$, the correlation functions are related by a simple rescaling with powers of $\hat Z^{1/2}$, \begin{eqnarray} \Gamma^{(n)}(p_1,...,p_n) &=& \prod_{i=1}^{n} \hat Z^{1/2}_{\phi_i}(p_i) \, \hat\Gamma^{(n)}(p_1,...,p_n) \,. \label{eq:hatunhat}\end{eqnarray} The rescaling with $\hat Z$ is at our disposal, and we shall use it to minimise the momentum transfer in the flow equation of the two-point function by eliminating the subleading terms in the flow exemplified in \eq{eq:factorisation}. Note that this does not remove the related contributions; the momentum transfer is still present but does not feed back directly into the flow, see \cite{Pawlowski:2005xe}. This is also elucidated below. The flow equation for the effective action now receives further contributions from the $k$-dependence of the fields $\phi$.
We are finally led to the following flow for $\hat\Gamma_{T,k}[\phi]=\Gamma_{k}[\varphi]$, see \cite{Pawlowski:2005xe}, \begin{eqnarray}\nonumber &&\hspace{-1.7cm}\left({\partial_t}+\sum_i \int_p \hat\gamma_{\phi_i}(p) \phi_i(p) \0{\delta}{\delta\phi_i(p)}\right) \hat\Gamma_{T,k}[\phi] = \\ \nonumber && \frac{1}{2} \int_p \ \hat G^{ab}_{\mu\nu}[\phi](p,p) \, ({\partial_t}+2 \hat\gamma_A(p)) \hat R_{\nu\mu}^{ba}(p)\\ &&- \int_p \hat G^{ab}[\phi](p,p) \, ({\partial_t}+2 \hat\gamma_c(p)) \hat R^{ba}(p)\,, \label{eq:hatfunflow}\end{eqnarray} where $\hat G[\phi](p,q)= \bigl(\hat\Gamma_{k}^{(2)}[\phi]+\hat R \bigr)^{-1}(p,q) $ denotes the full regularised propagator for the propagation of $\phi$, see \eq{eq:G} with $\varphi\to \phi$ and $\Gamma^{(2)}_k \to \hat \Gamma^{(2)}_k$. The functional flow in \eq{eq:hatfunflow} looks rather complicated, but it is simply a reparametrisation of the standard flow in \eq{eq:funflow}, or \eq{eq:funflowstruc}. In the condensed notation introduced in Section~\ref{sec:funflow} this is more apparent; the flow equation \eq{eq:hatfunflow} then reads \begin{equation}\label{eq:hatfunflowstruc} \left({\partial_t}+\phi\cdot\hat\gamma_{\phi}\cdot \0{\delta}{\delta\phi}\right) \hat\Gamma_{T,k}[\phi]= \frac{1}{2} {\rm Tr} \, \hat G[\phi] \cdot \, ({\partial_t}+2 \hat\gamma_\phi)\cdot \hat R_k\,. \end{equation} \Eq{eq:hatfunflowstruc} illustrates that we have only reparametrised the fields in a scale-dependent way. \vspace{.06em} Taking two derivatives with respect to $\phi_1(p)$ and $\phi_2(q)$ of \eq{eq:hatfunflow} at vanishing ghost fields and constant gauge field we schematically get the flow \begin{equation} \left(\partial_t+ \hat \eta_{\phi_1}(p)\,\right) \hat\Gamma_{\phi_1\phi_2}^{(2)}(p) = {\rm Flow}^{(2)}_{\phi_1\phi_2}(p)\, , \label{eq:2pointflow}\end{equation} where the rhs of \eq{eq:2pointflow} stands for the $\phi_1(p)$ and $\phi_2(q)$ derivative of the rhs of \eq{eq:hatfunflow}, \begin{equation}\label{eq:defofflow} {\rm Flow}^{(2)}_{\phi_1\phi_2} = \0{\delta^2}{\delta\phi_1\delta\phi_2} \left( \frac{1}{2} {\rm Tr} \, \hat G[\phi] \cdot \, ({\partial_t}+\hat\eta_\phi)\cdot \hat R_k\right)\,, \end{equation} and \begin{eqnarray}\label{eq:hateta} \hat\eta_\phi(p)=2 \hat\gamma_\phi(p)=\partial_t \log \hat Z_k(p)\,, \end{eqnarray} is the `anomalous' dimension of the propagator related to the rescaling of the fields with $\hat Z$. In \eq{eq:2pointflow} we have used that for vanishing ghosts the two-point functions are diagonal/symplectic in field space. The only non-vanishing components are $\hat\Gamma_{k,AA}^{(2)}$ and $\hat\Gamma_{k,c\bar c}^{(2)}= -\hat\Gamma_{k,\bar c c}^{(2)}$. The two-point functions for constant gauge fields are also diagonal in momentum space, that is \begin{eqnarray}\nonumber \hat\Gamma_{k}^{(2)}(p,q) &=& \hat\Gamma_{k}^{(2)}(p)(2 \pi)^4 \delta(p-q)\,,\\ {\rm Flow}^{(2)}(p,q) &=& {\rm Flow}^{(2)}(p) (2 \pi)^4\delta(p-q)\,, \label{eq:diagonal}\end{eqnarray} and the relation \eq{eq:hatunhat} for the two-point functions reads \begin{equation} \label{eq:paraGform} \Gamma^{(2)}(p)=\hat Z(p)\,\hat \Gamma^{(2)}(p)\,. \end{equation} Now we can come back to the question of locality of the flow. Our aim is to remove the large momentum tail displayed in \eq{eq:factorisation} from the flow in order to keep its locality in momentum space, i.e. to minimise the momentum transfer.
This is achieved by demanding \begin{eqnarray} \label{eq:locality} \left.\partial_t \hat\Gamma_{k}^{(2)}(p)\right|_{p^2>(\lambda k)^2}\equiv 0 \,, \end{eqnarray} which implies \begin{eqnarray} \hat\eta_\phi(p)&=& \0{{\rm Flow}^{(2)}(p)}{\hat\Gamma_{k}^{(2)}(p)} \theta(p^2 -(\lambda k)^2)\,. \label{eq:hatetaspecific} \end{eqnarray} \Eq{eq:locality} entails that the momentum transfer is switched off for momenta larger than the cut-off scale. The factor $\lambda$ controls this scale and can be used for an error estimate. After having computed the localised correlation functions $\hat\Gamma^{(n)}$, we can derive the correlation functions $\Gamma^{(n)}$ via rescaling with powers of $\hat Z$, see \eq{eq:hatunhat}, \eq{eq:paraGform}. The scaling factor is computed by integrating $\hat \eta_k$ defined in \eq{eq:hateta}, \begin{eqnarray} \hat{Z}_k(p;T) &=& \hat{Z}_{k=0}(p;T) \ {\rm exp}\!\left\{ \int_{0}^{k}\frac{dk'}{k'} \ \hat{\eta}_{k'} (p;T)\right\}\,. \label{eq:hatZint}\end{eqnarray} This determines $\hat Z_k$ up to a $k$-independent function, and we use \begin{eqnarray}\label{eq:hatZ0} \hat Z_{\phi,k=0}(p;T=0)=1\,. \end{eqnarray} For the choice \eq{eq:hatZ0} the two sets of correlation functions agree in the vacuum at $k=0$. As the flow of general correlation functions can be written down solely in terms of $\hat \Gamma^{(n)}$, the relation \eq{eq:locality} with \eq{eq:hatetaspecific} eliminates the momentum transfer (in $\Gamma^{(2)}$) from the flow. Note, however, that a remnant of it is still present via the factor $2 \hat\gamma_\phi=\hat\eta_\phi$ on the rhs of the flow \eq{eq:hatfunflowstruc}. For regulators that decay sufficiently fast for momenta $p^2 \gg k^2$ this is quantitatively negligible. Indeed, for regulators which vanish identically for momenta bigger than $\lambda k$ the momentum transfer is now described solely by $\hat \eta_\phi(p)$ and decouples completely. We also remark in this context that the above construction and the definition \eq{eq:hatZ0} leading to \eq{eq:locality} with $\lambda=1$ can be deduced by invoking functional optimisation for momentum-dependent approximations, see \cite{Pawlowski:2005xe}. The above heuristic arguments entail that optimisation restores the locality of the flow also in general momentum- and frequency-dependent approximations. \section{Thermal fluctuations}\label{sec:thermalflow} The flow equation \eq{eq:funflow} includes both quantum and thermal fluctuations. For the present purpose we are interested in the thermal fluctuations; for reviews on thermal FRG see e.g.\ \cite{Litim:1998yn,Litim:1998nf,Blaizot:2009iy}. Thermal fluctuations are encoded in the difference of the flows at finite and at vanishing temperature, see e.g.\ \cite{Litim:2006ag}, \begin{equation}\label{eq:thermalflucs} \partial_t \Delta\Gamma_{T,k}[\phi]=\left.\012 {\rm Tr}\, G[\phi] \cdot \partial_t R\right|_{T}- \left.\012 {\rm Tr}\, G[\phi]\cdot \partial_t R\right|_{T=0}\,, \end{equation} where \begin{eqnarray}\label{eq:DeltaG} \Delta\Gamma_{T,k}[\phi]=\left.\Gamma_{k}[\phi]\right|_{T} -\left.\Gamma_{k}[\phi]\right|_{T=0} \end{eqnarray} accounts for the difference between the effective action at finite and at vanishing temperature. Due to the thermal exponential suppression the flow \eq{eq:thermalflucs} should have locality properties with respect to the scale $k=T$. As discussed in the previous section, locality is important for the quantitative reliability of a given approximation.
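The exponential suppression invoked here is that of the thermal distribution functions: a bosonic mode of energy $\epsilon$ enters thermal loop corrections weighted by $n_B(\epsilon/T)=1/(e^{\epsilon/T}-1)$, which is exponentially small for $\epsilon\gg T$. The following minimal Python sketch (a standalone illustration with made-up numbers, not part of the actual computation) makes this quantitative:
\begin{verbatim}
# Thermal weight of a bosonic mode of energy eps at temperature T.
# For eps >> T the weight is exponentially suppressed -- the mechanism
# behind the locality of the thermal flow at the scale k ~ T.
import math

def bose(eps, T):
    # Bose-Einstein distribution n_B = 1/(exp(eps/T) - 1)
    return 1.0 / math.expm1(eps / T)

T = 0.15  # temperature in GeV (illustrative)
for eps in [0.15, 0.5, 1.0, 2.0]:  # mode energies ~ cut-off scale k, in GeV
    print(f"eps = {eps:.2f} GeV: n_B = {bose(eps, T):.3e}, "
          f"exp(-eps/T) = {math.exp(-eps / T):.3e}")
\end{verbatim}
For $\epsilon=2\,$GeV and $T=150\,$MeV the weight is of order $10^{-6}$, illustrating why thermal modifications decouple for $k\gg T$.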
In the present work we implement this idea as follows: We use the vacuum physics at vanishing temperature as input for the flow. A given set of correlation functions $\Gamma^{(n)}_{k=0}$ at $T=0$ and $k=0$ can be integrated with the flow \eq{eq:funflow} at vanishing temperature in a given approximation up to a large momentum scale $k=\Lambda$, \begin{equation}\label{eq:vaccorr} \left.\Gamma^{(n)}_{k=0}(p_1,...,p_n)\right|_{T=0}\stackrel{\rm flow}{\longrightarrow} \left. \Gamma^{(n)}_{k=\Lambda}(p_1,...,p_n)\right|_{T=0}\,. \end{equation} In the approximation at hand, \eq{eq:vaccorr} defines the initial conditions $\left.\Gamma^{(n)}_{k=\Lambda}\right|_{T=0}$, which give the correct vacuum correlation functions if integrated to $k=0$. The ultraviolet scale $\Lambda$ is chosen such that all thermal fluctuations are suppressed given the maximal temperature to be considered, $T_{\rm max}$. This implies \begin{equation}\label{eq:UVscale} \0{T_{\rm max}}{\Lambda}\ll 1\,. \end{equation} Then, switching on $T\leq T_{\rm max}$ does not change the initial conditions at leading order, i.e.\ \begin{equation}\label{eq:initialcond} \0{1}{\Lambda^{d_n}}\Delta\Gamma^{(n)}_{T,\Lambda}(p_1,...,p_n)= 0+O\left(\0{T}{\Lambda}\right)\,, \end{equation} where $d_n$ is the canonical dimension of $\Gamma^{(n)}$, and all momenta are of order $\Lambda$ or bigger, $p_i^2 \gtrsim \Lambda^2$, see e.g.\ \cite{Blaizot:2010ut} for scalar theories. This suggests that we can flow $\Delta\Gamma_{T,k}$ from the trivial initial condition \eq{eq:initialcond} to vanishing cut-off $k=0$. Note also that within such an approach to thermal fluctuations, it is only the difference $\partial_t \Delta\Gamma_{T,k}$ which is sensitive to the approximation at hand. We illustrate the procedure outlined above in a heuristic plot, see Fig.~\ref{fig:thermalflow}. \begin{figure}[t] \includegraphics[width=.8\columnwidth]{figures/thermalflow} \caption{Flow at vanishing temperature from $k=0$ to $k=\Lambda$, and flow at $T\neq 0$ from $k=\Lambda$ to $k=0$ with $\Lambda/T\gg 1$. The flow is described in theory space, and the axes label (orthogonal) couplings/observables ${\mathcal O}_i$ which serve as expansion coefficients of the effective action, e.g. ${\mathcal O}_1=\Gamma^{(2)}(p=0)$. The flows start to deviate at $k\approx T$, see the discussion below.} \label{fig:thermalflow} \end{figure} In this context it is also worth discussing how rapidly $\Delta\Gamma^{(n)}_{T,\Lambda}\to 0$ for large cut-off scales $k=\Lambda$. This is linked to the question of locality raised in the previous Section~\ref{sec:local}. In Fig.~\ref{fig:thermalflow} it is indicated that $\Delta\Gamma^{(n)}_{T,k}$ start to deviate significantly from zero at the thermal scale $k\approx 2\pi T$. Above and in the previous section we have argued that we only have a polynomial suppression for the flow $\Delta\dot\Gamma^{(n)}_{T,k}$. Note also that the longer $\Delta\Gamma^{(n)}_{T,k}$ survives at large scales $k\to\Lambda$, the more sensitive it will be to the approximation at hand. Eventually, the polynomial contributions to the flow integrate up to the standard thermal exponential suppression, which relates to the thermal distribution functions as discussed in Section~\ref{sec:local}. We conclude that we have a polynomial decay in the flow, which indeed plays a r\^ole for computing quantitatively reliable thermodynamic quantities.
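The bookkeeping of the two-stage procedure described above is simple. The following standalone Python sketch, with a purely hypothetical toy right-hand side standing in for the actual coupled flows, only illustrates the order of operations in \eq{eq:vaccorr} and Fig.~\ref{fig:thermalflow}:
\begin{verbatim}
import numpy as np

def flow_rhs(gamma, k, T):
    # Hypothetical toy right-hand side of d gamma / d log k; the real
    # flows are the coupled equations for the Gamma^{(n)}.
    return -gamma * k**2 / (k**2 + (2 * np.pi * T)**2 + 1.0)

def integrate(gamma, k_start, k_end, T, n_steps=2000):
    # Explicit Euler steps in t = log k, from k_start to k_end.
    ks = np.geomspace(k_start, k_end, n_steps)
    for k0, k1 in zip(ks[:-1], ks[1:]):
        gamma += flow_rhs(gamma, k0, T) * (np.log(k1) - np.log(k0))
    return gamma

Lam, T, k_ir = 10.0, 0.15, 1e-4  # UV scale, temperature, IR scale; T/Lam << 1
gamma_vac = 1.0                  # vacuum input at k ~ 0, T = 0
gamma_Lam = integrate(gamma_vac, k_ir, Lam, T=0.0)  # upward vacuum flow
gamma_T = integrate(gamma_Lam, Lam, k_ir, T=T)      # downward thermal flow
\end{verbatim}
The initial condition of the downward thermal flow is the end point of the upward vacuum flow, in accordance with \eq{eq:initialcond}.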
Additionally, we can utilise the ideas about locality to preserve as much as possible of the exponential decay, which stabilises any approximation scheme. We turn to the localised version of the flow derived in the previous Section~\ref{sec:local}. As the flow of $\hat\Gamma_{k}^{(2)}$ vanishes identically for momenta larger than $\lambda k$ for all temperatures $T$ we arrive at \begin{equation}\label{eq:initialcondexp} \Delta\hat\Gamma^{(2)}_{T,\Lambda}(p)=0+O(e^{-\Lambda/T})\,, \end{equation} where we have neglected the backreaction of the polynomial decay of the thermal corrections of higher correlation functions. The two-point function is the most important quantity when it comes to the computation of thermodynamic observables. It is exactly here where the locality-preserving flow pays off in quantitative reliability. Finally, we are interested in the correlation functions $\Gamma^{(n)}$ at $k=0$, which have to be computed from the $\hat\Gamma^{(n)}$ via the rescaling with powers of $\hat Z_{k=0}$. At vanishing temperature we have used the natural normalisation $\hat Z_{k=0}=1$, see \eq{eq:hatZint} and \eq{eq:hatZ0}, and the two sets of correlation functions agreed at $k=0$. At finite temperature we initiate the flow at $k=\Lambda$ with $T/\Lambda\ll 1$. Here we have \eq{eq:initialcondexp} for the difference of the localised correlation functions, and we use the natural definition \begin{eqnarray}\label{eq:initialhatZT} \hat{Z}_{\Lambda}(p;T):= \hat{Z}_{\Lambda}(p;T=0)\,. \end{eqnarray} \Eq{eq:initialhatZT} implies that \eq{eq:initialcondexp} also applies to $\Delta\Gamma_{T,\Lambda}^{(2)}$: at the UV scale we have exponentially suppressed thermal fluctuations for the two-point function. As we expect a polynomial suppression due to the arguments in Section~\ref{sec:local}, the use of the computationally simple initial condition amounts to a temperature-dependent renormalisation of $\Gamma_k^{(n)}(T)$ at non-vanishing cut-off scale $k$. For the same reason, \eq{eq:initialhatZT} also leads to $\hat Z_{k=0}(p;T)\neq 1$, and we have to compute it from the flow of $\hat \eta_k$. We have \begin{eqnarray}\nonumber \hspace{-.3cm}\hat{Z}_{k}(p;T) &=& \hat{Z}_{\Lambda}(p;0)\, \exp \left\{ \int_{\Lambda}^{k}\frac{dk'}{k'}\,\hat{\eta}_{k'} (p;T)\right\}\\[1ex] &=& e^{\int_0^k \frac{dk'}{k'}\, \hat{\eta}_{k'} (p;0)}\, e^{ \int_{\Lambda}^{k}\frac{dk'}{k'}\,\left(\hat{\eta}_{k'} (p;T)-\hat{\eta}_{k'} (p;0)\right)}\,, \label{eq:hatZT} \end{eqnarray} where we have used \eq{eq:hatZint} in the second line of \eq{eq:hatZT}. The relation \eq{eq:hatZT} entails that the rescaling factor $\hat Z_{k=0}(p;T)$ contains the thermal part of the momentum transfer. \section{Approximation}\label{sec:approximation} In this section we discuss the approximation scheme used to capture the thermal fluctuations, which allows for a quantitative computation of the temperature dependence of the propagators. In general, the structure of the flow of the effective action, see Figs.~\ref{fig:funflow} and \ref{fig:derG}, entails that flows of $n$-point functions depend explicitly on $(n\!+\!2)$-point functions. The relevant example for the present work is the flow of the (inverse) Yang-Mills propagators $\Gamma_{k}^{(2)}$, displayed in Fig.~\ref{fig:YM_props}. These flows depend on $\Gamma_{k}^{(n)}$ with $n \leq 4$. In other words, the functional flow in \eq{eq:funflow}, if broken up into the flows of $n$-point functions, constitutes an infinite hierarchy of coupled integro-differential equations.
For computational purposes the system must be closed: the set of potentially contributing operators must be rendered finite, such that the relevant physics is kept in the approximation. Moreover, the approximation should be subject to self-consistency checks that give access to the systematic error. First, let us describe the approximation put forward here for the flow of the propagators in Fig.~\ref{fig:YM_props}, before we write down the explicit parameterisation/approximation in terms of $\Gamma_{k}^{(n)}$ with $n\leq 4$: we keep the full propagators and work with self-consistent approximations to the vertices which respect their renormalisation group properties. We also use the flow equation for the ghost-gluon vertex, evaluated at the symmetric point with $p_i^2=k^2$, $i=1,2,3$, for the momenta of ghost, anti-ghost and gluon, respectively. We have checked the reliability of the RG-consistent ansatz for the three-gluon vertex with its flow at the symmetric point $p^2=k^2$. The reliability of the RG-consistent four-gluon vertex is checked against its DSE computed in \cite{Kellermann:2008iw}. \subsection{Two-point functions and their flows} Now we proceed with the parameterisation of our approximation. We concentrate on the localised two-point functions $\hat\Gamma^{(2)}$; those of the cut-off independent fields are then obtained with \eq{eq:hatunhat}, \eq{eq:hatZint} and \eq{eq:hatZT}. At vanishing temperature the propagators are described by one wave-function renormalisation each, $Z_{A,k}(p)$ and $Z_{c,k}(p)$. At non-vanishing temperature we have to distinguish chromoelectric and chromomagnetic modes, with the respective projection operators $P^L$ and $P^T$, \begin{eqnarray} P^{T}_{\mu \nu}(p_0, \vec{p}) &=& \left(1-\delta_{\mu 0} \right) \left(1-\delta_{\nu 0} \right) \left( \delta_{\mu \nu}- p_{\mu}p_{\nu}/\vec{p}^{\ 2} \right), \nonumber\\ P^{L}_{\mu \nu}(p_0, \vec{p}) &=& \Pi^T_{\mu \nu}(p)-P^{T}_{\mu \nu}(p_0, \vec{p})\,, \label{eq:projections} \end{eqnarray} where $\Pi^T_{\mu\nu}$ is the four-dimensional transversal projection operator, see \eq{eq:projections0}. Thus we parameterise the gluon with two wave-function renormalisations $Z_{L/T}$. The ghost has only a scalar structure at vanishing and finite temperature. The parameterisation of the gluons and ghost is given by \begin{eqnarray} \hat\Gamma_{A,L}^{(2)}(p_0,\vec{p}) &=& Z_{L}(p_0, \vec{p})\, p^2\,P^{L}(p_0, \vec{p})\,, \nonumber\\ \hat \Gamma_{A,T}^{(2)}(p_0,\vec{p}) &=& Z_{T}(p_0, \vec{p})\, p^2\,P^{T}(p_0, \vec{p})\,, \nonumber \\ \hat \Gamma_{c}^{(2)}(p_0,\vec{p})&= & Z_c(p_0, \vec{p})\,p^2\,, \label{eq:parahatG}\end{eqnarray} where the identity in colour space is suppressed and the $Z$'s are functions of $p_0$ and $\vec p$ separately. The parameterisation of $\Gamma^{(2)}$ follows from that of $\hat\Gamma^{(2)}$ in \eq{eq:parahatG}, and is read off from the definition of $\phi(\varphi)$ in \eq{eq:rescaling} and \eq{eq:paraGform}, \begin{equation} \label{eq:paraG} \Gamma_{A,L/T}^{(2)}\simeq \hat Z_{L/T}(p)\, Z_{L/T}(p)\,p^2\,,\quad \Gamma_{c}^{(2)}\simeq \hat Z_{c}(p)\, Z_{c}(p)\,p^2\,. \end{equation} The flow equations for the two-point functions have been given diagrammatically in Fig.~\ref{fig:YM_props}. Their right hand sides depend on the two-point functions, as well as on three- and four-point functions. In particular, we have tadpole diagrams which depend on the ghost-ghost and ghost-gluon scattering vertices $\Gamma^{(4)}_{\bar c c\bar c c}$ and $\Gamma^{(4)}_{\bar c A^2 c}$, respectively.
These vertices vanish classically, and in a first approximation one is tempted to drop the related diagrams. However, they can be considered in a rather simple way, which we use for the flow of the (inverse) ghost propagator: we insert the DSE-relations for $\Gamma^{(4)}_{\bar c c\bar c c}$ and $\Gamma^{(4)}_{\bar c A^2 c}$ in the related diagrams. This provides a DSE-resummation of the vertices in a given approximation to the flow. After some straightforward but tedious algebraic computations, it can be shown that this turns the flow equation for the ghost in Fig.~\ref{fig:YM_props} into the total $t$-derivative of the DSE for the ghost, see Fig.~\ref{fig:deriv_ghDSE}. This is nothing but the statement that a flow equation for a correlation function can be seen as the differential form of the corresponding DSE in the presence of the regulator term; both describe the same correlation function. The ghost-DSE and its $t$-derivative are illustrated in Fig.~\ref{fig:deriv_ghDSE}. The derivative acting on the dressed propagator gives \begin{eqnarray}\label{eq:singlescaleG} \partial_t G\left[ \phi\right] = - G\left[\phi \right]\cdot \partial_t \left( \Gamma^{(2)}\left[\phi \right] + R_{\phi}\right)\cdot G\left[\phi \right]\,. \end{eqnarray} The derivative of the bare propagator in the DSE vanishes. \begin{figure}[t] \includegraphics[width=\columnwidth]{figures/DSE_procedure} \caption{Flow equation of the ghost propagator from the total $t$-derivative of the renormalised ghost-DSE. The cut line stands for the scale derivative acting on the propagator of the corresponding field, see \eq{eq:singlescaleG}. The square denotes the scale derivative acting on the full vertex, the vertex without filled circle denotes the classical vertex.} \label{fig:deriv_ghDSE} \end{figure} The DSE-flow is finite by construction, as it can be derived from the manifestly finite ghost flow in Fig.~\ref{fig:YM_props} by inserting the DSEs for the ghost-ghost and ghost-gluon scattering vertices. These DSEs are also manifestly finite and require no renormalisation. To see this explicitly in Fig.~\ref{fig:YM_props} we discuss the individual terms. First, we note that the total $t$-derivatives of the propagators, $\partial_t G$, shown in \eq{eq:singlescaleG} act as a UV-regularisation of the loops. They decay at least with $G^2$ as $\partial_t \Gamma_k^{(2)}$ tends towards a constant for large momenta. The total $t$-derivative $\partial_t \hat G$ decays even more rapidly with $\partial_t\hat R$ at large momenta due to \eq{eq:locality}. This reflects again the locality implemented by \eq{eq:locality} and its practical importance. The last diagram is proportional to the flow of the ghost-gluon vertex. It will be discussed in detail in Section~\ref{subsec:ghost-gluon} and is displayed schematically in Fig.~\ref{fig:cAcflow}. Here it suffices to say that the vertex itself is protected from renormalisation and its flow decays rapidly for large momenta. Both flow equations given in Fig.~\ref{fig:YM_props} and Fig.~\ref{fig:deriv_ghDSE} for the ghost are exact and are related via the above resummation procedure. In the present approximation scheme the DSE is more favourable as it only depends on the ghost-gluon vertex, which we shall resolve via its flow. In turn, the four-point functions do not appear explicitly in the total derivative of the DSE, and the related contributions are absorbed in the diagrams with cut propagators.
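The identity \eq{eq:singlescaleG} is nothing but the derivative of a matrix inverse, and can be checked numerically in a few lines (a standalone sketch with random matrices, unrelated to the actual Yang-Mills kernels):
\begin{verbatim}
# Check d/dt G = -G [d/dt (Gamma^(2) + R)] G for G = (Gamma^(2) + R)^{-1},
# with a random symmetric positive matrix A = Gamma^(2) + R and a random
# symmetric perturbation dA playing the role of the t-derivative.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)); A = A @ A.T + 4.0 * np.eye(4)
dA = rng.normal(size=(4, 4)); dA = dA + dA.T

eps = 1e-7
G = np.linalg.inv(A)
dG_fd = (np.linalg.inv(A + eps * dA) - G) / eps  # finite difference
dG_id = -G @ dA @ G                              # eq:singlescaleG
print(np.max(np.abs(dG_fd - dG_id)))             # small, of order eps
\end{verbatim}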
Notably, the ghost-ghost scattering vertex $\Gamma^{(4)}_{\bar c c\bar c c}$ has disappeared completely from the set of flow equations of ghost and gluon propagators, whereas the ghost-gluon scattering vertex $\Gamma^{(4)}_{\bar c A^2 c}$ is still present in the flow of the gluon two-point function. Note that a similar procedure in the flow of the gluon two-point function leads to two-loop diagrams, which are commonly dropped in DSE-computations. This is one of the reasons why we refrain from resumming the related diagrams. \subsection{Vertices and their flows} For the vertices we first introduce a convenient parameterisation that naturally captures the renormalisation group behaviour. To that end, we utilise the wave function renormalisation of the gluon and the ghost and write \begin{eqnarray} \hat \Gamma^{(n)}(p_1,...,p_{n})= \prod_{i=1}^{n} \bar Z^{1/2}_{\phi_i}(p_i)\, {\cal T}(p_1,...,p_{n})\,. \label{eq:Gmnsol}\end{eqnarray} The $\bar Z$-factors are chosen to be proportional to $Z$, and hence carry the RG-scaling of the vertex as well as the momentum dependence of the legs, including potential kinematical singularities, see e.g.\ \cite{Fischer:2009tn}. We choose \begin{eqnarray}\nonumber \bar Z_{L/T}(p)&=&\0{Z_{L/T}(p)\,p^2-\left[Z_{L/T}(q)q^2\right]_{q=0}}{p^2}\,,\\[1ex] \bar Z_{c}(p)&=&Z_{c}(p)\,. \label{eq:barZ}\end{eqnarray} However, the $\bar Z_{L/T}(p)$ are frozen for $p\leq p_{\rm peak}$, where $p_{\rm peak}$ is the potential turning point of the inverse propagator $\Gamma_k^{(2)}$ in the infrared, defined by $\partial_p (p^2 Z_{L/T}(p))_{p=p_{\rm peak}}=0$. The turning point $p_{\rm peak}$ depends on $k$ and tends towards zero for $k\to\infty$. Without this additional constraint $\bar Z_{L/T}$ would turn negative for small $k$, which reflects positivity violation. At finite temperature the turning points depend on $T$ and differ for $Z_T$ and $Z_L$; the turning point of the latter tends towards zero for $T\to\infty$ due to the Debye screening. We emphasise that this is done simply for convenience in order to avoid the splitting of a positive vertex dressing into two negative factors. The $\bar Z$'s take into account the RG-scaling of the fields, and reflect the gaps present in the gluonic degrees of freedom. This leaves us with a renormalisation group invariant tensor ${\cal T}$. It is regular up to logarithms and carries the canonical momentum dimension as well as the tensor and colour structure. For the flow of the propagators, Fig.~\ref{fig:YM_props}, ${\mathcal T}$ has to be computed for the three-gluon and four-gluon vertex, the ghost-gluon vertex, as well as for the four-ghost and ghost-gluon scattering vertices. The latter two, which are absent on the classical level, are treated in terms of exact resummations with the help of DSEs. Now we utilise the locality of the flow: it only carries loop momenta $q^2\lesssim k^2$ and is peaked at about $q^2 \approx k^2$. Hence, we approximate the vertices by evaluating them at the symmetric point with vanishing temporal components and $\vec p_i^{\,2}=k^2$, \begin{eqnarray}\label{eq:sympoint} (p_i)_0^2=0\,\qquad {\rm and}\qquad \vec p_i^{\,2}=k^2\,.
\end{eqnarray} Then the $\bar Z$-factors in \eq{eq:barZ} can be evaluated at fixed momenta $p$ with \eq{eq:sympoint} and we set \begin{eqnarray}\nonumber \bar Z_{k,A}&=&\bar Z_{A}(k)\theta(k-k_s)+\bar Z_{A}(k_s)\theta(k_s-k)\,,\\ \bar Z_{k,C}&=&Z_{C}(k)\,, \label{eq:barZk}\end{eqnarray} where $\bar Z_{k,A}$ is either $\bar Z_{k,L}$ or $\bar Z_{k,T}$, depending on the projection $P^{T/L}$ defined in \eq{eq:projections} on the respective leg of $\Gamma_k^{(n)}$. In a slight abuse of notation we have introduced $Z(k)$: the $Z$-factors in \eq{eq:barZk} are functions of $p_0^2$ and $\vec p^2$, which are evaluated at \eq{eq:sympoint}. The freezing scale $k_s\propto \Lambda_{\rm QCD}$ in \eq{eq:barZk} is chosen such that it is bigger than $p_{\rm peak}$. We point out that $k_s$ only defines the parameterisation of the ghost-gluon vertex. \subsection{Ghost-gluon vertex}\label{subsec:ghost-gluon} It is left to determine the dressings ${\cal T}$ for the primitively divergent vertices $\Gamma^{(3)}_{A^3}, \Gamma^{(4)}_{A^4}$ and $\Gamma^{(3)}_{\bar c Ac}$, which have a classical counterpart. We restrict ourselves to the classical vertex structure. The evaluation at the symmetric point \eq{eq:sympoint} leaves us with $k$-dependent dressing functions. For the ghost-gluon vertex we are led to \begin{equation} {\mathcal T}_{\bar cAc,\mu}^{abc}(q,p) =z_{k,\bar c A c}\,\0{1}{g} [S^{(3)}_{\bar cAc}(q,p) ]^{abc}_\mu= z_{k,\bar c A c} \,i q_\mu f^{abc}\,, \label{eq:gglapprox}\end{equation} where $g$ is the classical coupling, $S^{(3)}_{\bar cAc}$ is the classical ghost-gluon vertex derived from \eq{eq:fixedaction}, and $p$, $q$ are the ghost and anti-ghost momenta respectively, see Fig.~\ref{fig:cAc}. \begin{figure}[t] \includegraphics[width=.2\columnwidth]{figures/cAc_large_eq} \caption{Ghost-gluon vertex.} \label{fig:cAc} \end{figure} The $k$-dependent factor $z_{k,\bar c A c}$ is RG-invariant and defines a running coupling \begin{equation}\label{eq:baralpha} \bar\alpha_s(k)=\0{z^2_{k,\bar c A c}}{4 \pi}\,, \end{equation} with running momentum scale $k$. If expanded in powers of the coupling for large momenta, $\bar\alpha_s$ has the one- and two-loop universal coefficients of the $\beta$-function of Yang-Mills theory, where we have used that $\bar Z_{k,A}\to Z_{k,A}$ for large cut-off scales. The flow of $z_{\bar cAc}$ is extracted from that of the ghost-gluon vertex. Here this flow is computed within a DSE-resummation similar to the derivation made for the ghost propagator. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/cAc_DSE_eq} \caption{DSE for the ghost-gluon vertex. The vertex without filled circle denotes the classical vertex.} \label{fig:cAc_DSE} \end{figure} Again, the DSE-resummed flow is finite as it is derived from the standard flow equation for the ghost-gluon vertex, which is finite by construction. Note, however, that already the DSE in Fig.~\ref{fig:cAc_DSE} is finite without any renormalisation procedure due to the non-renormalisation theorem for the ghost-gluon vertex. For simple kinematical reasons, this property is also present in approximation schemes that respect the kinematical symmetries. The above arguments allow us to start straightaway with the simple ghost-gluon DSE, see Fig.~\ref{fig:cAc_DSE}, which only contains one-loop terms. Similarly to the flow for the ghost propagator, we turn the DSE in Fig.~\ref{fig:cAc_DSE} into a flow equation by taking its $t$-derivative.
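For concreteness, the classical tensor structure in \eq{eq:gglapprox} can be constructed explicitly. The following standalone sketch builds the SU(3) structure constants $f^{abc}$ from the Gell-Mann matrices and assembles the vertex tensor (illustration only; the dressing $z_{k,\bar c Ac}$ is the actual dynamical object):
\begin{verbatim}
import numpy as np

# Gell-Mann matrices lambda_a, a = 1..8
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0, 0, 1] = lam[0, 1, 0] = 1
lam[1, 0, 1] = -1j; lam[1, 1, 0] = 1j
lam[2, 0, 0] = 1;   lam[2, 1, 1] = -1
lam[3, 0, 2] = lam[3, 2, 0] = 1
lam[4, 0, 2] = -1j; lam[4, 2, 0] = 1j
lam[5, 1, 2] = lam[5, 2, 1] = 1
lam[6, 1, 2] = -1j; lam[6, 2, 1] = 1j
lam[7, 0, 0] = lam[7, 1, 1] = 1 / np.sqrt(3); lam[7, 2, 2] = -2 / np.sqrt(3)

# structure constants: f^{abc} = -(i/4) Tr([lambda_a, lambda_b] lambda_c)
comm = (np.einsum('aij,bjk->abik', lam, lam)
        - np.einsum('bij,ajk->abik', lam, lam))
f = np.real(-0.25j * np.einsum('abik,cki->abc', comm, lam))

def vertex(q, z):
    # T_mu^{abc}(q) = z * i * q_mu * f^{abc}, cf. the parameterisation above
    return z * 1j * np.einsum('m,abc->mabc', q, f)

print(f[0, 1, 2])  # f^{123} = 1
\end{verbatim}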
The flow of the vertex allows for a temperature-dependent computation, which has an important effect on the calculation of the propagators. It remains to project the vertex flow onto that of the renormalisation group invariant dressing function $z_{k,\bar c Ac}$. The projection on the classical vertex structure is done by \begin{equation}\label{eq:GbarcAcps} \Gamma_{\bar c Ac}(p_s) = \left(\0{[\hat \Gamma_{\bar c Ac}^{(3)}]_\mu^{abc}\, [S^{(3)}_{\bar cAc}]_\mu^{abc}} {[S^{(3)}_{\bar cAc}]_\nu^{def} [S^{(3)}_{\bar cAc}]_\nu^{def}} \right)_{p^2=q^2=(p+q)^2=p_s^2}\,, \end{equation} with an evaluation at the symmetric point at the momentum scale $p_s$. For the classical vertex $\Gamma_{\bar c Ac}^{(3)}=S_{\bar c Ac}^{(3)}$ derived from \eq{eq:fixedaction}, the dressing is simply unity, $\Gamma_{\bar c Ac}(p_s) =1$. In the present approximation we evaluate the vertices at the momentum scale $k$, and hence we define \begin{eqnarray}\label{eq:GbarcAc} \Gamma_{k,\bar c Ac}=\Gamma_{\bar c Ac}(k)\,. \end{eqnarray} Note that the lhs depends on $k$ via the evaluation at $p_s=k$ but also due to the implicit dependence of the vertex on the cut-off scale. The full vertex dressing in \eq{eq:GbarcAc} also includes the dressing of the legs as split off in \eq{eq:Gmnsol}. Hence, the dressing function $z_{k,\bar c Ac}$ is given by \begin{equation}\label{eq:zbarcAc} z_{k,\bar c Ac} =\0{1}{\bar Z_{k,A}^{1/2} \bar Z_{k,c}} \Gamma_{k,\bar c Ac}\,. \end{equation} The flow $\partial_t z_{k,\bar c Ac}$ is determined from \eq{eq:zbarcAc} and is directly related to that of the ghost-gluon vertex. Taking the $t$-derivative of \eq{eq:zbarcAc} leaves us with \begin{eqnarray}\nonumber \left(\partial_t+\012 \0{\partial_t \bar Z_{k,A}}{\bar Z_{k,A}}+ \0{\partial_t \bar Z_{k,c}}{\bar Z_{k,c}}\right) z_{k,\bar c Ac}=\partial_t \Gamma_{k,\bar c Ac}\, \0{1}{\bar Z_{k,A}^{1/2} \bar Z_{k,c}} \,. \label{eq:flowzbarcAc}\end{eqnarray} The scale-derivative of the full dressing $\Gamma_{k,\bar c Ac}$ receives contributions from the flow of the vertex as well as from the derivative with respect to the momentum at the symmetric point, \begin{eqnarray}\nonumber \partial_t \Gamma_{k,\bar c Ac} = \left[\partial_t \Gamma_{\bar c Ac}(p_s)+ p_s \partial_{p_s} \Gamma_{\bar c Ac}(p_s)\right]_{p_s=k} \,. \label{eq:flowGbarcAc}\end{eqnarray} Upon integration, the flow \eq{eq:flowzbarcAc} gives us the vertex dressing of the ghost-gluon vertex at a given cut-off scale $k$, \begin{equation} z_{k,\bar c Ac} = z_{k=0,\bar c Ac}+ \int_0^{k}\frac{dk'}{k'} \partial_{t'} z_{k',\bar c Ac}. \end{equation} For the thermal flows in the present work the initial condition $\left.z_{k=0,\bar c Ac}\right|_{T=0}$ is required. It is here that we take advantage of the progress made over the last two decades in our understanding of Landau gauge QCD at vanishing temperature, for a review see \cite{Fischer:2008uz}. At vanishing temperature a one-parameter family of solutions with infrared enhanced ghost propagators and gapped gluon propagators in Landau gauge has been found. All solutions have a gluon propagator with a mass gap $m_{\rm gluon}\propto \Lambda_{\rm QCD}$. They only differ from each other in the deep infrared for momenta $p^2\ll\Lambda^2_{\rm QCD}$. There the gluon propagator is described by \begin{eqnarray} \label{eq:deepIR} Z_A(p\ll \Lambda_{\rm QCD}) \propto c(p) \0{m^2_{\rm gluon}}{p^2}\,, \end{eqnarray} where $c(p)\gtrsim 1$ is a momentum-dependent function which is bounded from below.
For all solutions but one, $c(p)$ is also bounded from above; these solutions are called decoupling solutions, as the gluon decouples in that momentum regime. There is one distinguished member of this family where $c(p)$ diverges like $p^{2+2\kappa_A}$ with $\kappa_A<-1$, see \cite{Fischer:2008uz}. This solution is called the scaling solution, as the infrared propagators and vertices are uniquely determined by scaling laws up to constant prefactors, see e.g.\ \cite{Zwanziger:2001kw,Lerche:2002ep,Alkofer:2004it,Fischer:2006vf,Fischer:2008uz,Fischer:2009tn,Alkofer:2008jy}. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/DSE_approx_cAc_large_eq} \caption[]{Flow equation for the ghost-gluon vertex from the total $t$-derivative of the ghost-gluon vertex DSE in Fig.~\ref{fig:cAc_DSE}. The cut line stands for the scale derivative acting on the propagator of the corresponding field, see \eq{eq:singlescaleG}. The square denotes the scale derivative acting on the full vertex, the vertex without a filled circle denotes the classical vertex. } \label{fig:cAcflow} \end{figure} Most importantly, the flow of the ghost-gluon vertex in Fig.~\ref{fig:cAcflow} is not sensitive to the infrared behaviour of the gluon propagator. All diagrams in Fig.~\ref{fig:cAcflow} vanish in the limit $k\to 0$ with powers of $k^2/m^2_{\rm gap}$, \begin{eqnarray} \label{eq:insensitive} \lim_{k\to 0}\partial_t z_{k,\bar c Ac} \propto \0{k^2}{m^2_{\rm gap}}\,. \end{eqnarray} This also entails that the vertex dressing tends toward a constant in the infrared. Hence, we infer that the infrared value of the ghost-gluon vertex is the same for the whole class of solutions up to subleading terms in RG-transformations. This can be used to compute $\left.z_{k=0,\bar c Ac}\right|_{T=0}$ for the whole class of solutions. It has been shown that the scaling solution for constant ghost-gluon dressings is determined in the FRG up to an RG-constant, see \cite{Pawlowski:2003hq}, \begin{subequations}\label{eq:IRvalue} \begin{equation}\label{eq:IRvalue1} \left. z^2_{k=0,\bar c Ac}\right|_{T=0} \left(\0{\bar Z_{A}(p)\bar Z^2_{c}(p)}{ Z_{A,s}(p)Z^2_{c,s}(p)}\right)_{p=0}= 4 \pi\alpha_{s,\rm IR} \,, \end{equation} with the scaling wave function renormalisations $Z_{A/C,s}$ for gluon and ghost propagators, respectively. The coupling $\alpha_{s,\rm IR}$ is analytically known, see \cite{Lerche:2002ep}, \begin{eqnarray}\label{eq:IRvalue2} \alpha_{s,\rm IR} = -\frac{4 \pi}{N_c}\frac{2}{3} \frac{\Gamma(-2\kappa) \Gamma(\kappa-1)\Gamma(\kappa+3)}{\left(\Gamma(-\kappa)\right)^2 \Gamma(2\kappa-1)} \overset{N_c=3}{\approx} 2.97\,, \end{eqnarray} \end{subequations} where $\kappa \approx 0.595$. For momenta $p\gg \Lambda_{\rm QCD}$ the $Z_{A/C,s}(p)$ tend towards the decoupling solutions up to RG-scalings. Demanding equivalence for large momenta fixes the relative ultraviolet renormalisation condition. In summary, this allows us to fix $z^2_{k=0,\bar c Ac}$ at vanishing temperature for a given set of scaling or decoupling propagators in terms of the UV renormalisation condition. In the light of the ongoing debate about the infrared behaviour of Landau gauge propagators in the vacuum ($T=0$) it is important to stress the following: First, the flow of the vertex function is not sensitive to the differences of the momentum behaviour of the propagators in the deep infrared, $p\ll \Lambda_{\rm QCD}$, as it is switched off for $k\to 0$ below $\Lambda_{\rm QCD}$.
Second, the above argument leading to \eq{eq:IRvalue} only relies on the technical possibility of finding initial ultraviolet conditions for the flow in the given approximation which flow into the scaling solution in the infrared. This is trivially possible, see \cite{Pawlowski:2003hq,Fischer:2004uk,Fischer:2008uz}. Then, the analytical values of the scaling solution fix \eq{eq:IRvalue} for both scaling and decoupling solutions. This does not resolve the infrared problem in Landau gauge Yang-Mills theory, which is closely related to the picture of confinement as well as to the resolution of the Gribov problem in this gauge. At non-vanishing temperature the coupling is suppressed below the temperature scale $k\sim T$, see e.g.\ \cite{Braun:2005uj}. Furthermore, we have to distinguish between transversal and longitudinal gluon legs. If all Matsubara frequencies vanish, $p_0=q_0=0$, the longitudinal vertex vanishes and the distinction is only relevant for non-vanishing Matsubara frequencies. There, however, all (hatted) quantities quickly tend towards their $T=0$ counterparts. Therefore, we approximate the longitudinal vertex dressing at finite temperature by the transversal one, \begin{eqnarray}\label{eq:long+trans} z_{k,\bar c Ac}^L=z_{k,\bar c Ac}^T\,. \end{eqnarray} In summary, we have set up a relatively simple flow for the ghost-gluon vertex which already covers the quantitative features of the full vertex flow. We have also checked our approximation by computing the full vertex flow on the basis of the results in the present approximations. This also provides us with an estimate of the systematic error in the present approximation. A full analysis of this will be published elsewhere. \subsection{Gluonic vertices}\label{subsec:gluonicvertices} It remains to determine the purely gluonic vertices. They are described within a parameterisation similar to \eq{eq:gglapprox}. We have schematically \begin{eqnarray} {\mathcal T}_{A^3} = z_{k,A^3}\,\0{1}{g} S^{(3)}_{A^3} \,, \qquad {\mathcal T}_{A^4} = z_{k,A^4}\,\0{1}{g^2} S^{(4)}_{A^4} \,, \end{eqnarray} where the vertex dressings $z_{k,A^3}$ and $z_{k,A^4}$ relate directly to the ghost-gluon dressing $z^2_{k,\bar c Ac}=4 \pi\bar \alpha_s(k)$, \eq{eq:baralpha}, for large cut-off scales $k\gg \Lambda_{\rm QCD}$ or large momenta due to two-loop universality. Indeed, this reasoning has been validated in \cite{Kellermann:2008iw} with the DSE for the four-point coupling. Hence, we have $z_{k,A^4}\simeq z^2_{k,\bar cAc}$ for most of the momentum regime with a potential deviation in the deep infrared for $k^2/\Lambda_{\rm QCD}^2\ll 1$. We parameterise accordingly \begin{eqnarray}\label{eq:f} z_{k,A^3}= z_3\,z_{k,\bar cAc}\,,\quad z_{k,A^4}= z_4\, z^2_{k,\bar cAc}\,, \end{eqnarray} where the $z_i$, $i=3,4$, are functions which are expected to approach unity for $k^2\gtrsim \Lambda^2_{\rm QCD}$, that is \begin{eqnarray}\label{eq:f1} z_{i,k\gg \Lambda_{\rm QCD}} \to 1\,, \end{eqnarray} for $i=3,4$.
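As a parenthetical remark, the analytic infrared value \eq{eq:IRvalue2} used above is easily verified numerically (a standalone check, assuming scipy is available, with $\kappa\approx 0.595$ as quoted above):
\begin{verbatim}
import math
from scipy.special import gamma

kappa, Nc = 0.595, 3
alpha_IR = (-4 * math.pi / Nc) * (2 / 3) * (
    gamma(-2 * kappa) * gamma(kappa - 1) * gamma(kappa + 3)
    / (gamma(-kappa) ** 2 * gamma(2 * kappa - 1)))
print(alpha_IR)  # approximately 2.97
\end{verbatim}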
The infrared behaviour of the $z_i$ is determined by the only diagrams in the flows that do not depend on the gapped gluon propagator, see Fig.~\ref{fig:ghostloops}. \begin{figure}[t] \includegraphics[width=.7\columnwidth]{figures/ghost_triangle_and_box} \caption{Infrared dominating diagrams in the flow equations for the three-gluon vertex and the four-gluon vertex.} \label{fig:ghostloops} \end{figure} The effect of these diagrams on the vertices is determined by two competing effects: First, these diagrams are suppressed due to their colour structures, and the related vertex dressings are suppressed relative to that of the ghost-gluon vertex. In the case of the four-gluon vertex this combinatorial suppression is of order $10^2$--$10^3$, which is nicely seen in the solution in \cite{Kellermann:2008iw}. For the three-gluon vertex the combinatorial suppression factor also turns out to be of order $10^2$--$10^3$, depending on the chosen momenta; details will be published elsewhere. This factor has been determined as $0.017$ within a two-dimensional lattice study, see \cite{Maas:2007uv}. Second, these diagrams grow large in comparison to the respective diagram in the flow of the ghost-gluon vertex, which involves one gapped gluonic propagator. In the decoupling case the flow of the gluonic vertices for $k\to 0$ is proportional to $k^0$ up to logarithms. This leads to an effective suppression of diagrams with gluonic vertices, as such diagrams also involve gapped gluon propagators, and hence a suppression with $k^2/m^2_{\rm gap}$. In the scaling case the diagrams with gluonic vertices decouple with powers of the scaling, which can be seen best in the DSE-hierarchy. In summary, the above analysis entails that we can safely drop the related diagrams for $k\leq \Lambda_{\rm QCD}$, and above this scale the vertices have a dressing similar to that of the ghost-gluon vertex. In the vertices we have factored out the multiplicative dressing functions $\bar Z^{1/2}$ for the respective legs, which are at our disposal. In turn, the relative thermal suppression factor is safely encoded in the ratio of the full ghost-gluon vertex dressings at finite and vanishing temperature, \begin{eqnarray}\label{eq:ratio} \0{\left.\Gamma_{k,\bar c Ac}\right|_{T}}{\left.\Gamma_{k,\bar c Ac}\right|_{T=0}}\,. \end{eqnarray} Since the temperature-dependent dressing factor $z_{\bar c A c}$ has already been included in the definition \eq{eq:f}, we deal with the reduced ratio of the wave function renormalisations, \begin{equation}\label{eq:rbarcAc} r_{\bar c A c}(k,T) = \frac{\left.\bar Z_{k,A}^{1/2}\,\bar Z_{k,C}\right|_{T}} {\left.\bar Z_{k,A}^{1/2}\,\bar Z_{k,C}\right|_{T=0}}\,, \end{equation} leading to the final approximation of the gluonic vertex dressings, \begin{equation}\label{eq:finaldressing} z_i(k,T) = r_{\bar c A c}(k,T)^{i-2}\,, \quad z^{\rm min}_i(k,T) =\0{ \bar Z_{k,A} }{ Z_{k,A} } z_i(k,T) \,. \end{equation} The results we show are achieved with the second set of vertex dressings, where we additionally switch off the gluonic vertices linearly below the temperature scale, which implements the additional thermal suppression in the infrared. This improves the numerical convergence. In Section~\ref{sec:props+vertices} we test the sensitivity of the results to this switching-off. At non-vanishing temperature the structure functions $z_{A^n}$ also have to carry the difference between the coupling to longitudinal and transversal gluons, which is relevant for the second choice in \eq{eq:finaldressing}.
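Evaluating the dressings \eq{eq:rbarcAc} and \eq{eq:finaldressing} is pure bookkeeping; a minimal sketch with assumed illustrative values for the wave-function renormalisations:
\begin{verbatim}
def r_cAc(ZbarA_T, ZbarC_T, ZbarA_0, ZbarC_0):
    # eq:rbarcAc -- ratio of vertex leg dressings at T and at T = 0
    return (ZbarA_T ** 0.5 * ZbarC_T) / (ZbarA_0 ** 0.5 * ZbarC_0)

def z_gluonic(i, r):
    # eq:finaldressing -- z_i = r^(i-2); i = 3: three-gluon, i = 4: four-gluon
    return r ** (i - 2)

r = r_cAc(0.8, 0.9, 1.0, 1.0)  # assumed values at some scale k and temperature T
print(z_gluonic(3, r), z_gluonic(4, r))
\end{verbatim}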
At finite temperature the ratio $\bar Z_A/Z_A$ in \eq{eq:finaldressing} turns into $\bar Z_{L/T}/Z_{L/T}$. The validity of the present approximation is then tested by computing the flow of $z_{A^3}$ on the basis of our results. Note also that $z_{k,\bar cAc}$ only has transversal parts, as we evaluate the flows at vanishing Matsubara frequency. \subsection{Regulators} For a numerical treatment of finite temperature flow equations, exponential regulators \begin{equation} r_m(x)= \frac{x^{m-1}}{ e^{x^m}-1} \label{eq:reg} \end{equation} generally provide good numerical stability. The parameter $m$ controls the sharpness of the regulator, which directly relates to the locality of the flow. As already pointed out in Section~\ref{sec:local}, locality is a central issue. The importance of sufficiently local flows is also seen on the level of the regulator, where it turns out that the non-locality induced by the exponential regulator \eq{eq:reg} with $m=1$ spoils the stability of the flow. On the other hand, regulators with a steep descent lead to a slower convergence of the regularised thermal propagators and vertices towards the vacuum ones at $T=0$. Indeed, for sharp or non-analytic cut-off functions thermal modifications are present for arbitrarily large cut-off scales $\Lambda$. This invalidates the use of the $T=0$ initial conditions at the initial UV scale $\Lambda$. Accordingly, we use $m=2$ in the computation; the corresponding regulator shape is shown in Fig.~\ref{fig:R}. The full regulators for the gluon and the ghost are given by \begin{eqnarray} \hat R^{T/L}{}_{{\mu\nu}}^{ab}(p)& =& \delta^{ab} P^{T/L}_{{\mu\nu}}\, \bar Z_{k,T/L}\, p^2 r_2(p^2/k^2)\,,\nonumber\\ \hat R^{ab}(p)& =& \delta^{ab}\, Z_{c}(0)\, p^2 r_2(p^2/k^2)\,, \end{eqnarray} with the projection operators $P^{L/T}$ defined in \eq{eq:projections}. We have chosen $Z_c(0)$ instead of $Z_c(k)$ for numerical convenience. The ghost renormalisation function $Z_c(p;T)$ tends towards zero for small momenta and finite temperatures. It is a balance between the ghost propagator, the gluon propagator and the ghost-gluon vertex which prevents $Z_c$ from becoming negative. This balance is better resolved numerically with the prefactor $Z_{c}(0)$ in the regulator $R_c$ for the ghost. At vanishing temperature we have $\bar Z_{k,L}=\bar Z_{k,T}$, and the gluon regulator $R_{k,A}=R^T+R^L$ is proportional to the four-dimensional transversal projection operator $\Pi$ defined in \eq{eq:projections0}, see also the discussion below \eq{eq:dSk}. \section{Computational Details}\label{sec:comp} The discussion of the last section leaves us with a coupled set of partial integro-differential equations depicted in Fig.~\ref{fig:truncation}. The initial condition is set in the vacuum, $T=0$, at vanishing cut-off scale, $k=0$. The vertices are determined by \eq{eq:IRvalue2} and the relations in Sections~\ref{subsec:ghost-gluon} and \ref{subsec:gluonicvertices}, and the input gluon and ghost propagators are taken from \cite{Fischer:2008uz}, from which we choose a decoupling solution. In order to solve the flow equation up to a UV scale $\Lambda$ we adopt an iteration procedure, which is very stable for the direction $k\rightarrow\Lambda$. Herein the iteration $(i+1)$ only depends on the given solution $(i)$, \begin{eqnarray}\label{eq:iteration} &&\Gamma^{(n)}_{k,i+1}= \Gamma^{(n)}_{k=0,i}+\nonumber\\ &&\hspace{1.5cm}+ \int_{0}^{k}\frac{dk'}{k'} \textnormal{Flow}^{(n)}_{i+1}\left(\Gamma^{(n)}_{k,i},\textnormal{Flow}^{(n)}_{i}\right)\,.
\end{eqnarray} Incrementing the number of iterations until the solution is stable under further iteration, i.e.\ $\Gamma^{(n)}_{k,m+1}=\Gamma^{(n)}_{k,m}$ up to the desired accuracy, gives the solution of the flow equation. As the starting point of the iteration we take $\Gamma^{(n)}_{k,(0)} = \Gamma^{(n)}_{T=0,k=0}$. The iteration then converges rapidly to the initial condition $\Gamma^{(n)}_{\Lambda,T=0}(p^2)$. \begin{figure}[t] \includegraphics[width=\columnwidth]{figures/YM_system_truncation_large_eq} \includegraphics[width=.9\columnwidth]{figures/DSE_approx_cAc_large_eq} \caption{Yang-Mills flows for propagators and ghost-gluon vertex in the approximation discussed in Section~\ref{sec:approximation}. The flows for the ghost propagator and the ghost-gluon vertex are DSE-resummed and the ghost-tadpole in the gluon equation is neglected.} \label{fig:truncation} \end{figure} In contrast to the direction $k\rightarrow \Lambda$ in the zero temperature case, the flow from high to low scales at finite temperature involves instabilities. This is due to the self-regulating nature of the equations for the wave-function renormalisation of the ghost in combination with the ghost-gluon vertex dressing. Their structure is such that if one of the quantities becomes small it stops the flow of the other and subsequently also its own flow, which is what happens at finite temperature. However, as soon as one of these quantities happens to be negative in an intermediate iteration step, the iteration becomes unstable, i.e.\ each iteration step takes the iterate further away from the correct solution. The system is highly sensitive to this, as one has to resolve very small values numerically. Therefore, we pursue a more direct strategy to solve the flow, namely an evolution of the flow according to a Runge-Kutta solver, shown here schematically as the lowest-order step \begin{eqnarray} &&\Gamma^{(n)}_{k_{i-1}} = \Gamma^{(n)}_{k_{i}}+ \frac{k_{i-1}-k_{i}}{k_{i}} \textnormal{Flow}^{(n)}_{k_{i}}, \end{eqnarray} from the starting condition $k_N = \Lambda\gg T$ to $k_0=0$. In the evolution the system reacts to the balancing effect between the ghost propagator and ghost-gluon vertex immediately, and the purely numerical problem of possibly negative values of $Z_c$ or $z_{\bar{c}Ac}$ in the iteration is avoided. Note that the result is again stable under iteration, as it is the exact solution of the equation. The sensitivity to the balancing is still present, but shows up in the form of a small evolution step size of $\left|k_{i-1}-k_{i}\right| \lesssim 10\,\textnormal{MeV}$. \section{Results for Propagators and Vertices}\label{sec:props+vertices} In this section we present the results for the ghost and gluon propagators and the ghost-gluon vertex. The temperature is given in lattice units. As has been argued in Section~\ref{sec:thermalflow}, the results for the ghost and gluon propagators show the typical thermal scale $2 \pi T$. Below this scale we have significant temperature effects on the momentum dependence. In turn, above this scale the thermal fluctuations are suppressed and all propagators tend towards their vacuum counterparts at vanishing temperature. This also holds true for the ghost-gluon vertex, and supports the self-consistency and stability of the thermal flows as discussed in Section~\ref{sec:thermalflow}. The most significant effect can be seen for the chromoelectric and chromomagnetic gluon propagators, that is the components of the propagator longitudinal and transversal to the heat bath.
The zero mode of the longitudinal gluon propagator at various temperatures is given in Fig.~\ref{fig:GL} as a function of spatial momentum. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/GL} \caption[]{Longitudinal gluon propagator $G_L$ at different temperatures as a function of spatial momentum.} \label{fig:GL} \end{figure} For low temperatures $T\lesssim 150\textnormal{ MeV}$ we see an enhancement of the longitudinal propagator. Such an enhancement is also seen on the lattice, see \cite{Maas:2011ez,Aouane:2011fv,Maas:2011se,Cucchieri:2011di,Fischer:2010fx,Cucchieri:2007ta}. It has been emphasised on the basis of the FRG that specifically the propagator of the electric mode should show critical behaviour if computed in the background fields that solve the non-perturbative equations of motion \cite{Maas:2011ez}. However, the significance of the lattice results so far as well as quantitative details are not settled yet. For higher temperatures the longitudinal propagator is suppressed relative to the gluon propagator at vanishing temperature. This is the expected behaviour, caused by the thermal Debye screening of the chromoelectric gluon. For asymptotically high temperatures $T\gg T_c$ the chromoelectric gluon decouples. The onset of this behaviour at about $T\approx 200\textnormal{ MeV}$ is earlier than in the respective lattice computations \cite{Maas:2011ez,Aouane:2011fv,Maas:2011se,Cucchieri:2011di,Fischer:2010fx,Cucchieri:2007ta}, where the thermal decoupling takes place for temperatures larger than the critical temperature. In order to capture this behaviour we have to extend our present truncation with a self-consistent inclusion of the Polyakov loop background as well as a better resolution of the purely gluonic vertices for momenta and frequencies below $\Lambda_{\rm QCD}$. Both extensions are under way, and the results will be presented elsewhere. In turn, for large temperature and momentum scales above $\Lambda_{\rm QCD}$ the above lack of quantitative precision at infrared scales is irrelevant. Here we see quantitative agreement with the lattice results, see Fig.~\ref{fig:FRG_latt_long}. The transversal mode is not enhanced for small temperatures, in clear distinction to the longitudinal mode. It decreases monotonically with temperature, see Fig.~\ref{fig:GT}. Moreover, it develops a clear peak at about $500\,$MeV. This can be linked to positivity violation, which has to be present for the transversal mode: in the high temperature limit it describes the remaining dynamical gluons of three-dimensional Yang-Mills theory in the Landau gauge. The infrared bending is more pronounced than that of the respective lattice results, and its strength is subject to the lack of quantitative precision at these scales. In turn, for larger momenta the transversal propagator agrees well with the respective lattice propagator. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/GT} \caption[]{Transversal gluon propagator $G_T$ at different temperatures as a function of spatial momentum.} \label{fig:GT} \end{figure} The ghost only shows a small temperature dependence, in contradistinction to the gluonic propagators. This is fully compatible with the lattice results, see \cite{Maas:2011ez,Aouane:2011fv,Maas:2011se,Cucchieri:2011di,Fischer:2010fx,Cucchieri:2007ta}.
\begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/GC} \caption{Ghost propagator $G_c$ at different temperatures as a function of spatial momentum.} \label{fig:GC} \end{figure} The temperature dependence is hardly evident on the level of the propagator, see Fig.~\ref{fig:GC}, but can be resolved on the level of the wave-function renormalisation, see Fig.~\ref{fig:ZC}. The wave-function renormalisation is slightly suppressed, which corresponds to a successive enhancement of the ghost propagator at finite temperature. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/ZC} \caption{Zero mode of the wave-function renormalisation of the ghost $Z_c$ at different temperatures as a function of spatial momentum.} \label{fig:ZC} \end{figure} The enhancement of the ghost propagator is potentially self-amplifying, as it feeds back into the flow of the ghost two-point function, see Fig.~\ref{fig:truncation}. This would cause a pole in the ghost propagator at some temperature if the ghost-gluon vertex did not change. However, in this case the flow of the latter is dominated by the diagrams with two ghost propagators, see Fig.~\ref{fig:truncation}, and the ghost-gluon vertex flows to zero. This non-trivial interplay of the flow of the ghost propagator with the flow of the ghost-gluon vertex leads to a self-stabilising system and prevents a further enhancement of the ghost. This effect is crucial for the stability of the solution of the Yang-Mills system at finite temperature. Indeed, for a constant vertex no solution could be obtained for intermediate temperatures $T\approx T_c$ and above. Thus we conclude that any reliable truncation must comprise direct thermal effects also in $n$-point functions with $n\leq3$. The self-stabilising property of the Yang-Mills system explained above is clearly seen in the temperature dependence of the ghost-gluon vertex. The vertex is successively suppressed with temperature, see Fig.~\ref{fig:ZcAc}, which in turn ensures the relatively mild change of the ghost propagator. Especially the sharp drop-off of the vertex at small scales $k$ accounts for the smallness of the thermal corrections to the ghost propagator. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/ZcAc} \caption{Dressing function $z_{\bar{c}Ac}$ of the ghost-gluon vertex at different temperatures as a function of spatial momentum.} \label{fig:ZcAc} \end{figure} In Section~\ref{subsec:gluonicvertices} we approximated the temperature dependence of the gluonic vertices by an ansatz that combines all necessary qualitative properties of the vertices. In order to check the sensitivity of the propagators at low temperatures to this choice, we compare the results obtained with a strong suppression to a computation where the minimal suppression of the gluonic vertices defined in \eq{eq:finaldressing} is implemented. This gives an approximate error band for the propagators with respect to the ansatz for the gluonic vertices. We find that the main impact of the truncation of the gluonic vertices is on the longitudinal propagator below $T_c$, shown in Fig.~\ref{fig:SO_GL}.
\begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/SO_GL} \caption{Comparison of longitudinal propagator with minimal and maximal strength of the coupling of purely gluonic vertices.} \label{fig:SO_GL} \end{figure} We have already mentioned that our truncation has to be improved in order to quantitatively capture the deep infrared, and the temperature dependence of the gluonic vertices is an important ingredient. In contrast to the sensitivity of the deep infrared behaviour of the longitudinal gluon, the transverse gluon propagator, see Fig.~\ref{fig:SO_GT}, as well as the ghost propagator hardly feel the modified infrared behaviour of the gluonic $n$-point functions. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/SO_GT} \caption{Comparison of transversal propagator with minimal and maximal strength of the coupling of purely gluonic vertices.} \label{fig:SO_GT} \end{figure} This is in agreement with the well-known fact that the infrared sector of Yang-Mills theory in Landau gauge exhibits ghost dominance (for both scaling and decoupling solutions) for any dimension $d=2,3,4$. As the wave-function renormalisation of the ghost with the minimal and maximal cut-off of gluonic vertices cannot be distinguished by eye, we refrain from illustrating it explicitly. In contrast to this, the ghost-gluon vertex reflects the switching-off of the gluonic vertices, see Fig.~\ref{fig:SO_ZcAc}, albeit only in a region where the gluonic contributions to the flow are subleading. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/SO_ZcAc} \caption{Comparison of ghost-gluon vertex with unbounded and bounded gluonic flow.} \label{fig:SO_ZcAc} \end{figure} In summary, the scheme-dependence of the gluonic vertices affects the details of the thermal decoupling of the longitudinal propagator, but has no effect on the transversal propagator and the ghost. The sensitivity of the thermal decoupling of the propagator to the thermal decoupling of the vertices is easily understood. In turn, transversal gluon and ghost are the relevant degrees of freedom for asymptotically high temperatures, where we are led to a three-dimensional confining theory. This property can be seen nicely on the lattice, where only the transversal gluon is sensitive to the removal of confining configurations, see e.g.\ \cite{Chernodub:2011pr}. We conclude that our approximations do not affect the confining physics. In the following we compare the propagators above with lattice results \cite{Maas:2011ez,Fischer:2010fx,AxelPrivComm}. For this purpose, we scale the lattice data such that the lattice propagators at vanishing temperature match our normalisation at momenta $p\gtrsim 1\textnormal{ GeV}$. Note that we did not use the lattice propagator as the initial condition; thus the deep infrared of the data deviates from our propagator already at zero temperature, and this deviation persists in the propagators at finite temperature. Apart from that, there is quantitative agreement with the lattice data with respect to the (temperature dependent) momentum region where the thermal effects appear. In Fig.~\ref{fig:FRG_latt_trans} the transversal propagators are compared. The critical temperature in the lattice data is $T_c=277\,\textnormal{MeV}$. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/FRG_latt_trans_2} \caption[]{Transversal gluon propagator in comparison with lattice results \cite{Maas:2011ez,Fischer:2010fx}.
The lattice data has been rescaled such that the $T=0$ propagators match at intermediate momenta $p\gtrsim 1\textnormal{ GeV}$.} \label{fig:FRG_latt_trans} \end{figure} Clearly, we match the lattice propagator, except for the strong bending of the FRG propagator in the infrared region. This difference is a direct consequence of the mismatch of the ghost and gluon propagators in the deep infrared at vanishing temperature, and relates to the different decoupling solution chosen here. Apart from the deep infrared, the momentum and temperature behaviour of the magnetic gluon matches that of the lattice propagator. In contrast to this, the electric gluon on the lattice shows a qualitatively different behaviour for temperatures below and around the phase transition. Although the longitudinal propagators agree for $T=0.361\,T_c\approx 100\,\textnormal{MeV}$, it is exactly this region where the uncertainty due to the truncation for the gluonic vertices is large, as shown in Fig.~\ref{fig:SO_GL} for $T=200\,$MeV. \begin{figure}[t] \includegraphics[width=.9\columnwidth]{figures/FRG_latt_long_2} \caption[]{Longitudinal gluon propagator in comparison with lattice results \cite{Fischer:2009gk}. The lattice data has been rescaled such that the $T=0$ propagators match at intermediate momenta $p\gtrsim 1\textnormal{ GeV}$.} \label{fig:FRG_latt_long} \end{figure} Being aware of a potential truncation dependence in the deep infrared of the longitudinal propagator at low temperatures, we note that in the present truncation the electric gluon shows the onset of the enhancement found on the lattice. With increasing temperature this feature disappears, and we see a qualitatively different effect for temperatures below $T_c$. While the continuum result shows a strictly monotonically decreasing propagator, the counterpart on the lattice is enhanced in the confining regime, but reflects the phase transition in the form of a rapid decrease at $T_c$. Nevertheless, this behaviour is expected to be missed in the present truncation, as the Polyakov loop potential $V(A_0)$ is pivotal for the critical behaviour around the phase transition. In a full calculation the inverse propagator is proportional to the second derivative of the Polyakov loop potential, $\Gamma_{A,L}^{(2)}\sim V''(A_0)$, see \cite{Pawlowski:2010ht,Braun:2010cy,Braun:2009gm,Braun:2007bx}. This dependence introduces an additional screening as well as vertex corrections, but it has been dropped in the computations presented here. This upgrade should also give us access to the question of the signatures of criticality in the chromoelectric propagator discussed in \cite{Maas:2011se}. Interestingly, such a term is absent in the magnetic modes, which do not depend strongly on the vertex approximation. This suggests that the inclusion of $A_0$ will further stabilise our computation. However, we defer this interesting computation of the propagator in the presence of a non-trivial background to future work. \section{Summary and outlook}\label{sec:summary} In the present work we have put forward a functional approach for the quantitative computation of full, momentum-dependent correlation functions at finite temperature in Yang-Mills theory. This is done within an FRG setting which only incorporates the thermal fluctuations. We have computed temperature-dependent Yang-Mills propagators and vertices in the Landau gauge for temperatures $T\lesssim 3 T_c$.
The chromoelectric propagator shows the expected Debye-screening for $T>T_c$ in quantitative agreement with the lattice results. For small temperatures it qualitatively shows the enhancement also seen on the lattice \cite{Maas:2011se,Cucchieri:2007ta,Fischer:2010fx,Bornyakov:2011jm,Aouane:2011fv,Maas:2011ez,Cucchieri:2011di}. However, neither the significance of the lattice results obtained so far nor the quantitative details are settled yet. The chromomagnetic propagator shows the expected thermal scaling and tends towards the three-dimensional gluon propagator, in quantitative agreement with the lattice. The ghost propagator shows only a mild enhancement with temperature, in agreement with the lattice. In contradistinction, we see a strong thermal infrared suppression of the ghost-gluon vertex which increases with temperature. The results of this paper are also summarised in \cite{proceedings}. At present we improve on the approximations made here and compute thermodynamic observables such as the pressure and the scale anomaly. The inclusion of the $A_0$-corrections discussed in Section~\ref{sec:props+vertices} should give access to the properties of the chromoelectric propagator at criticality, see \cite{Maas:2011ez}. Furthermore, we utilise the propagators obtained here in a combined approach with FRG and DSE as well as lattice results in an extension of \cite{Fischer:2009wc}. This work aims at the QCD phase structure for heavy quarks; for lattice results see \cite{Philipsen:2011zx,Fromm:2011qi}. Moreover, the glue and ghost propagators are a key input for quantitative computations in full QCD with 2 and 2+1 flavours at finite temperature and density in an extension of \cite{Braun:2009gm}. \vspace{.5em} \noindent {\it Acknowledgements} \\[0ex] We thank R.~Alkofer, J.-P.~Blaizot, J.~Braun, S.~Diehl, C.~F.~Fischer, K.~Fukushima, L.~M.~Haas, M.~Ilgenfritz, A.~Maas, U.~Reinosa, B.-J.~Schaefer, A.~Sternbeck, L.~von Smekal and C.~Wetterich for discussions, and M.~Ilgenfritz, A.~Maas and A.~Sternbeck for providing lattice results. This work is supported by the Helmholtz Alliance HA216/EMMI. LF acknowledges financial support by the Helmholtz International Center for FAIR within the LOEWE program of the State of Hesse and the Helmholtz Young Investigator Grant VH-NG-322. \noindent \bibliographystyle{bibstyle}
\section{Appendix} \subsection{Agent Outcomes} \begin{proof}[Proof of Theorem~1] Let us walk through the steps of the algorithm, bounding the error that accumulates along the way. In the first round we set $\omega = 0$ in order to obtain an estimate for $E[{\omega^*}^T x]$. Since $\omega^*$ is a unit vector, the variance of ${\omega^*}^T x$ is at most $\lambda_{max}$ plus a constant (from the $1$-subgaussian noise). This means that $O(\epsilon^{-1} d \lambda_{max})$ samples suffice for the empirical estimator of $E[{\omega^*}^T x]$ to have no more than $\frac \epsilon {4d}$ error with failure probability at most $\frac 1 {2d}$. We call the output of this estimator $\hat \mu$ and let $\hat \mu_d$ be the $d$-dimensional vector with $\hat \mu$ in every coordinate. Now we choose $\omega_1,\ldots,\omega_d$ that form an orthonormal basis of the image of the diagonal matrix $V$. For each $\omega$ we observe the reward ${\omega^*}^T (x + G\omega) + \eta$, subtract out $\hat \mu$, and plug it into the empirical mean estimator. For each $\omega_i$, let $\hat \nu_i$ be the resulting coefficient. After $O(\epsilon^{-1} d \lambda_{max})$ samples, each coefficient has at most $\frac \epsilon {4d}$ error with failure probability at most $\frac 1 {2d}$. Since we have computed $d+1$ estimators, each one with failure probability at most $\frac 1 {2d}$, a union bound gives us a total failure probability bounded by a constant. We can now bound the total $\ell_2$ error between said coefficients and $G^T \omega^*$ in the $\omega_1,\ldots,\omega_d$ basis (noting that the choice of basis does not affect the magnitude of the error). We can break up the error into two components using the triangle inequality: the error due to $\hat \mu_d$ and the error in the subsequent rounds. Each coordinate of $\hat \mu_d$ has error of magnitude at most $\frac \epsilon {4d}$, so the total magnitude of the error in $\hat \mu_d$ is at most $\frac \epsilon {4}$. The same argument applies for the error in the coordinate estimates, leading to a total $\ell_2$ error of at most $\epsilon/2$. Recall that $\hat\omega = \hat\nu/\|\hat\nu\|$. Let $\nu := G^T \omega^*$. We can now bound the gap between the agent outcomes incentivized by $\hat\omega$ and by $\omega_{mao} = \nu/\|\nu\|$: \begin{align} \AO(\omega_{mao}) - \AO(\hat\omega) &= \nu^T \frac{\nu}{\|\nu\|} - \nu^T \frac{\hat\nu}{\|\hat\nu\|} \\ &= \|\nu\| - \nu^T \frac{\hat\nu}{\|\hat\nu\|} \\ &\leq \|\nu\| - \frac{\|\nu\|(\|\nu\|-\epsilon/2)}{\|\nu\|+\epsilon/2} \\ &= \frac{\|\nu\|\epsilon}{\|\nu\|+\epsilon/2} \leq \epsilon \end{align} \end{proof} \subsection{Prediction Risk} \begin{proof}[Proof of Lemma~1] \begin{align*} Risk(\omega) &= \E[x,a]{\left (\omega^T V \left (x + Ma\right ) - {\omega^*}^T\left (x+Ma \right ) \right )^2} \\ =& \E[x,a]{\left ( \left (\omega^T Vx - {\omega^*}^T x \right) + \left (\omega^T VMa - {\omega^*}^T Ma\right ) \right )^2} \\ =& \E[x,a]{\left (\omega^T Vx - {\omega^*}^T x\right)^2} + 2\,\E[x,a]{\left(V\omega - \omega^*\right)^T x (Ma)^T \left(V\omega - \omega^*\right)} + \E[x,a]{\left (\omega^T VMa - {\omega^*}^T Ma\right )^2} \\ =& \E[x]{\left (\omega^T Vx - {\omega^*}^T x\right)^2} + \E[a]{\left (\omega^T VMa - {\omega^*}^T Ma\right )^2} \end{align*} where the last line follows because $Ma$ and $x$ are uncorrelated. \end{proof} \subsection{Parameter Estimation} \label{app:causal} In this section we describe how we recover $\omega^*$ in $L^2$-distance when there exists an $\omega$ for which the expected second moment matrix $\mathbb E[(x + G\omega)(x + G\omega)^T]$ is full rank.
Before we proceed we make a couple of observations. When there is no way to make the above matrix full rank, we cannot hope to recover $\omega^*$. If there is no natural variation in, e.g., the last two features, and furthermore no agent can act along those features, it is not possible to disentangle their potential effects on the outcome. This also suggests that parameter recovery is a more substantive demand on the decision-maker than in the standard linear regression setting. To discover this additional information, the decision-maker can incentivize the agents to take actions that help the decision-maker recover the true outcome-governing parameters. This motivates the algorithm we present in this section. It operates in three stages. First, it recovers the information necessary in order to identify the decision rule which will provide the most informative agent samples after those agents have gamed. Second, it collects data while incentivizing this action. Finally, it computes an estimate of $\omega^*$ using the collected data. We present the complete procedure in Algorithm \ref{alg:causal}. \begin{algorithm*} \caption{Recovering the Causal Model} \label{alg:causal} \begin{algorithmic}[1] \STATE Let $k_1 = \lambda_{max}(G^TG)$ and $k_2 = || \Sigma ||^2$ \STATE Let $\kappa_{min} = \lambda_{min} (\Sigma)$ \STATE Choose an $\epsilon > 0$ \STATE Let $n_1 = O(\max (\frac{d k_1}{\kappa_{min}}, \frac{d^2 k_2}{\kappa_{min}}) )$ \STATE Collect samples $x_1,\ldots,x_{n_1}$ \STATE Let $\hat \mu = \frac{1}{n_1}\sum x_i$ \STATE Let $\hat \Sigma = \frac{1}{n_1}\sum x_ix_i^T$ \STATE Let $n_2 = O( \max(d^2 ||\hat\mu||^2 \mathrm{tr}(\Sigma),\, d^3 ||G||^2 \mathrm{tr}(\Sigma) ))$ \FOR{ $i = 1,\ldots,d$} \STATE $\omega = e_i$ \STATE Sample $x^i_1, \ldots, x^i_{n_2}$ and subtract $\hat \mu$ from each one. \STATE Let $\hat G_i = \frac{1}{n_2} \sum\limits_{j = 1}^{n_2} x^i_j$ \ENDFOR \STATE Let $\hat\omega_{opt} = \argmax\limits_{\omega} \lambda_{min}\left( \hat \Sigma + 2 \hat\mu \omega^T \hat G^T + \hat G \omega \omega^T \hat G ^T \right)$ \STATE Let $n_3 = O(\frac{d}{\epsilon \kappa_{min}})$ \STATE Sample $x_1,\ldots, x_{n_3}$ with $\omega = \hat\omega_{opt}$. \STATE Return the output of OLS on $x_1,\ldots, x_{n_3}$ \end{algorithmic} \end{algorithm*} The procedure in Algorithm \ref{alg:causal} can be summarized as follows: \begin{enumerate} \item Estimate the first and second moments of the distribution of agents' features. \item Estimate the Gramian of the action matrix $G$. \item Compute the most informative choice of $\omega$. \item Collect samples under the most informative $\omega$ and then return the output of OLS. \end{enumerate} Before we proceed to the proof of correctness of Algorithm \ref{alg:causal}, let us build some intuition for why this procedure of choosing a single $\omega$ and collecting samples under said $\omega$ makes sense. As we show later, the convergence of OLS for linear regression can be controlled by the minimum eigenvalue of the second moment matrix of the samples. Our algorithm finds the value of $\omega$ that, after agents game, maximizes this minimum eigenvalue in expectation. It turns out the minimum eigenvalue of the expected second moment matrix of post-gaming samples is concave with respect to the choice of $\omega$. The concavity of the objective suggests that a priori, when choosing $\omega$s to obtain informative samples, the optimal strategy is to choose a single specific $\omega$.
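As a rough illustration of the third step (computing the most informative $\omega$ from the estimated moments), consider the following Python sketch. It is our own supplementary illustration, not part of the original procedure: the helper names are hypothetical, the projected (sub)gradient ascent stands in for a generic solver for the eigenvalue problem above, and the unit-ball constraint is an assumption we add to keep the iterates bounded.
\begin{verbatim}
import numpy as np

def post_gaming_M(Sigma_hat, mu_hat, G_hat, w):
    # Estimated E[(x + Gw)(x + Gw)^T]
    #   = Sigma + mu (Gw)^T + (Gw) mu^T + (Gw)(Gw)^T
    Gw = G_hat @ w
    return (Sigma_hat + np.outer(mu_hat, Gw)
            + np.outer(Gw, mu_hat) + np.outer(Gw, Gw))

def informative_omega(Sigma_hat, mu_hat, G_hat, steps=5000, lr=0.01):
    """Projected (sub)gradient ascent on lambda_min of the estimated
    post-gaming second moment matrix, over the unit ball (assumed)."""
    d = Sigma_hat.shape[0]
    w = np.ones(d) / np.sqrt(d)
    for _ in range(steps):
        vals, vecs = np.linalg.eigh(post_gaming_M(Sigma_hat, mu_hat, G_hat, w))
        v = vecs[:, 0]                            # eigenvector for lambda_min
        # d lambda_min = v^T (dM) v gives, by the chain rule in w:
        g = 2.0 * (G_hat.T @ v) * (v @ mu_hat + v @ (G_hat @ w))
        w = w + lr * g
        w /= max(1.0, np.linalg.norm(w))          # project back into the unit ball
    return w
\end{verbatim}
The gradient step uses the identity that, at a simple minimal eigenvalue with unit eigenvector $v$, the derivative of $\lambda_{min}$ is $v^T (dM) v$; when the minimal eigenvalue is repeated this is only a subgradient, so the sketch is a heuristic rather than a certified solver.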
The main difficulty in the rest of the algorithm is achieving the necessary precision in the estimation to be able to set up the above optimization problem to identify such an $\omega$. \textbf{Theorem 3. }\textit{ When $V = I$, the output of Algorithm \ref{alg:causal} run with parameter $\epsilon$ satisfies $|| \omega - \omega^*|| \le \epsilon$ with probability greater than $\frac 2 3$.} The proof of Theorem 3 relies on several lemmas. First we bound the $L_2$ error of OLS as a function of the empirical second moment matrix in Lemma \ref{lem:olsell2}. Note that the usual bound for the convergence of OLS is distribution dependent; that is, it controls the expected error over a random design, whereas here we need a bound in terms of the realized (fixed) design. \begin{lemma}\label{lem:olsell2} Assume $V = I$. Consider samples $x_1,\ldots, x_n$ and $y_i = {\hat\omega_{opt}}^T x_i + \eta_i$. Let $\omega$ be the output of OLS $(x_i, y_i)$. Then $$ \mathbb E_{\eta}\left [ || \omega - \hat\omega_{opt}|| ^2 \right] \le \frac{d}{n \kappa_{min}}$$ \end{lemma} The proof is elementary and a slight modification of the standard textbook proof (see, for example, \cite{liangstat}). The proof of the theorem also requires that the optimization used to choose the optimal $\omega$ is well behaved; this is the content of Lemma \ref{lem:convex}. \begin{lemma}\label{lem:convex} The minimum eigenvalue of the following matrix is concave with respect to $\omega$ for any values of $x,G$. $$ \sum\limits_i (x_i + \hat G \omega)(x_i + \hat G \omega)^T$$ Furthermore, when the following conditions are true, the minimum eigenvalue of $\mathbb E[(x + G \omega)(x + G \omega)^T ]$ at the computed $\omega$ is within a constant factor of the optimal value. \begin{itemize} \item $|| \hat \Sigma - \Sigma ||^2 \le \epsilon$ \item $|| \mu - \hat \mu || ^2 \le \frac{\lambda_{max}(G^T G) \epsilon }{d}$ \item $ ||\hat G - G||^2 \le \frac{\epsilon}{d || \mu ||^2}$ \item $ || \hat G - G ||^2 \le \frac{\epsilon}{d^2 || G ||^2}$ \end{itemize} Finally, the above holds true even for an $\omega$ with distance at most $O(\frac{1}{\mathrm{poly}(d)})$ from the optimum. \end{lemma} Finally, we use a minor lemma for the recovery of a random vector via the empirical mean estimator. Note that we treat the matrix $G$ as a vector. \begin{lemma}\label{lem:grec} Assume $V = I$. Let $g_1^i,\ldots, g_n^i$ be drawn from the distribution $G_i + \xi$ and $\hat G$ be the empirical mean estimator computed from said $g_j^i$'s. Let $\Sigma$ be the expected second moment matrix of the $\xi$s. Then $$ \mathbb E_\xi ||G - \hat G ||^2 \le \frac{d^2 \mathrm{tr}(\Sigma)}{n}$$ \end{lemma} We proceed with the proof of Theorem 3 below. \begin{proof} The first step of the algorithm recovers estimates of $\Sigma$ and $\mu$. Note that $n_1$ samples suffice to recover $\hat \Sigma$ and $\hat \mu$ such that: \begin{itemize} \item $|| \hat \Sigma - \Sigma ||^2 \le \epsilon$ \item $|| \mu - \hat \mu || ^2 \le \frac{\lambda_{max}(G^T G) \epsilon }{d}$ \end{itemize} The for loop recovers an estimate of $G$. Via Lemma \ref{lem:grec}, the samples suffice to ensure that the following two conditions hold: \begin{itemize} \item $ ||\hat G - G||^2 \le \frac{\epsilon}{d || \mu ||^2}$ \item $ || \hat G - G ||^2 \le \frac{\epsilon}{d^2 || G ||^2}$ \end{itemize} Then the algorithm computes an estimate of the optimal $\omega$. Via Lemma \ref{lem:convex}, the minimum eigenvalue attained by this approximate solution is within a constant factor of the optimum. This $\omega$ guarantees that $n_3$ samples suffice to ensure the recovery of $\omega^*$ within squared $L^2$-distance of $O(\epsilon)$ in expectation.
Finally, the expectations can be used with a Markov inequality to ensure the algorithm succeeds with (arbitrarily high) constant probability. \end{proof} Now we prove the lemmas. We begin with Lemma \ref{lem:olsell2}. This proof is a slight modification of the textbook proof for the convergence of OLS. \begin{proof} In this section we derive a bound on the convergence of the least squares estimator when a fixed design matrix $X$ is used. Note this is exactly the case we encounter, since the choice of $\omega$ lets us affect the entries of the design matrix. This is a standard, textbook result and not a main contribution of the paper. In order to state the result more formally we have to introduce some notation. The goal of the procedure is to recover $\hat\omega_{opt}$, when given tuples $(x_i, {\hat\omega_{opt}}^T x_i + \eta_i)$ where $\eta_i$ is 1-subgaussian. We aim to characterize $||\omega - \hat\omega_{opt} || $ where $\omega$ is obtained from ordinary least squares. Let $X$ be the matrix with the $x_i$'s as its rows. Let $\kappa_{min}$ be the minimum eigenvalue of $\frac 1 n X^TX$ (the second moment matrix). Below all expectations are taken \emph{only} over the random noise. We assume the second moment matrix is full rank. \begin{align*} \mathbb E[|| \omega - \hat\omega_{opt}|| ^2] &\le \mathbb E[ \frac{1}{n\kappa_{min}} (\omega - \hat\omega_{opt})^T X^T X (\omega - \hat\omega_{opt})]\\ & = \mathbb E[\frac{1}{n\kappa_{min}} || X (\omega - \hat\omega_{opt})|| ^2] \\ &= \frac{1}{n\kappa_{min}} \mathbb E[|| X(X^TX)^{-1}X^T (X \hat\omega_{opt} + \eta) - X\hat\omega_{opt}|| ^2]\\ &= \frac{1}{n\kappa_{min}} \mathbb E[|| X(X^TX)^{-1}X^T \eta|| ^2]\\ &\le \frac{d}{n \kappa_{min}} \end{align*} This motivates our procedure for parameter recovery. We do so in a fashion that attempts to maximize $\kappa_{min}$. Note that it is the minimum eigenvalue that determines the convergence rate. This is due to the fact that little variation along a dimension makes it hard to disentangle the features' effect on the outcome via $\hat\omega_{opt}$ from the constant-variance noise $\eta$. \end{proof} Lemma \ref{lem:convex} is somewhat more involved. It is proven in three parts. The first is that the objective is concave. The second is that approximate recovery of $\Sigma, \mu,$ and $G$ suffices for approximately optimizing the original expression. The third is that an approximate solution suffices. \begin{proof} In this section we describe how to choose the value of $\omega$ that maximizes the value of $\kappa_{min}$ for the samples we obtain. To do so, we examine the expectation of the second moment matrix and make several observations. Let $\Sigma$ denote the expected second moment matrix of $x$ (i.e.\ $\mathbb E[xx^T]$). We have: $$ \mathbb E[(x + G \omega) (x + G \omega)^T] = \Sigma + 2 \mu \omega^T G^T + G \omega \omega^T G^T$$ \begin{enumerate} \item The minimum eigenvalue of the above expression is concave with respect to $\omega$. This follows due to the following: $x + G \omega$ is affine in $\omega$, the minimum eigenvalue of a Gramian matrix $X^T X$ is concave with respect to $X$, and the expectation of a concave function is concave \cite{boyd2004convex}. \item Since the agent attempts to maximize their motion in the $\omega$ direction, we want to ensure that we move them toward the direction that maximizes the minimum eigenvalue of $\mathbb E[ (x + G \omega) (x + G \omega)^T]$. \end{enumerate} However, we do not operate with exact knowledge of $G$, etc.
It turns out that even approximately solving this optimization problem with estimates for $G, \Sigma, \mu$ suffices for our purposes, as long as the $\omega$ we obtain from our optimization (using the estimates) results in a high value for the minimum eigenvalue of $\mathbb E[(x + G \omega) ( x + G \omega)^T] $. Let $\hat \omega$ be the maximizing argument for the estimated optimization problem and let $\omega$ be the maximizing argument for the original optimization problem. Let $Q$ be the true maximized second moment matrix including gaming, and $\hat Q$ be the maximizing second moment matrix with gaming resulting from replacing the true $\Sigma, \mu, G$ with the estimates. In formal terms, we need to show the minimum eigenvalue of the following is large: $ \mathbb E [(x + G \hat \omega) (x + G \hat \omega)^T] $. We note that when $y^T \hat Q y$ is within $\epsilon$ of $y^T Q y$ for all $y$ in the unit ball, the minimum eigenvalues may differ by at most $\epsilon$. \begin{align*} | y^T \hat Q y - y ^T Q y|^2 &= | y^T (\hat Q - Q) y|^2\\ &\le \lambda_{max}^2 (\hat Q - Q)\, || y ||^4\\ & \le || \hat Q - Q||^2 \end{align*} And now we bound $|| \hat Q - Q ||^2$, assuming the following: \begin{enumerate} \item $|| \hat \Sigma - \Sigma||^2 \le \epsilon $ \item $|| \mu - \hat \mu ||^2 \le \frac{\lambda_{max}(G^T G) \epsilon}{d}$ \item $|| \hat G - G ||^2 \le \frac \epsilon {d ||\mu||^2}$ \item $|| \hat G - G ||^2 \le \frac{\epsilon} {d^2 || G ||^2}$ \end{enumerate} We work out the bound below. \begin{align*} || \hat Q - Q ||^2 &= || \Sigma + 2 \mu {\hat\omega_{opt}}^T G^T + G \hat\omega_{opt} {\hat\omega_{opt}}^T G^T - (\hat \Sigma + 2 \hat \mu {\hat\omega_{opt}}^T \hat G^T + \hat G \hat\omega_{opt} {\hat\omega_{opt}}^T \hat G^T)||^2\\ &\le || \Sigma - \hat \Sigma|| ^2 + 2|| \mu {\hat\omega_{opt}}^T G^T - \hat \mu {\hat\omega_{opt}}^T \hat G^T|| ^2 + || \ldots|| ^2\\ & \le \epsilon + 2|| \mu {\hat\omega_{opt}}^T G^T + \hat \mu {\hat\omega_{opt}}^TG^T- \hat \mu {\hat\omega_{opt}}^TG^T- \hat \mu {\hat\omega_{opt}}^T \hat G^T|| ^2 + || \ldots|| ^2\\ &\le \epsilon + d|| \mu - \hat \mu||^2 || {\hat\omega_{opt}}^T G ||^2 + || \hat G - G||^2 || \hat \mu {\hat\omega_{opt}}^T||^2 + \ldots\\ &\le \epsilon + \epsilon + || \hat G - G||^2 || \hat \mu {\hat\omega_{opt}}^T +\mu {\hat\omega_{opt}}^T - \mu {\hat\omega_{opt}}^T ||^2 + \ldots\\ &\le \epsilon + \epsilon + || \hat G - G||^2 (|| \hat \mu - \mu||^2 + || \mu {\hat\omega_{opt}}^T||^2) + \ldots\\ & \le \epsilon + \epsilon + || \hat G - G||^2 d || \mu||^2 + \ldots\\ & \le 3 \epsilon + || \hat G \hat\omega_{opt} {\hat\omega_{opt}}^T \hat G^T - G \hat\omega_{opt} {\hat\omega_{opt}}^T G^T ||^2\\ &\le 3 \epsilon + || \hat G \hat\omega_{opt} {\hat\omega_{opt}}^T \hat G^T - \hat G \hat\omega_{opt} {\hat\omega_{opt}}^T G + \hat G \hat\omega_{opt} {\hat\omega_{opt}}^T G - G \hat\omega_{opt} {\hat\omega_{opt}}^T G^T ||^2\\ &\le 3 \epsilon + || (\hat G - G) \hat\omega_{opt} {\hat\omega_{opt}}^T \hat G^T ||^2 + || (\hat G - G) \hat\omega_{opt} {\hat\omega_{opt}}^T G^T ||^2\\ & \le 3 \epsilon + || (\hat G - G) \hat\omega_{opt} {\hat\omega_{opt}}^T \hat G^T - (\hat G - G) \hat\omega_{opt} {\hat\omega_{opt}}^T G^T + (\hat G - G) \hat\omega_{opt} \hat\omega_{opt}^T G^T ||^2 + d^2|| \hat G - G||^2 ||G ||^2\\ & \le 4 \epsilon + ||(\hat G - G) \hat\omega_{opt} {\hat\omega_{opt}}^T (\hat G - G) ||^2 + || (\hat G - G) \hat\omega_{opt} {\hat\omega_{opt}}^T G^T ||^2\\ & \le 5 \epsilon + d^2 || \hat G - G||^4 \\ & \le 6 \epsilon \end{align*} This means if we find an $\epsilon$-approximate solution to the system with
the estimated values, we obtain a $\kappa_{min}$ within $6\epsilon$ of the optimal. \end{proof} Finally, we present the proof of Lemma \ref{lem:grec}: \begin{proof} Recall that when the decision-maker fixes $\omega$, it receives samples of the form $x + G \omega$. We note this can be used to recover the matrix $G$. In particular, we show how $d$ rounds, each with $O(\frac{d \mathrm{tr}(\Sigma)}{\epsilon})$ samples, suffice to recover the matrix to squared Frobenius norm $\epsilon$. Recall the procedure we propose simply chooses $\omega = e_1,\ldots,e_d$, the one-hot coordinate vectors, one in each round. We first bound the error in $\hat G$ coordinate-wise: $\mathbb E[||\hat G_{i,j} - G_{i,j}||^2] \le \frac{\mathbb E[x_i^2]}{n}$. A union bound across coordinates shows that $O(\frac{d^2 \mathrm{tr}(\Sigma)}{\epsilon})$ samples in total suffice to recover $G$ within squared Frobenius norm $\epsilon$. \end{proof} \section{Conclusion} In this work, we have introduced a model and techniques for analyzing decision-making about strategic agents. We provide algorithms for leveraging this strategic behavior to maximize population performance, accuracy, and causal precision. Let us dwell on several real-world considerations that should inform the utility of these algorithms. First, while these algorithms eventually yield more desirable decision-making functions, they substantially reduce the decision-maker's accuracy in the short term while exploration is occurring, and this tradeoff should inform their use. In general, these procedures make the most sense in scenarios with a fairly small period of delay between decision and outcome (e.g. predicting short-term creditworthiness rather than long-term professional success), as at each new round the decision-maker must wait this length of time to receive the first samples gamed according to the new rule. That said, our algorithms are either non-adaptive procedures, or have very limited adaptivity. This allows them to be parallelized in a straightforward fashion by using different decision rules for different agents. It is also important that these methods only be applied to agents whose actions do not change their state in irreversible and possibly detrimental ways. In particular, in the case of repeated decision-making (as in a credit scoring setting), an agent may make changes to respond to a temporary decision rule, only to realize their features leave them in a worse position when the decision rule changes as part of the type of algorithm we've described. Only if agents have no future expectation of the consistency of the decision rule, or if they receive a decision only once (as in college admissions), can we be certain that the exploration induced by the decision rule is not exploitative. (After all, agents will only incur action cost if they actually benefit from the decision they'll receive.) As we have mentioned, our model has several notable social implications. First, in many settings, our results show that decision-makers are incentivized not only to be fully transparent, but to be actively informative. Sharing details about the workings of their algorithm can potentially maximize both the decision-maker's and agents' utilities. More radically, agents may actually be incentivized to join together to construct a decision-maker if one does not exist.
The agents themselves may wish to know $\omega_{mao}$, and the way to do that is to aggregate their data, and have the agent-led decision-maker reward different agents for gaming in different directions, in order to more quickly identify the true causal parameters, as in Section \ref{sec:causal}. Our work opens up several avenues for future research. First, one can explore algorithms that work under a more general set of action cost models, or where different agents have different action costs, or where the actions available to agents are state-dependent. One can explore methods for minimizing the regret of decision-makers during the exploration phase of our algorithms. One could also explore the dynamics of decision rules when agents have persistent states across multiple decisions, or work on improving the efficiency of the algorithms we proposed. Finally, one could extend these results to the setting where both the decision rule, and the true outcome-producing function, are non-linear. \section{Introduction} As individuals, we want algorithmic transparency in decisions that affect us. Transparency lets us audit models for fairness and correctness, and allows us to understand what changes we can make to receive a different decision. Why, then, are some models kept hidden from the view of those subject to their decisions? Beyond setting-specific concerns like intellectual property theft or training-data extraction, the canonical answer is that transparency would allow strategic individual \emph{agents} to ``game'' the model. These individual agents will act to change their features to receive better \emph{decisions}. An accuracy-minded decision-maker, meanwhile, chooses a decision rule based on how well it predicts individuals' true future \emph{outcomes}. Strategic agents, the conventional wisdom goes, make superficial changes to their features that will lead to them receiving more desirable decisions, but these feature changes will not affect their true outcomes, reducing the decision rule's accuracy and harming the decision-maker. The field of strategic classification \cite{hardt2016} has until recently sought to design algorithms that are robust to such superficial changes. At their core, these algorithms treat transparency as a reluctant concession and propose ways for decision-makers to get by nonetheless. But what if decision-makers could \emph{benefit} from transparency? What if in some settings, gaming could help accomplish the decision-makers' goals, by causing agents to truly improve their outcomes without loss of predictive accuracy? Consider the case of car insurance companies, who wish to choose a pricing decision rule that charges a customer in line with the expected costs that customer will incur through accidents. Insurers will often charge lower prices to drivers who have completed a ``driver's ed'' course, which teaches comprehensive driving skills. In response, new drivers will often complete such courses to reduce their insurance costs. One view may be that only \emph{ex ante} responsible drivers seek out such courses, and that were an unsafe driver to complete such a course, it would not affect their expected cost of car accidents. But another interpretation is that drivers in these courses learn safer driving practices, and truly become safer drivers \emph{because} they take the course.
In this case, a car insurer's decision rule \emph{remains predictive} of accident probability when agents strategically game the decision rule, while also incentivizing the population of drivers to act in a way that truly causes them to have fewer accidents, which means the insurer needs to pay out fewer reimbursements. This same dynamic appears in many decision settings where the decision-maker has a meaningful stake in the true future outcomes of its subject population, including credit scoring, academic testing, hiring, and online recommendation systems. In such scenarios, given the right decision rule, decision-makers can \emph{gain} from transparency. But how can we find such a decision rule that maximizes agents' outcomes if we do not know the effects of agents' feature-changing actions? In recent work, \citet{miller2020} argue that finding such ``agent outcome''-maximizing decision rules requires solving a non-trivial causal inference problem. As we illustrate in Figure \ref{fig:causal_graph}, the decision rule affects the agents' features, which causally affect agents' outcomes, and recovering these relationships from observational data is hard. We will refer to this setting as ``causal strategic learning'', in view of the causal impact of the decision rule on agents' outcomes. \begin{figure}\centering \begin{tikzpicture} \node[state] (w) [dashed] at (0,0) {$\omega$}; \node[state] (x) [right =of w] {$x$}; \path (w) edge (x); \node[state] (y) [below right =of x] {$y$}; \path (x) edge (y); \node[state] (wstar) [below left=of y] {$\omega^*$}; \path (wstar) edge (y);\end{tikzpicture} \caption{A causal graph illustrating that by intervening on the decision rule $\omega$, a decision-maker can incentivize a change in $x$, enabling them to learn about how the agent outcome $y$ is caused. We omit details of our setting for simplicity.} \label{fig:causal_graph} \end{figure} The core insight of our work is that while we may not know how agents will respond to any arbitrary decision rule, they will naturally respond to any particular rule we pick. Thus, as we test different decision rules and observe strategic agents' responses and true outcomes, we can improve our decision rule over time. In the language of causality, by choosing a decision rule we are effectively launching an intervention that allows us to infer properties of the causal graph, circumventing the hardness result of \citet{miller2020}. In this work, we introduce the setting of causal strategic linear regression in the realizable case and with norm-squared agent action costs. We propose algorithms for efficiently optimizing three possible objectives that a decision-maker may care about. \emph{Agent outcome maximization} requires choosing a decision rule that will result in the highest expected outcome of an agent who games that rule. \emph{Prediction risk minimization} requires choosing a decision rule that accurately predicts agents' outcomes, even under agents' gaming in response to that same decision rule. \emph{Parameter estimation} involves accurately estimating the parameters of the true causal outcome-generating linear model. We show that these may be mutually non-satisfiable, and our algorithms optimize each objective independently (and jointly when possible). Additionally, we show that omitting unobserved yet outcome-affecting features from the decision rule has major consequences for causal strategic learning.
Omitted variable bias in classic linear regression leads a learned predictor to reward non-causal visible features which are correlated with hidden causal features \cite{greene2003econometric}. In the strategic case, this backfires, as an agent's action may change a visible feature without changing the hidden feature, thereby breaking this correlation, and undermining a naïvely-trained predictor. All of our methods are designed to succeed even when actions break the relationships between visible proxies and hidden causal features. Another common characteristic of real-life strategic learning scenarios is that in order to game one feature, an agent may need to take an action that also perturbs other features. We follow \citet{kleinberg2019} in modeling this by having agents take actions in \emph{action space} which are mapped to features in \emph{feature space} by an \emph{effort conversion matrix}. As much of the prior literature has focused on the case of binary classification, it's worth meditating on why we focus on regression. Many decisions, such as loan terms or insurance premiums, are not binary ``accept/reject''s but rather lie somewhere on a continuum based on a prediction of a real-valued outcome. Furthermore, many ranking decisions, like which ten items to recommend in response to a search query, can instead be viewed as real-valued predictions that are post-processed into an ordering. \subsection{Summary of Results} In Section \ref{sec:setting}, we introduce a setting for studying the performance of linear models that make real-valued decisions about strategic agents. Our methodology incorporates the realities that agents' actions may causally affect their eventual outcomes, that a decision-maker can only observe a subset of agents' features, and that agents' actions are constrained to a subspace of the feature space. We assume no prior knowledge of the agent feature distribution or of the actions available to strategic agents, and require no knowledge of the true outcome function beyond that it is itself a noisy linear function of the features. In Section \ref{sec:outcome}, we propose an algorithm for efficiently learning a decision rule that will maximize agent outcomes. The algorithm involves publishing a decision rule corresponding to each basis vector; it is non-adaptive, so each decision rule can be implemented in parallel on a distinct portion of the population. In Section \ref{sec:risk}, we observe that under certain checkable conditions the prediction risk objective can be minimized using gradient-free convex optimization techniques. We also provide a useful decomposition of prediction risk, and suggest how prediction risk and agent outcomes may be jointly optimized. In Section \ref{sec:parameter_estimation}, we show that in the case where all features that causally affect the outcome are visible to the decision-maker, one can substantially improve the estimate of the true model parameters governing the outcome. At a high level, this is because by incentivizing agents to change their features in certain directions, we can improve the conditioning of the second moment matrix of the resulting feature distribution.
\input{sec_related_work.tex} \section{Problem Setting} \label{sec:setting} Our setting is defined by the interplay between two parties: \emph{agents}, who receive decisions based on their features, and a \emph{decision-maker}, who chooses the decision rule that determines these decisions.\footnote{In the strategic classification literature, these are occasionally referred to as the ``Jury'' and ``Contestant''.} We visualize our setting in Figure \ref{fig:setting}. \begin{figure}[ht] \centering \includegraphics[width=\linewidth, trim= 60 40 40 0, clip]{figures/setting_graphic.pdf} \caption{ Visualization of the linear setting. Each box corresponds to a real-valued scalar, with value indicated by shading. The two boxes with dark red outlines represent features that are correlated in the initial feature distribution $P$. } \label{fig:setting} \end{figure} Each agent is described by a feature vector $x \in \mathbb{R}^{d'}$,\footnote{For simplicity of notation, we implicitly use homogeneous coordinates: one feature is 1 for all agents. The matrices $V$ and $M$, defined in this section, are such that this feature is visible to the decision-maker and unperturbable by agents. } initially drawn from a distribution $P \in \Delta(\mathbb{R}^{d'})$ over the feature-space with second moment matrix $\Sigma = \E[x \sim P]{x x^T}$. Agents can choose an action vector $a \in \mathbb{R}^k$ to change their features from $x$ to $x_g$, according to the following update rule: $x_g = x + Ma$, where the \emph{effort conversion matrix} $M \in \mathbb{R}^{d' \times k}$ has an $(i, j)$th entry corresponding to the change in the $i$th feature of $x$ as a result of spending one unit of effort along the $j$th direction of the action space. Each action dimension can affect multiple features simultaneously. For example, in the context of car insurance, a prospective customer's action might be ``buy a new car'', which can increase both the safety rating of the vehicle and the potential financial loss from an accident. The car-buying action might correspond to a column $M_1 = (2, 10000)^T$, in which the two entries represent the action's marginal impact on the car's safety rating and cost-to-refund-if-damaged respectively. $M$ can be rank-deficient, meaning some feature directions cannot be controlled independently through any action. Let $y$ be a random variable representing an agent's true outcome, which we assume is decomposable into a noisy linear combination of the features $y \vcentcolon= \langle \omega^*, x_g \rangle + \eta$, where $\omega^* \in \mathbb{R}^{d'}$ is the true \emph{parameter vector}, and $\eta$ is a subgaussian noise random variable with variance $\sigma$. Note that $\omega^*_i$ can be understood as the causal effect of a change in feature $i$ on the outcome $y$, in expectation. Neither the decision-maker nor the agent knows $\omega^*$. To define the decision-maker's behavior, we must introduce an important aspect of our setting: the decision-maker never observes an agent's complete feature vector $x_g$, but only a subset of those features, $Vx_g$, where $V$ is a diagonal projection matrix with $1$s for the $d$ visible features and $0$s for the hidden features. Now, our decision-maker assigns decisions $\langle \omega, Vx_g \rangle $, where $\omega \in \mathbb{R}^{d'}$ is the \emph{decision rule}. Note that because the hidden feature dimensions of $\omega$ are never used, we will define them to be $0$, and thus $\omega$ is functionally defined in the $d$-dimensional visible feature subspace.
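To make the moving parts of the setting concrete, the following small Python simulation runs one round for a single agent. It is our own illustrative sketch: the dimensions and matrices are arbitrary toy values, and the closed-form best response $a(\omega) = M^T V \omega$ is the one derived later in the proof of Theorem \ref{thm:improve}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_full, d_vis, k = 4, 3, 2                 # total features, visible features, action dims

M = rng.standard_normal((d_full, k))       # effort conversion matrix
V = np.diag([1.0, 1.0, 1.0, 0.0])          # projection: last feature is hidden
w_star = rng.standard_normal(d_full)       # true parameter vector (unknown to all)
w = np.zeros(d_full)
w[:d_vis] = rng.standard_normal(d_vis)     # decision rule; zero on hidden dims

x = rng.standard_normal(d_full)            # agent's initial features
a = M.T @ (V @ w)                          # agent's utility-maximizing action
x_g = x + M @ a                            # gamed features
y = w_star @ x_g + rng.standard_normal()   # noisy true outcome
decision = w @ (V @ x_g)                   # decision assigned by the rule
\end{verbatim}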
For convenience, we define the matrix $G = MM^TV$ as shorthand. (We will see that $G$ maps $\omega$ to the movement in agents' feature vectors it incentivizes.) Agents incur a cost $C(a)$ based on the action they chose. Throughout this work, this cost is quadratic: $C(a) = \frac{1}{2} \|a\|_2^2$. This corresponds to a setting with increasing marginal costs for taking any particular action. Importantly, we assume that agents will best-respond to the published decision rule by choosing whichever action $a(\omega)$ maximizes their utility, defined as their received decision minus incurred action cost:\footnote{Note that this means that all strategic agents will, for a given decision rule $\omega$, choose the same gaming action $a(\omega)$. } \begin{equation} a(\omega) = \arg\max_{a' \in \mathbb{R}^k} \left [ \langle \omega, V(x + Ma')\rangle - \frac{1}{2}\|a'\|^2 \right ] \end{equation} However, to take into account the possibility that not all agents will in practice study the decision rule to figure out the action that will maximize their utility, we further assume that only a $p$ fraction of agents game the decision rule, while a $1-p$ fraction remain at their initial feature vector. Now, the interaction between agents and the decision-maker proceeds in a series of rounds, where a single round $t$ consists of the sequence described in the following figure: \begin{figure}[ht]\label{fig:setting_sequence}\fbox{ \parbox{\columnwidth}{ For round $t \in \{1,\dots,r\}$: \begin{enumerate} \item The decision-maker publishes a new decision rule $\omega_t$. \item A new set of $n$ agents arrives: $\{x \sim P\}_n$. \\ Each agent games w.r.t. $\omega_t$; i.e. $x_g \leftarrow x + Ma(\omega_t)$. \item The decision-maker observes the post-gaming visible features $V x_g$ for each agent.\\ Agents receive decisions $\omega_t^TV x_g$. \item The decision-maker observes the outcome $y = {\omega^*}^T x_g + \eta$ for each agent. \end{enumerate}}} \end{figure} In general, we will assume that the decision-maker cares more about minimizing the number of rounds required for an algorithm than the number of agent samples collected in each round. We now turn to the three objectives that decision-makers may wish to optimize. \begin{objective}\label{obj:improvement} The \emph{agent outcomes} objective is the average outcome over the agent population after gaming: \begin{equation} \label{eq:improvement} \AO(\omega) := \E[x \sim P, \eta]{\langle \omega^*, x + Ma(\omega)\rangle + \eta} \end{equation} \end{objective} In subsequent discussion we will restrict $\omega$ to the unit $\ell_2$-ball, as arbitrarily high outcomes could be produced if $\omega$ were unbounded. An example of a decision-maker who cares about agent outcomes is a teacher formulating a test for their students---they may care more about incentivizing the students to learn the material than about accurately stratifying students based on their knowledge of the material. \begin{objective}\label{obj:accuracy} \emph{Prediction risk} captures how close the output of the model is to the true outcome. It is measured in terms of expected squared error: \begin{multline} \label{eq:accuracy} Risk(\omega) = \mathbb{E}_{x \sim P, \eta} \big[\big (\langle \omega^*,x + Ma(\omega)\rangle \\+ \eta - \langle\omega, V(x +Ma(\omega)) \rangle \big)^2 \big] \end{multline} \end{objective} A decision-maker cares about minimizing prediction risk if they want the scores they assign to individuals to be as predictive of their true outcomes as possible.
For example, insurers' profitability is contingent on neither over- nor under-estimating client risk. In the realizable linear setting, there is a natural third objective: \begin{objective}\label{obj:causal} \emph{Parameter estimation} error measures how close the decision rule's coefficients are to the visible coefficients of the underlying linear model: \begin{equation}\label{eq:causal} \|V(\omega - \omega^*)\|_2 \end{equation} \end{objective} A decision-maker might care about estimating parameters accurately if they want their decision rule to be robust to unpredictable changes in the feature distribution, or if knowledge of the causal effects of the features could help inform the design of other interventions. Below, we show that these objectives may be mutually non-satisfiable. A natural question is whether we can optimize a weighted combination of these objectives. In Section \ref{sec:risk}, we outline an algorithm for optimizing a weighted combination of prediction risk and agent outcomes. Our parameter recovery algorithm will only work in the fully-visible ($V=I$) case; in this case, all three objectives are jointly satisfied by $\omega = \omega^*$, though each objective implies a different type of approximation error and thus requires a different algorithm. \subsection{Illustrative example}\label{sec:example} \begin{figure} \centering \includegraphics[width=\linewidth, trim=0 50 0 0, clip]{figures/objective_inconsistency.pdf} \caption{A toy example in which the objectives are mutually non-satisfiable. Each $\omega$ optimizes a different objective.} \label{fig:conflicting_objectives} \end{figure} To illustrate the setting, and demonstrate that in certain cases these objectives are mutually non-satisfiable, we provide a toy scenario, visualized in Figure \ref{fig:conflicting_objectives}. Imagine a car insurer that predicts customers' expected accident costs given access to three features: (1) whether the customer owns their own car, (2) whether they drive a minivan, and (3) whether they have a motorcycle license. There is a single hidden, unmeasured feature: (4) how defensive a driver they are. Let $\omega^* = (0, 0, 1, 1)$, i.e.\ of these features, only knowing how to drive a motorcycle and being a defensive driver actually reduce the expected cost of accidents. Let the initial data distribution have ``driving a minivan'' correlate with defensive driving (because minivan drivers are often parents worried about their kids). Let the first effort conversion column $M_1 = (1, 0, 0, 2)$ represent the action of purchasing a new car, which also leads the customer to drive more defensively to protect their investment. Let the second effort conversion column $M_2=(0, 0, 1, -2)$ be the action of learning to ride a motorcycle, which slightly improves the likelihood of safe driving by conferring upon the customer more understanding of how motorcyclists on the road will react, while making the customer themselves substantially more thrill-seeking and thus reckless in their driving and prone to accidents. How should the car insurer choose a decision rule to maximize each objective?
If the rule rewards customers who own their own car (1), this will incentivize customers to purchase new cars and thus become more defensive (good for agent outcomes), but will cause the decision-maker to be inaccurate on the $(1-p)$-fraction of non-gaming agents who already had their own cars and are no more likely to avoid accidents than without this decision rule (bad for prediction risk), and owning a car does not truly itself reduce expected accident costs (bad for parameter estimation). Minivan-driving (2) may be a useful feature for prediction risk because of its correlation with defensive driving, but anyone buying a minivan specifically to reduce insurance payments will not be a safer driver (unhelpful for agent outcomes), nor does minivan ownership truly cause lower accident costs (bad for parameter estimation). Finally, if the decision rule rewards customers who have a motorcycle license (3), this does reflect the fact that possessing a motorcycle license itself does reduce a driver's expected accident cost (good for parameter estimation), but an agent going through the process of acquiring a motorcycle license will do more harm than good to their overall likelihood of an accident due to the action's side effects of also making them a less defensive driver (bad for agent outcomes), and rewarding this feature in the decision rule will lead to poor predictions as it is negatively correlated with expected accident cost once the agents have acted (bad for prediction risk). The meta-point is that when some causal features are hidden from the decision-maker, there may be a tradeoff between the agents' outcomes, the decision-rule's predictiveness, and the correctness of parameters recovered by the regression. In the rest of the paper, we will demonstrate algorithms for finding the optimal decision rules for each objective, and discuss prospects for optimizing them jointly. \section{Agent Outcome Maximization} \label{sec:outcome} In this section we propose an algorithm for choosing a decision rule that will incentivize agents to choose actions that (approximately) maximally increase their outcomes. Throughout this section, we will assume that without loss of generality $p=1$ and $\|\omega^*\| = 1$. If only a subset of agents are strategic ($p<1$), the non-strategic agents' outcomes cannot be affected and can thus be safely ignored. In our car insurance example, this means choosing a decision rule that causes drivers to behave the most safely, regardless of whether the decision rule accurately predicts accident probability or whether it recovers the true parameters. Let $\omega_{mao}$ be the decision rule that maximizes agent outcomes: \begin{equation} \omega_{mao} := \argmax_{\omega \in \mathbb{R}^d, \|\omega\|_2 \leq 1} \AO(\omega) \end{equation} \begin{theorem}\label{thm:improve} Suppose the feature vector second moment matrix $\Sigma$ has largest eigenvalue bounded above by $ \lambda_{max}$, and suppose the outcome noise $\eta$ is 1-subgaussian. Then Algorithm~\ref{alg:improvement} estimates a parameter vector $\hat\omega$ in $d+1$ rounds with $O(\epsilon^{-1} \lambda_{max}d)$ samples in each round such that w.h.p. \[ \AO(\hat\omega) \geq \AO(\omega_{mao}) - \epsilon.
\] \end{theorem} \begin{algorithm} \caption{Agent Outcome Maximization} \label{alg:improvement} \begin{algorithmic} \STATE {\bfseries Input:} $\lambda_{max}, d, \epsilon$ \STATE Let $n = 100 \epsilon^{-1} \lambda_{max}d$ \STATE Let $\{\omega_i\}_{i=1}^d$ be an orthonormal basis for $\mathbb{R}^d$ \STATE Sample $(x_1,y_1) \ldots (x_n,y_n)$ with $\omega = 0$. \STATE Let $\hat \mu = \frac{1}{n} \sum_{j=1}^n y_j$ \FOR{$i=1$ {\bfseries to} $d$} \STATE Sample $(x_1,y_1) \ldots (x_n,y_n)$ with $\omega = \omega_i$ \STATE Let $\hat\nu_i = \frac{1}{n} \sum_{j=1}^n y_j - \hat\mu$ \ENDFOR \STATE Let $\hat\nu = (\hat\nu_1,\dots,\hat\nu_d)^T$ \STATE Let $\hat\omega = \hat\nu/\|\hat\nu\|$ \STATE Return {$\hat\omega$} \end{algorithmic} \end{algorithm} \begin{proofidea} First we note that it is straightforward to compute the action that each agent will take. Each agent maximizes $\omega^T V (x + Ma) - \frac 1 2 \|a\|^2$ over $a \in \mathbb{R}^k$. Note that $\nabla_a(\omega^T V Ma - \frac 1 2 \|a\|^2)= M^TV\omega - a$. Thus, \begin{align*} &\argmax\limits_{a} \omega^T V (x + Ma) - \frac 1 2 \|a\|^2\\ &=\argmax\limits_{a} \omega^T V Ma - \frac 1 2 \|a\|^2\\ &= M^T V \omega \end{align*} That is, every agent chooses an action such that $x_g = x + MM^TV\omega = x + G\omega$ (recall we have defined $G := MM^T V$ for notational compactness). This means that if the decision-maker publishes $\omega$, the resulting expected agent outcome is $\AO(\omega) = \E[x \sim P]{{\omega^*}^T x + {\omega^*}^T G\omega}$. Hence, \[ \omega_{mao} = \frac{G^T \omega^*}{\|G^T \omega^*\|_2} \] In the first round of the algorithm, we set $\omega = 0$ and obtain an empirical estimate of $\AO(0) = \E[x \sim P]{{\omega^*}^T x}$.\footnote{Alternatively, the decision-maker could run this algorithm but with all parameter vectors shifted by some $\omega$.} We then select an orthonormal basis $\{\omega_i\}_{i=1}^d$ for $\mathbb{R}^d$. In each subsequent round, we publish an $\omega_i$ and obtain an estimate of $\AO(\omega_i)$. Subtracting the estimate of $\AO(0)$ yields an estimate of $\E[x \sim P]{{\omega^*}^T G \omega_i}$, which is a linear measurement of $G^T \omega^*$ along the direction $\omega_i$. Combining these linear measurements, we can reconstruct an estimate of $G^T \omega^*$. The number of samples per round is chosen to ensure that the estimate of $G^T \omega^*$ is within $\ell_2$-distance at most $\epsilon/2$ of the truth. We set $\hat\omega$ to be this estimate scaled to have norm 1. A simple argument allows us to conclude that $\AO(\hat\omega)$ is within an additive error of $\epsilon$ of $\AO(\omega_{mao})$. We leave a complete proof to the appendix. \end{proofidea} This algorithm has several desirable characteristics. First, the decision-maker who implements the algorithm does not need to have any knowledge of $M$ or even of the number of hidden features $d' - d$. Second, the algorithm is non-adaptive, in that the published decision rule in each round does not depend on the results of previous rounds. Hence, the algorithm can be parallelized by simultaneously applying $d$ separate decision rules to $d$ separate subsets of the agent population and simultaneously observing the results. Finally, by using decision rules as causal interventions, this procedure resolves the challenge associated with the hardness result of \cite{miller2020}.
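For concreteness, here is a compact Python rendering of Algorithm \ref{alg:improvement}. This is our own sketch: \texttt{sample\_outcomes} is a hypothetical stand-in for one round of the interaction, returning the outcomes $y$ of $n$ fresh agents who game against the published rule.
\begin{verbatim}
import numpy as np

def maximize_agent_outcomes(sample_outcomes, d, lam_max, eps):
    """Estimate G^T w* by probing with basis decision rules,
    then return the normalized estimate."""
    n = int(100 * lam_max * d / eps)
    mu_hat = np.mean(sample_outcomes(np.zeros(d), n))   # round with omega = 0
    nu_hat = np.zeros(d)
    for i in range(d):
        e_i = np.zeros(d)
        e_i[i] = 1.0                                    # basis decision rule
        nu_hat[i] = np.mean(sample_outcomes(e_i, n)) - mu_hat
    return nu_hat / np.linalg.norm(nu_hat)              # unit-norm w_hat
\end{verbatim}
Because no probe depends on an earlier answer, the $d+1$ calls can also be issued simultaneously to disjoint subpopulations, as noted above.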
\section{Prediction Risk Minimization} \label{sec:risk} Low prediction risk is important in any setting where the decision-maker wishes for a decision to accurately match the eventual outcome. For example, consider an insurer who wishes to price car insurance exactly in line with drivers' expected costs of accident reimbursements. Pricing too low would make them unprofitable, and pricing too high would allow a competitor to undercut them. Specifically, we want to minimize expected squared error when predicting the true outcomes of agents, a $p$-fraction of whom have gamed with respect to $\omega$: \begin{equation}\label{eqn:acc} \begin{split} Risk(\omega) &= \mathbb{E}_{x \sim P, \eta} \Bigg [ (1-p) \left (\omega^TVx - (\omega^*)^T x \right )^2 \\ &+ p\left (\omega^T V x_g - \left ({\omega^*}^T x_g + \eta \right )\right )^2 \Bigg ] \end{split} \end{equation} We begin by noting a useful decomposition of accuracy in a generalized linear setting. For this result, all that we assume about agents' actions is that agents' feature vectors and actions are drawn from some joint distribution $(x,a) \sim D$ such that the action effects are uncorrelated with the features: $\E[(x,a) \sim D]{(Ma)x^T} = 0$. \begin{lemma} \label{lem:decomp} Let $\omega$ be a decision rule and let $a$ be the action taken by agents in response to $\omega$. Suppose that the distributions of $Ma$ and $x$ satisfy $\E[x,a]{(Ma)x^T} = 0$. Then the expected squared error of a decision rule $\omega$ on the gamed distribution can be decomposed as the sum of the following two positive terms (plus a constant offset $c$): \begin{enumerate} \item The risk of $\omega$ on the initial distribution \item The expected squared error of $\omega$ in predicting the impact (on agents' outcomes) of gaming. \end{enumerate} That is, \begin{equation} \label{eq:decomp} \begin{split} Risk(\omega) &= \E[x]{\left( (V\omega - \omega^*)^Tx\right)^2} \\ &+ \E[a]{\left( (V\omega - \omega^*)^T(Ma)\right )^2} + c \end{split} \end{equation} \end{lemma} The proof appears in the appendix. This decomposition illustrates that minimizing prediction risk requires balancing two competing phenomena. First, one must minimize the risk associated with the original (ungamed) agent distribution by rewarding features that are \emph{correlated with outcome} in the original data. Second, one must minimize error in predicting the effect of agents' gaming on their outcomes by rewarding features in accordance with \emph{the true change in outcome}. The relative importance of these two phenomena depends on $p$, the fraction of agents who game. Unfortunately, minimizing this objective is not straightforward. Even with just squared action cost (with actions $a(\omega)$ linear in $\omega$), the objective becomes a non-convex quartic. However, we will show that in cases where the naive gaming-free predictor \emph{overestimates} the impact of gaming, this quartic can be minimized efficiently. \begin{algorithm} \caption{Relaxed Prediction Risk Oracle} \label{alg:risk} \begin{algorithmic} \STATE {\bfseries Input:} $\omega, n$ \STATE Let $P_{\omega}$ be the distribution of agent features and labels $(x, y)$ drawn when agents face decision rule $\omega$. \STATE Let $Y(S) \coloneqq \frac{1}{|S|} \sum_{(x, y) \in S} y$. \STATE Let $\tilde{Y}_\omega (S) \coloneqq \frac{1}{|S|} \sum_{(x, y) \in S} (V\omega)^T x$. \STATE Collect samples $D = \left \{(x, y) \sim P_{\omega} \right \}_{n}$.
\IF{$\tilde{Y}_{\omega}(D) > Y(D)$} \STATE Collect new samples $D' =\left \{(x, y) \sim P_{\omega} \right \}_n$. \RETURN $\frac{1}{n}\sum_{(x, y) \in D'} ((V\omega)^Tx - y)^2$ \ELSE \STATE Collect\footnote{Note that, if we know the number of rounds $k$, we can simply deploy the $\vec{0}$ rule once initially and collect $kn$ samples, halving the total round complexity.} samples $D^* = \left \{(x, y) \sim P_{\vec{0}} \right \}_n$. \RETURN $\frac{1}{n}\sum_{(x, y) \in D^*} ((V\omega)^Tx - y)^2$ \ENDIF \end{algorithmic} \end{algorithm} \begin{remark} \label{remark:risk} Let $\omega_{0}$ be the decision rule that minimizes the prediction risk without gaming and let agent action costs be $C(a) = \frac{1}{2}\|a\|_2^2$. If $\omega_{0}$ \emph{overestimates} the change in agent outcomes as a result of the agents' gaming, then we can find an approximate prediction-risk-minimizing decision rule in $k =O(\mathrm{poly}(d))$ rounds by using a derivative-free optimization procedure on the convex relaxation implemented in Algorithm \ref{alg:risk}. \end{remark} \begin{proofidea} To find a decision rule $\omega$ that minimizes predictive risk, we first need to define prediction risk as a function of $\omega$. As shown in Lemma \ref{lem:decomp}, the prediction risk $Risk(\omega)$ consists of two terms: the classic gaming-free prediction risk (a quadratic), and the error in estimating the effect of gaming on the outcome (the ``gaming error''). In the quadratic action cost case, this can be written out as $\left ( (V\omega - \omega^*)^T(MM^TV\omega)\right )^2$. This overall objective is a high-dimensional quartic, with large nonconvex regions. Instead of optimizing this sum of a convex function and a nonconvex function, we optimize the convex function plus a convex relaxation of the nonconvex function. Since the minimum of the original function is in a region where the relaxation matches the original, this allows us to find the optimum using standard optimization machinery. The trick is to observe that of the two terms composing the prediction risk, the prediction risk before gaming is always convex (it is quadratic), and the ``gaming error'' is a quartic that is convex in a large region. Specifically, there exists an ellipsoid in decision rule space separating the convex from the (partially) non-convex regions of the ``gaming error'' function, and the value of this ``gaming error'' on the ellipsoid is exactly $0$. To see why, note that there is a rotation, translation, and rescaling of $\omega$ into a vector $z$ such that the quantity inside the square, $(V\omega - \omega^*)^TMM^TV\omega$, can be rewritten as $z_1^2 + z_2^2 + \ldots + z_d^2 -c$, where $c>0$. The zeros of this function, and thus of the ``gaming error'' (which is its square), form an ellipsoid. This ellipsoid corresponds to the set of points where the decision rule $\omega$ perfectly predicts the effect on outcomes of gaming induced by agents responding to this same decision rule, meaning $\omega^{*T}Ma - \omega^TVMa = 0$. In the interior of this ellipsoid, where the decision rule underestimates the true effect of gaming on agent outcomes ($\omega^{*T}Ma > \omega^TVMa$), the quartic is not convex. On the other hand, in the region outside of this ellipsoid, where $\omega^{*T}Ma < \omega^TVMa$, it is always convex. $\omega_0$ minimizes the prediction risk on ungamed data, and is thus the minimum of the quadratic first term.
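To spell out the change of variables behind the ellipsoid (a short supplementary calculation of ours; the abbreviations $A$ and $b$ are our notation), write the quantity inside the square as a quadratic form in $\omega$ and complete the square:
\begin{align*}
(V\omega - \omega^*)^T M M^T V \omega &= \omega^T A \omega - b^T \omega, \qquad A := V M M^T V, \quad b := V M M^T \omega^*, \\
&= \left(\omega - \tfrac{1}{2} A^{+} b\right)^T A \left(\omega - \tfrac{1}{2} A^{+} b\right) - \tfrac{1}{4} b^T A^{+} b,
\end{align*}
where $A^{+}$ is the pseudo-inverse of $A$ (note that $b$ lies in the range of $A = (VM)(VM)^T$, so the square completes exactly). Diagonalizing $A$ supplies the rotation and rescaling into the coordinates $z$ above, with $c = \tfrac{1}{4} b^T A^{+} b$, which is positive provided $b \neq 0$, i.e.\ provided gaming has any visible effect on outcomes; the zero set is then an ellipsoid centered at $\tfrac{1}{2} A^{+} b$. Returning to the argument: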
We are given that $\omega_0$ overestimates the effect of gaming on the outcome, $\omega^{*T}Ma < \omega_0^TVMa$, and is thus outside the ellipse.\footnote{Note that if $\omega^{*T}Ma = \omega_0^TVMa$ exactly, then $\omega_0$ is a global optimum and we are done.} We will now show that a global minimum lies outside the ellipse. Assume, for contradiction, that all global minima lie inside the ellipse. Pick any minimum $\omega_{min}$. Then there must be a point $\omega'$ on the line between $\omega_0$ and $\omega_{min}$ that lies on the ellipse. All points on the ellipse are minima (zeros) for the ``gaming error'' quartic, so the second component of the predictive risk is weakly smaller for $\omega'$ than for $\omega_{min}$. But the value of the quadratic is also smaller for $\omega'$ since it is strictly closer to the minimum of the quadratic $\omega_0$. Thus $Risk(\omega') < Risk(\omega_{min})$, which is a contradiction. This means that the objective $Risk(\omega)$ is convex in a neighborhood around its global minimum, which guarantees that optimizing this relaxation of the original objective yields the correct result. The remaining concern is what to do if we fall into the ellipse, and thus potentially encounter the non-convexity of the objective. We solve this by replacing, in this region, the overall prediction risk objective $Risk(\omega)$ with the no-gaming prediction risk objective (on data sampled when $\omega = \vec{0}$). Geometrically, the function on this region becomes a quadratic. The values of this quadratic perfectly match the values of the quartic at the boundary, so the new piecewise function is convex and continuous everywhere. In practice, we empirically check whether the decision rule on average underestimated the true outcome. If so, we substitute the prediction risk with the classic gaming-free risk. We can now give this objective oracle to a derivative-free convex optimization procedure, which will find a decision rule with value $\epsilon$-close to the global prediction-risk-minimizing decision rule in a polynomial (in $d$, $1 / \epsilon$) number of rounds and samples. \end{proofidea} This raises an interesting observation: in our scenario it is easier to recover from an initial over-estimate of the effect of agents' gaming on the outcome (by reducing the weights on over-estimated features) than it is to recover from an under-estimate (which requires increasing the weight of each feature by an unknown amount). \begin{remark} The procedure described in Remark \ref{remark:risk} can also be used to minimize a weighted sum of the outcome-maximization and prediction-risk-minimization objectives. \end{remark} This follows from the fact that the outcome-maximization objective is linear in $\omega$, and therefore adding it to the prediction-risk objective preserves the convexity/concavity of each of the different regions of the objective. Thus, if a credit scorer wishes to find the loan decision rule that maximizes a weighted sum of their accuracy at assigning loans, and the fraction of their customers who successfully repay (according to some weighting), this provides a method for doing so under certain initial conditions.
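To make the oracle of Algorithm \ref{alg:risk} concrete, the following minimal Python sketch mirrors its branching logic. The routine \texttt{sample\_from} is a hypothetical stand-in for deploying a decision rule and collecting $n$ gamed samples; it is an assumption of the sketch rather than part of the model.

\begin{verbatim}
import numpy as np

def relaxed_risk_oracle(omega, n, sample_from, V):
    # sample_from(omega, n) is assumed to deploy the rule omega and
    # return (X, y), with feature matrix X of shape (n, d).
    X, y = sample_from(omega, n)
    preds = X @ (V @ omega)          # predictions (V omega)^T x
    if preds.mean() > y.mean():
        # Overestimate on average: outside the ellipse, where the
        # quartic is convex; evaluate the risk on fresh samples.
        X2, y2 = sample_from(omega, n)
        return np.mean((X2 @ (V @ omega) - y2) ** 2)
    else:
        # Inside the ellipse: substitute the gaming-free risk,
        # computed on samples drawn under the zero decision rule.
        X0, y0 = sample_from(np.zeros_like(omega), n)
        return np.mean((X0 @ (V @ omega) - y0) ** 2)
\end{verbatim}

Any derivative-free convex optimizer can then query this oracle directly, as in Remark \ref{remark:risk}.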
\section{Parameter Estimation} \label{sec:parameter_estimation} Finally, we provide an algorithm for estimating the causal outcome-generating parameters $\omega^*$, specifically in the case where the features are fully visible ($V=I$).\footnote{For simplicity, we also assume $p=1$, though any lower fraction of gaming agents can be accommodated by scaling the samples per round.} \begin{restatable}{theorem}{causal} \label{thm:causal} (Informal) Given that $V = I$ (all dimensions are visible) and that $\Sigma + \lambda MM^T$ is full rank for some $\lambda$ (that is, there exist actions that will allow change in the full feature space), we can estimate $\omega^*$ to arbitrary precision. We do so by computing an $\omega$ that results in more informative samples, and then gathering samples under that $\omega$. The procedure requires $\widetilde O(d)$ rounds. See the appendix for details. \end{restatable} The algorithm that achieves this result consists of the following steps: \begin{enumerate} \item Estimate the covariance $\Sigma$ of the initial agent feature distribution before strategic behavior by initially not disclosing any decision rule to the agents, and observing their features. \item Estimate parameters of the Gramian of the action matrix $G=MM^T$ by incentivizing agents to vary each feature sequentially. \item Use this information to learn the decision function $\omega$ which will yield the most informative samples in identifying $\omega^*$, via convex optimization. \item Use the new, more informative samples in order to run OLS to compute an estimate of the causally precise regression parameters $\omega^*$. \end{enumerate} At its core, this can be understood as running OLS after acquiring a better dataset via the smartest choice of $\omega$ (which is, perhaps surprisingly, unique). Whereas the convergence of OLS without gaming would be controlled by the minimum eigenvalue of the second moment matrix $\Sigma$, convergence of our method is governed by the minimum eigenvalue of the post-gaming second-moment matrix: $$ \mathbb E[(x + G \omega) (x + G \omega)^T] = \Sigma + G \omega \mu^T + \mu \omega^T G^T + G \omega \omega^T G^T,$$ where $\mu = \mathbb E[x]$. Our method learns a value of $\omega$ that results in the above matrix having a larger minimum eigenvalue, improving the convergence rate of OLS. The proof and complete algorithm description are left to the appendix. \section{Discussion} In this work, we have introduced a linear setting and techniques for analyzing decision-making about strategic agents capable of changing their outcomes. We provide algorithms for leveraging agents' behaviors to maximize agent outcomes, minimize prediction risk, and recover the true parameter vector governing the outcomes. Our results suggest that in certain settings, decision-makers would benefit by not just being passively transparent about their decision rules, but by actively informing strategic agents. While these algorithms eventually yield more desirable decision-making functions, they substantially reduce the decision-maker's accuracy in the short term while exploration is occurring. Regret-based analysis of causal strategic learning is a potential avenue for future work. In general, these procedures make the most sense in scenarios with a fairly small period of delay between decision and outcome (e.g. predicting short-term creditworthiness rather than long-term professional success), as at each new round the decision-maker must wait to receive the samples gamed according to the new rule.
Our methods also rely upon the assumption that new agents appear every round. If there are persistent stateful agents, as in many real-world repeated decision-making settings, different techniques may be required. \section*{Acknowledgements} The authors would like to thank Suhas Vijaykumar, Cynthia Dwork, Christina Ilvento, Anying Li, Pragya Sur, Shafi Goldwasser, Zachary Lipton, Hansa Srinivasan, Preetum Nakkiran, Thibaut Horel, and our anonymous reviewers for their helpful advice and comments. Ben Edelman was partially supported by NSF Grant CCF-15-09178. Brian Axelrod was partially supported by NSF Fellowship Grant DGE-1656518, NSF Grant 1813049, and NSF Award IIS-1908774. \subsection{Related Work} This paper is closely related to several recent and concurrent papers that study different aspects of (in our parlance) causal strategic learning. Most of these works focus on one of our three objectives. \paragraph{Agent outcomes.} Our setting is partially inspired by \citet{kleinberg2019}. In their setting, as in ours, an agent chooses an action vector in order to maximize the score they receive from a decision-maker. The action vector is mapped to a feature vector by an \emph{effort conversion matrix}, and the decision-maker publishes a mechanism that maps the feature vector to a score. However, their decision-maker does not face a learning problem: the effort conversion matrix is given as input, agents do not have differing initial feature vectors, and there is no outcome variable. Moreover, there are no hidden features. In a variation on the agent outcomes objective, their decision-maker's goal is to incentivize agents to take a particular action vector. Their main result is that whenever a monotone mechanism can incentivize a given action vector, a linear mechanism suffices. \citet{alon2020} analyze a multi-agent extension of this model. In another closely related work, \citet{miller2020} bring a causal perspective \cite{pearl2000,peters2017} to the strategic classification literature. Whereas prior strategic classification works mostly assumed agents' actions have no effect on the outcome variable and are thus pure \emph{gaming}, this paper points out that in many real-life strategic classification situations, the outcome variable is a descendant of some features in the causal graph, and thus actions may lead to genuine \emph{improvement} in agent outcomes. Their main result is a reduction from the problem of orienting the edges of a causal graph to the problem of finding a decision rule that incentivizes net improvement. Since orienting a causal graph is a notoriously difficult causal inference problem given only observational data, they argue that this provides evidence that incentivizing improvement is hard. In this paper we point out that improving agent outcomes may not be so difficult after all because the decision-maker does not need to rely only on observational data---they can perform causal interventions through the decision rule. \citet{haghtalab2020} study the agent outcomes objective in a linear setting that is similar to ours. A significant difference is that, while agents do have hidden features, they are never incentivized to change their hidden features because there is no effort conversion matrix. This, combined with the use of a Euclidean norm action cost (we, in contrast, use a Euclidean squared norm cost function), makes finding the optimal linear regression parameters trivial.
Hence, they mainly focus on approximation algorithms for finding an optimal linear \emph{classifier}. \citet{tabibian2020} consider a variant of the agent outcomes objective in a classification setting: the outcome is only ``realized'' if the agent receives a positive classification, and the decision-maker pays a cost for each positive classification it metes out. The decision-maker knows the dependence of the outcome variable on agent features a priori, so there is no learning. \paragraph{Prediction risk.} \citet{perdomo2020} define \emph{performative prediction} as any supervised learning scenario in which the model's predictions cause a change in the distribution of the target variable. This includes causal strategic learning as a special case. They analyze the dynamics of \emph{repeated retraining}---repeatedly gathering data and performing empirical risk minimization---on the prediction risk. They prove that under certain smoothness and strong convexity assumptions, repeated retraining (or repeated gradient descent) converges at a linear rate to a near-optimal model. \citet{liu2020} introduce a setting where each agent responds to a classifier by intervening directly on the outcome variable, which then affects the feature vector in a manner depending on the agent's population subgroup membership. \paragraph{Parameter estimation.} \citet{bechavod2020} study the effectiveness of repeated retraining at optimizing the parameter estimation objective in a linear setting. Like us, they argue that the decision-maker's control over the decision rule can be conducive to causal discovery. Specifically, they show that if the decision-maker repeatedly runs least squares regression (with a certain tie-breaking rule in the rank-deficient case) on batches of fresh data, the true parameters will eventually be recovered. Their setting is similar to ours but does not include an effort conversion matrix. \begin{center} \rule{8em}{1px} \end{center} \paragraph{Non-causal strategic classification.} Other works on strategic classification are \emph{non-causal}---the decision-maker's rule has no impact on the outcome of interest. The primary goal of the decision-maker in much of the classic strategic classification literature is robustness to gaming; the target measure is typically prediction risk. Our use of a Euclidean squared norm cost function is shared by the first paper in a strategic classification setting \cite{bruckner2011}. Other works use a variety of different cost functions, such as the \emph{separable} cost functions of \citet{hardt2016}. The online setting was introduced by \citet{dong2018} and has also been studied by \citet{chen2020}, both with the goal of minimizing ``Stackelberg regret''.\footnote{See \citet{bambauer2018} for a discussion of online strategic classification from a legal perspective.} A few papers \cite{milli2019, hu2019} show that accuracy for the decision-maker can come at the expense of increased agent costs and inequities. \citet{braverman2020} argue that random classification rules can be better for the decision-maker than deterministic rules. \paragraph{Economics.} Related problems have long been studied in information economics, specifically in the area of contract theory \cite{salanie2005, laffont2002}. 
In \emph{principal-agent problems} \cite{holmstrom1979, grossman1983, holmstrom1991,ederer2018}, also known as \emph{moral hazard} or \emph{hidden action} problems, a decision-maker (called the \emph{principal}) faces a challenge very similar to the agent outcomes objective. Notable differences include that the decision-maker can only observe the outcome variable, and the decision-maker must pay the agent. In a setting reminiscent of strategic classification, \citet{frankel2020} prove that the fixed points of retraining can be improved in terms of accuracy if the decision-maker can commit to underutilizing the available information. \citet{ball2020} introduces a three-party model in which an intermediary scores the agent and a decision-maker makes a decision based on the score. \paragraph{Ethical dimensions of strategizing.} \citet{ustun2019} and \citet{venkatasubramanian2020philosophical} argue that it is normatively good for individuals subject to models to have ``recourse'': the ability to induce the model to give a desired prediction by changing mutable features. \citet{ziewitz2019} discusses the shifting boundaries between morally ``good'' and ``bad'' strategizing in the context of search engine optimization. \paragraph{Other strategic linear regression settings.} A distinct literature on strategic variants of linear regression \cite{perote2004,dekel2010,chen2018, ioannidis2013, cummings2015} studies settings in which agents can misreport their $y$ values to maximize their privacy or the model's prediction on their data point(s).
1,108,101,562,400
arxiv
\section{Introduction} There has recently been interest in interpreting the cosmological constant in an asymptotically de Sitter or anti-de Sitter black hole space-time to be a pressure in a thermodynamic sense. The thermodynamically conjugate variable would then be interpreted as a volume associated to the black hole, though it would not necessarily be related to any notion of geometric volume, \cite{KRT} \cite{BPD1} \cite{CGKP}. The idea of varying the cosmological constant goes back to \cite{HenneauxTeitelboim} \cite{Teitelboim}, but the interpretation adopted here was first given in \cite{KRT}. In this paper we re-examine some of the known thermodynamics of asymptotically anti-de Sitter (AdS) Kerr black holes from this perspective, emphasising the role that the pressure plays in the analysis. As usual in thermodynamics, the phase structure depends on the constraints. For example, when the angular momentum $J$ is held fixed there is a critical point with a second order phase transition at finite pressure and temperature, first found in \cite{CCK}. This constant $J$ phase transition is known to have mean field exponents, \cite{GKM} \cite{PdV}, putting it in the same universality class as a van der Waals gas, and the phase diagram (figure \ref{fig:J_free_energy}) looks the same as that of a van der Waals gas. On the other hand, fixing the angular velocity $\Omega$ results in a phase diagram which, like the one dimensional Ising model, has no second order transition at any finite temperature. There is a critical point at finite $T$ and $P$ where the free energy has a cusp, but the latent heat diverges there. Strictly speaking there is a second critical point, with vanishing latent heat, but it is at infinite $T$ and there is no phase transition as one cannot pass through this point --- this is similar to the $1$-d Ising model, though there the critical point is at $T=0$. The phase structures for constant $\Omega$ and constant $J$ are different in the pressure-temperature plane.\footnote{The fixed $J$ and fixed $\Omega$ ensembles were analysed in \cite{BMS} and \cite{BKR} in terms of Ehrenfest equations.} In the former case the phase boundary is determined by the condition $\rho=2P$, where $\rho=\frac{M}{V}$ is the black hole mass per unit volume, while it is not so easy to find an analytic expression for constant $J$. Many familiar notions from ordinary thermodynamics are applicable, such as the Clapeyron equation for the slope of the phase boundary in the $P-T$ plane, but there are also significant differences. While the analysis produces some explicit expressions for phase boundary curves in the $P-T$ plane and latent heats in black hole phase transitions that have not appeared in the literature before, these are not the main point of the paper, being just trivial consequences of the structure of the Hawking-Page phase transitions. Rather the main point is to emphasise the shift in viewpoint that occurs when the thermodynamic volume is introduced. The phase diagram in figure \ref{fig:Omega_free_energy} is the same as that of \cite{HHT-R}, but drawn using thermodynamic variables rather than the geometric variables that are explicit in the metric. This is done to emphasise the physics of the thermodynamics: the free energy is a single valued function of the geometric variables, but is multiple valued in terms of thermodynamic variables, giving the different branches in the $P-T$ plane that are the hallmark of phase transitions in the grand canonical ensemble.
This conceptual shift may well prove to be important for the analysis of rotating superfluids in the AdS/CFT correspondence \cite{Brihaye}. In \S\ref{sec:Static}, static non-rotating black holes are treated, as a warm-up for the constant $\Omega$ case in \S\ref{sec:Omega} and constant $J$ in \S\ref{sec:J}. Conclusions are given in \S\ref{sec:Conclusions}. \section{Static black holes \label{sec:Static}} A non-rotating, neutral, asymptotically anti-de Sitter black hole has line element \[ ds^2 = -f(r) dt^2 + f^{-1}(r)dr^2 + r^2 d\Omega^2, \] with \begin{equation} \label{fdef} f(r) = 1 -\frac{2 m}{r} - \frac{\Lambda}{3} r^2, \end{equation} and $d\Omega^2=d\theta^2+\sin^2\theta d\phi^2$. The horizon radius, $r_h$, is determined by the largest real root of $f(r)=0$, giving \begin{equation} m= \frac {r_h}{2} \left(1 - \frac{\Lambda}{3} r_h^2 \right), \label{Mass} \end{equation} which is the asymptotically AdS equivalent of the ADM mass, $M=m$, in the non-rotating case. Following \cite{KRT}, $M$ will be identified with the enthalpy, \begin{equation} \label{Enthalpy} H(S,P)=\frac {1} {2} \left(\frac {S}{\pi}\right)^{\frac 1 2} \left(1+\frac {8 P S} {3} \right),\end{equation} where the Bekenstein-Hawking entropy is $1/4$ of the event horizon area \[ S=\pi r_h^2\] and $P=-\frac{\Lambda}{8\pi}$ is the pressure ($G_N$ and $\hbar$ are set to one). The temperature follows either from the surface gravity \begin{equation} T=\frac{f'(r_h)}{4\pi}=\frac {\left(1-\Lambda r_h^2\right)} {4\pi r_h} \label{eq:T_rLambda}\end{equation} using (\ref{fdef}), or from the thermodynamic relation \begin{equation} T=\left.\frac{\partial H}{\partial S}\right|_P =\frac{(1+8PS)}{4\sqrt{\pi S}} \label{eq:T_SP}\end{equation} using (\ref{Enthalpy}); these are the same formula written in geometric and thermodynamic variables respectively. The temperature has a minimum at $S=\frac{1}{8 P}$, where \begin{equation} T_{min}= \sqrt{\frac{2P}{\pi}}. \label{eq:T_min}\end{equation} For $S<\frac{1}{8 P}$ the heat capacity \begin{equation} C_P= T \left. \frac{\partial S}{\partial T}\right|_P = 2S\left(\frac{8 P S +1}{8 P S -1}\right) \end{equation} is negative while for $S>\frac{1}{8 P}$ it is positive and it diverges at $T_{min}$. The thermodynamic volume is the Legendre transform of the pressure, \cite{KRT} \cite{BPD1}, namely \begin{equation} V=\frac{\partial M}{\partial P}= \frac{4}{3\sqrt{\pi}}S(T,P)^{\frac 3 2} =\frac{4\pi}{3} r_h^3.\label{eq:SchwarzschildVolume} \end{equation} This is a rather surprising result as there is no a priori reason for the thermodynamic volume to be related to a geometric volume.\footnote{It is not immediately clear how the geometric volume of a black hole might be defined, as $r$ is a time-like co-ordinate and $t$ a space-like co-ordinate for $r<r_h$. Inside the event horizon, surfaces of constant $t$ have a time-dependent metric. Note that the entropy and the volume are not independent for a Schwarzschild black hole: this is not a pathology, they are independent for rotating black holes and the above formula can be obtained by taking the non-rotating limit of the rotating case \cite{BPD2}.} Indeed when rotation is introduced there is no such obvious relation \cite{CGKP}.
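The thermodynamic relations above lend themselves to a quick symbolic check. The following Python/sympy sketch (purely illustrative, not part of the derivation) differentiates the enthalpy (\ref{Enthalpy}) and confirms (\ref{eq:T_SP}) and (\ref{eq:SchwarzschildVolume}):

\begin{verbatim}
import sympy as sp

S, P = sp.symbols('S P', positive=True)
H = sp.sqrt(S/sp.pi)/2 * (1 + 8*P*S/3)   # enthalpy H(S, P)

T = sp.diff(H, S)   # temperature T = dH/dS at constant P
V = sp.diff(H, P)   # thermodynamic volume V = dH/dP at constant S

print(sp.simplify(T - (1 + 8*P*S)/(4*sp.sqrt(sp.pi*S))))           # 0
print(sp.simplify(V - 4*S**sp.Rational(3, 2)/(3*sp.sqrt(sp.pi))))  # 0
\end{verbatim}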
The Gibbs free energy is the Legendre transform of the enthalpy, \[ G(T,P)= H-TS = \frac{1}{4} \sqrt{\frac{S(T,P)}{\pi}}\left( 1-\frac{8PS(T,P)}{3}\right)=\frac{r_h}{4}\left( 1+\frac{\Lambda}{3} r_h^2\right),\] with \begin{equation} S(T,P)=\frac{\pi T^2 - P \pm T\sqrt{\pi^2 T^2 - 2 \pi P}}{8 P^2} \end{equation} (the heat capacity is positive for the plus sign, negative for the minus sign and diverges when $P\rightarrow \pi T^2/2$). We have the thermodynamic relation \begin{equation} dG=-S dT + V dP \end{equation} and, for AdS space-time, $G=0$ and so $S_{AdS}=V_{AdS}=0$ (this is the thermodynamic volume of AdS space-time, not a geometric volume). For a black hole, on the other hand, the thermodynamic volume is given by (\ref{eq:SchwarzschildVolume}). If $PS>\frac{3}{8}$ the Gibbs free energy of the black hole is negative and thus lower than that of anti-de Sitter space-time; the former is then the more stable thermodynamic configuration. For $PS<\frac{3}{8}$ pure anti-de Sitter space-time is the more stable and any black hole with $S<\frac{3}{8P}$ will tend to evaporate. This is the Hawking-Page phase transition, \cite{HawkingPage}, which occurs on the line $\Lambda r_h^2=-3$, or \begin{equation} S=\frac{3}{8P} \qquad \Rightarrow \qquad P=\frac{3\pi}{8}T^2,\label{eq:TP}\end{equation} when the two states can exist together as shown in the $P-T$ plane in figure \ref{fig:static_free_energy} below. (Similar phase diagram plots appeared in \cite{Altamirano} and are shown here with a view to extending the analysis to constant angular velocity in the next section.) \begin{figure}[ht] \centerline{\raise 1cm \hbox{\includegraphics[width=7cm,height=5cm]{fig1a}} \hbox{\includegraphics[width=7cm]{fig1b}}} \caption{\small Left: the black hole free energy, for static black holes, at fixed $P=1$, the heat capacity is negative on the upper branch, positive on the lower branch and diverges at the cusp. The Hawking-Page temperature is where $G=0$. Right: the co-existence curve of the Hawking-Page phase transition is the red (solid) line, the heat capacity diverges on the green (dotted) line. The lower branch of the free energy tends to minus infinity on the $P=0$ axis.} \label{fig:static_free_energy} \end{figure} Clearly there is a jump in entropy, \begin{equation} \Delta S = \frac{3}{8P}, \label{eq:DeltaS} \end{equation} when a black hole nucleates from pure AdS space-time; energy must be supplied to form the black hole at constant temperature, and the latent heat is \begin{equation} L=T\Delta S. \label{eq:TDeltaS}\end{equation} From the form of the co-existence curve in (\ref{eq:TP}) the latent heat is \begin{equation} L=\frac{1}{\pi T}=\sqrt{\frac{3}{8\pi P}},\end{equation} which is equal to the mass on the co-existence curve, in the black hole phase. The latent heat is non-zero for any finite $T$ and goes to zero as $T\rightarrow\infty$, as though the system were aiming for a second order phase transition but can never reach it. For asymptotically flat space-times, $P=0$ and the latent heat is infinite, hence black holes will not spontaneously nucleate in Minkowski space. The Clapeyron equation for static black holes follows from equating the Gibbs free energy of the two phases at a point on the co-existence curve.
For AdS space-time $G_{AdS}=0$, so in particular \[ dG_{AdS}=0 \] on the co-existence curve, while for the black hole \[ d G_{BH} = -S_{BH}dT + V_{BH}dP.\] The co-existence curve is defined by $G_{AdS}=G_{BH}$, so \begin{eqnarray} 0 &=&d G_{AdS}- dG_{BH} = S_{BH} dT - V_{BH} dP \nonumber \\ \Rightarrow \frac{dP}{dT}&=&\frac{S_{BH}}{V_{BH}}=\frac{3}{4 r_h}. \end{eqnarray} Since there is no black hole in pure AdS space-time we define \[ \Delta V = V_{BH},\qquad \Delta S = S_{BH}\] giving \begin{equation} \frac{dP}{dT}=\frac{\Delta S}{\Delta V},\label{eq:Clapeyron}\end{equation} which is the Clapeyron equation \cite{Callen}. It is easily checked, using (\ref{eq:T_rLambda}) and (\ref{eq:TP}) directly, that indeed $\frac{dP}{dT}=\frac{3}{4 r_h}$. The Clapeyron equation for static charged black holes, and its relation to Ehrenfest's equations, was considered in \cite{BKR}. \section{Rotating black holes \label{sec:Omega}} In this section the analysis of the Hawking-Page phase transition is extended to rotating black holes in asymptotically AdS space times. The ADM mass for a black hole rotating with angular momentum $J$ can be expressed as a function of the entropy, the angular momentum and the pressure \cite{CCK} \begin{equation} \label{eq:CCKmass} H(S,P,J):= \frac {1}{2}\sqrt{\frac{\left( 1+\frac{8 P S}{3} \right) \left(S^2\left(1 + \frac{8 P S}{3}\right) + 4 \pi^2 J^2\right)} {\pi S}}, \end{equation} which is again interpreted here as the enthalpy. In asymptotically AdS space-times there can be rotating black holes that are in equilibrium with thermal radiation rotating at infinity, \cite{HHT-R} \cite{HawkingPage}. The black hole is stable against decay if its Euclidean action is less than that of pure AdS, which is zero, so it is stable if the Euclidean action is negative. The Euclidean action of the black hole, $I_E$, is a function of $T$, $P$ and the angular velocity, $\Omega$, and is related to the Legendre transform of the ADM mass \cite{GPPI}, $I_E=\Xi/T$ with \begin{equation} \label{eq:Xi} \Xi(T,\Omega,P)=M-ST-J\Omega.\end{equation} The Legendre transforms can be performed to obtain $\Xi$ explicitly. First express the temperature and the angular velocity as functions of entropy and angular momentum \begin{equation} \label{CCKTemperature} T= \left.\frac {\partial H}{\partial S}\right|_{J,P}=\frac {1}{8\pi H}\left[ \left(1 +\frac {8 P S}{3} \right) \left(1 + 8 P S \right) -4\pi^2 \left(\frac {J}{S}\right)^2\right] \end{equation} \begin{equation} \label{eq:CCKOmega} \Omega = \left.\frac{\partial M}{\partial J}\right|_{S,P} ={2\, {\pi }^{3/2}J} \sqrt{\frac{\left( 3+8\,PS \right)} {S \left( 3\,{S}^{2}+8\,P{S}^{3}+12\,{\pi }^{2}{J}^{2} \right) }}\ .\end{equation} While it is easy to invert the latter to write $J$ as a function of $\Omega$, expressing $S$ explicitly as a function of $T$ requires the solution of an eighth order polynomial equation. 
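Although $S(T)$ has no convenient closed form at fixed $J$ and $P$, the inversion is straightforward numerically. A minimal Python sketch, with purely illustrative parameter values and a bracket chosen wide enough to contain a root:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def H(S, P, J):   # enthalpy H(S, P, J)
    return 0.5*np.sqrt((1 + 8*P*S/3)
        *(S**2*(1 + 8*P*S/3) + 4*np.pi**2*J**2)/(np.pi*S))

def T(S, P, J):   # temperature dH/dS at constant P and J
    return ((1 + 8*P*S/3)*(1 + 8*P*S)
        - 4*np.pi**2*(J/S)**2)/(8*np.pi*H(S, P, J))

P0, J0, T0 = 1.0, 0.1, 2.0   # illustrative values
S0 = brentq(lambda S: T(S, P0, J0) - T0, 1e-3, 1e3)
print(S0, T(S0, P0, J0))     # recovers T0
\end{verbatim}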
Nevertheless we can eliminate $J$ in favour of $\Omega$, \begin{equation} T(S,\Omega,P)= \frac{\left( 64\,{P}^{2}{S}^{2}\pi -24\,P{S}^{2} {\Omega}^{2}+32\,\pi \,PS-6\,{\Omega}^{2}S+3\,\pi \right)} {4\pi\sqrt{\,S \left( 3+8\,PS \right) \left( 3\,\pi +8\,\pi \,PS-3\,{\Omega}^{2}S \right)} }.\label{eq:TempS} \end{equation} Solving for $S$ now only involves a quartic, but it is simpler to use (\ref{eq:TempS}) in (\ref{eq:Xi}) to give \begin{equation} \Xi(T(S,\Omega,P),\Omega,P) ={\frac {\sqrt{S} \left(9\,\pi +24\,P{S}^{2}{\Omega}^{2}-64\,{P}^{2}{S}^{2}\pi \right) }{12 \pi \,\sqrt { \left( 3+8\,PS \right)\left( 3\, \pi +8\,\pi \,PS-3\,{\Omega}^{2}S \right) }}}, \label{eq:Xi2} \end{equation} which, together with (\ref{eq:TempS}), gives $\Xi(T,\Omega,P)$ parametrically in terms of $S$. The Hawking-Page phase transition is determined by the locus of points where $\Xi=0$, {\it i.e.} \begin{equation} \label{eq:HPCo-Existence} S^2=\frac{9\pi}{8P(8\pi P - 3\Omega^2)}.\end{equation} Note that \begin{equation} \Omega^2\le\frac{8\pi P}{3}\label{eq:Pless3}\end{equation} is a condition that must be imposed on $\Omega$ in order to ensure that the Einstein universe at infinity is not rotating faster than the speed of light \cite{HHT-R}. The free energy at a fixed pressure $P>\frac{3\Omega^2}{8\pi}$ is plotted as a function of temperature in figure \ref{fig:Omega_free_energy}, using the dimensionless variables \begin{equation} p=\frac{8\pi P}{\Omega^2}\ge 3,\qquad t=\frac{T}{\Omega} \qquad \hbox{and} \qquad s= \frac {\Omega^2 S}{\pi}. \label{eq:ptsdef}\end{equation} For $p>3$ there are two branches and the Hawking-Page temperature is determined by the point where the lower branch cuts the $t$-axis. As $p\rightarrow 3$ from above the lower branch becomes steeper until it disappears at $p=3$, the upper branch remaining and terminating at $T=\frac{\Omega}{2 \pi}$, where $\Xi=\frac{1}{4\Omega}$. For $p<3$ there are black hole solutions for all positive $T$ with the black hole free energy $\Xi_{BH}$ a positive function decreasing monotonically with $T$. Substituting (\ref{eq:HPCo-Existence}) in (\ref{eq:TempS}) gives an analytic expression for the co-existence curve $T(P)$ at fixed $\Omega$: it has the parametric form \begin{eqnarray}\label{eq:Omega_HP} p&=&\frac {3(s+\sqrt{s^2+4})}{2s},\\ t&=&\frac{\sqrt{2+\sqrt{s^2+4}}}{2\pi\sqrt{s}}\nonumber \end{eqnarray} and is plotted in the right-hand plot of figure \ref{fig:Omega_free_energy} (red line). It terminates at $p=3$ and the temperature of the black hole cannot go below $T=\frac{\Omega}{2\pi}$ for $p\ge 3$. Black holes are stable against decay to AdS in the grey region bounded by the red and the horizontal blue lines, labelled \lq\lq Black hole phase''. In this region $\Xi_{BH}$ is double valued, with a positive and a negative branch, the negative branch being the more stable of the two and also more stable than AdS which has $\Xi_{AdS}=0$. Below the horizontal blue line the negative branch of $\Xi_{BH}$ disappears and only the positive branch remains. Across the blue line the negative branch jumps discontinuously from minus infinity (black hole) to zero (AdS).\footnote{For finite jumps such phase transitions have been termed zeroth-order and it has been suggested that they could occur in superconductors and superfluids \cite{Maslov}.
Indeed it was proposed in \cite{HHT-R} that the horizontal line $p=3$ may be associated with a superfluid transition.} \begin{figure}[ht] \centerline{\includegraphics[width=7cm]{fig2a}\includegraphics[width=7cm]{fig2b}} \caption{\small Left: the dimensionless free energy for a rotating black hole as a function of the dimensionless variable $t$. The upper branch represents small black holes (negative heat capacity), the lower branch large black holes (positive heat capacity). Only negative $\Xi$ black holes are stable against Hawking-Page decay. The vertical axis is the dimensionless combination $\Omega\, \Xi$ and the figure is drawn for $p=4$ (other values of $p>3$ move the position of the cusp but the shape of the figure is the same). Right: phase diagram in the $p-t$ plane. The constraint that the Einstein universe at infinity is not rotating faster than the speed of light imposes the condition $p\ge 3$. In the region below this line (blue) $\Xi_{BH}$ is single valued and positive. In the white region (upper left, labelled \lq\lq AdS phase'') there are no black hole solutions (a black hole in this region would have negative entropy). The right-hand boundary of the white region is the locus of points on which the heat capacity diverges (black line); to the right of this there is a wedge-shaped region (green) in which $\Xi_{BH}$ has two branches and is positive on both branches. AdS is still the preferred phase in this region as $0=\Xi_{AdS}<\Xi_{BH}$. In the grey region, labelled \lq\lq Black hole phase'', one branch of $\Xi_{BH}$ becomes negative and black holes are more stable than AdS. The Hawking-Page phase transition occurs on the boundary between the green and the grey region (red line), where the lower branch of $\Xi_{BH}$ equals zero. } \label{fig:Omega_free_energy} \end{figure} When rotation is present, angular momentum contributes to the change in enthalpy across the co-existence curve, which is the latent heat. For the Hawking-Page transition the latent heat is just the mass of the black hole: \begin{itemize} \item On the first order line, $\Xi_{BH}=M-TS-\Omega J=0$, hence \[ L=M =T S + \Omega J.\] Since the entropy and angular momentum vanish on the AdS side of the transition the jumps in entropy and angular momentum are $\Delta S=S$ and $\Delta J=J$, giving latent heat \begin{equation} L=T \Delta S + \Omega \Delta J = \frac{16\pi^3 T^3}{(4\pi^2 T^2 -\Omega^2)^2}, \label{eq:latentOmega} \end{equation} which diverges at the critical point\footnote{This is a critical point in the original sense of the phrase: it is necessary to tune both $T$ and $P$ very carefully to access this point. A critical point in this sense is a separate concept from the notion of zero latent heat and a ``second order'' phase transition.} $T=\frac{\Omega}{2\pi}$, $P=\frac{3\Omega^2}{8\pi}$. Of course $M=TS+\Omega J$ is not a general formula, it only holds on the co-existence curve and gives a quick and convenient way of determining the curve. It can be combined with the Smarr relation, \[ M=2(TS+\Omega J - PV),\] to give $M=2P V$, or \begin{equation} \rho=2 P \label{eq:PVHP}\end{equation} with $\rho=\frac M V$, on the co-existence curve. The Clapeyron equation can be checked using the thermodynamic volume, $V=\left.\frac{\partial H}{\partial P}\right|_{S,J}$, first calculated in \cite{CGKP}.
Written as a function of $\Omega$ it reads \begin{equation} V={\frac {2S^{\frac 3 2 }\left( 16\,\pi \,PS-3\,{\Omega}^{ 2}S+6\,\pi \right)}{3\pi\sqrt {{ { \left( 3+8\,PS \right)}{(3\,\pi +8\,\pi \,PS-3\,{\Omega}^{2}S)} }}}}. \label{eq:Volume}\end{equation} On the co-existence curve (\ref{eq:HPCo-Existence}) this is \[ V=\frac{2\pi}{3\Omega^3}\sqrt{s^3(2+\sqrt{s^2+4})}\;.\] Again, as the thermodynamic volume of AdS without a black hole vanishes, $\Delta V=V$ and \[\frac{\Delta S}{\Delta V} = \frac{3\Omega}{2\sqrt{s(2+\sqrt{s^2+4})}}, \] and this is indeed equal to $\frac{dP}{dT}$ along the co-existence curve, from (\ref{eq:Omega_HP}). Another way of writing this is to use (\ref{eq:PVHP}) to give \begin{equation} \label{eq:dTdP} \frac{d P}{d T}= \frac{S}{V}=\frac{2PS}{M} \end{equation} hence, at a given value of $\Lambda$, the slope of the co-existence curve is determined by the entropy per unit mass of the black hole. \item The Clapeyron equation does not hold across the zeroth order transition line in figure 2 (blue), because $\Xi_{BH}\ne 0$ there. In fact $\Xi_{BH}<0$ on the blue line and the free energy jumps across this line. \end{itemize} The free energy $\Xi_{BH}$ is plotted in figure \ref{fig:Omega_free_energy} as a function of the dimensionless variable $t$ in (\ref{eq:ptsdef}), with $p=4$. There is a cusp at the minimum temperature $t=\frac{1}{2\pi}$, where the heat capacity and the latent heat diverge, but no phase transition. \section{The Caldarelli, Cognola and Klemm phase transition \label{sec:J}} It was found in \cite{CCK} that there is a phase transition for asymptotically AdS rotating black holes with fixed angular momentum, $J$, between small and large black holes, the CCK phase transition. The phase structure associated with constant $J$ transitions has been more extensively studied from the point of view of varying $P$ than the constant $\Omega$ case \cite{Altamirano}. It is more like the familiar liquid-gas transition than the constant $\Omega$ case and in higher dimensions there can be a triple point associated with three different black hole phases \cite{Altamirano}. There is no Hawking-Page phase transition for constant $J$, because AdS with no black hole cannot have non-zero $J$. The results in this section are not new and are included for completeness and comparison to the constant $\Omega$ case in section \S\ref{sec:Omega}. The thermodynamic form of the mass for the asymptotically AdS Kerr metric is (\ref{eq:CCKmass}), the temperature is (\ref{CCKTemperature}) and the thermodynamic volume (\ref{eq:Volume}) is, in terms of $J$, \begin{equation}\label{eq:ThermodynamicVolume} V= \left.\frac {\partial M}{\partial P}\right|_{S,J}= \frac{2}{3 \pi M}\left\{S^2\left(1 + \frac{8 P S}{3}\right)+ 2\pi^2 J^2 \right\}. \end{equation} This is greater than (or equal to when $J=0$) the na{\"\i}ve geometric result $V=\frac{4\pi}{3}\left(\frac{S}{\pi}\right)^{\frac 3 2}$, as first observed in \cite{CGKP}. In the $P-V$ plane the CCK transition mimics the van der Waals gas-liquid phase transition very closely. Below a critical temperature $T_c$ the transition is first order, culminating at a second order transition at $T_c$, in the same universality class as the van der Waals transition (with mean field exponents, \cite{GKM} \cite{PdV}) and no transition for $T>T_c$. The free energy \begin{equation} G(T,P)=M(S,P)-S T \end{equation} is plotted in figure \ref{fig:J_free_energy}, as a function of $T$ at fixed $P$, together with the co-existence curve in the $P-T$ plane.
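As a quick numerical illustration of the inequality $V\ge\frac{4\pi}{3}\left(\frac{S}{\pi}\right)^{\frac 3 2}$ noted above, the following Python sketch (illustrative parameter values only) compares the thermodynamic volume (\ref{eq:ThermodynamicVolume}) with the na{\"\i}ve geometric value:

\begin{verbatim}
import numpy as np

def M(S, P, J):   # enthalpy/mass M(S, P, J)
    return 0.5*np.sqrt((1 + 8*P*S/3)
        *(S**2*(1 + 8*P*S/3) + 4*np.pi**2*J**2)/(np.pi*S))

def V(S, P, J):   # thermodynamic volume
    return 2*(S**2*(1 + 8*P*S/3) + 2*np.pi**2*J**2)/(3*np.pi*M(S, P, J))

S, P = 2.0, 0.5
naive = 4*np.pi/3 * (S/np.pi)**1.5
for J in (0.0, 0.1, 1.0):
    print(J, V(S, P, J) - naive)   # zero (to rounding) at J=0, else positive
\end{verbatim}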
\begin{figure}[ht] \centerline{\includegraphics[width=7cm]{fig3a}\includegraphics[width=7cm]{fig3b}} \caption{\small Left: Gibbs free energy, $G/\sqrt{J}$, as a function of $\tilde t$ for a black hole at constant $J$ with $\tilde p<\tilde p_c$. Right: co-existence curve at fixed $J$, in the $\tilde p-\tilde t$ plane. The curve ends at the critical point $\tilde t_c=0.0417$, $\tilde p_c=0.144$.} \label{fig:J_free_energy} \end{figure} Just like the van der Waals equation of state there is a line of first order phase transitions, here between large and small black holes, terminating at a critical point where there is a second order phase transition. The heat capacity at constant $J$ and $P$ is \begin{equation} C_{J,P}=T\left.\frac{\partial S}{\partial T}\right|_{J,P}. \end{equation} The full expression is not very illuminating and we shall focus on the spinodal curve at constant $J$, the locus of points in the $S-P$ plane where the heat capacity diverges. In terms of the dimensionless variables \[\tilde p=16\pi P J, \qquad \tilde s= \frac{S}{2\pi J} \] the spinodal curve is given by a quartic polynomial in $\tilde p$, \begin{eqnarray} &&\kern -30pt {\tilde p}^{4}{\tilde s}^{8}+4\,{\tilde s}^{5} \left( 2\,{\tilde s}^{2}+3 \right) {\tilde p}^{3}+18\,{\tilde s}^{4}\left( {\tilde s}^{2}+5 \right) {\tilde p}^{2}+36\,\tilde s \left( 6\,{\tilde s}^{2}+1 \right) \tilde p+162\,{\tilde s}^{2}+81-27\,{\tilde s}^{4}\nonumber \\ &=&0.\nonumber \end{eqnarray} Defining a dimensionless temperature $\tilde t=T\sqrt{J}$, curves of constant $\tilde p$ are plotted in the $\tilde s-\tilde t$ plane in figure \ref{fig:T-S_diagram}. \begin{figure}[ht] \centerline{\includegraphics[width=12cm]{fig4}} \caption{\small Curves of constant $\tilde p$ in the $\tilde s-\tilde t$ plane. The green curve is $\tilde p=0$. The red curve is the spinodal curve. The temperature of the phase transition is calculated using the Maxwell equal area rule.} \label{fig:T-S_diagram} \end{figure} The spinodal curve is the red curve; the segments of the constant $\tilde p$ curves with negative slope, lying below the spinodal curve, correspond to an unstable region, where the heat capacity is negative. The temperature of the large-small black hole phase transition can in principle be obtained from the Maxwell equal area rule: the Gibbs free energy, at constant $J$, \[ G(T,P,J)=M(S,P,J)-TS \] should be the same in both phases, $G_l=G_s$. At constant $P$ \[ dG=-S dT + V dP =-S dT,\] so \begin{eqnarray} 0 = G_l - G_s = \int_{S_s}^{S_l}dG &=& -\int_{S_s}^{S_l} S d T =-[ST]^l_s+\int_{S_s}^{S_l} T(S)\, d S\nonumber \\ \Leftrightarrow \qquad \int_{S_s}^{S_l}T(S)\,d S-(S_l-S_s)T &=&0.\nonumber \end{eqnarray} The last expression determines the transition temperature, $T=T(S_l)=T(S_s)$, by demanding that the filled area in figure \ref{fig:T-S_diagram} is zero. The two phases co-exist on a line in the $T-S$ plane, terminating at the critical point where the two sizes are equal (the Maxwell equal area rule for the first order Hawking-Page transition, associated with static AdS-Schwarzschild, was investigated in the $P-V$ plane in \cite{SS}). One must however be careful about applying ordinary thermodynamic intuition to black holes. In the $S-T$ plane the segment of a constant $P$ curve on which $T$ decreases with $S$ corresponds to negative heat capacity and signals an instability.
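A minimal numerical check of this instability, using the temperature derived from the enthalpy (\ref{eq:CCKmass}) with illustrative, subcritical parameter values:

\begin{verbatim}
import numpy as np

def T(S, P, J):   # temperature dH/dS from the enthalpy H(S, P, J)
    H = 0.5*np.sqrt((1 + 8*P*S/3)
        *(S**2*(1 + 8*P*S/3) + 4*np.pi**2*J**2)/(np.pi*S))
    return ((1 + 8*P*S/3)*(1 + 8*P*S)
        - 4*np.pi**2*(J/S)**2)/(8*np.pi*H)

J = 1.0
P = 0.01/(16*np.pi*J)      # p_tilde = 0.01, well below p_tilde_c = 0.144
S = np.linspace(7.0, 2000.0, 4000)
t = T(S, P, J)
print(np.any(np.diff(t) < 0))   # True: a segment with dT/dS < 0 exists
\end{verbatim}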
The Maxwell rule states that, for the liquid-gas phase transition, the unstable part of such a curve should be replaced by a horizontal section running between the two extremes of pure gas and pure liquid, the horizontal section representing a mixture of liquid and gas in linear proportion to the distance to its two end points \cite{Callen} (the argument is usually given for isotherms in the $P-V$ plane, where compressibility replaces heat capacity, but it is essentially the same). There does not appear to be any such interpretation for black holes: a classical black hole solution is not a combination of large and small black holes, it is either one or the other, and there does not seem to be any simple way in which the horizontal section of the constant $P$ curve in figure \ref{fig:T-S_diagram} can be thought of as a \lq\lq mixture'' of black holes with different entropies, as there is only one black hole. The negative heat capacity in the unstable region of figure \ref{fig:T-S_diagram}, and the negative compressibility \cite{Compressibility}, may be a feature that one must live with, just like the negative heat capacity of asymptotically flat Schwarzschild black holes. The Clapeyron equation, $\frac {d P}{d T} = \frac{\Delta S}{\Delta V}$, follows from the standard reasoning that the Gibbs free energy of the two phases agrees on the co-existence curve, $G_l(T,P)=G_s(T,P)$, but an explicit verification for the CCK transition would require solving a high order polynomial by numerical computation. \section{Conclusions \label{sec:Conclusions}} Phase transitions for asymptotically AdS Kerr black holes have been analysed in thermodynamic variables, viewing the (negative) cosmological constant as a pressure and taking its thermodynamically conjugate variable to be a volume. For non-rotating black holes the analytic form of the Hawking-Page transition line in the $P-T$ plane is quadratic (\ref{eq:TP}) and the latent heat is inversely proportional to the temperature. At constant $\Omega$ the transition line is given parametrically in equations (\ref{eq:ptsdef}) and (\ref{eq:Omega_HP}) while the latent heat is (\ref{eq:latentOmega}) and is proportional to the mass per unit entropy. The Clapeyron equation has been explicitly checked to hold true, which is a consistency check on the thermodynamic interpretation of $\Lambda$ presented here. Analytic expressions are harder to obtain for constant $J$, but similar concepts apply. From the AdS/CFT perspective the constant $\Omega$ case is relevant for rotating superfluids in the boundary conformal field theory. The main difference between the phase diagrams when $\Omega=0$ and $\Omega\ne 0$ is that the lower boundary of the black hole phase is at $P=0$ in the former case, and cannot be crossed for positive $P$, whereas it is at $P>0$ in the latter case, corresponding to Bose condensation in the boundary conformal field theory \cite{HHT-R}. The phase diagram in higher dimensions is more complicated, due to the presence of more than one angular momentum, but is all the richer for that: the constant $J$ case exhibits triple points \cite{Triple} and re-entrant phase transitions \cite{Reentrant}. It would be interesting to explore the constant $\Omega$ phase diagram in higher dimensions in more detail. Bose condensation of vortices at a critical angular velocity, for example, could be a new phase when $\Omega\ne 0$. There is much still to discover in this new picture of black hole thermodynamics. This research was supported in part by Perimeter Institute for Theoretical Physics.
Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation.
1,108,101,562,401
arxiv
\section*{Introduction} Given a number field ${\rm K}$ and a prime number $p$, the tame version of the conjecture of Fontaine-Mazur (conjecture (5a) of \cite{FM}) asserts that every finitely and tamely ramified continuous Galois representation $\rho : {\rm Gal}(\overline{{\rm K}}/{\rm K}) \rightarrow {\rm Gl}_m({\mathbb Q}_p)$ of the absolute Galois group of ${\rm K}$ has finite image. Let us mention briefly two strategies to attack this conjecture. $-$ The first one is to use the techniques coming from the considerations that inspired the conjecture, {\em i.e.}, from the Langlands program (geometric Galois representations, modular forms, deformation theory, etc.). For more than a decade, many authors have contributed to understanding the foundations of this conjecture, with some serious progress having been made. As a partial list of such results, we refer the reader to Buzzard-Taylor \cite{BT}, Buzzard \cite{Buzzard}, Kassaei \cite{Kassaei1}, Kisin \cite{Kisin}, Pilloni \cite{Pilloni}, Pilloni-Stroh \cite{Pilloni-Stroh}, etc. $-$ The second one consists in comparing properties of $p$-adic analytic pro-$p$ groups and arithmetic. Thanks to this strategy, Boston gave in the 90's the first evidence for the tame version of the Fontaine-Mazur conjecture (see \cite{Boston1}, \cite{Boston2}). This approach has been extended by Wingberg \cite{Wingberg}. See also \cite{Maire} and the recent work of Hajir-Maire \cite{H-M}. In all of these situations, the key fact is to use a semi-simple action. Typically, this approach gives no information for quadratic number fields when $p=2$. \ In this work, when $p=2$, we propose to give some families of imaginary quadratic number fields for which the unramified Fontaine-Mazur conjecture is true (conjecture (5b) of \cite{FM}). To do this, we compare the \'etale cohomology of ${\rm Spec } {\mathcal O}_{\rm K}$ and the cohomology of $p$-adic analytic pro-$p$ groups. In particular, we exploit the fact that in characteristic $2$, cup products in $H^2$ need not be alternating (meaning $x\cup x \neq 0$ is possible), and more specifically, a beautiful computation of cup-products in $ H_{et}^3({\rm Spec } {\mathcal O}_{\rm K}, {\mathbb F}_2)$ made by Carlson and Schlank in \cite{Carlson-Schlank}. And so, surprisingly, our strategy works \emph{only} for $p=2$! \ Given a prime number $p$, denote by ${\rm K}^{ur}(p)$ the maximal pro-$p$ unramified extension of ${\rm K}$; put ${\rm G}^{ur}_{\rm K}(p):={\rm Gal}({\rm K}^{ur}(p)/{\rm K})$. Here we are interested in uniform quotients of ${\rm G}^{ur}_{\rm K}(p)$ (see section \ref{section2} for the definition), which are related to the unramified Fontaine-Mazur conjecture thanks to the following equivalent version: \begin{conjecture} \label{conj2} Every uniform quotient ${\rm G}$ of ${\rm G}^{ur}_{\rm K}(p)$ is trivial. \end{conjecture} Remark that Conjecture \ref{conj2} can be rephrased as follows: the pro-$p$ group ${\rm G}^{ur}_{\rm K}(p)$ has no uniform quotient ${\rm G}$ of dimension $d$ for all $d>0$. Of course, this is obvious when $d>{\rm d}_p {\rm Cl}_{\rm K}$, and when $d\leq 2$, thanks to the fact that ${\rm G}^{ur}_{\rm K}(p)$ is FAb (see below for the definition). \medskip Now take $p=2$.
Let $(x_i)_{i=1,\cdots, n}$ be an ${\mathbb F}_2$-basis of $H^1({\rm G}^{ur}_{\rm K}(2),{\mathbb F}_2)\simeq H_{et}^1({\rm Spec } {\mathcal O}_{\rm K},{\mathbb F}_2)$, and consider the $n\times n$-square matrix ${\mathcal M}_{\rm K}:=(a_{i,j})_{i,j}$ with coefficients in ${\mathbb F}_2$, where $$a_{i,j}=x_i\cup x_i \cup x_j,$$ which makes sense thanks to the fact that here $H^3_{et}({\rm Spec } {\mathcal O}_{\rm K},{\mathbb F}_2) \simeq {\mathbb F}_2$. As we will see, this is the Gram matrix of a certain bilinear form defined, via the Artin symbol, on the Kummer radical of the $2$-elementary abelian maximal unramified extension ${\rm K}^{ur,2}/{\rm K}$ of ${\rm K}$. We will also see that for imaginary quadratic number fields, this matrix is often of large rank. First, we prove: \begin{Theorem} \label{maintheorem0} Let ${\rm K}$ be a totally imaginary number field. Let $n:=d_2{\rm Cl}_{\rm K}$ be the $2$-rank of the class group of ${\rm K}$. \begin{itemize} \item[$(i)$] Then the pro-$2$ group ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d>n-\frac{1}{2} \rm rk({\mathcal M}_{\rm K})$. \item[$(ii)$] Moreover, Conjecture \ref{conj2} holds (for $p=2$) when: \begin{enumerate} \item[$\bullet$] $n=3$, and $\rm rk({\mathcal M}_{\rm K})>0$; \item[$\bullet$] $n=4$, and $\rm rk({\mathcal M}_{\rm K}) \geq 3$; \item[$\bullet$] $n=5$, and $\rm rk({\mathcal M}_{\rm K})=5$. \end{enumerate} \end{itemize} \end{Theorem} By relating the matrix ${\mathcal M}_{\rm K}$ to a R\'edei-type matrix, and thanks to the work of Gerth \cite{Gerth} and Fouvry-Kl\"uners \cite{Fouvry-Klueners}, one can also deduce some density information when ${\rm K}$ varies in the family ${\mathcal F}$ of imaginary quadratic fields. For $n,d, X \geq 0$, denote by $${\rm S}_X:=\{ {\rm K} \in {\mathcal F}, \ -{\rm disc}_{\rm K} \leq X\}, \ \ {\rm S}_{n,X}:=\{ {\rm K}\in {\rm S}_{X}, \ \ d_2 {\rm Cl}_{\rm K}=n\}$$ $${\rm FM}_{n,X}^{(d)}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > d\},$$ $$ {\rm FM}_{n,X}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm Conjecture \ \ref{conj2} \ holds \ for \ }{\rm K}\},$$ and consider the limits: $${\rm FM}_n^{(d)}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm FM}_{n,X}^{(d)}}{\#{\rm S}_{n,X}}, \ \ \ {\rm FM}_{n}:=\liminf_{X \rightarrow + \infty} \frac{\# {\rm FM}_{n,X}}{\#{\rm S}_{n,X}}.$$ ${\rm FM}_n$ measures the proportion of imaginary quadratic fields ${\rm K}$ with $d_2 {\rm Cl}_{\rm K}=n$, for which Conjecture \ref{conj2} holds (for $p=2$); and ${\rm FM}_n^{(d)}$ measures the proportion of imaginary quadratic fields ${\rm K}$ with $d_2 {\rm Cl}_{\rm K}=n$, for which ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $>d$. \medskip Then \cite{Gerth} allows us to obtain the following densities for uniform groups of small dimension: \begin{Corollary} \label{coro-intro1} One has: \begin{enumerate} \item[$(i)$] ${\rm FM}_3 \geq 0.992187$, \item[$(ii)$] ${\rm FM}_4 \geq 0.874268$, ${\rm FM}_4^{(4)} \geq 0.999695$, \item[$(iii)$] ${\rm FM}_5 \geq 0.331299$, ${\rm FM}_5^{(4)} \geq 0.990624$, ${\rm FM}_5^{(5)} \geq 0.9999943$, \item[$(iv)$] for all $d\geq 3$, ${\rm FM}_{d}^{(1+d/2)} \geq 0.866364 $, and ${\rm FM}_d^{(2+d/2)} \geq 0.999953$. \end{enumerate} \end{Corollary} \medskip \begin{Remark} At this level, one should make two observations.
1) Perhaps for many ${\rm K} \in {\rm S}_{3,X}$ and ${\rm S}_{4,X}$, the pro-$2$ group ${\rm G}^{ur}_{\rm K}(2)$ is finite but, by the theorem of Golod-Shafarevich (see for example \cite{Koch}), for every ${\rm K} \in {\rm S}_{n,X}$, $n\geq 5$, the pro-$2$ group ${\rm G}^{ur}_{\rm K}(2)$ is infinite. 2) In our work, it will appear that we have no information about Conjecture \ref{conj2} for number fields ${\rm K}$ for which the $4$-rank of the class group is large. Typically, in ${\rm FM}_i$ one excludes all the number fields having maximal $4$-rank. \end{Remark} To conclude, let us mention a general asymptotic estimate thanks to the work of Fouvry-Kl\"uners \cite{Fouvry-Klueners}. Put $${\rm FM}_{X}^{[i]}:=\{{\rm K} \in {\rm S}_{X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > i+ \frac{1}{2}d_2 {\rm Cl}_{\rm K}\}$$ and $${\rm FM}^{[i]}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm FM}^{[i]}_X}{\# {\rm S}_{X}}.$$ Our work allows us to obtain: \begin{Corollary} \label{coro-intro2} One has: $${\rm FM}^{[1]} \geq 0.0288788, \ \ {\rm FM}^{[2]} \geq 0.994714, \ \ {\rm and } \ \ {\rm FM}^{[3]} \geq 1-9.7 \cdot 10^{-8}.$$ \end{Corollary} \ \ This paper has three sections. In Section 1 and Section 2, we give the basic tools concerning the \'etale cohomology of number fields and $p$-adic analytic groups. Section 3 is devoted to arithmetic considerations. After the presentation of our strategy, we develop some basic facts about bilinear forms over ${\mathbb F}_2$, especially for the form introduced in our study (which is defined on a certain Kummer radical). In particular, we insist on the role played by totally isotropic subspaces. To finish, we consider a relation with a R\'edei matrix that allows us to obtain density information. \ \ \medskip {\bf Notations.} Let $p$ be a prime number and let ${\rm K}$ be a number field. Denote by \begin{enumerate} \item[$-$] $p^*=(-1)^{(p-1)/2}p$, when $p$ is odd; \item[$-$] ${\mathcal O}_{\rm K}$ the ring of integers of ${\rm K}$; \item[$-$] ${\rm Cl}_{\rm K}$ the $p$-Sylow subgroup of the class group of ${\mathcal O}_{\rm K}$; \item[$-$] ${\rm K}^{ur}$ the maximal profinite extension of ${\rm K}$ unramified everywhere. Put ${\mathcal G}_{\rm K}={\rm Gal}({\rm K}^{ur}/{\rm K})$; \item[$-$] ${\rm K}^{ur}(p)$ the maximal pro-$p$ extension of ${\rm K}$ unramified everywhere. Put ${\rm G}^{ur}_{{\rm K}}(p):={\rm Gal}({\rm K}^{ur}(p)/{\rm K})$; \item[$-$] ${\rm K}^{ur,p}$ the elementary abelian maximal unramified $p$-extension of ${\rm K}$. \end{enumerate} Recall that the group ${\rm G}^{ur}_{{\rm K}}(p)$ is a finitely presented pro-$p$ group. See \cite{Koch}. See also \cite{NSW} or \cite{Gras}. Moreover, by class field theory, ${\rm Cl}_{\rm K}$ is isomorphic to the abelianization of ${\rm G}^{ur}_{\rm K}(p)$. In particular, this implies that every open subgroup ${\mathcal H}$ of ${\rm G}^{ur}_{{\rm K}}(p)$ has finite abelianization: this property is known as ``FAb''. \ \section{Etale cohomology: what we need} \label{section1} \subsection{} For what follows, the references are plentiful: \cite{Mazur}, \cite{Milne}, \cite{Milne1}, \cite{Schmidt1}, \cite{Schmidt2}, etc. \medskip Assume that ${\rm K}$ is totally imaginary when $p=2$, and put ${\mathcal X}_{\rm K}= {\rm Spec } {\mathcal O}_{\rm K}$.
The Hochschild-Serre spectral sequence (see \cite{Milne}) gives for every $i\geq 1$ a map $$\alpha_i : H^i ({\rm G}^{ur}_{\rm K}(p)) \longrightarrow H^i_{et}({\mathcal X}_{\rm K}), $$ where the coefficients are in ${\mathbb F}_p$ (meaning the constant sheaf for the \'etale site ${\mathcal X}_{\rm K}$). As $\alpha_1$ is an isomorphism, one obtains the long exact sequence: $$H^2({\rm G}^{ur}_{\rm K}(p)) \hookrightarrow H^2_{et}({\mathcal X}_{\rm K}) \longrightarrow H_{et}^{2}({\mathcal X}_{{\rm K}^{ur}(p)}) \longrightarrow H^3({\rm G}^{ur}_{\rm K}(p)) \longrightarrow H^3_{et}({\mathcal X}_{\rm K}), $$ where $H^3_{et}({\mathcal X}_{\rm K}) \simeq (\mu_{{\rm K},p})^\vee$; here $(\mu_{{\rm K},p})^\vee$ is the Pontryagin dual of the group of $p$th roots of unity in ${\rm K}$. \medskip \subsection{} Take now $p=2$, and let $x,y,z \in H^1_{et}({\mathcal X}_{\rm K})$. In \cite{Carlson-Schlank}, Carlson and Schlank give a formula in order to determine the cup-product $x \cup y \cup z \in H^3_{et}({\mathcal X}_{\rm K})$. In particular, they show how to produce some arithmetical situations for which such cup-products $x\cup x \cup y $ are not zero. Now, one has the commutative diagram: $$\xymatrix{H^3({\rm G}^{ur}_{\rm K}(p)) \ar[r]^{\alpha_3} & H^3_{et}({\mathcal X}_{\rm K}) \\ H^1({\rm G}^{ur}_{\rm K}(p))^{\otimes^3}\ar[r]^\simeq_{\alpha_1} \ar[u]^{\beta} & \ar[u]^{\beta_{et}} H^1_{et}({\mathcal X}_{\rm K})^{\otimes^3}}$$ Hence $(\alpha_3\circ \beta)(a\otimes b \otimes c)=\alpha_1(a)\cup \alpha_1(b)\cup \alpha_1(c)$. By taking $x=\alpha_1(a)=\alpha_1(b)$ and $y=\alpha_1(c)$, one gets $a\cup a \cup c \neq 0 \in H^3({\rm G}^{ur}_{\rm K}(p))$ when $x\cup x\cup y \neq 0 \in H^3_{et}({\mathcal X}_{\rm K})$. \subsection{The computation of Carlson and Schlank} \label{section:Carlson-Schlank} Take two non-trivial characters $x$ and $y$ in $H^1({\rm G}^{ur}_{\rm K}(p))\simeq H^1_{et}({\mathcal X}_{\rm K})$. Put ${\rm K}_x={\rm K}^{ker(x)}$ and ${\rm K}_y={\rm K}^{ker(y)}$. By Kummer theory, there exist $a_x, a_y \in {\rm K}^\times/({\rm K}^\times)^2$ such that ${\rm K}_x={\rm K}(\sqrt{a_x})$ and ${\rm K}_y={\rm K}(\sqrt{a_y})$. As the extension ${\rm K}_y/{\rm K}$ is unramified, for every prime ideal ${\mathfrak p}$ of ${\mathcal O}_{\rm K}$, the ${\mathfrak p}$-valuation $v_{\mathfrak p}(a_y)$ is even, and then $\sqrt{(a_y)}$ makes sense (as an ideal of ${\mathcal O}_{\rm K}$). Let us write $$\sqrt{(a_y)}:=\prod_i{\mathfrak p}_{y,i}^{e_{y,i}}.$$ Denote by $I_x$ the set of prime ideals ${\mathfrak p}$ of ${\mathcal O}_{\rm K}$ such that ${\mathfrak p}$ is inert in ${\rm K}_x/{\rm K}$ (or equivalently, $I_x$ is the set of primes of ${\rm K}$ such that the Frobenius at ${\mathfrak p}$ generates ${\rm Gal}({\rm K}_x/{\rm K})$). \begin{prop}[Carlson and Schlank] \label{proposition:C-S} The cup-product $x\cup x \cup y \in H^3_{et}({\mathcal X}_{\rm K})$ is non-zero if and only if $\displaystyle{\sum_{{\mathfrak p}_{y,i} \in I_x}e_{y,i}}$ is odd. \end{prop} \begin{rema} \label{remarque:symbole} The condition of Proposition \ref{proposition:C-S} is equivalent to the non-triviality of the Artin symbol $\displaystyle{\left(\frac{{\rm K}_x/{\rm K}}{\sqrt{(a_y)}}\right)}$. Hence if one takes $b_y=a_y \alpha^2$ with $\alpha\in {\rm K}$ instead of $a_y$ then, as $\displaystyle{\left(\frac{{\rm K}_x/{\rm K}}{(\alpha)}\right)}$ is trivial, the condition is well-defined. \end{rema} Let us give an easy example inspired by a computation of \cite{Carlson-Schlank}. \begin{prop}\label{criteria} Let ${\rm K}/{\mathbb Q}$ be an imaginary quadratic field.
Suppose that there exist two distinct odd prime numbers $p$ and $q$, ramified in ${\rm K}/{\mathbb Q}$, such that $\displaystyle{\left(\frac{p^*}{q}\right)=-1}$. Then there exist $x \neq y \in H^1_{et}({\mathcal X}_{\rm K})$ such that $x\cup x \cup y \neq 0$. \end{prop} \begin{proof} Take ${\rm K}_x={\rm K}(\sqrt{p^*})$ and ${\rm K}_y={\rm K}(\sqrt{q^*})$, and apply Proposition \ref{proposition:C-S}. \end{proof} \section{Uniform pro-$p$ groups and arithmetic: what we need} \label{section2} \subsection{} Let us start with the definition of a uniform pro-$p$ group (see for example \cite{DSMN}). \begin{defi} Let ${\rm G}$ be a finitely generated pro-$p$ group. We say that ${\rm G}$ is uniform if: \begin{enumerate} \item[$-$] ${\rm G}$ is torsion free, and \item [$-$] $[{\rm G},{\rm G}] \subset {\rm G}^{2p}$. \end{enumerate} \end{defi} \begin{rema} For a uniform group ${\rm G}$, the $p$-rank of ${\rm G}$ coincides with the dimension of ${\rm G}$. \end{rema} The uniform pro-$p$ groups play a central role in the study of $p$-adic analytic pro-$p$ groups; indeed: \begin{theo}[Lazard \cite{lazard}] \label{Lazard0} Let ${\rm G}$ be a profinite group. Then ${\rm G}$ is $p$-adic analytic, {\em i.e.}, ${\rm G} \hookrightarrow_c {\rm Gl}_m({\mathbb Z}_p)$ for a certain positive integer $m$, if and only if ${\rm G}$ contains an open uniform subgroup ${\mathcal H}$. \end{theo} \begin{rema} For different equivalent definitions of $p$-adic analytic groups, see \cite{DSMN}. See also \cite{Lubotzky-Mann}. \end{rema} \begin{exem} The correspondence between $p$-adic analytic pro-$p$ groups and ${\mathbb Z}_p$-Lie algebras via the log/exp maps allows one to give examples of uniform pro-$p$ groups (see \cite{DSMN}, see also \cite{H-M}). Typically, let ${\mathfrak s} {\mathfrak l}_n({\mathbb Q}_p)$ be the ${\mathbb Q}_p$-Lie algebra of $n\times n$ square matrices with coefficients in ${\mathbb Q}_p$ and zero trace. It is a simple algebra of dimension $n^2-1$. Take the natural basis: \begin{enumerate} \item[(a)] for $i \neq j$, $E_{i,j}=(e_{k,l})_{k,l}$, for which all the coefficients are zero except $e_{i,j}$, which takes the value $2p$; \item[(b)] for $i>1$, $D_i=(d_{k,l})_{k,l}$, which is the diagonal matrix $D_i={\rm diag}(2p,0,\cdots,0,-2p,0,\cdots, 0)$ with $d_{i,i}=-2p$. \end{enumerate} Let ${\mathfrak s} {\mathfrak l}_n$ be the ${\mathbb Z}_p$-Lie algebra generated by the $ E_{i,j}$ and the $D_i$. Put $X_{i,j}=\exp E_{i,j}$ and $Y_i=\exp D_i$. Denote by ${\rm Sl}_n^1({\mathbb Z}_p)$ the subgroup of ${\rm Sl}_n({\mathbb Z}_p)$ generated by the matrices $X_{i,j}$ and $Y_i$. The group ${\rm Sl}_n^1({\mathbb Z}_p)$ is uniform and of dimension $n^2-1$. It is also the kernel of the reduction map of ${\rm Sl}_n({\mathbb Z}_p)$ modulo $2p$. Moreover, ${\rm Sl}_n^1({\mathbb Z}_p)$ is also FAb, meaning that every open subgroup ${\mathcal H}$ has finite abelianization. \end{exem} \medskip Recall by Lazard \cite{lazard} (see also \cite{Symonds-Weigel} for an alternative proof): \begin{theo}[Lazard \cite{lazard}] \label{Lazard} Let ${\rm G}$ be a uniform pro-$p$ group (of dimension $d>0$). Then for all $i\geq 1$, one has: $$H^i({\rm G}) \simeq \bigwedge^i H^1({\rm G}),$$ where the exterior product is induced by the cup-product. \end{theo} As a consequence, one has immediately: \begin{coro} Let ${\rm G}$ be a uniform pro-$p$ group. Then for all $x,y \in H^1({\rm G})$, one has $x\cup x \cup y =0 \in H^3({\rm G})$.
\end{coro} \begin{rema} For $p>2$, Theorem \ref{Lazard} is an equivalence: a pro-$p$ group ${\rm G}$ is uniform if and only if, for $i\geq 1$, $H^i({\rm G}) \simeq \bigwedge^i H^1({\rm G})$. (See \cite{Symonds-Weigel}.) \end{rema} Let us mention another consequence useful in our context: \begin{coro} Let ${\rm G}$ be a FAb uniform pro-$p$ group of dimension $d>0$. Then $d\geq 3$. \end{coro} \begin{proof} Indeed, if $\dim {\rm G}=1$, then ${\rm G} \simeq {\mathbb Z}_p$ (${\rm G}$ is pro-$p$ free) and, if $\dim{\rm G}=2$, then by Theorem \ref{Lazard}, $H^2({\rm G}) \simeq {\mathbb F}_p$, so ${\rm G}$ has two generators and a single relation, and ${\rm G}^{ab} \twoheadrightarrow {\mathbb Z}_p$. In both cases ${\rm G}$ is not FAb. Hence, $\dim {\rm G}$ should be at least $3$. \end{proof} \subsection{} Let us recall the Fontaine-Mazur conjecture (5b) of \cite{FM}. \begin{conjectureNC} \label{conj1} Let ${\rm K}$ be a number field. Then every continuous Galois representation $\rho : {\mathcal G}_{\rm K} \rightarrow {\rm Gl}_m({\mathbb Z}_p)$ has finite image. \end{conjectureNC} Following Theorem \ref{Lazard0} of Lazard, we see that proving Conjecture (5b) of \cite{FM} for ${\rm K}$ is equivalent to proving Conjecture \ref{conj2} for every finite extension ${\rm L}/{\rm K}$ in ${\rm K}^{ur}/{\rm K}$. \section{Arithmetic consequences} \subsection{The strategy} \label{section:strategy} Usually, when $p$ is odd, cup-products factor through the exterior product. But, for $p=2$, this is not the case! This is the simple observation that we will combine with \'etale cohomology and with the cohomology of uniform pro-$p$ groups. \ From now on we assume that $p=2$. \ Suppose we are given a non-trivial uniform quotient ${\rm G}$ of ${\rm G}^{ur}_{\rm K}(p)$. Then by the inflation map one has: $$H^1({\rm G}) \hookrightarrow H^1({\rm G}^{ur}_{\rm K}(p)).$$ Now take $a,b \in H^1({\rm G}^{ur}_{\rm K}(p))$ coming from $H^1({\rm G})$. Then, the cup-product $a\cup a \cup b \in H^3({\rm G}^{ur}_{\rm K}(p))$ comes from $H^3({\rm G})$ by the inflation map. In other words, one has the following commutative diagram: $$\xymatrix{H^3({\rm G}) \ar[r]^{inf}&H^3({\rm G}^{ur}_{\rm K}(p)) \ar[r]^{\alpha_3} & H^3_{et}({\mathcal X}_{\rm K}) \\ H^1({\rm G})^{\otimes^3} \ar@{->>}[u]^{\beta_0} \ar@{^(->}[r] &H^1({\rm G}^{ur}_{\rm K}(p))^{\otimes ^3}\ar[r]^\simeq \ar[u]^\beta & \ar[u]^{\beta_{et}} H^1_{et}({\mathcal X}_{\rm K})^{\otimes^3} }$$ But by Lazard's result (Theorem \ref{Lazard}), $\beta_0(a\otimes a \otimes b)=0$, and then one gets a contradiction if $\alpha_1(a)\cup \alpha_1(a) \cup \alpha_1(b) $ is non-zero in $H^3_{et}({\mathcal X}_{\rm K})$: it is at this level that one may use the computation of Carlson-Schlank. \medskip Before developing this observation in the context of analytic pro-$2$ groups, let us give two immediate consequences: \begin{coro} Let ${\rm K}/{\mathbb Q}$ be an imaginary quadratic number field satisfying the condition of Proposition \ref{criteria}. Then ${\rm G}^{ur}_{{\rm K}}(2)$ is of cohomological dimension at least $3$. \end{coro} \begin{proof} Indeed, there exists a non-trivial cup-product $x\cup x \cup y \in H^3_{et}({\mathcal X}_{\rm K})$, and then the corresponding cup-product is non-trivial in $H^3({\rm G}^{ur}_{\rm K}(2))$. \end{proof} \medskip \begin{coro}\label{coro-exemple} Let $p_1,p_2,p_3,p_4$ be four prime numbers such that $p_1 p_2 p_3 p_4 \equiv 3 \pmod 4$. Take ${\rm K}={\mathbb Q}(\sqrt{-p_1 p_2 p_3 p_4})$. Suppose that there exist $i\neq j$ such that $\displaystyle{\left(\frac{p_{i}^*}{p_{j}}\right)=-1}$. Then ${\rm G}^{ur}_{\rm K}(2)$ has no non-trivial uniform quotient.
\end{coro} \begin{rema} Here, one may replace $p_1$ by $2$. But the infiniteness of ${\rm G}^{ur}_{\rm K}(2)$ is not guaranteed in all cases, as we are outside the conditions of the result of Hajir \cite{Hajir}. We will see the reason later. \end{rema} \begin{proof} Let us start with a non-trivial uniform quotient ${\rm G}$ of ${\rm G}^{ur}_{\rm K}(2)$. As, by class field theory, the pro-$2$ group ${\rm G}$ must be FAb, it is of dimension $3$, {\em i.e.}, $H^1({\rm G}) \simeq H^1({\rm G}^{ur}_{\rm K}(2))$. By Proposition \ref{criteria}, there exist $x,y \in H^1({\rm G})$ such that $x\cup x\cup y \neq 0 \in H^3_{et}({\mathcal X}_{\rm K})$, and the ``strategy'' applies. \end{proof} Now, we would like to extend this last construction. \subsection{Bilinear forms over ${\mathbb F}_2$ and Conjecture \ref{conj2}} \subsubsection{Totally isotropic subspaces} Let ${\mathcal B}$ be a bilinear form on an ${\mathbb F}_2$-vector space ${\mathcal V}$ of finite dimension. Denote by $n$ the dimension of ${\mathcal V}$ and by ${\rm rk}({\mathcal B})$ the rank of ${\mathcal B}$. \begin{defi} Given a bilinear form ${\mathcal B}$, one defines the index $\nu({\mathcal B})$ of ${\mathcal B}$ by $$\nu({\mathcal B}):=\max \{ \dim W, \ {\mathcal B}(W,W)=0\}.$$ \end{defi} The index $\nu({\mathcal B})$ is then an upper bound for the dimension of the totally isotropic subspaces ${\rm W}$ of ${\mathcal V}$. As we will see, the index $\nu({\mathcal B})$ is well-known when ${\mathcal B}$ is symmetric. For the general case, one has: \begin{prop} \label{bound-nu} The index $\nu({\mathcal B})$ of a bilinear form ${\mathcal B}$ is at most $n - \frac{1}{2}{\rm rk}({\mathcal B})$. \end{prop} \begin{proof} Let ${\rm W}$ be a totally isotropic subspace of ${\mathcal V}$ of dimension $i$. Let us complete a basis of ${\rm W}$ to a basis ${\rm B}$ of ${\mathcal V}$. The Gram matrix of ${\mathcal B}$ in ${\rm B}$ then has an $i\times i$ zero block in its upper-left corner, so it is of rank at most $2n-2i$. \end{proof} This bound is in a certain sense optimal, as we can achieve it in the symmetric case. \begin{defi} $(i)$ Given $a\in {\mathbb F}_2$, the bilinear form $b(a)$ with matrix $\left(\begin{array}{cc} a&1 \\ 1&0 \end{array}\right) $ is called a metabolic plane. A metabolic form is an orthogonal sum of metabolic planes (up to isometry). $(ii)$ A symmetric bilinear form $(V,{\mathcal B})$ is called alternating if ${\mathcal B}(x,x) = 0$ for all $x\in V$. Otherwise ${\mathcal B}$ is called nonalternating. \end{defi} \medskip Recall now a well-known result on symmetric bilinear forms over ${\mathbb F}_2$. \begin{prop}\label{proposition:dimension-isotropic} Let $(V,{\mathcal B})$ be a symmetric bilinear form of dimension $n$ over ${\mathbb F}_2$. Denote by $r$ the rank of ${\mathcal B}$. Write $r=2r_0 +\delta$, with $\delta =0$ or $1$, and $r_0 \in {\mathbb N}$. \begin{enumerate} \item[$(i)$] If ${\mathcal B}$ is nonalternating, then $(V,{\mathcal B})$ is isometric to $$ \overbrace{b(1) \bot \cdots \bot b(1)}^{r_0} \bot \overbrace{\langle 1 \rangle}^{\delta} \bot \overbrace{\langle 0 \rangle \bot \cdots \bot \langle 0 \rangle}^{n-r} \ \simeq_{iso} \ \overbrace{\langle 1 \rangle \bot \cdots \bot \langle 1 \rangle}^{r} \bot \overbrace{\langle 0 \rangle \bot \cdots \bot \langle 0 \rangle}^{n-r} ;$$ \item[$(ii)$] If $ {\mathcal B}$ is alternating, then ${\mathcal B}$ is isometric to $$ \overbrace{b(0) \bot \cdots \bot b(0)}^{r_0} \bot \overbrace{\langle 0 \rangle \bot \cdots \bot \langle 0 \rangle}^{n-r}.$$ \end{enumerate} Moreover, $\nu({\mathcal B})=n-r+r_0=n-r_0-\delta$.
\end{prop} When $({\mathcal V},{\mathcal B})$ is not necessarily symmetric, let us introduce the symmetrization ${\mathcal B}^{sym}$ of ${\mathcal B}$, defined by $${\mathcal B}^{sym}(x,y)={\mathcal B}(x,y)+{\mathcal B}(y,x), \ \ \ \forall x,y \in {\mathcal V}.$$ One has: \begin{prop}\label{proposition:dimension-isotropic2} Let $({\mathcal V},{\mathcal B})$ be a bilinear form of dimension $n$ over ${\mathbb F}_2$. Then $$\nu({\mathcal B}) \geq n - \lfloor \frac{1}{2} {\rm rk}({\mathcal B}^{sym}) \rfloor - \lfloor \frac{1}{2} {\rm rk}({\mathcal B}) \rfloor.$$ In particular, $\nu({\mathcal B}) \geq n - \frac{3}{2} {\rm rk}({\mathcal B})$. \end{prop} \begin{proof} Let us start with a maximal totally isotropic subspace $W$ of $({\mathcal V},{\mathcal B}^{sym})$. Then ${\mathcal B}_{|{\rm W}}$ is symmetric: indeed, for any two $x,y \in {\rm W}$, we get $0={\mathcal B}^{sym}(x,y)={\mathcal B}(x,y)+{\mathcal B}(y,x)$, and then ${\mathcal B}(x,y)={\mathcal B}(y,x)$ (recall that ${\mathcal V}$ is defined over ${\mathbb F}_2$). Hence by Proposition \ref{proposition:dimension-isotropic}, ${\mathcal B}_{|{\rm W}}$ has a totally isotropic subspace of dimension $\nu({\mathcal B}_{|{\rm W}})=\dim {\rm W} - \lfloor \frac{1}{2} {\rm rk}({\mathcal B}_{|{\rm W}}) \rfloor$. As $\dim {\rm W}=n-\lfloor \frac{1}{2} {\rm rk}({\mathcal B}^{sym})\rfloor$ (by Proposition \ref{proposition:dimension-isotropic}), one obtains the first assertion. For the second one, it is enough to note that ${\rm rk}({\mathcal B}^{sym}) \leq 2 {\rm rk}({\mathcal B})$. \end{proof} \subsubsection{Bilinear form over the Kummer radical of the $2$-elementary abelian maximal unramified extension} Let us start with a totally imaginary number field ${\rm K}$. Denote by $n$ the $2$-rank of ${\rm G}^{ur}_{\rm K}(2)$, in other words, $n=d_2 {\rm Cl}_{\rm K}$. \medskip Let $V=\langle a_1,\cdots, a_n\rangle ({\rm K}^\times)^2 \subset {\rm K}^\times /({\rm K}^\times)^2$ be the Kummer radical of the $2$-elementary abelian maximal unramified extension ${\rm K}^{ur,2}/{\rm K}$. Then $V$ is an ${\mathbb F}_2$-vector space of dimension~$n$. As we have seen in Section \ref{section:Carlson-Schlank}, for every prime ideal ${\mathfrak p} $ of ${\mathcal O}_{\rm K}$, the ${\mathfrak p}$-valuation of $a_i$ is even, and then $\sqrt{(a_i)}$ makes sense as an ideal of ${\mathcal O}_{\rm K}$. \medskip For $x\in V$, denote ${\rm K}_x:={\rm K}(\sqrt{x})$, and ${\mathfrak a}(x):=\sqrt{(x)} \subset {\mathcal O}_{\rm K}$. We can now introduce the bilinear form ${\mathcal B}_{\rm K}$ that plays a central role in our work. \begin{defi} For $a,b \in V$, put: $${\mathcal B}_{\rm K}(a,b)=\left(\frac{{\rm K}_a/{\rm K}}{{\mathfrak a}(b)}\right)\cdot\sqrt{a} \Bigg/ \sqrt{a} \in {\mathbb F}_2,$$ where here we use the additive notation. \end{defi} \begin{rema} The Hilbert symbol between $a$ and $b$ is trivial due to the parity of $v_{\mathfrak q}(a)$. \end{rema} Of course, we have: \begin{lemm} The map ${\mathcal B}_{\rm K}: V \times V \rightarrow {\mathbb F}_2$ is a bilinear form on $V$. \end{lemm} \begin{proof} The linearity in the second variable comes from the linearity of the Artin symbol, and the linearity in the first variable is an easy observation.
\end{proof} \begin{rema} \label{remarque:matrix} If we denote by $\chi_{i}$ a generator of $H^1({\rm Gal}({\rm K}(\sqrt{a_i})/{\rm K}))$, then the Gram matrix of the bilinear form ${\mathcal B}_{\rm K}$ in the basis $\{a_1({\rm K}^\times)^2,\cdots, a_n({\rm K}^\times)^2\}$ is exactly the matrix $(\chi_{i} \cup \chi_i \cup \chi_j)_{i,j}$ of the cup-products in $H_{et}^3({\rm Spec } {\mathcal O}_{\rm K})$. See Proposition \ref{proposition:C-S} and Remark \ref{remarque:symbole}. Hence the bilinear form ${\mathcal B}_{\rm K}$ coincides with the bilinear form ${\mathcal B}_{\rm K}^{et}$ on $H_{et}^1({\rm Spec } {\mathcal O}_{\rm K})$ defined by ${\mathcal B}_{\rm K}^{et}(x,y)=x\cup x\cup y \ \in H^3_{et}({\rm Spec } {\mathcal O}_{\rm K})$. \end{rema} The bilinear form ${\mathcal B}_{\rm K}$ is not necessarily symmetric, but we will give later some situations where ${\mathcal B}_{\rm K}$ is symmetric. Let us now give two types of totally isotropic subspaces ${\rm W}$ that may appear. \begin{defi} The right-radical ${\mathcal R}ad_r$ of a bilinear form ${\mathcal B}$ on ${\mathcal V}$ is the subspace of ${\mathcal V}$ defined by: ${\mathcal R}ad_r:=\{x\in {\mathcal V}, \ {\mathcal B}({\mathcal V},x)=0\}$. \end{defi} Of course one always has $\dim {\mathcal V}={\rm rk}({\mathcal B}) + \dim {{\mathcal R}ad_r}$. Remark moreover that ${\mathcal R}ad_r$ is a totally isotropic subspace of ${\mathcal V}$. \medskip Let us come back to the bilinear form ${\mathcal B}_{\rm K}$ on the Kummer radical of ${\rm K}^{ur,2}/{\rm K}$. \begin{prop} Let ${\rm W}:=\langle \varepsilon_1, \cdots, \varepsilon_r \rangle ({\rm K}^\times)^2 \subset {\mathcal V}$ be an ${\mathbb F}_2$-subspace of dimension~$r$, generated by some units $\varepsilon_i \in {\mathcal O}_{\rm K}^\times$. Then ${\rm W} \subset {\mathcal R}ad_r $, and thus $({\mathcal V},{\mathcal B}_{\rm K})$ contains ${\rm W}$ as a totally isotropic subspace of dimension~$r$. \end{prop} \begin{proof} Indeed, here ${\mathfrak a}(\varepsilon_i)={\mathcal O}_{\rm K}$ for $i=1,\cdots, r$. \end{proof} \begin{prop} Let ${\rm K}={\rm k}(\sqrt{b})$ be a quadratic extension. Suppose that there exist $a_1,\cdots, a_r \in {\rm k}$ such that the extensions ${\rm k}(\sqrt{a_i})/{\rm k}$ are independent and unramified everywhere. Suppose moreover that $b \notin \langle a_1,\cdots, a_r \rangle ({\rm k}^\times)^2$. Then ${\rm W} := \langle a_1, \cdots,a_r \rangle ({\rm K}^\times)^2 $ is a totally isotropic subspace of dimension $r$. \end{prop} \begin{proof} Let ${\mathfrak p} \subset {\mathcal O}_{\rm k}$ be a prime ideal of ${\mathcal O}_{\rm k}$. It is sufficient to prove that $\displaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak p}}\right)}$ is trivial. Let us study all the possibilities. $\bullet$ If ${\mathfrak p}$ is inert in ${\rm K}/{\rm k}$, then as ${\rm K}(\sqrt{a_i})/{\rm K}$ is unramified at ${\mathfrak p}$, necessarily ${\mathfrak p}$ splits in ${\rm K}(\sqrt{a_i})/{\rm K}$ and then $\displaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak p}}\right)}$ is trivial. $\bullet$ If ${\mathfrak p}={\mathfrak P}^2$ is ramified in ${\rm K}/{\rm k}$, then $\displaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak p}}\right)= \left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak P}}\right)^2}$ is trivial.
$\bullet$ If ${\mathfrak p}={\mathfrak P}_1{\mathfrak P}_2$ splits, then obviously $\displaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak P}_1}\right)=\left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak P}_2}\right)}$, and then $\displaystyle{\left(\frac{{\rm K}_{a_i}/{\rm K}}{{\mathfrak p}}\right)}$ is trivial. \end{proof} \medskip It is then natural to define the index of ${\rm K}$ as follows: \begin{defi} The index $\nu({\rm K})$ of ${\rm K}$ is the index of the bilinear form ${\mathcal B}_{\rm K}$. \end{defi} Of course, if the form ${\mathcal B}_{\rm K}$ is non-degenerate, one has: $\nu({\rm K}) \leq \frac{1}{2} d_2 {\rm Cl}_{\rm K}$. We say that ${\rm Cl}_{\rm K}$ is non-degenerate if the form ${\mathcal B}_{\rm K}$ is non-degenerate. \medskip One can now present the main result of our work: \begin{theo} \label{maintheorem} Let ${\rm K}/{\mathbb Q}$ be a totally imaginary number field. Then ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d> \nu({\rm K})$. In particular: \begin{enumerate} \item[$(i)$] if $\nu({\rm K}) <3$, then Conjecture \ref{conj2} holds for ${\rm K}$ (and $p=2$); \item[$(ii)$] if ${\rm Cl}_{\rm K}$ is non-degenerate, then ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d > \frac{1}{2} d_2 {\rm Cl}_{\rm K}$. \end{enumerate} \end{theo} \begin{proof} Let ${\rm G}$ be a non-trivial uniform quotient of ${\rm G}^{ur}_{\rm K}(2)$ of dimension $d>0$. Let $W$ be the Kummer radical of $H^1({\rm G})^\vee $; here $W$ is a subspace of the Kummer radical ${\mathcal V}$ of ${\rm K}^{ur,2}/{\rm K}$. As $d>\nu({\rm K})$, the space ${\rm W}$ is not totally isotropic. Then, one can find $x,y \in H^1({\rm G}) \subset H^1_{et}({\mathcal X}_{\rm K})$ such that $x \cup x\cup y \in H^3_{et}({\mathcal X}_{\rm K})$ is not zero (by Proposition \ref{proposition:C-S}). See also Remark \ref{remarque:matrix}. And thanks to the strategy developed in Section \ref{section:strategy}, we are done for the first part of the theorem. $(i)$: as ${\rm G}^{ur}_{\rm K}(2)$ is FAb, every non-trivial uniform quotient ${\rm G}$ of ${\rm G}^{ur}_{\rm K}(2)$ should be of dimension $d\geq 3$. $(ii)$: in this case $\nu({\rm K}) \leq \frac{1}{2} {\rm rk}({\mathcal B}_{\rm K})$. \end{proof} We finish this section with the proof of the theorem presented in the introduction. $\bullet$ As $\nu({\rm K}) \leq n-\frac{1}{2} {\rm rk}({\mathcal B}_{\rm K})$ (see Proposition \ref{bound-nu} and Remark \ref{remarque:matrix}), the assertion $(i)$ can be deduced from Theorem \ref{maintheorem}. $\bullet$ This is an obvious observation for the small dimensions. In the three cases, $\nu({\rm K}) \leq n-\frac{1}{2} {\rm rk}({\mathcal B}_{\rm K}) <3$. \subsection{The imaginary quadratic case} \subsubsection{The context} \label{section:thecontext} Let us consider an imaginary quadratic field ${\rm K}={\mathbb Q}(\sqrt{D})$, $D \in {\mathbb Z}_{<0}$ square-free. Let $p_1,\cdots, p_{k+1}$ be the {\it odd} prime numbers dividing $D$. Let us write the discriminant ${\rm disc}_{\rm K}$ of ${\rm K}$ as: ${\rm disc}_{\rm K}=p_0^*\cdot p_1^* \cdots p_{k+1}^*$, where $p_0^*\in \{1, -4,\pm 8\}$. The $2$-rank $n$ of ${\rm Cl}_{\rm K}$ depends on the ramification of $2$ in ${\rm K}/{\mathbb Q}$.
Let ${\rm K}^{ur,2}$ be the $2$-elementary abelian maximal unramified extension of ${\rm K}$: \begin{enumerate} \item[$-$] if $2$ is unramified in ${\rm K}/{\mathbb Q}$, {\em i.e.}, $p_0^*=1$, then $n=k$ and $V=\langle p_1^*,\cdots, p_k^*\rangle ({\rm K}^\times)^2\subset {\rm K}^\times$ is the Kummer radical of ${\rm K}^{ur,2}/{\rm K}$; \item[$-$] if $2$ is ramified in ${\rm K}/{\mathbb Q}$, {\em i.e.}, $p_0^*=-4$ or $\pm 8$, then $n=k+1$ and $V=\langle p_1^*,\cdots, p_{k+1}^*\rangle ({\rm K}^\times)^2\subset {\rm K}^\times$ is the Kummer radical of ${\rm K}^{ur,2}/{\rm K}$. \end{enumerate} We denote by ${\mathcal S}=\{p_1^*,\cdots, p_{n}^*\}$ the ${\mathbb F}_2$-basis of $V$, where here $n=d_2 {\rm Cl}_{\rm K}$ ($=k$ or $k+1$). \begin{lemm} \medskip $(i)$ For $p^*\neq q^* \in {\mathcal S}$, one has: ${\mathcal B}_{\rm K}(p^*,q^*)=0$ if and only if $\displaystyle{\left(\frac{p^*}{q}\right)=1}$. $(ii)$ For $p |D$, put $D_p:=D/p^*$. Then for $p^*\in{\mathcal S}$, one has (in additive notation): $\displaystyle{{\mathcal B}_{\rm K}(p^*,p^*)=\left(\frac{D_p}{p}\right)}$. \end{lemm} \begin{proof} Obvious. \end{proof} Hence the matrix of the bilinear form ${\mathcal B}_{\rm K}$ in the basis ${\mathcal S}$ is a square $n\times n$ R\'edei-type matrix ${{\mathcal M}_{\rm K}}=\left( m_{i,j}\right)_{i,j}$, where $$m_{i,j}=\left\{\begin{array}{ll} \displaystyle{ \left(\frac{p_i^*}{p_j}\right)} & {\rm if \ } i\neq j, \\ \displaystyle{ \left(\frac{D_{p_i}}{p_i}\right)} & {\rm if \ } i=j. \end{array}\right.$$ Here, as usual, one uses the additive notation (the $1$'s are replaced by $0$'s and the $-1$'s by~$1$'s). \medskip \begin{exem} \label{exemple-Boston} Take ${\rm K}={\mathbb Q}(\sqrt{-4\cdot 3 \cdot 5 \cdot 7 \cdot 13})$. This quadratic field has a root discriminant $|{\rm disc}_{\rm K}|^{1/2}= 73.89 \cdots$, but it is actually not known whether ${\rm G}^{ur}_{\rm K}(2)$ is finite or not; see the recent works of Boston and Wang \cite{Boston-Wang}. Take ${\mathcal S}=\{-3,-5,-7,-13\}$. Then the Gram matrix of ${\mathcal B}_{\rm K}$ in ${\mathcal S}$ is: $${\mathcal M}_{\rm K}=\left(\begin{array}{cccc} 1&1&1&0 \\ 1&1&1&1 \\ 0&1&1&1 \\ 0&1&1&0 \end{array}\right).$$ Hence ${\rm rk}({\mathcal B}_{\rm K})=3$ and $\nu({\rm K}) \leq 4- \frac{3}{2} =2.5$. By Theorem \ref{maintheorem}, one concludes that ${\rm G}^{ur}_{\rm K}(2)$ has no non-trivial uniform quotient. By Proposition \ref{proposition:dimension-isotropic2}, remark that here one has: $\nu({\rm K})=2$. \end{exem} Let us recall here a part of the theorem of the introduction: \begin{coro} \label{coro-FM-quadratic} Conjecture \ref{conj2} holds when: \begin{enumerate} \item[$(i)$] $d_2{\rm Cl}_{\rm K}= 5$ and ${\mathcal B}_{\rm K}$ is non-degenerate; \item[$(ii)$] $d_2{\rm Cl}_{\rm K}=4$ and ${\rm rk}({\mathcal B}_{\rm K}) \geq 3$; \item[$(iii)$] $d_2{\rm Cl}_{\rm K}=3$ and ${\rm rk}({\mathcal B}_{\rm K}) >0$. \end{enumerate} \end{coro} Remark that $(iii)$ is an extension of Corollary \ref{coro-exemple}. \subsubsection{Symmetric bilinear forms. Examples} Let us keep the context of the previous Section \ref{section:thecontext}. Then, thanks to the quadratic reciprocity law, one gets: \begin{prop} \label{prop:congruence} The bilinear form ${\mathcal B}_{\rm K}: V \times V \rightarrow {\mathbb F}_2$ is symmetric if and only if there is at most one prime $p \equiv 3 \pmod 4$ dividing $D$. \end{prop} \begin{proof} Obvious. \end{proof} Let us give some examples (the computations have been done by using PARI/GP \cite{pari}).
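Such computations are also easy to reproduce. The following minimal Python sketch (an illustration only, assuming the {\tt sympy} package; it is not the PARI/GP code actually used) recomputes the Gram matrix of Example \ref{exemple-Boston}, its rank over ${\mathbb F}_2$, and the index $\nu({\rm K})$ by brute force. Depending on the indexing convention one may obtain the transpose of the matrix displayed above, which changes neither the rank nor the index.

\begin{verbatim}
from itertools import combinations, product
from sympy import legendre_symbol

def f2(s):                  # Legendre symbol +1/-1 in additive notation
    return 0 if s == 1 else 1

def pstar(p):               # p^* = (-1)^((p-1)/2) p
    return p if p % 4 == 1 else -p

def gram(D, primes):
    # Gram matrix of B_K in the basis {p_i^*}: m_{ij} = (p_i^*/p_j) off
    # the diagonal, m_{ii} = (D_{p_i}/p_i); square factors of D do not
    # change the symbols.
    return [[f2(legendre_symbol(D // pstar(p), p)) if i == j
             else f2(legendre_symbol(pstar(p), q))
             for j, q in enumerate(primes)]
            for i, p in enumerate(primes)]

def rank_f2(M):             # Gaussian elimination over F_2
    M, r = [row[:] for row in M], 0
    for c in range(len(M[0])):
        piv = next((k for k in range(r, len(M)) if M[k][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for k in range(len(M)):
            if k != r and M[k][c]:
                M[k] = [(a + b) % 2 for a, b in zip(M[k], M[r])]
        r += 1
    return r

def B(M, x, y):             # the bilinear form attached to M
    n = len(M)
    return sum(x[i] * M[i][j] * y[j]
               for i in range(n) for j in range(n)) % 2

def nu(M):                  # brute force: max dim W with B(W,W) = 0
    n = len(M)
    vecs = [v for v in product((0, 1), repeat=n) if any(v)]
    for k in range(n, 0, -1):
        for S in combinations(vecs, k):
            if (rank_f2([list(v) for v in S]) == k
                    and all(B(M, u, v) == 0 for u in S for v in S)):
                return k
    return 0

M = gram(-4 * 3 * 5 * 7 * 13, [3, 5, 7, 13])
print(rank_f2(M), nu(M))    # -> 3 2
\end{verbatim}

One recovers ${\rm rk}({\mathcal B}_{\rm K})=3$ and $\nu({\rm K})=2$, as in Example \ref{exemple-Boston}.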
\begin{exem} Take $k+1$ prime numbers $p_1,\cdots, p_{k+1}$, such that \begin{enumerate} \item[$\bullet$] $p_1\equiv \cdots \equiv p_{k} \equiv 1 \pmod 4$ and $p_{k+1} \equiv 3 \pmod 4$; \item[$\bullet$] for $1\leq i < j \leq k$, $\displaystyle{\left(\frac{p_i}{p_j}\right)=1}$; \item[$\bullet$] for $i=1,\cdots, k$, $\displaystyle{\left(\frac{p_i}{p_{k+1}}\right)=-1}$. \end{enumerate} Put ${\rm K}={\mathbb Q}(\sqrt{-p_1 \cdots p_{k+1}})$. In this case the matrix of the bilinear form ${\mathcal B}_{\rm K}$ in the basis $(p_i^*)_{1 \leq i \leq k}$ is the $k \times k$ identity matrix and $\nu({\rm K})=\lfloor \frac{k}{2} \rfloor$. Hence, ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension at least $\lfloor \frac{k}{2} \rfloor +1$. In particular, if we choose an integer $t>0$ such that $\sqrt{k+1}\geq t \geq \sqrt{\lfloor \frac{k}{2} \rfloor +2}$, then there is no quotient of ${\rm G}^{ur}_{\rm K}(2)$ onto ${\rm Sl}_t^1({\mathbb Z}_2)$. (If $t > \sqrt{k+1}$, it is obvious.) \medskip Here are some more examples. For ${\rm K}_1={\mathbb Q}(\sqrt{-5\cdot 29 \cdot 109 \cdot 281 \cdot 349 \cdot 47})$, ${\rm G}^{ur}_{{\rm K}_1}(2)$ has no uniform quotient; here ${\rm Cl}({\rm K}_1) \simeq ({\mathbb Z}/2{\mathbb Z})^5$. Take ${\rm K}_2={\mathbb Q}(\sqrt{-5\cdot 29 \cdot 109 \cdot 281 \cdot 349 \cdot 1601 \cdot 1889 \cdot 5581 \cdot 3847})$; here ${\rm Cl}({\rm K}_2) \simeq ({\mathbb Z}/2{\mathbb Z})^8 $. Then ${\rm G}^{ur}_{{\rm K}_2}(2)$ has no uniform quotient of dimension at least $5$. In particular, there is no unramified extension of~${\rm K}_2$ with Galois group isomorphic to ${\rm Sl}_3^1({\mathbb Z}_2)$. \end{exem} \medskip \begin{exem} Take $2m+1$ prime numbers $p_1,\cdots, p_{2m+1}$, such that \begin{enumerate} \item[$\bullet$] $p_1\equiv \cdots \equiv p_{2m} \equiv 1 \pmod 4$ and $p_{2m+1} \equiv 3 \pmod 4$; \item[$\bullet$] $\displaystyle{\left(\frac{p_1}{p_2}\right)= \left(\frac{p_3}{p_4}\right) = \cdots = \left(\frac{p_{2m-1}}{p_{2m}}\right) = -1}$; \item[$\bullet$] for the other indices $1\leq i <j \leq 2m$, $\displaystyle{\left(\frac{p_i}{p_j}\right)= 1}$; \item[$\bullet$] for $i=1,\cdots, 2m$, $\displaystyle{\left(\frac{p_i}{p_{2m+1}}\right)=-1}$. \end{enumerate} Put ${\rm K}={\mathbb Q}(\sqrt{-p_1\cdots p_{2m+1}})$. In this case the bilinear form ${\mathcal B}_{\rm K}$ is nondegenerate and alternating, hence isometric to $ \displaystyle{\overbrace{b(0) \bot \cdots \bot b(0)}^{m} }$. Hence, $\nu({\rm K})=m$, and ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension at least $m +1$. For example, for ${\rm K}={\mathbb Q}(\sqrt{-5\cdot 13 \cdot 29 \cdot 61 \cdot 1049 \cdot 1301 \cdot 743})$, ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension at least $4$. \end{exem} \subsubsection{Relation with the $4$-rank of the Class group - Density estimations} The study of the $4$-rank of the class group of quadratic number fields started with the work of R\'edei \cite{Redei} (see also \cite{Redei-Reichardt}). Since then, many authors have contributed to its extensions, generalizations and applications. Let us cite an article of Lemmermeyer \cite{Lemmermeyer} where one can find a large literature about the question. See also a nice paper of Stevenhagen \cite{Stevenhagen}, and the work of Gerth \cite{Gerth} and Fouvry-Kl\"uners \cite{Fouvry-Klueners} concerning the density question.
\begin{defi} Let ${\rm K}$ be a number field. Define the $4$-rank ${\rm R}_{{\rm K},4}$ of ${\rm K}$ as follows: $${\rm R}_{{\rm K},4}:= \dim_{{\mathbb F}_2} {\rm Cl}_{\rm K}[4]/{\rm Cl}_{\rm K}[2],$$ where ${\rm Cl}_{\rm K}[m]=\{c \in {\rm Cl}_{\rm K}, \ c^m=1\}$. \end{defi} Let us keep the context and the notations of Section \ref{section:thecontext}: here ${\rm K}={\mathbb Q}(\sqrt{D})$ is an imaginary quadratic field of discriminant ${\rm disc}_{\rm K}$, with $D \in {\mathbb Z}_{<0}$ square-free. Denote by $\{q_1,\cdots, q_{n+1}\}$ the set of prime numbers that ramify in ${\rm K}/{\mathbb Q}$; $d_2 {\rm Cl}_{\rm K}=n$. Here we can take $q_i=p_i$ for $1\leq i \leq n$, and $q_{n+1}=p_{k+1}$ or $q_{n+1}=2$ according to the ramification at $2$. Then, consider the R\'edei matrix ${\mathcal M}_{\rm K}'=(m_{i,j})_{i,j}$ of size $(n+1)\times (n+1)$ with coefficients in ${\mathbb F}_2$, where $$m_{i,j}=\left\{\begin{array}{ll} \displaystyle{ \left(\frac{q_i^*}{q_j}\right)} & {\rm if \ } i\neq j, \\ \displaystyle{ \left(\frac{D_{q_i}}{q_i}\right)} & {\rm if \ } i=j. \end{array}\right.$$ It is not difficult to see that the sum of the rows is zero, hence the rank of ${\mathcal M}'_{\rm K}$ is at most $n$. \begin{theo}[R\'edei] \label{theorem:Redei} Let ${\rm K}$ be an imaginary quadratic number field. Then $ {\rm R}_{{\rm K},4}=d_2 {\rm Cl}_{\rm K}-{\rm rk}({\mathcal M}'_{\rm K})$. \end{theo} \begin{rema} The strategy of R\'edei is to construct, for every couple $(D_1,D_2)$ ``of second kind'', a degree $4$ cyclic unramified extension of ${\rm K}$. Here to be of second kind means that ${\rm disc}_{\rm K}=D_1D_2$, where the $D_i$ are fundamental discriminants such that $\left(\frac{D_1}{p_2}\right)= \left(\frac{D_2}{p_1}\right)=1$, for every prime $p_i|D_i$, $i=1,2$. And clearly, this condition corresponds exactly to the existence of orthogonal subspaces $W_i$ of the Kummer radical ${\mathcal V}$, $i=1,2$, generated by the $p_i^*$, for all $p_i|D_i$: ${\mathcal B}_{\rm K}(W_1,W_2)={\mathcal B}_{\rm K}(W_2,W_1)=\{0\}$. Such orthogonal subspaces allow us to construct totally isotropic subspaces. And then, the larger the $4$-rank of ${\rm Cl}_{\rm K}$, the larger $\nu({\rm K})$ must be. \end{rema} Consider now the matrix ${\mathcal M}''_{\rm K}$ obtained from ${\mathcal M}'_{\rm K}$ by removing the last row. Remark here that the matrix ${\mathcal M}_{\rm K}$ is a submatrix of the R\'edei matrix ${\mathcal M}''_{\rm K}$: $${\mathcal M}''_{\rm K}=\left(\begin{array}{cl} \begin{array}{|c|} \hline \\ {\mathcal M}_{\rm K} \\ \\ \hline \end{array} & \begin{array}{c} * \\ \vdots \\ * \end{array} \end{array} \right)$$ Hence, ${\rm rk}({\mathcal M}_{\rm K}) +1 \geq {\rm rk}({\mathcal M}'_{\rm K}) \geq {\rm rk}({\mathcal M}_{\rm K})$. Remark that in Example \ref{exemple-Boston}, ${\rm rk}({\mathcal M}_{\rm K})=3$ and ${\rm rk}({\mathcal M}'_{\rm K})=4$. But sometimes one has ${\rm rk}({\mathcal M}'_{\rm K})={\rm rk}({\mathcal M}_{\rm K})$, as for example: \begin{enumerate} \item[$(A)$:] when $p_0^*=1$ (the number of primes $p_i \equiv 3 \pmod 4$ is odd); \item[$(B)$:] or, when ${\mathcal B}_{\rm K}$ is non-degenerate. \end{enumerate} For situation $(A)$, it suffices to note that the sum of the columns is zero (thanks to the properties of the Legendre symbol). \medskip From now on we follow the work of Gerth \cite{Gerth}. Denote by ${\mathcal F}$ the set of imaginary quadratic number fields.
For $0 \leq r \leq n$ and $X\geq 0$, put $${\rm S}_{X}=\big\{ {\rm K} \in {\mathcal F}, \ |{\rm disc}_{\rm K}|\leq X \big\},$$ $${\rm S}_{n,X}=\big\{{\rm K}\in {\rm S}_{X}, \ d_2{\rm Cl}_{\rm K}=n \big\}, \ \ {\rm S}_{n,r,X}=\big\{{\rm K} \in {\rm S}_{n,X}, \ {\rm R}_{{\rm K},4}=r \big\}.$$ Denote also $${\rm A}_{X}=\big\{{\rm K}\in {\rm S}_X, {\rm \ satisfying } \ (A) \big\}$$ $$ {\rm A}_{n,X}=\big\{ {\rm K} \in {\rm A}_{X}, \ d_2 {\rm Cl}_{\rm K}=n\big\}, \ \ {\rm A}_{n,r,X}=\big\{ {\rm K} \in {\rm A}_{n,X}, \ {\rm R}_{{\rm K},4}=r\big\}.$$ One has the following density theorem due to Gerth: \begin{theo}[Gerth \cite{Gerth}] The limits $\displaystyle{\lim_{X\rightarrow \infty} \frac{|{\rm A}_{n,r,X}|}{|{\rm A}_{n,X}|} }$ and $\displaystyle{\lim_{X\rightarrow \infty} \frac{|{\rm S}_{n,r,X}|}{|{\rm S}_{n,X}|} }$ exist and are equal. Denote by $d_{n,r}$ this quantity. Then $d_{n,r}$ is explicit and $$d_{\infty,r}:=\lim_{n \rightarrow \infty} d_{n,r}= \frac{2^{-r^2} \prod_{k=r+1}^\infty(1-2^{-k})}{\prod_{k=1}^r(1-2^{-k})}.$$ \end{theo} \medskip Recall also the following quantities introduced at the beginning of our work: $${\rm FM}_{n,X}^{(d)}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > d\},$$ $$ {\rm FM}_{n,X}:=\{ {\rm K} \in {\rm S}_{n,X}, \ {\rm Conjecture \ \ref{conj2} \ holds \ for \ }{\rm K}\},$$ and the limits: $${\rm FM}_{n}:=\liminf_{X \rightarrow + \infty} \frac{\# {\rm FM}_{n,X}}{\#{\rm S}_{n,X}}, \ \ \ {\rm FM}_n^{(d)}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm FM}_{n,X}^{(d)}}{\#{\rm S}_{n,X}}.$$ After combining all our observations, we obtain (see also Corollary \ref{coro-intro1}): \begin{coro} For $d\leq n$, one has $${\rm FM}_n^{(d)} \geq d_{n,0}+d_{n,1}+\cdots + d_{n,2d-n-1}.$$ In particular: \begin{enumerate} \item[$(i)$] ${\rm FM}_{3} \geq .992187$; \item[$(ii)$] ${\rm FM}_{4} \geq .874268$, ${\rm FM}_4^{(4)} \geq .999695$; \item[$(iii)$] ${\rm FM}_5 \geq .331299$, ${\rm FM}_5^{(4)} \geq .990624$, ${\rm FM}_5^{(5)} \geq .9999943$; \item[$(iv)$] ${\rm FM}_{6}^{(4)} \geq .867183$, ${\rm FM}_6^{(5)} \geq .999255$, ${\rm FM}_6^{(6)} \geq 1-5.2 \cdot 10^{-8}$; \item[$(v)$] for all $d\geq 3$, ${\rm FM}_d^{(1+d/2)} \geq .866364$, ${\rm FM}_d^{(2+d/2)} \geq .999953$. \end{enumerate} \end{coro} \begin{proof} As noted by Gerth in \cite{Gerth}, the dominating set in the density computation is the set ${\rm A}_{n,X}$ of imaginary quadratic number fields ${\rm K}={\mathbb Q}(\sqrt{D})$ satisfying $(A)$. But for ${\rm K}$ in ${\rm A}_{n,X}$, one has ${\rm rk}({\mathcal B}_{\rm K})={\rm rk}({\mathcal M}_{\rm K})= {n}-{\rm R}_{{\rm K},4}$. Hence for ${\rm K} \in {\rm A}_{n,r,X}$, by Proposition \ref{bound-nu} $$\nu({\rm K}) \leq n -\frac{1}{2} \big(n - {\rm R}_{{\rm K},4}\big)=\frac{1}{2}\big(n+ {\rm R}_{{\rm K},4}\big).$$ Now one uses Corollary \ref{coro-FM-quadratic}. Or equivalently, one sees that Conjecture \ref{conj2} holds when $3 > \frac{1}{2}\big( n + {\rm R}_{{\rm K},4}\big)$, {\em i.e.}, when ${\rm R}_{{\rm K},4} < 6-n$. More generally, ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d$ when ${\rm R}_{{\rm K},4} < 2d -n$.
In particular, $${\rm FM}_n^{(d)} \geq d_{n,0}+d_{n,1}+\cdots + d_{n,2d-n-1}.$$ Now one uses the estimates of Gerth in \cite{Gerth} to obtain: \begin{enumerate} \item[$(i)$] ${\rm FM}_{3} \geq d_{3,0}+d_{3,1}+d_{3,2} \simeq 0.992187 \cdots$ \item[$(ii)$] ${\rm FM}_4 \geq d_{4,0} + d_{4,1} \simeq 0.874268 \cdots $, ${\rm FM}_4^{(4)} \geq d_{4,0} + d_{4,1}+d_{4,2}+d_{4,3} \simeq .999695 \cdots$, \item[$(iii)$] $ {\rm FM}_5 \geq d_{5,0} \simeq 0.331299 \cdots $, ${\rm FM}_5^{(4)} \geq d_{5,0}+d_{5,1}+d_{5,2} \simeq .990624 \cdots$, ${\rm FM}_5^{(5)} \geq d_{5,0}+d_{5,1}+d_{5,2}+d_{5,3}+d_{5,4} \simeq .9999943 \cdots$, \item[$(iv)$] ${\rm FM}_6^{(4)} \geq d_{6,0}+d_{6,1} \simeq 0.867183 \cdots$, ${\rm FM}_6^{(5)} \geq d_{6,0}+d_{6,1}+d_{6,2}+d_{6,3} \simeq .999255 \cdots$, ${\rm FM}_6^{(6)} \geq 1- d_{6,6} \simeq 1-5.2 \cdot 10^{-8}$, \item[$(v)$] ${\rm FM}_d^{(1+d/2)} \geq d_{\infty,0}+ d_{\infty,1} \simeq .866364\cdots $, ${\rm FM}_d^{(2+d/2)} \geq d_{\infty,0}+d_{\infty,1}+d_{\infty,2}+d_{\infty,3} \simeq .999953 \cdots$. \end{enumerate} \end{proof} In the spirit of the Cohen-Lenstra heuristics, the work of Gerth has been improved by Fouvry-Kl\"uners \cite{Fouvry-Klueners}. This work allows us to give a more general density estimation, as announced in the Introduction. Recall $${\rm FM}_{X}^{[i]}:=\{{\rm K} \in {\rm S}_{X}, \ {\rm G}^{ur}_{\rm K}(2) {\rm \ has \ no \ uniform \ quotient \ of \ dimension} > i+ \frac{1}{2}d_2 {\rm Cl}_{\rm K}\}$$ and $${\rm FM}^{[i]}:= \liminf_{X\rightarrow + \infty} \frac{\# {\rm FM}^{[i]}_X}{\# {\rm S}_{X}}.$$ Our work allows us to obtain (see Corollary \ref{coro-intro2}): \begin{coro} For $i\geq 1$, one has: $${\rm FM}^{[i]} \geq d_{\infty,0}+ d_{\infty,1}+ \cdots + d_{\infty,2i-2} .$$ In particular, $${\rm FM}^{[1]} \geq 0.288788, \ \ {\rm FM}^{[2]} \geq 0.994714, \ \ {\rm and } \ \ {\rm FM}^{[3]} \geq 1-9.7\cdot 10^{-8}.$$ \end{coro} \begin{proof} By Fouvry-Kl\"uners \cite{Fouvry-Klueners}, the density of imaginary quadratic fields for which ${\rm R}_{{\rm K},4}=r$ is equal to $d_{\infty,r}$. Recall that for ${\rm K} \in {\mathcal F}$ one has ${\rm rk}({\mathcal M}_{\rm K}) \geq {\rm rk}({\mathcal M}'_{\rm K}) -1 $. Then thanks to Proposition \ref{bound-nu} and Theorem \ref{theorem:Redei}, we get $$\nu({\rm K}) \leq \frac{1}{2} d_2 {\rm Cl}_{\rm K} + \frac{1}{2} +\frac{1}{2} {\rm R}_{{\rm K},4}.$$ Putting this fact together with Theorem \ref{maintheorem}, we obtain that ${\rm G}^{ur}_{\rm K}(2)$ has no uniform quotient of dimension $d > \frac{1}{2} d_2 {\rm Cl}_{\rm K} + \frac{1}{2} +\frac{1}{2} {\rm R}_{{\rm K},4}$. Then for $i\geq 1$, the proportion of the fields ${\rm K}$ in ${\rm FM}^{[i]}$ is at least the proportion of ${\rm K} \in {\mathcal F}$ for which ${\rm R}_{{\rm K},4} < 2i -1$, hence at least $d_{\infty,0}+ d_{\infty,1}+ \cdots + d_{\infty,2i-2}$ by \cite{Fouvry-Klueners}. To conclude: $${\rm FM}^{[1]} \geq d_{\infty,0} \simeq 0.288788\cdots$$ $$ {\rm FM}^{[2]} \geq d_{\infty,0} + d_{\infty,1}+ d_{\infty,2} \simeq 0.994714\cdots$$ $$ {\rm FM}^{[3]} \geq d_{\infty,0} + d_{\infty,1}+d_{\infty,2} +d_{\infty,3} +d_{\infty,4} \simeq 1-9.7\cdot 10^{-8}.$$ \end{proof}
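As a sanity check on these constants, the densities $d_{\infty,r}$ and the partial sums appearing in the proof can be evaluated with a few lines of Python (an illustration only; the truncation point of the infinite product is our choice):

\begin{verbatim}
from fractions import Fraction

def d_inf(r, K=60):
    # d_{infty,r} = 2^{-r^2} prod_{k>r}(1-2^{-k}) / prod_{k<=r}(1-2^{-k}),
    # with the infinite product truncated at k = K
    d = Fraction(1, 2 ** (r * r))
    for k in range(r + 1, K + 1):
        d *= 1 - Fraction(1, 2 ** k)
    for k in range(1, r + 1):
        d /= 1 - Fraction(1, 2 ** k)
    return float(d)

# Lower bounds FM^{[i]} >= d_{infty,0} + ... + d_{infty,2i-2}:
for i in (1, 2, 3):
    print(i, sum(d_inf(r) for r in range(2 * i - 1)))
# -> 0.288788..., 0.994714..., 0.999999903 (= 1 - 9.7e-8)
\end{verbatim}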
\section{Introduction} \label{intro} T Tauri stars (TTS) are by definition variable stars, named after their prototype T Tau. Variability in this class of objects is ubiquitous and has inspired a long history of study \citep[e.g.,][]{joy45, joy49, rydgren76}. The majority of more recent studies have focused on their optical and near-infrared (NIR) emission. \citet{carpenter01} found significant variability of TTS in Orion A based on near-IR JHK 2MASS photometry. The typical timescale of variation in these TTS was on the order of days and could be explained by cool spots, hot spots, extinction, and$/$or changes in the mass accretion rate onto the star. Work by \citet{eiroa02} further supported these results. For many TTS in that study the optical and JHK photometry varied simultaneously, supporting that in most objects variability is due to star spots and variable extinction. However, for several of the objects in that sample there is no correlation between the optical and NIR. This points to structural changes in the disk, which begins to dominate the SED in the K-band, on the order of days. With the arrival of the {\it Spitzer Space Telescope} \citep{werner04}, more detailed studies probing the mid-IR have been possible, particularly with the Infrared Spectrograph \citep[IRS;][]{houck04}, which provides simultaneous wavelength coverage between $\sim$5 and 38~{$\mu$m}. Variations in the shape and size of the 10~{$\mu$m} silicate emission feature have been seen in DG Tau and XZ Tau \citep{bary09}, and in LRLL 31 \citep{muzerolle09}. In the case of EX Lupi, transient changes in the dust composition of the disk have been detected with multi-epoch spectra of the silicate emission feature \citep{abraham09}. Flux changes in the IRS spectra of disks have also been observed. \citet{muzerolle09} found substantial variability in LRLL 31, located in IC 348, on timescales down to days. The flux of this object oscillated around a pivot point at 8.5~{$\mu$m} -- as the emission decreased at wavelengths shortwards of the pivot point, the emission increased at longer wavelengths and vice versa. The star spots proposed to explain variability at shorter wavelengths could change the irradiation heating, but this would cause an overall increase or decrease of the flux, not an anti-correlation between the flux centered at the pivot point. \citet{muzerolle09} proposed that the observed ``seesaw'' variability was due to dynamical changes in the disk itself. In particular, changes in the height of the inner disk edge or wall located at the dust destruction radius (which emits primarily in the NIR) could lead to variable shadowing of the outer disk material (which emits at longer wavelengths). Indeed, the overall characteristics of the flux changes observed can be explained by models of a disk with an inner warp that leads the scale height of the inner disk to change with time \citep{flaherty10}. It is important to note that the above-mentioned object, LRLL 31, is surrounded by a transitional disk. Transitional disks have nearly photospheric near-IR (1--5~{$\mu$m}) emission and mid-IR (5--20~{$\mu$m}) emission below the median excess of Class II objects, coupled with substantial emission above the stellar photosphere at wavelengths beyond $\sim$20~{$\mu$m} \citep{strom89}. This dip in the infrared flux has been attributed to a central ``hole'' in the dust distribution of the disk.
This has been inferred from detailed modeling of some of these transitional objects \citep{calvet02,calvet05,espaillat07a,espaillat08b} and confirmed with sub-millimeter and millimeter interferometric imaging \citep{hughes07,hughes09,andrews09}. Motivated by the variability observed in the transitional disk of LRLL~31, we conducted a {\it Spitzer} IRS variability study of transitional disks and pre-transitional disks. Pre-transitional disks have significant near-infrared excesses relative to their stellar photospheres, similar to the median spectral energy distribution of disks in Taurus \citep{dalessio99}, while also exhibiting the characteristics seen in transitional disks (i.e., deficits of mid-infrared flux and substantial excesses beyond $\sim$20~{$\mu$m}). This indicates a gapped disk structure where the inner disk is separated from the outer disk. Sub-millimeter and millimeter interferometric imaging \citep[Andrews et al., in prep;][]{andrews09, pietu06} has confirmed the location of the wall of the outer disk inferred from SED modeling for a few pre-transitional disks (e.g., LkCa~15, UX~Tau~A, Rox 44). The near-IR emission of these objects is due to dust in an optically thick inner disk, a result obtained by using the ``veiling'' \citep{hartigan89} of near-infrared spectra \citep{espaillat08a, espaillat10}. Here we perform detailed modeling of the broad-band spectral energy distributions of the 14 transitional and pre-transitional disks in our sample at different epochs. We take into account the effect of shadowing by the inner disk on the outer disk by employing the irradiated accretion disk models of \citet{dalessio06} with the modifications to include shadowing presented in \citet{espaillat10}. Our work adds to the number of detailed modeling efforts of disk variability in the literature which have found some success in reproducing the observations by varying the height of the inner disk wall \citep{juhasz07, sitko08}. The study by \citet{juhasz07} of the UX Ori-type star SV Cep came to the conclusion that the variability in the optical and near-IR emission could be explained by changing the height of the inner disk edge, but they were unable to simultaneously fit the variability from the IR out to 100~{$\mu$m} using a self-shadowed disk. \citet{sitko08} reported the variability of two Herbig Ae stars between 1--5~{$\mu$m}, the region dominated by the inner disk wall, and could explain this change by varying the height of the inner disk edge. However, this work did not have data at longer wavelengths and so could not test if these models fit the emission from the outer disk. The advantage of our study over previous works is the quality of the data and the simultaneous wavelength coverage (5--38~{$\mu$m}) provided by {\it Spitzer} IRS. Thus we are able to present the largest and most detailed modeling study of variability in disks around TTS to date. We find that we can explain the variability of most of the pre-transitional disks in the sample by changing the height of the inner disk wall and thus the extent of its shadow on the outer disk, thereby affecting the resulting emission from the outer disk. We also find that the objects in the sample with the largest amounts of crystalline silicates in their disks exhibit variability on the shortest timescales observed in this study.
\section{Sample Selection} \label{sample} Our sample of 14 objects was chosen to include bright transitional and pre-transitional disks in Taurus and Chamaeleon, two nearby 1--2 Myr old star-forming regions with low extinction that were well covered by a {\it Spitzer} guaranteed-time observing (GTO) program \citep[Manoj et al., submitted;][]{furlan06, kim09}. For each of our 14 objects we obtained a pair of general observer (GO) observations taken within 1 week of each other. When combined with the GTO data, these observations give us baselines of $\sim$3--4 yr and $\sim$1 wk. We included the well-studied and previously modeled transitional disks CS~Cha, DM~Tau, and GM~Aur \citep{espaillat07a,calvet05} and the pre-transitional disks LkCa~15 and UX~Tau~A \citep{espaillat07b}. We also included other transitional and pre-transitional disks in Chamaeleon (T25, T35, T56, SZ~Cha) which had been identified by \citet{kim09}. Four objects in our sample were chosen based on analysis conducted in \citet{furlan09}. In that work, we compared the observed equivalent width of the 10~{$\mu$m} silicate emission feature and the SED slope between 13 and 31~{$\mu$m} against a grid of disk models. RY~Tau, IP~Tau, CR~Cha, and WW~Cha were four of the many objects that fell outside of the range of EW(10~{$\mu$m}) and n$_{13-31}$ covered by the full disk model grid. One explanation proposed by \citet{furlan09} and \citet{espaillat09} to explain these outliers was that these disks are actually pre-transitional disks with smaller gaps than had been previously observed. Therefore, we included these objects in the sample based on their potential for being pre-transitional disks. ISO~52 is the only object in our sample which could be explained by the full disk model grid presented in \citet{furlan09}. However, we chose to include this object in our variability study because qualitatively its GTO IRS spectrum resembled what would be expected from a pre-transitional disk with a small gap \citep[see Figure 12 in][]{espaillat10}. Except for CS~Cha, the objects in our sample are thought to be single stars, and so the holes and gaps in our objects indicate the likely presence of planets \citep[see discussion in][]{espaillat08a,espaillat10}.\footnote{Even in the case of the spectroscopic binary CS~Cha, the 38~AU hole is too large to be cleared out by the binary system alone. A binary system with a circular orbit can clear a hole twice the size of the semi-major axis; a binary with an eccentricity of 0.8 can clear out a region 3.5 times the semi-major axis \citep{artymowicz94,pichardo05,aguilar08}. Since the separation of the binary in CS~Cha is $\sim$4~AU \citep{guenther07}, the binary should clear only a region up to 14~AU.} \section{Data Reduction} \label{redux} \subsection{Observations} Here we present three {\it Spitzer} IRS spectra for each of our targets (Table~\ref{tab:log}). The first spectra were obtained through IRS GTO in Program 2 (PI: Houck) and have been previously presented elsewhere \citep[Manoj et al., submitted;][]{furlan06,furlan09, kim09}. The last two spectra for each target were obtained in GO Program 50403 (PI: Calvet). For consistency, we have re-reduced the GTO data in the same way we reduce the GO data (see {\S}~\ref{datredux}). We note that we also searched the {\it Spitzer} archive for all other IRS observations of objects in our sample. We reduced those data as we did our GTO and GO data and comment further on these additional spectra in the Appendix.
All of the GO observations were performed in staring mode using the low-resolution modules (Short-Low (SL) and Long-Low (LL)) of IRS, spanning wavelengths from 5--14~{$\mu$m} and 14--38~{$\mu$m}, respectively, with a resolution $\lambda/\delta\lambda\sim$90. The Chamaeleon GTO spectra were obtained in staring mode as well, while the Taurus GTO spectra were obtained in mapping mode with 2$\times$3 step maps (2 parallel and 3 perpendicular to the slit) on the target. Most of the GTO spectra use the SLLL configuration. The exceptions are RY~Tau, CR~Cha, and WW~Cha, which were taken with the SL module and the high-resolution modules Short-High (SH) and Long-High (LH), which cover 10--19~{$\mu$m} and 19--37~{$\mu$m}, with $\lambda/\delta\lambda\sim$600. We note that in the case of DM~Tau, two GTO spectra were obtained in Program 2 and are listed in Table~\ref{tab:log} as GTO1 and GTO2. Throughout this paper, we only show the GTO2 spectrum since it was taken in staring mode, as were our GO observations. In addition, the SL spectrum has a higher SNR in the GTO2 observation due to a longer integration time. We note that the GTO1 spectrum is equivalent to the GTO2 spectrum (i.e., in flux and shape) within the uncertainties of the observations. \subsection{Extraction and Calibration of Spectra} \label{datredux} Details on the observational techniques and general data reduction steps, including bad pixel identification, sky subtraction, and flux calibration, can be found in \citet{furlan06} and \citet{watson09}. Here we provide a brief summary. Each object was observed twice along the slit, at a third of the slit length from the top and bottom edges of the slit. Basic calibrated data (BCD) with pipeline version S18.7 for both the GTO and GO observations were obtained from the {\it Spitzer} Science Center. With the BCDs, we extracted and calibrated the spectra using the SMART package \citep{higdon04}. Bad and rogue pixels were corrected by interpolating from neighboring pixels. For the low-resolution spectra, the data were sky subtracted using the sky emission in the off-target nods, except in the cases of CS~Cha GO1, IP~Tau GO1, and LkCa~15 GO2, where the off-target orders were used in order to minimize over-subtraction of H~I from the sky background at $\sim$20~{$\mu$m}. For the GTO SH and LH modules, no background subtraction was performed, and so the emission of these targets could be slightly overestimated, particularly in LH since the slit is larger. However, the targets are much brighter than the background, and so the emission from the disks should clearly dominate in the mid-IR. After sky subtraction, the low-resolution spectra were extracted from the 2D spectral images using a tapered column which varies with the width of the IRS point-spread function. For the SH and LH modules, a full slit extraction was performed. To flux calibrate the observations we used spectra of $\alpha$ Lac (A1 V) for the low-resolution modules and $\xi$ Dra (K2 III) for the high-resolution modules. We performed a nod-by-nod division of the target spectra and the $\alpha$ Lac or $\xi$ Dra spectra and then multiplied the result by a template spectrum \citep{cohen03}. The final spectrum was produced by averaging the calibrated spectra from the two nods. For the 2$\times$3 maps, only the central map positions were used for the final spectrum. The high-resolution data were rebinned to the same sampling as the low-resolution data.
Our spectrophotometric accuracy is 2--5$\%$, estimated from half the difference between the nodded observations. We note that there are artifacts in the T25 GO1, ISO~52 GO1, T56 GO1, and CR~Cha GTO spectra at $\sim$9~{$\mu$m}, $\sim$7~{$\mu$m}, $\sim$15~{$\mu$m}, and $>$35~{$\mu$m}, respectively. These spikes are due to additional bad pixels not captured by the bad pixel and rogue masks used in our data reduction. For clarity, we manually mask these artifacts from the spectra. The final spectra used in this study are shown in Figures~\ref{figirsptd1},~\ref{figirsptd2},~and~\ref{figirstd}. \subsection{Uncertainties due to Mispointing} \label{mispoint} Apparent variability could also be the effect of mispointings, which would cause a loss in flux, especially in SL, the module with the narrowest slit. PCRS peak-up, which we used for our GO observations, yields a pointing accuracy of 0.4$^\prime$$^\prime$ (1-sigma radial rms); this is just slightly smaller than the uncertainty for blind pointing (0.5$^\prime$$^\prime$; however, note that earlier in the mission the blind pointing of {\it Spitzer} was only accurate to $<$1$^\prime$$^\prime$). According to a detailed study in \citet{swain08}, a 0.4$^\prime$$^\prime$ pointing offset in SL1 causes a drop in flux of about 7$\%$, with a slight (1--2$\%$) dependence on wavelength. Therefore, if we observe a flux variation $\geq$10$\%$ that fluctuates with wavelength more than on the few percent level, we can most likely exclude mispointing as the cause of the variability. The GTO observations did not use peak-ups. However, the GTO data of our Chamaeleon objects were taken later in the mission, when the blind pointing of {\it Spitzer} was improved. The observations in our sample that could be affected by some mispointing are the Taurus GTO observations, taken in 2004 February. They were obtained with blind pointing in mapping mode; the spectra presented here are extracted from the central map positions. For IP~Tau, DM~Tau GTO1, and LkCa~15, the SL and LL spectra match at 14~{$\mu$m}, which argues against mispointing; since the SL and LL slits are perpendicular to each other, mispointings should affect one module more than the other, resulting in an offset. There were small (5$\%$) offsets between SL and LL for DM~Tau GTO2, GM~Aur, and UX~Tau~A, with SL lower than LL, suggesting small mispointings. RY~Tau likely suffered from larger mispointing, since SH is lower than SL and LH by 20--30$\%$ (the mispointing relative to SL is more difficult to determine, given that part of SL1 in the GO spectra is saturated); also, compared to the GO data, the SL spectrum appears low. Overall, the GTO spectrum of RY~Tau is quite uncertain, but the other GTO observations should have a pointing accuracy of 0.5$^\prime$$^\prime$ or better. In order to account for the mispointing discussed here, we scale the SL spectra upward so that the SL and LL spectra match. We note that the high-resolution modules SH and LH are much less affected by small mispointings, since the slits are wider than for the low-resolution modules. We note that none of the other GTO or GO observations were mispointed. Aside from those observations discussed above, there is no mismatch between SL and LL, so we can be confident that the observations are well pointed and that therefore both the SL and LL slits contain the full flux of the object.
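For concreteness, this scaling step can be sketched as follows (a minimal Python illustration rather than a description of our actual pipeline; the overlap window, the use of the median, and the array names are our assumptions):

\begin{verbatim}
import numpy as np

def scale_sl_to_ll(wave_sl, flux_sl, wave_ll, flux_ll, lo=13.5, hi=14.5):
    """Scale an SL spectrum so that it matches LL in the overlap
    region around 14 um. Wavelengths in um, fluxes in Jy."""
    in_sl = (wave_sl >= lo) & (wave_sl <= hi)
    in_ll = (wave_ll >= lo) & (wave_ll <= hi)
    factor = np.median(flux_ll[in_ll]) / np.median(flux_sl[in_sl])
    return flux_sl * factor, factor
\end{verbatim}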
As an additional check, we find that the sources are located within $\sim$1$^\prime$$^\prime$ of each other in the SL and LL slits amongst the GTO, GO1, and GO2 observations, further evidence that these sources were well-centered in the slit. We note that at $\sim$14~{$\mu$m} the SL module ends and the LL module begins. Given that several of our objects have large holes and gaps, the emission of the outer wall begins to dominate at 15--20~{$\mu$m}, coinciding with where SL ends and LL starts. We also overlaid the IRS slit positions for each AOR on 2MASS K-band images to check for anomalous behavior. In WW~Cha GO1 and GO2 both of the modules were mispointed due to an error in the coordinates (the declination was off by $\sim$1$^\prime$$^\prime$). Judging from the overlay, the LL module was more off-target than the SL module and the mispointing is more along the spatial direction of the SL slit, but it is difficult to tell how much flux was lost. We note that in the GO observations of SZ~Cha, two faint objects entered the SL and LL slits at 5.3$^\prime$$^\prime$ and 12.5$^\prime$$^\prime$ from the target. However, it is known that these two objects are not members of Cha I and are likely very faint at mid-IR wavelengths \citep{luhman07}. Therefore, SZ~Cha dominates the emission in the GO spectra. In the GTO observations of ISO~52 a faint object is present in the LL slit 8.5$^\prime$$^\prime$ from the target. Due to the different orientation of the IRS slit positions in the GO observations, this object is not present in the LL slit in this epoch. However, this object is much fainter than ISO~52, with a magnitude of about 14 in the 3.6 and 4.5~{$\mu$m} IRAC bands (K. Luhman, 2010, private communication), and so we can conclude that ISO~52 dominates the emission seen in its GTO spectrum. \section{Analysis} \label{sec:ana} \subsection{Flux Variability} \label{sec:var} It is evident from the IRS spectra of our pre-transitional (Figures~\ref{figirsptd1} and~\ref{figirsptd2}) and transitional objects (Figure~\ref{figirstd}) that there is some variability in their fluxes. Here we quantitatively discern if there is true variability in the sample and then qualitatively discuss the overall behavior of the variability that is present. Figures~\ref{figptd1err}--\ref{fignovar2} illustrate the change in flux seen in each object in our sample between different epochs. The differences in flux between the GTO and GO1 spectra are shown in Figures~\ref{figptd1err}--\ref{figtderr}. (Note: the same analysis for UX~Tau~A and T35 is plotted in Figure~\ref{fig1wkerr}.) The difference in flux ($\delta$F$_{\lambda}$) is plotted in terms of the percentage difference in emission between the GTO and GO1 data relative to the GTO data. The error bars in the figures correspond to the uncertainties in the observations. Except for the cases of DM~Tau and T25 (where $\delta$F$_{\lambda}\sim$0), we observe significant variability outside of the observational uncertainties in each of the targets between the GTO and GO1 observations, which were taken more than a year apart. We performed the same analysis comparing the GO1 and GO2 spectra, which were taken about one week apart, and we only see variability on these timescales in four objects: UX~Tau~A (Figure~\ref{figptd1err}), ISO~52 and T56 (Figure~\ref{fig1wkerr}), and T35 (Figure~\ref{figptd1err}). In Figures~\ref{fignovar1} and~\ref{fignovar2} we show that the rest of the objects in our sample do not vary significantly between the GO1 and GO2 epochs.
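For reference, the quantity $\delta$F$_{\lambda}$ and its uncertainty can be computed as in the following minimal Python sketch (the spectra are assumed to be resampled onto a common wavelength grid; the function and variable names are hypothetical):

\begin{verbatim}
import numpy as np

def percent_diff(f_gto, sig_gto, f_go1, sig_go1):
    """delta F_lambda: percentage flux difference between the GO1 and
    GTO epochs, relative to GTO, with propagated uncertainties."""
    dF = 100.0 * (f_go1 - f_gto) / f_gto
    sig = 100.0 * np.sqrt((sig_go1 / f_gto) ** 2
                          + (f_go1 * sig_gto / f_gto ** 2) ** 2)
    return dF, sig   # variability is significant where |dF| >> sig
\end{verbatim}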
In many of the pre-transitional targets, the flux clearly oscillates around a pivot wavelength (Figure~\ref{figptd1err}). As the short-wavelength emission decreases, the emission at longer wavelengths increases, and as the short-wavelength emission increases, the emission at longer wavelengths decreases. The spectra of LkCa~15 can be taken as representative of this group of objects. In Figure~\ref{figptd1err} the flux of its GO1 spectrum is $\sim$10$\%$ lower than that of the GTO spectrum at wavelengths $<$8~{$\mu$m}. The fluxes of the two spectra are the same around $\sim$10~{$\mu$m}, but the GO1 spectrum is higher beyond that point. In the other objects mentioned above, the overall behavior is similar, while the pivot wavelength and magnitude of the flux change can vary from object to object. UX~Tau~A does not display this seesaw behavior between the GTO and GO1 spectra (Figure~\ref{fig1wkerr}), but it does between the GO1 and GO2 spectra (Figure~\ref{figptd1err}). We point out that while the RY~Tau GTO and WW~Cha GO observations were mispointed, that should have resulted in a decrease of flux at all wavelengths relative to the GO and GTO observations, respectively. However, we see flux losses at some wavelengths and flux gains at others, indicating that there is true variability in these objects; we cannot, however, accurately constrain the difference in the flux or the pivot wavelength due to the mispointing. Whether or not this seesaw behavior is present in the other pre-transitional objects in the sample is unclear (Figure~\ref{figptd2err}). The GO1 spectrum of CR~Cha has less emission than the GTO spectrum shortwards of $\sim$6~{$\mu$m}, but substantially more at longer wavelengths. Due to the artifacts in the GTO spectrum $>$35~{$\mu$m} discussed in {\S}~\ref{redux}, we cannot confidently tell whether or not the GTO and GO spectra agree at these wavelengths. In IP~Tau the flux at $\lambda$ $<$7~{$\mu$m} is about the same, but it is lower for the GO1 spectrum at longer wavelengths. For T56, the emission in the GO1 spectrum is higher than in the GTO spectrum beyond $\sim$20~{$\mu$m}. It appears that the flux in the GO1 spectrum is lower at $<$20~{$\mu$m}, but the SNR is too poor at $<$7~{$\mu$m} to tell if this holds at the shortest wavelengths. In T35, when comparing the GO1 and GO2 spectra, the flux is the same at $<$7~{$\mu$m}, but the GO2 spectrum has less emission beyond that. Again, the spread in uncertainties is large in T35 because of the poorer SNR. The behavior observed in the transitional disks is displayed in Figure~\ref{figtderr}. GM~Aur has the seesaw behavior seen in LkCa~15, with the pivot at $\lambda$$\sim$18~{$\mu$m}. In CS~Cha, only the flux of the 10~{$\mu$m} silicate emission feature changes substantially between the GTO and GO1 spectra. DM~Tau and T25 have no discernible variability. The four objects in our sample which vary on 1~wk timescales display behavior that could be classified as seesaw-like, as already described above, but they also exhibit additional behavior (Figure~\ref{fig1wkerr}). UX~Tau~A's GO1 spectrum is weaker at all wavelengths relative to the GTO spectrum, indicating that the emission of this object has decreased with time. The spread in uncertainties is large in ISO~52, T56, and T35 because of poor SNR; however, it appears that the ISO~52 and T56 spectra diverge beyond $\sim$20~{$\mu$m} between the GO1 and GO2 epochs and that the T35 spectra diverge shortwards of $\sim$10~{$\mu$m} between the GTO and GO1 epochs.
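The pivot wavelengths quoted above can be located with a simple sign-change search on $\delta$F$_{\lambda}$; the helper below is a hypothetical illustration, not part of our analysis code:
\begin{verbatim}
import numpy as np

def pivot_wavelength(wav, delta):
    """Wavelength at which the flux difference changes sign, found
    by linear interpolation between the two bracketing bins; returns
    None when the two spectra show no seesaw crossing."""
    sign = np.sign(delta)
    crossings = np.where(sign[:-1] * sign[1:] < 0)[0]
    if crossings.size == 0:
        return None
    i = crossings[0]
    return wav[i] - delta[i] * (wav[i + 1] - wav[i]) / (delta[i + 1] - delta[i])
\end{verbatim}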
\subsection{Disk Model} \label{sec:mod} Using disk models, we attempt to reproduce the SED variability observed in Figures~\ref{figirsptd1}--\ref{figirstd}. The models used here are those of \citet{dalessio98,dalessio99,dalessio01,dalessio05,dalessio06}. We refer the reader to those papers for details of the models and to \citet{espaillat10} for a summary of how we fit the SEDs of pre-transitional and transitional disks in particular. A full disk model has an irradiated accretion disk with a sharp transition at the dust sublimation radius. We model this transition as a frontally illuminated wall, which dominates the near-IR emission. Pre-transitional disks have an inner disk separated from an outer disk by a gap. The optically thick inner disk also has a sharp transition at the dust sublimation radius, as seen in full disks, which we model as an inner wall. In the subsequent modeling analysis, we do not include the contribution to the SED from the inner disk behind this wall, since previous work has shown that the inner wall dominates the emission at these shorter wavelengths. There is another wall located where the outer disk is inwardly truncated (i.e., the outer edge of the gap), and this outer wall dominates the SED emission from $\sim$20--30~{$\mu$m}. Behind this wall, there is an outer disk which dominates the emission beyond $\sim$40~{$\mu$m}. Since transitional disks have holes in their disks, they do not have the inner wall seen in pre-transitional disks. When modeling transitional disks, we include the outer wall and outer disk described above. In both pre-transitional and transitional disks, the gap or hole sometimes contains a small amount of optically thin dust which dominates the contribution to the 10~{$\mu$m} silicate emission feature. We calculate the emission from this optically thin dust region following \citet{calvet02}. We note that in the case of the pre-transitional disks, the inner optically thick disk will cast a shadow on the outer disk, and here we include the effect of this shadowing on the outer wall following \citet{espaillat10}. In short, since the star is a finite source, there are both a penumbra and an umbra on the outer wall. In the umbra, the wall is not illuminated; in the penumbra, the wall is partially illuminated. Above the penumbra, the wall is fully illuminated. Refer to the Appendix of \citet{espaillat10} for more details. \subsubsection{Stellar Properties} Table~\ref{tab:stellar} lists the stellar properties of our sample which are relevant for the disk model. We note that the stellar properties of our objects are based on optical and near-infrared data which are not contemporaneous with the IRS spectra analyzed in this work. If the star's properties change over time, this can result in uncertainties in the input stellar parameters and hence in the disk properties derived here. Spectral types for our objects are from the literature, and the temperature corresponding to the spectral type listed in Table~\ref{tab:stellar} was taken from \citet{kh95}. The stellar properties of our sample (L$_{*}$, M$_{*}$, R$_{*}$) are from the HR diagram and \citet{siess00} tracks. When U-band photometry was available, the mass accretion rates were derived in this work using U-band data and the relation in \citet{gullbring98}. Extinction corrections were made by matching V-, R-, I-band, and 2MASS photometry to photospheric colors from \citet{kh95}. The spectra were dereddened with the \citet{mathis90} dereddening law.
The distance adopted for Taurus is 140~pc \citep{bertout99} and that for Chamaeleon is 160~pc \citep{whittet97}. \subsubsection{Disk Properties} Table~\ref{tab:disk} lists the disk properties of our sample. When parameters are specific to only one epoch, this distinction is made in the table (see table footnote). We assume that the inclination of the disk is 60 degrees, unless a measurement could be found in the literature. $T_{wall}$ is the temperature at the surface of the optically thin wall atmosphere. The temperature of the inner wall (T$_{wall}^i$) is typically held fixed at 1400~K (except in the cases of UX~Tau~A and T35, which will be addressed in the Appendix). The temperature of the outer wall (T$_{wall}^o$) is varied to best fit the SED. The radius of the wall ($R_{wall}$) is derived from $T_{wall}$ following Equation 2 in \citet{espaillat10}. The heights of the walls (z$_{wall}$) and the maximum grain sizes (a$_{max}$) are adjusted to fit the SED. The parameters of the outer disk are also varied to fit the SED. These include the viscosity parameter ($\alpha$) and the settling parameter ($\epsilon$; i.e., the dust-to-gas mass ratio in the upper disk layers relative to the standard dust-to-gas mass ratio). M$_{disk}$ is calculated according to Equation 38 in \citet{dalessio98} and is proportional to ${\hbox{$\dot M$}}/{\alpha}$. We adopt an outer disk radius of 300~AU for all of our disks. \subsubsection{Dust Opacities} \label{sec:moddustopa} As discussed in \citet{espaillat10}, the opacity of the disk has important consequences for the resulting SED. The opacity is affected by the sizes of the dust grains and the composition of the dust used. The grain size distribution used in the models follows the form $a^{-3.5}$, where $a$ varies between $a_{min}$ and $a_{max}$ \citep{mathis77}. We assume the grains are spherical and note that, while irregularly shaped grains may have different opacities from spherical grains \citep{min07}, it is outside the scope of this work to constrain the shape of the dust grains. Throughout the disk, $a_{min}$ is held fixed at 0.005~{$\mu$m}. In the walls, $a_{max}$ is varied to achieve the best fit to the SED. We try maximum grain sizes between 0.25~{$\mu$m} and 10~{$\mu$m}. The wall emission is primarily optically thick, but it also has an optically thin component from the wall atmosphere which contributes to the silicate emission features. Smaller grain sizes lead to a strong, narrow 10~{$\mu$m} silicate emission feature, while larger grain sizes produce wider and less prominent emission features \citep[see Figure 3 in][]{espaillat07a}. In the outer disk, there are two dust grain size distributions in order to simulate dust growth and settling \citep[see][for more details]{dalessio06}. In the upper disk layers, $a_{max}$=0.25~{$\mu$m}, and in the disk midplane the maximum grain size is 1~mm \citep{dalessio06}. For the dust composition of the inner wall we follow \citet{dalessio05} and \citet{espaillat10} in adopting silicates with a dust-to-gas mass ratio (${\zeta}_{sil}$) of 0.0034. We note that only silicates are assumed to exist at the high temperatures at which the inner wall is located. Other types of dust, such as metallic iron, can also exist at high temperatures \citep{pollack94}; however, here we adopt a dust composition consistent with the one proposed by \citet{pollack94} for accretion disks. We perform a more detailed dust composition fit for the silicates in the outer wall and disk than done in our previous works.
The motivation is that for a variability study trying to trace small changes in the flux, it is important to isolate the continuum emission of the disk. By fitting the silicate dust features seen in the IRS spectrum as closely as possible, one can then more clearly see the effect of changing the disk continuum. Here we adopt a dust-to-gas mass ratio (${\zeta}_{sil}$) of 0.0034 for the silicates in the outer wall and disk and explore silicate dust mixtures incorporating olivines, pyroxenes, forsterite, enstatite, and silica. (We note that throughout this work we are referring to amorphous material of olivine or pyroxene stoichiometry when using the terms ``olivine'' and ``pyroxene.'') We list the derived mass fractions in Table~\ref{tab:silwall}. The optical constants used for olivines and pyroxenes come from \citet{dorschner95}. We calculate the opacities assuming segregated spheres and Mie theory for the adopted dust grain size distribution. For an explanation of how the forsterite opacity was computed, see the discussion by \citet{poteet10}. We also calculated the opacity for enstatite. We adopt optical constants for enstatite from \citet{huffman71} and \citet{egan77}, crystalline bronzite at 300~K from \citet{henning97}, the three crystalline axes of orthoenstatite from \citet{jaeger98}, and crystalline hypersthene from \citet[][Sample 1S]{jaeger94} for the 0.1--0.5, 0.533--1.105, 6.7--8.4, 8.7--98, and 98--8000~{$\mu$m} wavelength regimes, respectively. The optical constants from \citet{jaeger94} were modified to match the values from \citet{jaeger98}, as was done for forsterite as described by \citet{poteet10}. Beyond 585~{$\mu$m}, the real part of the index of refraction, $n$, was held constant at the value obtained at 585~{$\mu$m} from the modified \citet{jaeger94} values, and the imaginary part, $k$, was determined by scaling a 1$/{\lambda}$ curve to the value of $k$ at 585~{$\mu$m} from the modified \citet{jaeger94} values. The absorption opacity was then computed from these optical constants with CDE theory \citep{bohren83}. The scattering opacity is assumed to be zero. Finally, we compute the opacity for silica, adopting optical constants from the following sources. Between 0.05 and 0.15~{$\mu$m}, alpha quartz from \citet{palik85} is used. From 3 to 8~{$\mu$m}, $k$ comes from $k_{abs}$ for amorphous silica from \citet{palik85}. Between 0.15 and 3~{$\mu$m}, $k$ is interpolated between its values at 0.15 and 3~{$\mu$m}. From 0.15 to 5.5~{$\mu$m}, $n$ comes from \citet{palik85} for alpha quartz. Between 8 and 30~{$\mu$m}, the $n$ and $k$ values for beta quartz at 975~K from \citet{gervais75} are used. From 50 to 333~{$\mu$m}, both $n$ and $k$ are from \citet{loewenstein73} for alpha quartz at room temperature. The value of $n$ at 333~{$\mu$m} was kept constant out to 8000~{$\mu$m}; for $k$, a 1$/{\lambda}$ curve fit to the value of $k$ at 333~{$\mu$m} was used. To compute the absorption opacity, we employed CDE theory at all wavelengths except between 8 and 40~{$\mu$m}, where the absorption opacity is that of annealed silica from \citet{fabian01}. The scattering opacity at all wavelengths is assumed to be zero.
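The long-wavelength extrapolations described above (constant $n$ and $k$ scaled as 1$/{\lambda}$ beyond an anchor wavelength) can be sketched as follows; this hypothetical helper uses the 585~{$\mu$m} anchor quoted for enstatite and is illustrative only, not our actual opacity code:
\begin{verbatim}
import numpy as np

def extend_nk(lam, n_tab, k_tab, lam_anchor=585.0, lam_max=8000.0):
    """Extend tabulated optical constants (wavelengths in microns):
    n is held at its anchor value and k follows a 1/lambda curve
    scaled to its anchor value, as described in the text."""
    n0 = np.interp(lam_anchor, lam, n_tab)
    k0 = np.interp(lam_anchor, lam, k_tab)
    lam_ext = np.geomspace(lam_anchor, lam_max, 256)
    n_ext = np.full(lam_ext.size, n0)
    k_ext = k0 * lam_anchor / lam_ext
    return lam_ext, n_ext, k_ext
\end{verbatim}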
In addition to silicates, for each of the disks we add organics and troilite to the dust mixture following \citet{espaillat10}, with ${\zeta}_{org}$ = 0.001 and ${\zeta}_{troi}$ = 0.000768 and sublimation temperatures of $T_{org}$ = 425~K and $T_{troi}$ = 680~K. We include water ice as well, with a sublimation temperature of 110~K. Unless otherwise noted, we use ${\zeta}_{ice}$ = 0.00056. Optical constants for organics, troilite, and water ice are adopted from \citet{pollack94}, \citet{begemann94}, and \citet{warren84}, respectively. In objects where we include optically thin dust within the hole or gap, the silicate dust composition is listed in Table~\ref{tab:silthin}. The abundances of silicates, organics, and troilite in the optically thin dust region are given in the subsequent sections. We do not include ice in the optically thin region since the temperatures there are high enough for it to have sublimated. \subsection{SED Modeling} \label{sec:modsed} Here we provide an overview of our modeling results. In the Appendix, we describe in detail the modeling conducted in this study for each individual object. \subsubsection{Disk Structure} We can explain most of the seesaw variability observed in the pre-transitional disks by changing the height of the inner disk wall. (We note that other possible explanations, including changes in the stellar and disk properties, have not been explored here.) In the pre-transitional disks of LkCa~15, SZ~Cha, and UX~Tau~A we can reproduce the seesaw variability by changing the height of the inner disk wall by $\sim$22$\%$, 33$\%$, and 17$\%$, respectively (Figure~\ref{figlkca15}). When the inner wall is taller, the emission at the shorter wavelengths, where the wall dominates, is higher; there is also a larger shadow on the outer wall, hence less emission is seen from the outer wall and the IRS spectrum is lower at longer wavelengths. Correspondingly, when the inner wall is lower, there is less near-IR emission, the shadow on the outer wall is smaller, and we see more emission from the outer wall longwards of 20~{$\mu$m}. The 10~{$\mu$m} silicate emission in LkCa~15 and SZ~Cha does not change; this emission is dominated by small dust in the optically thin region. UX~Tau~A does not have a discernible 10~{$\mu$m} silicate emission feature. Because of uncertainties introduced to the observations by mispointing, we do not attempt to reproduce the variability seen in RY~Tau (Figure~\ref{figrytau}). However, we can fit the SED of RY Tau with an 18~AU gap which contains some optically thin dust. In the case of WW~Cha (Figure~\ref{figwwcha}), because the GO observations were significantly mispointed and we do not have a mass accretion rate estimate, we do not attempt to model its disk here. We also do not have a mass accretion rate for ISO~52 (Figure~\ref{figiso52}). However, these observations were well pointed, and for the purposes of reproducing the general trend seen in the variability, we assume a typical value (see the Appendix for more details). In this object, we need to increase the height of the inner wall by $\sim$400$\%$ between the GTO and GO epochs to explain the observed variability. Assuming that our assertion that the inner wall height is varying is correct, this is by far the largest change in wall height seen in the sample. CR~Cha has a substantial change in slope at $\sim$6~{$\mu$m} (Figure~\ref{figchx3}), from which one could infer that there is either a substantial change in the temperature of the wall or a change in the nature of the emission from optically thick to optically thin. We find that CR~Cha is best explained with a pre-transitional disk model in the GTO observations and either a pre-transitional or a transitional disk model in the GO observations.
See the Appendix for more details and {\S}~\ref{crchadis} for a discussion. In the pre-transitional disks of IP~Tau, T56, and T35, we can reasonably reproduce the emission within the uncertainties of the observations by varying the height of the inner disk wall by 17$\%$, 50$\%$, and 20$\%$, respectively (Figure~\ref{figiptau}). In addition to varying the height of the inner wall, we also had to change the amount of dust in the optically thin regions of IP~Tau and T56 in order to reproduce the variability in the 10~{$\mu$m} silicate emission feature. We also modeled the transitional disks in the sample (Figures~\ref{figgmaur} and~\ref{figdmtau}). GM~Aur displays seesaw behavior. To fit it, we vary the amount of optically thin dust in the hole and have to change the height of the outer wall as well. In CS~Cha, only the 10~{$\mu$m} silicate emission changes between epochs, and to fit this variability we alter the amount of dust in the optically thin region. In DM~Tau and T25 there is no variability, and there is no evidence for significant amounts of dust in their holes. There is some variability in the pre-transitional disks which we cannot explain by changing the height of the inner disk wall. The GO2 spectrum of T35 has a change in slope at $\sim$7~{$\mu$m} that we could not explain with the disk models presented here. We have no obvious explanation for this, but speculate that it could be related to the high temperature derived for the inner disk wall (1800~K). In UX~Tau~A, the overall emission from the disk has decreased with time (i.e., the GO spectra have less emission than the GTO spectra). While we do not try to fit this decrease with models, it can possibly be attributed to a decrease in the luminosity at the bands where the disk absorbs stellar radiation, most likely due to star spots \citep{skrutskie96}, or to an overall decrease in the accretion luminosity of the disk, most likely from a change in the mass accretion rate by a factor of about 3. We also analyzed an additional SHLH spectrum from the {\it Spitzer} archive (see the Appendix for more details). This spectrum has substantially lower emission at ${\lambda}>$13~{$\mu$m} (Figure~\ref{uxshlh}). We can fit the SHLH spectrum using an inner wall with a temperature of 1800~K. This hotter inner wall is closer to the star and leads to a larger shadow on the outer wall. Given that we do not have simultaneous data at shorter wavelengths, we cannot test whether this wall fits the SED at ${\lambda}<$10~{$\mu$m} at the time the SHLH spectrum was taken. \subsubsection{Dust Composition} As a result of trying to reproduce the variability observed in our sample, in this study we also fit the silicate emission features visible in the IRS spectra, deriving the mass fractions of amorphous and crystalline silicates in the outer wall and the optically thin regions (Tables~\ref{tab:silwall} and~\ref{tab:silthin}). (We refer the reader to Figure 1 of \citet{watson09} for the positions of the strongest features of crystalline silicates visible in IRS spectra.) We do not attempt a detailed ${\chi}^{2}$ fitting since it would be too computationally expensive to do so with our disk code. Thus, the derived mass fractions in Tables~\ref{tab:silwall} and~\ref{tab:silthin} should be taken as representative of a dust composition that can reasonably explain the observed SED. We refer the reader to \citet{sargent09} for a review of the typical degeneracies of dust fitting.
In short, large grains of amorphous olivine and amorphous pyroxene composition are the most degenerate, in the sense that one of these components could be replaced by the other and a similar fit would be found. Enstatite and forsterite are also somewhat degenerate at cooler temperatures. Another caveat, noted earlier, is that the shapes of the grains in the disk are not well known. We leave it to future work to further constrain the mass fractions of silicates in these disks. For the inner wall of our objects, the silicate composition consists solely of amorphous olivines. However, since the inner wall does not produce significant 10~{$\mu$m} silicate emission in the objects in this study, we have no way to distinguish between pyroxene and olivine silicates in the inner wall. Also, while we included crystalline silicates in the disk behind the outer wall, it is the outer wall that dominates the emission at the longer IRS wavelengths. Because of these two points, here we only discuss the composition of silicates in the optically thin dust region and the outer wall. The absorption and emission of the outer walls in our sample are mostly dominated by amorphous silicates (Table~\ref{tab:silwall}). The exception is T35, which is dominated by crystalline silicates ($\sim$60$\%$). The optically thin region also tends to be dominated by amorphous silicates, with typically $\sim$10$\%$ or less of crystalline silicates. This is not the case in T56, which contains $\sim$25$\%$ crystalline silicates. Of the three crystalline silicates studied in this work, we are more likely to see forsterite rather than enstatite or silica in the optically thin region (Table~\ref{tab:silthin}). Comparing the optically thin region and the outer wall in objects that have both, it appears that silica is more likely to be present in the outer wall. Relative to the optically thin region, we find more crystalline silicates in the outer walls of T56, SZ~Cha, and LkCa~15, and less for CS~Cha and GM~Aur. The amount of crystalline silicates in CR~Cha and IP~Tau is the same between both regions. RY~Tau has no evidence for significant amounts of crystalline silicates in its disk. The results from the dust fitting performed in this work are in reasonably good agreement with the detailed dust fitting conducted by \citet{sargent09}, which used a two-temperature model. The objects that the two samples have in common are DM~Tau, GM~Aur, IP~Tau, and LkCa~15. Both works find that these four disks are dominated by amorphous silicates and that there are relatively few crystalline silicates present. Furthermore, olivine silicates dominate the inner parts of the disk that contribute to the 10~{$\mu$m} emission. \section{Discussion} \label{sec:discuss} \subsection{Linking Infrared Variability to Disk Structure} Understanding the underlying causes of the variability observed in this sample depends upon the physical locations in the disk from which the changes in flux arise. Given that the sample was chosen to include pre-transitional and transitional disks, the nature of these objects will necessarily play a key role in this. The disk structures of LkCa~15, UX~Tau~A, GM~Aur, and RY~Tau have been independently confirmed. LkCa~15, UX~Tau~A, and GM~Aur have been imaged with millimeter interferometers, and large cavities in their disks have been observed \citep[Andrews et al., in prep;][]{pietu06,hughes09}.
Near-infrared spectra have confirmed that the inner disks of LkCa~15 and UX~Tau~A are optically thick, while the inner disk of GM~Aur is optically thin \citep{espaillat10}. Millimeter interferometric imaging of RY~Tau by \citet{isella10} detects two spatially resolved peaks, an indicator of a disk hole, whose separation translates to a cavity that is consistent with the 18~AU gap inferred from the SED modeling in this work. For the other objects in the sample, the disk structure is inferred solely from SED modeling. Millimeter interferometry and near-IR data are needed to confirm that there are cavities in these disks and to probe whether the inner disk is optically thick. However, the IRS spectra of SZ~Cha, WW~Cha, and T56 are reminiscent of LkCa~15, suggesting that they are gapped disks as well. Likewise, T35 resembles UX~Tau~A. CS~Cha, DM~Tau, and T25 have large deficits of flux, which are strong indicators of inner disk holes, as seen in GM~Aur. It follows that one can roughly divide the disk into two regions -- inner (inner wall and$/$or optically thin dust region) and outer (outer wall and outer disk). Interestingly, the only disks that do not display variability in our sample are the transitional disks DM~Tau and T25, whose inner regions do not contain substantial amounts of small dust. DM~Tau's inner hole is relatively devoid of small dust, and T25's inner region contains only 10$^{-13}$~{\hbox{M$_{\odot}$}} of small dust. In contrast, the transitional disks of GM~Aur and CS~Cha have about ten times more small dust within their optically thin inner cavities than the transitional disks of DM~Tau and T25. We can infer that there is not enough material in the inner regions of DM~Tau and T25 to lead to significant variability. Objects in the sample that have a notable amount of material in their inner region do vary. \subsubsection{Inner Wall} We attempted to understand the variability seen in our sample by fitting the SEDs with disk models. In the pre-transitional disks of LkCa~15, SZ~Cha, UX Tau~A, IP~Tau, T56, and T35 we can reasonably reproduce the emission between 5--38~{$\mu$m} within the uncertainties by varying the height of the inner disk wall by 22$\%$, 33$\%$, 17$\%$, 17$\%$, 50$\%$, and 20$\%$, respectively (Figures~\ref{figlkca15} and~\ref{figiptau}). When the inner wall is taller, the emission at the shorter wavelengths is higher, since the inner wall dominates the emission at 2--8~{$\mu$m}. The taller inner wall casts a larger shadow on the outer disk wall, and we see less emission at the wavelengths beyond 20~{$\mu$m}, where the outer wall dominates. When the inner wall is shorter, the reverse occurs. ISO~52 is an extreme case: its inner wall height has to change by 400$\%$ to explain the observed variability (Figure~\ref{figiso52}). We did not attempt to fit the variability seen in the pre-transitional disks of RY~Tau and WW~Cha due to complications introduced by mispointing and insufficient data. However, these disks exhibit seesaw-like variability (Figures~\ref{figrytau} and~\ref{figwwcha}). Taking the modeling described above into consideration, one can surmise that the variability in these disks is also due to an inner wall which varies in height. While the variations in the SED for many objects in the sample can be reproduced by changes in the height of the inner wall, we note that other explanations that have not been considered here may possibly result in similar SED behaviors. We leave exploration of this to future work.
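As an illustration of the shadowing picture used throughout this section, a simple similar-triangles estimate (our schematic sketch, which neglects wall curvature and disk flaring; the full treatment is in the Appendix of \citet{espaillat10}) gives, for a star of radius $R_*$ and an inner wall of height $z^{i}_{wall}$ at radius $R^{i}_{wall}$, the fully shadowed (umbra) and fully illuminated heights at the outer wall radius $R^{o}_{wall}$: \[ z_{umb} \simeq R_* + \left(z^{i}_{wall} - R_*\right)\frac{R^{o}_{wall}}{R^{i}_{wall}}, \qquad z_{pen} \simeq -R_* + \left(z^{i}_{wall} + R_*\right)\frac{R^{o}_{wall}}{R^{i}_{wall}}. \] The outer wall is dark below $z_{umb}$, partially illuminated between the two heights, and fully illuminated above $z_{pen}$. Both heights grow in direct proportion to $z^{i}_{wall}$ (up to the small $R_*$ offsets), which is why the modest $\sim$20$\%$ changes in inner wall height fitted above translate directly into changes in the illuminated area, and hence the emission, of the outer wall.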
\subsubsection{Optically Thin Dust Region} \label{disoptthin} In the pre-transitional disks of IP~Tau and T56, the 10~{$\mu$m} silicate emission changes (Figure~\ref{figiptau}). This feature is dominated by sub-micron-sized grains in the optically thin dust region located within the disk gap. We can reproduce the change in this emission by adjusting the amount of small dust in this region. Alternatively, given that the spatial distribution of this dust is largely unknown, it is possible that part of the optically thin dust region is in the shadow of the inner wall, and so in some cases the amount of dust we see in this region varies as the height of the inner wall changes. In LkCa~15 and SZ~Cha, the 10~{$\mu$m} silicate emission does not change. This indicates that the optically thin dust is vertically distributed in such a way that it is not shadowed by the inner wall. This could suggest that there is more dust in the gap that we do not detect; therefore, the values for the amount of dust in the gaps of pre-transitional disks should be taken as lower limits. Alternatively, this optically thin dust could be heated indirectly. In order to explain the presence of the 10~{$\mu$m} silicate emission feature in self-shadowed Herbig Ae$/$Be stars, \citet{dullemond04} proposed that light reaches the shadowed regions after being scattered off the upper parts of the inner wall. In addition, they suggest that the thermal emission from the wall may also irradiate the shadowed region. Similar mechanisms may be at work in the optically thin regions of pre-transitional disks. Of the four transitional disks in the sample, two exhibit variability: CS~Cha and GM~Aur (Figure~\ref{figgmaur}). In CS~Cha, only the 10~{$\mu$m} silicate emission changes between epochs. We can explain this by varying the amount of optically thin dust located within the central cavity. On the other hand, the dust in CS~Cha could be spatially distributed in such a way as to cause the observed variability. For example, CS~Cha is a spectroscopic binary and, while we do not see the type of variability expected for a circumbinary disk \citep{nagel10}, the optically thin dust in the hole of CS~Cha could be unevenly distributed in such a way that the alignment of the binary system at the time the GO data were taken illuminates more of the dust. High-resolution near-infrared interferometry of this object would be ideal to test this. To fit the variability of GM~Aur, we not only change the amount of optically thin dust in the hole, but also have to change the height of the outer wall. \citet{espaillat10} demonstrated that GM~Aur's near-infrared excess continuum between 1--5~{$\mu$m} could be reproduced by emission from sub-micron-sized optically thin dust. These variability data suggest that there is some optically thick structure in the inner disk, perhaps composed of large grains and$/$or limited in spatial extent, which does not contribute substantially to the emission between 1--5~{$\mu$m} and leads to shadowing of the outer disk. Alternatively, it could be that while the dust in the hole is vertically optically thin, it becomes horizontally optically thick at some radius and shadows the outer disk \citep{mulders10}. This scenario implies that the vertical extent of the optically thin dust region changes to produce a shadow commensurate with the variable emission at the longer wavelengths.
Then again, the fact that we have to change the height of the outer wall of GM Aur to fit its variability may not be linked to shadowing of the outer wall by inner disk material, but to changes in the outer wall itself. For example, our models assume the outer wall is axisymmetric, but changes in the visible area of the wall could lead to different emission from the outer wall. We note, however, that the orbital timescales at the outer wall are much longer than the timescales probed in this work. The case of GM~Aur needs to be explored further. \subsubsection{The Unique Case of CR Cha} \label{crchadis} CR~Cha displays a considerable change in slope at 6~{$\mu$m} (Figure~\ref{figchx3}). In the other objects in this sample, the emission at this wavelength is typically dominated by either an optically thick inner wall or an optically thin dust region. This could suggest that the inner disk alternates from being dominated by optically thick material in the GTO epoch to being dominated by optically thin dust at the time the GO observations were taken 3~yrs later. Alternatively, the change in slope at 6~{$\mu$m} could be due to a substantial decrease in the temperature of the inner wall. Accordingly, we reproduced the variability observed in CR~Cha by fitting it with a pre-transitional disk model in the GTO epoch and with both a transitional and a pre-transitional disk model in the GO epoch. In the case where we fit the GO spectra with a transitional disk model, it follows that once the optically thick inner wall disappears, we see all of the optically thin dust within the disk hole. Hence, there is substantially more 10~{$\mu$m} emission in the GO observations, which we reproduce by increasing the amount of dust in the optically thin region. We note that the height of the outer wall in the GO epoch, where we assume that there is no shadowing, is less than the height of the wall in the GTO epoch, when there is an optically thick inner wall shadowing part of the outer wall. Given that we take the shadowed portion of the outer wall into account in the case where the inner disk is optically thick, this decrease in wall height in the case where the inner disk is optically thin may imply that a portion of the outer wall is still shadowed. As in the case of GM Aur discussed in {\S}~\ref{disoptthin}, this suggests that either there is an optically thick structure in the inner disk that we cannot detect or that the vertically optically thin dust is radially optically thick. It is expected that the optically thick inner disk in pre-transitional disks will disappear at some point via accretion onto the star and$/$or a lack of resupply of dust and gas from the outer disk, leaving behind a transitional disk. However, the viscous timescale at these radii is on the order of 10$^{4}$~yrs, making it improbable that we are detecting this transition. Alternatively, in the case where we fit the GO spectra with a pre-transitional disk model, we have to decrease the temperature of the wall from 1400~K (in the GTO fit) to 800~K. This corresponds to a change in radius from 0.2~AU to 1~AU. This may not be a viable model, since it is not clear what process could make the dust grains sublimate at a temperature as low as 800~K. In any event, the amount of optically thin dust in the gap remains about the same as seen in the GTO epoch. Near-IR spectra at shorter wavelengths are necessary to decipher whether the inner disk of CR~Cha is optically thick or optically thin.
Multi-epoch spectra would be useful in constraining whether the nature of the inner disk changes with time. Millimeter confirmation of the hole in CR~Cha with {\it ALMA} is necessary to decipher whether there is indeed a cavity in this disk. \subsection{Physical Mechanisms Behind Variable Disk Structures} \subsubsection{Variable Accretion} A higher mass accretion rate will lead to a higher surface density in the disk, and so the height of the wall (defined as the point where the optical depth to the stellar radiation reaches $\sim$1) will increase \citep{muzerolle04}. In the cases of LkCa~15, SZ~Cha, UX Tau~A, IP~Tau, T56, and T35, the change in the near-IR emission could be explained if the mass accretion rate varies by factors of $\sim$3--10 relative to the mass accretion rate used in this work. Studies have shown that mass accretion rates onto the star are indeed variable. In the transitional disk of TW Hya, accretion rates of 5$\times 10^{-10}$ M$_{\sun}$ yr$^{-1}$ (Muzerolle et al. 2000), 2$\times 10^{-9}$ M$_{\sun}$ yr$^{-1}$ (Herczeg et al. 2004), and 3.5$\times 10^{-9}$ M$_{\sun}$ yr$^{-1}$ (Ingleby \& Calvet, submitted) have been measured. Alencar \& Batalha (2002) found that TW Hya's mass accretion rate varied between 10$^{-9}$ and 10$^{-8}$ M$_{\sun}$ yr$^{-1}$ over a one-year period and that smaller variations were seen even on timescales of days. While these accretion rates have been measured onto the star (i.e., in the accretion columns), the orbital timescales at the dust sublimation radius ($\sim$1~week) are within the timescales of the infrared variability seen. Hence, the changes in the mass accretion rate in the inner disk necessary to change the wall height are plausible. However, if the mass accretion rate increases, the dust sublimation radius will increase as well, given that $R_{wall}\propto(L_* + L_{acc})^{0.5}$ and $L_{acc}$ $\sim GM_*\hbox{$\dot M$}/R_*$ \citep{dalessio05}. The change in the radius is much larger than the change in the wall height. For a wall with a relatively similar height and a larger radius, the shadow on the outer wall \citep[see Equations A4 and A5 in the appendix of][]{espaillat10} will not be large enough to diminish the flux at the longer wavelengths to the levels observed. Therefore, a change in mass accretion rate alone cannot explain the observed SEDs of the pre-transitional disks in our sample, indicating that the variability in these disks is due to a change in the wall height while the radius of the wall remains fixed. Earlier we noted that to fit the variability in the transitional disks of GM~Aur and CS~Cha and the pre-transitional disks of IP~Tau and T56, we altered the amount of dust in the optically thin region. It has been proposed that this small dust exists in the holes of some objects due to dust traveling with gas from the outer disk into the inner disk after being filtered at the outer wall \citep{rice06}. In this scenario, changes in the amount of dust in the optically thin region could be due to variable mass accretion rates. The reason behind the variability of accretion is not understood. \citet{turner10} proposed that changes in the disk magnetic flux coupled with changes in the X-ray luminosity can lead to substantial changes in the mass accretion rates of typical TTS disks. However, this applies to accretion flows onto the star. For the inner disk, accretion variability could be linked to the formation mechanism behind cavities in disks, namely planets.
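To make the scaling argument above concrete with a purely illustrative case: for a star with $L_{acc} \approx L_*$ initially, increasing \hbox{$\dot M$} (and hence $L_{acc}$) by a factor of 3 would give \[ \frac{R_{wall}'}{R_{wall}} = \left(\frac{L_* + 3\,L_{acc}}{L_* + L_{acc}}\right)^{1/2} = \sqrt{2} \approx 1.4, \] i.e., a $\sim$40$\%$ increase in the wall radius, to be compared with the $\sim$20$\%$ changes in wall height fitted in {\S}~\ref{sec:modsed}. This is the sense in which the radius responds much more strongly than the height; the assumed luminosity ratio is ours, chosen only for illustration.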
One can speculate that changes in mass accretion rate could be due to planetary companions which alter the accretion flow in the inner disk regions before it eventually reaches the star. \citet{lubow06} and \citet{zhu10} demonstrate that planets will affect the mass accretion rate into the inner disk. It is possible that this could occur on the timescales seen here, given that the 1--3~yr variability observed in our sample corresponds to orbital radii of 1--2~AU, plausible radii for planets to be located. \subsubsection{Disk Warps} \label{diskwarps} The changes seen in the inner disk could be due to warps. To explain the variability seen in the transitional disk of LRLL 31, \citet{muzerolle09} proposed that the variability was due to dynamical changes in the inner disk, particularly in the form of disk warps. \citet{flaherty10} showed that the seesaw-like variability observed could be explained by models of a disk with an inner warp which causes the height of the inner disk to change with time. Such warps could be due to the presence of multiple planets in the disk. While a disk would damp the eccentricity of a single planet, multiple planets would be able to maintain eccentric orbits, which would induce modulations that would affect the inner disk edge \citep{dangelo06}, leading to the change in the height of the inner wall needed to reproduce the observations of pre-transitional disks. Warps caused by planets could account for the timescales of the flux changes seen in our sample. Variability on timescales of 1--3~yr corresponds to orbital radii of 1--2~AU, and 1~wk timescales correspond to 0.07~AU. Radii of 1--2~AU are plausible locations for planetary companions. A radius of 0.07~AU is comparable to the dust destruction radius. Many ``hot Jupiters'' are known to exist at radii $<$0.1~AU \citep{marcy05}, comparable to or within the magnetospheric radii of their host stars and well within the dust sublimation radius, most likely having reached their current positions via migration \citep{rice08}.
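The orbital radii quoted in this subsection follow from Kepler's third law: for a star of mass $M_*$, \[ a \simeq \left(\frac{P}{1\,{\rm yr}}\right)^{2/3} \left(\frac{M_*}{{\rm M}_{\odot}}\right)^{1/3} {\rm AU}, \] so $P$ = 1--3~yr gives $a \simeq$ 1--2~AU and $P$ = 1~wk $\approx$ 0.02~yr gives $a \simeq$ 0.07~AU for a solar-mass star; for the roughly solar and sub-solar masses of T Tauri stars these radii change only as $M_*^{1/3}$.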
\subsection{The Composition of Silicate Dust in the Disk} The formation of crystalline silicates requires high temperatures \citep{fabian01}. One proposed mechanism for the formation of crystalline silicates is accretion heating in the innermost disk, close to the dust sublimation radius \citep[e.g.][]{gail01}. However, in our objects crystalline silicates can be seen in the outer wall, which is located at radii much farther out than the dust sublimation radius. Several other studies also find that crystalline silicates are present in the outer regions of disks \citep{bouwman08,watson09,sargent09,olofsson09,olofsson10}. This suggests that large-scale radial mixing is necessary to transport the crystalline silicates that form near the dust destruction radius out to larger radii in the disk \citep[e.g.][]{boss04, gail04, keller04, ciesla07}, but this must occur before the hole or gap forms. Alternatively, crystalline silicates can form in the inner disk due to heating from shocks in the disk created by planets \citep{desch05, boss05, bouwman08}. In this case, the crystalline silicates would still need to be transported to the outer disk. Changes in the accretion rate or stellar luminosity could possibly lead to the formation of crystalline silicates in the surface layers of the disk, as proposed in the case of EX Lupi \citep{abraham09}. Or, as noted by \citet{espaillat07b}, local processing may be due to collisions of larger bodies that produce small grains heated to temperatures high enough to create crystals. One interesting by-product of this study is the possibility of exploring how, or if, the silicate composition is linked to the variability seen in the sample. Indeed, we find that the four disks with 1~wk variability contain the highest amounts of crystalline silicates in the sample. In UX~Tau~A, ISO~52, T35, and T56, the outer wall is composed of $\sim$19$\%$, 20$\%$, 60$\%$, and 18$\%$ crystalline silicates, respectively. (The next highest fraction is $\sim$13$\%$, found in SZ~Cha.) The optically thin regions of ISO~52 and T56 are $\sim$20$\%$ and $\sim$25$\%$ crystalline silicates, respectively, higher than in any other disk in the sample by a factor of at least 2. In larger samples focusing on low-mass stars, there is very little correlation between the crystalline silicate mass fraction and any stellar or disk property, aside from positive correlations with other crystalline silicate abundances and with the amount of dust settling in the disk \citep{sargent09,watson09}. The trend between crystalline mass fraction and short-timescale variability points to a common underlying cause for the silicate composition of the disk and the seesaw behavior observed. We propose that this link is planets. Planets can instigate warps, shocks, and collisions in the disk, which can lead to both changes in the height of the inner disk wall and a higher abundance of crystalline silicates. Planets can also lead to recurring changes in the disk. In T56 and T35, the variability observed at $\lambda >$ 20~{$\mu$m} appears to oscillate between a maximum and a minimum flux. For T56, the flux at these wavelengths is at a minimum in the GTO epoch, increases in the GO1 epoch, and in the GO2 epoch decreases back to the same flux observed in the GTO epoch. In T35, the GTO and GO1 spectra are both at the maximum and the GO2 spectrum is lower. Such changes point to a periodic origin, such as a planetary orbit. Since planets are likely present in most of the disks in this sample, it would seem that these four disks with high amounts of crystalline silicates either have more planets or that hot Jupiters (see {\S}~\ref{diskwarps}) play a particularly significant role. \section{Summary \& Conclusions} \label{sed:sum} In this work we see various types of variability on 3--4 year timescales and, in some cases, variability on 1 week timescales. The dominant type of variability observed can be classified as seesaw-like behavior, whereby the emission at shorter wavelengths varies inversely with the emission at longer wavelengths. We attempted to understand the origin of the variability in pre-transitional and transitional disks by modeling the overall SEDs at different epochs. For many of the pre-transitional disks we find that the variability can be explained by changing the height of the inner disk wall and hence the shadow on the outer disk. Typically, the height of the wall varies by $\sim$20$\%$. We also perform SED model fitting for the transitional disks GM~Aur and CS~Cha. To fit the variability of GM~Aur, we vary the amount of optically thin dust in the hole and the height of the outer wall. In CS~Cha, only the 10~{$\mu$m} silicate emission changes between epochs, and so we only alter the amount of dust in the optically thin region.
The transitional disks DM~Tau and T25 are the only two disks in the sample which display no variability. These disks' inner regions do not contain discernible amounts of dust. We propose that planets are responsible for the changes observed in our sample. Overall, it seems that most of the variability seen is due to material in the inner disk casting a shadow on the outer disk. The height of the inner wall can vary due to disk warps caused by planets in the disk. We can also link the silicate dust compositions found in this work to the presence of planets. We find that crystalline silicates are common in the outer disks of our objects, too far from the central star to be explained by most crystallization mechanisms. In addition, the four disks in our sample which have the highest crystalline silicate mass fractions vary on 1~wk timescales. In two of these four disks, we see periodic changes in the infrared emission. Planets can cause shocks and collisions which can heat the dust to temperatures high enough to crystallize it. Planets can also lead to short-timescale and periodic variability. Follow-up variability studies conducted with the {\it James Webb Space Telescope} will give us the simultaneous, multi-wavelength data needed to test whether the variability observed in our sample is periodic, as well as the sensitivity to significantly expand the sample size. \acknowledgments{ We thank the referee for a constructive and thorough report. We thank Lee Hartmann for providing comments on the manuscript and Steve Lubow for useful discussions. C.~E.~was supported by the National Science Foundation under Award No. 0901947. E.~F.~was supported by NASA through the {\it Spitzer} Space Telescope Fellowship Program, through a contract issued by JPL/Caltech. P.~D.~acknowledges a grant from PAPIIT-DGAPA UNAM. E.~N.~acknowledges a postdoctoral grant from CONACyT. N.~C.~acknowledges support from NASA Origins Grant NNX08AH94G. }
\section{Introduction} In the CUORE experiment \cite{CUORE-NIMA}, an array of 988 TeO$_2$~ bolometers will be used to search for neutrinoless double beta decay (\ensuremath{\beta\beta0\nu}) with the aim of investigating the nature and the absolute mass of neutrinos. CUORE bolometers are operated as thermal calorimeters at a temperature of $\sim$10~mK. The energy released by a particle is converted into phonons, and the subsequent temperature rise is read out by a thermistor and converted into a voltage pulse, whose amplitude is related to the particle energy in a nonlinear way that must be determined experimentally for each bolometer. An energy calibration is necessary to reliably establish the bolometer response over the entire energy spectrum, in particular close to the region of interest for \ensuremath{\beta\beta0\nu}~of \elem{130}{Te}~ around 2527~keV. Any uncertainty in the calibration will affect the energy resolution of the bolometers (thus worsening our sensitivity) and introduce a systematic error in the search for \ensuremath{\beta\beta0\nu}~events. The bolometers will be cooled down in a large cryogen-free cryostat \cite{Nucciotti-LTD12} where the 4~K stage is achieved by means of 4 pulse tubes and the base temperature by a high-power dilution refrigerator. The design of such a cryostat is challenging, mainly because of the size and weight of the detector and the need to include $\sim$15~t of lead for radioactive shielding inside the cryostat. Several different requirements must be satisfied in the design of the detector calibration system. We must \begin{itemize} \item not exceed the maximum event rate of 150~mHz on each bolometer, to avoid pile-up and baseline build-up due to the intrinsically slow response of bolometers; \item meet the maximum allowed heat load at each stage of the cryostat; \item have the ability to change the sources and the calibration source isotope; \item prevent the calibration time from significantly affecting detector live time; \item use only certified materials with low radioactivity, store calibration sources outside the cryostat during normal data-taking, and otherwise avoid any risk of radioactive contamination, since \ensuremath{\beta\beta0\nu}~observation requires an ultra-low-background environment; \item operate safely and reliably for the entire experiment lifetime ($>$5 years). \end{itemize} This paper describes the calibration system that is being developed to calibrate the CUORE detector according to these requirements. \section{Design of the calibration system} \begin{figure} \includegraphics[width=0.4\columnwidth]{fig/source-locations.pdf} \caption{Top view of calibration source layout with respect to the crystal towers (sources enlarged for visibility). Visualization from GEANT4.} \label{fig:source-locations} \end{figure} The energy calibration of the CUORE bolometer array will be based upon regular measurements with $\gamma$-sources of well-known energies. This is similar to what was done in the Cuoricino experiment \cite{QINO-PRC78}, the predecessor to CUORE. For best results, five or more lines should be clearly visible in each bolometer spectrum. The CUORE detector consists of a tightly-packed array of 19 towers with 13 planes of four bolometers each. To overcome the detector self-shielding, calibration sources need to be placed between the towers. Fig. \ref{fig:source-locations} shows the final arrangement of the 6 internal and 6 external sources.
They were optimized with GEANT4 Monte Carlo simulations to produce a uniform illumination of all detectors. The CUORE calibration system is based upon 12 radioactive source strings that are able to move, under their own weight, through a set of guide tubes that route them from outside the cryostat to their locations around and between the bolometers. The system also includes 4 computer-controlled deployment boxes above the 300~K flange, cooling mechanisms to thermalize the source strings at 4~K, and a vacuum and purge system. \begin{figure} \includegraphics[width=0.60\columnwidth]{fig/source-string.pdf} \caption{Schematic of the source carrier and photo of a prototype source string.} \label{fig:source-string} \end{figure} As shown in Fig. \ref{fig:source-string}, the source string is made of 30 copper crimp capsules, evenly spaced over a length equal to the detector height (85 cm), at the bottom of a Kevlar string (0.35~mm diameter). Each capsule is 8~mm long, has a 1.6~mm outer diameter (OD), and houses a radioactive thoriated tungsten wire. A heat-shrunk PTFE cover is placed over each capsule to minimize the friction against the guide tube in which the string moves. The activity of each capsule will be 130~mBq for the internal sources and 633~mBq for the external ones. A storage and deployment mechanism for the source carriers will be provided by four motion boxes that sit on top of the cryostat, each of which will host three independently controlled drive spool assemblies (Fig. \ref{fig:3D}). The insertion and extraction of the calibration sources will be controlled remotely by a computer control system which will be integrated into both the cryostat's slow control and the experiment's database. \begin{figure} \includegraphics[width=\columnwidth]{fig/3Dmodels.pdf} \caption{3D model of the cryostat flanges with the calibration motion boxes on top and the 12 guide tube routings integrated. On the right, 3D rendering of a drive spool assembly (view from inside of motion box) and of the cooling mechanism.} \label{fig:3D} \end{figure} The source strings are routed by guide tubes from above the 300~K flange through the cryostat's flanges (Fig. \ref{fig:3D}). The guide tubes also provide a thermal connection to the various stages of the cryostat. The presence of the lead shields and other cryostat subsystems forces the tubes to bend in several places. In addition, manufacturing and cleaning constraints force us to split them into several sections. Above the mixing chamber, where there is no direct line of sight to the detector because of the presence of a lead shield, stainless steel (type 304) is used for the tubing (ID 5~mm, OD 5.4~mm). In the lower parts, tubes with a 4~mm inner diameter will be machined out of bars of a special high-purity, oxygen-free copper. Thermal and mechanical connections are established by copper mounts to all cryostat flanges. A set of thermometers will be placed at various points along the guide tubes to monitor the temperature during source insertion and extraction. \section{Thermal analysis} In the context of the cryostat design \cite{Nucciotti-LTD12}, a thermal analysis of the cryostat was performed and the thermal budget at each stage calculated and distributed among the various subsystems. The cooling power available to the calibration system at each thermal stage, evaluated under static equilibrium conditions, is shown in Table \ref{tab:table-thermal-calc}.
\begin{table}[tbp] \begin{tabular}{cccc} \hline Stage & calibration cooling & heat load from & radiated power from \\ & power budget & guide tube conductivity & source strings at 4~K \\ \hline 40 K & $\sim 1$ W & $\sim 1$ W & -- \\ 4 K & 0.3 W & 0.02 W & -- \\ 0.7 K & 0.55 mW & 0.13 mW & 0.08 $\mu$W \\ 70 mK & 1.1 $\mu$W & negligible & 0.3 $\mu$W \\ 10 mK & 1.2 $\mu$W & 1.07 $\mu$W & 0.08 $\mu$W \\ detector & $<$ 1 $\mu$W & -- & 0.25 $\mu$W \\ \hline \end{tabular} \caption{Cooling power available to the calibration system at each thermal stage of the cryostat and heat load contributions from the calibration system parts and operations.} \label{tab:table-thermal-calc} \end{table} Studies to understand the dynamic behavior of the cryostat in response to thermal excitations have also been performed but, given the cryostat's complexity, its real behavior is largely unknown. As a conservative approach, the calibration system is designed so that the thermal load at any moment is lower than the maximum allowed in static conditions. This will ensure that the working points of the bolometers are not affected by the calibration. A complete thermal analysis of the calibration system was performed, trying to include all relevant sources of heat load. We briefly discuss them here; the calculated heat loads are summarized in Table \ref{tab:table-thermal-calc}. \begin{figure} \includegraphics[width=.74\columnwidth]{fig/guide-tubes.pdf} \caption{Schematic view of the guide tubes' material and thermal couplings to the cryostat flanges.} \label{fig:guide-tubes} \end{figure} {\bf Thermal conductivity of guide tubes} \\ A schematic model of the thermal connections is shown in Fig.~\ref{fig:guide-tubes}. Using data from the literature \cite{lounasmaa,nist}, we calculated the heat load from the guide tubes' conductivity. As shown in Table \ref{tab:table-thermal-calc}, all values are within the allowed limits if good thermal contact is made at all stages, with the exception of the 70~mK plate, where a weak coupling is needed because of the small distance (11~cm) from the 0.7~K plate. Weak thermal coupling will be achieved by introducing low-conductivity spacers in the tube mounts. {\bf Radiation heat inflow down the guide tubes}\\ The radiation from the motion box at room temperature will inevitably be funneled down the guide tubes and contribute to the heat load on the various stages. The bends along the path are expected to have an effect similar to that of typical shields or baffles. Worst-case calculations have shown that this heat contribution should be negligible. If this proves not to be the case, we can mitigate unwanted radiation by blackening the inner surface of the section of the guide tube between 300~K and 4~K. {\bf Heat radiated by the source strings} \\ In order to meet the maximum heat load in the detector region, calculations show that the source strings cannot have a temperature higher than 4~K when they are fully deployed next to the bolometers (i.e., during calibration measurements). It is worth noting that the guide tubes will shield the detectors from the radiant heat of the strings; the detectors would otherwise be immediately warmed up and become unusable. The source carrier design has the advantage of being small and light, thus reducing the total heat to be removed to cool it down to 4~K. Nevertheless, since radiative cooling is ineffective, a mechanical system that provides good thermal contact between the capsules and a heat sink at 4~K is currently being developed.
The cooling concept is based upon a linear solenoid actuator operated at 4~K that squeezes the capsules against a heat sink at 4~K. A conceptual design drawing is shown in Fig. \ref{fig:3D}. Although preliminary tests at room and liquid-nitrogen temperatures are encouraging, cooling the sources is still one of the most critical open design issues. {\bf Thermal conductivity of the source strings}\\ The source strings are connected to 300~K at all times in the motion boxes. Due to their small size and the relatively low thermal conductivity of Kevlar \cite{nist}, the heat load will be negligible if we are able to thermalize the string at each thermal stage. Unfortunately, the thermal contact between the string and the guide tube is ill-defined. In the worst-case scenario, when a source is fully deployed, the heat conducted along the string will exceed the heat lost by radiation, effectively warming the source string above its nominal operating temperature of 4~K. Therefore, the source carrier strings need to be effectively thermalized at least at the 4~K stage. {\bf Friction heat during source string motion}\\ A distinctive feature of the calibration system design is that 12 strings move over 2.5 meters from 300~K into a region at 10~mK. During insertion and extraction, the source strings generate friction by sliding against the tubes. This is especially critical at the bends, where the friction force depends exponentially on the bending angle and the friction coefficient. We have developed a simulation that models the friction and adjusts the source extraction speed so that the heat dissipated by friction at each stage does not exceed the maximum allowed in static conditions. Source string motion as slow as 0.1 mm/s is needed to meet this constraint. The time required for the extraction of the sources is then evaluated based on pairing one internal string with an external one and staggering the pairs, resulting in an extraction time of $\sim$48 hours. During the commissioning tests that will be performed in the CUORE cryostat, we plan to study the dynamic response of the cryostat to excess heat loads and evaluate the possibility of moving the sources faster depending on the recovery time constant of the cryostat and detector. To reduce friction, we are evaluating the use of PTFE for the bent sections of the guide tubes. We are also investigating materials other than Kevlar for the source strings, with a lower friction coefficient and similar mechanical properties. \section{Prototyping and testing} For room temperature motion tests, a representative full-length guide tube routing was assembled together with a drive assembly. The source is moved by a stepper motor while its tension is continuously monitored by a miniature load cell. An optical encoder, a proximity sensor and a camera are also present to monitor the source position. Dummy sources were manufactured for the tests. A LabView program has been developed to control the motor and acquire data from all sensors. Our testing so far has demonstrated that, at room temperature, the source moves along the guide tubes in a reliable and reproducible way. The load cell reading shows a pattern that reflects the source string's path through the guide tubes and can be used to monitor the source for problems. Positioning accuracy of about 0.5 mm can be achieved. A wear test was performed, which showed that only minor fraying occurs in the Kevlar string after more than 10,000 spooling cycles.
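As an aside, the friction model underlying the extraction-speed simulation described in the thermal analysis can be illustrated with a capstan-equation sketch; the tension, friction coefficient, and bend angle below are hypothetical placeholders rather than measured values.
\begin{verbatim}
import math

# Capstan-equation sketch of friction heating at one guide-tube bend.
# All parameter values are hypothetical placeholders.
MU = 0.2                 # assumed string/tube friction coefficient
THETA = math.pi / 2.0    # bend angle [rad]: a single 90-degree bend
T_IN = 0.05              # tension entering the bend [N] (assumed)

def friction_power(speed):
    """Heat [W] dissipated at the bend while the string slides at
    `speed` [m/s]. The capstan equation T_out = T_in * exp(mu * theta)
    gives the tension amplification across the bend; the excess tension
    is the friction force, and power is force times sliding speed."""
    t_out = T_IN * math.exp(MU * THETA)
    return (t_out - T_IN) * speed

# At the quoted 0.1 mm/s extraction speed, one such bend dissipates
# about 2 microwatts for these assumed values; the exponential
# dependence on mu*theta is why the bends dominate the friction budget.
print(friction_power(1.0e-4))

# Consistency check of the quoted timescale: 2.5 m of travel at
# 0.1 mm/s takes 25,000 s (~7 h) per string; moving internal/external
# strings in pairs and staggering the six pairs gives ~42 h, of the
# order of the ~48 h extraction time quoted above.
\end{verbatim}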
\section{Conclusions} The design of a calibration system for CUORE that allows radioactive sources to be inserted into and extracted from a cryostat is being finalized. The cryostat for CUORE will be installed at the beginning of 2010, along with a full calibration routing for testing purposes. Final installation and commissioning of the calibration system will take place in the second half of 2010. \begin{theacknowledgments} We gratefully acknowledge the support of the University of Wisconsin and the U.S. Department of Energy, Office of Science, Nuclear Physics through OJI Grant DE-FG02-08ER41551. We also thank Ken Kriesel, Jackie Houston and Dan Zou for their contributions to the design and testing of this system. This work was done in collaboration with and as part of the CUORE experiment. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} Sequential recommendation has been studied more and more widely, as it closely matches real-world recommendation scenarios. Recently, deep learning (DL) based methods, which have contributed to significant performance improvements in diverse fields, have been adopted in sequential recommendation. Sequential recommender systems basically take sequences of items as their input, which is similar to natural language processing (NLP), where word sequences are generally the input unit. Based on this structural similarity, DL architectures such as recurrent neural networks (RNN) and attention mechanisms are extensively applied to sequential recommendation, delivering notable performance. Also, various methods based on other DL architectures such as convolutional neural networks (CNN) and variational auto-encoders (VAE) have been increasingly adopted, demonstrating state-of-the-art performance. Although it is clear that various DL-based methods have been actively introduced in sequential recommendation, most of them focus only on transformations of the model structure. However, according to previous works \citep{Fang20}, there are multiple factors influencing the performance of DL-based models, including data augmentation in the data processing step and the choice of loss function as a model training strategy. In particular, DL-based models with high complexity generally require a large amount of training data in order to estimate model parameters effectively and achieve high performance. For this reason, in some domains including computer vision (CV), there were considerable early efforts to increase the size of the training dataset through augmentation, which is now established as a standard preprocessing technique for achieving high performance \citep{Krizhevsky12}. Yet it has not been sufficiently verified whether, and how, data augmentation is effective in improving the performance of recommender systems. In this paper, we aim to show that various data augmentation strategies can improve the performance of sequential recommender systems, especially with a limited amount of training data. To this end, we propose a simple set of data augmentation strategies including 1) Noise Injection, 2) Redundancy Injection, 3) Item Masking, and 4) Synonym Replacement, all of which transform original item sequences through direct manipulation, and describe how data augmentation changes the performance through extensive experiments based on a state-of-the-art recommendation model. The experiments demonstrate that overall performance improvements are achieved by applying our proposed data augmentation strategies. Notably, the performance improvement can be large when the size of the dataset is relatively small. This result suggests that our strategies can be particularly effective for boosting performance in cold-start situations, where sequential recommender systems do not yet have a sufficient dataset. The contributions of this study can be summarized as follows: \begin{itemize} \item We verify through quantitative experiments that data augmentation can improve the performance of DL-based sequential recommendation. We also demonstrate that it can significantly boost model performance especially when the amount of training data is small. \item We describe how the performance varies under the extensive application of various data augmentation strategies.
In particular, we propose a simple set of augmentation strategies that directly manipulate original sequences, in addition to the subset-selection based ones suggested in prior works. We show that our strategies can achieve better or competitive performance relative to the existing ones. \item We suggest the possibility of further performance improvements for other current state-of-the-art models, where data augmentation is applied in the preprocessing step while the overall model architecture is kept intact. In this regard, we expect that future works can verify whether data augmentation can serve as a universal preprocessing technique in the design of recommender systems. \end{itemize} \section{Related Works} \subsection{Sequential Recommendation} Traditional recommendation aims to model the general and global preferences of users based on the assumption that user-item relationships are static rather than dynamic, which intrinsically ignores the order of user-item interactions. Accordingly, special approaches have been required for sequential recommendation, mainly concentrating on finding sequential patterns in the data while taking item order into account \citep{Wang19survey}. The earliest approach to sequential recommendation is the Markov Chain (MC), which assumes that the next action is conditioned on only a few previous actions. MC aims to learn the transition patterns between items based on an item-item transition matrix applied equally to all users. {\itshape FPMC}\citep{Rendle10} is marked by its extensive approach that combines a 1st-order MC model with general matrix factorization, building a user-item-item tensor and factorizing it into a user-item matrix and a personalized transition matrix. The subsequently proposed {\itshape Fossil}\citep{He16} improves on {\itshape FPMC} by extending factored MCs to higher orders. Recently, researchers have found that DL-based methods can be effective for sequential recommendation. RNN-based models are the most popular due to a structural commonality: both generally take their input in the form of sequences and model them step by step. The most pioneering DL-based method is {\itshape GRU4Rec}\citep{Hidasi15}, which utilizes gated recurrent units (GRU) to model sequences. After that, an updated model, {\itshape GRU4Rec+}\citep{Hidasi18}, was proposed, adopting a different loss function and sampling strategy. Other advanced approaches originally suggested in NLP, such as attention mechanisms, have also been increasingly adopted in sequential recommendation. For example, {\itshape Transformer}\citep{Vaswani17} and {\itshape BERT}\citep{Devlin18}, purely composed of self-attention modules, are among the most outstanding models in NLP, receiving attention not only for achieving state-of-the-art performance in multiple NLP tasks but also for changing the landscape of the research field. {\itshape SASRec}\citep{Kang18} is the first method that seeks to apply the approach of {\itshape Transformer} to sequential recommendation tasks. The model adopts the encoder structure of {\itshape Transformer}, based on self-attention mechanisms, in order to learn latent representations of the items in the sequences and use them as the basis for item similarity computation.
{\itshape SASRec} has been shown to outperform previous DL-based methods on several benchmark datasets and has thus been established as a representative baseline model in sequential recommendation; multiple recently proposed methods - such as {\itshape BERT4Rec}\citep{Sun19}, {\itshape TiSASRec}\citep{Li20}, {\itshape FISSA}\citep{Lin20}, {\itshape SSE-PT}\citep{Wu20} - follow its approach to varying degrees. \citep{Meng20} is clearly differentiated from other works in that it focuses on data splitting strategies in the preprocessing step, not on transformations of network structures or modules. However, it is mainly concerned with the impact of data splitting strategies on the evaluation of recommendation models rather than with improving performance. As such, it is evident that most existing works using DL-based approaches seek to improve performance solely by designing better network architectures, without adequately considering other aspects. \subsection{Data Augmentation} DL-based models require large datasets to properly estimate their numerous parameters; otherwise, they can perform much worse than simpler approaches. As it is common, in practice, that the amount of data in hand is limited and the cost of acquiring additional data is high, one way to deal with this is data augmentation, a technique that creates synthetic data based on the dataset currently available and adds them to the training dataset \citep{Goodfellow16}. Data augmentation is known to be particularly effective for classification problems. For object recognition tasks in the CV domain, some augmentation strategies have already proved to greatly improve generalization, thereby becoming a standard technique in the data preprocessing step. Since the appropriate ways of augmenting depend on the characteristics of the dataset, most researchers have paid attention to finding the best augmentation strategies for each specific benchmark dataset \citep{Shorten19}; for natural image datasets such as ImageNet, for example, these include random cropping, image mirroring, and color shifting \citep{Krizhevsky12}. Also, for speech recognition tasks, several effective augmentation strategies have been proposed that operate on the raw signals or log mel spectrogram of input audio, such as speed perturbation or noise injection \citep{Ko15,Park19}. In NLP, progress in data augmentation has been relatively slow due to the challenge of establishing generalized rules for language transformation. Yet, as the technique has recently proved promising for boosting performance in text classification tasks, interest in augmenting natural language data is growing. {\itshape EDA}\citep{Wei19} presented a set of universal data augmentation strategies (i.e., synonym replacement, random insertion, random swap, and random deletion) and confirmed that randomly applying one of these strategies can lead to substantial improvement, particularly for smaller datasets. That study, which provides the methodological basis for evaluating the technique's effectiveness, uses a method similar to ours, simulating low-resource situations by using only a restricted fraction of the original training dataset. Even though data augmentation has been established in many domains as a standard technique to train more robust models, it is hard to say that it has been fully explored in recommendation.
One pioneering work is {\itshape AugCF}\citep{Wang19}, which utilizes data augmentation to alleviate the inefficiency of existing hybrid collaborative filtering methods. The key idea is using side information in the process of data augmentation, rather than using it directly as a simple additional input. In sequential recommendation, only a few works have explored data augmentation \citep{Grbovic15,Wolbitsch19}. The earliest is {\itshape Improved RNN}\citep{Tan16}, which proposed a two-step method to augment click sequences, particularly on the {\itshape YooChoose} dataset. The first step treats all prefixes of the original sequence as new training sequences, and the second applies random item dropout to the augmented subsequences. Thanks to data augmentation, the extended model enhances the performance of basic RNN models, and the method has become a standard preprocessing technique for subsequent models evaluated on that dataset. Besides, {\itshape CASER}\citep{Tang18} is a CNN-based model which applies horizontal and vertical convolutional filters to capture short-term preferences in item sequences. It uses a sliding-window strategy in order to apply the CNN architecture to sequential data, splitting an original sequence of length {\itshape m} into several subsequences of a smaller fixed length {\itshape L} (the window), and generates $(m-L+1)$ subsequences from a sequence, resulting in an enlarged training dataset. In all of these previous works, the augmented subsequences leave the order of the original sequence intact. This makes those strategies dependent on the length of the original sequence, while our version is capable of augmenting data to any desired size. \section{Proposed Method} In this paper, we propose a set of data augmentation strategies for sequential recommendation. To the best of our knowledge, this is the first study to comprehensively explore the effectiveness of augmenting data in sequential recommendation with extensive experiments. Note that we focus only on data augmentation operated in the preprocessing step, with the network architecture left intact. In our experiments, we use {\itshape SASRec}\citep{Kang18}, a state-of-the-art model which applies self-attention mechanisms, as the baseline model. Thus, the overall network architecture of the recommender system follows the original design of the {\itshape SASRec} model. \subsection{Problem Formulation} Before illustrating our suggested strategies, we first formulate the sequential recommendation problem as follows. Without loss of generality, we have a recommendation system with implicit feedback given by a set of users $U$ to a set of items $I$. For sequential recommendation, we denote the records of each user $u \in U$ as an item sequence (in the order of interaction time) $S^u =\{ s^u_1,s^u_2,...,s^u_{|S^u|} \}$, $s^u_i \in I$. Our goal is to provide a recommendation list for each user $u$, in which we expect the next actually interacted item $s^u_{|S^u|+1} \in I \setminus S^u$ to appear and be ranked as high as possible, ideally first.
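To make this notation concrete, here is a minimal sketch of the data representation in code (user names and item ids are illustrative):
\begin{verbatim}
# Toy data for the formulation above: each user u maps to a
# time-ordered item sequence S^u; the prediction target is the
# next item s^u_{|S^u|+1}.
user_sequences = {
    "u1": [3, 7, 2, 1, 9],   # items interacted with, oldest first
    "u2": [5, 2, 8],
}

def split_next_item(sequence):
    """Return (observed prefix, next-item target) for one user."""
    return sequence[:-1], sequence[-1]

prefix, target = split_next_item(user_sequences["u1"])  # ([3,7,2,1], 9)
\end{verbatim}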
\begin{table} \centering \caption{Classification of data augmentation strategies for sequential recommendation.} \label{tab:clf} \begin{tabular}{ccc} \toprule Category & Strategy & Notes\\ \midrule Direct Manipulation & Noise Injection (NI) & Suggested in this paper\\ & Redundancy Injection (RI) & \\ & Item Masking (IM) & \\ & Synonym Replacement (SR) & \\ \midrule Subset Selection & Subset Split (SS) & Suggested in prior works\\ & Sliding Window (SW) & \\ \bottomrule \end{tabular} \end{table} \subsection{Augmentation Strategies} In this section, we present the details of the four augmentation strategies that can be applied to each item sequence of a training dataset. \begin{itemize} \item Noise Injection (NI): Randomly choose an item (i.e., negative sample) not included in the original item sequence. Inject the negative sample into a random position in the sequence. \item Redundancy Injection (RI): Randomly choose an item (i.e., positive sample) from the original item sequence. Inject the positive sample into a random position in the sequence. \item Item Masking (IM): Randomly choose {\itshape k} items from the original item sequence. Mask the {\itshape k} items to exclude them from the training. The size of {\itshape k} is decided based on the sequence length {\itshape m} with the formula $k = p \cdot m$, where {\itshape p} is a parameter indicating the percentage of items to be masked. \item Synonym Replacement (SR): Randomly choose an item from the original item sequence. Replace it with each of its {\itshape s} most similar items (i.e., synonyms), producing {\itshape s} variant sequences. \end{itemize} To apply the NI or RI strategy, an item should be removed from the original sequence, since the length of augmented item sequences should remain the same as the length of the original sequence due to the model structure. According to previous research, the latest item has the most significant impact on the prediction of the next item, while far-away items have a relatively low impact \citep{Tang18,Wang19}. Based on this, our design removes the first item of the original sequence from training, assuming that it has the lowest impact on next-item prediction. Besides, to apply the SR strategy, it is necessary to define an item-similarity criterion for determining the most similar items (i.e., synonyms) of each item. We therefore first train item embeddings to compute the similarities. In this paper, we use the {\itshape Word2Vec}\citep{Mikolov13} algorithm, widely used to obtain pretrained word embeddings in NLP, thanks to its ease of implementation and computational efficiency. However, the algorithm has an intrinsic limitation: we cannot verify how accurately the similarities given by the trained item embeddings reflect the `real' similarities between items. Therefore, our design generates augmented samples from multiple similar items for each item to be replaced, as simple replacement with only `the most similar' one would carry a higher risk of degrading the quality of the augmented data. Note that for all augmentation strategies suggested here, original item sequences remain intact and ($N_{aug}-1$) artificial data samples are generated additionally, where {\itshape $N_{aug}$} is a parameter indicating the number of augmented sequences per original one. For example, if we set $N_{aug} = 10$, the NI or RI strategy generates 9 unique artificial sequences for an original item sequence, each of which differs from the original sequence by only one item.
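The four operations can be sketched as follows; this is a minimal, unoptimized illustration on Python lists of item ids, where \verb|item_pool| stands for the full item vocabulary $I$, \verb|synonyms| for the embedding-based neighbor lookup, and the mask token is an assumption of this sketch.
\begin{verbatim}
import random

def noise_injection(seq, item_pool):
    """NI: drop the first (least informative) item, then insert a
    random negative sample (absent from seq) at a random position."""
    negatives = [i for i in item_pool if i not in seq]
    out = list(seq[1:])
    out.insert(random.randrange(len(out) + 1), random.choice(negatives))
    return out

def redundancy_injection(seq):
    """RI: drop the first item, then re-insert a randomly chosen
    positive sample (already in seq) at a random position."""
    out = list(seq[1:])
    out.insert(random.randrange(len(out) + 1), random.choice(seq))
    return out

def item_masking(seq, p, mask_token=0):
    """IM: mask k = p * m randomly chosen positions so they are
    excluded from training; the mask token is an assumption here."""
    k = max(1, int(p * len(seq)))
    out = list(seq)
    for pos in random.sample(range(len(out)), k):
        out[pos] = mask_token
    return out

def synonym_replacement(seq, synonyms, s):
    """SR: pick one position and emit s variants, each replacing the
    chosen item with one of its s most similar items ('synonyms')."""
    pos = random.randrange(len(seq))
    variants = []
    for syn in synonyms[seq[pos]][:s]:
        out = list(seq)
        out[pos] = syn
        variants.append(out)
    return variants
\end{verbatim}
All four operations return sequences of the same length as the input, matching the fixed-length requirement imposed by the model structure.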
In contrast, IM will generate 9 artificial sequences, each differing from the original in {\itshape k} items. For the SR strategy, assuming $N_{aug} = 10$ and $s = 3$, it randomly chooses 3 items in the original sequence and replaces each with its 3 most similar items, resulting in 9 different sequences, each differing from the original by one item. \section{Experiments} In this section, we present our experimental settings and results to answer the following research questions: \begin{itemize} \item RQ1: Can our proposed data augmentation strategies improve the performance of state-of-the-art baselines for sequential recommendation tasks? \item RQ2: How do different data augmentation strategies affect model performance? \item RQ3: How does the size of data augmentation affect model performance? \end{itemize} Note that we evaluate our methods through extensive comparison with two baseline situations: 1) no data augmentation is applied; 2) the data augmentation strategies suggested in prior works \citep{Tan16,Tang18} are applied. \subsection{Baseline Models and Strategies} As we seek to improve performance solely through data augmentation operated in the preprocessing step, we choose a state-of-the-art model, {\itshape SASRec}\citep{Kang18}, for sequential recommendation tasks as our baseline. Also, since the model leverages only the item order information, which is essential for general sequential recommendation models, it is well suited to verifying whether data augmentation can serve as a standard technique for various sequential recommendation models. Besides, the compared augmentation methods are Subset Split (SS)\citep{Tan16} and Sliding Window (SW)\citep{Tang18}, both of which generate augmented data samples by using subsets of the original sequences. SS implements data augmentation by splitting the original sequence of length {\itshape m} into shorter subsequences of lengths 1 to {\itshape m}. In this experiment, we cap the augmentation size at $N_{aug}$ whenever the original sequence length {\itshape m} exceeds $N_{aug}$, in order to control the size of augmentation and apply it on the same scale to all training sequences as much as possible. Meanwhile, SW splits the original sequence of length {\itshape m} into subsequences of the same length {\itshape L} by sliding a window of length {\itshape L}. Likewise, we keep the parameter $N_{aug}$ fixed and adjust the window size {\itshape L} in accordance with the formula $N_{aug} = m - L + 1$. \begin{table} \centering \caption{Basic Dataset Statistics.} \label{tab:dataset} \begin{tabular}{cccccc} \toprule Dataset&\#\ users&\#\ items&\#\ interactions&Avg. Length&Sparsity (\%)\\ \midrule MovieLens-1M&6,040&3,416&999,611&165.50&95.16\\ Amazon Games&64,073&33,614&568,508&6.88&99.97\\ Gowalla&85,034&308,957&6,442,892&52.83&99.98\\ \bottomrule \end{tabular} \end{table} \subsection{Datasets} We evaluate our methods on three benchmark datasets from real-world platforms which have different characteristics in terms of domain, size, sparsity, and average sequence length: MovieLens-1M, Amazon Games, and Gowalla. Summary statistics are shown in Table~\ref{tab:dataset}. Furthermore, we hypothesize that data augmentation can be more helpful for smaller datasets, so we construct restricted fractions of each dataset by selecting random subsets of the full training set.
\begin{itemize} \item MovieLens\citep{harper2015movielens}: A widely-used benchmark dataset, especially for evaluating collaborative filtering algorithms. We use the version that includes 1 million user ratings (i.e., MovieLens-1M). It is known as a relatively dense dataset, with low sparsity and long average sequence length. \item Amazon Games\citep{He16,McAuley15}: Amazon is a series of datasets comprising a large volume of product reviews crawled from {\itshape Amazon.com}. Among various categories, we use `Video Games', which has high sparsity and variability. \item Gowalla\citep{cho2011}: A location-based social networking website where users share their locations by checking in, labeled with timestamps. It has high sparsity but medium average sequence length. \end{itemize} \subsection{Experimental Setup} To preprocess all datasets, we follow the procedure from \citep{He2017,Kang18,Rendle10} as follows: 1) we regard the presence of a review, rating, or check-in record as implicit feedback and use timestamps to determine the sequence order of actions; 2) we discard cold-start users and items with fewer than 5 actions; 3) we adopt leave-one-out evaluation by splitting each dataset into three parts, i.e., using the most recent item of the sequence for testing, the second most recent item for validation, and the remainder for training. Also, other parameters which do not have a direct impact on data augmentation are set to follow the baseline \citep{Kang18}. Source code for both preprocessing and implementation is available at \url{https://github.com/saladsong}. According to prior works \citep{Wei19}, as the size of augmentation $N_{aug}$ is an important factor that affects performance significantly, it is necessary to tune it adequately for each dataset. Basically, we set the default size as $N_{aug} = 10$ for all datasets and strategies, and then conduct additional experiments with $N_{aug} \in \{ 2,5,15 \}$ in order to assess the contribution of the parameter. For the evaluation of performance, we adopt two common metrics: hit ratio (HR@10) and normalized discounted cumulative gain (NDCG@10). HR@10 refers to the ratio of ground-truth items present in the top 10 recommendation lists, while NDCG@10 considers position and assigns higher weights to higher positions. To reduce computation, following \citep{Kang18,Li20}, we randomly sample 100 negative items for each user $u$ and rank these items with the ground-truth item, thereby calculating HR@10 and NDCG@10 based on the rankings of these 101 items. For all experiments, we average the results from five different random seeds and determine the best performance based on the metric of NDCG@10. \subsection{Training Set Sizing} In general, smaller training datasets are more prone to overfitting, which means regularization such as data augmentation could bring larger gains on smaller datasets. Following \citep{Wei19}, we conduct experiments using a restricted fraction of the available training data for all three datasets. We run training both with and without augmentation for the following training set fractions (\%): 10, 20, 30, 40, 50, and 100. \subsection{Results} {\itshape 4.5.1. Performance Comparison (RQ1).} The experiments show that data augmentation can lead to improved sequential recommendation performance on all datasets. The summary of the results is shown in Table~\ref{tab:res_main}, illustrating the best strategy and its results for each sub-dataset with the augmentation size $N_{aug} = 10$.
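For reference, the sampled-ranking protocol described in the experimental setup can be sketched as follows; \verb|score_fn| is a stand-in for the trained model's scoring function.
\begin{verbatim}
import math
import random

def evaluate_user(score_fn, ground_truth, all_items, seen,
                  k=10, n_neg=100):
    """HR@k and NDCG@k for one user: rank the ground-truth item
    against n_neg randomly sampled unseen negatives (101 candidates
    in total)."""
    pool = [i for i in all_items if i != ground_truth and i not in seen]
    candidates = random.sample(pool, n_neg) + [ground_truth]
    ranked = sorted(candidates, key=score_fn, reverse=True)
    rank = ranked.index(ground_truth)            # 0-based rank
    hit = 1.0 if rank < k else 0.0
    # with a single relevant item, NDCG@k reduces to 1/log2(rank + 2)
    ndcg = 1.0 / math.log2(rank + 2) if rank < k else 0.0
    return hit, ndcg
\end{verbatim}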
Note that the best performance for each case is determined based on NDCG@10. Although the performance varies depending on the augmentation strategies applied, the injection strategies and Sliding Window (SW) have been found to be the most effective overall. \begin{table*}[h] \centering \caption{Recommendation Performances with the Best Augmentation Strategies (NDCG@10)} \label{tab:res_main} \begin{tabular}{c|ccc|ccc|ccc} \toprule Size & \multicolumn{3}{c|}{MovieLens-1M} & \multicolumn{3}{c|}{Amazon Games} & \multicolumn{3}{c}{Gowalla} \\ \cline{2-10} & Base&Prior(SW)&Ours(RI)& Base&Prior(SW)&Ours(NI)& Base&Prior(SW)&Ours(NI) \\ \midrule 10\%&0.3114&0.3936&\textbf{0.4003}&0.1830&0.1991&\textbf{0.2350}&0.4981&0.5060&\textbf{0.5149}\\ & &(+26.4\%)&\textbf{(+28.5\%)}& &(+8.8\%)&\textbf{(+28.4\%)}& &(+1.6\%)&\textbf{(+3.4\%)}\\ 20\%&0.4305&0.4576&\textbf{0.4609}& 0.2462&0.2965&\textbf{0.3240}&0.5612&0.5718&\textbf{0.5821}\\ & &(+6.3\%)&\textbf{(+7.1\%)}& &(+20.5\%)&\textbf{(+31.6\%)}& &(+1.9\%)&\textbf{(+3.7\%)}\\ 30\%&0.4876&0.4903&\textbf{0.4974}& 0.3728&0.3532&\textbf{0.3774}&0.6174&0.6304&\textbf{0.6334}\\ & &(+0.5\%)&\textbf{(+2.0\%)}& &(-5.3\%)&\textbf{(+1.2\%)}& &(+2.1\%)&\textbf{(+2.6\%)}\\ 40\%&0.5090&0.5155&\textbf{0.5215}& 0.3993&0.3778&\textbf{0.4026}&0.6579&\textbf{0.6723}&0.6714\\ & &(+1.3\%)&\textbf{(+2.5\%)}& &(-5.4\%)&\textbf{(+0.8\%)}& &\textbf{(+2.2\%)}&(+2.1\%)\\ 50\%&0.5233&0.5205&\textbf{0.5278}& 0.4290&0.3976&\textbf{0.4293}&0.6947&\textbf{0.7152}&0.7074\\ & &(-0.5\%)&\textbf{(+0.8\%)}& &(-7.3\%)&\textbf{(+0.1\%)}& &\textbf{(+2.9\%)}&(+1.8\%)\\ 100\%&0.5592&0.5543&\textbf{0.5631}& 0.5017&0.4586&\textbf{0.5031}&0.8110&\textbf{0.8279}&0.8134\\ & &(-0.9\%)&\textbf{(+0.7\%)}& &(-8.6\%)&\textbf{(+0.3\%)}& &\textbf{(+2.1\%)}&(+0.3\%)\\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[h] \centering \includegraphics[width=\linewidth]{figur_1} \caption{Performance Improvements based on the Baseline Model (i.e. SASRec) for Each Strategy (NDCG@10)} \label{fig:per_baseline} \end{figure*} In particular, we aim to examine whether data augmentation can help a sequential recommendation model generalize better when the amount of data is insufficient, by assuming low-resource situations where the training set size is intentionally restricted. Of course, there is no `absolute' standard for determining whether the amount of data is sufficient; it depends on the characteristics of the dataset a system deals with. Regarding this, our extensive experiments show that data augmentation tends to be more effective on smaller datasets, but the threshold below which augmentation induces better generalization differs by dataset. {\itshape 4.5.2. Ablation Study: Effects of each strategy (RQ2).} Figure~\ref{fig:per_baseline} shows the improvements in performance for all experimental cases in this study. Most cases demonstrate performance improvement compared with the baseline model without augmentation. Of the 71 total cases\footnote{(3 datasets $\times$ 6 fractions $\times$ 4 strategies) $-$ 1 = 71, excluding SR on the full Gowalla dataset due to its high computation cost.} in which our strategies are applied, 78.9\% show improved performance over the baseline. Furthermore, it is worth exploring why each augmentation strategy has the effect it does. Among the direct manipulation strategies suggested, Noise/Redundancy Injection (NI/RI) shows the best performance improvements for most datasets.
We can infer that their effectiveness stems from capturing the skip patterns of short-term preferences in item sequences. According to observations in previous studies, it is important to capture union-level and skip patterns of preferences as well as point-level ones (i.e., item-to-item transitions) for enhancing performance in sequential recommendation \citep{Tang18,Yan19}. In other words, a model can yield better recommendations when it relaxes the structural constraint (i.e., the one-directional chain structure of ordered item sequences) than when it strictly maintains the constraint. Thus, we can conclude that these injection strategies lead to significantly better performance by capturing the skip patterns in sequences through the injected data samples. Besides, Synonym Replacement (SR) exhibits substantially different improvements across datasets, which may stem from the difficulty of verifying how similar the computed synonyms are to the original items, since the synonym computation is not elaborately tuned in this experimental setting. Of course, in certain cases, synonyms may simply fail to be adequate substitutes for the original item. Among the prior strategies, Sliding Window (SW) shows more stable improvements than Subset Split (SS). We can infer that this is because SS, an item-removal strategy, is bound to reduce the amount of available information; the relatively inferior performance of the Item Masking (IM) strategy can be understood in the same light. {\itshape 4.5.3. Ablation Study: Effects of data augmentation size (RQ3).} The experiments on how performance varies with the size of data augmentation are conducted only on SW and NI, the strategies showing the most outstanding improvement in their respective categories. It turns out that the size of the performance gain tends to be proportional to the size of augmentation for all datasets. Regardless of the dataset or augmentation strategy, it is hard to expect improvements in performance if the size of data augmentation is too small. Accordingly, we can conclude that for small datasets, noticeable improvements can be expected even with a small augmentation size, while for sufficiently large datasets, a large augmentation size is required to produce meaningful improvement. In other words, the effects of data augmentation become saturated as the amount of data becomes sufficiently large. Table~\ref{tab:res_aug} summarizes the variation of the performance across the size of augmentation for all sub-datasets. Figure~\ref{fig:saturation} also shows the saturation of performance as the training data size increases, with the augmentation size fixed $(N_{aug}=10)$.
For the MovieLens-1M and Amazon Games datasets, the largest relative gains appear at the 10\%--20\% fractions, after which the improvements shrink rapidly. \begin{table*} \centering \caption{Amazon Games Performances across the Size of Data Augmentation $(N_{aug})$} \label{tab:res_aug} \begin{tabular}{c|c|cccc|cccc} \toprule Size & Base & \multicolumn{4}{c|}{Prior(SW)} & \multicolumn{4}{c}{Ours(NI)} \\ \cline{2-10} & n=1 & n=2 & n=5 & n=10 & n=15 & n=2 & n=5 & n=10 & n=15 \\ \midrule 10\%&0.1830&0.1429&0.1790&0.1991&0.2076 &0.1532&0.2069&0.2350&0.2453\\ & &(-21.9\%)&(-2.2\%)&(+8.8\%)&(+13.5\%)&(-16.4\%)&(+13.1\%)&(+28.4\%)&(+34.6\%)\\ 20\%&0.2462&0.1781&0.2577&0.2965&0.3168 &0.2233&0.2994&0.3081&0.3340\\ & &(-27.7\%)&(+4.7\%)&(+20.5\%)&(+28.7\%)&(-9.3\%)&(+21.6\%)&(+25.2\%)&(+35.7\%)\\ 30\%&0.3728&0.2535&0.3357&0.3532&0.3665 &0.2919&0.3505&0.3774&0.3898\\ & &(-32.0\%)&(-9.9\%)&(-5.3\%)&(-1.7\%)&(-21.7\%)&(-6.0\%)&(+1.2\%)&(+4.6\%)\\ 40\%&0.3993&0.2881&0.3630&0.3778&0.3868 &0.3247&0.3792&0.4026&0.4161\\ & &(-27.9\%)&(-9.1\%)&(-5.4\%)&(-3.1\%)&(-18.7\%)&(-5.0\%)&(+0.8\%)&(+4.2\%)\\ 50\%&0.4290&0.3207&0.3869&0.3976&0.4050 &0.3522&0.4032&0.4293&0.4392\\ & &(-25.2\%)&(-9.8\%)&(-7.3\%)&(-5.6\%)&(-17.9\%)&(-6.0\%)&(+0.1\%)&(+2.4\%)\\ 100\%&0.5017&0.3983&0.4445&0.4586&0.4666 &0.4250&0.4738&0.5031&0.5144\\ & &(-20.6\%)&(-11.4\%)&(-8.6\%)&(-7.0\%)&(-15.3\%)&(-5.6\%)&(+0.3\%)&(+2.5\%)\\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[h] \centering \includegraphics[width=\linewidth]{figur_2} \caption{Saturation in Performance Improvement with the Increase in the Fraction of the Sampled Dataset} \label{fig:saturation} \end{figure*} In general, it is relatively difficult for recommender systems to acquire additional data, especially behavior logs of actual users. Thus, in the early stage of a system, a relatively small dataset intrinsically limits recommendation performance, inevitably causing low user satisfaction until substantial additional data are accumulated. However, based on this result, we can strongly suggest that when a sequential recommender system does not have enough data (i.e., a cold-start situation), applying data augmentation can boost performance more efficiently than focusing solely on the acquisition of additional data. \section{Conclusion} In this paper, we have shown that the application of data augmentation can contribute to boosting performance in DL-based sequential recommendation, especially in low-resource situations. Although the improvement may be marginal or negative at times, augmentation is found to help the model generalize better in most cases. Furthermore, we suggest a set of augmentation strategies that directly manipulate original sequences, distinct from the prior approaches exploiting subsequences, and show that our strategies can result in better or competitive performance compared to the prior ones. Continued work on this topic could explore broader applications to other algorithms or network architectures such as CNNs or VAEs. We hope that our work can help manage cold-start situations for any early-stage real-world sequential recommender systems. \bibliographystyle{unsrtnat}
\section{Introduction} The extremely precise photometry and nearly continuous observations provided by the {\it Kepler} satellite have led to the discovery of a number of transiting planetary systems around stellar binaries. At the time of this writing, six such circumbinary systems are known, including Kepler-16 (with stellar binary period of 41 days and planet orbital period of 229 days; Doyle et al.~2011), Kepler-34 (28~d,~289~d), Kepler-35 (21~d,~131~d; Welsh et al.~2012), Kepler-38 (19~d,~106~d; Orosz et al.~2012a), Kepler-47 (stellar binary orbit 7.45~d, with two planets of periods 49.5~d and 303.2~d; Orosz et al.~2012b), and KIC~4862625 (20~d, 138~d; Schwamb et al.~2012, Kostov et al.~2012). The stars in these systems have masses of order of the mass of the sun or smaller, and the planets have radii ranging from 3 earth radii (Kepler-47b) to 0.76 Jupiter radii (Kepler-34b). By virtue of their detection methods, all the Kepler circumbinary systems have highly aligned planetary and stellar orbits, with the mutual orbital inclinations constrained between $\Theta \sim 0.2^\circ$ (for Kepler-38b) and $\Theta \lo 2^\circ$ (Kepler-34b and Kepler-35b). In Kepler-16, measurement of the Rossiter-McLaughlin effect further indicates that the spin of the primary is aligned with the orbital angular momentum of the binary (Winn et al.~2011). A natural question arises: do misaligned ($\Theta\go 5^\circ$) circumbinary planetary systems exist? If so, under what conditions can they form? One might expect that circumbinary systems naturally form with highly aligned orbits, since the associated orbital angular momenta originate from the protostellar cores. However, several lines of evidence suggest that misaligned configurations may be present in some systems: (i) Solar-type main-sequence binaries with large separations ($\go 40$~AU) often have a rotation axis misaligned relative to the orbital angular momentum (Hale 1994). Misalignments are also observed in some short-period binaries, such as DI Hercules (with orbital period of 10 days; Albrecht et al. 2009; see also Albrecht et al.~2011; Konopacky et al.~2012; Triaud et al.~2012). (ii) Some binary young stellar objects (YSOs) are observed to contain circumstellar disks that are misaligned with the binary orbital plane (e.g., Stapelfeldt et al.~1998). Also, several unresolved YSOs or pre-main sequence binaries have jets along different directions, again suggesting misaligned disks (e.g., Davis, Mundt \& Eisl\"offel 1994; Roccatagliata et al.~2011). (iii) Imaging of circumbinary debris disks shows that the disk plane and the binary orbital plane are aligned for some systems (such as $\alpha$ CrB, $\beta$ Tri and HD 98800), and misaligned for others (such as 99 Herculis, with mutual inclination $\go 30^\circ$; see Kennedy et al.~2012a,b). Also, the pre-main sequence binary KH~15D is surrounded by a precessing circumbinary disk inclined with respect to the binary plane by $10^\circ$-$20^\circ$ (e.g., Winn et al.~2004; Chiang \& Murray-Clay 2004; Capelo et al.~2012), and the FS Tauri circumbinary disk appears to be misaligned with the circumstellar disk (Hioki et al.~2011). While the aforementioned ``misalignments'' may have various origins (e.g., dynamical interactions in few body systems), in this paper we focus on the possible existence of warped, misaligned disks around proto-stellar binaries. We consider scenarios for the assembly of circumbinary disks in the context of binary star formation (Section 2). 
These scenarios suggest that circumbinary disks may form with misaligned orientations with respect to the binary. We then study the mutual gravitational interaction between the misaligned disk and the binary (Section 3) and the long-term evolution of the binary-disk systems (Section 4). We discuss our results in Section 5 and conclude in Section 6. \section{Formation of Binaries and Circumbinary disks: Scenarios} Binary stars are thought to form by fragmentation inside the collapsing cores/clumps of molecular clouds, either due to turbulent fluctuation in the core (``turbulent fragmentation''; e.g., Goodwin et al.~2007; Offner et al.~2010) or due to gravitational instability in the resulting disk (``disk fragmentation''; e.g., Adams et al.~1989; Kratter et al.~2008). In the turbulent fragmentation scenario, the binaries form earlier and have initial separations of order 1000~AU. Disk fragmentation also leads to binaries with large initial separations ($\sim 100$~AU). In both cases, continued mass accretion and inward migration, either due to binary-disk interactions (e.g., Artymowicz \& Lubow 1996) or dynamical interactions in few-body systems, are needed in order to produce close (sub-AU) binaries. Planet formation can take place in the circumbinary disk during or after the binary orbital decay. In the simplest picture, the proto-binary and circumbinary disk rotate in the same direction. However, molecular clouds and their collapsing cores are turbulent (see McKee \& Ostriker 2007; Klessen 2011). It is natural that the condensing and accreting cores contain gas rotating about different axes. Even if the cores are not turbulent, tidal torques between neighboring cores in a crowded star formation region can change the rotation direction of the outer regions of the condensing/accreting cores. Thus the gas that falls onto the central protostellar core and assembles onto the disk at different times may rotate in different directions. Such ``chaotic'' star formation has been seen in some numerical simulations (Bate et al.~2003). In this scenario, it is reasonable to expect a rapidly rotating central proto-stellar core which fragments into a binary, surrounded by a misaligned circumbinary disk which forms as a result of continued gas accretion. The mutual gravitational interaction between a proto-binary and the circumbinary disk leads to secular evolution of the relative inclination between the disk and the binary plane. In most cases, this interaction, combined with continued mass accretion, tends to reduce the misalignment. We will address these issues in the next two sections. Note that previous works have focused on warped {\it circumstellar} disks inclined relative to the binary (e.g., Papaloizou \& Terquem 1995; Bate et al.~2000; Lubow \& Ogilvie 2000). The warped/twisted circumbinary disks studied below have qualitatively different behaviours. \section{Warped Circumbinary disks} \label{sec:analytic} \subsection{Disk-Binary Interaction} Consider a circumbinary disk surrounding a stellar binary. The two stars have masses $M_1$ and $M_2$, and are assumed to have a circular orbit of semi-major axis $a$. The circumbinary disk has surface density $\Sigma(r)$, and extends from $r_{\rm in}$ to $r_{\rm out}(\gg r_{\rm in})$. The inner disk is truncated by the tidal torque from the binary, and typically $r_{\rm in}\sim 2a$ (Artymowicz \& Lubow 1994; MacFadyen \& Milosavljevic 2008).
The orientation of the disk at radius $r$ (from the center of mass of the binary) is specified by the unit normal vector $\hat{\mbox{\boldmath $l$}} (r)$. Averaging over the binary orbital period and the disk azimuthal direction, the binary imposes a torque per unit area on the disk element at radius $r$ given, to leading order in $a/r$, by \begin{equation} {\bf T}_{\rm b} = -\frac{3}{4} \frac{G M_t\eta\Sigma a^2}{r^3} \,({\hat{\mbox{\boldmath $l$}}}_b\cdot\hat{\mbox{\boldmath $l$}}) ({\hat{\mbox{\boldmath $l$}}}_b \times \hat{\mbox{\boldmath $l$}}), \label{eq:torque}\end{equation} where $M_t=M_1+M_2$ is the total mass, $\eta=M_1M_2/M_t^2$ the symmetric mass ratio of the binary, and ${\hat{\mbox{\boldmath $l$}}}_b$ is the unit vector along the orbital angular momentum of the binary. \footnote{ A similar calculation, but for the tidal torques imposed on a circumstellar disk by a binary companion, can be found in Appendix B of Ogilvie \& Dubus (2001). For circumbinary disks, the only differences are that we expand the gravitational potential of the stars to first order in $a/r$ instead of $r/a$, and consider the motion of both stars around the center of mass of the system instead of the motion of the companion relative to the primary star. } Under the influence of this torque, the angular momentum of an isolated disk element would precess at the frequency $-\Omega_p\cos\beta$, where $\beta$ is the angle between $\hat{\mbox{\boldmath $l$}}_b$ and $\hat{\mbox{\boldmath $l$}}$, and \begin{equation} \Omega_p(r) \simeq \frac{3\eta}{4} \frac{a^2}{r^2}\, \Omega(r), \end{equation} with $\Omega(r)\simeq \Omega_K=(GM_t/r^3)^{1/2}$ the disk rotation rate. Since $\Omega_p$ depends on $r$, the differential precession can lead to the warping (change with $r$ of the angle between $\hat{\mbox{\boldmath $l$}}$ and $\hat{\mbox{\boldmath $l$}}_b$) and twisting (change of $\hat{\mbox{\boldmath $l$}}$ orthogonal to the $\hat{\mbox{\boldmath $l$}}-\hat{\mbox{\boldmath $l$}}_b$ plane) of the disk. \subsection{Dynamical Warp Equations for Low-Viscosity disks} Theoretical studies of warped disks (Papaloizou \& Pringle 1983; Papaloizou \& Lin 1995) have shown that there are two dynamical regimes for the linear propagation of warps in an accretion disk. For high viscosity Keplerian disks with $\alpha\go \delta\equiv H/r$ (where $H$ is the half-thickness of the disk, and $\alpha$ is the Shakura-Sunyaev parameter such that the viscosity is $\nu=\alpha H^2\Omega$), the warp satisfies a diffusion-type equation with diffusion coefficient $\nu_2=\nu/(2\alpha^2)$. For low-viscosity disks ($\alpha\lo \delta$), on the other hand, the warp propagates as bending waves at about half the sound speed, $c_s/2$. Protoplanetary disks with $\alpha\sim 10^{-4}$-$10^{-2}$ likely satisfy $\alpha\lo \delta$ (e.g., Terquem 2008; Bai \& Stone 2011). 
For such disks, the warp evolution equations governing long-wavelength bending waves in the linear regime ($|\partial\hat{\mbox{\boldmath $l$}}/\partial\ln r|\ll 1$) are given by (Lubow \& Ogilvie 2000; see also Lubow et al.~2002, Ogilvie 2006) \begin{eqnarray} \label{eq:dtlvl} &&\Sigma r^2 \Omega \frac{\partial \hat{\mbox{\boldmath $l$}}}{\partial t} = \frac{1}{r} \frac{\partial {\bf G}}{\partial r} + {\bf T}_{\rm b}, \\ \label{eq:dtG} &&\frac{\partial {\bf G}}{\partial t} = \left(\!\frac{\Omega^2 - \Omega_r^2}{2\Omega}\!\right) \hat{\mbox{\boldmath $l$}} \times {\bf G} - \alpha \Omega {\bf G} + \frac{\Sigma H^2 \Omega_z^2 r^3 \Omega}{4} \frac{\partial \hat{\mbox{\boldmath $l$}}}{\partial r}, \end{eqnarray} where $\Omega_r$ and $\Omega_z$ are the radial epicyclic frequency and the vertical oscillation frequency, ${\bf G}$ is the internal torque of the disk, and the surface density $\Sigma(r)$ is taken to be the same as that of the unwarped disk. These equations are only valid for $\alpha \lo \delta \ll 1$, $|\Omega_r^2-\Omega^2|\lo \Omega^2\delta$ and $|\Omega_z^2-\Omega^2|\lo \Omega^2\delta$. For circumbinary disks considered here, the rotation rate differs from the Keplerian rate by an amount $|\Omega^2-\Omega_K^2|/\Omega_K^2 = {\cal O}\left(\eta a^2/r^2 \right)+{\cal O}\left(\delta^2\right)$, and similarly for $\Omega_r$ and $\Omega_z$ (see Eqs~(\ref{eq:OmNK})-(\ref{eq:OmrNK}) below). So the validity of equations~(\ref{eq:dtlvl})-(\ref{eq:dtG}) requires $\eta a^2/r^2\lo \delta$, a condition that is generally satisfied. In the absence of the external torque (${\bf T}_b=0$), equations~(\ref{eq:dtlvl})-(\ref{eq:dtG}) admit wave solutions. If we define a Cartesian coordinate system so that $\hat l_z \simeq 1$ and $|\hat l_{x,y}|\ll 1$, then a local (WKB) bending wave with $\hat{\mbox{\boldmath $l$}}_{x,y}, {\bf G} \propto e^{ikr-i\omega t}$ has a phase velocity $\omega/k\simeq \pm c_s/2=\pm H\Omega/2$ (assuming $\omega\ll \Omega \simeq \Omega_r\simeq \Omega_z$). \subsection{Steady-State Warp and Twist of Circumbinary disks} We now consider a circumbinary disk whose rotation axis at the outer radius ($r_{\rm out}$), $\hat{\mbox{\boldmath $l$}}_{\rm out}=\hat{\mbox{\boldmath $l$}}(r_{\rm out})$, is inclined relative to the binary direction $\hat{\mbox{\boldmath $l$}}_b$ by a finite angle, $\beta(r_{\rm out})\equiv\Theta$. This corresponds to the situation where the outer disk region is fed by gas rotating around the axis $\hat{\mbox{\boldmath $l$}}_{\rm out}$. In the Cartesian coordinate system with the $z$-axis along $\hat{\mbox{\boldmath $l$}}_b$, the disk direction $\hat{\mbox{\boldmath $l$}}(r)$ can be written as \begin{equation} \hat{\mbox{\boldmath $l$}}(r)=(\sin\beta\cos\gamma,\sin\beta\sin\gamma,\cos\beta), \end{equation} where $\beta(r)$ is the warp angle and $\gamma(r)$ is the twist angle. At $r=r_{\rm out}$, we have $\hat{\mbox{\boldmath $l$}}_{\rm out}=(\sin\Theta,0, \cos\Theta)$ without loss of generality. A steady-state warp/twist is reached after a few bending wave propagation times across the whole disk. The steady-state warp/twist profile can be obtained by numerically integrating Eqs.~(\ref{eq:dtlvl})-(\ref{eq:dtG}) and setting $\partial/\partial t =0$. Figure 1 depicts selected numerical results. 
\begin{figure} \includegraphics[width=8.3cm]{NKvsKProf} \caption{Steady-state warp ({\it top panel}) and twist ({\it bottom panel}) profiles, in degrees, of a circumbinary disk for which the outer disk is misaligned by $10^\circ$ with respect to the angular momentum axis of the binary. Both stars in the binary have the same mass, and the disk parameters are $p=0.5$ [See Eq.~(\ref{eq:defp})], $\delta=0.1$, $r_{\rm in}=2a$, and $\alpha=0.01$ or $\alpha=0.001$. The profiles for $\alpha=0.001$ are rescaled to show the approximate scaling of the warp and twist with $\alpha$ (i.e. the warp is multiplied by 100 and the twist by 10). Note that the rescaled $\alpha=0.001$ profiles nearly coincide with the $\alpha=0.01$ Keplerian profiles. For $\alpha=0.01$, we show both the Keplerian profile and results including the leading order non-Keplerian correction. The non-Keplerian term significantly increases the warp of the disk, but has only a small effect on its twist.} \label{fig:Profile} \end{figure} Physically, the steady-state warp/twist profile is determined by balancing the internal viscous torque ${\bf G}$ of the disk and the external torque ${\bf T}_b$. The viscous damping time of the disk warp [associated with the viscosity $\nu_2=\nu/(2\alpha^2)$] is \begin{equation} t_{\rm v2}={r^2\over \nu_2}={2\alpha^2 r^2\over \nu}= {2\alpha\over \delta^2\Omega}. \end{equation} A critical warp radius $r_{\rm warp}$ is obtained by equating $t_{\rm v2}$ to the precession time $\Omega_p^{-1}$ of an isolated disk element, i.e. $t_{\rm v2}\Omega_p=1$ at $r=r_{\rm warp}$, where \begin{equation} t_{\rm v2}\Omega_p={3\alpha\eta\over 2}\left({a\over r\delta} \right)^2. \label{eq:tvis0}\end{equation} This gives \begin{equation} r_{\rm warp} \approx a\left({3\alpha\eta\over 2\delta^2}\right)^{1/2}. \end{equation} For $r_{\rm warp} \gg r_{\rm in}$, we expect that, in steady state, the disk well inside $r_{\rm warp}$ be aligned with the binary $\hat{\mbox{\boldmath $l$}}_b$, while the disk well outside $r_{\rm warp}$ be aligned with $\hat{\mbox{\boldmath $l$}}_{\rm out}$. However, if the inner disk radius $r_{\rm in}$ is larger than $r_{\rm warp}$, or $(t_{\rm v2} \Omega_p)_{\rm in}\lo 1$ (the subscript ``in'' means that the quantity is evaluated at $r=r_{\rm in}$), then the whole disk can be approximately aligned with $\hat{\mbox{\boldmath $l$}}_{\rm out}$, with very small warp between the inner and outer edge of the disk. For standard disk parameters (e.g., $\eta\sim 1/4,\,\alpha\sim 10^{-2},\,\delta\sim 0.1$, $r_{\rm in}\sim 2a$), the inequality $(t_{\rm v2}\Omega_p)_{\rm in}\lo 1$ or $r_{\rm warp}\lo r_{\rm in}$ is well satisfied. Equation~(\ref{eq:dtG}) shows that, to first order, changes in the orientation $\hat{\mbox{\boldmath $l$}}(r)$ of the disk are due to the combination of a term proportional to the internal stress $\bf{G}$ (which modifies the twist $\gamma$ of the disk), and a term proportional to $\bf{G}\times \hat{\mbox{\boldmath $l$}}$ (which causes variations of the warp $\beta$). The second term only exists for non-Keplerian disks, while the first is only slightly modified by deviations from a Keplerian profile. We treat these two effects separately by first considering purely Keplerian disk profiles, to obtain a good approximation for the twist $\gamma(r)$ in the disk, and then including non-Keplerian effects, which are generally the main source of the warp $\beta(r)$.
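As a quick numerical check of the flat-disk condition quoted above, the fiducial parameters $\eta=1/4$, $\alpha=10^{-2}$, $\delta=0.1$ and $r_{\rm in}=2a$ give \begin{equation} r_{\rm warp}\approx a\left({3\times 10^{-2}\times 0.25\over 2\times 10^{-2}}\right)^{1/2}\simeq 0.6\,a, \qquad (t_{\rm v2}\Omega_p)_{\rm in}={3\times 10^{-2}\times 0.25\over 2}\left({a\over 2a\times 0.1}\right)^{2}\simeq 0.09, \end{equation} so the warp radius lies well inside the truncation radius $r_{\rm in}=2a$ and the steady-state disk is indeed nearly flat.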
For Keplerian disks ($\Omega_r=\Omega_z=\Omega$) and small warps, we can obtain approximate, analytic expressions for the disk warp and twist (see Foucart \& Lai 2011 for a similar calculation of magnetically driven warped disks). For concreteness, we consider disk models with constant $\alpha$, and assume that the surface density and the dimensionless disk thickness have the power-law profiles \begin{equation} \Sigma\propto r^{-p},\quad \delta={H\over r}\propto r^{(2p-1)/4}, \label{eq:defp} \end{equation} so that $\dot M\sim \nu\Sigma=\alpha H^2\Omega\Sigma=$constant \footnote{ In practice, these scalings are unlikely to be valid for the entire radial extent of the disk. As the warp and twist of the disk are mostly due to the torque acting at small radii (${\bf T}_b \propto r^{-3-p}$), we are only concerned about the approximate value of $p$ for $r$ close to $r_{\rm in}$. }. Equations (\ref{eq:dtlvl})-(\ref{eq:dtG}) then reduce to \begin{equation} \frac{\partial}{\partial r} \left[\left(\frac{r}{r_{\rm in}}\right)^{\!3/2} \!\frac{\partial}{\partial r} \hat{\mbox{\boldmath $l$}} \right] = -\frac{4\alpha r {\bf T}_{\rm b}}{(\delta^2r^5\Sigma \Omega^2)_{\rm in}}. \label{eq:dldr}\end{equation} We adopt the zero-torque boundary condition $\partial\hat{\mbox{\boldmath $l$}}/\partial r=0$ at the inner disk radius $r=r_{\rm in}$. Since ${\bf T}_{\rm b}$ falls off rapidly with $r$ as ${\bf T}_b \propto r^{-3-p}$, we can integrate Eq.~(\ref{eq:dldr}) approximately to obtain: \begin{equation} \label{eq:lindl} \left(\frac{r}{r_{\rm in}}\right)^{\!\!3/2}\!\frac{\partial}{\partial r} \hat{\mbox{\boldmath $l$}} \simeq \frac{4\alpha}{1+p}\left(\frac{{\bf T}_{\rm b}} {\delta^2r^3 \Sigma \Omega^2}\right)_{\rm in}\! \left[\left(\frac{r_{\rm in}}{r}\right)^{\!1+p}-1\right]. \end{equation} Using the outer boundary condition $\hat{\mbox{\boldmath $l$}}(r_{\rm out})=\hat{\mbox{\boldmath $l$}}_{\rm out}$, we find \begin{eqnarray} &&\hat{\mbox{\boldmath $l$}}(r) - \hat{\mbox{\boldmath $l$}}_{\rm out}\simeq \left(\frac{8\alpha}{1+p}\right) {{\bf T}_b(r)\over (\delta^2 r^2\Sigma\Omega^2)_{\rm in}} \left({r\over r_{\rm in}}\right)^{\!3/2}F_p(r)\nonumber\\ && ~~=-{4\,t_{\rm v2}\Omega_p\over (1+p)} F_p(r) (\hat{\mbox{\boldmath $l$}}_b\cdot\hat{\mbox{\boldmath $l$}})(\hat{\mbox{\boldmath $l$}}_b\times\hat{\mbox{\boldmath $l$}}), \label{eq:lml}\end{eqnarray} where \begin{equation} F_p(r)=\left(\!{r\over r_{\rm in}}\!\right)^{\!\!p+1}\!\! -{1\over 2p+3}, \end{equation} and $t_{\rm v2}\Omega_p$ is given by Eq.~(\ref{eq:tvis0}). In deriving Eq.~(\ref{eq:lml}), we have assumed that the warp and twist of the disk are small compared to its inclination relative to the binary axis, i.e., $|\hat{\mbox{\boldmath $l$}}(r) - \hat{\mbox{\boldmath $l$}}_{\rm out}| \ll \sin{\beta}$, or $(t_{\rm v2}\Omega_p)_{\rm in}\ll 1$. The total change in $\hat{\mbox{\boldmath $l$}}$ across the disk is then \begin{equation} \hat{\mbox{\boldmath $l$}}_{\rm in} - \hat{\mbox{\boldmath $l$}}_{\rm out} \simeq -{8(t_{\rm v2}\Omega_p)_{\rm in}\over 2p+3} (\hat{\mbox{\boldmath $l$}}_b\cdot\hat{\mbox{\boldmath $l$}}_{\rm out})(\hat{\mbox{\boldmath $l$}}_b\times\hat{\mbox{\boldmath $l$}}_{\rm out}). \end{equation} The net twist angle across the disk, $\Delta\gamma_{\rm twist}\equiv \gamma_{\rm in}-\gamma_{\rm out}$, is \begin{eqnarray} \Delta\gamma_{\rm twist} &\simeq & -{8(t_{\rm v2}\Omega_p)_{\rm in} \over (2p+3)}\cos\Theta\nonumber\\ &=& -{12\over (2p+3)}\, \left({\alpha\eta\over \delta_{\rm in}^2}\right)\! \left({a\over r_{\rm in}}\right)^2\cos\Theta. 
\label{eq:deltatwist} \end{eqnarray} For Keplerian disks, the warping is only a second-order effect: to first order $\hat{\mbox{\boldmath $l$}}_{\rm in} - \hat{\mbox{\boldmath $l$}}_{\rm out}$ is perpendicular to $\hat{\mbox{\boldmath $l$}}_{\rm out}$, and the disk is only twisted. The torque acting on the inner disk does however have a component in the $\hat{\mbox{\boldmath $l$}}_b-\hat{\mbox{\boldmath $l$}}_{\rm out}$ plane, due to the small difference in orientation between $\hat{\mbox{\boldmath $l$}}_{\rm in}$ and $\hat{\mbox{\boldmath $l$}}_{\rm out}$. The net warp angle, $\Delta\beta_{\rm warp}\equiv \beta_{\rm in}-\beta (r_{\rm out})=\beta_{\rm in}-\Theta$, is given by \begin{eqnarray} \Delta\beta_{\rm warp} &\simeq & -\left[{4 (t_{\rm v2}\Omega_p)_{\rm in}\over 2p+3}\right]^2 \sin(2\Theta) \nonumber\\ &=&-\left[{6\over 2p+3}\,\left({\alpha\eta\over \delta_{\rm in}^2}\right)\! \left({a\over r_{\rm in}}\right)^{\!2}\right]^2\sin(2\Theta). \label{eq:betaK} \end{eqnarray} As noted before, these expressions for $\Delta\gamma_{\rm twist}$ and $\Delta\beta_{\rm warp}$ are valid only for $|\Delta\gamma_{\rm twist}|\ll 1$, or $(t_{\rm v2}\Omega_p)_{\rm in}\ll 1$. Figure 1 shows that the numerically integrated disk profile agrees with both the analytic amplitudes of the warp and twist and their scaling with the viscosity parameter $\alpha$. Thus, for standard disk and binary parameters ($\eta=0.25$, $\alpha=10^{-3}$-$10^{-2}$, $\delta=0.1$ and $r_{\rm in}\simeq 2a$), the steady-state Keplerian disk is almost flat, with its orientation determined by $\hat{\mbox{\boldmath $l$}}_{\rm out}$, i.e., the angular momentum axis of the gas falling onto the outer disk. Deviations from a Keplerian disk profile modify the above results, as differences between the epicyclic and orbital frequency of the disk are induced by both the finite thickness of the disk and the deviation of the binary gravitational potential from its point-mass value. To first order in $\delta^2$ and $\eta (a/r)^2$, we have (assuming small binary-disk inclination $\Theta$) \begin{eqnarray} \label{eq:OmNK} \Omega^2 &\approx& \frac{GM_t}{r^3} \left( 1 + \frac{3\eta a^2}{4 r^2} - C \delta^2\right)\\ \label{eq:OmrNK} \Omega_r^2 &\approx& \frac{GM_t}{r^3} \left(1 - \frac{3\eta a^2}{4r^2} -D \delta^2 \right) \end{eqnarray} where $C$ and $D$ are constants of order unity which depend on the density/pressure profile of the disk, and the epicyclic frequency was computed from $\Omega_r^2=(2\Omega/r)d(r^2\Omega)/dr$. We thus have \begin{equation} \frac{\Omega^2 - \Omega_r^2}{\Omega^2} \approx \frac{3\eta a^2}{2 r^2} + (D-C) \delta^2. \end{equation} It is worth noting that for $\delta={\rm constant}$, the $\delta^2$ term vanishes (since $C=D$ in that case). Including this result in Eq.~(\ref{eq:dtG}) leads to an additional warp \begin{equation} \label{eq:betaNK} \Delta \beta_{\rm warp}^{\rm NK} \approx - K \frac{\eta}{\delta_{\rm in}^2} \left(\frac{a}{r_{\rm in}}\right)^2 \sin(2\Theta) \end{equation} with \begin{equation} K = \frac{0.9\eta}{2p+7} \left(\frac{a}{r_{\rm in}}\right)^2 + \kappa \delta_{\rm in}^2 \end{equation} and $\kappa$ a constant depending on the profile of $\delta(r)$ close to $r_{\rm in}$ ($\kappa=0$ for constant $\delta$, and of order unity for slowly varying $\delta$). Numerical results for the non-Keplerian steady-state disk profile are shown in Figure 1 for constant thickness $\delta$. 
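Before examining those numerical results, the analytic estimates of Eqs.~(\ref{eq:deltatwist}), (\ref{eq:betaK}) and (\ref{eq:betaNK}) can be evaluated directly for the parameters of Figure~\ref{fig:Profile}; a minimal Python sketch (our illustration, assuming $\kappa=0$, i.e., constant $\delta$) is:
\begin{verbatim}
import numpy as np

# Parameters of Figure 1: equal-mass binary, p = 0.5, r_in = 2a
eta, alpha, delta_in, p = 0.25, 0.01, 0.1, 0.5
a_over_rin = 0.5
Theta = np.radians(10.0)
x = (alpha * eta / delta_in ** 2) * a_over_rin ** 2

dgamma = -12.0 / (2 * p + 3) * x * np.cos(Theta)             # Eq. (deltatwist)
dbeta_K = -(6.0 / (2 * p + 3) * x) ** 2 * np.sin(2 * Theta)  # Eq. (betaK)
K = 0.9 * eta / (2 * p + 7) * a_over_rin ** 2                # kappa = 0 assumed
dbeta_NK = -K * (eta / delta_in ** 2) * a_over_rin ** 2 * np.sin(2 * Theta)

print(np.degrees([dgamma, dbeta_K, dbeta_NK]))
# ~ [-10.6, -0.17, -0.86] deg: the alpha-independent non-Keplerian
# warp dominates the O(alpha^2) Keplerian warp at this viscosity
\end{verbatim}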
We see that even though the warp remains relatively small ($\Delta \beta_{\rm warp}^{\rm NK} \sim 0.1 \beta$), for $\alpha \lo 0.03$ it will be larger than the second-order Keplerian warp given by Eq.~(\ref{eq:betaK}). As most of the torque on the disk is due to the contributions at radii $r\sim r_{\rm in}$, this warp also causes a reduction of the Keplerian twist $\Delta \gamma_{\rm twist}$ [See Eq.~(\ref{eq:deltatwist})] by a factor of order $(\sin \beta_{\rm in} / \sin \beta_{\rm out})$. \section{Evolution of the Relative Binary-Disk Inclination} As discussed in Section 3, the binary torque [Eq.~(\ref{eq:torque})] induces a small warp and twist in the circumbinary disk. For disks satisfying $\delta\go \alpha$, the steady-state warp/twist is achieved when transient bending waves either damp out or propagate out of the disk (depending on their behavior at large radius). Bending waves propagate at half the sound speed, and will thus reach the outer boundary of the disk over a timescale \begin{equation} t_{\rm warp}\sim \int_{r_{\rm in}}^{r_{\rm out}}{dr\over H\Omega/2} \sim {2\over \delta_{\rm out}\Omega_{\rm out}}. \end{equation} The damping of transient bending waves is due to the $\alpha \Omega {\bf G}$ term in Eq.~(\ref{eq:dtG}). Numerical results (Lubow \& Ogilvie 2000) have confirmed that the damping timescale is \begin{equation} t_{\rm damp} \sim {1 \over \alpha \Omega_{\rm out}}. \end{equation} Both timescales are much shorter than the age of the system or the gas accretion time \begin{equation} t_{\rm acc}\sim \left({r^2\over\nu}\right)_{\rm out}\sim {1\over \alpha\delta_{\rm out}^2\Omega_{\rm out}}. \end{equation} On a timescale longer than $t_{\rm damp}$, the warped disk exerts a back-reaction torque on the binary, aligning $\hat{\mbox{\boldmath $l$}}_b$ with the disk axis (more precisely, with $\hat{\mbox{\boldmath $l$}}_{\rm out}$). To determine this torque, recall that in the Cartesian coordinate system that we have set up, $\hat{\mbox{\boldmath $l$}}_b=(0,0,1)$ and $\hat{\mbox{\boldmath $l$}}_{\rm out}=(\sin\Theta,0,\cos\Theta)$. In the small-warp approximation, $\hat{\mbox{\boldmath $l$}}(r)\simeq (\sin\Theta + \Delta{\hat l}_x,\Delta{\hat l}_y,\cos\Theta)$, where $\Delta{\hat l}_{x,y}$ are the ($x$,$y$)-components of $\hat{\mbox{\boldmath $l$}}(r)-\hat{\mbox{\boldmath $l$}}_{\rm out}$. $\Delta{\hat l}_y$ is well approximated by [see Eq.~(\ref{eq:lml})] \begin{equation} \Delta{\hat l}_y\simeq -{4\,t_{\rm v2}\Omega_p\over (1+p)} F_p(r)\cos\Theta\sin\Theta, \end{equation} and thus $|\Delta{\hat l}_{y}|\ll\sin\Theta$ when $t_{\rm v2}\Omega_p\ll 1$. $\Delta{\hat l}_x$ is mainly due to non-Keplerian effects [see Eq.~(\ref{eq:betaNK})], and is generally a small correction ($\sim 10\%$) to $\sin\Theta$. Thus the torque on the disk element [Eq.~(\ref{eq:torque})] is, to leading order for each component, \begin{equation} {\bf T}_b\simeq -\frac{3}{4} \frac{G M_t\eta\Sigma a^2}{r^3} \cos\Theta \left(-\Delta{\hat l}_y,\sin\Theta,0\right). \end{equation} The back-reaction torque on the binary is \begin{equation} {\mbox{\boldmath ${\cal T}$}}=-\int_{r_{\rm in}}^{r_{\rm out}}\! 2\pi r{\bf T}_b\,dr. \end{equation} The $x$-component of ${\mbox{\boldmath ${\cal T}$}}$ tends to align $\hat{\mbox{\boldmath $l$}}_b$ with $\hat{\mbox{\boldmath $l$}}_{\rm out}$: \begin{equation} {\cal T}_x\simeq {72\pi\over (2p+3)(4p+5)}\left({\alpha\eta^2\over\delta_{\rm in}^2} \right)a^4\Sigma_{\rm in}\Omega_{\rm in}^2\cos^2\!\Theta\sin\Theta.
\label{eq:Tx}\end{equation} If we define the angular momentum of the inner disk region by \begin{equation} (\Delta J)_{\rm in}\equiv 2\pi (\Sigma r^4\Omega)_{\rm in}, \end{equation} then ${\cal T}_x$ can be written as \begin{equation} {\cal T}_x={32\over (2p+3)(4p+5)}(\Delta J)_{\rm in}\, (\Omega_p^2t_{\rm v2})_{\rm in}, \end{equation} where $t_{\rm v2}\Omega_p=(3\alpha\eta/2)(a/r\delta)^2$ [see Eq.~(\ref{eq:tvis0})]. The $y$-component of ${\mbox{\boldmath ${\cal T}$}}$ on the binary is \begin{equation} {\cal T}_y={3\pi\over 2(1+p)}GM_t\eta\, {a^2\Sigma_{\rm in}\over r_{\rm in}} \cos\Theta\sin\Theta. \end{equation} This makes the binary axis $\hat{\mbox{\boldmath $l$}}_b$ precess around $\hat{\mbox{\boldmath $l$}}_{\rm out}$ at the rate \begin{equation} {\mbox{\boldmath $\Omega$}}_{\rm prec}=-{3\pi\over 2(1+p)}\left({\Sigma_{\rm in} a^3\over M_t r_{\rm in}}\right)\Omega_b\cos\Theta\,\hat{\mbox{\boldmath $l$}}_{\rm out}, \end{equation} where $\Omega_b=(GM_t/a^3)^{1/2}$ is the orbital frequency of the binary. Since ${\cal T}_y$ does not induce a permanent change of the inclination angle $\Theta$, we will not consider it further in this paper. It is worth noting that to leading order the back-reaction torque is independent of the non-Keplerian warp computed in Eq.~(\ref{eq:betaNK}), even when that warp is the main deviation from the flat-disk profile. Indeed, ${\cal T}_x$ is proportional to the twist $\Delta \hat{\mbox{\boldmath $l$}}_y$ and ${\cal T}_y$ to $\sin \Theta$. The only effect of the non-Keplerian warp is thus to modify ${\mbox{\boldmath ${\cal T}$}}$ by a factor of order ($\sin \beta_{\rm in}/\sin \beta_{\rm out}$). Mass accretion from the circumbinary disk onto the binary can also contribute to the alignment torque. The accretion streams from $r_{\rm in}$ will likely land on both stars, probably through circumstellar disks (e.g., Artymowicz \& Lubow 1994; MacFadyen \& Milosavljevic 2008). Given the complexity of the process, we parametrize the alignment torque due to accretion as \begin{equation} {\cal T}_{{\rm acc},x}=g\,\dot M (GM_t r_{\rm in})^{\!1/2} \sin\Theta, \label{eq:Tacc}\end{equation} where $g$ is a dimensionless number of order unity. In writing Eq.~(\ref{eq:Tacc}), we have used the result of Section 3 that the steady-state circumbinary disk is only slightly warped [$\beta(r_{\rm in})\simeq \Theta$]. Since the mass accretion rate is given by $\dot M\simeq 3\pi\nu\Sigma=3\pi\alpha (\delta^2 r^2\Omega\Sigma)_{\rm in}$, we can rewrite Eq.~(\ref{eq:Tx}) as \begin{equation} {\cal T}_x\simeq f\,\dot M (GM_t r_{\rm in})^{\!1/2}\cos^2\!\Theta \sin\Theta, \label{eq:ctx}\end{equation} where \begin{equation} f={24\over (2p+3)(4p+5)}\,\eta^2\left({a\over\delta_{\rm in} r_{\rm in}}\right)^4. \end{equation} The total alignment torque on the binary is then \begin{eqnarray} &&{\cal T}_{\rm align}={\cal T}_{{\rm acc},x}+{\cal T}_x\nonumber\\ &&\qquad\quad \simeq (g+f\cos^2\!\Theta) \dot M (GM_t r_{\rm in})^{\!1/2}\sin\Theta. \end{eqnarray} Assuming that the angular momentum of the binary, $L_b=\eta M_t(GM_ta)^{1/2}$, is much less than that of the disk (and the material falling onto the disk), the torque ${\cal T}_{\rm align}$ leads to alignment between $\hat{\mbox{\boldmath $l$}}_b$ and $\hat{\mbox{\boldmath $l$}}_{\rm out}$, on the timescale \begin{equation} t_{\rm align}={L_b\sin\Theta\over {\cal T}_{\rm align}} ={\eta M_t\over\dot M}\left({a\over r_{\rm in}}\right)^{\!\!1/2}\!\! {1\over (g+f\cos^2\!\Theta)}.
\label{eq:talign}\end{equation} The secular evolution of $\Theta(t)$ is determined by the equation \begin{equation} {d\Theta\over dt}=-{\sin\Theta\over t_{\rm align}}. \label{eq:dtheta}\end{equation} For $\Theta\ll 1$, this can be easily solved: Starting from the initial angle $\Theta (t_i)$, the inclination evolves according to \begin{equation} \Theta (t)=\Theta (t_i)\exp \left[-{\Delta M\over \eta M_t} \left({r_{\rm in}\over a}\right)^{\!\! 1/2}(g+f)\right], \label{eq:betat}\end{equation} where $\Delta M$ is the total mass accreted through the disk during the time between $t_i$ and $t$. \section{Discussion} The calculations presented in Sections 3 and 4 show that a circumbinary disk formed with its rotation axis $\hat{\mbox{\boldmath $l$}}_{\rm out}$ (at large distance) inclined with respect to the binary angular momentum axis $\hat{\mbox{\boldmath $l$}}_b$ will attain a weakly warped/twisted state, such that the whole disk is nearly aligned with $\hat{\mbox{\boldmath $l$}}_{\rm out}$ (see Section 3). However, the interaction torque between the disk and the binary tends to drive $\hat{\mbox{\boldmath $l$}}_b$ toward alignment with $\hat{\mbox{\boldmath $l$}}_{\rm out}$. The timescale of this alignment is given by Eq.~(\ref{eq:talign}), and the relative binary-disk inclination $\Theta$ evolves according to Eq.~(\ref{eq:betat}). Note that both the accretion torque and the gravitational torque contribute to the alignment. If only the accretion torque were present (i.e., $g\sim 1$, $f=0$), the alignment timescale would be of the same order as the mass-doubling time of the binary ($t_{\rm align} \sim 4\times 10^7$~yr for $M_1=M_2=1~M_\odot$, $r_{\rm in}\simeq 2a$ and $\dot M\sim 10^{-8}M_\odot\,{\rm yr}^{-1}$), and a significant fraction of the binary mass would have to be accreted ($\Delta M\sim 0.4M_\odot$) in order to achieve an $e$-fold reduction of $\Theta$ [see Eq.~(\ref{eq:betat})]. However, the gravitational torque dominates over the accretion torque, since the condition $f\gg 1$ can be satisfied for a wide range of disk/binary parameters (although $\alpha^2f\ll 1$ must be satisfied for our equations to be valid). For example, for $p=3/2$ (the density index of the minimum-mass solar nebula) and $\eta=1/4$ (equal mass binary), we have \begin{equation} f\simeq 14\left({0.1\over\delta_{\rm in}}\right)^4 \left({2\,a\over r_{\rm in}}\right)^4. \end{equation} Thus, the alignment timescale is (for $f\gg 1$ and $\cos^2\Theta\simeq 1$) \begin{equation} t_{\rm align}\simeq 2.5\left(\!{\eta M_t\over 0.5M_\odot}\!\right) \!\left(\!{\dot M\over 10^{-8}M_\odot/{\rm yr}}\!\right)^{\!\!-1}\!\! \left(\!{\delta_{\rm in}\over 0.1}\!\right)^{\!\!4}\! \left({r_{\rm in}\over 2a}\right)^{\!3.5}{\rm Myrs} \label{eq:talign2}\end{equation} ($\eta M_t$ is the reduced mass of the binary). The amount of mass accretion needed for an $e$-fold reduction of $\Theta$ is [see Eq.~(\ref{eq:betat})] \begin{equation} (\Delta M)_e\simeq 0.05\,(\eta M_t) \left(\!{\delta_{\rm in}\over 0.1}\!\right)^{\!\!4}\! \left({r_{\rm in}\over 2a}\right)^{\!3.5}. \label{eq:dme}\end{equation} Thus, only a small fraction of the binary mass has to be accreted to achieve a significant reduction of $\Theta$. We comment on two assumptions adopted in our calculations of Sections 3-4: (i) We have assumed that the binary separation $a$ is constant. In reality, the binary-disk alignment can take place simultaneously as the binary orbit shrinks (due to binary-disk interactions).
However, since the alignment timescale depends only on the ratio $r_{\rm in}/a$, we expect our result to be largely unchanged in such a situation as long as $r_{\rm in}$ tracks $a$ while the orbit decays. (ii) We have assumed that there is a constant supply of gas at the outer disk and that the total angular momentum of the disk, $L_{\rm disk}$, is much larger than that of the binary, $L_b$. If we consider an isolated circumbinary disk (e.g., when an episode of mass infall from the turbulent cloud/core onto the central binary occurs) with $L_{\rm disk}$ comparable to or smaller than $L_b$, then both the binary and the disk will precess around a common total angular momentum axis while the mutual inclination $\Theta$ evolves. In this case, equation (\ref{eq:dtheta}) is replaced by \begin{equation} {d\Theta\over dt}=-{{\cal T}_{\rm align}\over L_b}-{{\cal T}_{\rm align}\over L_{\rm disk}}\cos\Theta=-{\sin\Theta\over t_{\rm align}} \left(1+{L_b\cos\Theta\over L_{\rm disk}}\right). \label{eq:dthetadt} \end{equation} This equation neglects the accretion torque, so that ${\cal T}_{\rm align}\simeq {\cal T}_x$. Note that in general, ${\cal T}_x$ and $t_{\rm align}$ will be modified from the expressions given in Section 3. But as long as $r_{\rm out}\gg r_{\rm in}$, we expect the corrections to be small (Foucart \& Lai, in preparation). It is also of interest to note that when $\cos\Theta< - L_{\rm disk}/L_b$, the gravitational torque tends to drive $\Theta$ toward $180^\circ$ (i.e., counter-alignment). However, this criterion is only valid instantaneously: both $L_{\rm disk}$ and $L_b$ vary in time due to dissipation in the disk and accretion onto the binary, and there is thus no guarantee that the evolution of $\Theta$ is monotonic. To determine which initial conditions lead to counter-alignment, assumptions have to be made regarding the evolution of $L_{\rm disk}$ and $L_b$. King et al.~(2005) showed that if $L_b$ is constant (negligible accretion onto the binary), counter-alignment will occur if the less restrictive condition $\cos\Theta < - L_{\rm disk}/(2L_b)$ is satisfied --- even though initially $d\Theta/dt < 0$. \footnote{ King et al.~(2005) studied accretion disks around spinning black holes, not circumbinary disks. However, the mathematical form of the evolution equation for $\Theta$ is identical to the circumbinary case (compare e.g. Eq.~(\ref{eq:dthetadt}) of this work with Eq.~(18) of King et al.). } \section{Conclusions and Implications} In this paper, we have considered scenarios for the assembly of proto-planetary disks around newly formed stellar binaries. The shape and inclination of the disk relative to the binary will determine the orbital orientations of the circumbinary planets that are formed in the disk. Because of the turbulence in molecular clouds and dense cores, inside which protostellar binaries and circumbinary disks form, and also because of the tidal torques between nearby cores, it is possible, and even likely, that gas falling onto the outer region of the circumbinary disk rotates along a direction different from the rotation axis of the binary. Thus in general, the newly assembled circumbinary disk will be misaligned with respect to the binary. However, the gravitational torque from the binary produces a warp and twist in the disk, and the back-reaction torque associated with that twist tends to align (under most conditions) the disk and the binary orbital plane.
We have presented new calculations of the interaction between the warped/twisted disk and the binary, and have shown that the disk warp is small under typical conditions. More importantly, we have derived new analytic expressions for the binary-disk alignment torque and the associated timescale [see Eq.~(\ref{eq:talign2})]. Our results show that the misalignment angle can be reduced appreciably after the binary accretes a few percent of its reduced mass [see Eq.~(\ref{eq:dme})]. Proto-binaries formed by fragmentation (either turbulent fragmentation or disk fragmentation; see Section 2) have initial separations much larger than 1~AU. Significant inward migration must occur to produce close (sub-AU) binaries. Since mass accretion necessarily takes place during disk-driven binary migration, our results then suggest that close binaries are likely to have aligned circumbinary disks, while wider binaries {\it can} have misaligned disks. This can be tested by future observations. The circumbinary planetary systems discovered by {\it Kepler} (see Section 1) all contain close (period $\lo 41$~d) binaries. If the planets form in the late phase of the circumbinary disk (as is likely to be the case considering the relatively small planet masses in the {\it Kepler} systems), then the planetary orbits will be highly aligned with the binary orbits, even if the initial disk has a misaligned orientation. This is indeed what is observed. Of course, given the complexity of the various processes involved, one may see some exceptions. In particular, in this paper we have not considered any dynamical processes (few-body interactions) that may take place after the binary and planet formation. Such processes can also affect the mutual inclinations of circumbinary planets. Observationally, tertiary bodies on misaligned orbits around close, eclipsing binaries can be detected by searching for periodic eclipse timing variations. This has led to the identification of many binaries with tertiary companions (e.g., Liao \& Qian 2010; Gies et al.~2012). A number of post-main-sequence eclipsing binaries have been claimed to host candidate circumbinary planets, such as HW Virginis (Lee et al.~2009), HU Aquarii (Qian et al.~2011; Gozdziewski et al.~2012), NN Serpentis (Beuermann et al.~2010), DP Leonis (Beuermann et al.~2011) and NY Vir (Qian et al.~2012). However, some of these claims are controversial since the proposed planetary orbits may be dynamically unstable on short timescales (see Horner et al.~2011,2012). Currently, no misaligned (inclination $\go 5^\circ$) circumbinary planets have been confirmed around main-sequence binaries. Overall, our calculations in this paper illustrate that the mutual inclinations and other orbital characteristics of circumbinary planetary systems can serve as a diagnostic tool for the assembly and evolution of protoplanetary disks and the conditions of formation of these planetary systems. \section*{Acknowledgments} This work has been supported in part by the NSF grants AST-1008245, AST-1211061 and the NASA grant NNX12AF85G.
\section{Introduction}\label{sec:intro} Energy prediction problems are essential for operating, monitoring, and optimizing (for efficiency and cost) diverse energy systems, from the supply side (e.g., wind energy, solar energy, power systems, batteries) to the demand side (e.g., load monitoring, usage of electric vehicles, building energy management). Numerous studies have been carried out on predicting energy generation/consumption using time-series data \cite{ziel2016forecasting,liu2015highly,alessandrini2015analog,zuluaga2015short,wang2015study,garshasbi2016hybrid}. For instance, Kalman filtering, wavelet packet transforms, and least square support vector machines are used to predict wind power performance \cite{zuluaga2015short,wang2015study}, while an analog ensemble method is applied to forecast solar power \cite{alessandrini2015analog}. Liu et al. \cite{liu2015highly} predict the remaining state of charge of electric vehicle batteries based on predictive control theory. Hybrid genetic algorithms and Monte Carlo simulation approaches are applied to predict energy generation and consumption in net-zero energy buildings \cite{garshasbi2016hybrid}. Modern energy systems usually involve a large number of subsystems; for example, hundreds of wind turbines are closely collocated in a wind farm, where the wind resource is similar and their conditions are analogous in terms of power transmission to the power system. As a result, the power outputs of individual wind turbines are mutually related, and the characteristics of these spatial interactions can potentially be applied for prediction \cite{jiang2015understanding} and design optimization. The prediction approaches discussed above can be viewed as methods of exploring temporal relationships. Spatial and temporal relationships widely exist in energy systems \cite{jain2014forecasting,liu2010prediction,jung2014current,kwon2010uncertainty}, yet spatiotemporal features are less commonly leveraged for energy prediction problems. The exploration of such features has been shown to be efficient in wind speed forecasting problems \cite{tascikaraoglu2016exploiting,jung2014current,tascikaraoglucompressive}. To facilitate energy prediction for energy systems with both spatial and temporal characteristics, probabilistic graphical models (PGMs, a variety of models described by conditional dependence structures, i.e., graphs, including Bayesian networks and undirected/directed Markov networks, which can be used to deal with dynamic systems and relational data \cite{koller2009probabilistic}) can possibly be employed, as the spatiotemporal interactions are naturally suited for graph representations and can be evaluated by the associated probabilities. Bayesian networks are a type of PGM that captures causal relationships using directed edges \cite{koller2009probabilistic}, where the overall joint probability distribution of the network nodes (variables) is computed as a product of the conditional distributions (factors) defined by the nodes in the network. However, prediction problems are not straightforward for Bayesian networks, as they only encode node-based conditional probabilities, and the approximation of the joint distribution using node-based structures is often intractable \cite{sarkar2016pgm}. This is because a certain directed acyclic graphical structure may not allow for easy and exact computation of certain probabilities related to inference questions.
Markov models, as a class of statistical models, have been widely applied to different domains, e.g., natural language processing and speech recognition \cite{leek1997information}. These models have been shown to be efficient in identifying the probabilistic dependencies among random variables in both directed and undirected settings. Hidden Markov Models (HMMs) have been particularly successful for learning the temporal dynamics of an underlying process \cite{rabiner1989tutorial}. Several modifications of HMMs have been proposed, such as the infinite HMM (IHMM) \cite{beal2001infinite}, which consolidates several parameters into three hyper-parameters to model countably infinite hidden state sequences, and the infinite hierarchical HMM (IHHMM) \cite{heller2009infinite}, which extended HMMs to an infinite number of hierarchical levels, while \cite{wakabayashi2012forward} applied a forward-backward algorithm to reduce model complexity through the order of operations. However, Markov models with hidden states usually rely on iterative learning algorithms that may be computationally expensive. To alleviate such issues, symbolic dynamic filtering (SDF) was proposed \cite{ray2004symbolic,rajagopalan2006symbolic} based on the concepts of symbolic dynamics and probabilistic finite state automata (PFSA). Several improvements related to coarse graining of continuous variables \cite{SSS13}, state splitting and merging techniques for PFSA \cite{mukherjee2014state}, efficient inference algorithms \cite{sarkar2013symbolic}, and hierarchical model learning \cite{akintayo2015symbolic} have been proposed over the last decade within the SDF framework. SDF has been shown to be extremely efficient for anomaly detection and fault diagnostics of various complex systems, such as gas turbine engines~\cite{SSM12}, shipboard auxiliary systems~\cite{SSV14}, nuclear power plants~\cite{JGSRE11}, coal gasification systems~\cite{CSGR08} and bridge monitoring processes~\cite{LGLPS17}. For the purpose of addressing prediction problems in disparate energy systems, this work presents a new data-driven framework (namely spatiotemporal pattern networks, or STPN) that leverages the spatiotemporal interactions of energy systems for prediction. Built on SDF, an STPN aims to capture the spatiotemporal characteristics of complex energy systems and to implement prediction at both spatial and temporal resolutions. For validation, two representative case studies are carried out using the proposed approach: the first is taken from the energy supply side, wind power prediction in a large-scale wind farm, and the second from the energy demand side, energy disaggregation (also known as non-intrusive load monitoring (NILM), a well-established problem that involves disaggregating the total electrical energy consumption of a household into its constituent load components without the need for extensive metering installations on individual circuits or appliances \cite{GH92,MZKR11,cominola2017hybrid}). \textbf{Contributions}: First, a novel data-driven method for energy prediction based on the STPN framework is proposed and the concepts of interest are established. Second, two typical case studies based on wind turbine power (supply side energy) and residential building energy disaggregation (demand side energy) are performed for validating the proposed scheme. For wind turbine power prediction, the spatiotemporal characteristics between different wind turbines are identified, while for home energy disaggregation, the complex coupled temporal features are captured.
An STPN-based convex programming formulation is presented in this work in order to improve energy disaggregation performance. We also present a comparative study of the energy prediction performance of the proposed technique against other state-of-the-art methods for both cases. The remaining sections are outlined as follows. In Section~\ref{sec:Symbolic}, the necessary background on SDF is presented, along with the concept of a $D$-Markov machine. The prediction approach based on the STPN is given in Section~\ref{sec:STPN}, and two typical case studies for validating the proposed framework, i.e., supply side (wind turbines) and demand side (NILM), are presented in Section~\ref{wind_turbine} and Section~\ref{NILM}, respectively. In Section~\ref{sec:con}, concluding remarks and future research directions beyond the existing results are offered. \section{Symbolic Dynamic Filtering and $D$-Markov Machines}\label{sec:Symbolic} This section gives the essential background on symbolic dynamic filtering necessary to characterize the proposed prediction method. We refer interested readers to~\cite{SSS13} for more details. SDF is built upon the relevant concepts of discrete dynamic systems, in which discretization and symbolization are critical steps to convert collected or observed continuous data to discrete symbol sequences. Therefore, dynamic systems can be studied in deterministic or probabilistic settings in the symbolic space by using language-theoretic approaches, e.g., shift-maps and sliding block codes. The simplest partitioning approaches are uniform partitioning and maximum entropy partitioning, though these two methods have mainly been applied to simple dynamic systems with low-variance data. The state-of-the-art partitioning or discretization approaches include symbolic false nearest neighbor partitioning (SFNNP)~\cite{PhysRevLett.91.084102}, wavelet transform~\cite{SSS13}, and Hilbert-transform-based analytic signal space partitioning (ASSP)~\cite{SR08}. Recently, a supervised partitioning scheme, i.e., maximally bijective discretization (MBD)~\cite{SSS13}, has been proposed for modeling and analyzing complex dynamic systems. Unlike the other methods, MBD is able to maximally preserve the input-output relationship originating from the continuous domain after discretization in dynamical systems. After discretization of the time-series data in the continuous domain, symbolization is subsequently implemented to establish the $D$-Markov machines. For SDF, a critical assumption is that any symbol sequence generated from time-series data can be approximated as a Markov chain of order $D$ (a positive integer). Such a Markov chain is called a $D$-Markov machine, and it is used to establish the model for each time series based on the temporal features associated with the symbol sequence. Some relevant definitions are more formally given as follows. \begin{definition}\label{def:DFSA}~\cite{SSV14} (DFSA) A deterministic finite state automaton (DFSA) is a 3-tuple $\mathcal{G} = (H, \mathcal{Q}, \phi)$ where: \begin{enumerate} \item $H$ is a finite symbol alphabet with $H\neq\varnothing$ (the empty set); \item $\mathcal{Q}$ is a finite set of states with $\mathcal{Q}\neq\varnothing$; \item $\phi : \mathcal{Q} \times H \rightarrow \mathcal{Q}$ is the state transition function; \end{enumerate} while $H^\star$ represents the collection of all finite symbol sequences from $H$, including the empty sequence $\varepsilon$.
\end{definition} \begin{definition}\label{def:PFSA}~\cite{SSV14} (PFSA) A probabilistic finite state automaton (PFSA) is an extension of a DFSA $\mathcal{G} = (H, \mathcal{Q}, \phi)$ to a probabilistic setting as a pair $\mathcal{K}=(\mathcal{G}, F)$, i.e., the PFSA $\mathcal{K}$ is a 4-tuple $\mathcal{K} = (H, \mathcal{Q}, \phi, F)$, where: \begin{enumerate} \item $H, \mathcal{Q}$, and $\phi$ have the same definitions as in Definition~\ref{def:DFSA}; \item $F : \mathcal{Q} \times H \rightarrow [0, 1]$ is the symbol generation function (i.e., probability morph function), which satisfies $\sum_{\sigma \in H}F(q, \sigma) = 1 \ \ \forall q \in \mathcal{Q}$; the entry $F(q_i, \sigma_j)$ indicates the probability of the symbol $\sigma_j \in H$ being generated from the state $q_i \in \mathcal{Q}$. \end{enumerate} \end{definition} \begin{definition} \label{def:D-Markov}~\cite{SSV14} (D-Markov) A D-Markov machine is an extension of a PFSA where the previous $D$ symbols form a state, as defined by: \begin{enumerate} \item $D$ signifies the depth of the Markov machine; \item $\mathcal{Q}$ is a finite set of states with $|\mathcal{Q}| \leq |H|^D$, i.e., each state in the Markov machine is identified by an equivalence class of symbol strings of length $D$ with symbols in $H$; \item $\phi : \mathcal{Q} \times H \rightarrow \mathcal{Q}$ signifies the state transition function such that if $|\mathcal{Q}| = |H|^D$, then for any two symbols $\alpha, \beta \in H$ and any $\gamma \in H^\star$ with $\alpha \gamma, \gamma \beta \in \mathcal{Q}$, we have $\phi(\alpha \gamma, \beta) = \gamma \beta$. \end{enumerate} \end{definition} \begin{remark} Based on Definition~\ref{def:D-Markov}, it can be concluded that a D-Markov machine is naturally a stationary stochastic process $X = \cdots x_{-1} x_0 x_1 \cdots$, in which the probability of occurrence of a new symbol $x_n$ is determined by the last $D$ symbols, i.e., $P[x_n | x_{n-1} \cdots x_{n-D} \cdots x_0] = P[x_n | x_{n-1} \cdots x_{n-D}]$. \end{remark} We denote by $\Pi$ the state transition matrix, each entry of which indicates the transition probability from one symbolic state to another. We give a simple example to illustrate this. Let the $k^{th}$ state of a dynamical system $A$ be $s_{k}^A$, such that the $ij^{th}$ entry $\pi_{ij}^A$ of the matrix $\Pi^A$ indicates the probability of $s_{k+1}^A$ being $j$ given that the previous state $s_{k}^A$ was $i$, i.e., \begin{equation*} \label{PiA} \pi_{ij}^A := P\left(s_{k+1}^A = j \ | \ s_{k}^A =i\right) \ \forall k \end{equation*} Moreover, one can model an individual dynamical system making use of a $D$-Markov machine. Because a $D$-Markov machine cannot capture the interaction dependencies for multiple systems or sub-systems in a large complex system, it has recently been extended to the x$D$-Markov machine, which was originally developed in order to capture the causal dependencies among different systems or sub-systems. Different from correlation-based analysis, such a model can efficiently capture and generalize the causal dependencies~\cite{C14}. The following gives the formal definition of the x$D$-Markov machine. \begin{definition}~\cite{SSV14} (xD-Markov)\label{xD-Markov} Let $\mathcal{R}_1$ and $\mathcal{R}_2$ be the PFSAs which correspond to symbol streams $\{\textbf{x}_1\}$ and $\{\textbf{x}_2\}$ respectively.
Then a x$D$-Markov machine is defined as a 5-tuple $\mathcal{R}_{1\rightarrow 2} := (\mathcal{Q}_1,H_1,H_2,\phi_{1},F_{12})$ such that: \begin{enumerate} \item $H_1 = \{H_0, ... ,H_{|H_1|-1}\}$ represents the alphabet set of symbol sequence $\{\textbf{x}_1\}$ \item $\mathcal{Q}_1 =\{s_1,s_2,\dots,s_{|\mathcal{Q}_1|}\}$ is the state set corresponding to symbol sequence $\{\textbf{x}_1\}$ \item $H_2 = \{H_0, ... ,H_{|H_2|-1}\}$ represents the alphabet set of symbol sequence $\{\textbf{x}_2\}$ \item $\phi_{1}:Q_1 \times H_1 \rightarrow \mathcal{Q}_1$ gives the state transition mapping that maps the transition in symbol sequence $\{\textbf{x}_1\}$ from one state to another based on the occurrence of a symbol in $\{\textbf{x}_1\}$ \item $F_{12}$ is the symbol generation matrix of size $|{\mathcal{Q}_1}|\times|{H_2}|$; the $ij^{th}$ entry of $F_{12}$ denotes the probability of obtaining the symbol $\sigma_j$ of $\{\textbf{x}_2\}$ while making a transition from the state $s_i$ of $\{\textbf{x}_1\}$ \end{enumerate} \end{definition} Therefore, for an individual symbol sequence, one can obtain the probability of a new symbol occurring given the previous $D$ symbols. On the other hand, to obtain the probability of a new symbol occurring in one symbol sequence given the last $D$ symbols of a different symbol sequence, a x$D$-Markov machine can be applied. Equivalently, given a x$D$-Markov machine, the causal dependency of one symbol sequence on another can be captured. \begin{figure*} \centering \subfigure[]{\includegraphics[width=0.95\textwidth]{sdf1-eps-converted-to.pdf}} \subfigure[]{\includegraphics[width=0.95\textwidth]{SDF1_1-eps-converted-to.pdf}} \caption{\textit{Illustration of generation of a PFSA using (a) maximally bijective discretization and (b) maximum entropy partitioning for system A}.}\label{Figure1-1:1} \end{figure*} \section{Spatiotemporal Pattern Network}\label{sec:STPN} This section presents how to construct the spatiotemporal pattern network (STPN) for two dynamical systems, $A$ and $B$, based on the concepts of SDF introduced above. We first start with data partitioning/discretization and symbolization, followed by the details of STPN construction. \subsection{Discretization and Symbolization} Suppose there are two different dynamic systems $A$ and $B$. In real-world problems, such as wind power prediction, $A$ and $B$ can represent two different wind turbines in a large wind farm. Alternatively, in residential home energy disaggregation, $A$ and $B$ could represent the HVAC system electricity consumption and that of all appliances. For each system, various variables are measured, and typically some key observations are picked to establish and analyze the model. For example, for a wind turbine, wind speed and wind power are the two key observations for power prediction. Other variables, e.g., wind direction and humidity, may also affect power and can be taken into account. The first step in modeling dynamic systems in terms of symbolic dynamics is data discretization. As mentioned above, there are many approaches that can be used; in this paper, maximally bijective discretization (MBD) is applied to the supply side dynamic systems (wind turbines) and maximum entropy partitioning is used for the demand side dynamic systems (HVAC, appliances, etc.).
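To make the demand-side discretization step concrete, the following Python sketch (our own illustrative implementation, not the authors' code) performs maximum entropy partitioning, in which the bin edges are placed at equal-probability quantiles so that every symbol occurs (nearly) equally often, and then symbolizes a continuous signal:
\begin{verbatim}
import numpy as np

def max_entropy_partition(x, n_symbols):
    """Bin edges at equal-probability quantiles of the data, so each
    symbol occurs (nearly) equally often in the training series."""
    qs = np.linspace(0.0, 1.0, n_symbols + 1)[1:-1]
    return np.quantile(x, qs)  # interior bin edges

def symbolize(x, edges):
    """Map a continuous time series to symbols 0 .. n_symbols-1."""
    return np.digitize(x, edges)

# Hypothetical demand-side load trace (illustrative data only)
rng = np.random.default_rng(0)
x = rng.gamma(2.0, 1.0, size=10000)
edges = max_entropy_partition(x, n_symbols=8)
s = symbolize(x, edges)
print(np.bincount(s) / s.size)  # roughly uniform symbol frequencies
\end{verbatim}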
The reason for selecting different methods is the difference in the measured variables. For wind turbines, wind speed and wind power are chosen, and their input-output relation in the continuous domain can be maximally maintained. However, for home energy disaggregation, the only variable for each load component is the energy consumption itself, so there is no input-output relation in the continuous domain. \subsection{Symbolic Modeling of Dynamical Systems and Interactions} Figure~\ref{Figure1-1:1} shows the symbol sequence generation in the form of a PFSA using two different methods, i.e., maximally bijective discretization and maximum entropy partitioning, respectively. As discussed before, a $D$-Markov machine can be represented by a PFSA in which the previous $D$ symbols define a state. In this context, we consider two different systems and quantify their spatiotemporal or temporal relations. As shown in Figure~\ref{Figure2:1}, the state transition matrices $\Pi^A$ and $\Pi^B$ capture the self-relations of systems $A$ and $B$ respectively, while the cross-state transition matrices $\Pi^{AB}$ and $\Pi^{BA}$ represent the cause-effect relations from A to B and from B to A, respectively. It should be noted that such causal dependencies between systems $A$ and $B$ are not necessarily symmetric. To quantify the relations in $D$-Markov and x$D$-Markov machines, atomic patterns (APs) and relational patterns (RPs) were introduced in~\cite{SSV14}, which provides more details. More formally, the entries of the cross-state transition matrices $\Pi^{AB}$ and $\Pi^{BA}$ can be expressed by: \begin{gather*} \label{PiAB} \pi_{k\ell}^{AB} := P\left(s_{n+1}^B = \ell \ | \ s_{n}^A =k\right) \ \forall n \end{gather*} \begin{gather*} \label{PiBA} \pi_{ij}^{BA} := P\left(s_{n+1}^A = j \ | \ s_{n}^B =i\right) \ \forall n \end{gather*} where $j,k \in Q^A$ and $i,\ell \in Q^B$. The above relations show that a cross-state transition matrix can be constructed from symbol sequences obtained from two different dynamical systems, where each entry signifies the transition probability from one state in the first dynamical system to another state in the second dynamical system. For instance, $\pi_{ij}^{BA}$ is the transition probability from state $i$ in system $B$ to state $j$ in system $A$. \begin{figure} \centering \includegraphics[width=0.75\textwidth]{stpn-eps-converted-to.pdf} \caption{\textit{Construction of STPN: Atomic patterns (APs) and relational patterns (RPs) formulation}.}\label{Figure2:1} \end{figure} Moreover, we use an information metric to quantify the value of the atomic and relational patterns (in this work, the relational patterns are the major concern); specifically, mutual information is the metric introduced to address this quantification. For example, following Figure~\ref{Figure2:1}, we denote by $I^{AA}$ the atomic pattern of system A and by $I^{AB}$ the relational pattern from system A to system B.
Formally, the atomic pattern of system A is expressed as follows: \begin{gather*} \label{InfA} I^{AA} = I(s_{n+1}^A; s_n^A) = \mathcal{H}(s_{n+1}^A) - \mathcal{H}(s_{n+1}^A|s_n^A) \end{gather*} where \begin{gather*} \label{HA} \mathcal{H}(s_{n+1}^A) = -\sum_{i=1}^{|\mathcal{Q}_A|} P(s_{n+1}^A = i)\log_2 P(s_{n+1}^A = i) \end{gather*} \begin{gather*} \label{HAA} \mathcal{H}(s_{n+1}^A|s_{n}^A) = \sum_{i=1}^{|\mathcal{Q}_A|} P(s_{n}^A = i)\,\mathcal{H}(s_{n+1}^A|s_{n}^A = i) \end{gather*} \begin{gather*} \label{HAAcond} \mathcal{H}(s_{n+1}^A|s_{n}^A = i) = -\sum_{l=1}^{|\mathcal{Q}_A|} P(s_{n+1}^A = l|s_n^A = i)\cdot \\ \log_2 P(s_{n+1}^A = l|s_n^A = i) \end{gather*} Therefore, based on the quantity $I^{AA}$ (defined using the entropies $\mathcal{H}$ presented above), the temporal self-prediction capability of system A can be identified. On the other hand, the mutual information for the relational pattern involving systems A and B can be described as: \begin{gather*} \label{InfAB} I^{AB} = I(s_{n+1}^B; s_n^A) = \mathcal{H}(s_{n+1}^B) - \mathcal{H}(s_{n+1}^B|s_n^A) \end{gather*} where \begin{gather*} \label{HBA} \mathcal{H}(s_{n+1}^B|s_{n}^A) = \sum_{i=1}^{|\mathcal{Q}_A|} P(s_{n}^A = i)\,\mathcal{H}(s_{n+1}^B|s_{n}^A = i) \end{gather*} \begin{gather*} \label{HBAcond} \mathcal{H}(s_{n+1}^B|s_{n}^A = i) = -\sum_{l=1}^{|\mathcal{Q}_B|} P(s_{n+1}^B = l|s_n^A = i)\cdot \\ \log_2 P(s_{n+1}^B = l|s_n^A = i) \end{gather*} Hence, the quantity $I^{AB}$ identifies system A's capability of predicting system B's outputs, and vice versa for $I^{BA}$. Furthermore, based on the mutual information, patterns can be assigned weights, and patterns with low mutual information may be rejected to simplify the model. Interested readers can find more details in~\cite{SSV14}. The above analysis shows that the proposed STPN can be an effective tool for capturing the spatiotemporal interactions between different dynamic systems. To validate this data-driven method, this paper offers two case studies in terms of supply side dynamic systems (i.e., wind turbines in a wind farm) and demand side dynamic systems (i.e., home electric energy disaggregation) to demonstrate its efficacy and effectiveness. The prediction process can be described as follows: Given a training data set in the continuous domain, we use partitioning methods to discretize and symbolize the data for running the x$D$-Markov machine. The state transition matrices are then obtained for predictions in the symbolic or continuous domain. For the symbolic prediction, we find the most likely symbol sequence for system $A$ given another symbol sequence of system $B$ by running the x$D$-Markov model numerous times. In the continuous domain, the prediction can be obtained from the symbolic prediction using the expectation as follows: \begin{equation}\label{prediction_STPN} W(k) = \sum_{j=1}^{m} Pr_k(j)W(E|j) \end{equation} where $W(k)$ represents the expected energy at the $k^{th}$ instant, $Pr_k(j)$ signifies the probability of the $j^{th}$ symbol occurring at the $k^{th}$ instant after running numerous Markov chain Monte Carlo simulations, and $W(E|j)$ indicates the expected energy for the discrete bin labeled by symbol $j$ (with $m$ discrete symbols in total). The pseudocode of energy prediction based on STPN is as follows.
\begin{algorithm}[H]\label{STPN} \caption{Energy Prediction based on STPN} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Training data sets $C'_i$ of systems $i$ ($i$ represents any system), depth $D$} \Output{Predicted results $\hat{C}_i$} \text{Discretize and symbolize the continuous data $C'_i$ to $s_i$}\; \text{Calculate state transition matrices and mutual information from $s_i$}\; \text{Calculate the expected value of energy in each discrete bin}\; \text{Use Eqn.~\ref{prediction_STPN} to calculate the prediction results $\hat{C}_i$}\; \end{algorithm} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{spatial-information-of-wt-eps-converted-to.pdf} \caption{\textit{Geographical information of the wind turbines under analysis, which are located in California between 35.28-35.33n and 118.09-118.17w}}\label{Figure3:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{abstracted-network-of-wind-turbine-eps-converted-to.pdf} \caption{\textit{Representation of STPN for 12 wind turbines}}\label{Figure4:1} \end{figure} \section{Supply Side: Wind Turbines}\label{wind_turbine} \subsection{Geographical information} In this subsection, a case study based on supply side energy systems, i.e., wind turbines, is used for validating the data-driven method proposed in this work. The STPN framework is used in a wind turbine network in order to capture the causal dependencies among different wind turbines, which can be regarded as sub-systems of a wind farm. This paper uses the 2006 Western Wind Integration data set obtained from NREL~\cite{NREL06} to uncover causal dependencies, which are vitally important to individual wind turbine power prediction in a mutual turbine-turbine setting. For establishing the STPN, twelve wind turbines (located in California) that have capacity factors in excess of 40\% are chosen; their IDs are: 4494, 4495, 4496, 4497, 4423, 4424, 4425, 4426, 4427, 4361, 4313 and 4314 (labeled 1-12 in this context), and their capacity factors are approximately between 41\% and 45\%. For completeness, the geographical information of the wind turbines is also provided. The annual average wind velocity in the area where the considered turbines are located is around 9 $m/s$, with an elevation from 1019 to 1207 m. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{data-partition-eps-converted-to.pdf} \caption{\textit{Discretization of a typical wind turbine system using MBD}}\label{Figure5:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{symbol-eps-converted-to.pdf} \caption{\textit{Symbol sequence plot for a typical wind turbine}}\label{Figure6:1} \end{figure} As shown in Figure~\ref{Figure3:1}, the twelve wind turbines are distributed across various locations and can be identified as the nodes of the STPN represented in Figure~\ref{Figure4:1}. From Figure~\ref{Figure5:1}, the relation between wind speed and wind power can be observed; the other wind turbines exhibit the same pattern. The input-output relation involving a wind turbine is significant, and MBD maximally preserves this correspondence in the symbolic domain. The spatiotemporal patterns between different wind turbines and the corresponding relational patterns can be extracted from the symbol sequences. Figure~\ref{Figure6:1} shows an instance of a symbol sequence for a wind turbine; it can be observed that most of the symbols are 1, 5, 8 and 9.
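To make the construction of the relational patterns concrete, the following Python sketch (our illustrative implementation for depth $D=1$, where states coincide with symbols; variable names are hypothetical) estimates a cross-state transition matrix and the corresponding mutual information from two symbol sequences such as those in Figure~\ref{Figure6:1}:
\begin{verbatim}
import numpy as np

def cross_transition_matrix(sA, sB, n_states, lag=1):
    """Estimate Pi^{AB}: row i holds P(s^B_{n+lag} = j | s^A_n = i),
    counted from paired symbol sequences (depth D = 1)."""
    Pi = np.zeros((n_states, n_states))
    for i, j in zip(sA[:-lag], sB[lag:]):
        Pi[i, j] += 1.0
    rows = Pi.sum(axis=1, keepdims=True)
    return Pi / np.maximum(rows, 1.0)  # unvisited states give zero rows

def relational_mutual_information(sA, sB, n_states, lag=1):
    """I^{AB} = H(s^B_{n+lag}) - H(s^B_{n+lag} | s^A_n), in bits."""
    Pi = cross_transition_matrix(sA, sB, n_states, lag)
    pA = np.bincount(sA[:-lag], minlength=n_states) / (len(sA) - lag)
    pB = pA @ Pi  # marginal distribution of s^B_{n+lag}
    HB = -np.sum(pB[pB > 0] * np.log2(pB[pB > 0]))
    Hcond = 0.0
    for i in range(n_states):
        row = Pi[i][Pi[i] > 0]
        Hcond -= pA[i] * np.sum(row * np.log2(row))
    return HB - Hcond
\end{verbatim}
Sweeping the \texttt{lag} argument reproduces the kind of mutual-information-versus-time-lag analysis shown in Figure~\ref{Figure7:1}.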
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{timelagvsmutualinformation-eps-converted-to.pdf} \caption{\textit{Mutual information of relational patterns for selected pairs of wind turbines}.}\label{Figure7:1} \end{figure} \subsection{Results and Discussion} The mutual information of the RP between a pair of wind turbines is first investigated according to the state transition matrices generated by the x$D$-Markov machines. We set the depth to 1 for simplicity; this parameter can be increased. Therefore, the current state of one selected wind turbine depends only on the last state of another selected wind turbine. The effect of time lag on the mutual information between wind turbines is studied to address the temporal characteristics. The results in Figure~\ref{Figure7:1} show that as the time lag increases, the mutual information decreases correspondingly. Thus, in this work, the causal dependencies between any two different wind turbines are maximized at time lag 1. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{causality-network-eps-converted-to.pdf} \caption{\textit{Spatiotemporal pattern network for the group of wind turbines}}\label{Figure8:1} \end{figure*} The spatial characteristics between two different wind turbines are another critical factor in the STPN. Wind turbines labeled 5, 6, 7, 1, and 10 are chosen for this analysis. Figure~\ref{Figure8:1} shows that the causal dependency between any two wind turbines decreases with increasing geographical (spatial) distance between them along any direction. Figure~\ref{Figure9:1} also illustrates that the mutual-information-based metric for a pair of wind turbines exhibits a generally decreasing trend with the Euclidean distance between them. In summary, based on both of these observations, the mutual-information-based metric is an effective technique to capture the spatial and temporal patterns in wind turbine systems. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{euclidean-distance-vs-mi-eps-converted-to.pdf} \caption{\textit{A monotonically decreasing relationship for different pairs of wind turbines when spatial distances increase }}\label{Figure9:1} \end{figure} Next, we evaluate the effectiveness of the STPN in revealing causal dependencies through wind power prediction. The symbolic and continuous prediction of one wind turbine's power is based on the observed symbol sequence emerging from another turbine. Following the energy prediction procedure described above, Figure~\ref{Figure10:1} and Figure~\ref{Figure11:1} show the symbol prediction results, in which the predicted symbol sequences of wind turbine 5, under the observations of wind turbines 6 and 7 respectively, are compared to the true symbol sequences of wind turbine 5. It is noted that the model is trained on the data from the first half of 2006 and tested on the data from the second half. From these two plots it can be observed that for most of the time the proposed x$D$-Markov machines have a strong prediction capability, while some errors may come from the transient symbols. Moreover, the prediction using wind turbine 6 is slightly better than that using wind turbine 7, as implied by the mutual information.
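For the continuous-domain step of the procedure, a minimal sketch (our illustration, assuming depth $D=1$ and a cross-state transition matrix $\Pi^{BA}$ estimated as above) implements Eq.~(\ref{prediction_STPN}); rather than averaging many Markov chain Monte Carlo runs, it uses the one-step transition row directly, which yields the same expectation for depth one:
\begin{verbatim}
import numpy as np

def predict_continuous(sB, Pi_BA, bin_expectations, lag=1):
    """Eq. (prediction_STPN): W(k) = sum_j Pr_k(j) * W(E|j), where
    Pr_k(.) is the row of Pi^{BA} selected by the symbol observed
    from system B at instant k - lag, and bin_expectations[j] is the
    expected power W(E|j) of bin j, estimated on the training data."""
    W = np.empty(len(sB) - lag)
    for k in range(lag, len(sB)):
        Pr_k = Pi_BA[sB[k - lag]]  # P(s^A_k = j | observed s^B_{k-lag})
        W[k - lag] = Pr_k @ bin_expectations
    return W
\end{verbatim}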
Figure~\ref{Figure12:1} shows the mean square error (MSE) as a function of the spatial distance between pairs of wind turbines (using wind turbines 5, 6, 7, 8, and 9); it displays a monotonically increasing trend. The symbolic prediction capability of the proposed STPN has thus been demonstrated. An example of energy prediction for wind turbine 5 in the continuous domain, with the observation of the symbol sequence of wind turbine 6, is shown next to validate the energy prediction method. Figure~\ref{Figure13:1} shows that the major trend in the actual data is captured accurately by the continuous-domain prediction, as the MBD partitioning method is effective in preserving the input-output relation. However, a finer discretization may improve the prediction result in the continuous domain, even though that requires a larger amount of data and correspondingly increases the computational complexity. To evaluate the proposed scheme for wind power prediction, we compare the performance of the STPN framework with a popular approach, namely the Hidden Markov Model (HMM) with mixtures, which is adapted from the HMM to deal with multiple variables. A toolbox compatible with MATLAB~\cite{murphy2013hidden} is applied in this context. The results in Figure~\ref{Figure13:1} show that the proposed STPN-based prediction method outperforms the HMM with mixtures under visual inspection. Quantitatively, while the MSE of the predicted power using the HMM with mixtures is $99.8842$, the MSE of the predicted power using the proposed algorithm is $18.9521$. Therefore, it can be concluded that the STPN scheme, in which the causal dependencies between different wind turbines are captured, is an effective technique for wind power prediction. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{wt6-wt5-eps-converted-to.pdf} \caption{\textit{Symbolic prediction of wind turbine 5 behavior with the observation of wind turbine 6}}\label{Figure10:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{wt7-wt5-eps-converted-to.pdf} \caption{\textit{Symbolic prediction of wind turbine 5 behavior with the observation of wind turbine 7}}\label{Figure11:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{MSE-eps-converted-to.pdf} \caption{\textit{MSEs of prediction of wind turbine 5 power using observations from other turbines: As geographical (spatial) distance increases, MSE increases}}\label{Figure12:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Wind_power_prediction_new-eps-converted-to.pdf} \caption{\textit{Wind power prediction for wind turbine 5 under the observation of the symbol sequence of wind turbine 6, using STPN and HMM with mixtures}}\label{Figure13:1} \end{figure} \section{Demand Side: Non-Intrusive Load Monitoring}\label{NILM} This section presents a second case study based on demand side energy systems; in particular, non-intrusive load monitoring (NILM) of electrical demand with the purpose of identifying electric load components for residential homes. As in the above section, the STPN framework is used for electric load component disaggregation. In order to best identify the disaggregated energy usage corresponding to each electric energy consuming component from the total energy consumption, convex programming is applied here.
This is necessary because for NILM there is no clear input-output relation, with the result that --- even though the STPN is used in this case study --- the results obtained may not be optimal. Here, optimal means that the sum of all load components of the residential home adds up to the whole building electricity use. Therefore, given the prediction results of the STPN, a convex programming based modification is introduced to achieve this optimal disaggregation. \subsection{Problem Description} For this case, the data set used for energy disaggregation is based on the Building America 2010 data set available from NREL~\cite{hendron2010building}. The data is for the hot and dry location of Bakersfield, California, with substantial heating, ventilation, and air-conditioning (HVAC) use in the summer, and includes the whole building electric (WBE) load, which is the sum of HVAC, lights (LIGHTS), appliances (APPL), and miscellaneous electric loads (MELS). The goal here is to apply the measured WBE time series to predict HVAC, LIGHTS, APPL, and MELS, respectively. It is noted that WBE is the only known variable; for each prediction, one month of data is adopted, where the first three weeks of data are used for training the model and the fourth week for testing. \textbf{Convex Programming}: Before stating the prediction results, the convex programming problem setup is formulated for completeness. Suppose that the results obtained by the STPN framework serve as ground truth for each load component except WBE. Thus the optimization problem can be expressed by \begin{equation}\label{convex_programming} \begin{aligned} &\text{minimize}_{C_i, i = 1,2,3,4}J:=\sum_{i=1}^{4}\|C_i-\hat{C}_i\|^2_2\\ &\text{s.t.}\sum_{i=1}^{4}C_i = S; C_i\in\mathbb{R}^n_{\geq 0} \end{aligned} \end{equation} where $C_i$ represent the decision variables to be determined, $\hat{C}_i$ signify the prediction results obtained from the STPN, $S$ is the known WBE time series, and $\|C_i-\hat{C}_i\|_2$ is the Euclidean norm of the difference between $C_i$ and $\hat{C}_i$. The pseudocode of energy prediction based on the STPN framework and convex programming is shown below. We refer to the combination of the STPN framework and the convex programming technique as STPN+convex programming throughout the rest of the analysis. \begin{algorithm}[H] \caption{Energy Prediction using STPN+convex programming} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Training data sets $S, C'_i(i=1,2,3,4)$, depth $D$} \Output{Optimal results $C_i(i=1,2,3,4)$} \text{Run all of the steps in Algorithm~\ref{STPN}}\; \text{Take the STPN results and solve the optimization problem in Eq.~\ref{convex_programming}}\; \text{Obtain the optimal results $C_i(i=1,2,3,4)$}\; \end{algorithm} \vspace{1cm} \textbf{Factorial Hidden Markov Model}: The Factorial Hidden Markov Model (FHMM) \cite{ZGMJ97} is an extension of Hidden Markov Models that parallelizes multiple Markov models in a distributed manner and performs task-related inference to arrive at a predicted observation. The application of such models is done by representing each end-use as a hidden state that is modeled by a multinomial distribution with $\mathbb{K}$ discrete values, and then summing each appliance meter's individual independent contribution to the expected observation (i.e., the total expected main meter value). The AFAMAP variant of the FHMM \cite{JZKTJ12}, which includes the trends in the hidden states of the FHMM, has also been reported to be effective in the disaggregation task.
\textbf{Factorial Hidden Markov Model}: The Factorial Hidden Markov Model (FHMM)~\cite{ZGMJ97} is an extension of Hidden Markov Models that runs multiple Markov models in parallel in a distributed manner and performs task-related inference to arrive at a predicted observation. Such models are applied by representing each end-use as a hidden state modeled by a multinomial distribution over $\mathbb{K}$ discrete values, and then summing each appliance meter's individual independent contribution to obtain the expected observation (i.e., the total expected main meter value). The AFAMAP~\cite{JZKTJ12} variant of FHMM, which includes trends in the hidden states, has also been reported to be effective in the disaggregation task. In our application of FHMM, the number of hidden states equals the number of appliances under test, while $\mathbb{K}=3$ in order to keep the computational requirements low. \textbf{Combinatorial Optimization}: The combinatorial optimization (CO)~\cite{BKJV11} algorithm is a heuristic scheme that attempts to minimize the $\ell_1$-norm of the difference between the total power at the mains and the sum of the power of the end-uses, given either a single- or multi-state formulation of the sum. The drawbacks of CO for disaggregation tasks are its sensitivity to transients and its degradation with an increasing number of devices or with similar device characteristics.\\ We applied the algorithms as available in the non-intrusive load monitoring toolkit~\cite{NJOHWAAM14} with exact inference~\cite{ZGMJ97} for the FHMM. \begin{figure} \centering \includegraphics[width=0.85\textwidth]{MI_1-eps-converted-to.pdf} \caption{\textit{Mutual information between WBE and HVAC, WBE and LIGHTS, WBE and APPL, and WBE and MELS, with time-lag increments of 2 minutes, in July 2010}}\label{MI_1} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{RP_network-eps-converted-to.pdf} \caption{\textit{STPN using variables WBE, HVAC, LIGHTS, APPL and MELS in July}}\label{RP_network} \end{figure} \subsection{Results and Discussion} For validation of the proposed energy prediction approach, two months, i.e., April and July, are selected to study the prediction performance. Since the Building America 2010 data set has a 1-hour sampling interval and only three weeks of data are available for training, this amount of data may not meet the data-size requirement for constructing the STPN; building an STPN with insufficient data may degrade the accuracy of the inferred causal dependencies between variables. Therefore, a data preprocessing technique, namely upsampling, is applied with an upsampling factor of 30, so that the sampling interval of the data set becomes 2 minutes. First, we study the causal dependencies among the five variables by computing the mutual information. Figure~\ref{MI_1} shows the variation of mutual information with respect to time lag, in increments of 2 minutes, to address the temporal characteristics. The depth of the x$D$-Markov machine is still 1, so that the current symbol of any of HVAC, LIGHTS, APPL, and MELS depends only on the past one symbol of WBE. Unlike in the wind turbine systems, the causal dependencies between WBE and the other four load components decrease only slightly with increasing time lag, which indicates that using WBE to predict the other parts of the energy consumption is temporally robust. It also shows that the causal dependency between WBE and HVAC in July is the largest among the four load components (i.e., compared with LIGHTS, APPL, and MELS), so that the prediction of HVAC using WBE yields the best accuracy. The results in Figure~\ref{RP_network} show the causal dependencies, quantified by mutual information, among all five variables. The causal dependency between HVAC and APPL is larger than that between HVAC and MELS as well as that between HVAC and LIGHTS, while the relations among LIGHTS, APPL, and MELS are also quite significant given the causal dependencies obtained in this context. In summary, this relational pattern network captures temporal interactions between different end uses and can be an effective technical tool for energy disaggregation.
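As an illustration, lagged mutual information curves like those in Figure~\ref{MI_1} can be computed from the symbol sequences along the following lines; this is a sketch assuming Python with \texttt{scikit-learn}, and the function name \texttt{lagged\_mi} as well as the toy sequences are our own placeholders.
\begin{verbatim}
import numpy as np
from sklearn.metrics import mutual_info_score

def lagged_mi(x, y, max_lag):
    """Mutual information between symbol sequence x (e.g., WBE) and
    symbol sequence y (e.g., HVAC) for lags 0..max_lag (in samples)."""
    mis = []
    for lag in range(max_lag + 1):
        if lag == 0:
            mis.append(mutual_info_score(x, y))
        else:
            mis.append(mutual_info_score(x[:-lag], y[lag:]))
    return np.array(mis)

# toy usage with random symbol sequences over a small alphabet;
# with 2-minute samples, 30 lags span one hour
rng = np.random.default_rng(0)
wbe = rng.integers(0, 4, size=1000)    # symbolized WBE
hvac = rng.integers(0, 4, size=1000)   # symbolized HVAC
mi_curve = lagged_mi(wbe, hvac, max_lag=30)
\end{verbatim}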
Figure~\ref{Figure14:1} shows the energy disaggregation of HVAC, LIGHTS, APPL, and MELS using STPN and STPN+convex programming in April. In this month, the energy consumption of HVAC is the most significant, accounting for the largest percentage of WBE. Strong prediction capability of the STPN can be observed from the plots, and on top of that, STPN+convex programming is able to improve the STPN performance, which is attributed to the constraint imposed in the convex programming. It can also be seen from Figure~\ref{Figure15:1} that the total energy consumption predicted by STPN without convex programming is worse than the STPN+convex programming result, for which the optimal disaggregation appears to be achieved. However, the prediction performance for APPL and LIGHTS is slightly worse than for HVAC and MELS because they account for a lower percentage of WBE, which is also suggested by Figure~\ref{Figure16:1}. This implies that, for energy disaggregation, more accurate prediction can be achieved when a load component (i.e., HVAC, LIGHTS, APPL, or MELS) accounts for a more significant percentage of WBE. It is seen from Figure~\ref{Figure16:1} that the prediction for the last two days of the fourth week is worse, though it still catches the trend; this may be attributed to transient external factors on those two days, such as weather and occupancy, affecting the energy consumption. A similar observation can be made from Figure~\ref{Figure17:1}: the optimal disaggregation is achieved via STPN+convex programming. For a direct visual inspection of the difference in prediction capability, Figure~\ref{Figure18:1} and Figure~\ref{Figure19:1} reveal that STPN+convex programming outperforms STPN alone, as the energy consumption of each component is predicted optimally. These two plots show that the energy prediction difference for STPN and STPN+convex programming is less than 5\%, demonstrating the efficacy and effectiveness of the proposed framework.
\begin{figure} \centering \subfigure[]{\includegraphics[width=0.95\textwidth]{separate_11_april-eps-converted-to.pdf}} \subfigure[]{\includegraphics[width=0.95\textwidth]{separate_11_messy_april-eps-converted-to.pdf}} \caption{\textit{Energy prediction of HVAC, LIGHTS, APPL, and MELS in April 2010 using STPN, STPN+convex programming, FHMM, and CO, shown separately in (b) for better visualization}}\label{Figure14:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{sum_11_april-eps-converted-to.pdf} \caption{\textit{Calculated WBE from disaggregated energy values in April 2010 using STPN, STPN+convex programming, FHMM and CO}}\label{Figure15:1} \end{figure} \begin{figure} \centering \subfigure[]{\includegraphics[width=0.95\textwidth]{separate_11_july-eps-converted-to.pdf}} \subfigure[]{\includegraphics[width=0.95\textwidth]{separate_1_messy_july-eps-converted-to.pdf}} \caption{\textit{Energy prediction of HVAC, LIGHTS, APPL, and MELS in July 2010 using STPN, STPN+convex programming, FHMM, and CO, shown separately in (b) for better visualization}}\label{Figure16:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{sum_11_july-eps-converted-to.pdf} \caption{\textit{Calculated WBE from disaggregated energy values in July 2010 using STPN, STPN+convex programming, FHMM and CO}}\label{Figure17:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{bar_april_1-eps-converted-to.pdf} \caption{\textit{Energy prediction difference of HVAC, LIGHTS, APPL, and MELS in April 2010 among STPN, STPN+convex programming, FHMM and CO}}\label{Figure18:1} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{bar_july_1-eps-converted-to.pdf} \caption{\textit{Energy prediction difference of HVAC, LIGHTS, APPL, and MELS in July 2010 among STPN, STPN+convex programming, FHMM and CO}}\label{Figure19:1} \end{figure} To compare the proposed method with current state-of-the-art techniques in the literature, we evaluate STPN and STPN+convex programming against FHMM and CO. To obtain sufficiently accurate prediction results, the data set is also upsampled for FHMM, with an upsampling factor of 1200, so that the sampling interval becomes 3 seconds; the number of states used is 3. The energy disaggregation results in Figure~\ref{Figure14:1} show that both FHMM and CO perform worse than the proposed method, even though the WBE they predict in Figure~\ref{Figure15:1} looks quite promising. This is because FHMM cannot predict the appearing transient peaks as well as the proposed method, and CO is unable to disaggregate the load components well. A very similar conclusion holds for the month of July. From Figure~\ref{Figure16:1} it is observed that when the energy curves are more oscillatory, the proposed method outperforms FHMM and CO. Both Figures~\ref{Figure15:1} and~\ref{Figure17:1} suggest that the proposed STPN and STPN+convex programming provide better energy prediction in terms of WBE. Results in Figures~\ref{Figure18:1} and~\ref{Figure19:1} and Table~\ref{table2} quantitatively present the differences among the proposed methods (STPN, STPN+convex programming), FHMM, and the combinatorial optimization method. They strengthen the conclusion that STPN and STPN+convex programming yield quite encouraging disaggregation results in NILM.
Hence, the comparison of the proposed methods with FHMM and combinatorial optimization indicates the effectiveness of the STPN-based scheme as an important tool for energy prediction. We also remark on the computational efficiency of the proposed methods, FHMM, and CO. \begin{table}[h!] \centering \caption{Computational information for different methods in April} \label{table2} \begin{tabular}{cccc} \toprule Method & Time (s) & Memory (MB) & Accuracy (MSE)\\ \midrule STPN & 28.74 & 962 & 0.0072\\ STPN+convex programming & 369.64 & 2756 & 0.0070\\ FHMM & 38.10 & 798.67 & 0.0163\\ CO & 11.25 & 769.37 & 0.0564\\ \bottomrule \end{tabular} \end{table} \begin{remark} In this case we also consider the computational time and memory, along with accuracy (MSE), in order to compare the performance of the different methods. The FHMM and combinatorial optimization methods were implemented in an IPython notebook with the NILM toolkit (NILMTK), while STPN and STPN+convex programming were implemented in the MATLAB environment with the CVX package~\cite{grant2008cvx}. The results in Table~\ref{table2} show that STPN requires less time than FHMM but more memory, as the number of states for STPN is larger than for FHMM in this case. The STPN+convex programming approach needs more computational time and memory to run the whole process due to the optimization iterations. FHMM and CO use less memory compared to the proposed schemes. However, in terms of accuracy, STPN outperforms the FHMM and CO approaches, as shown in Table~\ref{table2}; the MSE of FHMM is more than twice that of STPN. Moreover, STPN+convex programming further improves the accuracy obtained from the STPN framework. In summary, the energy prediction method based on the STPN framework can be an effective approach for energy prediction applications. Note that the FHMM and CO codes used here are part of a well-optimized toolbox, and we expect that similar code and platform optimization can bring our proposed methods to a comparable level in terms of memory and time complexity. \end{remark} \section{Conclusions and Future Work}\label{sec:con} This paper presents a novel data-driven framework, the spatiotemporal pattern network (STPN), to predict energy consumption for both supply side and demand side energy systems. While symbolic dynamic filtering performs the discretization and symbolization of continuous domain data for data-level fusion of different variables in a dynamic system, a $D$-Markov machine captures its temporal characteristics. This work employs another PFSA, called the x$D$-Markov machine, to capture the causal dependencies between two time series, and a mutual information based metric is applied to quantify these dependencies. Prediction based on the STPN framework is proposed using expectation, from the symbolic domain to the symbolic and continuous domains. The proposed scheme is validated by two case studies: wind turbine power prediction (supply side energy systems) and non-intrusive load monitoring (demand side energy systems). For wind power prediction, the primary observation made in this paper is that the proposed STPN models can capture the salient spatiotemporal features, and it is demonstrated that causal dependencies decrease with an increase in both spatial distance and temporal lag, as intuitively expected.
Based on this observation, the power prediction for one wind turbine is performed with a high degree of accuracy using the observations from another wind turbine. For non-intrusive load monitoring, the energy disaggregation performance of the proposed STPN framework with and without a convex programming step is evaluated. While the STPN scheme shows that each disaggregated energy component can be predicted significantly better than with state-of-the-art techniques such as FHMM and combinatorial optimization, a convex programming approach based on STPN further improves the prediction performance and achieves an optimized disaggregation by enforcing the constraint that the disaggregated energy values should sum up to the total energy usage. While current efforts focus on applying the proposed techniques to real data and problems, some of the other future research directions include: \begin{enumerate} \item For wind power prediction -- impact analysis of other physical variables, e.g., wind direction, on model quality for wind power prediction; \item For energy disaggregation -- joint state prediction by taking multiple variables into account; \item For energy disaggregation -- weighted factor and penalty term analysis in the convex optimization. \end{enumerate} \section*{Acknowledgement} This work was supported by the National Science Foundation under Grant No. CNS-1464279.
\section{Interpretable User Clustering} \label{sec:clustering} In this section, we study what typical new users are like on Snapchat and how they connect to the social network. We aim to find an interpretable clustering of new users based on their initial behaviors and evolution patterns as they interact with the various functions of a social app and with other users. Moreover, we want to study the correlations between user types and user churn, so as to enable better churn prediction and personalized retention. We also note that, besides churn prediction, interpretable user clustering is crucial for understanding user behaviors and enabling various product designs, which can ultimately lead to different actions towards the essentially different types of users. Therefore, while we focus on the end task of churn prediction, the framework proposed in this work is generally useful for any downstream application that can potentially benefit from the understanding of user types, such as user engagement promotion. \subsection{Challenges} Automatically finding an interpretable clustering of users \textit{w.r.t.~} multi-dimensional time series data poses quite a few challenges, which make canonical algorithms for clustering or feature selection, such as $k$-means and principal component analysis, impractical \cite{han2011data}. {\flushleft \textbf{Challenge 1: Zero-shot discovery of typical user types}}. As we discuss in Section \ref{sec:intro}, users are often heterogeneous. For example, some users might actively share contents, whereas others only passively consume \cite{hu2014we}; some users are social hubs that connect to many friends, while others tend to keep their networks neat and small \cite{kwak2010twitter}. However, for an arbitrary social app, is there a general and systematic framework through which we can automatically discover the user types, without any prior knowledge about possible user types or even the proper number of clusters? {\flushleft \textbf{Challenge 2: Handling correlated multi-dimensional behavior data}}. Users interact with a social app in multiple ways, usually by accessing different functions of the app as well as interacting with other users. Some activities are intuitively highly correlated, such as \textit{chat\_sent} and \textit{chat\_received}, whereas some correlations are less obvious, such as \textit{story\_viewed} and \textit{lens\_sent}. Moreover, even highly correlated activities cannot simply be regarded as the same. For example, users with more chats sent than received are quite different from users with the opposite pattern. Therefore, what is a good way to identify and leverage the correlations among multiple dimensions of behavior data, including both functional and social activities? {\flushleft \textbf{Challenge 3: Addressing noises and outliers}}. User behavior data are always noisy with random activities. An active user might pause using the app for various hidden reasons, and a random event might cause a dormant user to be active for a period of time as well. Moreover, there are always outliers, with extremely high activities or random behavior patterns. A good clustering framework needs to be robust to various kinds of noises and outliers. {\flushleft \textbf{Challenge 4: Producing interpretable clustering results}}. A good clustering result is useless unless the clusters are easily interpretable.
In our scenario, we want the clustering framework to provide insight into user types, which can be readily turned into actionable items to facilitate downstream applications such as fast-response and targeted user retention. \subsection{Methods} To deal with these challenges, we design a robust three-step clustering framework. Consider a total of two features, namely, $\mathbf{f}^1$ (\textit{chat\_received}) and $\mathbf{f}^2$ (\textit{chat\_sent}), for four users, $u_1$, $u_2$, $u_3$ and $u_4$. Figure \ref{fig:clus} illustrates a toy example of our clustering process, with the details described in the following. {\flushleft \textbf{Step 1: Single-feature clustering}}. For each feature, we apply $k$-means with Silhouette analysis \cite{rousseeuw1987silhouettes} to automatically decide the proper number of clusters $K$ and assign data into different clusters (a sketch of this selection procedure is given after the step descriptions). For example, as illustrated in Figure \ref{fig:clus}, for \textit{chat\_received}, we have the feature of four users $\{\mathbf{f}^1_1, \mathbf{f}^1_2, \mathbf{f}^1_3, \mathbf{f}^1_4\}$, each of which is a $4$-dimensional vector (\textit{i.e.}, $\mathbf{f}=\{\mu,l,q,\phi\})$. Assume the $K$ chosen by the algorithm is 3. Then we record the cluster belongingness, \textit{e.g.}, $\{l^1_1=1, l^1_2=1, l^1_3=2, l^1_4=3\}$, and cluster centers $\{\mathbf{c}^1_1, \mathbf{c}^1_2, \mathbf{c}^1_3\}$. Let us also assume that for \textit{chat\_sent}, we have $K=2$, $(l^2_1=1, l^2_2=1, l^2_3=1, l^2_4=2)$ and $\{\mathbf{c}^2_1, \mathbf{c}^2_2\}$. This process helps us find meaningful types of users \textit{w.r.t.~} every single feature, such as users having high volumes of \textit{chat\_received} all the time versus users having growing volumes of this same activity day by day. \begin{figure}[h!] \includegraphics[width=0.8\linewidth]{figures/fig_clus.png} \caption{A toy example of our 3-step clustering framework.} \vspace{-15pt} \label{fig:clus} \end{figure} {\flushleft \textbf{Step 2: Feature combination}}. We convert the features of each user into a combination of the features of her nearest cluster center in each feature. Continuing our toy example in Figure \ref{fig:clus}: since user $u_1$ belongs to the first cluster in feature \textit{chat\_received} and the first cluster in feature \textit{chat\_sent}, it is replaced by $\mathbf{x}_1$, which is a concatenation of $\mathbf{c}^1_1$ and $\mathbf{c}^2_1$. $u_2$, $u_3$ and $u_4$ are treated in the same way. This process largely reduces the influence of noises and outliers, because every single feature is replaced by that of a cluster center. {\flushleft \textbf{Step 3: Multi-feature clustering}}. We apply $k$-means with Silhouette analysis again on the feature combinations. In the example, the clustering is done on $\{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3, \mathbf{x}_4\}$. The algorithm explores all existing combinations of single-dimensional cluster centers, which record the typical values of combined features. Therefore, the multi-feature clustering results are the typical combinations of single-dimensional clusters, which are inherently interpretable.
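For Step 1, the silhouette-guided selection of $K$ can be sketched as follows, assuming Python with \texttt{scikit-learn}; the function name \texttt{single\_feature\_clustering} and the toy data are illustrative placeholders, and the range of $k$ mirrors the 2-to-6 search reported in our results.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def single_feature_clustering(F, k_range=range(2, 7)):
    """Cluster the per-user parameter vectors of one feature,
    choosing K by the largest average Silhouette score."""
    best = None
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(F)
        score = silhouette_score(F, km.labels_)
        if best is None or score > best[0]:
            best = (score, km)
    _, km = best
    return km.labels_, km.cluster_centers_

# toy usage: 100 users, 4 parameters (mu, l, q, phi) for one feature
F = np.random.rand(100, 4)
labels, centers = single_feature_clustering(F)
\end{verbatim}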
\subsection{Results} {\flushleft \textbf{Clustering on single features}}. We first present our single-feature clustering results on each of the users' 12-dimensional behaviors. Figure \ref{fig:func_res} provides the detailed results on \textit{lens\_sent} as an example. The results on other features differ slightly in the numbers of clusters, the shapes of the curves, and the numbers of users in each cluster; however, the method used is the same, so they are omitted to keep the presentation concise. \begin{figure}[h!] \vspace{-5pt} \centering \subfigure[Parameter dist.]{ \includegraphics[width=0.13\textwidth]{figures/fig_func_res_a.pdf}} \subfigure[Activity patterns.]{ \includegraphics[width=0.33\textwidth]{figures/fig_func_res_b.png}} \vspace{-5pt} \caption{\textbf{4 types of users shown with different colors.}} \vspace{-5pt} \label{fig:func_res} \end{figure} Figure \ref{fig:func_res} (a) shows the four parameters we compute over the 14-day period on users' \textit{lens\_sent} activities, as they distribute into the four clusters detected by the $k$-means algorithm. The number of clusters is automatically selected as the one with the largest average Silhouette score as $k$ is iterated from 2 to 6, which corresponds to clusters that are relatively far away from each other while having similar sizes. Figure \ref{fig:func_res} (b) shows the corresponding four types of users with different activity patterns on \textit{lens\_sent}. The first type of users (red) have no activity at all, while the second type (green) have stable activities during the two weeks. Type 3 users (blue) are only active in the beginning, and type 4 users (black) are occasionally active. These activity patterns are indeed well captured by the volume and burstiness of their daily measures, as well as the shape of the curves of their aggregated measures. Therefore, the clusters are highly interpretable: by looking at the clustered curves, we can easily understand the activity patterns of each type of user. {\flushleft \textbf{Clustering on network properties}}. For single-feature clustering on network properties, as we get four clusters on ego-network size and three clusters on density, there is a total of $4\times 3=12$ possible combinations of different patterns. However, when putting these two features of network properties together with the ten features of daily activities through our multi-feature clustering framework, we find that our new users form only three typical types of ego-networks. This result demonstrates the efficacy of our algorithm, since it automatically finds that only three out of the twelve combinations are essentially typical. Figure \ref{fig:net_res} illustrates three example structures. The ego-networks of type 1 users have relatively large sizes and high densities; type 2 users have relatively small ego-network sizes and low densities; users of type 3 have minimal values on both measures. \begin{figure}[h!] \includegraphics[width=0.9\linewidth]{figures/fig_net_res.pdf} \caption{\small Examples of 3 types of ego-network structures.} \vspace{-5pt} \label{fig:net_res} \end{figure} Through further analysis, we find that these three types of new users, clustered by our algorithm based on the features of their ego-networks, have strong correlations with their positions in the whole social network. Precisely, if we define the network core as the top 5\% of users that have the most friends in the entire network, and depict the whole network as a jellyfish structure as shown in Figure \ref{fig:jelly}, we can exactly pinpoint each of the three user types onto the tendrils, outsiders, and disconnected parts. Specifically, type 1 users are mostly tendrils with about 58\% of direct friends in the core; type 2 users are primarily outsiders with about 20\% of direct friends in the core; type 3 users are mostly disconnected with almost no friends in the core.
This result again demonstrates that our clustering framework can efficiently find important user types. \begin{figure}[h!] \includegraphics[width=0.7\linewidth]{figures/fig_jelly.pdf} \caption{The whole network depicted into a jellyfish shape.} \vspace{-5pt} \label{fig:jelly} \end{figure} {\flushleft \textbf{Clustering on all behaviors}}. Combining new users' network properties with their daily activities, we finally come up with six cohorts of user types, which are also automatically discovered by our algorithm without any prior knowledge. Looking into the user clusters, we find their different combinations of features quite meaningful, regarding both users' daily activities and ego-network structures. Subsequently, we are able to give the user types intuitive names, which are shown in Table \ref{tab:type}. Figure \ref{fig:type} (a) shows the portions of the six types of new users. We say a user churns if there is no activity at all in the second week after account registration. To get more insight from the user clustering results and to motivate an efficient churn prediction model, we also analyze the churn rate of each type of users and present the results in Figure \ref{fig:type} (b). The results are also very intuitive. For example, All-star users are very unlikely to churn, while Swipers and Invitees are the most likely to churn. \begin{table}[h] \footnotesize \centering \begin{tabular}{|c|c|c|c|} \hline ID&Type Name&Daily Activities & Ego-network Type\\ \hline 0 & All-star & Stable active chat, snap, story \& lens & Tendril\\ \hline 1 & Chatter & Stable active chat \& snap, few other acts & Tendril\\ \hline 2 & Bumper & Unstable chat \& snap, few other acts & Tendril\\ \hline 3 & Sleeper & Inactive & Disconnected\\ \hline 4 & Swiper & Active lens swipe, few other acts & Disconnected\\ \hline 5 & Invitee & Inactive & Outsider\\ \hline \end{tabular} \caption{\label{tab:type}\textbf{\small 6 types of new users and their characteristics.}} \vspace{-5pt} \end{table} \begin{figure}[h!] \centering \subfigure[Portions]{ \includegraphics[width=0.47\linewidth]{figures/fig_type.pdf}} \subfigure[Churn rates]{ \includegraphics[width=0.47\linewidth]{figures/fig_churn.png}} \caption{\textbf{Portions and churn rates of the six new user types. The y-axis is rescaled to not show the absolute values.}} \vspace{-5pt} \label{fig:type} \end{figure} Note that our new user clustering results are highly intuitive and at the same time provide a lot of valuable insights. For example, the main differences between All-star users and Chatters are their activities on \textit{story} and \textit{lens}, which are the additional functions of Snapchat. Being active in using these functions indicates a much lower churn rate. The small group of Swipers is notable too, since they seem to only try out the lenses a lot without utilizing any other functions of the app, which is associated with a very high churn rate. Quite a lot of new users seem to be invited to the app by their friends, but they are highly likely to quit if they do not interact with their friends, explore the app functions, or connect to core users. Insights like these are highly valuable for user modeling, growth, retention and so on. Although we focus our study on Snapchat data in this paper, the clustering pipeline we develop is general and can be applied to any online platform with multi-dimensional user behavior data. The code of this pipeline has also been made publicly available.
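To make Steps 2 and 3 of the framework concrete as well, the following is a condensed sketch in the same style, reusing the hypothetical \texttt{single\_feature\_clustering} helper from the earlier sketch; the function name \texttt{cluster\_users} and the toy data are again illustrative placeholders.
\begin{verbatim}
import numpy as np

def cluster_users(features):
    """features: list of per-feature matrices, each of shape
    (n_users, 4). Step 2: replace each user's feature by its cluster
    center; Step 3: cluster the concatenated center vectors."""
    combined = []
    for F in features:
        labels, centers = single_feature_clustering(F)  # Step 1
        combined.append(centers[labels])                # Step 2
    X = np.hstack(combined)          # (n_users, 4 * n_features)
    return single_feature_clustering(X)                 # Step 3

# toy usage: 12 behavioral features for 100 users
features = [np.random.rand(100, 4) for _ in range(12)]
type_labels, type_centers = cluster_users(features)
\end{verbatim}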
\section{Conclusions} \label{sec:con} In this paper, we conduct an in-depth analysis of Snapchat's new user behavior data and develop \textit{ClusChurn}, a coherent and robust framework for interpretable new user clustering and churn prediction. Our study provides valuable insights into new user types, based on their daily activities and network structures, and our model can accurately predict user churn jointly with user types, at large scale, from limited data on new users' initial behaviors after joining Snapchat. While this paper focuses on the Snapchat data as a comprehensive example, the techniques developed here can be readily leveraged for other online platforms, where users interact with the platform functions as well as other users. We deployed \textit{ClusChurn} in Snap Inc.~to deliver real-time data analysis and prediction results to benefit multiple production applications including user modeling, growth, and retention. Future work includes, but is not limited to, the study of user ego-network heterogeneity, where we hope to understand how different types of users connect with each other, as well as the modeling of user evolution patterns, where we aim to study how users evolve among different types and how such evolution influences their activeness and churn rates. \section{Large-Scale Data Analysis} \label{sec:data} To motivate our study on user clustering and churn prediction, and to gain insight into proper model design choices, we conduct an in-depth data analysis on a large real-world dataset from Snapchat. Sensitive numbers are masked for all data analysis within this paper. \subsection{Dataset} We collect the anonymous behavior data of all new users who registered their accounts during the two weeks from August 1, 2017, to August 14, 2017, in a particular country. We choose this dataset because it is a relatively small and complete network, which facilitates our in-depth study of users' daily activities and interactions with the whole network. There are a total of 0.5M new users registered in this period, and we also collect the approximately 40M remaining existing users, with a total of approximately 700M links, in this country to form the whole network. \begin{table}[h] \small \centering \begin{tabular}{|c|c|c|} \hline ID&Feat. Name&Feat. Description\\ \hline 0&chat\_received&\# textual messages received by the user\\ \hline 1&chat\_sent&\# textual messages sent by the user\\ \hline 2&snap\_received&\# snap messages received by the user\\ \hline 3&snap\_sent&\# snap messages sent by the user\\ \hline 4&story\_viewed&\# stories viewed by the user\\ \hline 5&discover\_viewed&\# discovers viewed by the user\\ \hline 6&lens\_posted&\# lenses posted to stories by the user\\ \hline 7&lens\_sent&\# lenses sent to others by the user\\ \hline 8&lens\_saved&\# lenses saved to devices by the user\\ \hline 9&lens\_swiped&\# lenses swiped in the app by the user\\ \hline \end{tabular} \caption{\label{tab:act}\textbf{\small Daily activities we collect for users on Snapchat.}} \vspace{-5pt} \end{table} We leverage two types of features associated with users, \textit{i.e.}, their daily activities and ego-network structures. Both types of data are collected during the two-week time window after each user's account registration. Table \ref{tab:act} provides the details of the daily activity data we collect, which come from users' interactions with some of the core functions of Snapchat: \textit{chat, snap, story, lens}.
We also collect each user's complete ego-network, which is formed by her and her direct friends. The links in the networks are bi-directional friendships on the social app. For each user, we compute the following two network properties and use them as a description of her ego-network structure. \begin{itemize} \item Size: the number of nodes, which describes how many friends a user has. \item Density: the number of actual links divided by the number of all possible links in the network. It describes how densely a user's friends are connected. \end{itemize} In summary, given a set of $N$ users $\mathcal{U}$, for each user $u_i \in \mathcal{U}$, we collect her 10-dimensional daily activities plus 2-dimensional network properties, to form a 12-dimensional time series $\mathbf{A}_i$. The length of $\mathbf{A}_i$ is 14, since we collect each new user's behavioral data during the first two weeks after her account registration. Therefore, $\mathbf{A}_i$ is a matrix of size $12\times14$. \subsection{Daily Activity Analysis} Figure \ref{fig:act} (a) shows an example of daily measures on users' \textit{chat\_received} activities. Each curve corresponds to the number of chats received by one user every day during the first two weeks after her account registration. The curves are very noisy and bursty, which poses challenges to most time series models like HMM (Hidden Markov Models), as the critical information is hard to capture automatically. Therefore, we compute two parameters, \textit{i.e.}, $\mu$, the mean of the daily measures, to capture the activity volume, and $l$, the $lag(1)$ (lag-one autocorrelation) of the daily measures, to capture the activity burstiness. Both metrics are commonly used in time series analysis \cite{box2015time}. \begin{figure}[h!] \centering \subfigure[Daily Measures]{ \includegraphics[width=0.23\textwidth]{figures/fig_act_a.png}} \subfigure[Aggregated Measures]{ \includegraphics[width=0.23\textwidth]{figures/fig_act_b.png}} \vspace{-5pt} \caption{\textbf{Activities on \textit{chat\_received} in the first two weeks. Y-axis is masked in order not to show the absolute values.}} \vspace{-5pt} \label{fig:act} \end{figure} Figure \ref{fig:act} (b) shows the aggregated measures on users' \textit{chat\_received} activities. Every curve corresponds to the total number of chats received by one user up to each day after her account registration. The curves have different steepness and inflection points. Motivated by a previous study on social network user behavior modeling \cite{ceyhan2011dynamics}, we fit a sigmoid function $y(t)=\frac{1}{1+e^{-q(t-\phi)}}$ to each curve, and use the two parameters $q$ and $\phi$ to capture the shapes of the curves. \begin{figure}[h!] \includegraphics[width=0.8\linewidth]{figures/fig_sigmoid.pdf} \vspace{-5pt} \caption{Main curve shapes captured by sigmoid functions with different parameter configurations.} \vspace{-5pt} \label{fig:sigmoid} \end{figure} Figure \ref{fig:sigmoid} shows 4 example shapes of curves captured by the sigmoid function with different $q$ and $\phi$ values. After such feature engineering on the time series data, each of the 12 features is described by a vector of 4 parameters $\mathbf{f}=\{\mu, l, q, \phi\}$. We use $\mathbf{F}_i$ to denote the feature matrix of $u_i$, and $\mathbf{F}_i$ is of size $12\times4$.
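The shape parameters $q$ and $\phi$ can be estimated with a standard nonlinear least-squares fit; the sketch below assumes Python with \texttt{scipy}, and the normalization of the aggregated curve to $[0,1]$ before fitting is our illustrative assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, q, phi):
    return 1.0 / (1.0 + np.exp(-q * (t - phi)))

def fit_shape(daily_counts):
    """Fit y(t) = 1 / (1 + exp(-q (t - phi))) to the normalized
    aggregated measures of one activity over the 14 days."""
    y = np.cumsum(daily_counts).astype(float)
    y /= max(y[-1], 1.0)            # normalize to [0, 1]
    t = np.arange(1, len(y) + 1)
    (q, phi), _ = curve_fit(sigmoid, t, y,
                            p0=[1.0, len(y) / 2.0], maxfev=5000)
    return q, phi

# toy usage: a user active mainly in the first week
q, phi = fit_shape([5, 7, 6, 4, 3, 1, 1, 0, 0, 0, 0, 0, 0, 0])
\end{verbatim}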
\subsection{Network Structure Analysis} In addition to daily activities, we also study how new users connect with other users. The 0.5M new users in our dataset directly make friends with a subset of a few million users in the whole network during the first two weeks after their account registration. We mask the absolute number of this group of users and use $\kappa$ to denote it. We find these $\kappa$ users very interesting: there are about 114M links formed among them and 478M links to them, while fewer than 700M links exist in the whole network of about 40M users in the country. This leads us to believe that there must be a small group of well-connected popular users in the network, which we call the \textit{core} of the network, and that this core overlaps with many of the $\kappa$ direct friends of new users. \begin{figure}[h!] \centering \subfigure[Overlapping of core and the $\kappa$ users]{ \includegraphics[width=0.23\textwidth]{figures/fig_core_a.png}} \subfigure[Degree distribution of the $\kappa$ users]{ \includegraphics[width=0.23\textwidth]{figures/fig_core_b.png}} \vspace{-5pt} \caption{\textbf{Most of the $\kappa$ users are within the core.}} \vspace{-5pt} \label{fig:core} \end{figure} To validate this assumption, we define the core of a social network as the set of users with the most friends, \textit{i.e.}, the nodes with the highest degrees, motivated by earlier works on social network analysis \cite{shi2008very}. Figure \ref{fig:core} (a) shows the percentage of the $\kappa$ users within the core as we increase the size of the core from the top 1\% of nodes with the highest degrees to the top 10\%. Figure \ref{fig:core} (b) shows the degrees of the $\kappa$ users drawn together with all other nodes, ordered by degree on the x-axis. As we can see, 44\% of the $\kappa$ users are among the top 5\% of nodes with the highest degrees, and 67\% of them are among the top 10\%. This result confirms our hypothesis that most links created by new users at the beginning of their journeys are around the network core. Since the $\kappa$ direct friends do not entirely overlap with the core, this also motivates us to study how differently new users connect to the core, and what implications such differences have for user clustering and churn prediction. \section{Introduction} \label{sec:intro} Driven by the widespread usage of the internet and mobile devices, hundreds of online systems are being developed every year, ranging from general platforms like social media and e-commerce websites to vertical services including news, movie and place recommenders. As the market grows rapidly, the competition is severe too, with every platform striving to attract and keep more users. While many of the world's best researchers and engineers are working on smarter advertisements to expand businesses by acquisition \cite{harris2006method, moriarty2014advertising}, retention has received less attention, especially from the research community. The fact is, however, that acquiring new users is often much more costly than retaining existing ones\footnote{https://www.invespcro.com/blog/customer-acquisition-retention}. With the rapid evolution of the mobile industry, the business need for better user retention is larger than ever before\footnote{http://info.localytics.com/blog/mobile-apps-whats-a-good-retention-rate}, for which \textit{accurate}, \textit{scalable} and \textit{interpretable} churn prediction plays a pivotal role\footnote{https://wsdm-cup-2018.kkbox.events}. Churn is defined as a user quitting the usage of a service.
Existing studies on user churn generally take one of two approaches: data analysis and data-driven models. The former is usually done through user surveys, which can provide valuable insights into users' behaviors and mindsets, but such approaches require significant human effort and are hard to scale, and thus are not suitable for today's ubiquitous mobile apps. The development of large-scale data-driven models has largely improved the situation, but no existing work has looked into user behavior patterns to find the reasons behind user churn. As a consequence, the prediction results are less interpretable and thus cannot fundamentally solve the problem of user churn. In this work, we take the anonymous data from Snapchat as an example to systematically study the problem of interpretable churn prediction. We notice that online platform users can be highly heterogeneous. For example, they may use (and leave) a social app for different reasons\footnote{http://www.businessofapps.com/data/snapchat-statistics}. Through extensive data analysis on users' multi-dimensional temporal behaviors, we find it intuitive to capture this heterogeneity and assign users into different clusters, which can also indicate the various reasons behind their churn. Motivated by such observations, we develop \textit{ClusChurn}, a framework that jointly models the types and churn of new users (Section \ref{sec:data}). To understand user types, we encounter the challenges of automatically discovering interpretable user clusters, addressing noises and outliers, and leveraging correlations among features. As a series of treatments, we apply careful feature engineering and adopt $k$-means with Silhouette analysis \cite{rousseeuw1987silhouettes} in a three-step clustering mechanism. The results include six intuitive user types, each having typical patterns in both daily activities and ego-network structures. In addition, we find these clustering results highly indicative of user churn, so they can be directly leveraged to generate type labels for users in an unsupervised manner (Section \ref{sec:clustering}). To enable interpretable churn prediction, we propose to jointly learn user types and user churn. Specifically, we design a novel deep learning framework based on LSTM \cite{hochreiter1997long} and attention \cite{denil2012learning}. Each LSTM is used to model users' temporal activities, and we parallelize multiple LSTMs through attention to focus on particular user types. Extensive experiments show that our joint learning framework delivers superior performance compared with baselines on churn prediction with limited user activity data, while it also provides valuable insights into the reasons behind user churn, which can be leveraged to fundamentally improve retention (Section \ref{sec:prediction}). Note that, although we focus on the example of Snapchat data, our \textit{ClusChurn} framework is general and can easily be applied to any online platform with user behavior data. A prototype implementation of \textit{ClusChurn} based on PyTorch is released on Github\footnote{https://github.com/yangji9181/ClusChurn}. The main contributions of this work are summarized as follows: \begin{enumerate} \item Through real-world large-scale data analysis, we draw attention to the problem of interpretable churn prediction and propose to jointly model user types and churn. \item We develop a general automatic new user clustering pipeline, which provides valuable insights into different user types.
\item Enabled by our clustering pipeline, we further develop a prediction pipeline to jointly predict user types and user churn, and demonstrate its interpretability and superior performance through extensive experiments. \item We deploy \textit{ClusChurn} as an analytical pipeline to deliver real-time data analysis and prediction to multiple relevant teams within Snap Inc. It is also general enough to be easily adopted by any online system with user behavior data. \end{enumerate} \section{Fast-Response Churn Prediction} \label{sec:prediction} Motivated by our user type analysis and the correlations between user types and churn, we aim to develop an efficient algorithm for interpretable new user churn prediction. Our analysis of real data shows that new users are most likely to churn at the very beginning of their journey, which urges us to develop an algorithm for fast-response churn prediction. The goal is to accurately predict the likelihood of churn by looking at users' very initial behaviors, while also providing insight into possible reasons behind their churn. \subsection{Challenges} New user churn prediction with high accuracy and limited data is challenging mainly for the following three reasons. {\flushleft \textbf{Challenge 1: Modeling sequential behavior data.}} As we discuss in Section \ref{sec:data}.1, we model each new user by their initial interactions with different functions of the social app as well as their friends, and we collect a 12-dimensional time series $\mathbf{A}_i$ for each new user $u_i \in \mathcal{U}$. However, unlike for user clustering, where we leverage the full two-week behavior data of each user, for fast-response churn prediction we only focus on users' very limited behavior data, \textit{i.e.}, from the initial few days. The data are naturally sequential with temporal dependencies and variable lengths. Moreover, the data are very noisy and bursty. These characteristics pose great challenges to traditional time series models like HMM. {\flushleft \textbf{Challenge 2: Handling sparse, skewed and correlated activities.}} The time series activity data generated by each new user are multi-dimensional. As we show in Section \ref{sec:clustering}, such activity data are very sparse. For example, \textit{Chatters} are usually only active in the first four dimensions as described in Table \ref{tab:act}, while \textit{Sleepers} and \textit{Invitees} are inactive in most dimensions. Even \textit{All-star} users have a lot of 0's in certain dimensions. Besides the many 0's, the distributions of activity counts are highly skewed instead of uniform, and many activities are correlated, as we discuss in Section \ref{sec:clustering}.1. {\flushleft \textbf{Challenge 3: Leveraging underlying user types.}} As shown in our new user clustering analysis and highlighted in Figure \ref{fig:type} (b), our clustering of new users is highly indicative of user churn and should be leveraged for better churn prediction. However, as we only get access to the initial several days instead of the whole two weeks of behaviors, user types are also unknown and should be jointly inferred with user churn. Therefore, how to design a proper model that can simultaneously learn the patterns for predicting user types and user churn poses a unique technical challenge that cannot be solved by existing approaches. \subsection{Methods and Results} We propose a series of solutions to treat the challenges listed above. Together they form our efficient churn prediction framework.
We also present comprehensive experimental evaluations for each proposed model component. Our experiments are done on an anonymous internal dataset of Snapchat, which includes 37M users and 697M bi-directional links. The metrics we compute include accuracy, precision, and recall, which are commonly used for churn prediction and multi-class classification \cite{tsoumakas2006multi}. The baselines we compare with are logistic regression and random forest, which are the standard and most widely practiced models for churn prediction and classification. We randomly split the new user data into training and testing sets at a ratio of 8:2, 10 times, and run all compared algorithms on the same splits to take the average performance for evaluation. All experiments are run on a single machine with a 12-core 2.2GHz CPU and no GPU, although the runtimes of our neural network models can be largely improved on GPUs. {\flushleft \textbf{Solution 1: Sequence-to-sequence learning with LSTM.}} The intrinsic problem of user behavior understanding is sequence modeling. The goal is to convert sequences of arbitrary lengths with temporal dependencies into a fixed-length vector for further usage. To this end, we propose to leverage the state-of-the-art sequence-to-sequence model, that is, LSTM (Long-Short Term Memory) from the family of RNNs (Recurrent Neural Networks) \cite{hochreiter1997long, mikolov2010recurrent}. Specifically, we apply a standard multi-layer LSTM to the multi-dimensional input $\mathbf{A}$. Each layer of the LSTM computes the following functions \begin{align} i_t&=\sigma(W_i\cdot[h_{t-1}, x_t]+b_i) \\\nonumber f_t&=\sigma(W_f\cdot[h_{t-1}, x_t]+b_f) \\\nonumber c_t&=f_t*c_{t-1}+i_t*\text{tanh}(W_c\cdot[h_{t-1}, x_t]+b_c) \\\nonumber o_t&=\sigma(W_o\cdot[h_{t-1}, x_t]+b_o) \\\nonumber h_t&=o_t*\text{tanh}(c_t) \end{align} where $t$ is the time step in terms of days, $h_t$ is the hidden state at time $t$, $c_t$ is the cell state at time $t$, $x_t$ is the hidden state of the previous layer at time $t$, with $x_t=a_{\cdot t}$ for the first layer, and $i_t$, $f_t$, $o_t$ are the input, forget and output gates, respectively. $\sigma$ is the sigmoid function $\sigma(x)=1/(1+e^{-x})$. Dropout is also applied to avoid overfitting. We use $\Theta_l$ to denote the set of parameters in all LSTM layers. A linear projection with a sigmoid function is connected to the output of the last LSTM layer to produce the user churn prediction as \begin{align} \hat{y} = \sigma(\mathbf{W}_c o_T+\mathbf{b}_c). \label{eq:linear} \end{align} We use $\Theta_c$ to denote the parameters in this layer, \textit{i.e.}, $\mathbf{W}_c$ and $\mathbf{b}_c$. Unlike standard methods for churn prediction such as logistic regression or random forest, LSTM is able to model user behavior data as time series and capture the evolution of user activities by recognizing the intrinsic temporal dependencies. Furthermore, compared with standard time series models like HMM, LSTM is good at capturing both long-term and short-term dependencies within sequences of variable lengths. When the lengths are short, LSTM acts similarly to a basic RNN \cite{mikolov2010recurrent}, but when more user behaviors become available, LSTM is expected to excel.
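A minimal PyTorch sketch of this architecture is given below; the output size of 64 follows the text, while the number of layers, dropout rate, and class name \texttt{ChurnLSTM} are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class ChurnLSTM(nn.Module):
    """Multi-layer LSTM over daily activity vectors, followed by a
    linear projection with sigmoid for churn prediction."""
    def __init__(self, n_features=12, hidden=64, layers=2, dropout=0.5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=layers,
                            batch_first=True, dropout=dropout)
        self.head = nn.Linear(hidden, 1)

    def forward(self, A):           # A: (batch, n_days, n_features)
        out, _ = self.lstm(A)
        o_T = out[:, -1, :]         # output at the last observed day
        return torch.sigmoid(self.head(o_T)).squeeze(-1)

# toy usage: 32 users, first 5 days of 12-dimensional activities
model = ChurnLSTM()
y_hat = model(torch.randn(32, 5, 12))  # churn probabilities in (0, 1)
\end{verbatim}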
Figure \ref{fig:exp} (a) shows the performances of the compared models. The length of the output sequence of the LSTM is empirically set to 64. In the experiments, we vary the amount of user behavior data the models get access to and find that more days of behavior data generally lead to better prediction accuracy. We can also see that new users' initial activities in the first few days are more significant in improving the overall accuracy. A simple LSTM model can outperform all compared baselines by a substantial margin. The runtime of LSTM on CPU is within ten times the runtimes of the other baselines, and it can be significantly improved on GPUs. \begin{figure*}[h!] \centering \vspace{-20pt} \subfigure[Single LSTM]{ \includegraphics[width=0.235\textwidth]{figures/fig_exp1.png}} \subfigure[Activity embedding]{ \includegraphics[width=0.235\textwidth]{figures/fig_exp2.png}} \subfigure[Parallel LSTMs]{ \includegraphics[width=0.235\textwidth]{figures/fig_exp3.png}} \subfigure[User type prediction]{ \includegraphics[width=0.26\textwidth]{figures/fig_exp4.pdf}} \caption{\textbf{Comprehensive experimental results on our churn prediction framework compared with various baseline methods.}} \label{fig:exp} \end{figure*} {\flushleft \textbf{Solution 2: LSTM with activity embedding.}} To deal with sparse, skewed and correlated activity data, we propose to add an activity embedding layer in front of the standard LSTM layer. Specifically, we connect a fully connected feedforward neural network to the original daily activity vectors, which converts users' sparse activity features of each day into distributional activity embeddings, while deeply exploring the skewness and correlations of multiple features through linear combinations and non-linear transformations. Specifically, we have \begin{align} e_{\cdot t} = \psi^H(\ldots\psi^2(\psi^1(a_{\cdot t}))\ldots), \end{align} where \begin{align} \psi^h(e) = \text{ReLU}(W_e^h \text{Dropout}(e)+b^h_e). \end{align} $H$ is the number of hidden layers in the activity embedding network. $\Theta_e$ is the set of parameters in these $H$ layers. With the activity embedding layers, we simply replace $\mathbf{A}$ by $\mathbf{E}$ as the input of the first LSTM layer, with the rest of the architecture unchanged. Figure \ref{fig:exp} (b) shows the performances of LSTM with activity embedding with varying numbers of embedding layers and embedding sizes. The length of the output sequence of the LSTM is kept at 64. The overall performances are significantly improved with one single layer of fully connected non-linear embedding (\textit{LSTM+1}), while more layers (\textit{e.g.}, \textit{LSTM+2}) and larger embedding sizes tend to yield similar performances. The results are intuitive, because a single embedding layer is usually sufficient to deal with the sparsity, skewness, and correlations of daily activity data. We do not observe significant model overfitting, due to the dropout technique and the large size of our data compared with the number of model parameters. {\flushleft \textbf{Solution 3: Parallel LSTMs with joint training.}} To further improve our churn prediction, we pay attention to the underlying new user types. The idea is that, for users in the training set, we have their two-week behavior data, so besides computing their churn labels $y$ based on their second-week activities, we can also compute their user types $\mathbf{t}$ with our clustering framework. For users in the testing set, we can then compare their initial behaviors with those in the training set to infer their user types, and leverage the correlation between user types and churn for better churn prediction. To implement this idea, we propose parallel LSTMs with joint training. Specifically, we assume there are $K$ user types.
$K$ can be either chosen automatically by our clustering framework or set to specific values. Then we jointly train $K$ sub-LSTMs on the training set. Each sub-LSTM is good at modeling one type of user. We parallelize the $K$ sub-LSTMs and merge them through attention \cite{denil2012learning} to jointly infer hidden user types and user churn. \begin{figure}[h!] \includegraphics[width=1\linewidth]{figures/fig_plstm.pdf} \caption{\small Parallel LSTMs with user type attention.} \label{fig:plstm} \end{figure} As shown in Figure \ref{fig:plstm}, for each user, the input sequence of activity embedding vectors $\mathbf{E}$ is put into $K$ sub-LSTMs in parallel to generate $K$ \textit{typed sequences}: \begin{align} \mathbf{s}_k = \text{LSTM}_k(\mathbf{E}). \end{align} To differentiate hidden user types and leverage this knowledge to improve churn prediction, we introduce an attention mechanism to generate user behavior embeddings by focusing on their latent types. A positive attention weight $w_k$ is placed on each user type to indicate the probability of the user being of a particular type. We compute $w_k$ as the similarity of the corresponding typed sequence $\mathbf{s}_k$ and a global unique \textit{typing vector} $\mathbf{v}_t$, which is jointly learned during the training process. \begin{align} w_k=\text{softmax}(\mathbf{v}_t^T \mathbf{s}_k). \end{align} Here \textsf{softmax} is taken to normalize the weights and is defined as $\text{softmax}(x_i)=\frac{\text{exp}(x_i)}{\sum_j \text{exp}(x_j)}$. The user behavior embedding $\mathbf{u}$ is then computed as a sum of the typed sequences weighted by their importance weights: \begin{align} \mathbf{u}=\sum_{k=1}^K w_k \mathbf{s}_k. \end{align} The same linear projection with sigmoid function as in Eq.~\ref{eq:linear} is connected to $\mathbf{u}$ to predict user churn as binary classification. \begin{align} \hat{y} = \sigma(\mathbf{W}_c \mathbf{u}+\mathbf{b}_c). \end{align} To leverage user types for churn prediction, we jointly train with a typing loss $l_t$ and a churn loss $l_c$. For $l_t$, we first compute users' soft clustering labels $\mathbf{Q}$ as \begin{align} q_{ik} = \frac{(1+||\mathbf{f}_i-\mathbf{c}_k||^2)^{-1}}{\sum_j(1+||\mathbf{f}_i-\mathbf{c}_j||^2)^{-1}}. \end{align} $q_{ik}$ is a kernel function that measures the similarity between the feature $\mathbf{f}_i$ of user $u_i$ and the cluster center $\mathbf{c}_k$. It is computed as the probability of assigning $u_i$ to the $k$th type, under the assumption of a Student's $t$-distribution with the degree of freedom set to 1 \cite{maaten2008visualizing}. We use $w_{ik}$ to denote the attention weight of user $u_i$ on type $t_k$. Thus, for each user $u_i$, we compute her typing loss as the cross entropy on $q_{i\cdot}$ and $w_{i\cdot}$. So we have \begin{align} l_t=-\sum_i \sum_k q_{ik}\log(w_{ik}). \end{align} For $l_c$, we simply compute the log loss for binary predictions as \begin{align} l_c=\sum_i -y_i \log\hat{y}_i - (1-y_i)\log(1-\hat{y}_i), \end{align} where $y_i$ is the binary ground-truth churn label and $\hat{y}_i$ is the predicted churn label for user $u_i$, respectively. Subsequently, the overall objective function of our parallel LSTM with joint training is \begin{align} l=l_c+\lambda l_t, \end{align} where $\lambda$ is a hyper-parameter controlling the trade-off between churn prediction and type prediction. We empirically set it to a small value like 0.1 in our experiments.
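A condensed PyTorch sketch of the parallel architecture and the joint objective follows the equations above; the shapes mirror the notation, while hyper-parameters and names such as \texttt{ParallelLSTM} and \texttt{joint\_loss} are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParallelLSTM(nn.Module):
    """K sub-LSTMs merged by attention against a learned typing vector."""
    def __init__(self, n_features=12, hidden=64, K=6):
        super().__init__()
        self.subs = nn.ModuleList(
            nn.LSTM(n_features, hidden, batch_first=True)
            for _ in range(K))
        self.v_t = nn.Parameter(torch.randn(hidden))  # typing vector
        self.head = nn.Linear(hidden, 1)

    def forward(self, E):           # E: (batch, days, n_features)
        # typed sequences s_k: last output of each sub-LSTM
        s = torch.stack([lstm(E)[0][:, -1, :] for lstm in self.subs],
                        dim=1)                     # (batch, K, hidden)
        w = F.softmax(s @ self.v_t, dim=1)         # (batch, K) weights
        u = (w.unsqueeze(-1) * s).sum(dim=1)       # behavior embedding
        y_hat = torch.sigmoid(self.head(u)).squeeze(-1)
        return y_hat, w

def joint_loss(y_hat, y, w, q, lam=0.1):
    """l = l_c + lambda * l_t, with l_t the cross entropy between the
    soft clustering labels q and the attention weights w."""
    l_c = F.binary_cross_entropy(y_hat, y)         # y: float 0/1 labels
    l_t = -(q * torch.log(w + 1e-9)).sum(dim=1).mean()
    return l_c + lam * l_t
\end{verbatim}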
Figure \ref{fig:exp} (c) shows the performances of parallel LSTMs with and without joint training (\textit{PLSTM}+ vs.~\textit{PLSTM}). The only difference between the two frameworks is that \textit{PLSTM} is not trained with the correct user types produced by our clustering framework. In the experiments, we vary the number of clusters and sub-LSTMs and find that joint training is always significantly helpful. The performance of parallel LSTMs with joint training peaks at 3 or 6 sub-LSTMs. While the number 3 may accidentally align with some trivial clusters, the number 6 actually aligns with the six interpretable cohorts automatically chosen by our clustering framework, which illustrates the coherence of our two frameworks and further supports the sanity of splitting the new users into six types. Besides churn prediction, Figure \ref{fig:exp} (d) shows that we can also predict what type a new user is by looking at her initial behaviors rather than the full two-week data, with different precisions and recalls. Our algorithm is good at capturing \textit{All-star}, \textit{Sleeper} and \textit{Invitee} users, due to their distinct behavior patterns. \textit{Swiper} and \textit{Bumper} users are harder to predict, because their activity patterns are less regular. Nonetheless, such fast-response churn predictions with insights into user types can directly enable many actionable production decisions such as fast retention and targeted promotion. \section{Related Work} \label{sec:related} \subsection{User Modeling} The problem of user modeling on the internet has been studied since the early 80's \cite{rich1979user}, with the intuitive consideration of stereotypes that can group and characterize individual users. However, at that time, with scarce data, a lot of knowledge had to be manually collected or blindly inferred, and such labor and uncertainty made it hard for the models to capture stereotypes accurately. While the lack of large datasets and labeled data hindered deep user modeling for decades \cite{yin2015dynamic, webb2001machine}, the recent rapid growth of online systems, ranging from search engines and social media to vertical recommenders, has been collecting vast amounts of data and generating many new problems, which has enabled the resurgence of machine learning in user modeling. Nowadays, there has been a trend of fusing personalization into various tasks, including search \cite{speretta2005personalized, guha2015user}, rating prediction \cite{tang2015user, golbeck2006filmtrust}, news recommendation \cite{abel2013twitter, liu2010personalized, yang2017bi}, and place recommendation \cite{zhang2013igslr, tang2013exploiting, li2016point, yang2017bridging}, to name a few. Most of them design machine learning algorithms to capture users' latent interests and mutual influences. However, while boosting the overall performance on particular tasks, such algorithms work more like black boxes, without yielding interpretable insight into user behaviors. Instead of building a model on vague assumptions about user behavior patterns, in this work we systematically analyze users' activity data and strive to come up with a general framework that can find interpretable user clusters. For any real-world online system as we consider, such interpretation can lead to both better prediction models and more personalized services. \subsection{Churn Prediction} It has been argued for decades that acquiring new users is often more expensive than keeping the old ones \cite{daly2002pricing, gillen2005winning}.
Surprisingly, however, user retention and its core component, churn prediction, have received much less attention from the research community. Only a few papers can be found discussing user churn, modeling it as an evolutionary process \cite{au2003novel} or based on network influence \cite{kawale2009churn}. While significant research efforts have been invested in designing ads to acquire new users, when dealing with existing users, the most common practice is to simply plug in an off-the-shelf logistic regression or random forest model\footnote{http://blog.yhat.com/posts/predicting-customer-churn-with-sklearn.html}\footnote{https://www.dataiku.com/learn/guide/tutorials/churn-prediction.html}. To the best of our knowledge, this is the first effort in the research community to seriously stress the emergence and challenge of churn prediction for online platforms. We are also the first to shift the focus of churn prediction from black box algorithms to deep understanding while maintaining high performance and scalability. \subsection{Network Analysis} Recent algorithms on network analysis are mostly related to the technique of network embedding \cite{perozzi2014deepwalk, grover2016node2vec, tang2015line, cao2015grarep, yang2017cone, yang2015network, niepert2016learning, kipf2016semi, defferrard2016convolutional, yang2018did}. They are efficient in capturing the high-order similarities of nodes regarding both structural distance and random-walk distance in large-scale networks. However, the latent representations of embedding algorithms are hard to interpret, as they do not explicitly capture essential network components such as hubs, cliques, and isolated nodes. Moreover, they are usually static and do not capture the evolution or dynamics of the networks. On the other hand, traditional network analysis mostly focuses on the statistical and evolutionary patterns of networks \cite{epasto2015ego, leskovec2005graphs}, which provides more insights into the network structures and dynamics. For example, the early work on Bowtie networks \cite{arasu2002pagerank} offers key insight into the overall structure of the web; more recent works like \cite{danescu2013no, lo2016understanding, mcauley2013amateurs} help people understand the formation and evolution of online communities; the well-known Facebook study analyzes romantic partnerships via the dispersion of social ties \cite{backstrom2014romantic}; and so on. Such models, while providing exciting analyses of networks, do not help improve general downstream applications. In this work, we combine traditional graph analysis techniques such as ego-network construction, degree and density analysis, and network core modeling with advanced neural network models, to coherently achieve high performance and interpretability on user clustering and churn prediction. \subsection{Deep Learning} While much of the current research on deep learning is focused on image processing, we mainly review deep learning models on sequential data, because we model user activities as multi-dimensional time series. RNNs \cite{mikolov2010recurrent} are widely regarded as the leading models for sequential data. Their recurrent design allows information to persist, and they have been widely used in tasks like sentiment classification \cite{tang2015document}, image captioning \cite{mao2014deep} and language translation \cite{cho2014learning}, as a substitute for traditional models like HMMs.
Among many RNN models, LSTM \cite{hochreiter1997long}, which specifically deals with long-term dependencies, is often the most popular choice. Variants of LSTM have also been developed, such as GRU (Gated Recurrent Unit) \cite{cho2014learning}, bi-directional LSTM \cite{schuster1997bidirectional}, tree LSTM \cite{tai2015improved}, latent LSTM \cite{zaheer2017latent} and parallel LSTM \cite{bouaziz2016parallel}. They tweak the design of LSTM networks to achieve better task performance, but the results remain hard to interpret. In this work, we leverage the power of LSTM, but also care about its interpretability. Rather than using neural networks as black boxes, we integrate them with an interpretable clustering pipeline and leverage the hidden correlations between user types and user churn with an attention mechanism. Attribute embedding is also added to make the model work with sparse, noisy user behavior data.
\section{Acknowledgment} We acknowledge useful conversations with D. Xiao and R. deSousa. Research sponsored by the Laboratory Directors Fund of Oak Ridge National Laboratory. The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} Flexible dynamic utilization methods are needed to manage intermittent resources, such as wind power, during both normal and contingency operating conditions \cite{nyiso}. One way of managing this is to deploy fast-responding storage and to maintain stand-by reserves. However, the amount of storage and reserve needed depends greatly on how well uncertainties are accounted for by utilizing wind forecasts and/or by scheduling wind power offered by the stakeholders. In markets, it becomes necessary to provide the right incentives for flexible operations and participation by flexible technologies, such as fast-responding power plants and storage \cite{ferc2222}. To achieve this, it is necessary to enhance today's operating and planning protocols so that the basic technical tasks (T1. scheduling supply to meet predictable demand; T2. compensating delivery losses; T3. enabling grid delivery; T4. compensating hard-to-predict deviations from schedules; and, T5. providing reserves to ensure reliable service during equipment failures \cite{galiana}) account for growing intermittent resources, and so that the right technologies are incentivized to participate in performing these tasks at value. Needless to say, enhancing today's deterministic operating/market-clearing practices to systematically account for the significant effects of intermittency represents a monumental ongoing challenge. This paper is motivated, in particular, by our observation that integration protocols by the ISOs/markets and the bidding strategies by the stakeholders interactively affect each other. To support this claim, we first elaborate in Section \ref{Sec:dynreserve} on the role of temporal dynamic reserves in ensuring reliability in the changing industry. In Section \ref{Sec:windpower}, we consider the impact of three different integration protocols on wind power stakeholders. As expected, different integration protocols result in different amounts of wind power used and different profits. We formalize temporal aspects of the dynamic reserve needed to account for the effect of intermittent resources. In Section \ref{Sec:temporal}, we propose that there exists a wind power integration protocol which both gives wind power plants the choice to decide on their own sub-objectives and requires near-minimal dynamic reserve. In Section \ref{Sec:best}, we consider protocols for reconciling stakeholders' choices and ISO's objectives. We stress that this is only the case if bidding strategies are truthful, since then the probability that reserves will be used to manage uncertain wind power is minimized. In Section \ref{subsec:monitor}, we stress the key role of a market monitor which, in coordination with the ISO, will provide publicly available statistics about wind power plants meeting their committed schedules. With the right data processing, it becomes possible to estimate break-even outcomes in which profits made by the stakeholders are similar to the cost of dynamic reserves. A Dynamic Monitoring and Decision Systems (DyMoNDS) framework offers the basis for assessing trade-offs between the two \cite{dymonds}. Notably, if a Model Predictive Control (MPC) mode is enabled by the market monitor, the resulting power imbalances are minimized by a combination of MPC-based truthful bidding and/or MPC-based stochastic decision making by the ISOs.
It has already been shown that distributed interactive MPC is scalable and can be used by real-world systems, such as the one in Puerto Rico during extreme, hard-to-predict massive equipment failures \cite{puertorico}. We propose that similar frameworks be explored by the continental ISOs. \section{Temporal aspects of dynamic reserve} \label{Sec:dynreserve} The electric power industry has for a long time been based on a deterministic worst-case approach to ensuring reliability at a reasonable cost. This has led to standards in support of functions $T1$-$T5$. Over time, Supervisory Control and Data Acquisition (SCADA)-enabled computer applications have been introduced for analysis assistance and decision-making needed to perform these functions. In particular, the basic $T1$ function is used to compute feed-forward energy scheduling given the predicted system load. Similarly, functions $T2$ and $T3$ are performed to compensate for delivery losses and to avoid congestion during normal operations. Also, function $T5$, ensuring secure, reliable service during the worst-case $(N-1)$ or $(N-2)$ equipment outages, is done in a preventive, feed-forward manner. Additional spinning or non-spinning reserve and transmission headroom are determined to ensure that even when the worst such contingency takes place, the service to the end users remains uninterrupted for 10 minutes while the contingency is still present, and for 30 minutes even after the equipment status is brought back to normal. Today this is being done in a static preventive manner. These five functions performed in real-time operations are not additive, and different computer applications are used in co-optimization. In particular, joint co-optimization of energy and reserve resources is conducted to perform all functions except function $T4$, which is a feedback Automatic Generation Control (AGC) intended to balance hard-to-predict load deviations within the scheduling time intervals so that frequency is regulated. \footnote{Notably, there exists a grey line between today's AGC Task $T4$ and the reliable service Task $T5$ in systems with intermittent resources.} \begin{figure*}[h] \centering \hspace{-1ex} \subfigure[]{ \raisebox{-1mm}{\includegraphics[width=3.1in]{./peaking}}} \hspace{-0ex} \subfigure[]{ \raisebox{-1mm}{\includegraphics[width=3.1in,scale=.7]{./nyisocapfactor}}} \hspace{-3ex} \caption{(a) Sudden wind gusts \cite{le}. Effect of demand response on reducing needs for fast-responding gas power and/or storage; (b) Wind capacity factors in NYISO in different seasons \cite{nohacapfactor,windnoa}.}\label{fig:damrtmpdf} \end{figure*} In this paper, we discuss challenges presented to these established operational tasks in systems with intermittent resources. To start with, it is difficult to pose the problem as a deterministic worst-case contingency without accounting for the wind fluctuations shown in Figure \ref{fig:profile}. As a result, it is difficult to compute spinning and non-spinning reserves in a preventive manner. Instead, as the wind power outputs are forecasted, (independent) System Operators ((I)SOs) are exploring ways of computing these more dynamically, while at the same time performing the other functions. For example, the New York Independent System Operator (NYISO) has proposed a notion of ``dynamic reserve'' which must account for the simultaneous effect of equipment failure and sudden wind deviations \cite{nyiso}. This is a work in progress.
UL Solutions provides forecasts that are as accurate as possible to assist NYISO with this. The day-ahead market (DAM) utilizes forecasts made 72 hours ahead at hourly granularity, while the real-time market (RTM) uses forecasts made 8 hours ahead at a 5-minute temporal granularity. Managing the risk created by wind power fluctuations can be done in many non-unique ways. \section{Diverse wind power bidding strategies } \label{Sec:windpower} Wind power is fundamentally an intermittent power resource, whose control is challenging. Because of this, it is often modeled as a negative load, but its real power is much harder to predict than power consumed by conventional large-scale loads. The more it fluctuates, the harder it is to balance system supply and demand, and the higher the need for fast-responding resources, including storage, to balance power in the RTM. Given the critical importance of knowing wind power, research has been done on applying various ML/AI tools, in addition to many conventional data-enabled prediction methods. While high-accuracy system load prediction has been quite successful, the prediction of wind power remains a challenging problem. For example, NYISO relies on the third-party company, UL Solutions, to collect wind power forecasts as input to DAM scheduling and RTM clearing \cite{nyiso}. Shown in Figures \ref{fig:profile} and \ref{fig:price-annual} are the system loads and wind outputs currently seen by the NYISO, for representative seasonal days and for the entire year (from Nov 2021 to Oct 2022), respectively \cite{nyisowebsite}. The wind power generated is below 2 GW and fluctuates at different rates in different seasons. We use this real data to illustrate the impact of bidding strategies on stakeholders' profits and the quantities used by the system, as discussed next. \begin{figure}[h] \centering \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=1.7in,scale=.6]{./all_winter_sep}}} \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=1.7in,scale =.6]{./all_spring_sep}}} \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=1.7in,scale=.6]{./all_summer_sep}}} \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=1.7in,scale=.6]{./all_fall_sep}}} \vspace{-2mm} \caption{\small NYISO daily load, wind energy, RTM prices, and DAM prices on different days: (a) Jan 15 (winter); (b) Apr 15 (spring); (c) Jul 15 (summer); (d) Oct 15 (fall) \cite{nyisowebsite}.} \label{fig:profile} \vspace{-2ex} \end{figure} \begin{figure}[h] \centering \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=3in]{./price_load_wind_whole}}} \hspace{-1ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=3in]{./price_whole}}} \vspace{-2ex} \caption{\small (a) Annual load, wind, and RTM prices; (b) Annual DAM and RTM price (from Nov 2021 to Oct 2022)\cite{nyisowebsite}.}\label{fig:price-annual} \vspace{-2ex} \end{figure} \subsection{Bidding in uncertain technology-diverse environment} \label{Sec:bidding} It is well-known that wind and solar power bidding is quite diverse and not standardized. Some stakeholders bid aggressively into the DAM, while others only participate in the RTM. Their bidding strategies vary just as widely. One way or the other, unless there are rigid system constraints, such as ramp rates and congestion, wind power generally gets scheduled first, given its near-zero operating cost and lack of environmental impact cost. To illustrate the impact of diverse bidding, we consider three different ways of integrating wind power into real-time operations.
In the formulation, we denote by $\lambda^{\text{DA}}$ the hourly DAM price. In the RTM, we denote the operation interval as $\Delta t$, e.g., 15 minutes, and the interval set of one hour as $\mathcal{H}$. The RTM price is denoted as $\lambda_t^{\text{RT}}$ for any interval $t \in \mathcal{H}$. For a wind power plant, we denote by $W^f$ the day-ahead forecast of wind generation and by $W_t^g$ the actual generation in real time for any interval $t \in \mathcal{H}$. We consider three bidding strategies for wind producers. {\bf Scenario 1: Wind power forecast used in both DAM and RTM.} The producer bids the forecasted quantity $W^f$ in the day-ahead market, and then the actual generation deviation is settled in the real-time market. This leads to the following profit: \begin{align} \lambda^{\text{DA}}\cdot {W}^f - \sum_{t\in \mathcal{H}} \Delta t\cdot \lambda_t^{\text{RT}}\cdot ({W}^f-W_t^{g}). \end{align} We assume un-congested prices, so wind power gets paid a uniform clearing price, much the same as all other scheduled generators. {\bf Scenario 2: Wind power is scheduled in RTM only.} This scenario differs from Scenario 1 in that the wind power plant does not participate in the DAM; the overall uncertainty seen by the ISO is reduced because the plant bids only its actual quantity $W_t^g$ in the real-time market. The profit is \begin{align} \sum_{t\in \mathcal{H}} \Delta t \cdot \lambda_t^{\text{RT}}\cdot W_t^g. \end{align} {\bf Scenario 3: Wind power provides bids into DAM.} Wind power plants create bids by attempting to maximize their overall profit according to the following strategy: \begin{align} \max_{0\leq q\leq W^f}~ &\lambda^{\text{DA}}\cdot q - \sum_{t\in \mathcal{H}} \Delta t\cdot \lambda_t^{\text{RT}} \cdot (q-W_t^{g}) \label{scenario3} \end{align} The objective is to maximize the revenue made in the DAM while recognizing that failure to produce in the RTM will reduce the DAM profit. The producer optimizes the bidding quantity $q$ in the day-ahead market to maximize its total profit in the DAM and RTM. We have the following result on the optimal decision in Scenario 3. \begin{prop} The optimal bidding quantity $q^*$ in Scenario 3 is given as follows. \begin{itemize} \item If $\lambda^{\text{DA}}\leq \Delta t \cdot \sum_{t\in \mathcal{H}} \lambda_t^{\text{RT}}$, $q^*=0$. \item If $\lambda^{\text{DA}}> \Delta t \cdot \sum_{t\in \mathcal{H}} \lambda_t^{\text{RT}}$, $q^*=W^f$. \end{itemize} \end{prop} The proposition follows because the objective in Eq.~\ref{scenario3} is linear in $q$, so the optimum is attained at a boundary of $[0, W^f]$. As shown in this proposition, the optimal bidding strategy is affected by both the DAM and RTM prices shown in Figure \ref{fig:price-annual}. If the DAM price is no higher than the average RTM price, the producer should bid zero in the day-ahead market to take advantage of the higher real-time prices; otherwise, it should bid the maximum quantity in the day-ahead market to avoid the unfavorable real-time prices. Note that Scenario 3 always gives the producer a profit at least as high as Scenarios 1 and 2, since Scenarios 1 and 2 are equivalent to the fixed strategies $q=W^f$ and $q=0$ in Scenario 3, respectively. \subsection{Numerical illustration} We examine the profits and day-ahead bidding quantities of wind producers (at the aggregated level) for four days of different seasons: Jan 15 (winter), Apr 15 (spring), Jul 15 (summer), and Oct 15 (fall). We consider 15-minute windows for real-time operations and use the average wind generation of each day as the forecast.
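These three settlements are straightforward to compute from price and generation data. Below is a small NumPy sketch for a single delivery hour; the function name and interface are illustrative, not tied to any market software.
\begin{verbatim}
import numpy as np

def scenario_profits(lambda_da, lambda_rt, w_actual, w_forecast, dt=0.25):
    # lambda_da : scalar DAM price ($/MWh)
    # lambda_rt : RTM prices over the hour's intervals ($/MWh)
    # w_actual  : actual generation W_t^g per interval (MW)
    # w_forecast: day-ahead forecast W^f (MW)
    # dt        : interval length in hours (0.25 for 15-minute intervals)
    lambda_rt = np.asarray(lambda_rt, dtype=float)
    w_actual = np.asarray(w_actual, dtype=float)

    # Scenario 1: bid the forecast in the DAM, settle deviations in the RTM.
    p1 = lambda_da * w_forecast - np.sum(dt * lambda_rt *
                                         (w_forecast - w_actual))
    # Scenario 2: skip the DAM, sell the actual generation in the RTM.
    p2 = np.sum(dt * lambda_rt * w_actual)
    # Scenario 3: the profit is linear in q, so the optimum is a boundary
    # point, exactly as in the proposition above.
    q_star = w_forecast if lambda_da > dt * np.sum(lambda_rt) else 0.0
    p3 = lambda_da * q_star - np.sum(dt * lambda_rt * (q_star - w_actual))
    return p1, p2, p3, q_star
\end{verbatim}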
Figure \ref{fig:profit}(a) shows the profits of the Scenario-1 (blue bar), Scenario-2 (orange bar), and Scenario-3 (green bar) strategies on the four selected days. Figure \ref{fig:profit}(b) shows the corresponding total day-ahead bidding quantities in one day. As shown in Figure \ref{fig:profit}(a), the Scenario-3 strategy always gives the highest profit to producers, which can be over 20\% higher than the better of Scenarios 1 and 2. It can be seen that the wind producers' profits are affected by both seasonal market prices and wind generation. On Jul 15, the profit is much lower than on other days because the wind generation amount is small (below 100 MW all day) as shown in Figure \ref{fig:profile}(c). Furthermore, the DAM and RTM prices are relatively high but not extreme (between 40 and 90 \$/MWh) in summer. On Jan 15, although the wind generation amount is small (below 300 MW almost all day), the DAM and RTM prices are very high (beyond 700 \$/MWh at times) in winter as shown in Figure \ref{fig:profile}(a), which provides high profits for producers. The day-ahead bidding quantities in the three strategies on Jan 15 and Jul 15 are also small, as shown in Figure \ref{fig:profit}(b). On Apr 15 and Oct 15, the DAM and RTM prices are low (below 50 \$/MWh for almost the whole day) in spring and fall, but the wind generation amount is very high (beyond 1000 MW almost all day). The difference between DAM and RTM prices affects the profit ranking of Scenarios 1 and 2. On Apr 15 in particular, the Scenario-2 strategy leads to a much lower profit than the other strategies. The reason is that the RTM prices are overall lower than the DAM prices as shown in Figure \ref{fig:profile}(b), likely due to abundant hydropower in spring. This makes direct participation in the real-time market less profitable. \begin{figure}[ht] \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=1.74in]{./profit_season}}} \hspace{-2ex} \subfigure[]{ \raisebox{-2mm}{\includegraphics[width=1.74in]{./bidquantity_season}}} \vspace{-2ex} \caption{(a) \small Profits on four days of different seasons; (b) \small Day-ahead bidding quantities on different seasonal days.}\label{fig:profit} \vspace{-3ex} \end{figure} \section{The impact of wind power variability on dynamic reserve requirements} \label{Sec:temporal} Based on the discussion in the above section, we suggest that wind power deviations can occur for a variety of reasons beyond the weather factors typically accounted for when making the forecast. Notably, different bidding strategies by heterogeneous wind power plants, as analyzed in Section \ref{Sec:windpower} above, result in different amounts of power actually used in real time. The uncertainties caused by deviations of scheduled or forecast wind power generation from the actual generation are becoming a major concern to ISOs during contingencies \cite{nyiso}. Because of this, it has been proposed to account for a sudden loss of wind power generation when $(N-1)$ or $(N-2)$ contingency screening is done using well-established Security Constrained Economic Dispatch (SCED)/Security Constrained Unit Commitment (SCUC). For the different wind capacity factors recorded in NYISO, shown in Figure \ref{fig:damrtmpdf}(b), even the basic economic dispatch of generation during normal conditions is affected \cite{nohailic}.
Given these typical wind capacities in NYISO \cite{windnoa} shown in Figure \ref{fig:damrtmpdf}, economic dispatch using somewhat outdated generation data in NYISO \cite{allen} results in the total generation cost savings shown in Figure \ref{fig:nohailic}. It can be seen that the economic outcomes are quite sensitive to how accurate the wind capacity factor is. This makes ISO decisions quite difficult when setting aside the reserve necessary to meet today's $(N-1)$ reliability criteria. The ISO can best justify the cost of reserve when the capacity factor is known. Also, it can be seen from Figure \ref{fig:damrtmpdf}(b) that the variability of capacity factors is quite significant. This requires recomputing reserves more dynamically in the DAM and RTM, in anticipation of wind capacity factors and equipment failures. A major challenge, not discussed in this paper, concerns locational aspects of deliverable reserves \cite{itroops}. A quick look at the NYISO website shows that the congestion cost dominates the energy cost in zones J and K of NYISO (Figure \ref{fig:nohailic}). This is in part a result of zonal reserve requirements. These are going to change drastically with the planned deployment of offshore wind on Long Island. It follows that the more predictable the short- and long-term wind power capacity factors are, the lower the requirements for expensive congestion-related reserves become. While beyond the objectives of this paper, such a study should be carried out as it greatly affects the resulting benefits from wind power deployment. \footnote{These economic saving estimates are obtained using the homegrown GYPSIS DC Optimal Power Flow tool at Carnegie Mellon; for details see \cite{nohailic}. Optimizing deliverable reserves as more wind power is deployed on Long Island must be done using AC Optimal Power Flow, since major bottlenecks to deliverable reserve in NYISO are voltage-related problems, as documented in \cite{nyserda}. We estimate that the cost of deliverable reserves obtained using AC OPF will be greatly reduced when voltage is optimized.} \begin{figure}[h] \includegraphics[width=3.43in,scale = .5]{./nyisosavings} \vspace{-0.5ex} \caption{\small Variable cost saving in NYISO system\cite{nohailic}.} \label{fig:nohailic} \vspace{-2ex} \end{figure} \section{Possible protocols for reconciling stakeholders' choice and ISO requirements} \label{Sec:best} Fundamentally, there exists a break-even point between the benefits of utilizing more wind power and the costs of dynamically providing reserve to ensure reliable service despite wind power deviations. This trade-off can be computed after the fact and can be used to provide incentives for operating close to this break-even schedule. It is important to observe that if wind power plants bid truthfully, with a high probability that their bids are physically implementable as committed and scheduled, then, as with any other type of controllable generation, less dynamic reserve would be needed, all else being equal. The cost of reserve is generally quite high, and, because of this, we conclude that the Scenario 3 bidding strategy probably comes closest to the break-even optimal real-time operations. While earning more for wind power plants than bidding Scenarios 1 and 2, the Scenario 3 strategy is also better when measuring the overall deliverable energy and reserve dispatch cost. We also observe from the analysis in Section \ref{Sec:windpower} that it is not always optimal to bid the maximum forecast power into either the DAM or the RTM.
The outcomes are highly sensitive to underlying uncertainties and one's ability to schedule implementable wind power generation. Notably, taking a worst-case approach to scheduling wind power for co-optimized energy and reserve is not optimal. The frequency of deviations from the expected worst-case wind power is not uniform across seasons or throughout the day. Taking the overall conclusions of this paper together, it follows that wind power plants bidding according to their own sub-objectives, combined with protocols for assessing how truthful their bidding is, will minimize the cost of reliable service in systems with high wind power generation. \subsection{The role of a market monitor and/or a new Risk Buro} \label{subsec:monitor} To ensure truthful bidding, the market monitor can and should work closely with wind power generators to understand, audit, and certify their bidding strategies. Given very diverse risk preferences and wind generation and control technologies, the market monitor should be responsible for ensuring that there is no significant unjustified gaming to make excessive profits through bidding. At present levels of wind power, it is highly unlikely that wind power bidding can significantly affect the DAM clearing price. The process of market monitors interacting directly with different stakeholders to approve bidding beyond their short-run marginal costs has already begun. Hydropower plants in the New York Power Authority (NYPA) are allowed to bid higher than their short-run marginal costs because of their key role as storage. It is only a matter of time before diverse storage will be allowed to bid according to model predictive control (MPC) strategies. Even conventional power plants should be allowed to bid in an MPC manner because they contribute to managing wind variations by scheduling their outputs according to their ramp rates, which benefits them over longer time horizons. Scheduling when the market price is low, in order to sell when the price is higher, has been shown to be quite beneficial for supporting large deviations in renewable power while, at the same time, maximizing longer-term profits \cite{ferc}. This look-ahead MPC bidding for the best forecast possible effectively plays the role of fast gas power plant scheduling in real time and/or utilizing fast storage. However, the clearing price with reasonably well-predicted wind power generation is fundamentally much lower than when the bidding protocol is to bid only short-run marginal cost and not account for uncertainties. It has also been proposed recently that instead of relying entirely on forecasts by a third party and on the market monitor, one could have an open Risk Buro platform which works closely with all market participants, the ISO, stakeholders, the market monitor, and companies providing forecasts. Its main objective is to support interactive information exchange between different layers of an otherwise fractured environment. Data-enabled decision making and interactive information exchange result in market-clearing outcomes (quantities, prices) which are further post-processed to assess how truthful stakeholders are in creating their bids, and how accurately they actually meet their schedules. The Risk Buro creates repositories of such data, processes various correlations, and could evolve into a Standard and Poor's (S\&P) analogue for the changing electric energy industry. In particular, a Risk Buro produces, in collaboration with the market monitor, indices that help classify stakeholders according to their performance.
ISOs can use this information when computing the dynamic reserve needed and the like. This DyMoNDS-based Risk Buro represents a platform that facilitates the self-adjustment of all market participants in an MPC manner, with stakeholders bidding according to their sub-objectives and aligning with ISO-level objectives. \subsection{Closing takeaways \cite{3rs}} \begin{itemize} \item Needs and resources can be accurately forecasted only when producers and responsive demand provide binding self-commitments. \item (Today) The probability of utilizing full capacity is very low and resources are, by and large, under-utilized. \item Internalizing the risk by those creating it would make the responsibilities much better understood. \item ``Solar (and wind) power makes lots of sense, but utilities are used to dealing with rotating equipment and this is a whole new animal.'' Ryan Sather, Accenture, 2009. \end{itemize}
\section{Introduction}\label{intro} A large amount of data is needed to train robust and generalizable machine learning models. A single institution often does not have enough data to train such models effectively. Meanwhile, there are emerging regulatory and privacy concerns about data sharing and management~\cite{kaissis2021end,invertg}. Federated Learning (FL)~\cite{fedavg} mitigates such concerns as it leverages data from different clients or institutions to collaboratively train a global model while allowing the data owners to control their private datasets. Unlike conventional centralized training, FL algorithms open the potential for multi-institutional collaborations in a privacy-preserving manner~\cite{yang2019federated}. This multi-institutional collaboration scenario is often referred to as \emph{cross-silo} FL~\cite{kairouz2019advances} and is the main focus of this paper. In this FL setting, clients are autonomous data owners, such as medical institutions storing patients' data, and collaboratively train a general model to overcome the data scarcity issue and privacy concerns~\cite{yang2019federated}. This makes \emph{cross-silo} FL applications especially attractive to the healthcare sector~\cite{rieke2020future}. Several methods have already been proposed to leverage FL for multi-institutional collaborations in digital healthcare~\cite{autofedavg,flmr,sheller2020federated,roth2020federated}. The most recently introduced FL frameworks~\cite{autofedavg,flmr,intel,fedprox} are variations of the Federated Averaging (FedAvg)~\cite{fedavg} algorithm. The training process of FedAvg consists of the following steps: (i) clients perform local training and upload model parameters to the server. (ii) The server carries out the averaging aggregation over the received parameters from clients and broadcasts aggregated parameters to clients. (iii) Clients update local models and evaluate their performance. After sufficient communication rounds between clients and the server, a global model can be obtained. The design of FedAvg is based on standard Stochastic Gradient Descent (SGD) learning with the assumption that data is uniformly distributed across clients~\cite{fedavg}. However, in real-world applications, one has to deal with underlying unknown data distributions that are likely not independent and identically distributed (non-iid). The heterogeneity of data distributions has been identified as a critical problem that causes local models to diverge during training and consequently leads to sub-optimal performance of the trained global model~\cite{fedbn,intel,fedprox}. To achieve the required performance, the proper tuning of hyperparameters (\eg, the learning rate, the number of local iterations, aggregation weights, \etc) plays a critical role in the success of FL~\cite{intel,autofedavg}. \cite{flconvergence} shows that learning rate decay is a necessary condition for the convergence of FL on non-iid data. While general hyperparameter optimization has been intensively studied~\cite{randomsearch,bayesian1,bayesian2}, the unique setting of FL makes federated hyperparameter optimization especially difficult~\cite{weightsharing}. Reinforcement learning (RL) provides a promising solution to approach this complex optimization problem. Compared to other methods for finding the optimal hyperparameters, RL-based methods do not require prior knowledge of the complicated underlying system dynamics~\cite{rladvantage}.
Thus, federated hyperparameter optimization can be reduced to defining appropriate reward metrics, search spaces, and RL agents. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figs/compute_detail.pdf} \vskip-10pt \caption{The computational details of different search strategies under the same setting on CIFAR-10 when the number of clients equals 2 ($\bigtriangleup$), 4 ($+$), 6 ($\Diamond$), and 8 ($\times$). The green box shows the zoomed-in region.}\label{fig:cd} \vskip-18pt \end{figure} In this paper, we aim to make automated hyperparameter optimization applicable in realistic FL applications. An online RL algorithm is proposed to dynamically tune hyperparameters during a single trial. Specifically, the proposed Auto-FedRL formulates hyperparameter optimization as a task of discovering optimal policies for the RL agent. Auto-FedRL can dynamically adjust hyperparameters at each communication round based on the relative loss reduction. Without the need for multiple training trials, an online RL agent is introduced to maximize the rewards in small intervals, rather than the sum of all rewards. While RL-based hyperparameter optimization has been explored in~\cite{intel}, our experiments show that the prior work has several deficiencies impeding its practical use in real-world applications. (\textbf{i}) The discrete action space (\ie, hyperparameter search space) not only leads to limited available actions but also suffers from scalability issues. At each optimization step, the gradients of all possible hyperparameter combinations are retained, which causes high memory consumption and computational inefficiency. Therefore, as shown in Fig.~\ref{fig:cd}, hardware limitations can be reached quickly when one needs to collaborate with multiple institutions using a large search space. To circumvent this challenge, Auto-FedRL can leverage a continuous search space. Its memory usage is practically constant, as the memory consumption per hyperparameter is negligible and does not explode with an increased search space or number of involved clients. Meanwhile, its computational efficiency is significantly improved compared to the discrete search space. (\textbf{ii}) The flexibility of the hyperparameter search space is limited. \cite{intel} focuses on a small number of hyperparameters (\eg, one or two hyperparameters) in less general settings. In contrast, our method is able to tune a wide range of hyperparameters (\eg, client/server learning rates, the number of local iterations, and the aggregation weight of each client) in a realistic FL setting. It is worth noting that the averaging model aggregation is replaced by a pseudo-gradient optimization~\cite{fedopt} in Auto-FedRL. Thus, we are able to search server-side hyperparameters. To this end, we propose a more practical federated hyperparameter optimization framework with notable computational efficiency and a flexible search space. Our main contributions in this work are summarized as follows: \begin{itemize} \item A novel federated hyperparameter optimization framework, Auto-FedRL, is proposed, which enables the dynamic tuning of hyperparameters via a single trial. \item Auto-FedRL makes federated hyperparameter optimization more practical in real-world applications by efficiently incorporating a continuous search space and a deep RL agent to tune a wide range of hyperparameters.
\item Extensive experiments on multiple datasets show the superior performance and notable computational efficiency of our methods over existing FL baselines. \end{itemize} \section{Related Works} \noindent\textbf{Federated Learning on Heterogeneous Data. } The heterogeneous data distribution across clients impedes the real-world deployment of FL applications and has drawn increasing attention. Several methods~\cite{noniid1,noniid2,noniid3,noniid4,noniid5} have been proposed to address this issue. FedOpt~\cite{fedopt} introduced adaptive federated optimization, which formulates a more flexible FL optimization framework but also introduces more hyperparameters, such as the server learning rate and server-side optimizers. FedProx~\cite{fedprox} and Agnostic Federated Learning (AFL)~\cite{afl} are variants of FedAvg~\cite{fedavg} which attempted to address the learning bias issue of the global models for local clients by imposing additional regularization terms. FedDyn~\cite{feddyn} was proposed to address the problem that the minima of the local-device-level loss are inconsistent with those of the global loss by introducing a dynamic regularizer for each device. Those works provided solid theoretical analyses but were evaluated only on manually created toy datasets. Recently, FL-MRCM~\cite{flmr} was proposed to address the domain shift issue among different clients by aligning the distribution of latent features between the source domain and the target domain. Although those methods~\cite{li2020multi,flmr} achieved promising results in overcoming domain shift in multi-institutional collaborations, directly sharing latent features between clients increases privacy concerns. \noindent\textbf{Conventional Hyperparameter Optimization. } Grid and random search~\cite{randomsearch} can perform automated hyperparameter tuning but require long running times because they often explore unpromising regions of the search space. While advanced random search~\cite{adrandomsearch} and Bayesian optimization-based search methods~\cite{bayesian1,bayesian2} require fewer iterations, several training trials are still required to evaluate the fitness of hyperparameter configurations. Repeating the training process multiple times is impractical in the FL setting, especially for deep learning models, due to the limited communication and compute resources in real-world FL setups. \noindent\textbf{Federated Hyperparameter Optimization. } Auto-FedAvg~\cite{autofedavg} is a recent automated search method, which is only compatible with differentiable hyperparameters and focuses on searching client aggregation weights. The method proposed in~\cite{intel} is the most relevant to our work. However, as discussed in the previous section, it suffers from limited practicality and search-space flexibility in real-world applications. Inspired by recent hyperparameter search methods~\cite{h1,h2,h3} and differentiable~\cite{d1,d2}, evolutionary~\cite{e1,e2}, and RL-based automated machine learning methods~\cite{rl1,rl2}, we propose an efficient automated approach with a flexible search space to discover a wide range of hyperparameters. \section{Methodology} In this section, we first introduce the general notation of FL and adaptive federated optimization, which provides the theoretical foundation for tuning FL server-side hyperparameters (Sec.~\ref{sec:fl}).
Then, we describe our method in detail (Sec.~\ref{sec:auto-fedrl}), including online RL-based hyperparameter optimization, the discrete/continuous search space, and the deep RL agent. In addition, we provide theoretical analysis to guarantee the convergence of Auto-FedRL in the supplementary material. \subsection{Federated Learning}\label{sec:fl} In an FL system, suppose $K$ clients collaboratively train a global model. The goal is to solve the following optimization problem: \begin{equation} \label{eq:1} \begin{aligned} \min\limits_{x\in \mathbb{R}^d}\frac{1}{K}\sum\limits_{k=1}^{K}\mathcal{L}_k(x), \end{aligned} \end{equation} where $\mathcal{L}_k(x) = \mathbb{E}_{z\sim \mathcal{D}_k} [\mathcal{L}_k(x,z)]$ is the loss function of the $k^{\text{th}}$ client. $z \in \mathcal{Z}$, and $\mathcal{D}_k$ represents the data distribution of the $k^{\text{th}}$ client. Commonly, for two different clients $i$ and $j$, $\mathcal{D}_i$ and $\mathcal{D}_j$ can be dissimilar, so that Eq.~\ref{eq:1} can become nonconvex. A widely used method for solving this optimization problem is FedAvg~\cite{fedavg}. At each round, the server broadcasts the global model to each client. Then, all clients conduct local training on their own data and send the updated model back to the server. Finally, the server updates the global model by a weighted average of these local model updates. FedAvg's server update at round $q$ can be formulated as follows: \begin{equation} \label{eq:2} \begin{aligned} \Theta^{q+1} = \sum\limits_{k=1}^{K} \alpha_k \Theta^{q}_k, \end{aligned} \end{equation} where $\Theta_k^q$ denotes the local model of the $k^{\text{th}}$ client and $\alpha_k$ is the corresponding aggregation weight. The update of the global model $\Theta^{q+1}$ in Eq.~\ref{eq:2} can be further rewritten as follows: \begin{equation} \label{eq:3} \begin{aligned} \Theta^{q+1} & = \Theta^{q} - \sum\limits_{k=1}^{K} \alpha_k (\Theta^{q}-\Theta^{q}_k)\\ & = \Theta^{q} - \sum\limits_{k=1}^{K} \alpha_k\Delta_k^q \\ & = \Theta^{q} - \Delta^q, \end{aligned} \end{equation} where $\Delta_k^q := \Theta^{q}-\Theta^{q}_k$ and $\Delta^q := \sum\limits_{k=1}^{K} \alpha_k \Delta_k^q$. Therefore, the server update in FedAvg is equivalent to applying optimization to the \emph{pseudo-gradient} $-\Delta^q$ with a learning rate $\gamma = 1$. This general FL optimization formulation is referred to as adaptive federated optimization~\cite{fedopt}. Auto-FedRL utilizes this \emph{pseudo-gradient} update formulation to enable server-side hyperparameter optimization, such as the server learning rate $\gamma$. \begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{figs/diff_search.pdf} \vskip-10pt \caption{The sampling workflow comparison of different search strategies in the proposed Auto-FedRL. PMF denotes the probability mass function.}\label{fig2} \vskip-10pt \end{figure*} \subsection{Auto-FedRL}\label{sec:auto-fedrl} \noindent\textbf{Online RL Hyperparameter Optimization.} The online setting in the targeted task is very challenging since the same actions at different training stages may receive different responses. Several methods~\cite{rl_ns1,rl_ns2,rl_ns3} have been proposed in the literature to deal with such non-stationary problems. However, these methods require multiple training runs, which is usually not affordable in FL settings where clients often have limited computation resources.
Typically, a client can run only one training procedure at a time, and the resources for parallelization that would be available in a cluster environment are absent. To circumvent the limitations of conventional hyperparameter optimization methods and inspired by previous works~\cite{rl2,intel,rl1}, we introduce an online RL-based approach to directly learn the proper hyperparameters from data at the clients' side during a single training trial. At round $q$, a set of hyperparameters $h^q$ can be sampled from the distribution $P(\mathcal{H}| \psi^q)$. We denote the validation loss of client $k$ at round $q$ as $\mathcal{L}_{\text{val}_{k}}^q$ and the hyperparameter loss at round $q$ as \begin{align} \mathcal{L}_h^q=\frac{1}{K}\sum\limits_{k=1}^{K} \mathcal{L}_{\text{val}_{k}}^q. \end{align} The relative loss reduction reward function of the RL agent is defined as follows: \begin{equation} \label{eq:4} \begin{aligned} r^q = \frac{\mathcal{L}_h^q-\mathcal{L}_h^{q+1}}{\mathcal{L}_h^q}. \end{aligned} \end{equation} The goal of the RL agent at round $q$ is to maximize the following objective: \begin{equation} \label{eq:5} \begin{aligned} J^q = \mathbb{E}_{P(h^q|\psi^q)} [r^q]. \end{aligned} \end{equation} By leveraging the one-sample Monte Carlo estimation technique~\cite{rl}, we can approximate the derivative of $J^q$ as follows: \begin{equation} \label{eq:6} \begin{aligned} \nabla_{\psi^q} J^q = r^q \nabla_{\psi^q} \log (P(h^q| \psi^q)). \end{aligned} \end{equation} To this end, we can evaluate Eq.~\ref{eq:6} and use it to update the condition of the hyperparameter distribution $\psi^q$. To formulate an online algorithm, we utilize the averaged rewards in a small interval (``window'') rather than counting the sum of all rewards to update $\psi^q$ as follows: \begin{equation} \label{eq:7} \begin{aligned} &\psi^{q+1} \leftarrow \psi^{q} - \gamma_h \sum\limits_{\tau = q-Z}^{\tau = q} (r^\tau - \hat{r}^q) \nabla_{\psi^\tau} \log (P(h^\tau| \psi^\tau)), \end{aligned} \end{equation} where $Z$ is the size of the update window and $\gamma_h$ is the RL agent learning rate. The averaged reward $\hat{r}^q$ in the interval $[q-Z,q]$ is defined as follows: \begin{equation} \label{eq:8} \begin{aligned} \hat{r}^q = \frac{1}{Z+1} \sum\limits_{\tau = q-Z}^{\tau = q} r^\tau. \end{aligned} \end{equation} \noindent\textbf{Discrete Search. } Selecting the form of the hyperparameter distribution $P(\mathcal{H}| \psi)$ is non-trivial, since it determines the available actions in the search space. We denote the proposed method using a discrete search (DS) space as Auto-FedRL(DS). Here, $P(\mathcal{H}| \psi)$ is defined by a $D$-dimensional discrete Gaussian distribution, where $D$ denotes the number of searchable hyperparameters. For each hyperparameter, there is a finite set of available selections. Therefore, $\mathcal{H}$ is a grid that consists of all possible combinations of available hyperparameters. A hyperparameter combination $h^q$ at round $q$ is a point on $\mathcal{H}$: \begin{equation} \label{eq:9} \begin{aligned} P(h^q|\psi^q) = \mathcal{N}(h^q|\mu^q,\Sigma^q), \end{aligned} \end{equation} where $h^q=\{h^q_1,\dots,h_D^q\}$. $\psi^q$ is defined by the mean vector $\mu^q$ and the covariance matrix $\Sigma^q$, which are learnable parameters that the RL agent aims to optimize.
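To make the discrete sampling step concrete, the following is a minimal NumPy sketch (an illustration, not our actual implementation) that scores every grid point under the Gaussian of Eq.~\ref{eq:9} and draws one combination from the resulting multinomial; enumerating all grid points is precisely the cost that grows multiplicatively with the number of hyperparameters.
\begin{verbatim}
import itertools
import numpy as np

def sample_discrete(mu, cov, grids, rng=None):
    # grids: one array of normalized candidate values per hyperparameter.
    if rng is None:
        rng = np.random.default_rng()
    # Enumerate every combination on the grid H; this product is what
    # makes the memory and compute of DS explode for large search spaces.
    points = np.array(list(itertools.product(*grids)), dtype=float)
    diff = points - np.asarray(mu, dtype=float)
    # Unnormalized log-density of N(mu, Sigma) at every grid point.
    logp = -0.5 * np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    pmf = np.exp(logp - logp.max())
    pmf /= pmf.sum()                 # the PMF of Fig. 2(a)
    return points[rng.choice(len(points), p=pmf)]
\end{verbatim}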
To increase the stability of RL training and encourage learning in all directions, different types of predefined hyperparameter selections are normalized to the same scale with zero mean when constructing the search space. This hyperparameter sampling procedure is presented in Fig.~\ref{fig2} (a). \begin{figure*}[htbp] \centering \includegraphics[width=0.85\textwidth]{figs/al.pdf} \vskip-11pt \end{figure*} \begin{figure}[htbp] \centering \includegraphics[width=\columnwidth]{figs/fig1.pdf} \caption{The schematics of Auto-FedRL at round $q$.}\label{fig1} \end{figure} \noindent\textbf{Continuous Search. } While defining a discrete action space can be more controllable for hyperparameter optimization, as discussed in Section~\ref{intro}, it limits the scalability of the search space. The gradients of all possible hyperparameter combinations are retained in the discrete search during the windowed update as in Eq.~\ref{eq:7}, which requires a large amount of memory. To overcome this issue, we extend Auto-FedRL(DS) to Auto-FedRL(CS), which can utilize a continuous search (CS) space for the RL agent. Instead of constructing a gigantic grid that stores all possible hyperparameter combinations, one can directly sample a choice from a continuous multivariate Gaussian distribution $\mathcal{N}(\mu^q,\Sigma^q)$. It is worth noting that with the expansion of the search space, the increase in memory usage of Auto-FedRL(CS) is negligible. A comparison between the hyperparameter sampling workflows in discrete and continuous search is presented in Fig.~\ref{fig2}. The main difference between DS and CS lies in the sampling process. In practice, one can adopt the Box–Muller transform~\cite{bmt} for sampling the continuous Gaussian distribution. However, as shown in Fig.~\ref{fig2}(a), the sampling for multivariate discrete Gaussian distributions typically involves the following steps: \textbf{(i)} We compute the probabilities of all possible combinations. \textbf{(ii)} Given the probabilities, we draw a choice from the multinomial distribution or alternatively use the ``inverse CDF'' method~\cite{inversecdf}. Either way, we need to compute the probabilities of all possible hyperparameter combinations for DS, which is not required for CS. Hence, our CS is much more efficient for hyperparameter optimization, as shown in Fig.~\ref{fig:cd}. \noindent\textbf{Deep RL Agent. } An intuitive extension of Auto-FedRL(CS) is to leverage neural networks (NN) as the agent to update the condition of the hyperparameter distribution $\psi^{q}$ rather than using direct optimization. A more complicated RL agent design could deal with potentially more complex search spaces~\cite{deeprl}. To investigate the potential of an NN-based agent in our setting, we further propose Auto-FedRL(MLP), which leverages a multilayer perceptron (MLP) as the agent to update $\psi$. The sampling workflow of Auto-FedRL(MLP) is presented in Fig.~\ref{fig2}(c). The proposed MLP takes the condition of the previous hyperparameter distribution $\psi^{q-1}$ as the network's input and predicts the updated $\psi^{q}$. Meanwhile, due to our online setting (\ie, limited optimization steps), we have to keep the number of learnable parameters in the MLP small while remaining effective. The detailed network configuration can be found in the supplementary material. \noindent\textbf{Full Algorithm. } An overview of the Auto-FedRL framework is presented in Alg.~\textcolor{red}{1} and Fig.~\ref{fig1}.
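Before walking through the steps, the sketch below ties the pieces together in PyTorch: a continuous-search Gaussian agent with the windowed update of Eq.~\ref{eq:7} (for simplicity, log-probabilities are re-evaluated under the current $\psi$ rather than the historical $\psi^\tau$), the pseudo-gradient server update of Eq.~\ref{eq:3}, and one training round. The decoding of $h$ into named hyperparameters and the \texttt{client\_train}/\texttt{validate} callables are hypothetical stand-ins, not our exact implementation.
\begin{verbatim}
import torch
from torch.distributions import MultivariateNormal

class GaussianHPAgent:
    """Continuous-search agent: psi = (mu, diagonal Sigma)."""
    def __init__(self, dim, lr=1e-2, window=5):
        self.mu = torch.zeros(dim, requires_grad=True)
        self.log_sigma = torch.zeros(dim, requires_grad=True)
        self.opt = torch.optim.Adam([self.mu, self.log_sigma], lr=lr)
        self.window, self.history = window, []        # (h^tau, r^tau) pairs

    def dist(self):
        return MultivariateNormal(self.mu,
                                  torch.diag(self.log_sigma.exp() ** 2))

    def sample(self):
        return self.dist().sample()                   # draw h^q, no grid needed

    def update(self, h, reward):
        self.history = (self.history + [(h, float(reward))])[-self.window:]
        baseline = sum(r for _, r in self.history) / len(self.history)  # Eq. 8
        d = self.dist()
        # Windowed policy-gradient step (ascent form of Eq. 7).
        loss = -torch.stack([d.log_prob(hh) * (r - baseline)
                             for hh, r in self.history]).sum()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

def server_update(global_params, client_params, alphas, server_lr):
    # Pseudo-gradient update of Eq. 3; server_lr = 1 recovers FedAvg.
    return {name: theta - server_lr * sum(a * (theta - cp[name])
                                          for a, cp in zip(alphas, client_params))
            for name, theta in global_params.items()}

def auto_fedrl_round(global_params, clients, agent, prev_loss,
                     client_train, validate):
    # agent dim is assumed to be 2 + number of clients here.
    h = agent.sample()                                # step (i): sample h^q
    server_lr, client_lr, *alphas = h.exp().tolist()  # illustrative decoding
    total = sum(alphas)
    alphas = [a / total for a in alphas]              # weights sum to 1
    client_params = [client_train(global_params, c, client_lr) for c in clients]
    global_params = server_update(global_params, client_params,
                                  alphas, server_lr)  # step (ii)
    hp_loss = sum(validate(global_params, c) for c in clients) / len(clients)
    agent.update(h, (prev_loss - hp_loss) / prev_loss)  # step (iii): Eq. 4
    return global_params, hp_loss
\end{verbatim}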
At each training round $q$, the training of Auto-FedRL consists of the following steps: \textbf{(i)} As shown in Fig.~\ref{fig1}(a), clients receive the global model $\Theta^{q}$ and hyperparameters $h^{q}$. Clients perform {\tt LocalTrain} based on the received hyperparameters. \textbf{(ii)} The updated local models are then uploaded to the server, as shown in Fig.~\ref{fig1}(b). Instead of performing the averaging aggregation, we use the \emph{pseudo-gradient} $-\Delta^q$ in Eq.~\ref{eq:3} to carry out the server update with a searchable server learning rate. \textbf{(iii)} Clients evaluate the received updated global model $\Theta^{q+1}$ and upload the validation loss $\mathcal{L}_{\text{val}_{k}}^{q+1}$ to the server. The server performs the RL update as shown in {\tt RLUpdate} of Alg.~\textcolor{red}{1}. Here, we consider the applicability in a real-world scenario, in which each client maintains its own validation data rather than relying on validation data being available on the server. Then, the server computes the reward $r^{q+1}$ as in Eq.~\ref{eq:4} and updates the RL agent ({\tt RLOpt}) as in Eq.~\ref{eq:7}. Finally, hyperparameters for the next training round $h^{q+1}$ can be sampled from the updated hyperparameter distribution $P(\mathcal{H}| \psi^{q+1})$. As shown in Fig.~\ref{fig1}(c), the proposed method requires one extra round of communication between clients and the server for $\mathcal{L}_{\text{val}_{k}}$. It is worth noting that the message size of $\mathcal{L}_{\text{val}_{k}}$ is negligible. Thus, this extra communication can still be considered practical under our targeted scenario in which all clients have a reliable connection (\ie, multi-institutional collaborations in cross-silo FL). \subsection{Datasets and Implementation Details} \noindent\textbf{CIFAR-10. } We simulate an environment in which the number of data points and label proportions are imbalanced across clients. Specifically, we partition the standard CIFAR-10 training set~\cite{cifar10} into 8 clients by sampling from a Dirichlet distribution ($\alpha=0.5$) as in \cite{fedma}. The original test set in CIFAR-10 is considered as the global test set used to measure performance. VGG-9~\cite{vgg} is used as the classification network. All models are trained using the following settings: Adam optimizer for RL; SGD optimizer for clients and the server; $\gamma_h$ of $1\times10^{-2}$; initial learning rate of $1\times10^{-2}$; maximum rounds of 100; initial local epochs of 20; batch size of 64. \noindent\textbf{Multi-national COVID-19 Lesion Segmentation. } This dataset contains 3D computed tomography (CT) scans of COVID-19 infected patients collected from three medical centers. We partition this dataset into three clients based on collection locations as follows: 671 scans from China (Client I), 88 scans from Japan (Client II), and 186 scans from Italy (Client III). Each voxel containing a COVID-19 lesion was annotated by two expert radiologists. The training/validation/testing data splits are as follows: 447/112/112 (Client I), 30/29/29 (Client II), and 124/31/31 (Client III). The architecture of the segmentation network is 3D U-Net~\cite{3dunet}. All models are trained using the following settings: Adam optimizer for RL and clients; $\gamma_h$ of $1\times10^{-2}$; SGD optimizer for the server; initial learning rate of $1\times10^{-3}$; initial local iterations of 300; maximum rounds of 300; batch size of 16. \noindent\textbf{Multi-institutional Pancreas Segmentation. }
Here, we utilize 3D CT scans from three public datasets, including 281 scans from the pancreas segmentation subset of the Medical Segmentation Decathlon~\cite{p1} as Client I, 82 scans from the Cancer Image Archive (TCIA) Pancreas-CT dataset~\cite{p2} as Client II, and 30 scans from the Beyond the Cranial Vault (BTCV) Abdomen dataset~\cite{p3} as Client III. The training/validation/testing data splits are as follows: 95/93/93 (Client I), 28/27/27 (Client II), and 10/10/10 (Client III). All models are trained using the same network architecture and settings as for COVID-19 lesion segmentation, except that the maximum number of rounds is 50. \section{Experiments and Results} In this section, the effectiveness of our approach is first validated on a heterogeneous data split of the CIFAR-10 dataset (Sec.~\ref{sec:cifar}). Then, experiments are conducted on two multi-institutional medical image segmentation datasets (\ie, COVID-19 lesion segmentation and pancreas segmentation) to investigate the real-world potential of the proposed Auto-FedRL (Sec.~\ref{sec:realworld}). Finally, detailed comparisons between discrete and continuous search spaces, and an exploration of deep RL agents, are provided (Sec.~\ref{sec:ab}). We evaluate the performance of our method against the popular FL methods FedAvg~\cite{fedavg} and FedProx~\cite{fedprox}, as well as the FL-based hyperparameter optimization methods Auto-FedAvg~\cite{autofedavg} and Mostafa \etal~\cite{intel}. \input{tables/cifar} \input{tables/cifar_computation_details} \subsection{CIFAR-10}\label{sec:cifar} Table~\ref{tb1} shows the quantitative performance of different methods in terms of the average accuracy across 8 clients. We denote the model that is directly trained with all available data as \emph{Centralized} in Table~\ref{tb1}. We treat it as an upper bound when data can be shared. As can be seen from this table, the proposed Auto-FedRL methods clearly outperform the other competing FL alternatives. Auto-FedRL(MLP) gains the best performance improvement by taking advantage of a more complex RL agent design. To investigate the underlying hyperparameter change, we plot the evolution of aggregation weights in Fig.~\ref{fig4}. We find that the proposed RL agent is able to reveal more informative clients (\ie, clients containing more unique labels) and assign larger aggregation weights to those clients' model updates. In particular, in Fig.~\ref{fig4}(a), C4 (red), C5 (purple), and C8 (gray) are gradually assigned three of the largest aggregation weights. As shown in Fig.~\ref{fig4}(b), although those three clients do not contain the largest number of images, they all have the largest number of unique labels (\ie, 10 in CIFAR-10). This behavior further demonstrates the effectiveness of Auto-FedRL. Moreover, we provide the computational details of different search strategies under the same setting in Table~\ref{tb5} to investigate their practicability. Without losing performance, the proposed continuous search requires only 7\% of the memory and is 690$\times$ faster than the discrete search. While Auto-FedRL(MLP) introduces the deep RL agent, it is still 430$\times$ faster than the discrete version. The notable computational efficiency and low memory usage of Auto-FedRL validate our motivation of making federated hyperparameter optimization more practical in real-world applications. \begin{figure}[t!]
\begin{figure}[t!] \centering \includegraphics[clip, trim=0cm 0cm 0cm 0.1cm, width=\columnwidth]{figs/fig_cifar.pdf} \vskip-12pt \caption{Analysis of the learning process of Auto-FedRL(MLP) in CIFAR-10. (a) The evolution of aggregation weights during training. (b) The data statistics of different clients. }\label{fig4} \vskip-2pt \end{figure} \input{tables/covid} \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{figs/fig3.pdf} \vskip -6pt \caption{Qualitative results of different methods corresponding to (a) COVID-19 lesion segmentation of Client III and (b) pancreas segmentation of Client II. GT shows human annotations in green, and the other panels show the segmentation results of the different methods. Red arrows point to erroneous segmentations. The Dice score is presented in the lower-right corner of each subplot.}\label{fig3} \vskip-0pt \end{figure*} \begin{figure*}[tbp] \centering \includegraphics[ clip, trim=0cm 0.1cm 0cm 0cm, width=\textwidth]{figs/covid_ana.pdf} \vskip-8pt \caption{Analysis of the learning process of Auto-FedRL(CS) in COVID-19 lesion segmentation. (a) The parallel plot of the hyperparameter changes during training. LR, LI, AW, and SLR denote the learning rate, local iterations, the aggregation weight of each client, and the server learning rate, respectively. (b) The aggregation weight evolution of Auto-FedAvg in the \emph{top row} and Auto-FedRL(CS) in the \emph{bottom row}. (c) The importance analysis of different hyperparameters.}\label{fig5} \vskip-15pt \end{figure*} \subsection{Real-world FL Medical Image Segmentation}\label{sec:realworld} \noindent\textbf{Multi-national COVID-19 Lesion Segmentation: } The quantitative results are presented in Table~\ref{tb2}. We show the segmentation results of different methods for qualitative analysis in Fig.~\ref{fig3}(a). The Dice score is used to evaluate the quality of segmentation. We repeatedly run all FL algorithms 3 times and report the mean and standard deviation. The main metric for evaluating the generalizability of the global model is \emph{Global Test Avg.}, which is computed as the average performance of the global model across all clients. In the first three rows of Table~\ref{tb2}, we evaluate three locally trained models as the baseline. Due to the domain shift, all locally trained models exhibit low generalizability on the other clients. By leveraging additional regularization on weight changes, FedProx (with the empirically best $\mu$=0.001) can slightly outperform the FedAvg baseline. Mostafa \etal, which uses an RL agent to perform a discrete search, achieves slightly better performance than Auto-FedAvg. We find that, with nearly constant memory usage and notable computational efficiency, the proposed Auto-FedRL(CS) achieves the best performance, outperforming the most competitive method~\cite{intel} by 1.0\% in terms of the global model performance, and by 1.5\% and 2.6\% on Clients II and III, respectively. The performance gap between the FL algorithm and centralized training shrinks to only 0.7\%. Figure~\ref{fig5} presents an analysis of the learning process of our best-performing model. As shown in Fig.~\ref{fig5}(a), we can observe that the RL agent naturally forms a training schedule for each hyperparameter (\eg, learning rate decay for the clients/server), which aligns with the theoretical analysis of the convergence of FL algorithms on non-iid data~\cite{flconvergence}.
Since Auto-FedAvg specifically aims to learn the optimal aggregation weights, we compare the aggregation weight learning processes of our approach and Auto-FedAvg in Fig.~\ref{fig5}(b). It can be observed that the two methods exhibit a similar trend in learning the aggregation weights, which further demonstrates the effectiveness of Auto-FedRL in aggregation weight searching. Finally, we use FANOVA~\cite{FANOVA} to assess the hyperparameter importance. As shown in Fig.~\ref{fig5}(c), LR, SLR, and AW1 rank as the top-3 most important hyperparameters, which implies the necessity of tuning server-side hyperparameters in the FL setting. \input{tables/pancreas} \noindent\textbf{Multi-institutional Pancreas Segmentation. } Table~\ref{tb3} and Fig.~\ref{fig3}(b) present the quantitative and qualitative results on this dataset, respectively. Similar to the results on COVID-19 segmentation, our Auto-FedRL algorithms achieve significantly better overall performance. In particular, Auto-FedRL(MLP) outperforms the best counterpart by 1.3\%. We also observe that our methods exhibit better generalizability on the relatively smaller clients. Specifically, on Client III, Auto-FedRL(MLP) improves the Dice score from 51.1\% to 75.3\%, which is even 3.28\% higher than centralized training. These results imply that, by leveraging dynamic hyperparameter tuning during training, Auto-FedRL algorithms can achieve better generalization and are more robust to heterogeneous data distributions. As shown in Fig.~\ref{fig3}(b), the proposed methods have a better capacity for handling challenging cases, which is consistent with the quantitative results. A detailed hyperparameter evolution analysis on pancreas segmentation is provided in the supplementary material. \input{tables/ablation} \subsection{Ablation Study}~\label{sec:ab} The effectiveness of the proposed continuous search and NN-based RL agent is demonstrated by the previous sets of experiments on three datasets. Here, we conduct a detailed ablation study to analyze the benefit of individually adding each hyperparameter to the search space. As shown in Table~\ref{tb4}, the performance of the trained global model can be further improved with the expansion of the search space, which also validates our motivation that proper hyperparameter tuning is crucial for the success of FL algorithms. More visualizations, experimental results, and the theoretical analysis guaranteeing the convergence of Auto-FedRL are provided in the supplementary material. \section{Conclusion and Discussion} In this work, we proposed an online RL-based federated hyperparameter optimization framework for realistic FL applications, which can dynamically tune the hyperparameters during a single trial, resulting in improved performance compared to several existing baselines. To make federated hyperparameter optimization more practical in real-world applications, we proposed Auto-FedRL(CS) and Auto-FedRL(MLP), which can operate on continuous search spaces, demand nearly constant memory, and are computationally efficient. By integrating adaptive federated optimization, Auto-FedRL supports a more flexible search space to tune a wide range of hyperparameters. The empirical results on three datasets with diverse characteristics reveal that the proposed method can train global models with better performance and generalization capabilities under heterogeneous data distributions.
While our proposed method yielded competitive performance, there are potential areas for improvement. First, we are aware that the performance improvement brought by the proposed method is not uniform across participating clients. Since the proposed RL agent jointly optimizes the whole system, minimizing an aggregate loss can potentially advantage or disadvantage a particular client's performance. We can also observe that all FL methods exhibit a relatively small improvement on the client with the largest amount of data. This is a common phenomenon among FL methods, since such a client already possesses diverse and rich training and testing data. Future research could include additional fairness constraints~\cite{fair1,fair2,fair3,fair4,fair5} to achieve a more uniform performance distribution across clients and reduce potential biases. Second, the NN-based RL agent could benefit from transfer learning. The effectiveness of RL transfer learning has been demonstrated in the literature for related tasks~\cite{tl}. Pre-training the NN-based agent on large-scale FL datasets and then fine-tuning on target tasks may further boost the performance of our approach. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} This manuscript is aimed at addressing several long-standing limitations of dynamic mode decompositions (DMD) in the application of Koopman analysis. Principal among these limitations are the convergence of associated Dynamic Mode Decomposition algorithms and the existence of Koopman modes, where the first has only been established with respect to the strong operator topology (which does not guarantee the convergence of the spectrum), and the second is only guaranteed to exist when the Koopman operator is compact as well as self-adjoint or normal (which is a rare occurrence over the typical sample spaces). DMD methods are data analysis methods that aim to decompose a time series corresponding to a nonlinear dynamical system into a collection of dynamic modes \cite{kutz2016dynamic,budivsic2012applied,mezic2005spectral,korda2018convergence}. When they are effective, a given time series can be expressed as a linear combination of dynamic modes and exponential functions whose growth rates are derived from the spectrum of a finite rank representation of a particular operator, usually the Koopman operator. The use of Koopman operators places certain constraints on the continuous time dynamics that can be studied with DMD methods. In particular, Koopman operators analyze continuous time dynamics through a discrete time proxy obtained by fixing a time-step for a continuous time system \cite{mauroy2016global}. However, only a small subset of continuous time dynamics satisfy the forward invariance property necessary to obtain such a discretization \cite{rosenfeld2019dynamic}. Moreover, to establish convergence guarantees for DMD routines, additional structure is required of Koopman operators, where a sequence of finite rank operators converges to a Koopman operator in norm only if the Koopman operator is compact \cite{pedersen2012analysis}. Compactness is rarely satisfied for Koopman operators; indeed, the Koopman operators obtained through discretizations of the simplest dynamical system $\dot x = 0$ are the identity operator, which is not compact. A partial result has been demonstrated for when Koopman operators are bounded in \cite{korda2018convergence}, where a sequence of finite rank operators converges to a Koopman operator in the Strong Operator Topology (SOT). However, SOT convergence does not guarantee convergence of the spectra (cf. \cite{pedersen2012analysis}), which is necessary for a DMD routine. There are stronger theoretical difficulties associated with Koopman operators. It has been demonstrated that among the typical Hilbert spaces leveraged in sampling theory, such as the exponential dot product kernel's \cite{carswell2003composition}, the Gaussian RBF's \cite{gonzalez2021anti}, and the polynomial kernel's native spaces, as well as the classical Paley-Wiener space \cite{chacon2007composition}, the only discrete time dynamics that yield a bounded Koopman operator are those dynamics that are affine. Hence, depending on the kernel function selected for the approximation of a Koopman operator, a given Koopman operator can at best be expected to be a densely defined operator, which obviates the aforementioned convergence properties. Another motivation for the use of Koopman operators in the study of continuous time dynamical systems is a heuristic that for small timesteps the spectra and eigenfunctions of the resultant Koopman operator should be close to those of the Liouville operator representing the continuous time system \cite{brunton2019data}.
However, for two fixed timesteps, the corresponding Koopman operators can have different collections of eigenfunctions and eigenvalues, and these are artifacts of the discretization itself \cite{gonzalez2021anti}. Since in most cases the Koopman operators are used for this analysis, it is not clear if there is a method for distinguishing which of these eigenfunctions and eigenvalues are a product of the discretization and which are fundamental to the dynamics themselves. Finally, and perhaps most alarming, is that Koopman modes themselves exist for only a small subset of Koopman operators \cite{gonzalez2021anti}. Specifically, if a Koopman operator is self-adjoint, then it admits an orthonormal basis of eigenfunctions \cite{brunton2019data}, and the projection of the full state observable onto this basis yields a collection of (vector valued) coefficients attached to these basis functions. These coefficients are known as Koopman Modes or Dynamic Modes. Koopman operators are not necessarily diagonalizable over a given Hilbert space, and when they are diagonalizable, their complete eigenbasis is not always an orthogonal basis. Hence, as the full state observable is projected on larger and larger finite collections of eigenfunctions, the weights attached to each eigenfunction will change as more are added. This adjustment of the weights with the addition of more eigenfunctions is why a series expansion is only ever given in Hilbert space theory when there is an orthonormal basis of eigenfunctions; otherwise, an expansion is written as a limit of finite linear combinations of eigenfunctions \cite{pedersen2012analysis}.\footnote{There are notable exceptions, such as in atomic decompositions \cite{zhu2012analysis}. However, this is another rare property of a basis.} To address these limitations, two major modifications are made, where Koopman operators are removed from the analysis in favor of Liouville operators (known as Koopman generators in special cases), and these operators are shown to be compact for certain pairs of Hilbert spaces selected separately as the domain and range of the operator. (This separation of the domain and range is not possible for Koopman operators.) While eigenfunctions are discarded in the general analysis, a viable reconstruction algorithm is still achievable, and the sacrifice of eigenfunctions realizes theoretical goals of DMD analysis that have yet to be achieved in other contexts. It should be noted that Liouville and Koopman operators rarely admit a diagonalization, and as such, this approach discards that additional assumption on the operators. However, at the cost of well defined Dynamic Modes, an eigenfunction approach is still achievable when the domain is embedded in the range of the operator. This allows for the search for eigenfunctions through finite rank approximations that converge to the Liouville operator. The result is a norm convergent DMD routine (using eigenfunctions), which is an achievement over the SOT convergent results previously established in the field \cite{korda2018convergence}. This gives a balance between the two convergence methods presented in this manuscript, where well defined modes come at the price of ease of reconstruction, and a straightforward reconstruction algorithm may not have well defined limiting dynamic modes (a problem shared with all other DMD routines). To be explicit, the singular DMD approach yields the following benefits: \begin{enumerate} \item Eliminates the requirement of forward invariance.
(Aligning with the method given in \cite{rosenfeld2019occupation}). \item Provides well defined Dynamic Modes. \item Approximates a compact operator, thereby achieving convergence. \item Yields an orthonormal basis through which the full state observable may be decomposed. \end{enumerate} However, this achievement comes at the expense of eigenfunctions of the given operator. As it turns out, the abandonment of eigenfunctions for the analysis does not actually limit the applicability, where even for very simple dynamics, such as $f(x) = x^2$ in the one dimensional setting, the corresponding Liouville operators will have no eigenfunctions over any space of continuous functions. For the present example, the solution to the eigenfunction equation, $g'(x)x^2 = \lambda g(x)$, gives $g(x) = e^{-\lambda/x}$ for $\lambda \neq 0$, a discontinuous function on the real line. Additionally, reconstruction of the original time series may still be achieved using Runge-Kutta like methods. The DMD routine leveraging the case where the domain is embedded in the range provides the following: \begin{enumerate} \item Eliminates the requirement of forward invariance. (Aligning with the method given in \cite{rosenfeld2019occupation}). \item Approximates a compact operator, thereby achieving convergence. \item Yields an approximate eigenbasis through which the full state observable may be decomposed. \item An ease of reconstruction through the eigenfunctions. \end{enumerate} It should be noted that there have been several attempts at providing compact operators for the study of DMD. The approaches of \cite{das2021reproducing} and \cite{rosenfeld2019dynamic} find compact operators through the multiplication of an auxiliary operator against Koopman and Liouville operators, respectively. However, the resultant operators are not the operators that truly correspond to the dynamics in question, and as such, the decomposition of those operators can only achieve heuristic results. The approach taken presently gives compact Liouville operators directly connected with the continuous time dynamics. \section{Reproducing Kernel Hilbert Spaces} A reproducing kernel Hilbert space (RKHS), $H$, over a set $X$ is a space of functions from $X$ to $\mathbb{R}$ such that the functional of evaluation, $E_x g := g(x)$, is bounded for every $x \in X$. By the Riesz representation theorem, this means that for each $x \in X$ there exists a function $K_x \in H$ such that $\langle f, K_x \rangle_H = f(x)$ for all $f \in H$. The function $K_x$ is called the kernel function centered at $x$, and the function $K(x,y) := \langle K_y, K_x \rangle_H$ is called the kernel function corresponding to $H$. Note that $K_y(x) = K(x,y).$ Classical examples of kernel functions in data science are the Gaussian radial basis function for $\mu > 0$, $K(x,y) = \exp(-\frac{1}{\mu} \| x- y\|^2)$, and the exponential dot product kernel, $\exp(\frac{1}{\mu} x^T y)$ \cite{steinwart2008support}. The function $K(x,y)$ is a positive definite kernel function, which means that for every finite collection of points, $\{ x_1, \ldots, x_M \} \subset X$, the Gram matrix $( K(x_i,x_j) )_{i,j=1}^M$ is positive definite. For each positive definite kernel function, there exists a unique RKHS for which $K$ is the kernel function of that space, by the Aronszajn-Moore theorem \cite{aronszajn1950theory}. Given a RKHS, $H$, over $X \subseteq \mathbb{R}^n$ consisting of continuous functions, and given a continuous signal, $\theta:[0,T] \to X$, the linear functional $g \mapsto \int_0^T g(\theta(t)) dt$ is bounded.
Hence, there exists a function, $\Gamma_{\theta} \in H$, such that $\langle g, \Gamma_{\theta} \rangle_{H} = \int_0^T g(\theta(t)) dt$ for all $g \in H$. The function $\Gamma_{\theta}$ is called the occupation kernel in $H$ corresponding to $\theta$. These occupation kernels were first introduced in \cite{rosenfeld2019occupation,rosenfeld2019occupation2}. \section{Compact Liouville Operators} This section demonstrates the existence of compact Liouville operators, given formally as $A_f g(x) = \nabla g(x) f(x)$, where compactness is achieved through the consideration of differing spaces for the domain and range of the operator. Section \ref{sec:classical} builds on a classical result, where differentiation between differing weighted Hardy spaces can be readily shown to be compact. Following a similar argument, Section \ref{sec:severalvariables} presents several examples of compact Liouville operators over spaces of functions of several variables. We would like to emphasize that the collections of compact Liouville operators are not restricted to these particular pairs of function spaces; rather, this section provides several examples demonstrating the existence of such operators, thereby validating the approach in the sequel. \subsection{Inspirations from Classical Function Theory}\label{sec:classical} Consider the weighted Hardy spaces (cf. \cite{beneteau2018remarks}), $H^2_{\omega}$, where $\omega = \{ \omega_{m} \}_{m=0}^\infty$ is a sequence of positive real numbers such that $|\omega_{m+1}/\omega_m| \to 1$, and $g(z) = \sum_{m=0}^\infty a_m z^m$ is a function in $H^2_{\omega}$ if the coefficients of $g$ satisfy $\|g\|_{H_{\omega}^2}^2 := \sum_{m=0}^\infty \omega_m |a_m|^2 < \infty$. Each weighted Hardy space is a RKHS over the complex unit disc $\mathbb{D} =\{ z \in \mathbb{C} : |z| < 1\}$ with kernel function given as $K_{\omega}(z,w) = \sum_{m=0}^\infty \frac{z^m \bar w^m}{\omega_m}$, and the monomials $\left\{ \frac{z^m}{\sqrt{\omega_m}} \right\}_{m=0}^\infty$ form an orthonormal basis for each space. The weighted Hardy space corresponding to the sequence $\omega_{(0)} := \{ 1, 1, \ldots \}$ is the classical Hardy space, $H^2$, that was introduced by Riesz in 1923 \cite{riesz1923randwerte}. The Dirichlet space corresponds to the weight sequence $\omega_{(1)} = \{ (m+1) \}_{m=0}^\infty$, and the Bergman space corresponds to $\omega_{(-1)} = \{ (m+1)^{-1} \}_{m=0}^\infty$. Of interest here is the weighted Hardy space corresponding to $\omega_{(3)} := \{ (m+1)^3 \}_{m=0}^\infty$, which will be denoted as $H^{2}_3$ for convenience. It is immediately evident that the operation of differentiation is bounded as an operator from $H^2_3$ to $H^2$. The reason can be seen directly through the power series for these function spaces. In particular, a function $h(z) = \sum_{m=0}^\infty a_m z^m$ is in $H^2_3$ if $\| h\|_{H_3^2}^2 = \sum_{m=0}^\infty (m+1)^3 |a_m|^2 < \infty$, and in the Hardy space if $\| h\|_{H^2}^2 = \sum_{m=0}^\infty |a_m|^2 < \infty.$ A function $g$ in $H^2_3$ has derivative $g'(z) = \sum_{m=1}^\infty m a_m z^{m-1} = \sum_{m=0}^\infty (m+1) a_{m+1} z^{m}$, and by considering the Hardy space norm, \[\left\| \frac{d}{dz} g \right\|_{H^2}^2 = \sum_{m=0}^\infty (m+1)^2 |a_{m+1}|^2 \le \sum_{m=0}^\infty (m+1)^3 |a_{m+1}|^2,\] but the right hand side is exactly the squared $H^2_3$ norm of $g$, less the constant term. Hence differentiation is a bounded operator from the space $H^2_3$ to the Hardy space with operator norm at most $1$.
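As a quick sanity check of this inequality, the following is a minimal numerical sketch (illustrative only) comparing the two squared norms for a randomly generated, truncated power series.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 200
m = np.arange(M + 1)
# Coefficients a_0, ..., a_M of g; the decay keeps the H^2_3 norm moderate.
a = rng.standard_normal(M + 1) / (m + 1.0) ** 3

norm_g_sq = np.sum((m + 1) ** 3 * a ** 2)            # ||g||_{H_3^2}^2
# g'(z) = sum_{m >= 0} (m+1) a_{m+1} z^m, so in the Hardy space:
norm_dg_sq = np.sum((m[:-1] + 1) ** 2 * a[1:] ** 2)  # ||g'||_{H^2}^2

assert norm_dg_sq <= norm_g_sq
print(norm_dg_sq, norm_g_sq)
\end{verbatim}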
\begin{proposition}The operator $\frac{d}{dz} : H_3^2 \to H^2$ is compact. Moreover, if $f:\overline{\mathbb{D}} \to \mathbb{D}$ is a bounded analytic function corresponding to a bounded multiplication operator, $M_f g(z) := g(z) f(z)$, over the Hardy space, then the Liouville operator, $A_f := M_f\frac{d}{dz}$, is compact from $H_3^2$ to $H^2$.\end{proposition} \begin{proof} To see that differentiation is a compact operator from $H^2_3$ to the Hardy space, we may select a sequence of finite rank operators that converge in norm to differentiation. In particular, note that the monomials form an orthonormal basis of the Hardy space, as is evident from the given norm. Let $\alpha_M := \{ 1, z, \ldots, z^M\}$ be the first $M+1$ monomials in $z$, and let $P_{\alpha_M}$ be the projection onto the span of these monomials. The operator $P_{\alpha_M} \frac{d}{dz}$ is a finite rank operator, where the image of this operator is a polynomial of degree up to $M$. To demonstrate that this sequence of finite rank operators converges to differentiation in the operator norm, it must be shown that the difference under the operator norm, \[\left\| P_{\alpha_M} \frac{d}{dz} - \frac{d}{dz} \right\|_{H_3^2}^{H^2} := \sup_{g \in H_3^2} \frac{\| P_{\alpha_M} \frac{d}{dz} g - \frac{d}{dz} g\|_{H^2}}{\|g\|_{H_3^2}},\] goes to zero. Note that \begin{gather*}\left\| P_{\alpha_M} \frac{d}{dz} g - \frac{d}{dz} g\right\|_{H^2}^2 = \sum_{m=M+1}^\infty (m+1)^2 |a_{m+1}|^2 \\= \sum_{m=M+1}^\infty \frac{1}{m+1} (m+1)^3 |a_{m+1}|^2 \le \frac{1}{M+1} \sum_{m=M+1}^\infty (m+1)^3 |a_{m+1}|^2 \le \frac{1}{M+1} \|g\|_{H_{3}^2}^2.\end{gather*} Hence $\left\| P_{\alpha_M} \frac{d}{dz} - \frac{d}{dz} \right\|_{H_3^2}^{H^2} \le \frac{1}{\sqrt{M+1}} \to 0.$ This proves that differentiation is a compact operator from $H_3^2$ to $H^2$. If a function, $f$, is a bounded analytic function on the closed unit disc, then it is the symbol for a bounded multiplier over $H^2$. Hence, the operator $M_f \frac{d}{dz}$ is compact from $H_3^2$ to $H^2$. To be explicit, since $P_{\alpha_M} \frac{d}{dz}$ has finite rank, $M_f \left(P_{\alpha_M} \frac{d}{dz}\right)$ also has finite rank. Moreover, $\left\| M_f P_{\alpha_M} \frac{d}{dz} - M_f \frac{d}{dz} \right\|_{H_3^2}^{H^2} = \left\| M_f \left(P_{\alpha_M} \frac{d}{dz} - \frac{d}{dz}\right) \right\|_{H_3^2}^{H^2} \le \| M_f \|_{H^2}^{H^2} \left\| P_{\alpha_M} \frac{d}{dz} - \frac{d}{dz} \right\|_{H_3^2}^{H^2} \to 0.$ Hence, $M_f \frac{d}{dz}$ is an operator norm limit of finite rank operators, and is compact. Finally, it can be seen that $M_f \frac{d}{dz} g(z) = g'(z) f(z) = A_f g(z)$, and $A_f$ is a compact Liouville operator from $H_3^2$ to $H^2$. \end{proof} \subsection{Compact Liouville Operators of Several Variables}\label{sec:severalvariables} The example of the previous section demonstrated that compact Liouville operators may be obtained in one dimension. However, this is readily extended to higher dimensions through similar arguments, and in particular can be demonstrated for dot product kernels of the form $K(x,y) = (1+\mu x^T y)^{-1}$. In some cases, such as with the exponential dot product kernel and the Gaussian RBF, where the kernel functions over $\mathbb{R}^n$ decompose as a product of kernel functions over $\mathbb{R}$ for the individual variables, the establishment of compact Liouville operators from the single variable spaces to auxiliary range RKHSs yields compact Liouville operators through tensor products of the respective spaces.
The exponential dot product kernel, with parameter $\mu > 0$, is given as $K(x,y) = \exp\left(\mu x^Ty\right)$. In the single variable case, the native space for this kernel may be expressed as $F^2_{\mu}(\mathbb{R}) = \left\{ f(x) = \sum_{m=0}^\infty a_m x^m : \sum_{m=0}^\infty |a_m|^2 \frac{m!}{\mu^m} < \infty \right\}$. This definition can be readily extended to higher dimensions, where the collection of monomials, $\sqrt{\frac{\mu^{|\alpha|}}{\alpha!}}\, x^{\alpha}$, with multi-indices $\alpha \in \mathbb{N}^n$, forms an orthonormal basis. The norm of functions in $F_\mu^2(\mathbb{R}^n)$ will be denoted by $\|g\|_\mu.$ In this setting, if $\mu_2 > \mu_1$ (i.e. $1/\mu_1 > 1/\mu_2$), then by arguments similar to those given in the previous section, it follows that partial differentiation with respect to each variable is a compact operator from $F^2_{\mu_1}$ to $F^2_{\mu_2}$. However, since multiplication operators are unbounded from $F^2_{\mu}$ to itself for every $\mu > 0$, another step is necessary to ensure compactness. \begin{lemma} Suppose that $\eta < \mu$. Then, given any polynomial of several variables, $f$, the multiplication operator $M_{f} : F_\eta^2(\mathbb{R}^n) \to F_\mu^2(\mathbb{R}^n)$ is bounded. \end{lemma} \begin{proof}To facilitate a clarity of exposition, this will be proven with respect to functions of a single variable. The same arguments extend to the spaces of several variables, albeit with more bookkeeping. Let $g \in F^2_{\eta}$. Then $g(x) = \sum_{m=0}^\infty a_m x^m$, and $\| g\|_\eta^2 = \sum_{m=0}^\infty |a_m|^2 \frac{m!}{\eta^m}$. For $f \equiv 1$, $M_1$ is the identity operator. Thus, the boundedness of $M_1$ is equivalent to demonstrating that $F_\eta^2$ is boundedly included in $F_\mu^2$. In particular, note that \begin{gather*} \|M_1 g \|_\mu^2 = \| g \|_\mu^2 = \sum_{m=0}^\infty |a_m|^2 \frac{m!}{\mu^m} = \sum_{m=0}^\infty |a_m|^2 \left(\frac{\eta}{\mu}\right)^m \frac{m!}{\eta^m} \\ \le \sum_{m=0}^\infty |a_m|^2 \frac{m!}{\eta^m} = \| g \|^2_\eta. \end{gather*} Fix $k \in \mathbb{N}$ and consider the multiplication operator $M_{x^k} : F^2_\eta \to F^2_\mu$ defined as $M_{x^k} g := x^k g$ for all $g \in F^2_{\eta}$. Note that the power series of $M_{x^k} g$ is given as $x^k g(x) = \sum_{m=0}^\infty a_m x^{m+k} = \sum_{m=k}^\infty a_{m-k} x^m$. Hence, \begin{gather*}\| x^k g(x) \|^2_{\mu} = \sum_{m=k}^\infty |a_{m-k}|^2 \frac{m!}{\mu^m}= \sum_{m=0}^\infty |a_{m}|^2 \frac{(m+k)!}{\mu^{m+k}}\\ = \sum_{m=0}^\infty |a_m|^2 \frac{(m+k)!}{m!\,\mu^k} \frac{m!}{\mu^m} = \sum_{m=0}^\infty |a_m|^2 \frac{(m+k)!}{m!\,\mu^k} \left(\frac{\eta}{\mu}\right)^m \frac{m!}{\eta^m}, \end{gather*} and as $\frac{(m+k)!}{m!\,\mu^k} \left(\frac{\eta}{\mu}\right)^m$ is bounded as a function of $m$ by some constant $C > 0$ (owing to the exponential decay of $\left(\eta/\mu\right)^m$), it follows that $\| M_{x^k} \|_{F_\eta^2}^{F_\mu^2} \le \sqrt{C}$. Hence, by taking linear combinations of monomials, it follows that any multiplication operator with polynomial symbol is bounded. \end{proof} \begin{remark} The authors emphasize that the collection of bounded multiplication operators between these spaces is strictly larger than those with polynomial symbols. The purpose of this lemma is simply to support the existence of compact Liouville operators, rather than to provide a complete classification. \end{remark}
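The lemma can be illustrated numerically; the sketch below (illustrative parameters and truncated coefficient sequences) checks the bound $\|x^k g\|_\mu^2 \le C \|g\|_\eta^2$ with the constant $C$ identified in the proof.

\begin{verbatim}
import numpy as np
from math import factorial

eta, mu, k, M = 0.5, 1.0, 2, 40
m = np.arange(M + 1)

rng = np.random.default_rng(1)
a = rng.standard_normal(M + 1) / np.sqrt([factorial(i) for i in m])

# ||g||_eta^2 = sum |a_m|^2 m!/eta^m ; the coefficients of x^k g are the
# a_m shifted by k, so ||x^k g||_mu^2 = sum |a_m|^2 (m+k)!/mu^(m+k).
g_eta_sq = np.sum(a**2 * np.array([factorial(i) / eta**i for i in m]))
xkg_mu_sq = np.sum(a**2 * np.array([factorial(i + k) / mu**(i + k)
                                    for i in m]))

# C = max_m (m+k)!/(m! mu^k) (eta/mu)^m, finite by the exponential decay.
C = max(factorial(i + k) / (factorial(i) * mu**k) * (eta / mu)**i for i in m)
assert xkg_mu_sq <= C * g_eta_sq
print(xkg_mu_sq, C * g_eta_sq)
\end{verbatim}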
\begin{theorem} Let $\mu_3 > \mu_1$, and suppose that $f$ is a vector valued function of several variables, where each entry is a polynomial. Then the Liouville operator $A_f : F^2_{\mu_1}(\mathbb{R}^n) \to F^2_{\mu_3}(\mathbb{R}^n)$ defined as $A_f g = \nabla g \cdot f$ is a compact operator. \end{theorem} \begin{proof} Let $f = (f_1, f_2, \ldots, f_n)^T$, and select $\mu_2$ such that $\mu_1 < \mu_2 < \mu_3$. For each $i = 1,\ldots, n$, the operator of partial differentiation $\frac{\partial}{\partial x_i} : F_{\mu_1}^2(\mathbb{R}^n) \to F_{\mu_2}^2(\mathbb{R}^n)$ is a compact operator, and the multiplication operator $M_{f_i} : F_{\mu_2}^2(\mathbb{R}^n) \to F_{\mu_3}^2(\mathbb{R}^n)$ is bounded. Hence, the operator $M_{f_i} \frac{\partial}{\partial x_i}$ is compact. As $A_f = M_{f_1} \frac{\partial}{\partial x_1} + \cdots + M_{f_n} \frac{\partial}{\partial x_n}$, it follows that $A_f$ is a compact operator from $F_{\mu_1}^2(\mathbb{R}^n)$ to $F_{\mu_3}^2(\mathbb{R}^n)$. \end{proof} This section has thus established the existence of compact Liouville operators between various pairs of spaces. It is emphasized that these are not the only pairs for which a compact Liouville operator may be determined. \section{Singular Dynamic Mode Decompositions for Compact Liouville Operators} The objective of this section is to determine a decomposition of the full state observable, $g_{id}(x) := x$, with respect to an orthonormal basis obtained from a Liouville operator corresponding to a continuous time dynamical system $\dot x = f(x)$. We will let $H$ and $\tilde H$ be two RKHSs over $\mathbb{R}^n$ such that the Liouville operator, $A_f g(x) = \nabla g(x) f(x)$, is compact as an operator from $H$ to $\tilde H$. To obtain an orthonormal basis, a singular value decomposition for the compact operator $A_f$ is obtained. Specifically, note that as $A_f$ is compact, so is $A_f^*$. Hence, $A_f^* A_f$ is diagonalizable as a self-adjoint compact operator. Thus, there is a countable collection of non-negative eigenvalues $\sigma_m^2 \ge 0$ and eigenfunctions $\varphi_m$ corresponding to $A_f^* A_f$, such that $A_f^* A_f \varphi_m = \sigma_m^2 \varphi_m$. Since $A_f^* A_f$ is self-adjoint, $\{ \varphi_m \}_{m=0}^\infty$ may be selected in such a way that they form an orthonormal basis of $H$. The functions $\varphi_m$ are the right singular vectors of $A_f$. For $\sigma_m \neq 0$, the left singular vectors may be determined as $\psi_m := \frac{A_f \varphi_m}{\sigma_m},$ and the collection of nonzero $\psi_m$ forms an orthonormal set in $\tilde H$. This may be seen via \begin{gather*}\langle \psi_m, \psi_{m'} \rangle_{\tilde H} = \frac{1}{\sigma_{m}\sigma_{m'}}\langle A_f \varphi_m, A_f \varphi_{m'} \rangle_{\tilde H}\\= \frac{1}{\sigma_{m}\sigma_{m'}} \langle \varphi_m, A_f^* A_f \varphi_{m'} \rangle_{H} = \frac{\sigma_{m'}^2}{\sigma_{m}\sigma_{m'}} \langle \varphi_m, \varphi_{m'} \rangle_H= \frac{\sigma_{m'}^2}{\sigma_{m}\sigma_{m'}} \delta_{m,m'},\end{gather*} where $\delta_{\cdot,\cdot}$ is the Kronecker delta function. Finally, \[A_f g = \sum_{\sigma_m \neq 0} \langle g, \varphi_m \rangle_H \sigma_m \psi_m\] for all $g \in H$, and \[A_f^* h = \sum_{\sigma_m \neq 0} \langle h, \psi_m \rangle_{\tilde H} \sigma_m \varphi_m.\]
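In finite dimensions, this construction is precisely the matrix singular value decomposition; the following minimal sketch (a synthetic matrix standing in for a finite rank representation of $A_f$) verifies the relations above numerically.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))  # stand-in finite rank representation

# Rows of Vt are the right singular vectors phi_m; they diagonalize A^T A
# with eigenvalues sigma_m^2. Columns of U are the left singular vectors.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A.T @ A @ Vt.T, Vt.T * s**2)     # A^T A phi = sigma^2 phi
for j, sigma in enumerate(s):
    assert np.allclose(A @ Vt[j] / sigma, U[:, j])  # psi_m = A phi_m/sigma_m
assert np.allclose(U.T @ U, np.eye(4))              # psi_m are orthonormal
\end{verbatim}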
To find a decomposition for the full state observable, $g_{id}$, first note that the full state observable is vector valued, whereas the Hilbert spaces consist of scalar valued functions. To ameliorate this discrepancy, we will work with the individual entries of the full state observable, namely the maps $x \mapsto (x)_i$, for $i=1,\ldots,n$, which are the mappings of $x$ to its individual components. When $(x)_i$ resides in the Hilbert space, such as with the space $F_\mu^2(\mathbb{R}^n)$, it may be directly expanded with respect to the right singular vectors of $A_f$. If $(x)_i$ is not in the space, as is the case with the Gaussian RBF, but the space is universal, then a suitable approximation may be determined over a fixed compact subset, and the approximation will be expanded instead. Performing the entrywise decomposition of the full state observable is equivalent to performing the decomposition over vector valued RKHSs with diagonal kernel operators, and replacing the gradient of $g$ with the matrix valued derivative. Hence, for each $i=1,\ldots,n$, we have $(x)_i = \sum_{m=0}^\infty (\xi_m)_i \varphi_m(x)$, where $(\xi_m)_i = \langle (x)_i, \varphi_m \rangle_{H}$. The vectors $\xi_m$ are called the \emph{singular Liouville modes} of the dynamical system with respect to the pair of Hilbert spaces $H$ and $\tilde H$. Note that for a trajectory of the system, given as $x(t)$, it can be seen that \begin{gather*} \dot x(t) = f(x(t)) = \nabla g_{id}(x(t)) f(x(t)) = A_f g_{id}(x(t))\\ = \sum_{m=0}^\infty \langle g_{id}, \varphi_{m} \rangle_{H} \sigma_m \psi_m(x(t)) = \sum_{m=0}^\infty \xi_m \sigma_m \psi_m(x(t)). \end{gather*} Hence, $x(t)$ satisfies a differential equation with respect to the left singular vectors of the Liouville operator and the singular Liouville modes. Given these quantities, reconstruction of $x(t)$ is possible using tools from the solution of initial value problems. In particular, the following form of the equation may be exploited: \[ x(t) = x(0) + \sum_{m=0}^\infty \xi_m \sigma_m \int_0^t \psi_m(x(\tau)) d\tau. \] \section{Recovering an Eigenfunction Approach in Special Cases} While the majority of this manuscript is aimed at the singular Dynamic Mode Decomposition, where the domain and range are different for the compact Liouville operator, there is still a possibility of obtaining an eigendecomposition in special cases. In particular, for many of the examples shown above, the domain and range spaces have similar structure, and the range space has a less stringent requirement on the functions it contains. This means that the domain itself may be embedded in the range space, and if there is a complete set of eigenfunctions in this embedded space, then the operator may still be diagonalized. Note that the operator still maps between two different Hilbert spaces, which means that the inner product on the embedding is different than the inner product on the domain. This difference will appear in the numerical methods given in subsequent sections. The following is a well known result (cf. \cite{zhu2012analysis}), and is included here for illustration purposes. \begin{proposition} If $\mu_1 < \mu_2$, then $F^2_{\mu_1}(\mathbb{R}^n) \subset F^2_{\mu_2}(\mathbb{R}^n)$. \end{proposition} \begin{proof} Again this is shown for the single variable case, where the multivariate case follows by an identical argument, but with more bookkeeping. Suppose that $g \in F^2_{\mu_1}(\mathbb{R})$ with $g(z) = \sum_{m=0}^\infty a_m z^m$. Then \begin{gather*}\|g\|_{F_{\mu_2}^2(\mathbb{R})}^2 = \sum_{m=0}^\infty |a_m|^2 \frac{m!}{\mu_2^m} = \sum_{m=0}^\infty |a_m|^2 \left( \frac{\mu_1}{\mu_2} \right)^m \frac{m!}{\mu_1^m} \le \sum_{m=0}^\infty |a_m|^2 \frac{m!}{\mu_1^m} = \|g\|_{F_{\mu_1}^2(\mathbb{R})}^2.\end{gather*} Since the quantity on the right is bounded, so is the quantity on the left.
Hence $g \in F_{\mu_2}^2(\mathbb{R}).$ \end{proof} \begin{example} A simple example demonstrating that an eigenbasis may be found between the two spaces arises in the study of $A_x:F_{\mu_1}^2(\mathbb{R}) \to F_{\mu_2}^2(\mathbb{R})$ for $\mu_1 < \mu_2$. Note that an eigenfunction, $\varphi$, for $A_x$ must reside in $F_{\mu_1}^2(\mathbb{R}) \cap F_{\mu_2}^2(\mathbb{R}) = F_{\mu_1}^2(\mathbb{R})$, and satisfy $\varphi'(x) x = \lambda \varphi(x)$. Consequently, $\varphi$ takes the form $\varphi(x) = x^{\lambda}$, which is in $F_{\mu_1}^2(\mathbb{R})$ only for $\lambda = 0, 1, 2, \ldots$. Hence, the eigenfunctions of $A_x$ are the monomials. Monomials are contained in $F_{\mu_1}^2(\mathbb{R})$ and form a complete eigenbasis for both spaces. Note that the norm of $x^m$ is $\sqrt{\frac{m!}{\mu_1^m}}$ in $F_{\mu_{1}}^2(\mathbb{R})$ and $\sqrt{\frac{m!}{\mu_2^m}}$ in $F_{\mu_{2}}^2(\mathbb{R}).$ \end{example} The following proposition is obtained in the same manner as in the classical case. \begin{proposition} Suppose that $H$ and $\tilde H$ are two RKHSs over $\mathbb{R}^n$, and that $H \subset \tilde H$. If $\varphi \in H$ is an eigenfunction for $A_f$, as $A_{f} \varphi = \lambda \varphi$, then given a trajectory $x:[0,T] \to \mathbb{R}^n$ satisfying $\dot x = f(x)$, the following holds: $\varphi(x(t)) = e^{\lambda t} \varphi(x(0)).$ \end{proposition} \begin{proof} Since $A_{f}\varphi = \nabla \varphi f$, it follows that \[\frac{d}{dt} \varphi(x(t)) = \nabla \varphi(x(t)) \dot x(t) = \nabla \varphi(x(t)) f(x(t)) = A_f \varphi(x(t)) = \lambda \varphi(x(t)).\] That is, $\frac{d}{dt} \varphi(x(t)) = \lambda \varphi(x(t)),$ and the conclusion follows. \end{proof} Suppose that $A_f : H \to \tilde H$ has a complete eigenbasis in the sense that the span of the eigenfunctions, $\{ \varphi_m \}_{m = 1}^\infty$, is dense in $H$. Then each entry of the full state observable, $g_{id}$, namely $(x)_i$ for $i=1,\ldots,n$, may be expressed as \[ (x)_i = \lim_{M\to\infty} \sum_{m=1}^M (\xi_{m,M})_i \varphi_m(x),\] where $(\xi_{m,M})_i$ is the $m$-th coefficient obtained from projecting $(x)_i$ onto the span of the first $M$ eigenfunctions. If the eigenfunctions are orthogonal, then the dependence on $M$ may be removed from $\xi_{m,M}$. Hence, the full state observable is obtained from \begin{equation}\label{eq:eigendecomp} g_{id}(x) = \lim_{M\to\infty} \sum_{m=1}^M \xi_{m,M} \varphi_m(x),\end{equation} with $\xi_{m,M}$ being the vector obtained by stacking $(\xi_{m,M})_i$. Finally, by substituting $x(t)$ into this representation (where $\dot x = f(x)$), the following holds: \begin{equation}\label{eq:dmdrepresentation} x(t) = g_{id}(x(t)) = \lim_{M\to\infty} \sum_{m=1}^M \xi_{m,M} e^{\lambda_m t} \varphi_m(x(0)).\end{equation} Hence, this methodology yields a DMD routine, where the finite rank representations will converge to the compact Liouville operator, following the proof given in the Appendix of \cite{rosenfeld2019dynamic}.
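Once the modes $\xi_m$, the eigenvalues $\lambda_m$, and the eigenfunction values $\varphi_m(x(0))$ are in hand, evaluating \eqref{eq:dmdrepresentation} is a simple sum. A minimal sketch with synthetic values (not data from any particular system) follows.

\begin{verbatim}
import numpy as np

def dmd_reconstruct(t, modes, lams, phi0):
    # x(t) ~ sum_m xi_m exp(lam_m t) phi_m(x(0)), as in the representation
    # above. modes: (M, n) rows xi_m; lams: (M,) eigenvalues; phi0: (M,)
    # values of the eigenfunctions at the initial state.
    return np.real(modes.T @ (np.exp(lams * t) * phi0))

# Synthetic 2-D example with M = 2 modes.
modes = np.array([[1.0, 0.0], [0.0, 1.0]])
lams = np.array([-0.5, -1.0])
phi0 = np.array([1.0, 2.0])
for t in (0.0, 0.5, 1.0):
    print(t, dmd_reconstruct(t, modes, lams, phi0))
\end{verbatim}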
\section{Singular Dynamic Mode Decomposition Algorithm} This section is aimed at determining a convergent algorithm that produces approximations of the singular Liouville modes and the singular vectors of $A_f$. While an eigenfunction expansion is still possible in the case of nested spaces, the Singular Dynamic Mode Decomposition algorithm is technically more general. Moreover, the SVD ensures the existence of dynamic modes, which need not be well defined in the eigenfunction case. From the data perspective, a collection of trajectories, $\{ \gamma_j : [0,T_j] \to \mathbb{R}^n \}_{j=1}^M$, corresponding to an unknown dynamical system, $f:\mathbb{R}^n \to \mathbb{R}^n$, as $\dot \gamma_j = f(\gamma_j)$, is assumed to have been observed. The objective of DMD is to obtain an approximation of the dynamic modes of the system, and to obtain an approximate reconstruction of a given trajectory. Once a reconstruction is determined, data driven predictions concerning future states of the trajectory may be made. A DMD routine is somewhat like a Fourier series representation, which can reproduce a continuous trajectory exactly; however, DMD methods exploit a trajectory's underlying dynamic structure. This routine effectively interpolates the action of the Liouville operator on a collection of basis functions. When these basis functions form a complete set within the Hilbert space, which can be achieved by selecting a dense collection of short trajectories throughout the workspace, the sequence of finite rank approximations determined by this routine converges to the compact Liouville operator in norm, which means that the left and right singular functions of the finite rank operators in the sequence converge to those of the Liouville operator, and that the singular values converge as well. DMD routines involving the Koopman operator add the additional requirement of forward invariance for the sake of discretizations. This method, as well as that of \cite{rosenfeld2019dynamic}, sidesteps that requirement by accessing the Liouville operators directly through their connection with the occupation kernels of the RKHSs. To wit, given two RKHSs of continuously differentiable functions, $H$ and $\tilde H$, with kernels $K$ and $\tilde K$ respectively, and a compact Liouville operator, $A_f : H \to \tilde H$, the occupation kernel, $\Gamma_{\gamma_{j}} \in \tilde H$, corresponding to the trajectory $\gamma_j$ satisfies $A_f^* \Gamma_{\gamma_j} = K(\cdot,\gamma_j(T_j)) - K(\cdot,\gamma_j(0)),$ where $K$ is the kernel function for the space $H$. In particular, given $g \in H$, \begin{gather*} \langle A_f g, \Gamma_{\gamma_j} \rangle_{\tilde H} = \int_0^{T_j} \nabla g(\gamma_j(t)) f(\gamma_j(t)) dt\\ = \int_0^{T_j} \frac{d}{dt}\left[ g(\gamma_j(t)) \right] dt = g(\gamma_j(T_j)) - g(\gamma_j(0)) = \langle g, K_{\gamma_j(T_j)} - K_{\gamma_j(0)} \rangle_{H}. \end{gather*} The objective is to construct a finite rank approximation of $A_f$ through which an SVD may be performed to find approximate singular values and singular vectors, and to ultimately approximate the singular Liouville modes. Note that since the dynamics are unknown, the adjoint must be approximated instead, where the action of the adjoint on the occupation kernels provides a sample of the operator. Thus, the finite rank representation will be determined through the restriction of $A_f^*$ to the span of the ordered basis $\alpha = \{ \Gamma_{\gamma_j} \}_{j=1}^M \subset \tilde H$. A corresponding basis for $H$ must also be selected, and given the available information, $\beta = \{ K(\cdot,\gamma_j(T_j)) - K(\cdot,\gamma_j(0))\}_{j=1}^M$ is most reasonable. Of course, this leads to a rather benign matrix representation of \[ [A_f^*]_{\alpha}^\beta = \begin{pmatrix} 1 & & \\ & \ddots & \\ & & 1\end{pmatrix}.\] Moreover, if this matrix is input into an SVD routine, typical algorithms would not be aware of the non-orthogonal inner products between the basis elements.
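The Gram matrices appearing below are computed from data by quadrature. The following sketch (with the exponential dot product kernel assumed purely for illustration, and all names hypothetical) evaluates $\langle \Gamma_{\gamma_i}, \Gamma_{\gamma_j} \rangle$ as a double integral of the kernel along two sampled trajectories, and the entries $\langle A_f^* \Gamma_{\gamma_i}, A_f^* \Gamma_{\gamma_j} \rangle_{H}$ from four kernel evaluations at the trajectory endpoints.

\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

def occ_inner(gi, ti, gj, tj, kernel):
    # <Gamma_i, Gamma_j> = int int kernel(gi(t), gj(s)) ds dt via Simpson.
    Kmat = kernel(gi, gj)                        # (len(ti), len(tj))
    return simpson(simpson(Kmat, x=tj, axis=1), x=ti)

def endpoint_inner(gi, gj, kernel):
    # <A_f^* Gamma_i, A_f^* Gamma_j>_H expands into four kernel values,
    # since A_f^* Gamma_j = K(., gj(T_j)) - K(., gj(0)).
    k = lambda x, y: kernel(x[None, :], y[None, :])[0, 0]
    return (k(gi[-1], gj[-1]) - k(gi[-1], gj[0])
            - k(gi[0], gj[-1]) + k(gi[0], gj[0]))

mu = 0.1
kernel = lambda X, Y: np.exp(mu * X @ Y.T)       # exponential dot product
t = np.linspace(0.0, 1.0, 11)
gi = np.column_stack([np.cos(t), np.sin(t)])     # sampled trajectories
gj = np.column_stack([np.exp(-t), t])
print(occ_inner(gi, t, gj, t, kernel), endpoint_inner(gi, gj, kernel))
\end{verbatim}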
To rectify the issue of non-orthonormal bases, two orthonormal bases $\alpha'$ and $\beta'$ may be obtained from eigendecompositions of the Gram matrices (which are assumed to be strictly positive definite) for $\alpha$ and $\beta$ respectively: \begin{gather*} \tilde G := \begin{pmatrix} \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} = \tilde V \tilde \Lambda \tilde V^*:=\\ \begin{pmatrix} | & & |\\ \tilde v_1 & \cdots & \tilde v_M\\ | & & | \end{pmatrix} \begin{pmatrix} \tilde \lambda_1 & & \\ & \ddots & \\ & & \tilde \lambda_M \end{pmatrix} \begin{pmatrix} - &\tilde v_1^* & -\\ & \vdots & \\ - & \tilde v_M^* & - \end{pmatrix},\text{ and} \\ G := \begin{pmatrix} \langle A_f^* \Gamma_{\gamma_1}, A_f^* \Gamma_{\gamma_1} \rangle_{ H} & \cdots & \langle A_f^* \Gamma_{\gamma_1}, A_f^* \Gamma_{\gamma_M} \rangle_{ H}\\ \vdots & \ddots & \vdots\\ \langle A_f^* \Gamma_{\gamma_M},A_f^* \Gamma_{\gamma_1} \rangle_{ H} & \cdots & \langle A_f^* \Gamma_{\gamma_M}, A_f^* \Gamma_{\gamma_M} \rangle_{ H}\\ \end{pmatrix} = V \Lambda V^*:=\\ \begin{pmatrix} | & & |\\ v_1 & \cdots & v_M\\ | & & | \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_M \end{pmatrix} \begin{pmatrix} - & v_1^* & -\\ & \vdots & \\ - & v_M^* & - \end{pmatrix}. \end{gather*} A more meaningful representation of $A_f^*$ may be found by re-expressing $[A_f^*]_{\alpha}^{\beta}$ in terms of the orthonormal sets $\alpha' = \{ q_j \}_{j=1}^M$ and $\beta' = \{ p_j \}_{j=1}^M$, where \begin{gather*} p_j = \frac{1}{\sqrt{v_j^* G v_j}} \sum_{\ell = 1}^M (v_j)_{\ell} ( K(\cdot,\gamma_\ell(T_\ell)) - K(\cdot,\gamma_\ell(0))), \text{ and}\\ q_j = \frac{1}{\sqrt{\tilde v_j^* \tilde G \tilde v_j}} \sum_{\ell=1}^M (\tilde v_j)_{\ell} \Gamma_{\gamma_\ell}.\end{gather*} In other words, \[ \begin{pmatrix} q_1(x) \\ \vdots \\ q_M(x) \end{pmatrix} = \begin{pmatrix} \left(\sqrt{\tilde v_1^* \tilde G \tilde v_1}\right)^{-1} & &\\ & \ddots &\\ & & \left(\sqrt{\tilde v_M^* \tilde G \tilde v_M}\right)^{-1} \end{pmatrix} \tilde V^T \begin{pmatrix} \Gamma_{\gamma_1}(x)\\ \vdots\\ \Gamma_{\gamma_M}(x) \end{pmatrix}, \] and a similar expression may be written for $p_j$. Write \begin{gather*}\tilde V_0 = \tilde V \diag\left(\left(\sqrt{\tilde v_1^* \tilde G \tilde v_1}\right)^{-1}, \ldots, \left(\sqrt{\tilde v_M^* \tilde G \tilde v_M}\right)^{-1}\right),\text{ and}\\ V_0 = V \diag\left(\left(\sqrt{ v_1^* G v_1}\right)^{-1}, \ldots, \left(\sqrt{ v_M^* G v_M}\right)^{-1}\right),\end{gather*} so that the coefficients of each column of $V_0$ and $\tilde V_0$ correspond to functions of norm 1 in their respective spaces. It follows that \begin{gather*} [A_f^*]_{\alpha'}^{\beta'} = V_0^{-1} [A_f^*]_{\alpha}^\beta \tilde V_0 = V_0^{-1} \tilde V_0. \end{gather*} That is, the matrix representation with respect to the bases $\alpha'$ and $\beta'$ is obtained by sending elements of $\alpha'$ to $\alpha$, computing the action of $[A_f^*]_{\alpha}^\beta$ on this transformation, and then sending the result, expressed in terms of the $\beta$ basis, to $\beta'$. Now the approximate singular vectors may be obtained for $A_f$ by taking the SVD of $[A_f^*]_{\alpha'}^{\beta'}$.
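A minimal numerical sketch of this change of basis is given below, with synthetic positive definite matrices standing in for the quadrature-computed Gram matrices $G$ and $\tilde G$.

\begin{verbatim}
import numpy as np

def unit_norm_eigvecs(G):
    # Columns v_j / sqrt(v_j^T G v_j): coefficient vectors of norm-one
    # functions in the span of the basis whose Gram matrix is G.
    _, V = np.linalg.eigh(G)
    return V / np.sqrt(np.einsum('ij,ij->j', V, G @ V))

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5)); G = B @ B.T + 5 * np.eye(5)
C = rng.standard_normal((5, 5)); Gt = C @ C.T + 5 * np.eye(5)

V0, Vt0 = unit_norm_eigvecs(G), unit_norm_eigvecs(Gt)
Rep = np.linalg.solve(V0, Vt0)     # [A_f^*] in the orthonormal bases
U, s, Wt = np.linalg.svd(Rep)      # approximate singular data of A_f
print(s)
\end{verbatim}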
The right singular vectors of $[A_f^*]_{\alpha'}^{\beta'}$ will correspond to the approximate left singular functions of $A_f$, and vice versa. Writing the SVD of $[A_f^*]_{\alpha'}^{\beta'}$ as \begin{gather*} [A_f^*]_{\alpha'}^{\beta'} = \hat U \hat \Sigma \hat V^* = \begin{pmatrix} | & & |\\ \hat u_1 & \cdots & \hat u_M\\ | & & | \end{pmatrix} \begin{pmatrix} \hat \sigma_1 & & \\ & \ddots &\\ & & \hat \sigma_M \end{pmatrix} \begin{pmatrix} - & \hat v_1^* & -\\ & \vdots & \\ - & \hat v_M^* & - \end{pmatrix}, \end{gather*} the approximate right singular vector for $A_f$ is $\hat \varphi_j = \frac{1}{\sqrt{\hat u_j^* G_p \hat u_j}}\sum_{\ell} (\hat u_j )_\ell p_\ell$, and the approximate left singular vector for $A_f$ is $\hat \psi_j = \frac{1}{\sqrt{\hat v_j^* G_q \hat v_j}}\sum_{\ell} (\hat v_j )_\ell q_\ell,$ where $G_p$ and $G_q$ are the Gram matrices for the ordered bases $\beta'$ and $\alpha'$ respectively. Translating this to the original bases $\alpha$ and $\beta$, we find the following: \begin{gather*} \hat \varphi_j = \frac{1}{\sqrt{\hat u_j^* G_p \hat u_j}} \hat u_j^T V_0^T \begin{pmatrix} K(\cdot, \gamma_1(T_1)) - K(\cdot,\gamma_1(0))\\ \vdots \\ K(\cdot,\gamma_M(T_M)) - K(\cdot,\gamma_M(0)) \end{pmatrix},\text{ and}\\ \hat \psi_j = \frac{1}{\sqrt{\hat v_j^* G_q \hat v_j}} \hat v_j^T \tilde V_0^T \begin{pmatrix} \Gamma_{\gamma_1}\\ \vdots \\ \Gamma_{\gamma_M} \end{pmatrix}. \end{gather*} Thus, if $x : [0,T] \to \mathbb{R}^n$ satisfies $\dot x = f(x)$, then it may be approximately expressed through the following integral equation: \begin{gather*} x(t) \approx x(0) + \int_0^t \sum_{j=1}^M \hat \xi_j \hat \sigma_j \hat \psi_j(x(\tau)) d\tau, \end{gather*} where \begin{gather*}\hat \xi_j = \begin{pmatrix} \langle (x)_1,\hat \varphi_j \rangle_H\\ \vdots\\ \langle (x)_n, \hat \varphi_j \rangle_H\end{pmatrix}, \text{ and}\\ \begin{pmatrix} \langle (x)_i, \hat \varphi_1 \rangle_H \\ \vdots \\ \langle (x)_i, \hat \varphi_M \rangle_H \end{pmatrix} = \diag\left(\frac{1}{\sqrt{\hat u_1^* G_p \hat u_1}}, \cdots, \frac{1}{\sqrt{\hat u_M^* G_p \hat u_M}}\right)\\ \times \begin{pmatrix} - & \hat u_1^T & -\\ & \vdots & \\ - & \hat u_M^T & -\end{pmatrix} V_0^T \begin{pmatrix} (\gamma_1(T_1))_i - (\gamma_1(0))_i\\ \vdots \\ (\gamma_M(T_M))_i - (\gamma_M(0))_i \end{pmatrix}. \end{gather*} \section{The Eigenfunction based DMD Algorithm}\label{sec:eigenfunctiondmd} In this section it will be assumed that $A_f : H \to \tilde H$ is a compact operator, and that $H \subset \tilde H$. Since $A_f$ is compact, it is bounded, which means that, unlike \cite{gonzalez2021anti} and \cite{rosenfeld2019dynamic}, no additional assumptions are needed concerning the domain of this operator. For a collection of observed trajectories, $\{ \gamma_1,\ldots,\gamma_M\}$, consider the collection of occupation kernels $\alpha = \{\Gamma_{\gamma_m} \}_{m=1}^M$, where these are the occupation kernels for the space $H$, and let $\beta = \{ \tilde \Gamma_{\gamma_1}, \ldots, \tilde \Gamma_{\gamma_M}\}$ be the occupation kernels in $\tilde H$. Let $P_\alpha : H \to H$ be the projection onto the span of $\alpha$, and let $\tilde P_{\alpha}$ and $\tilde P_{\beta}$ be the corresponding projections in $\tilde H$ onto the spans of $\alpha$ and $\beta$ respectively (viewed as subspaces of $\tilde H$).
The numerical method presented in this section will construct a matrix representation for the operator $\tilde P_{\alpha} \tilde P_{\beta} A_f P_{\alpha}$, where the matrix, $[\tilde P_{\alpha}\tilde P_{\beta} A_f P_{\alpha}]_{\alpha}^\alpha$, represents this operator over the span of $\alpha$ in the domain and range respectively. Note that since the matrix representation is defined over $\alpha$, $[\tilde P_{\alpha} \tilde P_{\beta} A_f P_{\alpha}]_{\alpha}^\alpha = [\tilde P_{\alpha}\tilde P_{\beta} A_f]_{\alpha}^\alpha.$ Recall that for a function $g \in \tilde H$, $\tilde P_{\beta} g$ is a linear combination of the functions of $\beta$ as $\sum_{m=1}^M w_m \tilde \Gamma_{\gamma_m}$, where the weights are obtained via \[ \begin{pmatrix} \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} \begin{pmatrix}w_1 \\ \vdots \\ w_M \end{pmatrix} = \begin{pmatrix}\langle g, \tilde \Gamma_{\gamma_1} \rangle_{\tilde H} \\ \vdots \\ \langle g, \tilde \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix}, \] and the matrix on the left is the Gram matrix for the basis $\beta$ in the space $\tilde H$. Hence, for each $\Gamma_{\gamma_j}$, the weights for the projection of $A_f \Gamma_{\gamma_j}$ onto the span of $\beta$ may be obtained as \begin{gather*} \begin{pmatrix} \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} \begin{pmatrix}w_1 \\ \vdots \\ w_M \end{pmatrix}\\ = \begin{pmatrix}\langle A_f \Gamma_{\gamma_j}, \tilde \Gamma_{\gamma_1} \rangle_{\tilde H} \\ \vdots \\ \langle A_f \Gamma_{\gamma_j}, \tilde \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} = \begin{pmatrix}\langle \Gamma_{\gamma_j}, A_f^* \tilde \Gamma_{\gamma_1} \rangle_{H} \\ \vdots \\ \langle \Gamma_{\gamma_j}, A_f^* \tilde \Gamma_{\gamma_M} \rangle_{H} \end{pmatrix}\\ = \begin{pmatrix}\langle \Gamma_{\gamma_j}, K_{\gamma_1(T_1)} - K_{\gamma_1(0)} \rangle_{H} \\ \vdots \\ \langle \Gamma_{\gamma_j}, K_{\gamma_M(T_M)} - K_{\gamma_M(0)} \rangle_{H} \end{pmatrix} = \begin{pmatrix}\Gamma_{\gamma_j}(\gamma_1(T_1)) - \Gamma_{\gamma_j}(\gamma_1(0)) \\ \vdots \\ \Gamma_{\gamma_j}(\gamma_M(T_M)) - \Gamma_{\gamma_j}(\gamma_M(0)) \end{pmatrix}. \end{gather*} Next, a projection onto the span of $\alpha$ within $\tilde H$ must be performed.
For each $\tilde \Gamma_{\gamma_j}$, the weights corresponding to its projection onto $\alpha$ are given via \begin{gather*} \begin{pmatrix} \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} \begin{pmatrix}v_{1,j} \\ \vdots \\ v_{M,j}\end{pmatrix} = \begin{pmatrix}\langle \tilde \Gamma_{\gamma_j}, \Gamma_{\gamma_1} \rangle_{\tilde H} \\ \vdots \\ \langle \tilde \Gamma_{\gamma_j}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} \end{gather*} Hence, the projection of $A_{f} \Gamma_{\gamma_j}$ is given as \begin{gather*} \tilde P_{\alpha} \tilde P_{\beta} A_f \Gamma_{\gamma_j} = \sum_{m=1}^M w_{m} \sum_{\ell=1}^M v_{\ell,m} \Gamma_{\gamma_{\ell}} =\\ \sum_{m=1}^M w_{m} \sum_{\ell=1}^M\left( \begin{pmatrix} \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix}^{-1} \begin{pmatrix}\langle \tilde \Gamma_{\gamma_m}, \Gamma_{\gamma_1} \rangle_{\tilde H} \\ \vdots \\ \langle \tilde \Gamma_{\gamma_m}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} \right)^T \begin{pmatrix}\Gamma_{\gamma_1} \\ \vdots \\ \Gamma_{\gamma_M} \end{pmatrix}\\ = \left( \begin{pmatrix} \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix}^{-1} \begin{pmatrix}\Gamma_{\gamma_j}(\gamma_1(T_1)) - \Gamma_{\gamma_j}(\gamma_1(0)) \\ \vdots \\ \Gamma_{\gamma_j}(\gamma_M(T_M)) - \Gamma_{\gamma_j}(\gamma_M(0)) \end{pmatrix} \right)^T \\ \times \left( \begin{pmatrix} \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix}^{-1} \begin{pmatrix}\langle \tilde \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H}\\ \vdots \\ \langle \tilde \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H}\end{pmatrix} \right)^T \begin{pmatrix}\Gamma_{\gamma_1} \\ \vdots \\ \Gamma_{\gamma_M} \end{pmatrix}, \end{gather*} and the final representation, $[\tilde P_{\alpha} \tilde P_{\beta} A_f ]_{\alpha}^\alpha$ is given as \begin{gather}\label{eq:finiterankrep} [\tilde P_{\alpha} \tilde P_{\beta} A_f ]_{\alpha}^\alpha = \\ \nonumber \begin{pmatrix} \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix}^{-1} \begin{pmatrix} 
\langle \tilde \Gamma_{\gamma_1}, \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M}, \Gamma_{\gamma_1} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots \\ \langle \tilde \Gamma_{\gamma_1}, \Gamma_{\gamma_M} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M}, \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix} \\ \nonumber \times \begin{pmatrix} \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_1},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H}\\ \vdots & \ddots & \vdots\\ \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_1} \rangle_{\tilde H} & \cdots & \langle \tilde \Gamma_{\gamma_M},\tilde \Gamma_{\gamma_M} \rangle_{\tilde H} \end{pmatrix}^{-1} \\ \nonumber \times \begin{pmatrix} \Gamma_{\gamma_1}(\gamma_1(T_1)) - \Gamma_{\gamma_1}(\gamma_1(0)) & \cdots & \Gamma_{\gamma_M}(\gamma_1(T_1)) - \Gamma_{\gamma_M}(\gamma_1(0)) \\ \vdots & \ddots & \vdots \\ \Gamma_{\gamma_1}(\gamma_M(T_M)) - \Gamma_{\gamma_1}(\gamma_M(0)) & \cdots & \Gamma_{\gamma_M}(\gamma_M(T_M)) - \Gamma_{\gamma_M}(\gamma_M(0)) \end{pmatrix}. \end{gather} Note that when $H = \tilde H$ and the occupation kernels are assumed to be in the domain of the Liouville operator, the first two matrices cancel, and the representation reduces to that of \cite{rosenfeld2019dynamic}. Under the assumption of diagonalizability for \eqref{eq:finiterankrep}, which holds for almost all matrices, an eigendecomposition for \eqref{eq:finiterankrep} may be determined as \[ [\tilde P_{\alpha} \tilde P_{\beta} A_f ]_{\alpha}^\alpha = \begin{pmatrix} | & & |\\ V_1 & \cdots & V_M\\ | & & | \end{pmatrix} \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_M \end{pmatrix} \begin{pmatrix} | & & |\\ V_1 & \cdots & V_M\\ | & & | \end{pmatrix}^{-1}, \] where each column, $V_j$, is an eigenvector of $[\tilde P_{\alpha} \tilde P_{\beta} A_f ]_{\alpha}^\alpha$ with eigenvalue $\lambda_j$. The corresponding normalized eigenfunction is given as \[ \hat \varphi_j(x) = \frac{1}{\sqrt{V_j^T G_{\alpha} V_j}} V_j^T \begin{pmatrix} \Gamma_{\gamma_1} \\ \vdots \\ \Gamma_{\gamma_M}\end{pmatrix},\] where the normalization is performed in the Hilbert space $H$ through the Gram matrix for $\alpha$, $G_\alpha$, according to $H$'s inner product. Set $\bar V_j := \frac{1}{\sqrt{V_j^T G_{\alpha} V_j}} V_j$, and let $\bar V := \left( \bar V_1 \cdots \bar V_M \right)$.
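A minimal sketch of assembling \eqref{eq:finiterankrep} and extracting the approximate eigenfunctions is given below, with synthetic matrices standing in for the quadrature-computed Gram matrices (all names illustrative).

\begin{verbatim}
import numpy as np

def finite_rank_rep(Ga_t, Cross, Gb_t, D):
    # Ga_t^{-1} Cross Gb_t^{-1} D, as in the finite rank representation:
    #   Ga_t[i,j]  = <Gamma_i, Gamma_j>_{H~},
    #   Cross[l,m] = <Gamma~_m, Gamma_l>_{H~},
    #   Gb_t[i,j]  = <Gamma~_i, Gamma~_j>_{H~},
    #   D[j,m]     = Gamma_m(gamma_j(T_j)) - Gamma_m(gamma_j(0)).
    return np.linalg.solve(Ga_t, Cross) @ np.linalg.solve(Gb_t, D)

def liouville_eigs(Ga_t, Cross, Gb_t, D, Ga_H):
    lam, V = np.linalg.eig(finite_rank_rep(Ga_t, Cross, Gb_t, D))
    # Normalize eigenvectors in H via the H-Gram matrix of alpha.
    Vbar = V / np.sqrt(np.einsum('ij,ij->j', V.conj(), Ga_H @ V))
    return lam, Vbar

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
G = B @ B.T + 4 * np.eye(4)      # stand-in Gram matrix
D = rng.standard_normal((4, 4))
# With Ga_t = Cross, the first two factors cancel, as noted above.
lam, Vbar = liouville_eigs(G, G, G, D, G)
print(lam)
\end{verbatim}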
The Gram matrix for the normalized eigenbasis may be quickly computed as $\bar V^T G_{\alpha} \bar V$, and the weights for the projection of the full state observable onto this eigenbasis may be written as \begin{gather*} \begin{pmatrix} - & \hat \xi_1^T & -\\ & \vdots & \\ - & \hat \xi_M^T & - \end{pmatrix} = (\bar V^T G_{\alpha} \bar V)^{-1} \begin{pmatrix} \langle (x)_1, \hat \varphi_1 \rangle_H & \cdots & \langle(x)_n, \hat \varphi_1 \rangle_H\\ \vdots & \ddots & \vdots\\ \langle (x)_1, \hat \varphi_M \rangle_H & \cdots & \langle(x)_n, \hat \varphi_M \rangle_H \end{pmatrix} \\ = (\bar V^T G_{\alpha} \bar V)^{-1} \bar V^T \begin{pmatrix} \langle (x)_1, \Gamma_{\gamma_1} \rangle_H & \cdots & \langle(x)_n, \Gamma_{\gamma_1} \rangle_H\\ \vdots & \ddots & \vdots\\ \langle (x)_1, \Gamma_{\gamma_M} \rangle_H & \cdots & \langle(x)_n, \Gamma_{\gamma_M} \rangle_H \end{pmatrix} \\ = (\bar V^T G_{\alpha} \bar V)^{-1} \bar V^T \begin{pmatrix} \int_{0}^{T_1} \gamma_1(t)^T dt\\ \vdots\\ \int_{0}^{T_M} \gamma_M(t)^T dt \end{pmatrix} \end{gather*} and thus, \begin{equation}\label{eq:fullstateprojection} g_{id}(x) \approx \sum_{m=1}^M \hat \xi_{m} \hat \varphi_m(x).\end{equation} The approximation error (with respect to the norm of the RKHS) approaches zero as the number of trajectories increases, provided that the corresponding collection of occupation kernels spans a dense subspace. Convergence in the norm of the RKHS implies uniform convergence on compact subsets of the domain. Consequently, a trajectory $x:[0,T] \to \mathbb{R}^n$ satisfying $\dot x = f(x)$ may be approximately expressed as \begin{equation*} x(t) = g_{id}(x(t)) \approx \sum_{m=1}^M \hat \xi_{m} e^{\lambda_m t} \hat \varphi_m(x(0)), \end{equation*} where the eigenfunctions for the finite rank approximation of $A_f$ play the role of eigenfunctions for the original operator, $A_f$. Note that for a given $\epsilon > 0$ there is a sufficiently large collection of trajectories and occupation kernels such that $\| \tilde P_{\alpha} \tilde P_{\beta} A_{f} P_{\alpha} - A_{f} \|_H^{\tilde H} < \epsilon.$ Hence, if $\hat\varphi$ is a normalized eigenfunction for the finite rank representation with eigenvalue $\lambda$, then \[ \| \lambda \hat\varphi - A_{f} \hat\varphi \|_{\tilde H} = \| \tilde P_{\alpha} \tilde P_{\beta} A_{f} P_{\alpha} \hat\varphi - A_{f} \hat\varphi \|_{\tilde H} \le \epsilon \| \hat\varphi\|_H = \epsilon. \] Consequently, given a compact subset of $\mathbb{R}^n$ and a given tolerance, $\epsilon_0$, a finite rank approximation may be selected such that for each normalized eigenfunction the relation $\left| \frac{d}{dt} \hat\varphi(x(t)) - \lambda \hat\varphi(x(t)) \right| < \epsilon_0$ holds for all $x(t)$ in the compact set. Hence, for sufficiently rich information, $\hat \varphi(x(t)) \approx e^{\lambda t} \hat\varphi(x(0)).$ \section{Computational Remarks for the Eigenfunction Method} In the above computations, some entries for the matrices require a bit more analysis. Namely, this includes the inner products, $\langle \Gamma_{\gamma_i},\Gamma_{\gamma_j} \rangle_{\tilde H}$ and $\langle \Gamma_{\gamma_i},\tilde \Gamma_{\gamma_j} \rangle_{\tilde H}$. All the other quantities have been discussed at length in \cite{rosenfeld2019occupation,rosenfeld2019occupation2,rosenfeld2019dynamic}.
The second quantity simply uses the definition of $\tilde \Gamma_{\gamma_j}$ as a function in $\tilde H$, $\langle \Gamma_{\gamma_i},\tilde \Gamma_{\gamma_j} \rangle_{\tilde H} = \int_0^{T_j} \Gamma_{\gamma_i}(\gamma_j(t))dt = \int_0^{T_j} \int_0^{T_i} K(\gamma_j(t),\gamma_i(\tau)) d\tau dt,$ where $K$ is the kernel function for $H$. Note that this means $\langle \Gamma_{\gamma_i},\tilde \Gamma_{\gamma_j} \rangle_{\tilde H} = \langle \Gamma_{\gamma_i}, \Gamma_{\gamma_j} \rangle_{ H}$. However, the first quantity is more complicated and is context dependent. In particular, $\Gamma_{\gamma_i}$ is not the occupation kernel corresponding to $\tilde H$, so its functional relationship cannot be exploited in the same manner. On the other hand, $\Gamma_{\gamma_i}(x) = \int_0^{T_i} K(x,\gamma_i(t))dt$. To compute the inner product in $\tilde H$, a specific selection of spaces must be considered. In the particular setting where $H = F_{\mu_1}^2(\mathbb{R}^n)$ and $\tilde H = F_{\mu_2}^2(\mathbb{R}^n)$, with $\mu_1 < \mu_2$, it follows that $\Gamma_{\gamma_i}(x) = \int_0^{T_i} e^{\mu_1 x^T \gamma_i(t)}dt.$ Moreover, $K(x,\gamma_i(t)) = e^{\mu_1 x^T \gamma_i(t)} = e^{\mu_2 x^T\left(\frac{\mu_1}{\mu_2} \gamma_i(t)\right)} = \tilde K(x,(\mu_1/\mu_2)\gamma_i(t))$. Hence, $\Gamma_{\gamma_i}(x) = \tilde \Gamma_{(\mu_1/\mu_2)\gamma_i}(x)$, and \begin{gather*} \langle \Gamma_{\gamma_i},\Gamma_{\gamma_j} \rangle_{\tilde H} = \langle\tilde\Gamma_{(\mu_1/\mu_2)\gamma_i},\tilde\Gamma_{(\mu_1/\mu_2)\gamma_j} \rangle_{\tilde H}\\= \int_0^{T_i}\int_0^{T_j} \tilde K((\mu_1/\mu_2)\gamma_i(t),(\mu_1/\mu_2)\gamma_j(\tau)) d\tau dt . \end{gather*} \section{Numerical Results}\label{sec:numericalresults} This section presents the results obtained through the implementation of Section \ref{sec:eigenfunctiondmd}, with the domain viewed as embedded in the range of the operator. The experiments were performed on the benchmark cylinder flow data set found in \cite{kutz2016dynamic} by setting $\mu_1 = 1/1000$ and $\mu_2 = 1/999$ for the exponential dot product kernel. It should be noted that the timesteps for that data set are $h = 0.02$. The total dataset comprises $151$ snapshots, and the trajectories for the system were selected from strings of adjacent snapshots of length $5$, yielding $147$ trajectories. Computations were performed using Simpson's rule. Presented in Figure \ref{fig:LiouvilleModes} is a selection of approximate Liouville modes obtained for this operator through the finite rank approximation determined by Section \ref{sec:eigenfunctiondmd}. Examples of the reconstructed and original data are shown in Figure \ref{fig:reconstruction} and Figure \ref{fig:original}.
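For concreteness, the following minimal sketch assembles the finite rank representation \eqref{eq:finiterankrep}, its eigendecomposition, the Liouville modes, and the reconstruction for the exponential dot product kernels $H = F_{\mu_1}^2(\mathbb{R}^n)$ and $\tilde H = F_{\mu_2}^2(\mathbb{R}^n)$, using Simpson's rule for all integrals. This is our reconstruction from the formulas above, not the code used for the experiments, and all identifiers are ours. The inner products are computed as in the preceding section; in particular, $\langle \Gamma_{\gamma_i},\Gamma_{\gamma_j}\rangle_{\tilde H}$ reduces to an occupation kernel Gram matrix with the rescaled parameter $\mu_1^2/\mu_2$, and $\langle \tilde\Gamma_{\gamma_j},\Gamma_{\gamma_i}\rangle_{\tilde H} = \langle \Gamma_{\gamma_i},\Gamma_{\gamma_j}\rangle_H$.
\begin{verbatim}
import numpy as np
from scipy.integrate import simpson

def occ_gram(gammas, ts, mu):
    # G[i, j] = int int exp(mu * gamma_i(t)^T gamma_j(s)) ds dt,
    # the Gram matrix of occupation kernels in F_mu^2(R^n).
    M = len(gammas)
    G = np.empty((M, M))
    for i in range(M):
        for j in range(M):
            K = np.exp(mu * gammas[i] @ gammas[j].T)
            G[i, j] = simpson(simpson(K, x=ts[j], axis=1), x=ts[i])
    return G

def eigenfunction_dmd(gammas, ts, mu1, mu2):
    # gammas[i]: samples of trajectory i, shape (N_i, n); ts[i]: sample times.
    M = len(gammas)
    G_H = occ_gram(gammas, ts, mu1)             # <Gamma_i, Gamma_j>_H
    G_range = occ_gram(gammas, ts, mu2)         # <tGamma_i, tGamma_j>_{tH}
    G_mix = occ_gram(gammas, ts, mu1**2 / mu2)  # <Gamma_i, Gamma_j>_{tH}
    # D[l, m] = Gamma_m(gamma_l(T_l)) - Gamma_m(gamma_l(0))
    D = np.empty((M, M))
    for l in range(M):
        for m in range(M):
            D[l, m] = (simpson(np.exp(mu1 * gammas[l][-1] @ gammas[m].T), x=ts[m])
                       - simpson(np.exp(mu1 * gammas[l][0] @ gammas[m].T), x=ts[m]))
    # Finite rank representation of Eq. (eq:finiterankrep).
    A = np.linalg.solve(G_mix, G_H) @ np.linalg.solve(G_range, D)
    lam, V = np.linalg.eig(A)
    # Normalize each eigenvector in H (transpose convention, as in the text).
    Vbar = V / np.sqrt(np.einsum('ij,ik,kj->j', V, G_H, V))
    # Liouville modes: weights of the full state observable g_id.
    B = np.stack([simpson(g, x=t, axis=0) for g, t in zip(gammas, ts)])
    Xi = np.linalg.solve(Vbar.T @ G_H @ Vbar, Vbar.T @ B)
    return lam, Vbar, Xi

def reconstruct(x0, t, lam, Vbar, Xi, gammas, ts, mu1):
    # x(t) ~ sum_m xi_m exp(lam_m t) phi_m(x(0)).
    g0 = np.array([simpson(np.exp(mu1 * x0 @ gm.T), x=tm)
                   for gm, tm in zip(gammas, ts)])
    phi0 = Vbar.T @ g0
    return np.real(((np.exp(lam * t) * phi0)[:, None] * Xi).sum(axis=0))
\end{verbatim}
Note that with $\mu_1 = 1/1000$ and $\mu_2 = 1/999$, the matrices \texttt{G\_mix} and \texttt{G\_H} are nearly identical, reflecting the near-cancellation of the first two matrices discussed in the sequel.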
\begin{figure} \centering \includegraphics[width=0.48\textwidth]{mode3.eps} \includegraphics[width=0.48\textwidth]{mode23.eps} \includegraphics[width=0.48\textwidth]{mode53.eps} \includegraphics[width=0.48\textwidth]{mode67.eps} \includegraphics[width=0.48\textwidth]{mode70.eps} \includegraphics[width=0.48\textwidth]{mode82.eps} \includegraphics[width=0.48\textwidth]{mode95.eps} \includegraphics[width=0.48\textwidth]{mode109.eps} \includegraphics[width=0.48\textwidth]{mode115.eps} \includegraphics[width=0.48\textwidth]{mode120.eps} \includegraphics[width=0.48\textwidth]{mode124.eps} \includegraphics[width=0.48\textwidth]{mode136.eps} \caption{A selection of the real parts of approximate Liouville modes obtained using the exponential dot product kernel, where the domain corresponds to $\mu_1 = 1/1000$ and the range corresponds to $\mu_2 = 1/999$.} \label{fig:LiouvilleModes} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{reconstructionstep1.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep81.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep21.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep101.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep41.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep121.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep61.eps} \includegraphics[width=0.48\textwidth]{reconstructionstep141.eps} \caption{A selection of reconstructed snapshots for the cylinder flow example. The first column from the top presents snapshots $1$, $21$, $41$, and $61$, and the second column presents $81$, $101$, $121$, and $141$.} \label{fig:reconstruction} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{original1.eps} \includegraphics[width=0.48\textwidth]{original81.eps} \includegraphics[width=0.48\textwidth]{original21.eps} \includegraphics[width=0.48\textwidth]{original101.eps} \includegraphics[width=0.48\textwidth]{original41.eps} \includegraphics[width=0.48\textwidth]{original121.eps} \includegraphics[width=0.48\textwidth]{original61.eps} \includegraphics[width=0.48\textwidth]{original141.eps} \caption{The original snapshots from the cylinder flow data set in \cite{kutz2016dynamic}. The first column from the top presents snapshots $1$, $21$, $41$, and $61$, and the second column presents $81$, $101$, $121$, and $141$.} \label{fig:original} \end{figure} \section{Discussion} The methods presented in this manuscript give two algorithms for performing a dynamic mode decomposition. Together with the compactness of the Liouville operators, the singular DMD approach guarantees the existence of dynamic modes and convergence through the singular value decomposition of compact operators. Singular DMD is a general purpose approach to performing a dynamic mode decomposition when the domain and range of the operators disagree. The major drawback of this approach is that, even though it can guarantee the existence of dynamic modes (which cannot be done for eigenfunction methods), the reconstruction involves the solution of an initial value problem, which is technically more involved than the eigenfunction approach. The second method adds an additional assumption to the problem: the domain is assumed to be embedded in the range of the operator. These embeddings frequently occur in the study of RKHSs, where the adjustment of a parameter loosens the requirements on the functions within that space.
It was demonstrated that this embedding may be established for the exponential dot product kernel, and it also holds for the native spaces of Gaussian RBFs with differing parameters. Convergence of these routines follows the proof found in \cite{rosenfeld2019dynamic}, which gives a general purpose approach for showing convergence of operator level interpolants to the compact operators they are approximating. In particular, given an infinite collection of trajectories for a dynamical system, if the span of the occupation kernels forms a dense subset of their respective Hilbert spaces, convergence of the overall algorithm is achieved. The density of the occupation kernels corresponding to trajectories is easily established for Lipschitz continuous dynamics. This follows since, given any initial point $x_0$ in $\mathbb{R}^n$, there is a $T_0$ such that the trajectory starting at $x_0$, $\gamma_{x_0}$, exists over the interval $[0,T_0]$. Consider the family of occupation kernels indexed by $\delta \in (0,T_0]$, $\Gamma_{\gamma_{x_0},\delta}(x) := \int_0^\delta K(x,\gamma_{x_0}(t))dt$. Then $\frac{1}{\delta} \Gamma_{\gamma_{x_0},\delta} \to K(\cdot,x_0)$ as $\delta \to 0$ in the Hilbert space norm. Hence, as $x_0$ was arbitrary, every kernel may be approximated by an occupation kernel corresponding to a trajectory, and since kernels are dense in $H$, so are these occupation kernels. Finally, if $H$ and $\tilde H$ are spaces of real analytic functions, the dynamics must also be real analytic by the same proof found in \cite{rosenfeld2019occupation}. Spaces of real analytic functions include the native spaces of the Gaussian RBF and the exponential dot product kernel. One interesting consequence of the structure of the finite rank approximation given in Section \ref{sec:eigenfunctiondmd} is that, as $\mu_1 \to \mu_2$, the first two matrices cancel. The matrix computations then approach the computations in \cite{rosenfeld2019dynamic}. Hence, for $\mu_1$ and $\mu_2$ close enough, the computations are numerically indistinguishable from those of \cite{rosenfeld2019dynamic} over a fixed compact set containing the trajectories. Finally, it should be noted that this methodology is not restricted to spaces of analytic functions; rather, it can work for a large collection of pairs of spaces. As a rule, the range space should place weaker restrictions on its functions than the domain space. With this in mind, for many of the cases where compact Liouville operators may be established, the domain will embed into the range. The complications arise in computing the first matrix in \eqref{eq:finiterankrep}, where the inner products of the occupation kernels for the domain are computed in the range space. Hence, the explicit description for spaces of real analytic functions helps resolve that computation. \section{Conclusion} This manuscript presented a theoretical and algorithmic framework that achieves many long-standing goals of dynamic mode decompositions. To wit, by selecting differing domains and ranges for the Liouville operators (sometimes called Koopman generators), the resulting operators are compact. This comes at the sacrifice of eigenfunctions when the domain is not embedded in the range of the operator, but achieves well-defined dynamic modes and convergence. Reconstruction can then be determined using typical numerical methods for initial value problems.
However, in the case of an embedding between the spaces, an algorithm may be established to determine approximate eigenfunctions for the operators, resulting in a more typical DMD routine that also converges. \bibliographystyle{siamplain}
\section{experimental setup} The entire control and readout layout of the experiment is shown in Fig. \ref{Wiring}. Electronic devices for qutrit readout, Josephson parametric amplifier (JPA) control, and qutrit control are displayed from top to bottom. The read-in and XY control signals are mixtures of low-frequency signals [from two independent digital-to-analog converter (DAC) channels I and Q] and high-frequency signals (from microwave sources), which achieve nanosecond-scale fast tuning. On the other hand, the qutrit Z control signal is sent directly from the DAC without mixing; the qutrit frequency can also be slowly tuned by the direct current (DC) bias line. Additionally, the output signal of the readout feedline is amplified by the JPA, a high electron mobility transistor (HEMT), and a room-temperature amplifier, and then demodulated by the analog-to-digital converter (ADC). Four cryogenic unidirectional circulators are inserted between the JPA and the 4~K HEMT to block reflections and noise from outside. As for the JPA, it is pumped by an independent microwave signal source and a DC bias, which are combined by a bias tee. At different temperature stages of the dilution refrigerator, each control line is balanced with attenuators and filters (shown schematically in Fig. \ref{Wiring}) to prevent unwanted noise from affecting the device. \begin{figure*} \includegraphics[width=15cm]{Wiring.eps} \caption{(Color online) The wiring diagram of the experimental setup. Green top channel: read-in and readout of the Xmon qutrit; blue bottom channel: XY and Z control of the Xmon qutrit. A detailed description is given in the text.} \label{Wiring} \end{figure*} \section{qutrit readout and calibration} Here we show the results of the qutrit readout in Fig. \ref{IQraw}, where blue, orange, and yellow points represent the results of applying $I$, $X_{01}$, and $X_{01}X_{12}$ to the qutrit, i.e., the results of preparing $|0\rangle$, $|1\rangle$, and $|2\rangle$, respectively. The readout pulses are 1 $\mu$s long, and each measurement is repeated 3000 times. It can be seen that these three states are clearly separated, though with several misassignments caused by readout errors. As in Refs. \citep{cao-PRL,ning-PRL}, the calibration matrix can be defined as \begin{eqnarray}\label{calibration1} F=\left( \begin{array}{ccc} F_{00} & F_{01} & F_{02} \\ F_{10} & F_{11} & F_{12} \\ F_{20} & F_{21} & F_{22} \\ \end{array} \right), \end{eqnarray} where $F_{ii'}$ ($i,i'=0,1,2$) is the probability of measuring the qutrit in $|i\rangle$ when it is prepared in $|i'\rangle$. The measured probabilities are then given by $P_i^{\rm m}=\sum_{i'} F_{ii'}P_{i'}^{\rm c}$, with $P_{i'}^{\rm c}$ being the calibrated probability of $|i'\rangle$, viz., \begin{eqnarray}\label{calibration2} \left( \begin{array}{c} P_{0}^{\rm c} \\ P_{1}^{\rm c} \\ P_{2}^{\rm c} \\ \end{array} \right)=F^{-1}\left( \begin{array}{c} P_{0}^{\rm m} \\ P_{1}^{\rm m} \\ P_{2}^{\rm m} \\ \end{array}\right). \end{eqnarray} For instance, $F_{01}=N_{\rm OL}/3000$, in which $N_{ \{\rm B,O,Y\} \rm \{L,R,T\}}$ is the number of \{blue, orange, yellow\} points in the \{left, right, top\} zones in Fig. \ref{IQraw}. In addition, we repeat the measurement 10 times and take the average values, yielding \begin{eqnarray}\label{calibration3} F=\left( \begin{array}{ccc} 0.974 & 0.102 & 0.041 \\ 0.017 & 0.885 & 0.141 \\ 0.009 & 0.013 & 0.818 \\ \end{array} \right).
\end{eqnarray} Note that all the experimental data have been calibrated by the matrix in Eq. (\ref{calibration3}). \begin{figure} \includegraphics[width=8cm]{IQraw.eps} \caption{(Color online) Results of the qutrit readout in the I-Q plane. Blue, orange, and yellow points are the results of $|0\rangle$, $|1\rangle$, and $|2\rangle$, respectively. The geometric centers of the three clusters (3000 points per cluster) are marked by black crosses.} \label{IQraw} \end{figure} \section{variation of Eqs. (3, 4) in the main text for the coherence control} Equations (3, 4) in the main text are used to control the populations, with designable $f_k$ ($k=1,2$) and iterative $h_{i+1}$ ($i=0,1,2$). If we alternatively aim to control two parts of the coherence (e.g., $h_2$ and $h_3$) of the system, we need to alter Eqs. (3, 4) in the main text to \begin{small} \begin{eqnarray}\label{index} &&\Omega_{01}=\frac{h_3 [(\gamma+\Gamma)+2\dot{h}_3]+h_2[ (\gamma_2+\Gamma_2)+2 \dot{h}_2]}{2 h_1 h_2-2 h_3 (2 f_1+f_2-1)}, \cr\cr &&\Omega_{12}=\frac{ (2 f_1+f_2-1) [h_2(\gamma_2+\Gamma_2)+2 \dot{h}_2]+h_1 [h_3 (\gamma+\Gamma)+2\dot{h}_3]}{2 h_1 h_2-2 h_3 (2 f_1+f_2-1)}, \cr\cr &&\dot{h}_1=\Omega_{12} (f_1-f_2)-\frac{1}{2} h_1 (\gamma+\Gamma+\gamma_2+\Gamma_2)-h_2\Omega_{01}, \cr\cr &&\dot{f}_1=-\Gamma f_1+\Gamma_2 f_2+2h_3\Omega_{01}-2h_1\Omega_{12}, \cr\cr &&\dot{f}_2=-\Gamma_2f_2+2h_1\Omega_{12}, \end{eqnarray} \end{small} in which $h_{k+1}$ is designable and $f_k$ is iterative. Therefore, one can design suitable $h_2$ and $h_3$ to accomplish the desired coherence control. \section{Feasible and infeasible areas of the coherence control} Similarly, for the control of $h_2$ and $h_3$ there are a feasible area (FA) and an infeasible area (IFA), which mainly depend on the intermediate function $f$ and the decoherence. For $f=[1+e^{-a(t-t_f/2)}]^{-1}$, we plot the FA and IFA of the coherence control in Fig. \ref{IFAco}. The total area is bounded by the conditions $h_2,h_3\le0.5$ and $h_2^2+h_3^2\le [1-f_1^2-f_2^2-(1-f_1-f_2)^2-2h_1^2]/2\le1/3$. \begin{figure} \includegraphics[width=8cm]{IFAco.eps} \caption{(Color online) Feasible area (surrounded by solid, dashed, and dash-dotted curves) and infeasible area (the outside zone surrounded by dotted curves) of the coherence control in the experiment when the evolution time is 3, 5, and 10 $\mu$s.} \label{IFAco} \end{figure} \section{Tomography of the coherence control} The populations of three-level systems can be directly measured by the readout cavity. (In the experiments, only the diagonal elements of the density matrix can be directly measured.)
The coherence, in contrast, can be measured indirectly by a simplified tomography consisting of the following four steps: (i) $U_1=I$, \begin{eqnarray}\label{U1} \rho_1=U_1\rho U_1^\dag=\left( \begin{array}{ccc} f_2 & -ih_1 & h_2 \\ ih_1 & f_1 & -ih_3 \\ h_2 & ih_3 & 1-f_1-f_2 \\ \end{array} \right), \end{eqnarray} (ii) $U_2=(X/2)_{01}$, \begin{small} \begin{eqnarray}\label{U2} \rho_2&=&U_2\rho U_2^\dag\cr\cr &=&\left( \begin{array}{ccc} f_2 & -\frac{i (h_1-h_2)}{\sqrt{2}} & \frac{h_1+h_2}{\sqrt{2}} \\ \frac{i (h_1-h_2)}{\sqrt{2}} & -\frac{f_2}{2}+h_3+\frac{1}{2} & \frac{1}{2} i (2 f_1+f_2-1) \\ \frac{h_1+h_2}{\sqrt{2}} & -\frac{1}{2} i (2 f_1+f_2-1) & \frac{1}{2} (-f_2-2 h_3+1) \\ \end{array} \right), \cr & \end{eqnarray} \end{small} (iii) $U_3=(X/2)_{12}$, \begin{small} \begin{eqnarray}\label{U3} \rho_3&=&U_3\rho U_3^\dag\cr\cr&=&\left( \begin{array}{ccc} \frac{f_1+f_2}{2}+h_1& \frac{-i}{2}(f_1-f_2)& \frac{1}{\sqrt{2}}(h_2-h_3) \\ \frac{i}{2}(f_1-f_2) & \frac{1}{2}(f_1+f_2-2h_1) & \frac{-i}{\sqrt{2}}(h_2+h_3) \\ \frac{1}{\sqrt{2}}(h_2-h_3) &\frac{i}{\sqrt{2}}(h_2+h_3) & 1-f_1-f_2 \\ \end{array} \right),\cr & \end{eqnarray} \end{small} (iv) $U_4=U_3U_2$, \begin{small} \begin{widetext} \begin{eqnarray}\label{U4} \rho_4&=&U_4\rho U_4^\dag\cr\cr&=&\ \left( \begin{array}{ccc} \frac{1}{4} \left(f_2+2 \sqrt{2} h_1-2 \sqrt{2} h_2+2 h_3+1\right) & \frac{1}{4} i (3 f_2-2 h_3-1) & \frac{2 f_1+f_2+\sqrt{2} h_1+\sqrt{2} h_2-1}{2 \sqrt{2}} \\ -\frac{1}{4} i (3 f_2-2 h_3-1) & \frac{1}{4} \left(f_2-2 \sqrt{2} h_1+2 \sqrt{2} h_2+2 h_3+1\right) & -\frac{i \left(-2 f_1-f_2+\sqrt{2} h_1+\sqrt{2} h_2+1\right)}{2 \sqrt{2}} \\ \frac{2 f_1+f_2+\sqrt{2} h_1+\sqrt{2} h_2-1}{2 \sqrt{2}} & \frac{i \left(-2 f_1-f_2+\sqrt{2} h_1+\sqrt{2} h_2+1\right)}{2 \sqrt{2}} & \frac{1}{2} (-f_2-2 h_3+1) \\ \end{array} \right), \end{eqnarray} \end{widetext} \end{small} where $I$ represents the identity gate and $(X/2)_{01(12)}$ denotes a $\pi/2$ rotation around the X axis of the Bloch sphere in the basis \{$|0\rangle$,$|1\rangle$\} (\{$|1\rangle$,$|2\rangle$\}). For all the experimental data points of the coherence (i.e., $h_{i+1}$), we measure the three diagonal elements of $\rho_{p}$ ($p=1,2,3,4$), denoted $\rho_{p}^{(ii)}$ ($i=0,1,2$), and further deduce $h_{i+1}$, i.e., \begin{eqnarray}\label{tomoh2h3} h_1&=&\rho_{3}^{(22)}-\frac{\rho_{1}^{(22)}+\rho_{1}^{(11)}}{2}, \ \ h_3=\rho_{2}^{(11)}-\frac{1-\rho_{1}^{(22)}}{2}, \cr\cr h_2&=&-\frac{\rho_{1}^{(22)}-2\sqrt{2}h_1+2h_3-4\rho_{4}^{(11)}+1}{2\sqrt{2}}. \end{eqnarray} This completes the simplified tomography for reading out the coherence $h_{i+1}$ of three-level systems. \section{Supplementary experimental data for the dynamical control} Here we show 30 and 36 groups of experimental data for the population and coherence controls in Figs. \ref{tile} and \ref{tile36}, respectively, whose average error rates are 1.02\% and 1.39\%, respectively, demonstrating the feasibility of the dynamical control in three-level open systems. \begin{figure*} \includegraphics[width=17.5cm]{tile.eps} \caption{(Color online) Experimental results (error bars) of the population controls. The ideal results of $P_0(t)$, $P_1(t)$, and $P_2(t)$ are represented by solid, dashed, and dash-dotted curves, respectively. The error rates are shown in the corresponding subfigures.} \label{tile} \end{figure*} \begin{figure*} \includegraphics[width=17.5cm]{tile36.eps} \caption{(Color online) Experimental results (error bars) of the coherence controls.
The ideal results of $h_1(t)$, $h_2(t)$, and $h_3(t)$ are represented by solid, dashed, and dash-dotted curves, respectively. The error rates are shown in the corresponding subfigures.} \label{tile36} \end{figure*}
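As a closing illustration, the readout calibration of Eq. (\ref{calibration2}) and the coherence deduction of Eq. (\ref{tomoh2h3}) amount to the following short post-processing sketch. This is our illustration, not the analysis code used in the experiment: the example populations are hypothetical, and $\rho_p^{(ii)}$ is read as the calibrated population of $|i\rangle$ after the rotation $U_p$, consistent with the derivation from Eqs. (\ref{U1})--(\ref{U4}).
\begin{verbatim}
import numpy as np

# Calibration matrix F from Eq. (calibration3).
F = np.array([[0.974, 0.102, 0.041],
              [0.017, 0.885, 0.141],
              [0.009, 0.013, 0.818]])

def calibrate(P_meas):
    # Eq. (calibration2): P^c = F^{-1} P^m.
    return np.linalg.solve(F, np.asarray(P_meas, dtype=float))

def coherence_from_tomography(rho):
    # Eq. (tomoh2h3); rho[p][i] is the calibrated diagonal element
    # rho_p^{(ii)}, i.e., the population of |i> after the rotation U_p.
    s2 = np.sqrt(2.0)
    h1 = rho[3][2] - (rho[1][2] + rho[1][1]) / 2
    h3 = rho[2][1] - (1 - rho[1][2]) / 2
    h2 = -(rho[1][2] - 2 * s2 * h1 + 2 * h3 - 4 * rho[4][1] + 1) / (2 * s2)
    return h1, h2, h3

# Hypothetical measured populations [P0^m, P1^m, P2^m] for U_1, ..., U_4:
P_meas = {1: [0.10, 0.30, 0.60], 2: [0.20, 0.45, 0.35],
          3: [0.15, 0.40, 0.45], 4: [0.25, 0.35, 0.40]}
rho = {p: calibrate(Pm) for p, Pm in P_meas.items()}
print(coherence_from_tomography(rho))
\end{verbatim}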
\section{Introduction} Church's type theory~\cite{Church40} is a basic formulation of higher-order logic. Henkin~\cite{Henkin50} found a natural class of models for which Church's Hilbert-style proof system turned out to be complete. Equality, originally expressed with higher-order quantification, was later identified as the primary primitive of the theory~\cite{Henkin63,Andrews72a,AndrewsBook}. In this paper we consider simple type theory with primitive equality but without descriptions or choice. We call this system STT for simple type theory. The semantics of STT is given by Henkin models with equality. Modern proof theory started with Gentzen's~\cite{Gentzen1935} invention of a cut-free sequent calculus for first-order logic. While Gentzen proved a cut-elimination theorem for his calculus, Smullyan~\cite{SmullyanBook} found an elegant technique (abstract consistency classes) for proving the completeness of cut-free first-order calculi. Smullyan~\cite{SmullyanBook} found it advantageous to work with a refutation-oriented variant of Gentzen's sequent calculi~\cite{Gentzen1935} known as tableau calculi~\cite{Beth1955,Hintikka1955,SmullyanBook}. The development of complete cut-free proof systems for simple type theory turned out to be hard. In 1953, Takeuti~\cite{Takeuti53} introduced a sequent calculus for a version of simple type theory without primitive equality and conjectured that cut elimination holds for this calculus. Gentzen's~\cite{Gentzen1935} inductive proof of cut-elimination for first-order sequent calculi does not generalize to the higher-order case since instances of formulas may be more complex than the formula itself. Moreover, Henkin's~\cite{Henkin50} completeness proof cannot be adapted for cut-free systems. Takeuti's conjecture was answered positively by Tait~\cite{Tait66} for second-order logic, by Takahashi~\cite{Takahashi67} and Prawitz~\cite{Prawitz68} for higher-order logic without extensionality, and by Takahashi~\cite{Takahashi68} for higher-order logic with extensionality. Building on the possible-values technique of Takahashi~\cite{Takahashi67} and Prawitz~\cite{Prawitz68}, Takeuti \cite{Takeuti75} finally proves Henkin completeness of a cut-free sequent calculus with extensionality. The first cut-elimination result for a calculus similar to Church's type theory was obtained by Andrews~\cite{Andrews71} in 1971. Andrews considers elementary type theory (Church's type theory without equality, extensionality, infinity, and choice) and proves that a cut-free sequent calculus is complete relative to a Hilbert-style proof system. Andrews' proof employs both the possible-values technique~\cite{Takahashi67,Prawitz68} and the abstract consistency technique~\cite{SmullyanBook}. In 2004 Benzm\"uller, Brown and Kohlhase~\cite{BBKweb04} gave a completeness proof for an extensional cut-free sequent calculus. The constructions in~\cite{BBKweb04} also employ abstract consistency and possible values. None of the cut-free calculi discussed above has equality as a primitive. Following Leibniz, one can define equality of $a$ and $b$ to hold whenever $a$ and $b$ satisfy the same properties. While this yields equality in standard models (full function spaces), there are Henkin models where this is not the case as was shown by Andrews~\cite{Andrews72a}. A particularly disturbing fact about the model Andrews constructs is that while it is extensional (indeed, it is a Henkin model), it does not satisfy a formula corresponding to extensionality (formulated using Leibniz equality). 
In~\cite{Andrews72a} Andrews gives a definition of a {\em general model} which is essentially a Henkin model with equality. This notion of a general model was generalized to include non-extensional models in~\cite{BBK04} and a condition called property $\mathfrak{q}$ was explicitly included to ensure Leibniz equality is the same as semantic equality. The constructions of Prawitz, Takahashi, Andrews and Takeuti described above do not produce models guaranteed to satisfy property $\mathfrak{q}$. A similar generalization of Henkin models to non-extensional models is given by Muskens~\cite{Muskens07} but without a condition like property $\mathfrak{q}$. Muskens uses the Prawitz-Takahashi method to prove completeness of a cut-free sequent calculus for a formulation of elementary type theory via a model existence theorem, again producing a model in which Leibniz equality may not be the same as semantic equality. The models constructed in~\cite{BBK04} do satisfy property $\mathfrak{q}$, as do the models constructed in~\cite{BBKweb04}. In addition to the model-theoretic complication, defined equality also destroys the cut-freeness of a proof system. As shown in~\cite{BBK2009} any use of Leibniz equality to say two terms are equal provides for the simulation of cut.\footnote{From a Leibniz formula of the form $\forall p.p s\to p t$ one can easily infer $u\to u$ for any formula $u$, and then use $u$ as a formula introduced by cut.} Hence calculi that define equality as Leibniz equality cannot claim to provide cut-free equational reasoning. In the context of resolution, Benzm\"uller gives serious consideration to primitive equality and its relationship to Leibniz equality in his 1999 doctoral thesis~\cite{Benzmuller99a} (see also~\cite{Benzmuller99b}). The completeness proofs there are relative to an assumption that corresponds to cut. The first completeness proof for a cut-free proof system for extensional simple type theory with primitive equality relative to Henkin models was given by Brown in his 2004 doctoral thesis~\cite{Brown2004a} (later published as a book~\cite{BrownARHO}). Brown proves the Henkin completeness of a novel one-sided sequent calculus with primitive equality. His model construction starts with Andrews'~\cite{Andrews71} non-extensional possible-values relations and then obtains a structure isomorphic to a Henkin model by taking a quotient with respect to a partial equivalence relation. Finally, abstract consistency classes~\cite{SmullyanBook,Andrews71} are used to obtain the completeness result. The equality-based decomposition rules of Brown's sequent calculus have commonalities with the unification rules of the systems of Kohlhase~\cite{KohlhaseTableaux1995} and Benzm\"uller~\cite{Benzmuller99b}. Note, however, that the completeness proofs of Kohlhase and Benzm\"uller assume the presence of cut. In this paper we improve and simplify Brown's result~\cite{BrownARHO}. For the proof system we switch to a cut-free tableau calculus $\TS$ that employs an abstract normalization operator. With the normalization operator we hide the details of lambda conversion from the tableau calculus and most of the completeness proof. For the completeness proof we use the new notion of a value system to directly construct surjective Henkin models. Value systems are logical relations~\cite{Statman85a} providing a relational semantics for simply-typed lambda calculus. The inspiration for value systems came from the possible-values relations used in~\cite{BrownARHO,BrownSmolkaBasic,BrownSmolkaEFO}. 
In contrast to Henkin models, which obtain values for terms by induction on terms, value systems obtain values for terms by induction on types. Induction on types, which is crucial for our proofs, has the advantage of hiding the presence of the lambda binder. As a result, only a single lemma of our completeness proof deals explicitly with lambda abstractions and substitutions. Once we have established the results for STT, we turn to its first-order fragment EFO (for extended first-order), which restricts equality and quantification to base types but retains lambda abstraction and higher-order variables. EFO contains the usual first-order formulas but also contains formulas that are not first-order in the traditional sense. For instance, a formula $p(\lam{x}{\neg fx})$ is EFO even though the predicate $p$ is applied to a $\lambda$-abstraction and the negation appears embedded in a nontrivial way. We sharpen the results for STT by proving that they hold for EFO with respect to standard models and for a constrained rule for the universal quantifier (first published in~\cite{BrownSmolkaEFO}). Finally, we consider three decidable fragments of EFO: the lambda-free fragment, the pure fragment (disequations between simply typed $\lambda$-terms not involving logic), and the Bernays-Sch\"onfinkel-Ramsey fragment. For each of these fragments, decidability follows from termination of the tableau calculus for EFO (first published in~\cite{BrownSmolkaBasic} and~\cite{BrownSmolkaEFO}). \section{Basic Definitions} We assume a countable set of \emph{base types} ($\beta$). \emph{Types} ($\sigma$, $\tau$, $\mu$) are defined inductively: (1)~every base type is a type; (2)~if $\sigma$ and $\tau$ are types, then $\sigma\tau$ is a type. We assume a countable set of \emph{names} ($x$, $y$), where every name comes with a unique type, and where for every type there are infinitely many names of this type.\footnote{Later we will partition names into variables and logical constants.} \emph{Terms} ($s$, $t$, $u$, $v$) are defined inductively: (1)~every name is a term; (2)~if $s$ is a term of type $\tau\mu$ and $t$ is a term of type $\tau$, then $st$ is a term of type $\mu$; (3)~if $x$ is a name of type $\sigma$ and $t$ is a term of type $\tau$, then $\lam{x}t$ is a term of type $\sigma\tau$. We write \emph{$s:\sigma$} to say that $s$ is a term of type $\sigma$. Moreover, we write \emph{$\Wff_\sigma$} for the set of all terms of type $\sigma$. We assume that the set of types and the set of terms are disjoint. A \emph{frame} is a function $\mcd$ that maps every type to a nonempty set such that $\mcd(\sigma\tau)$ is a set of total functions from $\mcd\sigma$ to $\mcd\tau$ for all types $\sigma$, $\tau$ (i.e., $\mcd(\sigma\tau)\incl(\mcd\sigma\to\mcd\tau)$). An \emph{assignment} into a frame $\mcd$ is a function $\mci$ that extends $\mcd$ (i.e., $\mcd\incl\mci$) and maps every name $x:\sigma$ to an element of $\mcd\sigma$ (i.e., $\mci x\in\mcd\sigma$). If~$\mci$ is an assignment into a frame $\mcd$, $x:\sigma$ is a name, and $a\in\mcd\sigma$, then~\emph{$\subst\mci{x}a$} denotes the assignment into $\mcd$ that agrees everywhere with~$\mci$ but possibly on~$x$ where it yields $a$. For every frame $\mcd$ we define a function \emph{$\hat{~}$} that for every assignment~$\mci$ into $\mcd$ yields a function $\hat\mci$ that for some terms $s:\sigma$ returns an element of $\mcd\sigma$. The definition is by induction on terms. 
\begin{align*} \hat\mci x&\eqdef\mci x \\ \hat\mci(st)&\eqdef fa &&\text{if \ $\hat\mci s=f$ \ and \ $\hat\mci t=a$} \\ \hat\mci(\lam{x}s)&\eqdef f &&\text{if \ $\lam{x}s:\sigma\tau$, \ $f\in\mcd(\sigma\tau)$, \ and \ $\forall a\in\mcd\sigma\col~~ \widehat{\subst\mci{x}a}s=fa$} \end{align*} We call $\hat\mci$ the \emph{evaluation function} of $\mci$. The evaluation function may be partial: in the last clause of the definition, even if there is some function $f$ such that $\widehat{\subst\mci{x}a}s=fa$ for every $a\in\mcd\sigma$, this $f$ may not be in $\mcd(\sigma\tau)$. In such a case, $\hat\mci$ is not defined on $\lam{x}s$, and consequently also not on a term of the form $(\lam{x}s)t$, since the second clause of the definition fails. An \emph{interpretation} is an assignment whose evaluation function is defined on all terms. An assignment $\mci$ is \emph{surjective} if for every type $\sigma$ and every value $a\in\mci\sigma$ there exists a term $s:\sigma$ such that $\hat\mci s=a$. \begin{prop} Let $\mci$ be an interpretation, $x:\sigma$, and $a\in\mci\sigma$. Then~$\subst\mci{x}a$ is an interpretation. \end{prop} \begin{prop} If $\mci$ is a surjective interpretation, then $\mci\sigma$ is a countable set for every type $\sigma$. \end{prop} A \emph{standard frame} is a frame $\mcd$ such that $\mcd(\sigma\tau)=(\mcd\sigma\to\mcd\tau)$ for all types $\sigma$, $\tau$. A \emph{standard interpretation} is an assignment into a standard frame. Note that every standard interpretation is, in fact, an interpretation. We assume a \emph{normalization operator $\nf{\cdot}$} that provides for lambda conversion. The normalization operator $\nf{\cdot}$ must be a type preserving total function from terms to terms. We call $\nf{s}$ the \emph{normal form of $s$} and say that $s$ is \emph{normal} if $\nf{s}=s$. One possible normalization operator is a function that for every term $s$ returns a $\beta$-normal term that can be obtained from $s$ by $\beta$-reduction. We will not commit to a particular normalization operator but state explicitly the properties we require for our results. To start, we require the following properties: \begin{description} \item[{N1}~] $\nf{\nf{s}}=\nf{s}$ \item[{N2}~] $\nf{\nf{s}t}=\nf{st}$ \item[{N3}~] $\nf{x\ddd s n}=x\nf{s_1}\dots\nf{s_n}$ \quad if $x\ddd s n:\beta$ and $n\ge0$ \item[{N4}~] $\hat\mci\nf{s}=\hat\mci{s}$ \quad if $\mci$ is an interpretation \end{description} \begin{prop} $x\ddd{s}n:\beta$ is normal iff $\dd{s}n$ are normal. \end{prop} For the proofs of Lemma~\ref{lem-admissibility} and Theorem~\ref{theo-admissible-interpretations} we need further properties of the normalization operator that can only be expressed with substitutions. A \emph{substitution} is a type preserving partial function from names to terms. If $\theta$ is a substitution, $x$ is a name, and $s$ is a term that has the same type as $x$, we write \emph{$\subst\theta x s$} for the substitution that agrees everywhere with~$\theta$ but possibly on $x$ where it yields $s$.
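Before stating the required properties of substitutions, we illustrate the normalization operator concretely. The following minimal sketch (our illustration, not part of the formal development) implements normal-order $\beta$-normalization on untyped terms in de Bruijn representation, which sidesteps variable capture; on simply typed terms it terminates, and it satisfies N1 and N2.
\begin{verbatim}
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:          # de Bruijn index of a bound or free variable
    idx: int

@dataclass(frozen=True)
class App:          # application s t
    fun: 'Term'
    arg: 'Term'

@dataclass(frozen=True)
class Lam:          # lambda abstraction (the binder is implicit)
    body: 'Term'

Term = Union[Var, App, Lam]

def shift(t: Term, d: int, cutoff: int = 0) -> Term:
    # Shift the free indices of t (those >= cutoff) by d.
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, App):
        return App(shift(t.fun, d, cutoff), shift(t.arg, d, cutoff))
    return Lam(shift(t.body, d, cutoff + 1))

def subst(t: Term, j: int, s: Term) -> Term:
    # Capture-avoiding substitution of s for index j in t.
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, App):
        return App(subst(t.fun, j, s), subst(t.arg, j, s))
    return Lam(subst(t.body, j + 1, shift(s, 1)))

def normalize(t: Term) -> Term:
    # Normal-order beta-normalization; terminates on simply typed terms.
    if isinstance(t, App):
        f = normalize(t.fun)
        if isinstance(f, Lam):
            # beta step: substitute the argument, then shift down
            return normalize(shift(subst(f.body, 0, shift(t.arg, 1)), -1))
        return App(f, normalize(t.arg))
    if isinstance(t, Lam):
        return Lam(normalize(t.body))
    return t

# Example: (lam. 0) applied to the free variable 3 reduces to 3.
ex = App(Lam(Var(0)), Var(3))
assert normalize(ex) == Var(3)
assert normalize(normalize(ex)) == normalize(ex)   # property N1
\end{verbatim}
The de Bruijn encoding replaces named bound variables by indices, so the $\beta$-step requires the usual index shifting instead of fresh-name generation.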
We assume that every substitution $\theta$ can be extended to a type preserving total function \emph{$\hat\theta$} from terms to terms such that the following conditions hold: \enlargethispage*{5mm} \begin{description} \item[{S1}~] $\hat\theta x=\Cond{x\in\Dom\theta}{\theta{x}}{x}$ \item[{S2}~] $\hat\theta(st)=(\hat\theta{s})(\hat\theta{t})$ \item[{S3}~] $[(\hat\theta(\lam{x}s){})t]=[\widehat{\subst\theta{x}t}s]$ \item[{S4}~] $\nf{\hat\eset s}=\nf{s}$ \end{description} Note that $\eset$ (the empty set) is the substitution that is undefined on every name. \section{Value Systems} \label{sec:value-sys} We introduce value systems as a tool for constructing surjective interpretations. Value systems are logical relations inspired by the possible-values relations used in~\cite{BrownARHO,BrownSmolkaEFO,BrownSmolkaBasic}. A \emph{value system} is a function $\canbe$ that maps every base type $\beta$ to a binary relation~$\canbe_\beta$ such that $\Dom(\canbe_\beta)\incl\Wff_\beta$ and $s\canbe_\beta a$ iff $\nf{s}\canbe_\beta a$. For every value system~$\canbe$ we define by induction on types: \begin{align*} \N{\mcd\sigma}&\eqdef\Ran(\canbe_\sigma)\\ \N{\canbe_{\sigma\tau}}&\eqdef\mset{(s,f)\in\Wff_{\sigma\tau}\times(\mcd\sigma\to\mcd\tau)} {\forall(t,a)\in\canbe_\sigma\col~(st,fa)\in\canbe_\tau} \end{align*} Note that $\mcd(\sigma\tau)\incl(\mcd\sigma\to\mcd\tau)$ for all types $\sigma\tau$. We usually drop the type index in $s\canbe_\sigma a$ and read $s\canbe a$ as $s$ can be $a$ or $a$ is a \emph{possible value} for $s$. \begin{prop} \label{prop-norm-poss-value} For every value system: \ $s\canbe_\sigma a$ iff $\nf{s}\canbe_\sigma a$. \end{prop} \begin{proof} By induction on $\sigma$. For base types the claim holds by the definition of value systems. Let $\sigma=\tau\mu$. For all $s\in\Wff_\sigma$, $t\in\Wff_\tau$, $a\in\mcd\tau\to\mcd\mu$, and $b\in\mcd\tau$, $$st\canbe_\mu ab {\mbox{ iff }} \nf{st}\canbe_\mu ab {\mbox{ iff }} \nf{\nf{s}t}\canbe_\mu ab {\mbox{ iff }} \nf{s}t\canbe_\mu ab$$ by the inductive hypothesis and N2. Hence $s\canbe_\sigma a$ iff $\nf{s}\canbe a$. \end{proof} A value system $\canbe$ is \emph{functional} if $\canbe_\beta$ is a functional relation for every base type $\beta$. (That is, for each $s\in\Wff_\beta$ there is at most one $b$ such that $s\canbe b$.) \begin{prop} \label{prop-functional-vs} If $\canbe$ is functional, then $\canbe_\sigma$ is a functional relation for every type~$\sigma$. \end{prop} \begin{proof} By induction on $\sigma$. For $\sigma=\beta$, the claim is trivial. Let $\sigma=\tau\mu$ and $s\canbe_{\tau\mu}f,g$. We show $f=g$. Let $a\in\mcd\tau$. Then $t\canbe_\tau a$ for some $t$. Now $st\canbe_\mu fa,ga$. By inductive hypothesis $fa=ga$. \end{proof} A value system $\canbe$ is \emph{total} if $x\in\Dom\canbe_\sigma$ for every name $x:\sigma$. An assignment $\mci$ is \emph{admissible} for a value system $\canbe$ if $\mci\sigma=\mcd\sigma$ for all types $\sigma$ and $x\canbe\mci x$ for all names $x$. (Recall that $\canbe$ is used to define $\mcd$.) Note that every total value system has admissible assignments. We will show that admissible assignments are interpretations that evaluate terms to possible values. \begin{lem} \label{lem-admissibility} Let $\mci$ be an assignment that is admissible for a value system $\canbe$ and $\theta$ be a substitution such that $\theta{x}\canbe\mci{x}$ for all $x\in\Dom\theta$. Then $s\in\Dom\hat\mci$ and $\hat\theta{s}\canbe\hat\mci{s}$ for every term $s$. \end{lem} \begin{proof} By induction on $s$. Let $s$ be a term. 
Case analysis. \br $s=x$. The claim holds by assumption and S1. \br $s=tu$. Then $t\in\Dom\hat\mci$, \ $\hat\theta{t}\canbe\hat\mci{t}$, \ $u\in\Dom\hat\mci$, \ and $\hat\theta{u}\canbe\hat\mci{u}$ by inductive hypothesis. Thus $s\in\Dom\hat\mci$ and $\hat\theta{s}= (\hat\theta{t})(\hat\theta{u})\canbe (\hat\mci{t})(\hat\mci{u})=\hat\mci{s}$ using S2. \br $s=\lam{x}t$, $x:\sigma$ and $t:\tau$. We need to prove $s\in\Dom\hat\mci$ and $\hat\theta{s}\canbe\hat\mci{s}$. First we prove \begin{equation}\label{lem-adm-funcase} t\in\Dom\widehat{\subst\mci{x}{a}} {\mbox{ and }} (\hat\theta{s})u\canbe \widehat{\subst\mci{x}a}t {\mbox{ whenever }} u\canbe_\sigma a. \end{equation} Let $u\canbe_\sigma a$. By inductive hypothesis we have $t\in\Dom\widehat{\subst\mci{x}a}$ and $\widehat{\subst\theta{x}u}t\canbe \widehat{\subst\mci{x}a}t$. Now $\nf{(\hat\theta{s})u}= \nf{\widehat{\subst\theta{x}u}t}\canbe \widehat{\subst\mci{x}a}t$ using~S3. Using Proposition~\ref{prop-norm-poss-value} we conclude (\ref{lem-adm-funcase}) holds. By definition of $\mcd\sigma$ for every $a\in\mcd\sigma$ there is a $u$ such that $u\canbe a$. Using this and (\ref{lem-adm-funcase}) we know $t\in\Dom\widehat{\subst\mci{x}a}$ for every $a\in\mcd\sigma$. Let $f:\mcd\sigma\to\mcd\tau$ be defined by $fa = \widehat{\subst\mci{x}{a}}{t}$ for each $a\in\mci\sigma$. For all $u\canbe_\sigma a$ we have $(\hat\theta{s})u\canbe fa$ by (\ref{lem-adm-funcase}). Hence $\hat\theta{s}\canbe f$. This implies $f\in\mcd{(\sigma\tau)}$, $s\in\Dom\hat\mci$, $\hat\mci s = f$ and $\hat\theta{s}\canbe \hat\mci s$ as desired. \end{proof} \begin{thm} \label{theo-admissible-interpretations} Let $\mci$ be an assignment that is admissible for a value system~$\canbe$. Then $\mci$ is an interpretation such that $s\canbe\hat\mci s$ for all terms $s$. Furthermore, $\mci$~is surjective if~$\canbe$ is functional. \end{thm} \begin{proof} Follows from Lemma~\ref{lem-admissibility} with Proposition~\ref{prop-norm-poss-value} and S4. To prove the second claim, let $a\in\mcd\sigma$ be given. By definition of $\mcd$ there is some $s$ such that $s\canbe a$. Since $s\canbe\hat\mci s$ we know $\hat\mci s = a$ by Proposition~\ref{prop-functional-vs}. \end{proof} \section{Simple Type Theory} We now define the terms and semantics of simple type theory (\emph{STT}). We fix a base type $\N{o}$ for the truth values and a name $\N{\neg}:oo$ for negation. Moreover, we fix for every type $\sigma$ a name $\N{=_\sigma}:\sigma\sigma o$ for the identity predicate for $\sigma$. An assignment $\mci$ is \emph{logical} if $\mci o=\set{0,1}$, $\mci(\neg)$ is the negation function and $\mci(=_\sigma)$ is the identity predicate for $\sigma$. We refer to the base types different from $o$ as \emph{sorts}, to the names $\neg$ and $=_\sigma$ as \emph{logical constants}, and to all other names as \emph{variables}. From now on \emph{$x$} will range over variables. Moreover, \emph{$c$} will range over logical constants and \emph{$\alpha$} will range over sorts. A \emph{formula} is a term of type $o$. We employ infix notation for formulas obtained with $=_\sigma$ and often write \emph{equations} $s=_\sigma t$ without the type index. We write \emph{$s\neq t$} for $\neg (s{=}t)$ and speak of a \emph{disequation}. Note that quantified formulas $\forall x.s$ can be expressed as equations $(\lam{x}s)=(\lam{x}x=x)$. A logical interpretation $\mci$ \emph{satisfies} a formula $s$ if $\hat\mci s=1$. A \emph{model} of a set of formulas $A$ is a logical interpretation that satisfies every formula~$s\in A$. 
A set of formulas is \emph{satisfiable} if it has a model. \section{Tableau Calculus} We now give a deductive calculus for STT. A \emph{branch} is a set of normal formulas. The \emph{tableau calculus $\TS$} operates on finite branches and employs the rules shown in Figure~\ref{fig-tableau-rules}. \begin{figure}[t] \begin{mathpar} \inferrule*[left=\emph{\TRDN}~]{\neg\neg s}{s} \and \inferrule*[left=\emph{\TRBQ}~]{s =_ot}{s\,,\,t~\mid~\neg s\,,\,\neg t} \and \inferrule*[left=\emph{\TRBE}~]{s\neq_ot}{s\,,\,\neg t~\mid~\neg s\,,\,t} \\ \inferrule*[left=\emph{\TRFQ}~,right=~$u:\sigma$ normal] {s =_{\sigma\tau} t}{\nf{su} =\nf{tu}} \and \inferrule*[left=\emph{\TRFE}~,right=~$x:\sigma$ fresh] {s\neq_{\sigma\tau} t}{\nf{sx}\neq\nf{tx}} \\ \inferrule*[left=\emph{\TRMat}~,right=~$n\geq 0$] {xs_1\dots s_n\,,\,\neg xt_1\dots t_n} {s_1\neq t_1\mid\dots\mid s_n\neq t_n} \and \inferrule*[left=\emph{\TRDec}~,right=~$n\geq 0$] {xs_1\dots s_n\neq_\alpha xt_1\dots t_n} {s_1\neq t_1\mid\dots\mid s_n\neq t_n} \\ \inferrule*[left=\emph{\TRCon}~] {s=_\alpha t\,,\,u\neq_\alpha v} {s\neq u\,,\,t\neq u\mid s\neq v\,,\,t\neq v} \end{mathpar} \caption{Tableau rules for STT} \label{fig-tableau-rules} \end{figure} The side condition ``$x$~fresh'' of rule \TRFE requires that $x$ does not occur free in the branch the rule is applied to. We say a branch $A$ is \emph{closed} if $x,\neg x\in A$ for some variable $x:o$ or if $x\neq_\alpha x\in A$ for some sort $\alpha$ and variable $x:\alpha$. Note that $A$ is closed if and only if either the $\TRMat$ or $\TRDec$ rule applies with $n=0$. We impose the following restrictions: \begin{enumerate}[(1)] \item We only admit rule instances $A/\ddd A n$ where $A$ is not closed. \item \TRFE can only be applied to a disequation $(s{\neq} t)\in A$ if there is no variable $x$ such that $(\nf{sx}\neq\nf{tx})\in A$. \end{enumerate} The set of \emph{refutable branches} is defined inductively: if $A/\ddd A n$ is an instance of a rule of~$\TS$ and $\dd A n$ are refutable, then $A$ is refutable. Note that the base cases of this inductive definition are when $n=0$. The rules where $n$ may be $0$ are $\TRMat$ and $\TRDec$. Figure~\ref{fig:refutation} shows a refutation in~$\TS$. \begin{figure}[t] \begin{equation*} \begin{array}{c} pf,~\neg p(\lam{x}{\neg\neg fx}) \\ {\mbox{[${\emph{\TRMat}}$]}} \\ f\neq(\lam{x}{\neg\neg fx}) \\ {\mbox{[${\emph{\TRFE}}$]}} \\ fx\neq\neg\neg fx \\ {\mbox{[${\emph{\TRBE}}$]}} \\ \hline \begin{array}{c|c} \begin{array}{c} fx,~\neg\neg\neg fx\\ {\mbox{[${\emph{\TRDN}}$]}} \\ \neg fx\\ {\mbox{[${\emph{\TRMat}}$]}} \\ x\neq x \\ {\mbox{[${\emph{\TRDec}}$]}} \end{array} & \begin{array}{c} \neg fx,~\neg\neg fx \\ {\mbox{[${\emph{\TRDN}}$]}} \\ fx\\ {\mbox{[${\emph{\TRMat}}$]}} \\ x\neq x \\ {\mbox{[${\emph{\TRDec}}$]}} \end{array} \end{array} \end{array} \end{equation*} \caption{Tableau refuting $\{pf,\neg p(\lam{x}{\neg\neg fx})\}$ where $p:(\alpha o)o$ and $f:\alpha o$} \label{fig:refutation} \end{figure} A remark on the names of the rules: \TRMat is called the mating rule, \TRDec the decomposition rule, \TRCon the confrontation rule, \TRBQ the Boolean equality rule, \TRBE the Boolean extensionality rule, \TRFQ the functional equality rule, and \TRFE the functional extensionality rule. \begin{prop}[Soundness] \label{prop:ts-sound} Every refutable branch is unsatisfiable. \end{prop} \begin{proof} Let $A/\ddd A n$ be an instance of a rule of $\TS$ such that $A$ is satisfiable. It suffices to show that one of the branches $\dd A n$ is satisfiable. Straightforward.
\end{proof} We will show that the tableau calculus $\TS$ is \emph{complete}, that is, can refute every finite unsatisfiable branch. The rules of $\TS$ are designed such that we obtain a strong completeness result. For practical purposes one can of course include rules that close branches including $s,\neg s$ or $s\neq s$. To avoid redundancy, our definition of STT only covers the logical constants $\neg$ and $=_\sigma$. Adding further constants such as $\land$, $\lor$, $\to$, $\forall_{\!\sigma}$ and $\exists_\sigma$ is straightforward. In fact, all logical constants can be expressed with the identities $=_\sigma$~\cite{AndrewsBook}. We have included $\neg$ since we need it for the formulation of the tableau calculus. The refutation in Figure~\ref{fig:refutationneg} suggests that the elimination of $\neg$ is not straightforward. \begin{figure}[t] \begin{equation*} \begin{array}{c} (\lam{x}{x}) = \lam{x}{y} \\ {\mbox{[${\emph{\TRFQ}}$ with $x$]}} \\ x =_o y \\ {\mbox{[${\emph{\TRBQ}}$]}} \\ \hline \begin{array}{c|c} \begin{array}{c} x,y\\ {\mbox{[${\emph{\TRFQ}}$ with $\neg x$]}} \\ (\neg x) =_o y \\ {\mbox{[${\emph{\TRBQ}}$]}} \\ \hline \begin{array}{c|c} \begin{array}{c} \neg x,y \\ {\mbox{[${\emph{\TRMat}}$]}} \end{array} & \begin{array}{c} \neg \neg x,\neg y \\ {\mbox{[${\emph{\TRMat}}$]}} \end{array} \end{array} \end{array} & \begin{array}{c} \neg x,\neg y\\ {\mbox{[${\emph{\TRFQ}}$ with $\neg x$]}} \\ (\neg x) =_o y \\ {\mbox{[${\emph{\TRBQ}}$]}} \\ \hline \begin{array}{c|c} \begin{array}{c} \neg x,y \\ {\mbox{[${\emph{\TRMat}}$]}} \end{array} & \begin{array}{c} \neg \neg x,\neg y\\ {\mbox{[${\emph{\TRDN}}$]}} \\ x \\ {\mbox{[${\emph{\TRMat}}$]}} \end{array} \end{array} \end{array} \end{array} \end{array} \end{equation*} \caption{Tableau refuting $(\lam{x}{x}) = \lam{x}{y}$ where $x,y:o$} \label{fig:refutationneg} \end{figure} \section{Evidence} A branch $E$ is \emph{evident} if it satisfies the \emph{evidence conditions} in Figure~\ref{fig:evidence}. The evidence conditions correspond to the tableau rules and are designed such that every branch that is closed under the tableau rules is either closed or evident. We will show that evident branches are satisfiable. \begin{figure}[t] \renewcommand{\arraystretch}{1.4} \begin{tabular}{c>{\raggedright}p{120mm}} \emph{\EDN}&If $\neg\neg s$ is in $E$, then $s$ is in $E$. \tabularnewline \emph{\EBQ}&If $s =_o t$ is in $E$, then either $s$ and $t$ are in $E$ or $\neg s$ and $\neg t$ are in $E$. \tabularnewline \emph{\EBE}&If $s\neq_o t$ is in $E$, then either $s$ and $\neg t$ are in $E$ or $\neg s$ and $t$ are in $E$. \tabularnewline \emph{\EFQ}&If $s =_{\sigma\tau} t$ is in $E$, then $\nf{su}=\nf{tu}$ is in $E$ for every normal $u:\sigma$. \tabularnewline \emph{\EFE}&If $s\neq_{\sigma\tau} t$ is in $E$, then $\nf{sx}\neq\nf{tx}$ is in $E$ for some variable $x$. \tabularnewline \emph{\EMat}&If $x\ddd s n$ and $\neg x\ddd t n$ are in $E$,\ignore{\\} then $n\ge1$ and $s_i\neq t_i$ is in $E$ for some $i\in\set{1\cld n}$. Note that if $n=0$, this means if $\neg x\in E$, then $x\notin E$. \tabularnewline \emph{\EDec}&If $x\ddd s n\neq_\alpha x\ddd t n$ is in $E$,\ignore{\\} then $n\ge1$ and $s_i\neq t_i$ is in $E$ for some $i\in\set{1\cld n}$. Note that if $n=0$, this means $x\neq_\alpha x\notin E$. \tabularnewline \emph{\ECon}&If $s=_\alpha t$ and $u \neq_\alpha v$ are in $E$,\\ then either $s\neq u$ and $t\neq u$ are in $E$ or $s\neq v$ and $t\neq v$ are in $E$. 
\end{tabular} \caption{Evidence conditions} \label{fig:evidence} \end{figure} A branch $E$ is \emph{complete} if for every normal formula $s$ either $s$ or $\neg s$ is in $E$. The cut-freeness of $\TS$ shows in the fact that there are many evident sets that are not complete. For instance, $\set{pf,~\neg p(\lam{x}{\neg fx}),~f\neq\lam{x}{\neg fx},~ fx\neq\neg fx,~\neg fx}$ is an incomplete evident branch if $p:(\sigma o)o$. \subsection{Discriminants} Given an evident branch $E$, we will construct a value system whose admissible logical interpretations are models of $E$. We start by defining the values for the sorts, which we call discriminants. Discriminants first appeared in~\cite{BrownSmolkaBasic}. Let $E$ be a fixed evident branch in the following. A term $u\in\Wff_\alpha$ is \emph{$\alpha$-discriminating in $E$} if there is some term $t$ such that either $u\neq_\alpha t$ or $t\neq_\alpha u$ is in $E$. An \emph{$\alpha$-discriminant} is a maximal set $a$ of discriminating terms of type $\alpha$ such that there is no disequation $s{\neq}t\in E$ such that $s,t\in a$. We write \emph{$s\notq t$} if $E$ contains the disequation $s{\neq }t$ or~$t{\neq}s$. In~\cite{Brown2004a} a sort was interpreted using maximally compatible sets of terms of the sort (where $s$ and $t$ are compatible unless $s\notq t$). The idea is that the set $E$ insists that certain terms cannot be equal, but leaves open that other terms ultimately may be identified by the interpretation. In particular, two compatible terms $s$ and $t$ may be identified by taking a maximally compatible set of terms containing both $s$ and $t$ as a value. It is not difficult to see that a maximally compatible set is simply the union of an $\alpha$-discriminant with all terms of sort $\alpha$ that are not $\alpha$-discriminating. We now find that it is clearer to use $\alpha$-discriminants as values instead of maximally compatible sets. In particular, it is easier to count the number of $\alpha$-discriminants, as we now show. \begin{exa} Suppose $E=\set{x{\neq}y,\,x{\neq}z,\,y{\neq}z}$ and $x,y,z:\alpha$. There are~3 \text{$\alpha$-discriminants}: $\set{x}$, $\set{y}$, $\set{z}$. \end{exa} \begin{exa} Suppose $E=\mset{a_n\neq_\alpha b_n}{n\in\NN}$ where the $a_n$ and $b_n$ are pairwise distinct variables. Then $E$ is evident and there are uncountably many \text{$\alpha$-discriminants}. \end{exa} \begin{prop} \label{prop-finite-discs} If $E$ contains exactly $n$ disequations at $\alpha$, then there are at most~$2^n$ $\alpha$-discriminants. If $E$ contains no disequation at $\alpha$, then $\eset$ is the only $\alpha$-discriminant. \end{prop} \begin{prop} \label{prop-diff-discs} Let $a$ and $b$ be different discriminants. Then: \begin{enumerate}[\em(1)] \item $a$ and $b$ are separated by a disequation in $E$, that is, there exist terms $s\in a$ and $t\in b$ such that $s\notq t$. \item $a$ and $b$ are not connected by an equation in $E$, that is, there exist no terms $s\in a$ and $t\in b$ such that $(s{=} t)\in E$. \end{enumerate} \end{prop} \begin{proof} The first claim follows by contradiction. Suppose there are no terms ${s\in a}$ and $t\in b$ such that $s\notq t$. Let $s\in a$. Then $s\in b$ since $b$ is a maximal set of discriminating terms. Thus $a\incl b$ and hence $a=b$ since $a$ is maximal. Contradiction. The second claim also follows by contradiction. Suppose there is an equation $(s_1{=} s_2)\in E$ such that $s_1\in a$ and $s_2\in b$. By the first claim we have terms $s\in a$ and $t\in b$ such that $s\notq t$. 
By \ECon we have $s_1\notq s$ or $s_2\notq t$. Contradiction since $a$ and~$b$ are discriminants. \end{proof} \subsection{Compatibility}\label{sec:compat} For our proofs we need an auxiliary notion for evident branches that we call compatibility. Let $E$ be a fixed evident branch in the following. We define relations $\N{\comp_\sigma}\incl\Wff_\sigma\times\Wff_\sigma$ by induction on types: \begin{align*} \N{s\comp_o t}&\iffdef \set{\nf{s},\neg\nf{t}}\not\subseteq E~\mathrm{and}~ \set{\neg\nf{s},\nf{t}}\not\subseteq E\\ \N{s\compa t}&\iffdef \mathrm{not}~\nf{s}\notq\nf{t}\\ \N{s\comp_{\sigma\tau}t}&\iffdef s u\comp_\tau t v~\text{whenever}~u\comp_\sigma v \end{align*} We say that $s$ and $t$ are \emph{compatible} if $s\comp t$. \begin{lem}[Compatibility] \label{lem-compatibility}~\\ For $n\ge0$ and all terms $s$, $t$, $xs_1\dots s_n$, $xt_1\dots t_n$ of type~$\sigma$: \begin{enumerate}[\em(1)] \item We do not have both $s\comp_\sigma t$ and $\nf{s}\notq\nf{t}$. \item Either $xs_1\dots s_n\comp_\sigma xt_1\dots t_n$ or $\nf{s_i}\notq\nf{t_i}$ for some $i\in\set{1\cld n}$. \end{enumerate} \end{lem} \begin{proof} By induction on $\sigma$. Case analysis. $\sigma=o$. Claim~(1) follows with \EBE. Claim~(2) follows with N3 and \EMat. $\sigma=\alpha$. Claim~(1) is trivial. Claim~(2) follows with N3 and \EDec. $\sigma=\tau\mu$. We show~(1) by contradiction. Suppose $s\comp_\sigma t$ and $\nf{s}\notq\nf{t}$. By \EFE $\nf{\nf{s}x}\notq\nf{\nf{t}x}$ for some variable $x$. By inductive hypothesis~(2) we have $x\comp_\tau x$. Hence $sx\comp_\mu tx$. Contradiction by inductive hypothesis~(1) and N2. To show~(2), suppose $xs_1\dots s_n\ncomp_\sigma xt_1\dots t_n$. Then there exist terms such that $u\comp_\tau v$ and $xs_1\dots s_nu\ncomp_\mu xt_1\dots t_nv$. By inductive hypothesis~(1) we know that $\nf{u}\notq\nf{v}$ does not hold. Hence $\nf{s_i}\notq\nf{t_i}$ for some $i\in\set{1\cld n}$ by inductive hypothesis~(2). \end{proof} \section{Model Existence} \label{sec:model-existence} Let $E$ be a fixed evident branch. We define a value system~$\canbe$ for $E$: \begin{align*} \N{s \canbe_o 0}&\iffdef s\in\Wff_o \text{ and } \nf{s}\notin E\\ \N{s \canbe_o 1}&\iffdef s\in\Wff_o \text{ and } \neg\nf{s}\notin E\\ \N{s\canbe_\alpha a}\!&\iffdef s\in\Wff_\alpha,~a \text{ is an $\alpha$-discriminant, and } \nf{s}\in a \text{ if } \nf{s} \text{ is discriminating} \end{align*} Note that N1 ensures the property $s\canbe_\beta a$ iff $\nf{s}\canbe_\beta a$. \begin{prop} \label{prop-mod-ex-o} For all variables $x_o$, either $x\canbe 0$ and $\neg x\canbe 1$ or $x\canbe 1$ and $\neg x\canbe 0$. In particular, $\mcd o=\set{0,1}$. \end{prop} \begin{proof} By $\EMat$ either $x\notin E$ or $\neg x\notin E$. If $x\notin E$, then $x\canbe 0$ and $\neg x\canbe 1$ by N3 and $\EDN$. If $\neg x\notin E$, then $x\canbe 1$ and $\neg x\canbe 0$ by N3. \end{proof} \begin{lem} \label{lemma-adm-model} A logical assignment is a model of $E$ if it is admissible for $\canbe$. \end{lem} \begin{proof} Let $\mci$ be a logical assignment that is admissible for $\canbe$, and let $s\in E$. By Theorem~\ref{theo-admissible-interpretations} we know that $\mci$ is an interpretation and that $s\canbe_o\hat\mci s$. Thus $\hat\mci s\neq0$ since $s\in E$. Hence $\hat\mci s=1$. \end{proof} It remains to show that $\canbe$ admits logical interpretations. First we show that all sets $\mcd\sigma$ are nonempty. To do so, we prove that compatible equi-typed terms have a common value. 
A set $T$ of equi-typed terms is \emph{compatible} if $s\comp t$ for all terms $s,t\in T$. We write \emph{$T\canbe_\sigma a$} if $T\incl\Wff_\sigma$, $a\in\mcd\sigma$, and $t\canbe a$ for every~$t\in T$. \begin{lem}[Common Value] \label{lem-common-value} Let $T\incl\Wff_\sigma$. Then $T$ is compatible if and only if there exists a value~$a$ such that $T\canbe_\sigma a$. \end{lem} \begin{proof} By induction on $\sigma$. \br $\sigma=\alpha,~{\Rightarrow}$. Let $T$ be compatible. Then there exists an $\alpha$-discriminant $a$ that contains all the $\alpha$-discriminating terms in $\mset{\nf{t}}{t\in T}$. Clearly, $T\canbe a$. \br $\sigma=\alpha,~{\Leftarrow}$. Suppose $T\canbe a$ and $T$ is not compatible. Then there are terms $s,t\in T$ such that $(\nf{s}{\neq}\nf{t})\in E$. Thus $\nf{s}$ and $\nf{t}$ cannot be both in $a$. This contradicts $s,t\in T\canbe a$ since $\nf{s}$ and $\nf{t}$ are discriminating. \br $\sigma=o,~{\Rightarrow}$. By contraposition. Suppose $T\ncanbe0$ and $T\ncanbe1$. Then there are terms $s,t\in T$ such that $\nf{s},\neg\nf{t}\in E$. Thus $s\ncomp t$. Hence $T$ is not compatible. \br $\sigma=o,~{\Leftarrow}$. By contraposition. Suppose $s\ncomp_o t$ for $s,t\in T$. Then $\nf{s},\neg\nf{t}\in E$ without loss of generality. Hence $s\ncanbe0$ and $t\ncanbe1$. Thus $T\ncanbe0$ and $T\ncanbe1$. \br $\sigma=\tau\mu,~{\Rightarrow}$. Let $T$ be compatible. We define $T_a:=\mset{ts}{t\in T,~s\canbe_\tau a}$ for every value $a\in\mcd\tau$ and show that $T_a$ is compatible. Let $t_1,t_2\in T$ and $s_1,s_2\canbe_\tau a$. It suffices to show $t_1s_1\comp t_2s_2$. By the inductive hypothesis $s_1\comp_\tau s_2$. Since $T$ is compatible, $t_1\comp t_2$. Hence $t_1s_1\comp t_2s_2$. By the inductive hypothesis we now know that for every $a\in\mcd\tau$ there is a $b\in\mcd\mu$ such that $T_a\canbe_\mu b$. Hence there is a function $f\in\mcd\sigma$ such that $T_a\canbe_\mu fa$ for every $a\in\mcd\tau$. Thus $T\canbe_\sigma f$. \br $\sigma=\tau\mu,~{\Leftarrow}$. Let $T\canbe_\sigma f$ and $s,t\in T$. We show $s\comp_\sigma t$. Let $u\comp_\tau v$. It suffices to show $su\comp_\mu tv$. By the inductive hypothesis $u,v\canbe_\tau a$ for some value $a$. Hence $su,tv\canbe_\mu fa$. Thus $su\comp_\mu tv$ by the inductive hypothesis. \end{proof} \begin{lem}[Admissibility] \label{lemma-inhabitation} For every variable $x:\sigma$ there is some $a\in\mcd\sigma$ such that $x\canbe a$. In particular, $\mcd\sigma$ is a nonempty set for every type $\sigma$. \end{lem} \begin{proof} Let $x:\sigma$ be a variable. By Lemma~\ref{lem-compatibility}\,(2) we know $x\comp_\sigma x$. Hence $\set{x}$ is compatible. By Lemma~\ref{lem-common-value} there exists a value $a$ such that $x\canbe_\sigma a$. The claim follows since $a\in\mcd\sigma$ by definition of $\mcd\sigma$. \end{proof} \begin{lem}[Functionality] \label{lem-functionality} If $s\canbe_\sigma a$, $t\canbe_\sigma b$, and $(s{=}t)\in E$, then $a=b$. \end{lem} \begin{proof} By contradiction and induction on $\sigma$. Assume $s\canbe_\sigma a$, $t\canbe_\sigma b$, $(s{=}t)\in E$, and $a\neq b$. Case analysis. $\sigma=o$. By $\EBQ$ either $s,t\in E$ or $\neg s,\neg t\in E$. Hence $a$ and $b$ are either both $1$ or both $0$. Contradiction. $\sigma=\alpha$. Since $a\neq b$, there must be discriminating terms of type $\alpha$. Since $(s{=}t)\in E$, we know by N3 and \ECon that $s$ and $t$ are normal and discriminating. Hence $s\in a$ and $t\in b$. Contradiction by Proposition~\ref{prop-diff-discs}\,(2). $\sigma = \tau\mu$.
Since $a\neq b$, there is some $c\in\mcd\tau$ such that $ac\not= bc$. By the definition of $\mcd\tau$ and Proposition~\ref{prop-norm-poss-value} there is a normal term $u$ such that $u\canbe_\tau c$. Hence $su\canbe ac$ and $tu\canbe bc$. By Proposition~\ref{prop-norm-poss-value} $\nf{su}\canbe_\mu ac$ and $\nf{tu}\canbe_\mu bc$. By $\EFQ$ the equation $\nf{su} = \nf{tu}$ is in~$E$. Contradiction by the inductive hypothesis. \end{proof} We now define the canonical interpretations for the logical constants: \begin{align*} \N{\mcl({\neg})}&\eqdef\lam{a{\in}\mcd o}{~\Cond{a{=}1}01}\\ \N{\mcl({=_\sigma})}&\eqdef\lam{a{\in}\mcd\sigma}{~\lam{b{\in}\mcd\sigma}{~\Cond{a{=}b}10}} \end{align*} \begin{lem}[Logical Constants] \label{lem-log-constants} $c\canbe\mcl(c)$ for every logical constant $c$. \end{lem} \begin{proof} We show $\neg\canbe \mcl(\neg)$ by contradiction. Let $s\canbe_o a$ and assume $\neg s \ncanbe\mcl(\neg) a$. Case analysis. \begin{enumerate}[$\bullet$] \item $a=0$. Then $\nf{s}\notin E$ and $\neg\nf{\neg s}\in E$. Contradiction by N3 and~$\EDN$. \item $a=1$. Then $\neg\nf{s}\notin E$ and $\nf{\neg s}\in E$. Contradiction by N3. \end{enumerate} Finally, we show $(=_\sigma)\canbe\mcl(=_\sigma)$ by contradiction. Let $s\canbe_\sigma a$, $t\canbe_\sigma b$, and $(s{=_\sigma}t)\ncanbe\mcl(=_\sigma)ab$. Case analysis. \begin{enumerate}[$\bullet$] \item $a=b$. Then $\nf{s}\notq\nf{t}$ by N3 and $s,t\canbe a$. Thus $s\comp t$ by Lemma~\ref{lem-common-value}. Contradiction by Lemma~\ref{lem-compatibility}\,(1). \item $a\neq b$. Then $(\nf{s}{=}\nf{t})\in E$ by N3. Hence $a=b$ by Proposition~\ref{prop-norm-poss-value} and Lemma~\ref{lem-functionality}. Contradiction.\qedhere \end{enumerate} \end{proof} \begin{thm}[Model Existence] \label{theo-model-exist} Every evident branch is satisfiable. Moreover, every complete evident branch has a surjective model, and every finite evident branch has a finite model. \end{thm} \begin{proof} Let $E$ be an evident branch and $\canbe$ be the value system for $E$. By Proposition~\ref{prop-mod-ex-o}, Lemma~\ref{lemma-inhabitation}, and Lemma~\ref{lem-log-constants} we have a logical interpretation $\mci$ that is admissible for $\canbe$. By Lemma~\ref{lemma-adm-model} $\mci$ is a model of $E$. Let $E$ be complete. By Theorem~\ref{theo-admissible-interpretations} we know that $\mci$ is surjective if $\canbe$ is functional. Let $s\canbe_\beta a$ and $s\canbe_\beta b$. We show $a=b$. By Proposition~\ref{prop-norm-poss-value} we can assume that $s$ is normal. Thus $s{=}s$ is normal by N3. Since $\mci$ is a model of $E$, we know that the formula $s{\neq}s$ is not in $E$. Since $E$ is complete, we know that ${s}{=}{s}$ is in~$E$. By Lemma~\ref{lem-functionality} we have $a=b$. If $E$ is finite, $\mci\alpha=\mcd\alpha$ is finite by Proposition~\ref{prop-finite-discs}. \end{proof} \section{Abstract Consistency} We now extend the model existence result for evident branches to abstract consistency classes, following the corresponding development for first-order logic~\cite{SmullyanBook}. Notions of abstract consistency for simple type theory have been previously considered in~\cite{Andrews71,Kohlhase93a,KohlhaseTableaux1995,Benzmuller99a,BenzKoh98,BBK04,BBKweb04,Brown2004a,BrownARHO}. Equality was treated as Leibniz equality in~\cite{Andrews71}. Abstract consistency conditions for primitive equality corresponding to reflexivity and substitutivity properties were given by Benzm\"uller in~\cite{Benzmuller99a,Benzmuller99b}.
A primitive identity predicate $=_\sigma$ was considered in~\cite{BBK04} but the abstract consistency conditions for $=_\sigma$ essentially reduced it to Leibniz equality. Conditions for $=_\sigma$ analogous to $\ACon$ first appeared in~\cite{Brown2004a}. An \emph{abstract consistency class} is a set $\Gamma$ of branches such that every branch $A\in\Gamma$ satisfies the conditions in Figure~\ref{fig:abs-consistency}. An abstract consistency class $\Gamma$ is \emph{complete} if for every branch $A\in\Gamma$ and every normal formula $s$ either $A\cup\set{{s}}$ or $A\cup\set{\neg{s}}$ is in~$\Gamma$. The completeness condition was called ``saturation'' in~\cite{BBK04}. As discussed in~\cite{BBK2009} and the conclusion of~\cite{BBK04}, the condition corresponds to having a cut rule in a calculus. In~\cite{BBKweb04} conditions analogous to $\ADec$ and $\AMat$ appear (using Leibniz equality) and a model existence theorem is proven with these conditions replacing saturation. The use of Leibniz equality means that there was still not a cut-free treatment of equality in~\cite{BBKweb04}. \begin{figure}[tp] \renewcommand{\arraystretch}{1.4} \begin{tabular}{c>{\raggedright}p{120mm}} \emph{\ADN}&If $\neg\neg s$ is in $A$, then $A\cup\set{s}$ is in $\Gamma$. \tabularnewline \emph{\ABQ}&If $s =_o t$ is in $A$, then either $A\cup\set{s,t}$ or $A\cup\set{\neg s,\neg t}$ is in $\Gamma$. \tabularnewline \emph{\ABE}&If $s\neq_o t$ is in $A$, then either $A\cup\set{s,\neg t}$ or $A\cup\set{\neg s,t}$ is in $\Gamma$. \tabularnewline \emph{\AFQ}&If $s =_{\sigma\tau} t$ is in $A$,\\ then $A\cup\set{\nf{su}=\nf{tu}}$ is in $\Gamma$ for every normal~$u:\sigma$. \tabularnewline \emph{\AFE}&If $s\neq_{\sigma\tau} t$ is in $A$, then $A\cup\set{\nf{sx}\neq\nf{tx}}$ is in $\Gamma$ for some variable $x$. \tabularnewline \emph{\AMat}&If $x\ddd s n$ is in $A$ and $\neg x\ddd t n$ is in $A$,\\ then $n\geq 1$ and $A\cup\set{s_i\neq t_i}$ is in $\Gamma$ for some $i\in\set{1\cld n}$. \tabularnewline \emph{\ADec}&If $x\ddd s n\neq_\alpha x\ddd t n$ is in $A$, then $n\ge 1$ and $A\cup\set{s_i\neq t_i}$ is in $\Gamma$ for some $i\in\set{1\cld n}$. \tabularnewline \emph{\ACon}&If $s=_\alpha t$ and $u \neq_\alpha v$ are in $A$,\\ then either $A\cup\set{s\neq u,t\neq u}$ or $A\cup\set{s\neq v,t\neq v}$ is in $\Gamma$. \end{tabular} \caption{Abstract consistency conditions (must hold for every $A\in\Gamma$)} \label{fig:abs-consistency} \end{figure} \begin{prop} Let $A$ be a branch. Then $A$ is evident if and only if $\set{A}$ is an abstract consistency class. Moreover, $A$ is a complete evident branch if and only if $\set{A}$ is a complete abstract consistency class. \end{prop} \begin{lem}[Extension Lemma] \label{lem:extension} Let $\Gamma$ be an abstract consistency class and $A\in\Gamma$. Then there exists an evident branch $E$ such that $A\subseteq E$. Moreover, if $\Gamma$ is complete, a complete evident branch $E$ exists such that $A\subseteq E$. \end{lem} \begin{proof} Let $u_0,u_1,u_2,\ldots$ be an enumeration of all normal formulas. We construct a sequence $A_0\subseteq A_1 \subseteq A_2 \subseteq \cdots$ of branches such that every $A_n\in\Gamma$. Let $A_0 \deq A$. We define $A_{n+1}$ by cases. If there is no $B\in\Gamma$ such that $A_{n}\cup\{u_n\}\subseteq B$, then let $A_{n+1}\deq A_n$. Otherwise, choose some $B\in\Gamma$ such that $A_{n}\cup\{u_n\}\subseteq B$. We consider two subcases.
\begin{enumerate}[(1)] \item If $u_n$ is of the form $s\neq_{\sigma\tau} t$, then choose $A_{n+1}$ to be $B\cup\{\nf{sx}\neq\nf{tx}\}\in\Gamma$ for some variable $x$. This is possible since $\Gamma$ satisfies $\AFE$. \item If $u_n$ is not of this form, then let $A_{n+1}$ be $B$. \end{enumerate} Let $\displaystyle E:=\bigcup_{n\in\NN} A_n$. We show that $E$ satisfies the evidence conditions. \begin{enumerate}[\EMat] \item[{\EDN}] Assume $\neg\neg s$ is in $E$. Let $n$ be such that $u_n=s$. Let $r\geq n$ be such that $\neg \neg s$ is in $A_r$. By $\ADN$, $A_r\cup\{s\}\in\Gamma$. Since $A_n\cup\{s\}\subseteq A_r\cup\{s\}$, we have $s\in A_{n+1}\subseteq E$. \item[{\EMat}] Assume $x\ddd s n$ and $\neg x\ddd t n$ are in $E$. For each $i\in\set{1\cld n}$, let $m_i$ be such that $u_{m_i}$ is $s_i\neq t_i$. Let $r \ge m_1,\ldots,m_n$ be such that $x\ddd s n$ and $\neg x\ddd t n$ are in $A_r$. By $\AMat$ $n\geq 1$ and there is some $i\in\set{1\cld n}$ such that $A_r\cup\{s_i\neq t_i\}\in\Gamma$. Since $A_{m_i}\cup\{s_i\neq t_i\}\subseteq A_r\cup\{s_i\neq t_i\}$, we have $(s_i\neq t_i)\in A_{m_{i}+1}\subseteq E$. \item[{\EDec}] Similar to $\EMat$ \item[{\ECon}] Assume $s=_\alpha t$ and $u \neq_\alpha v$ are in $E$. Let $n,m,j,k$ be such that $u_n$ is $s\neq u$, $u_m$ is $t\neq u$, $u_j$ is $s\neq v$ and $u_k$ is $t\neq v$. Let $r\ge n,m,j,k$ be such that $s=_\alpha t$ and $u \neq_\alpha v$ are in $A_r$. By $\ACon$ either $A_r\cup\{s\neq u,t\neq u\}$ or $A_r\cup\{s\neq v,t\neq v\}$ is in $\Gamma$. Assume $A_r\cup\{s\neq u,t\neq u\}$ is in $\Gamma$. Since $A_n\cup\{s\neq u\}\subseteq A_r\cup\{s\neq u,t\neq u\}$, we have $s\neq u\in A_{n+1}\subseteq E$. Since $A_m\cup\{t\neq u\}\subseteq A_r\cup\{s\neq u,t\neq u\}$, we have $t\neq u\in A_{m+1}\subseteq E$. Next assume $A_r\cup\{s\neq v,t\neq v\}$ is in $\Gamma$. By a similar argument we know $s\neq v$ and $t\neq v$ must be in $E$. \item[{\EBQ}] Assume $s =_o t$ is in $E$. Let $n,m,j,k$ be such that $u_n=s$, $u_m=t$, $u_j=\neg s$ and $u_k=\neg t$. Let $r\ge n,m,j,k$ be such that $s =_o t$ is in $A_r$. By $\ABQ$ either $A_r\cup\{s,t\}$ or $A_r\cup\{\neg s,\neg t\}$ is in $\Gamma$. Assume $A_r\cup\{s,t\}$ is in $\Gamma$. Since $A_n\cup\{s\}\subseteq A_r\cup\{s,t\}$, we have $s\in E$. Since $A_m\cup\{t\}\subseteq A_r\cup\{s,t\}$, we have $t\in E$. Next assume $A_r\cup\{\neg s,\neg t\}$ is in $\Gamma$. Since $A_j\cup\{\neg s\}\subseteq A_r\cup\{\neg s, \neg t\}$, we have $\neg s\in E$. Since $A_k\cup\{\neg t\}\subseteq A_r\cup\{\neg s,\neg t\}$, we have $\neg t\in E$. \item[{\EBE}] Similar to $\EBQ$ \item[{\EFQ}] Assume $s =_{\sigma\tau} t$ is in $E$ and $u:\sigma$ is normal. Let $n$ be such that $u_n$ is $\nf{su} =_\tau \nf{tu}$. Let $r\ge n$ be such that $s =_{\sigma\tau} t$ is in $A_r$. By $\AFQ$ we know $A_r\cup\{\nf{su} =_\tau \nf{tu}\}$ is in $\Gamma$. Hence $\nf{su} =_\tau\nf{tu}$ is in $A_{n+1}$ and also in $E$. \item[{\EFE}] Assume $s\neq_{\sigma\tau} t$ is in $E$. Let $n$ be such that $u_n$ is $s\neq_{\sigma\tau} t$. Let $r\geq n$ be such that $s\neq_{\sigma\tau} t$ is in $A_r$. Since $A_n\cup\{u_n\}\subseteq A_r$, there is some variable $x$ such that $\nf{sx}\neq_\tau\nf{tx}$ is in $A_{n+1}\subseteq E$. \end{enumerate} It remains to show that~$E$ is complete if $\Gamma$ is complete. Let $\Gamma$ be complete and $s$ be a normal formula. We show that ${s}$ or $\neg{s}$ is in $E$. Let $m$, $n$ be such that $u_m={s}$ and $u_n=\neg{s}$. We consider $m<n$. (The case $m>n$ is symmetric.) If ${s}\in A_{n}$, we have ${s}\in E$. 
If ${s}\notin A_{n}$, then $A_n\cup\set{{s}}$ is not in $\Gamma$. Hence $A_n\cup\set{\neg{s}}$ is in $\Gamma$ since $\Gamma$ is complete. Hence $\neg{s}\in A_{n+1}\incl E$. \end{proof} \begin{thm}[Model Existence] \label{theo-acc-model-existence} Every member of an abstract consistency class has a model, which is surjective if the consistency class is complete. \end{thm} \begin{proof} Let $A\in\Gamma$ where $\Gamma$ is an abstract consistency class. By Lemma~\ref{lem:extension} we have an evident set $E$ such that $A\incl E$, where $E$ is complete if $\Gamma$ is complete. The claim follows with Theorem~\ref{theo-model-exist}. \end{proof} \section{Completeness} \label{sec:completeness} It is now straightforward to prove the completeness of the tableau calculus $\TS$. Let~\emph{$\GammaT$} be the set of all finite branches that are not refutable. \begin{lem} \label{lem:acc-completeness} $\GammaT$ is an abstract consistency class. \end{lem} \proof We have to show that $\GammaT$ satisfies the abstract consistency conditions. \begin{enumerate}[\AMat] \item[{\ADN}] Assume $\neg\neg s$ is in $A$ and $A\cup\{s\}\notin\GammaT$. Then we can refute $A$ using $\TRDN$. \item[{\AMat}] Assume $\{x\ddd s n,\neg x\ddd t n\}\subseteq A$ and $A\cup\{s_i\neq t_i\}\notin\GammaT$ for all $i\in\set{1\cld n}$. Then we can refute $A$ using \TRMat. \item[{\ADec}] Assume $x\ddd s n\neq_\alpha x\ddd t n$ is in $A$ and $A\cup\{s_i\neq t_i\}\notin\GammaT$ for all $i\in\set{1\cld n}$. Then we can refute $A$ using \TRDec. \item[{\ACon}] Assume $s=_\alpha t$ and $u\neq_\alpha v$ are in $A$ but $A\cup\{s\neq u,t\neq u\}$ and $A\cup\set{s\neq v,t\neq v}$ are not in $\GammaT$. Then we can refute $A$ using \TRCon. \item[{\ABQ}] Assume $s=_o t$ is in $A$, $A\cup\{s,t\}\notin\GammaT$ and $A\cup\{\neg s,\neg t\}\notin\GammaT$. Then we can refute $A$ using \TRBQ. \item[{\ABE}] Assume $s\neq_o t$ is in $A$, $A\cup\{s,\neg t\}\notin\GammaT$ and $A\cup\{\neg s,t\}\notin\GammaT$. Then we can refute $A$ using \TRBE. \item[{\AFQ}] Let $(s=_{\sigma\tau}t)\in A\in\GammaT$. Suppose $A\cup\set{\nf{su}{=}\nf{tu}}\notin\GammaT$ for some normal $u\in\Wff_\sigma$. Then $A\cup\set{\nf{su}{=}\nf{tu}}$ is refutable and so $A$ is refutable by $\TRFQ$. \item[{\AFE}] Let $(s{\neq}_{\sigma\tau}t)\in A\in\GammaT$. Suppose $A\cup\set{\nf{sx}{\neq}\nf{tx}}\notin\GammaT$ for every variable $x:\sigma$. Then $A\cup\set{\nf{sx}{\neq}\nf{tx}}$ is refutable for every $x:\sigma$. Hence $A$ is refutable using~\TRFE and the finiteness of $A$. Contradiction.\qed \end{enumerate} \begin{thm}[Completeness] \label{thm:completeness} Every unsatisfiable finite branch is refutable. \end{thm} \begin{proof} By contradiction. Let $A$ be an unsatisfiable finite branch that is not refutable. Then $A\in\GammaT$ and hence $A$ is satisfiable by Lemma~\ref{lem:acc-completeness} and Theorem~\ref{theo-acc-model-existence}. \end{proof} \section{Compactness and Countable Models} It is known~\cite{Henkin50,AndrewsBook} that simple type theory is compact and has the countable-model property. We take the opportunity to show how these properties follow from the results we already have. It is only for the existence of countable models that we make use of complete evident sets and complete abstract consistency classes. A branch $A$ is \emph{sufficiently pure} if for every type $\sigma$ there are infinitely many variables of type $\sigma$ that do not occur free in the formulas of $A$.
Let $\Gammacomp$ be the set of all sufficiently pure branches $A$ such that every finite subset of $A$ is satisfiable. We write \emph{$\fsubseteq$} for the finite subset relation. \begin{lem} \label{lem-aux-compactness} Let $A\in\Gammacomp$ and $\dd{B}n$ be finite branches such that $A\cup B_i\notin\Gammacomp$ for all $i\in\set{1\cld n}$. Then there exists a finite branch $A'\fsubseteq A$ such that $A'\cup B_i$ is unsatisfiable for all $i\in\set{1\cld n}$. \end{lem} \begin{proof} By the assumption, we have for every $i\in\set{1\cld n}$ a finite and unsatisfiable branch $C_i\incl A\cup B_i$. The branch $A':=(C_1\cup\dots\cup C_n)\cap A$ satisfies the claim. \end{proof} \begin{lem} \label{lem:acc-compactness} $\Gammacomp$ is a complete abstract consistency class. \end{lem} \begin{proof} We verify the abstract consistency conditions using Lemma~\ref{lem-aux-compactness} tacitly. \begin{enumerate}[\AMat] \item[{\ADN}] Assume $\neg\neg s$ is in $A$ and $A\cup\{s\}\notin\Gammacomp$. There is some $A'\fsubseteq A$ such that $A'\cup\{s\}$ is unsatisfiable. There is a model of $A'\cup\{\neg \neg s\}\fsubseteq A$. This is also a model of $A'\cup\{s\}$, contradicting our choice of $A'$. \item[{\AMat}] Assume $x\ddd s n$ and $\neg x\ddd t n$ are in $A$ and $A\cup\{s_i\neq t_i\}\notin\Gammacomp$ for all $i\in\set{1\cld n}$. There is some $A'\fsubseteq A$ such that $A'\cup\{s_i\neq t_i\}$ is unsatisfiable for all $i\in\set{1\cld n}$. There is a model $\mci$ of $A'\cup\{x\ddd s n,\neg x\ddd t n\}\fsubseteq A$. Since $\hat\mci (x\ddd s n) \neq \hat\mci (x\ddd t n)$, we must have $\hat\mci(s_i) \neq \hat\mci(t_i)$ for some $i\in\set{1\cld n}$ (and in particular $n$ must not be $0$). Thus $\mci$ models $A'\cup\{s_i\neq t_i\}$, contradicting our choice of~$A'$. \item[{\ADec}] Similar to $\AMat$ \item[{\ACon}] Assume $s=_\alpha t $ and $u\neq_\alpha v$ are in $A$, $A\cup\{s\neq u,t\neq u\}\notin\Gammacomp$ and $A\cup\{{s\neq v},t\neq v\}\notin\Gammacomp$. There is some $A'\fsubseteq A$ such that $A'\cup\{{s\neq u},t\neq u\}$ and $A'\cup\{s\neq v,t\neq v\}$ are unsatisfiable. There is a model $\mci$ of $A'\cup\{{s = t},{u\neq v}\}\fsubseteq A$. Since $\hat\mci(s) = \hat\mci(t)$ and $\hat\mci(u) \neq \hat\mci(v)$, we either have $\hat\mci(s) \neq \hat\mci(u)$ and $\hat\mci(t) \neq \hat\mci(u)$ or $\hat\mci(s) \neq \hat\mci(v)$ and $\hat\mci(t) \neq \hat\mci(v)$. Hence $\mci$ models either $A'\cup\{s\neq u,t\neq u\}$ or $A'\cup\{s\neq v,t\neq v\}$, contradicting our choice of $A'$. \item[{\ABQ}] Assume $s =_o t$ is in $A$, $A\cup\{s, t\}\notin\Gammacomp$ and $A\cup\{\neg s,\neg t\}\notin\Gammacomp$. There is some $A'\fsubseteq A$ such that $A'\cup\{s,t\}$ and $A'\cup\{\neg s,\neg t\}$ are unsatisfiable. There is a model of $A'\cup\{s =_o t\}\fsubseteq A$. This is also a model of $A'\cup\{s, t\}$ or $A'\cup\{\neg s,\neg t\}$. \item[{\ABE}] Assume $s\neq_o t$ is in $A$, $A\cup\{s,\neg t\}\notin\Gammacomp$ and $A\cup\{\neg s,t\}\notin\Gammacomp$. There is some $A'\fsubseteq A$ such that $A'\cup\{s,\neg t\}$ and $A'\cup\{\neg s,t\}$ are unsatisfiable. There is a model of $A'\cup\{s\neq_o t\}\fsubseteq A$. This is also a model of $A'\cup\{s,\neg t\}$ or $A'\cup\{\neg s,t\}$. \item[{\AFQ}] Assume $s =_{\sigma\tau} t $ is in $A$ but $A\cup \{\nf{su} =_\tau\nf{tu}\}$ is not in $\Gammacomp$ for some normal $u\in\Wff_\sigma$. There is some $A'\fsubseteq A$ such that $A'\cup\{\nf{su}=\nf{tu}\}$ is unsatisfiable. There is a model $\mci$ of $A'\cup\{s = t\}\fsubseteq A$. 
Since $\hat\mci(s) =\hat\mci(t)$, we know $\hat\mci(\nf{su}) = \hat\mci(su) = \hat\mci(s)\hat\mci(u) = \hat\mci(t)\hat\mci(u) = \hat\mci(tu) = \hat\mci(\nf{tu})$ using N4. Hence $\mci$ is a model of $A'\cup\{\nf{su}=\nf{tu}\}$, a contradiction. \item[{\AFE}] Assume $s\neq_{\sigma\tau} t $ is in $A$. Since $A$ is sufficiently pure, there is a variable $x:\sigma$ which does not occur in $A$. Assume $A\cup\{\nf{sx}\neq\nf{tx}\}\notin\Gammacomp$. There is some $A'\fsubseteq A$ such that $A'\cup\{\nf{sx}\neq\nf{tx}\}$ is unsatisfiable. There is a model $\mci$ of $A'\cup\{s\neq t\}\fsubseteq A$. Since $\hat\mci(s)\neq\hat\mci(t)$, there must be some $a\in\mci\sigma$ such that $\hat\mci(s)a\neq\hat\mci(t)a$. Since $x$ does not occur free in $A$, we know $\widehat{\mci^x_a}(sx)\neq\widehat{\mci^x_a}(tx)$ and $\mci^x_a$ is a model of $A'$. Since $\widehat{\mci^x_a}(\nf{sx}) =\widehat{\mci^x_a}(sx)$ and $\widehat{\mci^x_a}(\nf{tx}) =\widehat{\mci^x_a}(tx)$ by N4, we conclude $\mci^x_a$ is a model of $A'\cup\{\nf{sx}\neq\nf{tx}\}$, contradicting our choice of $A'$. \end{enumerate} We show the completeness of $\Gammacomp$ by contradiction. Let $A\in\Gammacomp$ and $s$ be a normal formula such that $A\cup\set{{s}}$ and $A\cup\set{\neg{s}}$ are not in $\Gammacomp$. Then there exists $A'\fsubseteq A$ such that $A'\cup\set{{s}}$ and $A'\cup\set{\neg{s}}$ are unsatisfiable. Contradiction, since $A'$ is satisfiable and every model of $A'$ satisfies either $s$ or $\neg s$. \end{proof} \begin{thm}\label{thm:compactness-ls} Let $A$ be a branch such that every finite subset of $A$ is satisfiable. Then $A$ has a countable model. \end{thm} \begin{proof} Without loss of generality we assume $A$ is sufficiently pure. Then $A\in\Gammacomp$. Hence $A$ has a countable model by Lemma~\ref{lem:acc-compactness} and Theorem~\ref{theo-acc-model-existence}. \end{proof} \section{EFO Fragment} We now turn to the EFO fragment of STT as first reported in~\cite{BrownSmolkaEFO}. The EFO fragment contains first-order logic and enjoys its usual properties. We will show completeness and compactness with respect to standard models. We will also prove that countable models for evident EFO sets exist. Suppose STT were given with $\neg$, $\limplies$, $=_\sigma$ and $\forall_{\!\sigma}$. Then the natural definition of EFO would restrict $=_\sigma$ and $\forall_{\!\sigma}$ to the case where $\sigma$ is a base type. To avoid redundancy our definition of EFO will also exclude the case where $\sigma = o$. Our definition of EFO assumes the logical constants $\neg:oo$, $\limplies:ooo$, $=_\alpha:\alpha\alpha o$ and $\forall_{\!\alpha}:(\alpha o)o$ where $\alpha$ ranges over sorts. We call these constants \emph{EFO constants}. For an assignment to be logical we require that it interprets the logical constants as usual. In particular, $\mci(\forall_{\!\alpha})$ must be the function returning $1$ iff its argument is the constant $1$ function. We say a term is \emph{EFO} if it only contains the logical constants $\neg$, $\limplies$, $=_\alpha$ and $\forall_{\!\alpha}$. Let \emph{$\EFO_\sigma$} be the set of EFO terms of type $\sigma$. A term is \emph{quasi-EFO} if it is EFO or of the form $s\not=_{\sigma} t$ where $s,t$ are EFO and $\sigma$ is a type. A branch $E$ is \emph{EFO} if every member of $E$ is quasi-EFO. The example tableau shown in Figure~\ref{fig:refutation} only contains EFO branches.
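To make the fragment boundary concrete, the following minimal Python sketch checks whether a term uses only EFO constants. The term representation and the string encoding of names (such as \texttt{"eq@alpha"}) are our own assumptions for this illustration, not notation from the calculus.
\begin{verbatim}
SORTS = {"alpha", "beta"}   # hypothetical sort names

def is_efo_name(name):
    # EFO constants: neg, imp, and eq/forall at sorts only; any name
    # that is not a logical constant is unrestricted.
    if name in ("neg", "imp"):
        return True
    head, _, ty = name.partition("@")
    if head in ("eq", "forall"):
        return ty in SORTS
    return True

def is_efo(term):
    # A term is EFO iff every name occurring in it passes is_efo_name.
    # Terms: a string (name), ("app", s, t), or ("lam", x, body).
    if isinstance(term, str):
        return is_efo_name(term)
    if term[0] == "app":
        return is_efo(term[1]) and is_efo(term[2])
    return is_efo(term[2])      # ("lam", x, body)

# Equality at a sort is EFO; equality at a function type is not.
print(is_efo(("app", ("app", "eq@alpha", "x"), "y")))    # True
print(is_efo(("app", ("app", "eq@alpha o", "f"), "g")))  # False
\end{verbatim}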
\begin{figure} \begin{mathpar} \inferrule*[left=\emph{\TRFDN}~]{\neg\neg s}{s} \and \inferrule*[left=\emph{\TRFBE}~]{s\neq_ot}{s\,,\,\neg t~\mid~\neg s\,,\,t} \and \inferrule*[left=\emph{\TRFImp}~]{s\limplies t}{\neg s\mid t} \and \inferrule*[left=\emph{\TRFImpN}~]{\neg(s\limplies t)}{s\,,\,\neg t} \\ \inferrule*[left=\emph{\TRFMat}~,right=~$n\geq 0$] {xs_1\dots s_n\,,\,\neg xt_1\dots t_n} {s_1\neq t_1\mid\dots\mid s_n\neq t_n} \and \inferrule*[left=\emph{\TRFDec}~,right=~$n\geq 0$] {xs_1\dots s_n\neq_\alpha xt_1\dots t_n} {s_1\neq t_1\mid\dots\mid s_n\neq t_n} \\ \inferrule*[left=\emph{\TRFFE}~,right=~$x:\sigma$ fresh] {s\neq_{\sigma\tau} t}{\nf{sx}\neq\nf{tx}} \and \inferrule*[left=\emph{\TRFCon}~] {s=_\alpha t\,,\,u\neq_\alpha v} {s\neq u\,,\,t\neq u\mid s\neq v\,,\,t\neq v} \\ \inferrule*[left=\emph{\TRFall}~,right=~$u\in\EFO_\alpha$ normal] {\forall_{\!\alpha} s}{\nf{su}} \and \inferrule*[left=\emph{\TRFalln}~,right=~$x:\alpha$ fresh] {\neg\forall_{\!\alpha} s}{\neg\nf{sx}} \end{mathpar} \caption{Tableau rules for EFO} \label{fig:rulesefo} \end{figure} The tableau rules in Figure~\ref{fig:rulesefo} define a tableau calculus $\TSF$ for EFO branches up to restrictions on applicability given in Section~\ref{sec:efo-complete}. After showing a model existence theorem, we will precisely define the tableau calculus $\TSF$ and prove it is complete for EFO branches. The completeness result will be with respect to standard models. For some fragments of EFO the tableau calculus $\TSF$ will terminate, yielding decidability results. \section{EFO Evidence and Compatibility} We say an EFO branch $E$ is evident if it satisfies the evidence conditions in Figure~\ref{fig:evidence} and the following additional conditions.\\ \renewcommand{\arraystretch}{1.4} \begin{tabular}{c>{\raggedright}p{120mm}} \emph{\EImp}&If $s\limplies t$ is in $E$, then $\neg s$ or $t$ is in $E$. \tabularnewline \emph{\EImpN}&If $\neg(s\limplies t)$ is in $E$, then $s$ and $\neg t$ are in $E$. \tabularnewline \emph{\Eall}&If $\forall_{\!\alpha} s$ is in $E$, then $\nf{su}$ is in $E$ for every $\alpha$-discriminating $u$ in $E$. \tabularnewline \emph{\Ealld}&If $\forall_{\!\alpha} s$ is in $E$, then $\nf{su}$ is in $E$ for some normal EFO term $u:\alpha$. \tabularnewline \emph{\Ealln}&If $\neg \forall_{\!\alpha} s$ is in $E$, then $\neg\nf{sx}$ is in $E$ for some variable $x$. \end{tabular}\\[2mm] We say an EFO branch $E$ is \emph{EFO-complete} if for all normal $s\in\EFO_o$ either $s\in E$ or $\neg s\in E$. The condition $\Eall$ is the usual condition for universal quantifiers with instantiations restricted to $\alpha$-discriminating terms. Since there may be no $\alpha$-discriminating terms in $E$, we also include the condition $\Ealld$ to ensure that at least one instantiation has been made. Without the condition $\Ealld$, the set $\{\forall_{\!\alpha} x.\neg(y\limplies y)\}$ would be evident. Let $E$ be an evident EFO branch. Compatibility can be defined exactly as in Section~\ref{sec:compat} and Lemma~\ref{lem-compatibility} holds. In the proof of Lemma~\ref{lem-efo-log-constants} below, we will need to know that if $E$ has some $\alpha$-discriminating term, then all $\alpha$-discriminants are nonempty. Since $\alpha$-discriminants are maximal sets of $\alpha$-discriminating terms, it is enough to prove every $\alpha$-discriminating term is compatible with itself. To be concrete, we must prove $s\not=_\alpha s$ is never in $E$. 
One way we could ensure this is to include it as an evidence condition and have a corresponding tableau rule of the form: \begin{mathpar} \inferrule*[left=\emph{\TRFBotD}~]{s\neq_\alpha s}{\,} \end{mathpar} This was the choice taken in~\cite{BrownSmolkaEFO}. One drawback to including the rule $\TRFBotD$ in the ground calculus is that a lifting lemma will be more difficult to show when one passes to a calculus with variables. Another alternative is to remove the restriction on instantiations in the rule $\TRFall$. If we do not restrict $\TRFall$ to discriminating terms, then we can show the existence of a model without knowing a priori that $\alpha$-discriminants are nonempty in the presence of $\alpha$-discriminating terms. In order to obtain a strong completeness result, we will not follow either of these alternatives. Instead we prove that all terms are compatible with themselves. First we prove EFO constants are compatible with themselves. \begin{lem} \label{efoconst-compat} For every EFO constant $c$, $c\comp c$. \end{lem} \begin{proof} Case analysis. $\neg\comp\neg$ follows from N3 and $\EDN$. $\to\comp\to$ follows from N3, $\EImp$ and $\EImpN$. $=_\alpha\comp =_\alpha$ follows from N3 and $\ECon$. We show $\forall_{\!\alpha}\comp \forall_{\!\alpha}$. Let $s \comp_{\alpha o} t$ be given. Assume $\forall s \ncomp \forall t$. Without loss of generality, assume $\nf{\forall s}$ and $\neg\nf{\forall t}$ are in $E$. By $\Ealln$ we have $\neg \nf{tx}$ in $E$ for some variable $x:\alpha$. By $\Ealld$ we have $\nf{su}$ in $E$ for some normal EFO term $u$. Since $su \ncomp_o tx$, we must have $u\ncomp_\alpha x$. In particular, $x$ must be an $\alpha$-discriminating term. By $\Eall$ we have $\nf{sx}$ is in $E$. Hence we must have $x\ncomp_\alpha x$, contradicting Lemma \ref{lem-compatibility}\,(2). \end{proof} Next we prove compatibility respects normalization. \begin{lem}\label{lem-norm-comp} For all $s,t:\sigma$, $s\comp_\sigma t$ iff $\nf{s}\comp_\sigma\nf{t}$. \end{lem} \begin{proof} Induction on types. At base types this follows from N1 and the definition of compatibility. Assume $\sigma$ is $\tau\mu$. Let $u\comp_\tau v$. By N2 and the inductive hypothesis (twice) we have $su\comp tv$ iff $\nf{su}\comp \nf{tv}$ iff $\nf{\nf{s}u}\comp \nf{\nf{t}v}$ iff $\nf{s}u\comp \nf{t}v$. Hence $s\comp t$ iff $\nf{s}\comp\nf{t}$. \end{proof} For two substitutions $\theta$ and $\phi$ we write \emph{$\theta \comp \phi$} when $\Dom\theta = \Dom\phi$, $\theta x \comp \phi x$ for every variable $x\in\Dom\theta$ and $\theta c \comp \phi c$ for every EFO constant $c\in\Dom\theta$. \begin{lem} \label{efotrm-compat-lema} For all $s\in\EFO_\sigma$, if $\theta\comp \phi$, then $\hat\theta s \comp \hat\phi s$. \end{lem} \begin{proof} By induction on $s$. Case analysis. \br $s$ is a variable or an EFO constant in $\Dom\theta$. The claim follows from $\theta\comp \phi$ and S1. \br $s$ is a variable not in $\Dom\theta$. The claim follows from S1 and Lemma \ref{lem-compatibility}\,(2). \br $s$ is an EFO constant not in $\Dom\theta$. The claim follows from S1 and Lemma~\ref{efoconst-compat}. \br $s=tu$. By inductive hypothesis $\hat\theta t \comp \hat\phi t$ and $\hat\theta u \comp \hat\phi u$. Hence $\hat\theta (tu) \comp \hat\phi (tu)$ using S2. \br $s=\lam{x}t$ where $x:\sigma$. Let $u\comp v$ be given. We will prove $(\hat\theta s)u\comp (\hat\phi s)v$. Using Lemma~\ref{lem-norm-comp} and S3 it is enough to prove ${\widehat{\subst\theta{x}{u}}t}\comp{\widehat{\subst\phi{x}{v}}t}$. 
This is the inductive hypothesis with $\subst{\theta}{x}{u}$ and $\subst{\phi}{x}{v}$. \end{proof} \begin{lem} \label{efotrm-compat-lemb} For all $s\in\EFO_\sigma$, $s\comp s$. \end{lem} \begin{proof} By Lemma~\ref{efotrm-compat-lema} we have $\hat{\emptyset}{s}\comp\hat{\emptyset}{s}$. We conclude $s\comp s$ using Lemma~\ref{lem-norm-comp} and S4. \end{proof} We can now prove $\alpha$-discriminants are nonempty if $E$ has some $\alpha$-discriminating term. \begin{lem} \label{discr-nonempty} If $a$ is an $\alpha$-discriminant and $E$ has an $\alpha$-discriminating term, then $a$ is nonempty. \end{lem} \begin{proof} Let $s$ be $\alpha$-discriminating. We know $s\comp s$ by Lemma~\ref{efotrm-compat-lemb} and so $\{s\}$ is compatible. If $a$ is empty, then $a\cup\{s\}$ is compatible, contradicting maximality of $a$. \end{proof} \section{EFO Model Construction} Let $E$ be an evident EFO branch. We inductively define a standard frame $\mcd$. \begin{align*} \N{\mcd o}& = \set{0,1} \\ \N{\mcd\alpha}& = \mset{a}{a \text{ is an $\alpha$-discriminant}} \\ \N{\mcd(\sigma\tau)}& = \mcd\sigma \to \mcd\tau \end{align*} We define a value system $\canbe$ as for STT, but extend it to higher types using full function spaces. \begin{align*} \N{s \canbe_o 0}&\iffdef s\in\Wff_o \text{ and } \nf{s}\notin E\\ \N{s \canbe_o 1}&\iffdef s\in\Wff_o \text{ and } \neg\nf{s}\notin E\\ \N{s\canbe_\alpha a}\!&\iffdef s\in\Wff_\alpha,~a \text{ is an $\alpha$-discriminant, and } \nf{s}\in a \text{ if } \nf{s} \text{ is discriminating}\\ \N{\canbe_{\sigma\tau}}&\eqdef\mset{(s,f)\in\Wff_{\sigma\tau}\times(\mcd\sigma\to\mcd\tau)} {\forall(t,a)\in\canbe_\sigma\col~(st,fa)\in\canbe_\tau} \end{align*} In spite of the slightly different construction, many of the previous results still hold with essentially the same proofs as before. \begin{prop} \label{efo-prop-norm-poss-value} $s\canbe_\sigma a$ iff $\nf{s}\canbe_\sigma a$. \end{prop} \begin{proof} Similar to Proposition~\ref{prop-norm-poss-value}. \end{proof} \begin{lem} \label{efo-lem-admissibility} Let $\mci$ be an assignment into $\mcd$ such that $x \canbe \mci x$ for all names $x$ and $\theta$ be a substitution such that $\theta{x}\canbe\mci{x}$ for all $x\in\Dom\theta$. Then $s\in\Dom\hat\mci$ and $\hat\theta{s}\canbe\hat\mci{s}$ for every term $s$. \end{lem} \begin{proof} Similar to Lemma~\ref{lem-admissibility}. \end{proof} \begin{thm} \label{theo-efo-admissible-interpretations} Let $\mci$ be an assignment into $\mcd$ such that $x \canbe \mci x$ for all names $x$. Then $\mci$ is an interpretation such that $s\canbe\hat\mci s$ for all terms $s$. \end{thm} \begin{proof} Follows from Proposition~\ref{efo-prop-norm-poss-value}, Lemma~\ref{efo-lem-admissibility} and property S4. \end{proof} \begin{lem} \label{lemma-efo-adm-model} A logical assignment $\mci$ is a model of $E$ if $x\canbe \mci x$ for every name $x$. \end{lem} \begin{proof} Similar to Lemma~\ref{lemma-adm-model} using Theorem~\ref{theo-efo-admissible-interpretations}. \end{proof} \begin{lem}[Common Value] \label{lem-efo-common-value} Let $T\incl\Wff_\sigma$. Then $T$ is compatible if and only if there exists a value~$a$ such that $T\canbe_\sigma a$. \end{lem} \begin{proof} Similar to Lemma~\ref{lem-common-value}. \end{proof} \begin{lem}[Admissibility] \label{lemma-efo-inhabitation} For every variable $x:\sigma$ there is some $a\in\mcd\sigma$ such that $x\canbe a$. \end{lem} \begin{proof} Similar to Lemma~\ref{lemma-inhabitation} using Lemma~\ref{lem-compatibility} and Lemma~\ref{lem-efo-common-value}.
\end{proof} \begin{lem}[Functionality] \label{lem-efo-functionality} If $s\canbe_\alpha a$, $t\canbe_\alpha b$, and $(s{=}t)\in E$, then $a=b$. \end{lem} \begin{proof} Similar to Lemma~\ref{lem-functionality} restricted only to sorts. \end{proof} As before $\mcl(c)$ is the canonical interpretation for each logical constant $c$. We now have the additional logical constants $\limplies$ and $\forall_{\!\alpha}$: \begin{align*} \N{\mcl({\limplies})}&\eqdef\lam{a{\in}\mcd o}{~\lam{b{\in}\mcd o}{~\Cond{a{=}1}b1}}\\ \N{\mcl({\forall_{\!\alpha}})}&\eqdef\lam{f{\in}\mcd\alpha\to\mcd o}{~\Cond{f = (\lam{x\in\mcd\alpha}{~1})}10} \end{align*} \begin{lem}[Logical Constants] \label{lem-efo-log-constants} $c\canbe\mcl(c)$ for every logical constant $c$. \end{lem} \proof Similar to Lemma~\ref{lem-log-constants}. The proof for $\neg$ is the same. The proof for $\limplies$ uses N3, $\EImp$ and $\EImpN$. The proof for $=_\sigma$ requires a slight modification. Assume $s\canbe_\sigma a$, $t\canbe_\sigma b$, and $(s{=_\sigma}t)\ncanbe\mcl(=_\sigma)ab$. Case analysis. \begin{enumerate}[$\bullet$] \item $a=b$. Use Lemmas~\ref{lem-efo-common-value} and~\ref{lem-compatibility}\,(1). \item $a\neq b$. Then $(\nf{s}{=}\nf{t})\in E$ and so $\sigma$ must be a sort $\alpha$ since $E$ is EFO. This contradicts Lemma~\ref{lem-efo-functionality}. \end{enumerate} Finally, we prove $\forall_{\!\alpha}\canbe\mcl(\forall_{\!\alpha})$. Assume $s\canbe_{\alpha o} f$ and $\forall_{\!\alpha} s\ncanbe_o \mcl(\forall_{\!\alpha}) f$. Case analysis. \begin{enumerate}[$\bullet$] \item $\mcl(\forall_{\!\alpha}) f=1$. Then $\neg\nf{\forall_{\!\alpha} s} \in E$ and so by N3, $\Ealln$ and N2 we have $\neg\nf{sx}\in E$ for some variable $x:\alpha$. We know $\{x\}$ is compatible by Lemma~\ref{lem-compatibility}\,(2) and so by Lemma~\ref{lem-efo-common-value} there is some $a\in\mcd\alpha$ such that $x\canbe a$. Thus $sx\canbe 1$, contradicting $\neg\nf{sx}\in E$. \item $\mcl(\forall_{\!\alpha}) f=0$. Then $\nf{\forall_{\!\alpha} s}\in E$ and there is some $a\in\mcd\alpha$ such that $fa = 0$. Suppose there are no $\alpha$-discriminating terms. In this case $a$ is empty and $u\canbe a$ for any $u\in\Wff_\alpha$. By N3, $\Ealld$ and N2 we have $\nf{su}\in E$ for some normal EFO term $u$. Hence $su\ncanbe 0$, contradicting $s\canbe f$ and $u\canbe a$. Next suppose there are $\alpha$-discriminating terms. In this case there is some $u\in a$ by Lemma~\ref{discr-nonempty}. By N3, $\Eall$ and N2 we know $\nf{su}\in E$. In this case we also have $su\ncanbe 0$, again contradicting $s\canbe f$ and $u\canbe a$.\qed \end{enumerate} \begin{thm}[EFO Model Existence] \label{thm:efo-model-exist} Every evident EFO branch has a standard model. Every EFO-complete evident EFO branch has a standard model where each $\mcd\alpha$ is countable. Every finite evident EFO branch has a finite standard model. \end{thm} \begin{proof} We use the frame $\mcd$ and relation $\canbe$ defined above. We give an assignment $\mci$ into $\mcd$. For each variable $x$ we can choose $\mci x$ such that $x\canbe \mci x$ using Lemma~\ref{lemma-efo-inhabitation}. For each logical constant $c$ we choose $\mci c = \mcl (c)$. By Lemma~\ref{lem-efo-log-constants} we know $c\canbe \mci c$. $\mci$ is a model of $E$ by Lemma~\ref{lemma-efo-adm-model}. Suppose $E$ is EFO-complete. We prove there are only countably many $\alpha$-discriminants as follows. If there are no $\alpha$-discriminating terms, then $\emptyset$ is the only $\alpha$-discriminant.
Otherwise, every $\alpha$-discriminant is nonempty by Lemma~\ref{discr-nonempty}. For each $\alpha$-discriminant $a$, choose some $s_a\in a$. We prove the function mapping $a$ to $s_a$ is injective. Assume $a,b\in\mcd\alpha$ and $a\not=b$. By EFO-completeness of $E$ and Proposition~\ref{prop-diff-discs} we must have $s_a\not= s_b \in E$. If $s_a$ and $s_b$ were the same term, then $E$ would be unsatisfiable. Hence $s_a$ and $s_b$ are different terms. Finally, if $E$ is finite, then for each sort $\alpha$ there will be only finitely many $\alpha$-discriminants (by Proposition~\ref{prop-finite-discs}) and hence $\mcd\sigma$ will be finite for all $\sigma$. \end{proof} \section{EFO Completeness}\label{sec:efo-complete} Let $\N{\TSF}$ be the tableau calculus given by taking all the rules from Figure~\ref{fig:rulesefo} subject to the following restrictions. \begin{enumerate}[$\bullet$] \item If $(s{\neq} t)$ is on a branch $A$, then $\TRFFE$ can only be applied if there is no variable $x$ such that $(\nf{sx}\neq\nf{tx})\in A$. \item If $\neg\forall_{\!\alpha} s$ is on a branch $A$, then $\TRFalln$ can only be applied if there is no variable $x:\alpha$ such that $\neg\nf{sx}\in A$. \item If $\forall_{\!\alpha} s$ is on a branch $A$ and there are $\alpha$-discriminating terms in $A$, then $\TRFall$ can only be applied with these $\alpha$-discriminating terms. \item If $\forall_{\!\alpha} s$ is on a branch $A$, $\nf{su}\notin A$ for all normal $u\in\Wff_\alpha$, some variable $x:\alpha$ occurs free in $A$ and there are no $\alpha$-discriminating terms in $A$, then $\TRFall$ can only be applied with a variable $x:\alpha$ occurring free in $A$. \item If $\forall_{\!\alpha} s$ is on a branch $A$, $\nf{su}\notin A$ for all normal $u\in\Wff_\alpha$, no variable $x:\alpha$ occurs free in $A$ and there are no $\alpha$-discriminating terms in $A$, then $\TRFall$ can only be applied with a variable $x:\alpha$. \end{enumerate} The idea behind the restrictions on $\TRFall$ is that only $\alpha$-discriminating terms should be used as instantiations, except when there are no $\alpha$-discriminating terms. In case there are no $\alpha$-discriminating terms, at most one new variable $x:\alpha$ will be used as an instantiation term for each sort $\alpha$. These restrictions will ensure that $\TSF$ terminates when given branches in certain fragments of EFO. From now on we use the term \emph{refutable} to refer to refutability in the calculus $\TSF$. That is, the set of \emph{refutable branches} is the least set such that if $A/\ddd A n$ is an instance of a rule of~$\TSF$ and $\dd A n$ are refutable, then $A$ is refutable. The proof of soundness of $\TS$ (see Proposition~\ref{prop:ts-sound}) extends to show soundness of $\TSF$. \begin{prop}[Soundness of $\TSF$] \label{prop-e-soundness}~ Every refutable branch is unsatisfiable. \end{prop} An EFO abstract consistency class is a set $\Gamma$ of EFO branches such that every branch $A\in\Gamma$ satisfies the conditions in Figure~\ref{fig:abs-consistency} and also the following conditions:\\ \begin{tabular}{c>{\raggedright}p{120mm}} \emph{\AImp}&If $s\limplies t$ is in $A$, then $A\cup\set{\neg s}$ or $A\cup\set{t}$ is in $\Gamma$. \tabularnewline \emph{\AImpN}&If $\neg(s\limplies t)$ is in $A$, then $A\cup\set{s,\neg t}$ is in $\Gamma$. \tabularnewline \emph{\Aall}&If $\forall_{\!\alpha} s$ is in $A$, then $A\cup\set{\nf{su}}$ is in $\Gamma$ for every $\alpha$-discriminating~$u$ in~$A$. 
\tabularnewline \emph{\Aalld}&If $\forall_{\!\alpha} s$ is in $A$, then $A\cup\set{\nf{su}}$ is in $\Gamma$ for some normal EFO term $u\in\Wff_\alpha$. \tabularnewline \emph{\Aalln}&If $\neg \forall_{\!\alpha} s$ is in $A$, then $A\cup\set{\neg\nf{sx}}$ is in $\Gamma$ for some variable $x$. \end{tabular}\\[2mm] We say an EFO abstract consistency class $\Gamma$ is \emph{EFO-complete} if for all $A\in\Gamma$ and all normal $s\in\EFO_o$ either $A\cup\{s\}\in \Gamma$ or $A\cup\{\neg s\}\in \Gamma$. Let~\emph{$\GammaTEFO$} be the set of all finite EFO branches that are not refutable. \begin{lem} \label{lem:acc-efo-completeness} $\GammaTEFO$ is an EFO abstract consistency class. \end{lem} \proof Similar to Lemma~\ref{lem:acc-completeness}. We only check the new conditions: $\AImp$, $\AImpN$, $\Aall$, $\Aalld$ and $\Aalln$. \begin{enumerate}[\AImpN] \item[{\AImp}] Let $s\limplies t\in A\in \GammaTEFO$. Suppose $A\cup\{\neg s\}\notin\GammaTEFO$ and $A\cup\{t\}\notin\GammaTEFO$. By $\TRFImp$ we have $A$ is refutable. Contradiction. \item[{\AImpN}] If $\neg(s\to t)\in A$ and $A\cup\{s,\neg t\}\notin\GammaTEFO$, then $A\notin\GammaTEFO$ using the rule $\TRFImpN$. \item[{\Aall}] Let $\forall_{\!\alpha} s\in A\in\GammaTEFO$. Suppose $A\cup\{\nf{su}\}\notin\GammaTEFO$ for some normal $\alpha$-discriminating $u$. Then $A\cup\{\nf{su}\}$ is refutable. Hence $A$ can be refuted using \TRFall (with the restriction). \item[{\Aalld}] Let $\forall_{\!\alpha} s\in A\in\GammaTEFO$. If there is some $\alpha$-discriminating term, then $\Aalld$ follows from $\Aall$. Assume there are no $\alpha$-discriminating terms and $A\cup\{\nf{su}\}\notin\GammaTEFO$ for all normal $u\in\EFO_\alpha$. In particular, $\nf{su}\notin A$ for all normal $u\in\EFO_\alpha$. Choose a variable $x:\alpha$ occurring free in $A$ (or any variable $x:\alpha$ if none occurs free in $A$). Since $A\cup\{\nf{sx}\}\notin\GammaTEFO$, $A\cup\{\nf{sx}\}$ is refutable. Using $\TRFall$ (with the restriction), $A$ is refutable. Contradiction. \item[{\Aalln}] Let $\neg\forall_{\!\alpha} s\in A\in\GammaTEFO$. Suppose $A\cup\set{\neg\nf{sx}}\notin\GammaTEFO$ for every variable $x:\alpha$. Let $x:\alpha$ be fresh for $A$. Then $A\cup\set{\neg\nf{sx}}$ is refutable and so $A$ can be refuted using $\TRFalln$.\qed \end{enumerate} \begin{lem}[EFO Extension Lemma] \label{lem:efo-extension} Let $\Gamma$ be an EFO abstract consistency class and $A\in\Gamma$. Then there exists an evident EFO branch $E$ such that $A\subseteq E$. Moreover, if $\Gamma$ is EFO-complete, an EFO-complete evident EFO branch $E$ exists such that $A\subseteq E$. \end{lem} \begin{proof} Similar to Lemma~\ref{lem:extension}. Instead of using an enumeration of all normal formulas, we use an enumeration of all normal EFO formulas. The proof goes through when one makes some obvious modifications. \end{proof} \begin{thm}[EFO Completeness] \label{thm:efo-completeness} Every finite EFO branch is either refutable or has a standard model. \end{thm} \begin{proof} Follows from Lemma~\ref{lem:acc-efo-completeness}, Lemma~\ref{lem:efo-extension} and Theorem~\ref{thm:efo-model-exist}. \end{proof} We now turn to compactness and the existence of countable models. Let $\GammacompEFO$ be the set of all sufficiently pure EFO branches $A$ such that every finite subset of $A$ has a standard model. \begin{lem} \label{lem:acc-efo-compactness} $\GammacompEFO$ is an EFO-complete EFO abstract consistency class. \end{lem} \begin{proof} Similar to Lemma~\ref{lem:acc-compactness}.
\end{proof} \begin{thm}\label{thm:efo-compactness-ls} Let $A$ be a branch such that every finite subset of $A$ has a standard model. Then $A$ has a standard model where $\mcd\alpha$ is countable for all sorts $\alpha$. \end{thm} \begin{proof} Similar to Theorem~\ref{thm:compactness-ls}. \end{proof} \begin{cor}\label{cor:efo-stdsat} Let $A$ be a satisfiable EFO branch. Then $A$ has a standard model where $\mcd\alpha$ is countable for all sorts $\alpha$. \end{cor} \begin{proof} To apply Theorem~\ref{thm:efo-compactness-ls} we only need to show every finite subset of $A$ has a standard model. Let $A'$ be a finite subset of $A$. Since $A'$ is satisfiable, $A'$ is not refutable by Proposition~\ref{prop-e-soundness}. By Theorem~\ref{thm:efo-completeness} $A'$ has a standard model. \end{proof} \section{Decidable EFO Fragments} Given the completeness result for the tableau calculus $\TSF$ (Theorem~\ref{thm:efo-completeness}), we can show a fragment of EFO is decidable by proving $\TSF$ terminates on branches in the fragment. We will use this technique to argue decidability of three fragments: \begin{enumerate}[$\bullet$] \item The \emph{$\lambda$-free fragment}, which is EFO without $\lambda$-abstraction. \item The \emph{pure fragment}, which consists of disequations $s\neq t$ where no name used in $s$ and $t$ has a type that contains $o$. \item The \emph{BSR fragment (Bernays-Sch\"onfinkel-Ramsey)}, which consists of relational first-order $\exists^*\forall^*$-formulas with equality. \end{enumerate} \begin{prop}[Verification Soundness] \label{prop-verif-soundness} Let $A$ be a finite EFO branch that is not closed and cannot be extended with $\TSF$. Then $A$ is evident and has a finite model. \end{prop} \begin{proof} Checking $A$ is evident is easy. The existence of a finite model follows from Theorem~\ref{thm:efo-model-exist}. \end{proof} We now have a general method for proving decidability of satisfiability within a fragment. \begin{prop} Let $\TSF$ terminate on a set $\Delta$ of finite EFO branches. Then satisfiability of the branches in $\Delta$ is decidable and every satisfiable branch in $\Delta$ has a finite model. \end{prop} \begin{proof} Follows with Propositions~\ref{prop-e-soundness} and~\ref{prop-verif-soundness} and Theorem~\ref{thm:efo-model-exist}. \end{proof} The decision procedure depends on the normalization operator employed with $\TSF$. A normalization operator that yields $\beta$-normal forms suffices for all the termination results proven in this section. Note that the tableau calculus applies the normalization operator only to applications $st$ where $s$ and $t$ are both normal and $t$ has type $\alpha$ (for some sort $\alpha$) if it is not a variable. Hence at most one $\beta$-reduction is needed for normalization if $s$ and $t$ are $\beta$-normal. Moreover, no $\alpha$-renaming is needed if the bound variables are chosen differently from the free variables. For clarity, we continue to work with an abstract normalization operator and state further conditions as they are needed. \subsection{Lambda-Free Formulas} In~\cite{BrownSmolkaBasic} we study lambda- and quantifier-free EFO and show that the corresponding subsystem of $\TSF$ terminates on finite branches. The result extends to lambda-free branches containing quantifiers (e.g., $\set{\forall_{\!\alpha} f}$). \begin{prop}[Lambda-Free Termination] Let the normalization operator satisfy\lmcs{\linebreak} $\nf{s}=s$ for every lambda-free EFO term $s$. Then $\TSF$ terminates on finite lambda-free branches.
\end{prop} \begin{proof} An application of \TRFFE disables a disequation $s{\neq_{\sigma\tau}}t$ and introduces new subterms as follows: a variable $x:\sigma$, two terms $sx:\tau$ and $tx:\tau$, and the formula $sx{\neq}tx$. The types of the new subterms are smaller than the type of $s$ and $t$, and the new subterms introduced by the other rules always have type $o$ or $\alpha$. For each branch, consider the multiset of types $\sigma\tau$ where $s,t:\sigma\tau$ are subterms of formulas on the branch but there is no $x:\sigma$ such that $sx\neq tx$ is on the branch. By considering the multiset ordering, we see that no derivation can employ \TRFFE infinitely often. Let $A\to A_1\to A_2\to\cdots$ be a possibly infinite derivation that issues from a finite lambda-free branch and does not employ \TRFFE. It suffices to show that the derivation is finite. Consider the new variables $x:\alpha$ which may be introduced by the $\TRFall$ and $\TRFalln$ rules. For each subterm $\forall_{\!\alpha} s$ at most one new variable will be introduced by these rules. Since the branches are lambda-free, no rule creates new subterms of the form $\forall_{\!\alpha} s$. Hence only finitely many new variables of type $\alpha$ are introduced. Let $A_n$ be a branch in the sequence such that no new variables are introduced after this point. Let $S_\sigma$ be the set of all subterms of type $\sigma$ of the formulas in $A_n$. Let $B$ be the union of the three finite sets $S_o$, $\mset{\neg s}{s\in S_o}$ and $\mset{s\not=_\sigma t}{s,t\in S_\sigma}$. Every branch $A_m$ with $m\geq n$ can only contain members of $B$. Hence the derivation is finite. \end{proof} \subsection{Pure Disequations} A type is \emph{pure} if it does not contain $o$. A term is \emph{pure} if the type of every name occurring in it (bound or free) is pure. An equation $s=t$ or disequation $s\neq t$ is \emph{pure} if $s$ and $t$ are pure terms. We add a new property of normalization in order to prove termination. \begin{description} \item[{N5}] The least relation $\succ$ on terms such that \begin{enumerate}[(1)] \item ${a\ddd s n}\succ{s_i}$ \ if $i\in\set{1\cld n}$ \item $s\succ\nf{sx}$ \ if $s:\sigma\tau$ and $x:\sigma$ \end{enumerate} terminates on normal terms. \end{description} \begin{prop}[Pure Termination] Let the normalization operator satisfy N5. Then $\TSF$ terminates on finite branches containing only pure disequations. \end{prop} \begin{proof} Let $A\to A_1\to A_2\to\cdots$ be a possibly infinite derivation that issues from a finite branch containing only pure disequations. Then no other rules but possibly \TRFDec and \TRFFE apply and thus no $A_i$ contains a formula that is not a pure disequation (using S5). Using N5 it follows that the derivation is finite. \end{proof} \subsection{Bernays-Sch\"onfinkel-Ramsey Formulas} It is well-known that the satisfiability of Bernays-Sch\"onfinkel-Ramsey formulas (relational first-order $\exists^*\forall^*$-prenex formulas with equality) is decidable and the fragment has the finite model property~\cite{BGG97}. We reobtain this result by showing that $\TSF$ terminates for the respective fragment. We call a type \emph{BSR} if it is $\alpha$ or $o$ or has the form $\alpha_1\dots\alpha_n o$. We call an EFO formula $s$ \emph{BSR} if it satisfies two conditions: \begin{enumerate} \item The type of every variable that occurs in $s$ is BSR. \item $\forall_{\!\alpha}$ does not occur below a negation or an implication in $s$.
\end{enumerate} Note that every subterm of a BSR formula that has type $\alpha$ is a variable. For simplicity, our BSR formulas do not provide for outer existential quantification. We need one more condition for the normalization operator: \begin{description} \item[{N6}] If $s:\alpha o$ is BSR and $x:\alpha$, then $\nf{sx}$ is BSR. \end{description} \begin{prop}[BSR Termination] Let the normalization operator satisfy N5 and~N6. Then $\TSF$ terminates on finite branches containing only BSR formulas. \end{prop} \begin{proof} Let $A\to A_1\to A_2\to\cdots$ be a possibly infinite derivation that issues from a finite branch containing only BSR formulas. Then \TRFalln and \TRFFE are not applicable and all $A_i$ contain only BSR formulas (using N6). Furthermore, for each sort $\alpha$ used in $A$ at most one new variable of sort $\alpha$ is introduced (by the restriction on $\TRFall$ in $\TSF$). Since all terms of sort $\alpha$ are variables, there is only a finite supply. Using N5 it follows that the derivation is finite. \end{proof} \section{Conclusion} In this paper we have studied a complete cut-free tableau calculus for simple type theory with primitive equality (STT). For the first-order fragment of STT (EFO) we have shown that the tableau system is complete with respect to standard models. Our development demonstrates that first-order logic can be treated naturally as a fragment of STT. For the EFO fragment we gave an interesting restriction on instantiations. In particular, one can restrict most instantiations of sort $\alpha$ to be $\alpha$-discriminating terms. Such a restriction can also be included in the tableau calculus for STT without sacrificing completeness. Confining instantiations to $\alpha$-discriminating terms is a serious restriction since each finite branch has only finitely many such terms. Automated theorem proving would be a natural application of the tableau calculi presented here. When designing a search procedure one often starts with a complete ground calculus (like our tableau calculi $\TS$ and $\TSF$), then extends this to include metavariables to be instantiated during search, and finally proves a lifting lemma showing that the tableaux with metavariables can simulate a refutation in the ground calculus. A design principle of our calculi $\TS$ and $\TSF$ is that none of the rules look deeply into the structure of any formula on the branch. For example, consider the mating rule \begin{mathpar} \inferrule*[right=~$n\geq 0$] {xs_1\dots s_n\,,\,\neg xt_1\dots t_n} {s_1\neq t_1\mid\dots\mid s_n\neq t_n} \end{mathpar} To check if this rule applies to two formulas $s,t$ on the branch $A$, one only needs to check if $s$ has a variable $x$ at the head and if $t$ is the negation of a formula with $x$ at the head. When trying to prove a lifting lemma, we would need to show how the calculus with metavariables could simulate the mating rule. This may involve partially instantiating metavariables to expose the head $x$ in the counterpart to $s$ or the negation and the head $x$ in the counterpart to $t$. On the other hand, suppose our ground calculus included a rule to close branches with a formula of the form $s\not= s$. To simulate this in the calculus with metavariables we would need to know if some instantiation for the metavariables can yield a formula of the form $s\not=s$. In the worst case this is a problem requiring full higher-order unification.
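To illustrate the contrast, here is a minimal Python sketch of the mating-rule applicability check (with a hypothetical curried-pair representation of terms; an illustration, not part of the calculi): unlike unification, it merely uncovers the head of each application spine and pairs up the arguments.
\begin{verbatim}
def spine(term):
    # Split a curried application into (head, [args]); a term is a
    # string (a name) or a pair (function, argument).
    args = []
    while isinstance(term, tuple):
        term, arg = term
        args.append(arg)
    return term, args[::-1]

def mating_applicable(s, t, variables):
    # If s is x s1...sn and t is neg (x t1...tn) for a variable x,
    # return the branch alternatives [(s1, t1), ..., (sn, tn)].
    if not (isinstance(t, tuple) and t[0] == "neg"):
        return None
    h1, args1 = spine(s)
    h2, args2 = spine(t[1])
    if h1 == h2 and h1 in variables and len(args1) == len(args2):
        return list(zip(args1, args2))
    return None

# x a b and neg (x c d): applicable, yielding a != c | b != d
print(mating_applicable((("x", "a"), "b"),
                        ("neg", (("x", "c"), "d")), {"x"}))
\end{verbatim}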
We have been careful to only include rules in our calculi which will not require arbitrary instantiations of metavariables to prove a lifting lemma. Formulating such a calculus with metavariables and proving such a lifting lemma is left for future work.
\section{Introduction} Every day and everywhere, various events are reported in the form of text, and many of these reports do not present locations in a hierarchical, standardized form. In context-aware text, location is a fundamental component that supports a wide range of applications, so we need to normalize locations to process massive amounts of text effectively in specific scenarios. Since text streams in social media move more quickly during accident or disaster response~\cite{munro2011subword}, location normalization is crucial for situational awareness in these fields, where the elliptical writing style often omits seemingly redundant content. For example, ``\specialcell{十陵立交路段交通拥堵} (Traffic congestion at Shiling Interchange)'' refers to a definite location, but there is no indication of where the Shiling Interchange is, so an exact response is impossible unless we know that it belongs to Longquanyi district, Chengdu city, Sichuan province. Countries are divided into different units to manage their land and the affairs of their people more easily. An administrative division (AD) is a portion of a country or other region delineated for the purpose of administration. Due to China's large population and area, the administrative divisions of China have consisted of several levels since ancient times. For clarity and convenience, we cover three levels in our system and treat the largest administrative division of a country as 1st-level and the next subdivisions as 2nd-level and 3rd-level, which match the provincial level (province, autonomous region, municipality, and special administrative region), the prefecture-level city, and the county in China, as shown in Table~\ref{fig:administrative divisions ch}. China administers more than 3,200 divisions across these flattened levels. In such a large and complex hierarchy, much existing work stops at extracting the relevant locations, such as named entity tagging~\cite{srihari2000hybrid}. There are many similar named entity recognition (NER) toolkits~\cite{che2010ltp, finkel2005incorporating} for location extraction. As the ambiguity of location names is very high, \citet{li2002location} and \citet{al2017location} extend extraction to the disambiguation of locations. We take a step further: we extract normalized information and determine which of the three hierarchical administrative areas a document mainly describes. \begin{table*} \centering \includegraphics[width=12cm,height=6cm]{administrative_divisions_ch.pdf} \caption{Structural hierarchy of the administrative divisions.} \label{fig:administrative divisions ch} \end{table*} The challenges in our location normalization are somewhat different, lying mainly in ambiguity and explicit absence. For example, there is a duplicate Chaoyang district at the 3rd level in both Beijing and Changchun city, and ``Chaoyang'' also means the rising sun in Chinese, which may cause ambiguity. If ``Beijing'' and ``Chaoyang'' are mentioned in the same context, we can be confident that ``Chaoyang'' refers to the district of Beijing city. Similarly, \citet{yarowsky1995unsupervised} proposes a corpus-based unsupervised approach that avoids the need for costly hand-tagged training data. However, it is common that a context lacks enough co-occurrences of ADs to disambiguate, or that the explicit information is missing entirely. We refer to this as the explicit absence problem; neither NER nor disambiguation can handle it unless more hidden information is exploited. There are many specific AD-related cues that identify which division is meant, including: \begin{itemize} \setlength{\itemsep}{1pt} \item Location aliases, e.g.
``\specialcell{鹏城} (Pengcheng)'' is an alias of Shenzhen city;
\item Old or customary names, e.g. ``\specialcell{老闸北} (Old Zhabei)'' is a municipal district that once existed in Shanghai city;
\item Phrases about region-bound events, e.g. ``\specialcell{中国国际徽商大会} (China Huishang Conference)'' has been held in Hefei city;
\item Some POIs (points of interest), e.g. the well-known ``\specialcell{颐和园} (Summer Palace)'' is situated in the northwestern suburbs of Beijing.
\end{itemize}
We summarize these cues under a concept named ROI, which is similar to yet distinct from POI. A POI dataset collects specific location points that someone may find useful or interesting, and maps each to a detailed address that covers the administrative division. However, many POIs build only a uni-directional association with an AD. For example, Bank of China, a common POI, has branches all over China: we can find many Bank of China branches within a specific AD, but if only ``Bank of China'' appears in a context, we cannot directly confirm its location without further area information. Since a POI is inherently uncertain, we propose the concept of ROI, which has a bi-directional association with an AD: given an ROI mapped to a fixed hierarchical administrative area, the ROI represents that area with high confidence, and the area definitely contains it. In the absence of explicit patterns, an ROI co-occurring in the context is good evidence for predicting the most likely administrative area. The main contributions of the system, which can be applied to other languages as well, are as follows:

\begin{figure*} \centering \includegraphics[width=16cm,height=4cm]{demo.pdf} \caption{User interface of ROIBase.} \label{fig:demo1} \end{figure*}

\begin{enumerate} \setlength{\itemsep}{1pt}
\item We provide a structured AD database and use co-occurrence constraints over it to make decisions;
\item ROIBase is equipped with geographic embeddings trained on special location sequences to make inferences;
\item We use a large news corpus to build a knowledge base made up of ROIs, which aids normalization.
\end{enumerate}

\section{User Interface}
We provide a web-based online demo~\footnote{\url{http://research.dylra.com/v/roibase/}} to showcase location normalization. As shown in Figure~\ref{fig:demo1}, there are three cases separated by blue lines, and each case contains two main components: query and result.

\textbf{Query} \ The user enters a document into the textbox with a green border to query ROIBase. The query accepts Chinese sentences, such as text from news or social media.

\textbf{Result} \ After the query is submitted, the structured result from ROIBase is shown to the right of the textbox. The result consists of three parts: \textit{Confidence}, \textit{Inference} and \textit{ROI}.

\textit{Confidence} represents the result that can be extracted and identified from explicit information. For example, we can confidently fill in ``\specialcell{新疆} (Xinjiang)'' when ``\specialcell{尉犁县} (Yuli County)'' and ``\specialcell{巴音郭楞蒙古自治州} (Bayingol Mongolian Autonomous Prefecture)'' occur together in the context.

\textit{Inference} complements \textit{Confidence} via embeddings: the nearest undetermined administrative level is inferred from the implicit information in the input. For example, no explicit administrative area appears in the middle case of Figure~\ref{fig:demo1}, so \textit{Inference} starts at the 1st level (the largest division) and infers ``\specialcell{广东省} (Guangdong Province)''.
If \textit{Confidence} already yields the 1st level, \textit{Inference} starts at the 2nd level; if \textit{Confidence} fills all three levels, \textit{Inference} does nothing and keeps the result as is.

\textit{ROI} is derived from the ROI knowledge base. We match the input against the ROI knowledge base and, when a match succeeds, return the ROI together with its associated administrative area. The types of ROI are many and varied; what they have in common is that each builds a bidirectional relation with a hierarchical AD. As shown in Figure~\ref{fig:demo1}, ``\specialcell{梧桐山} (Wutong Mountain)'', the highest peak in Shenzhen city, maps to three levels: [Yantian district, Shenzhen city, Guangdong province].

When the user submits a query, the input is segmented into tokens by a Chinese tokenizer. Two processes then run in parallel: one computes \textit{Confidence} and then \textit{Inference}, while the other retrieves from the ROI knowledge base. The final result is assembled and returned to the front end in green.

\section{Approach}
\subsection{Administrative Division Co-occurrence Constraint}
We maintain an administrative division database containing the names and partial aliases of the administrative areas of China, organized hierarchically. Each record is linked to its parent and children; for example, ``\specialcell{襄阳市} (Xiangyang city)'' is at the 2nd level, its alias is Xiangfan, its parent is Hubei province, and its subdivisions include Gucheng County, Xiangzhou District, etc. On top of this database we develop a co-occurrence constraint that produces the \textit{Confidence} result, shown in Algorithm~\ref{alg:generator}.

\begin{algorithm}[htb] \caption{Processing \textit{Confidence}} \label{alg:generator} \textbf{Input}: $S$, sentences from text \\ \textbf{Output}: $D$, hierarchical administrative division\\ $ T \leftarrow \emptyset, \ Q \leftarrow \{\} $ \\ \ForEach{{\rm word phrase} $w \in S $}{ \uIf{$w$ {\rm hits the AD database}}{ expand $w$ to three levels $[l_1,l_2,l_3]$ by the standard AD, and add them to $T$\; } } \ForEach{{\rm hierarchical candidate} $t \in T $}{ Count the number of level hits of $t$ in $S$: \\ $Q[t] = CountLevel(S, t)$ \\ } Filter out $t \in Q $ when $Q[t] < \max(Q)$ \\ \ForEach{ {\rm filtered} $t \in Q $ }{ \ForEach{ {\rm sentence} $s \in S $ }{ $Q[t] += \frac{Count(s, t)}{(1+ CountOtherAD(s))}$ \\ } } \Return $D= \mathop{\arg\max}_{t \in Q}(Q[t])$ \end{algorithm}

Firstly, we expand the possible AD hierarchies as candidates based on the input segments, and keep only those with the most level hits for the next step. If a sentence is full of diverse AD information, it is probably just a listing of addresses that carries no signal, such as:

{\small \specialcell{青少年橄榄球天行联赛总决赛在\underline{上海森兰体育公园}举行。 由来自\underline{北京}、\underline{上海}、\underline{深圳}、\underline{重庆}、\underline{贵阳}等地的青少年选手组成的...}} (The youth rugby Tianxing League finals were held at Shanghai Senlan Sports Park, with young players from Beijing, Shanghai, Shenzhen, Chongqing, Guiyang and other places...) \\
where the underlined words are related to administrative areas. The more varied the area-related words in a sentence, the less certainty it carries. We therefore combine the frequency of the hits with a penalty for other surrounding area-related words, and construct a function that accumulates the weight each sentence contributes to an AD candidate. Finally, we obtain the \textit{Confidence} result from these explicit statistics.
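For concreteness, the following is a minimal Python sketch of the scoring loop in Algorithm~\ref{alg:generator}; the precomputed inputs (\texttt{level\_hits}, \texttt{count}, \texttt{other\_ad}) are hypothetical stand-ins for the AD-database lookups described above, not the exact implementation.

{\small
\begin{verbatim}
# Sketch of Algorithm 1's scoring step. Hypothetical inputs:
#   level_hits[t] -> number of AD levels of candidate t hit in S
#   count[s][t]   -> occurrences of t's levels in sentence s
#   other_ad[s]   -> number of other area-related words in s
def confidence(candidates, sentences,
               level_hits, count, other_ad):
    best = max(level_hits[t] for t in candidates)
    kept = [t for t in candidates if level_hits[t] == best]
    scores = {}
    for t in kept:
        # frequency of hits, penalized by surrounding area words
        scores[t] = sum(count[s][t] / (1.0 + other_ad[s])
                        for s in sentences)
    return max(scores, key=scores.get)
\end{verbatim}}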
\subsection{Geographic Embeddings}\label{Geographic Embeddings}
We propose to train geographic embeddings on word sequences related to ADs. Since location information usually makes up only a small part of a document, the standard AD names are sparse and dispersed, and the long tail of words related to geographic locations (henceforth, geographic words) is rarely seen. We therefore do not learn the embeddings directly from the raw word sequences; instead, we assume the raw sequences are composed of AD database records, geographic words, and everything else, and we keep only the first two. Concretely, we pass through a large news corpus of more than 14.3 million documents, take every phrase in a news sentence that hits an AD record as a starting point, use an NER toolkit to recognize the location entities in the surrounding two sentences, and extract, preserving their original order, candidate sequences consisting of standard AD records and location entities. The NER model is not perfectly accurate, so various kinds of location-related phrases are recognized along the way. We collect the candidate sequences longer than a threshold length to train the geographic embeddings.

Given the set $S$ of candidate sequences extracted from the documents, each sequence $s = (w_1, ..., w_m) \in S$ consists of AD records and location entities whose relative order is the same as in the raw text. The aim is to learn a $d$-dimensional real-valued embedding $v_{w_i}$ for each word $w_i$, such that administrative areas and geographic words live in the same embedding space and adjacent administrative areas lie nearby in that space. We learn the embeddings with the skip-gram model~\cite{mikolov2013distributed} by maximizing the objective $\mathcal{L}$ over the set $S$:
$$ \mathcal{L} = \sum\limits_{s \in S} \sum\limits_{w_i \in s} \Big( \sum\limits_{-n \le j \le n,\, j \neq 0} \log P(w_{i+j} \mid w_i) \Big) $$
$$ P(w_{i+j} \mid w_i) = \frac{\exp(v_{w_i}^T v_{w_{i+j}}^{'})}{\sum_{w=1}^{|V|} \exp(v_{w_i}^T v_{w}^{'})} $$
where $v$ and $v^{'}$ are the input and output vectors, $n$ is the size of the sequence window, and $V$ is the vocabulary consisting of the administrative areas and geographic words.

\begin{figure}[htb] \includegraphics[width=20em]{embedding_cluster.pdf} \caption{The clustering distribution of the geographic embeddings of administrative areas.} \label{fig:embedding_cluster} \end{figure}

To evaluate whether the geographic embeddings capture regional characteristics, we designed a visualization. We perform k-means clustering on the learned embeddings of the AD database records, clustering the 4,000+ standard ADs into 100 clusters, and then plot them on a map of China with division borders, where colors denote clusters and the coordinates are the rough locations of the corresponding standard ADs. As shown in Figure~\ref{fig:embedding_cluster}, scatter points in the same cluster are mainly located in the same administrative area, which indicates that geographic similarity is well encoded.
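As a rough sketch, the embedding training described above can be reproduced with an off-the-shelf skip-gram implementation; gensim is shown below purely as an assumption, since the toolkit and hyperparameters actually used are not specified here.

{\small
\begin{verbatim}
# Sketch: train geographic embeddings with skip-gram (sg=1)
# using gensim 4.x. geo_sequences stands for the extracted
# candidate sequences of AD records and location entities;
# the toy data and hyperparameters are assumptions.
from gensim.models import Word2Vec

geo_sequences = [["颐和园", "海淀区", "北京市"],
                 ["梧桐山", "盐田区", "深圳市"]]  # toy data
model = Word2Vec(sentences=geo_sequences,
                 vector_size=100,  # embedding dimension d
                 window=5,         # sequence window n
                 sg=1,             # skip-gram objective
                 min_count=1)
vec = model.wv["北京市"]            # a learned embedding
\end{verbatim}}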
\subsection{Inference}
Based on the \textit{Confidence} result, we use the geographic embeddings trained above to infer the next administrative area. We first take the intersection of the input text with the geographic vocabulary $V$, and average the embeddings of the intersection dimension-wise to obtain a representation $v_{input}$ of the input. We then embed the subdivisions of the latest \textit{Confidence} level to obtain the candidate embeddings. For example, if \textit{Confidence} ends at the 2nd level, denoted $[l_1,l_2]$, the subdivisions of its latest level, $[l_2^1, ..., l_2^k]$, are embedded as $v_{l_2^i}, i=1,...,k$, where $k$ is the number of subdivisions of $l_2$. We observe that the cosine similarity between the correct candidate and the input embedding is usually higher than that of the other candidates, so we make the \textit{Inference} by $\arg\max_{l_2^i} Cosine(v_{l_2^i}, v_{input})$ as the complement of \textit{Confidence}.

\subsection{ROI}
Since embeddings are implicit, we build an ROI knowledge base to improve interpretability and reduce the bias of \textit{Inference}. Unlike traditional taxonomies that require heavy manual labor, we propose a method that extracts ROIs from a large corpus, using statistics to model the inconsistent, ambiguous and uncertain information the corpus contains. Given the geographic sequences $s$ from Section~\ref{Geographic Embeddings}, where $\bar{w} \in s$ is a geographic word, we assume that the administrative area occurring most frequently within the window of a geographic word probably corresponds to its division. In practice, however, some administrative area records appear frequently everywhere, such as Beijing, Shanghai and other big cities. We therefore count how often the pair $(\bar{w}, w_i)$ appears in $S$, where $w_i$ is an administrative area name, and offset this count by the total frequency of $w_i$ in the whole corpus. That is, a tf-idf-like weighting scheme balances the choice of the exact division:
$$ score(\bar{w}, w_i) = Count(\bar{w}, w_i) \times IDF(w_i) $$
where $Count$ counts the co-occurrences of $\bar{w}$ and $w_i$ within each geographic sequence, and $IDF$ is the inverse document frequency of $w_i$ over all sequences of $S$. We score each pair $(\bar{w}, w_i)$ and keep the valid pairs above a high threshold. This yields, for each $\bar{w}$, a sorted mapping $\{\bar{w} \mid (w_1, g_1), ..., (w_t, g_t)\}$, where $g_i$ is the score weight and a higher $g_i$ ranks earlier.

It is noteworthy that a geographic word is not automatically an ROI. We use information entropy to filter the valid candidates:
$$ E(\bar{w}) = -\sum\limits_{i} P_{i}\log P_i, \quad P_i=\frac{g_i}{\sum_{j=1}^{t} g_j} $$
If $\bar{w}$ cannot represent a single administrative area, the weights of its candidate mappings will be dispersed: the higher $E(\bar{w})$ is, the less certain the mapping. We cut off candidates with high $E(\bar{w})$ and keep the rest as ROI candidates.

For a specific ROI candidate, it is common in a news corpus that the upper level of the mapping has a higher frequency than the lower level. For example, \textit{Summer Palace} co-occurs with \textit{Beijing} more often than with \textit{Haidian}, even though Haidian district is the subdivision of Beijing city that contains it. We use the subdivision relation to correct the weight of $w_i$ whenever $w_j$ is the parent division of $w_i$, where $j<i$:
$$ g_i = g_i / P(w_i \mid w_j, \neg w_i, s) $$
$$ P(w_i \mid w_j, \neg w_i, s) = \frac{\sum_{s \in S} H(w_i \cap f(s))}{\sum_{s \in S} H(w_j \cap \neg w_i \cap s)} $$
where $\neg w_i$ denotes the absence of $w_i$, $P(w_i \mid w_j, \neg w_i, s)$ is the probability that only $w_j$ appears in $s$ while the location actually belongs to $w_i$, $f(s)$ denotes the sequences in the same document excluding $s$, and $H$ is the Heaviside step function.
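The scoring and entropy filtering steps above can be condensed into the following Python sketch (the subdivision re-weighting is omitted); the co-occurrence counts, the IDF table and both thresholds are assumed to be precomputed inputs rather than the system's actual code.

{\small
\begin{verbatim}
import math

# Sketch of ROI candidate filtering: a tf-idf-style score per
# (geographic word, AD name) pair, then an entropy cutoff.
# cooc, idf and both thresholds are assumed precomputed inputs.
def roi_candidates(cooc, idf, score_min, entropy_max):
    rois = {}
    for w_bar, partners in cooc.items():
        scored = {w: c * idf[w] for w, c in partners.items()
                  if c * idf[w] >= score_min}
        if not scored:
            continue
        total = sum(scored.values())
        entropy = -sum((g / total) * math.log(g / total)
                       for g in scored.values())
        if entropy <= entropy_max:  # concentrated -> likely ROI
            rois[w_bar] = sorted(scored.items(),
                                 key=lambda kv: -kv[1])
    return rois
\end{verbatim}}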
We sort the mapping again using this re-weighting scheme, and take the top few pairs, whose weights are of the same order of magnitude, to compose the ROI pairs $(\bar{w}, \langle l_1, l_2, l_3 \rangle)$, where $l_1, l_2, l_3$ are the three AD levels and a level is set to null if it is missing. Finally, the pairs are inserted into an Elasticsearch~\footnote{\url{https://www.elastic.co}} engine to build the knowledge base.

\begin{table*}[ht] \centering \begin{tabular}{p{0.15\textwidth}p{0.15\textwidth}p{0.6\textwidth}} \toprule ROIBase & NER+pattern & section of text \\ \hline \textbf{news} \\ \hline \small{\specialcell{-,呼和浩特市,内蒙古自治区}} & \small{\specialcell{内蒙古}} & \small{\specialcell{\underline{内蒙古大兴安岭}原始林区雷击火蔓延...}} (Lightning fire spreads in the virgin forest area of the Greater Xing'an Mountains, Inner Mongolia...) \\ \small{\specialcell{-,深圳市,广东省}} & - & \small{\specialcell{日前,\underline{华为基地}启用了无人机送餐业务...}} (A few days ago, the Huawei base launched a drone food delivery service...) \\ \small{\specialcell{双流区,成都市,四川省}} & \small{\specialcell{海口}} & \small{\specialcell{四川航空3u8751成都至海口航班...安全落地\underline{成都双流国际机场}...}} (Sichuan Airlines flight 3u8751 from Chengdu to Haikou ... landed safely at Chengdu Shuangliu International Airport...) \\ \hline \textbf{Weibo} \\ \hline \small{\specialcell{-,丽江市,云南省}} & \small{\specialcell{-}} & \small{\specialcell{拍不出\underline{泸沽湖}万分之一的美这个时节少了喧嚣多了闲适}} (Photos cannot capture one ten-thousandth of the beauty of Lugu Lake; this season has less hustle and more leisure) \\ \small{\specialcell{-,武汉市,湖北省}} & \small{\specialcell{湖北}} & \small{\specialcell{\underline{湖北经济学院}学生爆料质疑校园联通宽带垄断性经营}} (Students from Hubei University of Economics questioned the monopoly of the campus Unicom broadband service) \\ \bottomrule \end{tabular} \caption{Examples of location extraction by ROIBase and NER+pattern.} \label{tab:news} \end{table*}

\section{Experiment}
There are no publicly available datasets for text location normalization, and hence no directly comparable methods. Since many related schemes for detecting locations start from NER, we build an NER+pattern baseline, which uses NER to recognize location mentions and then looks them up in the AD database. We conduct experiments on news and Weibo (a Chinese social media platform) corpora. A news item contains a title and content: the title is usually short and cohesive, while the content typically runs to hundreds of words with more location information, so the two differ in redundancy and processing efficiency. The Weibo corpus consists of short texts whose location information is mostly implicit. We manually sampled finance and social news and obtained 760 news items that can be assigned to a definite place, forming the news dataset. Similarly, 1,228 short texts were picked from the Weibo corpus. Location information is extracted by ROIBase and by NER~\cite{che2010ltp}+pattern on both datasets. Table~\ref{tab:news} lists examples of the results: NER+pattern matching alone cannot exploit hidden information to fully normalize locations, whereas ROIBase, which contains 1.51 million geographic embeddings and 0.42 million ROIs, can link the underlined phrases to their ADs.

\begin{table}[htp] \centering \begin{tabular}{lcc} \toprule &\multicolumn{2}{c}{F1-score}\\ \textsc{method} & news & Weibo\\ \hline ROIBase & 0.812 & 0.780\\ NER+pattern & 0.525 & 0.582\\ \hline \end{tabular} \caption{F1 score on the two datasets.} \label{tab:F1} \end{table}

A variant of the F1 score is used to measure performance, in which an incomplete output counts as a 0.5 hit. As shown in Table~\ref{tab:F1}, ROIBase outperforms NER with AD patterns by large margins.
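For reference, a short sketch of this F1 variant is given below; the data format (each location as a tuple of filled levels) and the exact matching rule are our reading of the 0.5-hit description, not the actual evaluation script.

{\small
\begin{verbatim}
# Sketch of the F1 variant: a fully correct hierarchy counts
# as one hit, an incomplete but consistent one as half a hit.
# The matching rule here is an assumption.
def f1_variant(preds, golds):
    hits = 0.0
    for pred, gold in zip(preds, golds):
        if not pred:
            continue                      # no output at all
        if pred == gold:
            hits += 1.0                   # complete hit
        elif all(level in gold for level in pred):
            hits += 0.5                   # incomplete output
    n_pred = sum(1 for p in preds if p)
    precision = hits / n_pred if n_pred else 0.0
    recall = hits / len(golds) if golds else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
\end{verbatim}}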
Some Weibo texts carry an explicit location label, which helps the recognition of AD patterns and narrows the gap with our system. Long texts provide more abundant information, and ROIBase can eliminate more of the resulting confusion, which improves its performance.

\begin{table}[htp] \centering \begin{tabular}{ccccc} \toprule total & 1st & 2nd & 3rd & speed\\ \hline 36.8\% & 23\% & 48.7\% & 28.3\% & 751KB/s\\ \hline \end{tabular} \caption{ROIBase statistics on 100,000 news items; the level percentages are shares of the normalized items.} \label{tab:val} \end{table}

We also ran ROIBase over 100 thousand news items from the financial and social domains to obtain more detailed statistics. As shown in Table~\ref{tab:val}, we can normalize locations for 36.8 percent of the news overall. Among the normalized items, 23 percent are normalized only at the 1st level, 48.7 percent up to the 2nd level, and 28.3 percent with complete divisions. We measured the speed on a machine with a Xeon 2.0GHz CPU and 4G of memory: ROIBase processes up to 751KB/s, while the NER method~\cite{che2010ltp} reaches 14.4KB/s. ROIBase therefore lets users normalize locations over vast amounts of long text.

\section{Related Work}
\citet{zubiaga2017towards} makes use of eight tweet-inherent features for classification at the country level. \citet{qian2017probabilistic} formalizes the inference of social media locations as a semi-supervised factor graph model operating at the level of countries and provinces. A hierarchical location prediction neural network~\cite{huang-carley-2019-hierarchical} has been presented for user geolocation on Twitter. However, most of these works focus on a single level, cover only a few countries or states, or rely on extra features beyond the text, and there is room for improvement in performance. Since \citet{mikolov2013distributed} proposed the word vector technique, it has seen many applications. \citet{grbovic2018real} introduces listing and user embeddings trained on bookings to capture users' real-time and long-term interests. \citet{wu2012probase} demonstrates that a taxonomy knowledge base can be constructed from the entire web with special patterns. Inspired by these cases, we present the first solution that normalizes the locations of text into hierarchical administrative areas.

\section{Conclusion}
Our investigation found very little prior work on location normalization for text, and popular related solutions, such as NER, are not directly transferable to it. The ROIBase system provides an efficient and interpretable solution to location normalization through a web interface, processing its modules with a cascaded mechanism. We propose it as a baseline that can easily be applied to other languages, and we look forward to further work on improving location normalization.